| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
13,527,936 | https://en.wikipedia.org/wiki/Pi%20Piscium | Pi Piscium (π Piscium) is a solitary, yellow-white hued star in the zodiac constellation of Pisces. It is faintly visible to the naked eye, having an apparent visual magnitude of 5.60. Based upon an annual parallax shift of 28.50 mas as seen from Earth, it is located about 114 light years from the Sun. It is a member of the thin disk population of the Milky Way.
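The quoted distance can be reproduced from the parallax using the standard relation d(pc) = 1/p(arcsec); a minimal Python sketch (conversion constant rounded):

```python
# Distance from trigonometric parallax: d [pc] = 1 / p [arcsec].
PARALLAX_MAS = 28.50        # annual parallax quoted in the article
LY_PER_PARSEC = 3.26156     # light-years per parsec

distance_pc = 1.0 / (PARALLAX_MAS / 1000.0)
distance_ly = distance_pc * LY_PER_PARSEC
print(f"{distance_pc:.1f} pc = {distance_ly:.0f} light years")  # 35.1 pc = 114 light years
```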
This is an ordinary F-type main-sequence star with a stellar classification of F0 V. At the estimated age of two billion years, it is about 55% of the way through its main sequence lifetime and still has a relatively high rate of spin with a projected rotational velocity of 105.9 km/s. The star has 1.5 times the mass of the Sun and is radiating 6.3 times the Sun's luminosity at an effective temperature of 6,850 K.
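The article quotes no radius, but one consistent with these figures follows from the Stefan–Boltzmann law (L proportional to R²T⁴); an illustrative estimate, assuming the nominal solar effective temperature of 5,772 K:

```python
import math

# Stefan-Boltzmann: L = 4*pi*R^2*sigma*T^4, so R/Rsun = sqrt(L/Lsun) * (Tsun/T)^2.
L_OVER_LSUN = 6.3   # luminosity in solar units (from the article)
T_EFF = 6850.0      # effective temperature in K (from the article)
T_SUN = 5772.0      # nominal solar effective temperature in K (assumption)

r_over_rsun = math.sqrt(L_OVER_LSUN) * (T_SUN / T_EFF) ** 2
print(f"R = {r_over_rsun:.2f} Rsun")  # about 1.78 solar radii
```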
Naming
In Chinese, the asterism known as the Official in Charge of the Pasturing consists of π Piscium, η Piscium, ρ Piscium, ο Piscium and 104 Piscium; consequently, the Chinese name for π Piscium itself derives from its membership in this asterism.
References
F-type main-sequence stars
Pisces (constellation)
Piscium, 102
Piscium, Pi
Durchmusterung objects
009919
007535
0463 | Pi Piscium | [
"Astronomy"
] | 307 | [
"Pisces (constellation)",
"Constellations"
] |
13,528,437 | https://en.wikipedia.org/wiki/Benzyl%20cyanide | Benzyl cyanide (abbreviated BnCN) is an organic compound with the chemical formula C6H5CH2CN. This colorless oily aromatic liquid is an important precursor to numerous compounds in organic chemistry.
It is also an important pheromone in certain species.
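As a consistency check on the formula C6H5CH2CN (molecular formula C8H7N), the molar mass can be totalled from standard atomic weights; a minimal sketch:

```python
# Molar mass of benzyl cyanide, C6H5CH2CN (molecular formula C8H7N).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007}  # g/mol, standard values
FORMULA = {"C": 8, "H": 7, "N": 1}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # 117.15 g/mol
```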
Preparation
Benzyl cyanide can be produced via Kolbe nitrile synthesis between benzyl chloride and sodium cyanide and by oxidative decarboxylation of phenylalanine.
Benzyl cyanides can also be prepared by arylation of silyl-substituted acetonitrile.
Reactions
Benzyl cyanide undergoes many reactions characteristic of nitriles. It can be hydrolyzed to give phenylacetic acid or it can be used in the Pinner reaction to yield phenylacetic acid esters. Hydrogenation gives β-phenethylamine.
The compound contains an "active methylene unit". Bromination gives PhCHBrCN. A variety of base-induced reactions result in the formation of new carbon-carbon bonds.
Uses
Benzyl cyanide is used as a solvent and as a starting material in the synthesis of fungicides (e.g. Fenapanil), fragrances (phenethyl alcohol), antibiotics, and other pharmaceuticals. The partial hydrolysis of BnCN results in 2-phenylacetamide.
Pharmaceuticals
Benzyl cyanide is a useful precursor to numerous pharmaceuticals. Examples include:
Antiarrhythmics (e.g. disopyramide)
Antidepressants (e.g. milnacipran and lomevactone)
Antihistamines (e.g. levocabastine (para-fluoro), pheniramine and azatadine)
Antitussives (e.g. isoaminile, oxeladin, butethamate, pentapiperide, and pentoxyverine)
Diuretics (e.g. triamterene)
Hypnotics (e.g. alonimid and phenobarbital), as well as phenglutarimide
Spasmolytics (e.g. pentapiperide and drofenine)
Stimulants (e.g. methylphenidate)
Opioids (e.g. ethoheptazine, pethidine, phenoperidine, and methadone)
Regulation
Because benzyl cyanide is a useful precursor to numerous drugs with recreational use potential, many countries strictly regulate the compound.
United States
Benzyl cyanide is regulated in the United States as a DEA List I chemical.
China
Benzyl cyanide has been regulated in the People's Republic of China as a Class III drug precursor since 7 June 2021.
Safety
Benzyl cyanide, like related benzyl derivatives, is an irritant to the skin and eyes.
See also
Bromobenzyl cyanide
References
External links
EPA Chemical Profile for phenylacetonitrile
Nitriles
Benzyl compounds | Benzyl cyanide | [
"Chemistry"
] | 641 | [
"Highly-toxic chemical substances",
"Nitriles",
"Harmful chemical substances",
"Functional groups"
] |
13,528,996 | https://en.wikipedia.org/wiki/Heart%20Nebula | The Heart Nebula (also known as the Running Dog Nebula, Sharpless 2-190) is an emission nebula located about 7,500 light years from Earth in the Perseus Arm of the Galaxy, in the constellation Cassiopeia. It was discovered by William Herschel on 3 November 1787. It displays glowing ionized hydrogen gas and darker dust lanes.
The brightest part of the nebula (a knot at its western edge) is separately classified as NGC 896, because it was the first part of the nebula to be discovered. The nebula's intense red output and its morphology are driven by the radiation emanating from a small group of hot stars near the nebula's center. This open cluster of stars, known as Collinder 26, Melotte 15, or IC 1805, contains a few bright stars nearly 50 times the mass of the Sun, and many more dim stars that are only a fraction of the Solar mass.
The Heart Nebula is also made up of ionised oxygen and sulfur gases, which are responsible for the rich blue and orange colours seen in narrowband images. The nebula spans almost 2 degrees in the sky, about four times the diameter of the full moon.
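Since the full Moon's apparent diameter is roughly 0.5 degrees, the "four times" figure refers to diameter; the corresponding area ratio is its square. A quick check:

```python
NEBULA_SPAN_DEG = 2.0    # angular span from the article
MOON_DIAMETER_DEG = 0.5  # approximate apparent diameter of the full Moon

diameter_ratio = NEBULA_SPAN_DEG / MOON_DIAMETER_DEG
print(f"diameter ratio: {diameter_ratio:.0f}, area ratio: {diameter_ratio ** 2:.0f}")  # 4 and 16
```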
Gallery
See also
Soul Nebula
List of NGC objects (1–1000)
References
External links
Heart Nebula Data download & Processing Guide
Heart Nebula at Atlas of the Universe
NGC objects
Cassiopeia (constellation)
H II regions
Perseus Arm
Sharpless objects
IC objects
Discoveries by William Herschel | Heart Nebula | [
"Astronomy"
] | 301 | [
"Cassiopeia (constellation)",
"Constellations"
] |
13,529,591 | https://en.wikipedia.org/wiki/Fossilworks | Fossilworks was a portal that provided query, download, and analysis tools to facilitate access to the Paleobiology Database, a large relational database assembled by hundreds of paleontologists from around the world.
History
Fossilworks was created in 1998 by John Alroy and housed at Macquarie University. It offered many analysis and data visualization tools formerly included in the Paleobiology Database.
References
External links
Paleontology websites
Biological databases | Fossilworks | [
"Biology"
] | 86 | [
"Bioinformatics",
"Biological databases"
] |
13,529,988 | https://en.wikipedia.org/wiki/Westerhout%205 | Westerhout 5 (Sharpless 2-199, LBN 667, Soul Nebula) is an emission nebula located in Cassiopeia. Several small open clusters are embedded in the nebula: CR 34, 632, and 634 (in the head) and IC 1848 (in the body). The object is more commonly called by the cluster designation IC 1848.
Small emission nebula IC 1871 is present just left of the top of the head, and small emission nebulae 670 and 669 are just below the lower back area.
The galaxies Maffei 1 and Maffei 2 are both nearby the nebula, although light extinction from the Milky Way makes them very hard to see.
Once thought to be part of the Local Group, they are now known to belong to their own group, the IC 342/Maffei Group.
This complex is the eastern neighbor of IC 1805 (Heart Nebula), and the two are often mentioned together as the "Heart and Soul".
Star formation
W5, a radio source within the nebula, spans an area of sky equivalent to four full moons and is about 6,500 light-years away in the constellation Cassiopeia. Like other massive star-forming regions, such as Orion and Carina, W5 contains large cavities that were carved out by radiation and winds from the region's most massive stars. According to the theory of triggered star formation, the carving out of these cavities pushes gas together, causing it to ignite into successive generations of new stars. The image in the gallery above contains some of the best evidence yet for the triggered star formation theory. Scientists analyzing the photo have been able to show that the ages of the stars become progressively and systematically younger with distance from the center of the cavities.
References
H II regions
Astronomical radio sources
Sharpless objects
Articles containing video clips
Cassiopeia (constellation)
Emission nebulae
IC objects
Star-forming regions | Westerhout 5 | [
"Astronomy"
] | 397 | [
"Astronomical radio sources",
"Cassiopeia (constellation)",
"Astronomical events",
"Constellations",
"Astronomical objects"
] |
13,530,107 | https://en.wikipedia.org/wiki/Technological%20momentum | Technological momentum is a theory about the relationship between technology and society over time. The term, considered a fourth variant of technological determinism, was originally developed by the historian of technology Thomas P. Hughes. The idea is that the relationship between technology and society is reciprocal and time-dependent, so that neither alone determines change; both influence each other.
Theory
Hughes's thesis is a synthesis of two separate models for how technology and society interact. One, technological determinism, claims that society itself is modified by the introduction of a new technology in an irreversible and irreparable way—for example, the introduction of the automobile has influenced the manner in which American cities are designed, a change that can clearly be seen when comparing the pre-automobile cities on the East Coast to the post-automobile cities on the West Coast. Technology, under this model, self-propagates as well—there is no turning back once the adoption has taken place, and the very existence of the technology means that it will continue to exist in the future.
The other model, social determinism, claims that society itself controls how a technology is used and developed—for example, the rejection of nuclear power technology in the United States amid public fears following the Three Mile Island accident.
Technological momentum takes the two models and adds time as the unifying factor. In Hughes's theory, when a technology is young, deliberate control over its use and scope is possible and enacted by society. However, as a technology matures and becomes increasingly enmeshed in the society where it was created, its own deterministic force takes hold, achieving technological momentum in the process. According to Hughes, this inertia is particularly pronounced in large technological systems, with their technological and social components; it makes them difficult to influence and steer as they increasingly go their own way, assuming deterministic traits in the process. In other words, Hughes says that the relationship between technology and society always starts with a social determinism model but evolves into a form of technological determinism over time, as the technology's use becomes more prevalent and important.
Since its introduction by Hughes, the technological momentum concept has been applied by a number of other historians of technology. For instance, it is considered an effective approach to reconciling the apparently opposite perspectives of the autonomy of technology and the social and political motivations behind technological choices. It is able to describe how socially and politically conditioned technological institutions become independent and autonomous over time.
Notes
References
Thomas P. Hughes, "The Evolution of Large Technological Systems," in Wiebe E. Bijker, Thomas P. Hughes, and Trevor Pinch, eds., The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, 2012 (1987), pp. 45-76.
Thomas P. Hughes, "Technological momentum," in Albert Teich, ed., Technology and the Future, 8th edn., 2000.
Thomas P. Hughes, "Technological momentum," in Merritt Roe Smith and Leo Marx, ed., Does Technology Drive History?: The Dilemma of Technological Determinism, Massachusetts Institute of Technology, 1994, pp. 101–113
Thomas P. Hughes, "Technological Momentum in History: Hydrogenation in Germany 1898-1933", Past and Present, No. 44 (Aug., 1969), pp. 106–132
History of technology
Technological change | Technological momentum | [
"Technology"
] | 705 | [
"Science and technology studies",
"History of science and technology",
"History of technology"
] |
13,530,209 | https://en.wikipedia.org/wiki/Work%20measurement | Work measurement is the application of techniques designed to establish the time for an average worker to carry out a specified manufacturing task at a defined level of performance. It is concerned with the length of time it takes to complete a work task assigned to a specific job: the time required to complete one unit of work or operation, carried out in full under specified conditions.
Usage
Work measurement helps to uncover non-standardization that exists in the workplace, as well as non-value-adding activities and waste. Work has to be measured for the following reasons:
To discover and eliminate lost or ineffective time.
To establish standard times for performance measurement.
To measure performance against realistic expectations.
To set operating goals and objectives.
Techniques
Analytical estimating
Predetermined motion time systems
Standard data system
Synthesis from elemental data
Time study
Work sampling
Purpose
Work Measurement is a technique for establishing a Standard Time, which is the required time to perform a given task, based on time measurements of the work content of the prescribed method, with due consideration for fatigue and for personal and unavoidable delays.
Method study is the principal technique for reducing the work involved, primarily by eliminating unnecessary movement on the part of material or operatives and by substituting good methods for poor ones. Work measurement is concerned with investigating, reducing and subsequently eliminating ineffective time, that is time during which no effective work is being performed, whatever the cause.
Work measurement, as the name suggests, provides management with a means of measuring the time taken in the performance of an operation or series of operations in such a way that ineffective time is shown up and can be separated from effective time. In this way its existence, nature and extent become known where previously they were concealed within the total.
Uses
Revealing existing causes of ineffective time through study, important though it is, is perhaps less important in the long term than the setting of sound time standards, since these will continue to apply as long as the work to which they refer continues to be done. They will also show up any ineffective time or additional work which may occur once they have been established.
In the process of setting standards it may be necessary to use work measurement:
To compare the efficiency of alternative methods. Other conditions being equal, the method which takes the least time will be the best method.
To balance the work of members of teams, in association with multiple activity charts, so that, as nearly as possible, each member has a task taking an equal time to perform.
To determine, in association with man and machine multiple activity charts, the number of machines an operative can run.
The time standards, once set, may then be used:
To provide information on which the planning and scheduling of production can be based, including the plant and labour requirements for carrying out the programme of work and the utilisation of available capacity.
To provide information on which estimates for tenders, selling prices and delivery promises can be based.
To set standards of machine utilisation and labour performance which can be used for any of the above purposes and as a basis for incentive schemes.
To provide information for labour-cost control and to enable standard costs to be fixed and maintained.
It is thus clear that work measurement provides the basic information necessary for all the activities of organising and controlling the work of an enterprise in which the time element plays a part. Its uses in connection with these activities will be more clearly seen when we have shown how the standard time is obtained.
Techniques of work measurement
The following are the principal techniques by which work measurement is carried out:
Time study
Activity sampling
Predetermined motion time systems
Synthesis from standard data
Estimating
Analytical estimating
Comparative estimating
Of these techniques we shall concern ourselves primarily with time study, since it is the basic technique of work measurement. Some of the other techniques either derive from it or are variants of it.
Time study
Time Study consists of recording times and rates of work for elements of a specified job carried out under specified conditions to obtain the time necessary to carry out a job at a defined level of performance.
In this technique the job to be studied is timed with a stopwatch, rated, and the Basic Time calculated.
Requirements for effective time study
The requirements for effective time study are:
a. Co-operation and goodwill
b. Defined job
c. Defined method
d. Correct normal equipment
e. Quality standard and checks
f. Experienced qualified motivated worker
g. Method of timing
h. Method of assessing relative performance
i. Elemental breakdown
j. Definition of break points
k. Recording media
One of the most critical requirements for time study is that of elemental breakdown. There are some general rules concerning the way in which a job should be broken down into elements. They include the following. Elements should be easily identifiable, with definite beginnings and endings so that, once established, they can be repeatedly recognised. These points are known as the break points and should be clearly described on the study sheet. Elements should be as short as can be conveniently timed by the observer. As far as possible, elements – particularly manual ones – should be chosen so that they represent naturally unified and distinct segments of the operation.
Performance rating
Time Study is based on a record of observed times for doing a job together with an assessment by the observer of the speed and effectiveness of the worker in relation to the observer's concept of Standard Rating.
This assessment is known as rating, the definition being given in BS 3138 (1979):
The numerical value or symbol used to denote a rate of working.
Standard rating is also defined (in this British Standard BS3138) as:
"The rating corresponding to the average rate at which qualified workers will naturally work, provided that they adhere to the specified method and that they are motivated to apply themselves to their work. If the standard rating is consistently maintained and the appropriate relaxation is taken, a qualified worker will achieve standard performance over the working day or shift."
Industrial engineers use a variety of rating scales, and one which has achieved wide use is the British Standards Rating Scale which is a scale where 0 corresponds to no activity and 100 corresponds to standard rating. Rating should be expressed as 'X' BS.
Below is an illustration of the Standard Scale:
Rating – walking pace
0 – no activity
50 – very slow
75 – steady
100 – brisk (standard rating)
125 – very fast
150 – exceptionally fast
The basic time for a task, or element, is the time for carrying out an element of work or an operation at standard rating.
Basic time = observed time × observed rating / standard rating (100)
The result is expressed in basic minutes – BMs.
The work content of a job or operation is defined as: basic time + relaxation allowance + any allowance for additional work – e.g. that part of contingency allowance which represents work.
Standard time
Standard time is the total time in which a job should be completed at standard performance i.e. work content, contingency allowance for delay, unoccupied time and interference allowance, where applicable.
Allowance for unoccupied time and for interference may be important for the measurement of machine-controlled operations, but they do not always appear in every computation of standard time. Relaxation allowance, on the other hand, has to be taken into account in every computation, whether the job is a simple manual one or a very complex operation requiring the simultaneous control of several machines. A contingency allowance will probably figure quite frequently in the compilation of standard times; it is therefore convenient to consider the contingency allowance and relaxation allowance, so that the sequence of calculation which started with the completion of observations at the workplace may be taken right through to the compilation of standard time.
Contingency allowance
A contingency allowance is a small allowance of time which may be included in a standard time to meet legitimate and expected items of work or delays, the precise measurement of which is uneconomical because of their infrequent or irregular occurrence.
Relaxation allowance
A relaxation allowance is an addition to the basic time to provide the worker with the opportunity to recover from physiological and psychological effects of carrying out specified work under specified conditions and to allow attention to personal needs. The amount of the allowance will depend on the nature of the job. Examples are:
Personal 5–7%
Energy output 0–10%
Noisy 0–5%
Conditions 0–100%
e.g. Electronics 5%
Other allowances
Other allowances include a process allowance, which covers time when an operator is prevented from continuing with their work, although ready and waiting, because the process or machine requires further time to complete its part of the job. A final allowance is that of interference, which is included whenever an operator has charge of more than one machine and the machines are subject to random stoppage. In normal circumstances the operator can only attend to one machine, and the others must wait for attention; each machine is then subject to interference, which increases its cycle time.
It is now possible to obtain a complete picture of the standard time for a straightforward manual operation.
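As a worked illustration of that sequence of calculation, here is a minimal Python sketch; the observed time, rating, and allowance percentages are illustrative assumptions, not values taken from any standard:

```python
# From observed time to standard time, on the 0-100 British Standard rating scale.
OBSERVED_TIME_MIN = 0.80   # stopwatch reading for one element, minutes (illustrative)
OBSERVED_RATING = 90       # observer's assessment against standard rating (illustrative)
STANDARD_RATING = 100

# Basic time = observed time x (observed rating / standard rating), in basic minutes.
basic_time = OBSERVED_TIME_MIN * OBSERVED_RATING / STANDARD_RATING

RELAXATION_ALLOWANCE = 0.12   # 12% of basic time (illustrative)
CONTINGENCY_ALLOWANCE = 0.03  # 3% of basic time (illustrative)

# Standard time = work content (basic time plus allowances); unoccupied-time and
# interference allowances would be added here for machine-controlled operations.
standard_time = basic_time * (1 + RELAXATION_ALLOWANCE + CONTINGENCY_ALLOWANCE)

print(f"basic time: {basic_time:.2f} min, standard time: {standard_time:.2f} min")
```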
Activity Sampling
Activity sampling is a technique in which a large number of instantaneous observations are made over a period of time of a group of machines, processes or workers. Each observation records what is happening at that instant and the percentage of observations recorded for a particular activity or delay is a measure of the percentage of time during which the activity or delay occurs.
The advantages of this method are that
It is capable of measuring many activities that are impractical or too costly to be measured by time study.
One observer can collect data concerning the simultaneous activities of a group.
Activity sampling can be interrupted at any time without effect.
The disadvantages are that
It is quicker and cheaper to use time study on jobs of short duration.
It does not provide elemental detail.
The type of information provided by an activity sampling study is:
The proportion of the working day during which workers or machines are producing.
The proportion of the working day used up by delays. The reason for each delay must be recorded.
The relative activity of different workers and machines.
To determine the number of observations required in a full study, the following equation (at the 95% confidence level) is used:

N = 4P(100 − P) / L²

where N is the number of observations required, P is the percentage occurrence of the activity or delay being measured, and L is the required limit of accuracy in absolute percentage points.
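A minimal Python sketch of this sample-size calculation, using the 95%-confidence form given above:

```python
import math

def required_observations(p_percent: float, limit_percent: float) -> int:
    """Activity-sampling sample size: N = 4P(100 - P) / L^2 (95% confidence)."""
    return math.ceil(4.0 * p_percent * (100.0 - p_percent) / limit_percent ** 2)

# An activity occupying ~25% of the time, measured to within +/-2 percentage points:
print(required_observations(25.0, 2.0))  # 1875
```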
Predetermined motion time system
A predetermined motion time system is a work measurement technique whereby times established for basic human motions (classified according to the nature of the motion and the conditions under which it is made) are used to build up the time for a job at a defined level of performance.
The systems are based on the assumption that all manual tasks can be analysed into basic motions of the body or body members. They were compiled as a result of a very large number of studies of each movement, generally by a frame-by-frame analysis of films of a wide range of subjects, men and women, performing a wide variety of tasks.
The first generation of PMT systems, MTM1, were very finely detailed, involving much analysis and producing extremely accurate results. This attention to detail was both a strength and a weakness, and for many potential applications the quantity of detailed analysis was not necessary, and prohibitively time-consuming. In these cases "second generation" techniques, such as Simplified PMTS, Master Standard Data, Primary Standard Data and MTM2, could be used with advantage, and no great loss of accuracy. For even speedier application, where some detail could be sacrificed, a "third generation" technique such as Basic Work Data or MTM3 could be used.
Synthesis
Synthesis is a work measurement technique for building up the time for a job at a defined level of performance by totaling element times obtained previously from time studies on other jobs containing the elements concerned, or from synthetic data.
Synthetic data is the name given to tables and formulae derived from the analysis of accumulated work measurement data, arranged in a form suitable for building up standard times, machine process times, etc. by synthesis.
Synthetic times are increasingly being used as a substitute for individual time studies in the case of jobs made up of elements which have recurred a sufficient number of times in jobs previously studied to make it possible to compile accurate representative times for them.
Estimating
The technique of estimating is the least refined of all those available to the work measurement practitioner. It consists of an estimate of total job duration (or in common practice, the job price or cost). This estimate is made by a craftsman or person familiar with the craft. It normally embraces the total components of the job, including work content, preparation and disposal time, any contingencies etc., all estimated in one gross amount.
Analytical estimating
This technique introduces work measurement concepts into estimating. In analytical estimating the estimator is trained in elemental breakdown, and in the concept of standard performance. The estimate is prepared by first breaking the work content of the job into elements, and then utilising the experience of the estimator (normally a craftsman) the time for each element of work is estimated – at standard performance. These estimated basic minutes are totalled to give a total job time, in basic minutes. An allowance for relaxation and any necessary contingency is then made, as in conventional time study, to give the standard time.
Comparative estimating
This technique has been developed to permit speedy and reliable assessment of the duration of variable and infrequent jobs by estimating them within chosen time bands. Limits are set within which the job under consideration will fall, rather than precise Standard or Allowed Minute values. It is applied by comparing the job to be estimated with jobs of similar work content, and using these similar jobs as "bench marks" to locate the new job in its relevant time band – known as a Work Group.
Balayla model – work measurement in the service sector
The work measurement concept evolved in the manufacturing world and has not yet been fully adapted to the global shift toward the service sector. Certain factors create inherent difficulties in determining standard times for labor allocation in service jobs: (a) wide variation in Time Between Arrivals and Service Performance Time; and (b) the difficulty of assessing the damage done to the organization by long customer Waiting Times for service. These difficulties make it hard to calculate the Break-Even Point between raising worker output, which minimizes labor costs, and limiting customer Waiting Times, which preserves service quality.
Dr. Isaac Balayla and Professor Yissachar Gilad from the Technion, Israel, developed the Balayla (Balaila) Model, which overcomes most of the above-mentioned difficulties by taking a multi-domain approach: 1) the model deploys a series of indicators for the correlation between output and Waiting Times, with indicator values affected by the service level of urgency; 2) the model determines the best Break-Even Point by comparing the operational cost of an additional worker with the economic benefit of the decrease in Waiting Times. Thus, the model finds the best balance between worker output and service quality.
References
Balayla I.(2012) A manpower allocation model for service jobs (Balayla Model), IJSMET International Journal of Service Science, Management, Engineering, and Technology, 3(2), April–June 2012, pp 13–34.
Industrial engineering | Work measurement | [
"Engineering"
] | 3,129 | [
"Industrial engineering"
] |
13,530,724 | https://en.wikipedia.org/wiki/TRPM1 | Transient receptor potential cation channel subfamily M member 1 is a protein that in humans is encoded by the TRPM1 gene.
Function
The protein encoded by this gene is a member of the transient receptor potential (TRP) family of non-selective cation channels. It is expressed in the retina, in a subset of bipolar cells termed ON bipolar cells. These cells form synapses with either rods or cones, collecting signals from them. In the dark, the signal arrives in the form of the neurotransmitter glutamate, which is detected by a G protein-coupled receptor (GPCR) signal transduction cascade. Detection of glutamate by the GPCR Metabotropic glutamate receptor 6 results in closing of the TRPM1 channel. At the onset of light, glutamate release is halted and mGluR6 is deactivated; this results in opening of the TRPM1 channel, influx of sodium and calcium, and depolarization of the bipolar cell.
In addition to the retina, TRPM1 is also expressed in melanocytes, which are melanin-producing cells in the skin. The expression of TRPM1 is inversely correlated with melanoma aggressiveness, suggesting that it might suppress melanoma metastasis. However, subsequent work showed that a microRNA located in an intron of the TRPM1 gene, rather than the TRPM1 protein itself, is responsible for the tumor suppressor function. The expression of both TRPM1 and the microRNA are regulated by the Microphthalmia-associated transcription factor.
Clinical significance
Mutations in TRPM1 are associated with congenital stationary night blindness in humans and coat spotting patterns in Appaloosa horses.
See also
TRPM
References
External links
Ion channels | TRPM1 | [
"Chemistry"
] | 370 | [
"Neurochemistry",
"Ion channels"
] |
13,530,734 | https://en.wikipedia.org/wiki/NPAS2 | Neuronal PAS domain protein 2 (NPAS2) also known as member of PAS protein 4 (MOP4) is a transcription factor protein that in humans is encoded by the NPAS2 gene. NPAS2 is paralogous to CLOCK, and both are key proteins involved in the maintenance of circadian rhythms in mammals. In the brain, NPAS2 functions as a generator and maintainer of mammalian circadian rhythms. More specifically, NPAS2 is an activator of transcription and translation of core clock and clock-controlled genes through its role in a negative feedback loop in the suprachiasmatic nucleus (SCN), the brain region responsible for the control of circadian rhythms.
Discovery
The mammalian Npas2 gene was first sequenced and characterized in 1997 in Dr. Steven McKnight's lab and published by Yu-Dong Zhou et al. cDNAs encoding the mouse and human forms of NPAS2 were isolated and sequenced. RNA blotting assays were used to demonstrate the selective presence of the gene in brain and spinal cord tissues of mice. In situ hybridization indicated that the pattern of Npas2 mRNA distribution in mouse brain is broad and complex, and is largely non-overlapping with that of Npas1.
Using immunohistochemistry of human testis, Ramasamy et al. (2015) found NPAS2 protein both in germ cells within the tubules of the testes and in the Leydig cells of the interstitial space.
Structure
In humans
The Npas2 gene resides on chromosome 2 at the band q13. The gene is 176,679 bases long and contains 25 exons. The predicted 824-amino acid human NPAS2 protein shares 87% sequence identity with mouse Npas2.
In mice
The Npas2 gene has been found to reside on chromosome 1 at 17.98 centimorgans and is 169,505 bases long.
Function
In the brain
The NPAS2 protein is a member of the basic helix-loop-helix (bHLH)-PAS transcription factor family and is expressed in the SCN. NPAS2 is a PAS domain-containing protein, which binds other proteins via their own protein-protein (PAS) binding domains. Like its paralogue, CLOCK (another PAS domain-containing protein), the NPAS2 protein can dimerize with the BMAL1 protein and engage in a transcription/translation negative feedback loop (TTFL) to activate transcription of the mammalian Per and Cry core clock genes. NPAS2 has been shown to form a heterodimer with BMAL1 in both the brain and in cell lines, suggesting its similarity in function to the CLOCK protein in this TTFL.
Compensation is a key feature of TTFLs that regulate circadian rhythms. BMAL1 compensates for CLOCK in that if CLOCK is absent, BMAL1 will upregulate to maintain the mammalian circadian rhythms. NPAS2 has been shown to be analogous to the function of CLOCK in CLOCK-deficient mice. In Clock knockout mice, NPAS2 is upregulated to keep the rhythms intact. Npas2-mutant mice, which do not express functional NPAS2 protein, still maintain robust circadian rhythms in locomotion. However, like CLOCK-deficient mice in the CLOCK/BMAL1 TTFL, Npas2-mutant mice (in the NPAS2/BMAL1 TTFL) still have small defects in their circadian rhythms such as a shortened circadian period and an altered response to changes in the typical light-dark cycle. In addition, Npas2 knockout mice show sleep disturbances and have decreased expression of mPer2 in their forebrains. Mice without functional alleles of both Clock and Npas2 became arrhythmic once placed in constant darkness, suggesting that both genes have overlapping roles in maintaining circadian rhythms. In both wild-type and Clock knockout mice, Npas2 expression is observed at the same levels, confirming that Npas2 plays a role in maintaining these rhythms in the absence of Clock.
In other tissues
Npas2 is expressed widely in the peripheral tissues of the body. Special focus has been given to its function in liver tissues, and its mRNA is upregulated in Clock-mutant mice. However, studies have shown that Npas2 alone is unable to maintain circadian rhythms in peripheral tissues in the absence of CLOCK protein, unlike in the SCN. One theory to explain this observation is that neurons in the brain are characterized by intercellular coupling and can thus respond to deficiencies in key clock proteins in nearby neurons to maintain rhythms. In peripheral tissues such as the liver and lung, however, the lack of intercellular coupling does not allow for this compensatory mechanism to occur. A second theory as to why NPAS2 can maintain rhythms in CLOCK-deficient SCNs but not in CLOCK-deficient peripheral tissues is that there exists an additional unknown factor in the SCN that is not present in peripheral tissues.
Non-circadian function
NPAS2-deficient mice have been shown to have long-term memory deficits, suggesting that the protein may play a key role in the acquisition of such memories. This theory was tested by inserting a reporter gene (lacZ) that resulted in the production of an NPAS2 protein lacking the bHLH domain. These mice were then given several tests, including the cued and contextual fear task, and showed long-term memory deficits in both tasks.
Interactions
NPAS2 has been shown to interact with:
ARNTL (also known as BMAL1). Like Clock, Npas2 mRNA cycles with a similar phase to that of Bmal1, with both peaking 8 hours before the peak of Per2 mRNA expression. This is consistent with the observation that NPAS2 forms a heterodimer with BMAL1 to drive Per2 expression.
EP300. NPAS2 and EP300 interact in a time-dependent, synchronized manner. EP300 is recruited to NPAS2 as a coactivator of clock gene expression.
Retinoic acid receptor alpha (RARα) and retinoid X receptor alpha (RXRα). In peripheral clocks, RARα and RXRα interact with NPAS2 by inhibiting the NPAS2:BMAL1 heterodimer-mediated expression of clock genes. This interaction depends upon humoral signaling by retinoic acid and serves to phase-shift the clock.
Small heterodimer partner (SHP). In the liver circadian clock, NPAS2 and SHP engage in a TTFL: NPAS2 controls the circadian rhythms of SHP by rhythmically binding to its promoter, while SHP inhibits transcription of Npas2 when present.
Clinical significance
Npas2 genotypes can be determined through tissue samples from which genomic DNA is extracted and assayed. The assay is performed under PCR conditions and can be used to determine specific mutations and polymorphisms.
Polymorphisms and tumorigenesis
Mounting evidence suggests that the NPAS2 protein and other circadian genes are involved in tumorigenesis and tumor growth, possibly through their control of cancer-related biologic pathways. A missense polymorphism in NPAS2 (Ala394Thr) has been shown to be associated with risk of human tumors including breast cancer. These findings provide evidence suggesting a possible role for the circadian Npas2 gene in cancer prognosis. These results have been confirmed in both breast and colorectal cancers.
NPAS2 and mood disorders
Current research has revealed an association between seasonal affective disorder (SAD) and general mood disorder related to NPAS2, ARNTL, and CLOCK polymorphisms. These genes may influence seasonal variations through metabolic factors such as body weight and appetite.
Associated with a connection to mood disorders, NPAS2 has been found to be involved with dopamine degradation. This was first suggested by the observation that the clock components BMAL1 and NPAS2 transcriptionally activated a luciferase reporter driven by the murine monoamine oxidase A (MAOA) promoter in a circadian fashion. This suggested that these two clock components (BMAL1 and NPAS2) directly regulated MAOA transcription. Subsequent findings discovered positive transcriptional regulation of BMAL1/NPAS2 by PER2. In mice lacking PER2, both MAOA mRNA and MAOA protein levels were decreased. Therefore, dopamine degradation was reduced, and dopamine levels in the nucleus accumbens were increased. These findings indicate that degradation of monoamines is regulated by the circadian clock. It is very likely that the described clock-mediated regulation of monoamines is relevant for humans, because single-nucleotide polymorphisms in Per2, Bmal1, and Npas2 are associated in an additive fashion with seasonal affective disorder or winter depression.
See also
Clock (gene)
Bmal1/Arntl (gene)
Suprachiasmatic nucleus (SCN)
Per (gene)
Steven McKnight (scientist)
References
External links
Steven McKnight, the first scientist to implicate Npas2 as a contributor to circadian rhythms
Transcription factors
PAS-domain-containing proteins | NPAS2 | [
"Chemistry",
"Biology"
] | 1,918 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,530,757 | https://en.wikipedia.org/wiki/RAR-related%20orphan%20receptor%20beta | RAR-related orphan receptor beta (ROR-beta), also known as NR1F2 (nuclear receptor subfamily 1, group F, member 2) is a nuclear receptor that in humans is encoded by the RORB gene.
Function
The protein encoded by this gene is a member of the NR1 subfamily of nuclear hormone receptors. It is a DNA-binding protein that can bind as a monomer or as a homodimer to hormone response elements upstream of several genes to enhance the expression of those genes. The specific functions of this protein are not known, but it has been shown to interact with NM23-2, a nucleoside-diphosphate kinase involved in organogenesis and differentiation.
In the brain, ROR-beta is concentrated in layer 4 of the cerebral cortex, where it plays a role in the development of structures such as barrel columns.
A mutation in this gene also results in the loss of spinal cord interneurons and of saltatorial locomotion, a type of hopping gait that in mammals can be found in rabbits, hares, kangaroos, and some species of rodents.
Interactions
RAR-related orphan receptor beta has been shown to interact with NME1.
See also
RAR-related orphan receptor
References
Further reading
External links
Intracellular receptors
Transcription factors | RAR-related orphan receptor beta | [
"Chemistry",
"Biology"
] | 267 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,530,775 | https://en.wikipedia.org/wiki/TRPC1 | Transient receptor potential canonical 1 (TRPC1) is a protein that in humans is encoded by the TRPC1 gene.
Function
TRPC1 is an ion channel located on the plasma membrane of numerous human and animal cell types.
It is a nonspecific cation channel, which means that both sodium and calcium ions can pass through it. TRPC1 is thought to mediate calcium entry in response to depletion of endoplasmic calcium stores or activation of receptors coupled to the phospholipase C system. In HEK293 cells the unitary current-voltage relationship of endogenous TRPC1 channels is almost linear, with a slope conductance of about 17 pS. The extrapolated reversal potential of TRPC1 channels is +30 mV.
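Given the quoted slope conductance and reversal potential, the single-channel current at any holding potential follows from the linear relation i = g(V − V_rev); an illustrative sketch:

```python
CONDUCTANCE_PS = 17.0   # slope conductance in pS (from the article)
V_REV_MV = 30.0         # extrapolated reversal potential in mV (from the article)

def single_channel_current_pa(v_mv: float) -> float:
    """i = g * (V - V_rev); pS * mV = fA, so divide by 1000 to get pA."""
    return CONDUCTANCE_PS * (v_mv - V_REV_MV) / 1000.0

print(f"{single_channel_current_pa(-60.0):+.2f} pA")  # -1.53 pA (inward) at -60 mV
```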
The TRPC1 protein is widely expressed throughout the mammalian brain and has a similar corticolimbic expression pattern as TRPC4 and TRPC5.
The highest density of TRPC1 protein is found in the lateral septum, an area with dense TRPC4 expression, and hippocampus and prefrontal cortex, areas with dense TRPC5 expression.
History
TRPC1 was the first mammalian Transient Receptor Potential channel to be identified. In 1995 it was cloned when the research groups headed by Craig Montell and Lutz Birnbaumer were searching for proteins similar to the TRP channel in Drosophila. Together with TRPC3 they became the founding members of the TRPC ion channel family.
Interactions
TRPC1 has been shown to interact with:
HOMER3,
Polycystic kidney disease 2,
RHOA
TRPC3,
TRPC4, and
TRPC5.
See also
TRPC
References
Further reading
External links
Ion channels | TRPC1 | [
"Chemistry"
] | 356 | [
"Neurochemistry",
"Ion channels"
] |
13,530,779 | https://en.wikipedia.org/wiki/TRPC2 | Transient receptor potential cation channel, subfamily C, member 2, also known as TRPC2, is a protein that in humans is encoded by the TRPC2 pseudogene. This protein is not expressed in humans but is in certain other species such as mouse.
Interactions
TRPC2 has been shown to interact with TRPC6.
See also
TRPC
References
Further reading
External links
Ion channels
Genes mutated in mice
Pseudogenes | TRPC2 | [
"Chemistry"
] | 89 | [
"Neurochemistry",
"Ion channels"
] |
13,530,785 | https://en.wikipedia.org/wiki/TRPC3 | Short transient receptor potential channel 3 (TrpC3) also known as transient receptor protein 3 (TRP-3) is a protein that in humans is encoded by the TRPC3 gene. The TRPC3/6/7 subfamily are implicated in the regulation of vascular tone, cell growth, proliferation and pathological hypertrophy. These are diacylglycerol-sensitive cation channels known to regulate intracellular calcium via activation of the phospholipase C (PLC) pathway and/or by sensing Ca2+ store depletion. Together, their role in calcium homeostasis has made them potential therapeutic targets for a variety of central and peripheral pathologies.
Function
Non-specific cation conductance elicited by the activation of TrkB by BDNF is TRPC3-dependent in the CNS. TRPC channels are almost always co-localized with mGluR1-expressing cells and likely play a role in mGluR-mediated EPSPs.
The TRPC3 channel has been shown to be preferentially expressed in non-excitable cell types, such as oligodendrocytes. However, evidence suggests that active TRPC3 channels in basal ganglia (BG) output neurons are responsible for maintaining a tonic inward depolarizing current that regulates resting membrane potential and promotes regular neuronal firing. Conversely, inhibiting TRPC3 promotes cellular hyperpolarization, which can lead to slower and more irregular neuronal firing. While it's unclear if TRPC3 channels have equal expression, other members of the TRPC family have been localized to the axon hillock, cell body, and dendritic processes of dopamine-expressing cells.
The neuromodulator, substance P, activates TRPC3/7 channels inducing cellular currents that underlie rhythmic pacemaker activity in the brainstem, enhancing the regularity and frequency of respiratory rhythms, showing homology to the mechanism described in BG neurons. Transgenic cardiomyocytes expressing TRPC3 show prolonged action potential duration when exposed to a TRPC3 agonist. The same cardiomyocytes also increase their firing rate with agonist exposure under a current-clamp tetanus protocol suggesting that they may play a role in cardiac arrhythmogenesis.
Modulators
A small-molecule agonist is GSK1702934A; antagonists include GSK417651A and GSK2293017A. A commercially available inhibitor exists in the form of a pyrazole compound, Pyr3. TRPC3 has been shown to specifically interact with TRPC1 and TRPC6.
See also
Transient receptor potential channel
TRPC
References
Further reading
External links
Ion channels | TRPC3 | [
"Chemistry"
] | 568 | [
"Neurochemistry",
"Ion channels"
] |
13,530,790 | https://en.wikipedia.org/wiki/TRPC4 | The short transient receptor potential channel 4 (TrpC4), also known as Trp-related protein 4, is a protein that in humans is encoded by the TRPC4 gene.
Function
TrpC4 is a member of the transient receptor potential cation channels. This protein forms a non-selective calcium-permeable cation channel that is activated by Gαi-coupled receptors, Gαq-coupled receptors and tyrosine kinases, and plays a role in multiple processes including endothelial permeability, vasodilation, neurotransmitter release and cell proliferation.
Tissue distribution
The nonselective cation channel TrpC4 has been shown to be present in high abundance in the cortico-limbic regions of the brain. In addition, TRPC4 mRNA is present in midbrain dopaminergic neurons in the ventral tegmental area and the substantia nigra.
Roles
Deletion of the trpc4 gene decreases levels of sociability in a social exploration task. These results suggest that TRPC4 may play a role in regulating social anxiety in a number of different disorders. However, deletion of the trpc4 gene had no impact on basic or complex strategic learning. Given that the trpc4 gene is expressed in a select population of midbrain dopamine neurons, it has been proposed that it may have an important role in dopamine-related processes including addiction and attention.
Clinical significance
Single nucleotide polymorphisms in this gene may be associated with generalized epilepsy with photosensitivity.
Interactions
TRPC4 has been shown to interact with ITPR1, TRPC1, and TRPC5.
See also
TRPC
References
Further reading
External links
Ion channels | TRPC4 | [
"Chemistry"
] | 363 | [
"Neurochemistry",
"Ion channels"
] |
13,530,791 | https://en.wikipedia.org/wiki/TRPC5 | Short transient receptor potential channel 5 (TrpC5) also known as transient receptor protein 5 (TRP-5) is a protein that in humans is encoded by the TRPC5 gene. TrpC5 is subtype of the TRPC family of mammalian transient receptor potential ion channels.
Function
TrpC5 is one of the seven mammalian TRPC (transient receptor potential canonical) proteins. TrpC5 is a multi-pass membrane protein and is thought to form a receptor-activated non-selective calcium permeant cation channel. The protein is active alone or as a heteromultimeric assembly with TRPC1, TRPC3, and TRPC4. It also interacts with multiple proteins including calmodulin, CABP1, enkurin, Na+–H+ exchange regulatory factor (NHERF), interferon-induced GTP-binding protein (MX1), ring finger protein 24 (RNF24), and SEC14 domain and spectrin repeat-containing protein 1 (SESTD1).
TRPC4 and TRPC5 have been implicated in the mechanism of mercury toxicity and neurological behavior. It was established in 2021 that TRPC5 is a component of the dental cold sensing system.
Activation
Homomultimeric TRPC5 and heteromultimeric TRPC5-TRPC1 channels are activated by extracellular reduced thioredoxin. This channel has also been found to be involved in the action of anaesthetics such as chloroform, halothane and propofol.
Interactions
TRPC5 has been shown to interact with STMN3, TRPC1, and TRPC4.
See also
TRPC
References
Further reading
External links
Ion channels | TRPC5 | [
"Chemistry"
] | 360 | [
"Neurochemistry",
"Ion channels"
] |
13,530,805 | https://en.wikipedia.org/wiki/TRPM2 | Transient receptor potential cation channel, subfamily M, member 2, also known as TRPM2, is a protein that in humans is encoded by the TRPM2 gene.
Structure
The protein encoded by this gene is a non-selective calcium-permeable cation channel and is part of the transient receptor potential ion channel superfamily. The closest relative is the cold- and menthol-activated TRPM8 ion channel. While TRPM2 is not cold sensitive, it is activated by heat. The TRPM2 ion channel is activated by free intracellular ADP-ribose in synergy with free intracellular calcium. ADP-ribose is produced by the enzyme PARP in response to oxidative stress and confers susceptibility to cell death. Several alternatively spliced transcript variants of this gene have been described, but their full-length nature is not known.
Function
The TRPM2 gene is highly expressed in the brain and has been implicated in the genetic aetiology of bipolar affective disorder (manic depression), first by genetic linkage studies in families and subsequently by case-control and trio allelic association studies.
The physiological role of TRPM2 is not well understood. It was shown to be involved in insulin secretion. In immune cells it mediates parts of the responses to TNF-alpha. A role has been suggested for TRPM2 in activation of the NLRP3 inflammasome, the dysregulation of which is strongly associated with a number of autoinflammatory and metabolic diseases, such as gout, obesity and diabetes. In the brain it is involved in the toxicity of amyloid beta, a protein associated with Alzheimer's disease. In 2016, the TRPM2 channel was strongly implicated in the detection of non-painful warm stimuli: Chun-Hsiang Tan and Peter McNaughton studied the responses of sensory neurons to thermal stimuli, then used an RNA-sequencing strategy to identify TRPM2 as genetically required for warmth detection in the non-noxious range of 33–38 °C.
Clinical significance
TRPM2 expression and function help preserve cancer cell viability. TRPM2 channels are highly expressed in many cancers, notably neuroblastoma.
See also
TRPM
References
Further reading
External links
Ion channels
Biology of bipolar disorder
Nudix hydrolases | TRPM2 | [
"Chemistry"
] | 476 | [
"Neurochemistry",
"Ion channels"
] |
13,530,817 | https://en.wikipedia.org/wiki/PER3 | The PER3 gene encodes the period circadian protein homolog 3 protein in humans. PER3 is a paralog to the PER1 and PER2 genes. It is a circadian gene associated with delayed sleep phase syndrome in humans.
History
The Per3 gene was independently cloned by two research groups (Kobe University School of Medicine and Harvard Medical School), which both published their discovery in June 1998. The mammalian Per3 was discovered by searching for cDNA sequences homologous to Per2. The amino acid sequence of the mouse PERIOD3 protein (mPER3) is between 37% and 56% similar to those of the other two PER proteins.
Function
This gene is a member of the Period family of genes. It is expressed in a circadian pattern in the suprachiasmatic nucleus (SCN), the primary circadian pacemaker in the mammalian brain. Genes in this family encode components of the circadian rhythms of locomotor activity, metabolism, and behavior. Circadian expression in the SCN continues in constant darkness, and a shift in the light/dark cycle evokes a proportional shift of gene expression in the SCN. PER1 and PER2 are necessary for molecular timekeeping and light responsiveness in the master circadian clock in the SCN, but there is little concrete data on the function of PER3. PER3 has been found to be important for endogenous timekeeping in specific tissues, and tissue-specific changes in endogenous periods result in internal misalignment of circadian clocks in Per3 knockout (Per3−/−) mice. PER3 may have a stabilizing effect on PER1 and PER2, and this stabilizing effect may be reduced in the PER3-P415A/H417R polymorphism.
Role in chronobiology
The RNA levels of mPer3 oscillate with a circadian rhythm in both the SCN and the eyes, as well as in peripheral tissues, including the liver, skeletal muscle, and testis. Unlike Per1 and Per2, whose mRNAs are induced in response to light, Per3 mRNA in the SCN does not respond to light. This suggests that Per3 may be regulated differently than either Per1 or Per2.
The mPER3 protein contains a PAS domain, similar to mPER1 and mPER2. Likely, mPER3 binds to other proteins using this domain. However, while PER1/2 have been shown to be important in the transcription-translation feedback loop involved in the intracellular circadian clock, the influence of PER3 in this loop has not yet been fully elucidated, given that mPER3 does not appear to be functionally redundant to mPER1 and mPER2. mPer3 may not be a member of the core clock loop at all.
Animal studies
While the Per3 gene is a paralog to the PER1 and PER2 genes, studies in animals generally show that it does not contribute significantly to circadian rhythms. Functional Per3-/- animals experience only small changes in free-running period, and do not respond significantly differently to light pulses. Per1-/- and Per2-/- animals experience a significant change in free-running period; however, knocking out Per3 in addition to either Per1 or Per2 has little effect on free-running rhythms. Furthermore, Per1-/-Per2-/- mice are completely arrhythmic, indicating that these two genes have much more importance to the biological clock than Per3.
Per3 knockout mice experience a slightly shortened period of locomotor activity (by 0.5 hr) and are less sensitive to light, in that they entrain more slowly to changes in the light-dark cycle. PER3 may be involved in the suppression of behavioral activity in response to light, although mPer3 expression is not necessary for circadian rhythms.
Clinical significance
The PER3 “length” polymorphism in the 54-bp repeat sequence in exon 18 (GenBank accession no. AB047686) is a structural polymorphism due to an insertion or deletion of 18 amino acids in a region encoding a putative phosphorylation domain. The polymorphism has been associated with diurnal preference and delayed sleep phase syndrome. A longer allele polymorphism is associated with “morningness” and the short allele with “eveningness.” The short allele is also associated with delayed sleep phase syndrome. The length polymorphism has also been shown to inhibit adipogenesis and Per3 knockout mice were shown to have increased adipose tissue and decreased muscle tissue compared to wild type. Additionally, the presence of the length polymorphism has also been shown to be associated with type 2 diabetes mellitus (T2DM) patients as compared to non-diabetic control patients. The PER3-P415A/H417R polymorphism has been linked to familial advanced sleep phase syndrome in humans, as well as to seasonal affective disorder, though when knocked in to mice, the polymorphism causes a delayed sleep phase.
Gene
Orthologs
The following is a list of some orthologs of the PER3 gene in other species:
PER3 (P. troglodytes)
PER3 (M. mulatta)
PER3 (C. lupus)
PER3 (H. sapiens)
PER3 (B. taurus)
Per3 (M. musculus)
Per3 (R. norvegicus)
PER3 (G. gallus)
per3 (X. tropicalis)
per3 (D. rerio)
Paralogs
PER1
PER2
Gene location
The human PER3 gene is located on chromosome 1 at the following location:
Start: 7,784,320 bp
Finish: 7,845,181 bp
Length: 60,862 bases
Exons: 25
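The quoted length is consistent with the start and end coordinates above (counting both endpoints); a one-line check:

```python
start, end = 7_784_320, 7_845_181   # bp coordinates from the article
assert end - start + 1 == 60_862    # inclusive span in bases
```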
PER3 has 19 transcripts (splice variants).
Protein structure
The PER3 protein has been identified to have the following features:
Size: 1201 amino acids
Molecular mass: 131888 Da
Quaternary structure: Homodimer
Post translational modifications
The following are some known post transcriptional modifications to the Per3 gene:
Phosphorylation by CSNK1E is weak and appears to require association with PER1 and translocation to the nucleus.
Ubiquitinated.
Modification sites at PhosphoSitePlus
Modification sites at neXtProt
References
External links
Transcription factors
Circadian rhythm
PAS-domain-containing proteins | PER3 | [
"Chemistry",
"Biology"
] | 1,346 | [
"Behavior",
"Transcription factors",
"Gene expression",
"Signal transduction",
"Circadian rhythm",
"Sleep",
"Induced stem cells"
] |
13,530,822 | https://en.wikipedia.org/wiki/PER2 | PER2 is a protein in mammals encoded by the PER2 gene. PER2 is noted for its major role in circadian rhythms.
Discovery
The per gene was first discovered using forward genetics in Drosophila melanogaster in 1971. Mammalian Per2 was discovered in 1997 through a search for cDNA sequences homologous to PER1; it is more similar to Drosophila per than its paralogs are. Later experiments also identified PER2 in humans.
Function
PER2 is a member of the Period family of genes and is expressed in a circadian pattern in the suprachiasmatic nucleus, the primary circadian pacemaker in the mammalian brain. Genes in this family encode components of the circadian clock, which regulates the daily rhythms of locomotor activity, metabolism, and behavior; these genes and their encoded proteins are rhythmically expressed in the suprachiasmatic nucleus. Human PER2 is implicated in sleep disorders and cancer formation. Lowered PER2 expression is common in many tumor cells in the body, suggesting that PER2 is integral for proper cell function and that decreased levels promote tumor progression.
The PER2 gene contains glucocorticoid response elements (GREs); a GRE within this core clock gene is continuously occupied during rhythmic expression and is essential for glucocorticoid regulation of PER2 in vivo. Mice with a genomic deletion spanning this GRE expressed elevated leptin levels and, on glucocorticoid treatment, were protected from glucose intolerance and insulin resistance but not from muscle wasting. These findings indicate that PER2 is an integral component of a particular glucocorticoid regulatory pathway and that glucocorticoid regulation of the peripheral clock is selectively required for some actions of glucocorticoids.
PER2 expression in mice is increased by exposure to intense, daylight-strength light (reported at 13,000 lux), with cardiac troponin levels decreasing correspondingly. PER2 in turn enhances oxygen-efficient glycolysis and hence provides cardioprotection from ischemia. It has therefore been speculated that strong light may reduce the risk of heart attacks and limit the damage after one occurs. Moreover, PER2 has protective functions in liver disease, as it antagonizes hepatitis C viral replication.
Per2 knockout mice experience a free-running period of around 21.8 hours, compared to the normal mouse free-running period of 23.3 hours. Some of the Per2 knockout mice can also become arrhythmic under constant light conditions. PER2 has also been shown to be possibly important in the development of cancer. PER2 expression is significantly lower in human patients with lymphoma and acute myeloid leukemia.
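Free-running periods such as the 21.8-hour figure above are estimated from multi-day locomotor activity records, typically with a periodogram. The sketch below is purely illustrative: it generates idealized synthetic wheel-running data with a built-in 21.8 h rhythm and recovers the period with a Lomb-Scargle periodogram; it is not the analysis pipeline of the cited knockout studies.

    import numpy as np
    from scipy.signal import lombscargle

    # Synthetic activity record: 10 days sampled every 0.1 h,
    # with an assumed 21.8 h free-running rhythm plus noise.
    t = np.arange(0, 240, 0.1)
    activity = 1 + np.sin(2 * np.pi * t / 21.8)
    activity += 0.3 * np.random.default_rng(0).normal(size=t.size)

    # Scan candidate periods between 18 h and 28 h.
    periods = np.linspace(18, 28, 2000)
    power = lombscargle(t, activity - activity.mean(), 2 * np.pi / periods)
    print(f"estimated period: {periods[power.argmax()]:.1f} h")  # ~21.8 h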
The PER2 protein appears to be important for the proliferation of osteoblasts, which add density to bone, through a pathway involving Myc and Ccnd1. Certain PER2 mutations have been shown to increase alcohol intake in mice through reduced uptake of glutamate.
Main interactions
In mammals, mPER2 forms complexes with mPER1, mCRY1, and mCRY2 through binding at PAS domains. These complexes inhibit their own transcription by suppressing the CLOCK/BMAL1 complex, resulting in a negative feedback loop that is essential for maintaining a functioning circadian clock. Disrupting either both mPER1 and mPER2 genes together or both mCRY genes causes behavioural arrhythmicity when the double-knockout animals are placed in constant conditions. A third PER gene, mPer3, does not have a critical role in the maintenance of the core clock feedback loops; molecular and behavioural rhythms are preserved in mice lacking mPer3.
Interactions with CK1e
PER2 interacts with the kinase CK1e (casein kinase 1 epsilon), which phosphorylates PER2 in mammals. In the Syrian hamster, a CK1e mutation called tau increases phosphorylation of PER2, leading to faster degradation and a shortened period. Mutations in hPER2 can cause familial advanced sleep phase syndrome (FASPS) because the mutated hPER2 protein lacks a phosphorylation site.
Interaction table
Clinical significance
A genetic test from a cheek swab can use PER2 expression levels to tell whether a person is an early morning person or a "night owl".
Familial advanced sleep phase
Familial advanced sleep phase (FASP) is characterized by a short circadian period in humans (e.g., 23.3 hr vs. 24.3 hr in the general population). A mutation in hPER2 decreases its phosphorylation by CK1d, which causes the phenotype seen in some FASP. The primary cause of this form of FASP is a mutation that changes amino acid 662 from serine to glycine (S662G) in PER2. The S662G mutation makes the mutant PER2 protein a stronger repressor than normal PER2, decreasing cellular PER2 levels and thereby causing this form of FASP. The mutation also appears to increase the turnover rate of PER2 in the nucleus.
Gene
The PER2 gene is located on the long (q) arm of chromosome 2 at position 37.3 and has 25 exons.
The predicted human PER2 protein shares 44% identity with human PER1 and 77% identity with mouse Per2. Northern blot analysis revealed that PER2 is expressed as a 7-kb mRNA in all tissues examined; an additional 1.8-kb transcript was also detected in some tissues. PER2 mRNA has been shown to peak at ZT 6 in the SCN.
See also
PER1
PER3
Familial sleep traits
References
Further reading
External links
Transcription factors
PAS-domain-containing proteins | PER2 | [
"Chemistry",
"Biology"
] | 1,214 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,530,826 | https://en.wikipedia.org/wiki/TRPA1 | Transient receptor potential cation channel, subfamily A, member 1, also known as transient receptor potential ankyrin 1, TRPA1, or The Mustard and Wasabi Receptor, is a protein that in humans is encoded by the TRPA1 (and in mice and rats by the Trpa1) gene.
TRPA1 is an ion channel located on the plasma membrane of many human and animal cells. This ion channel is best known as a sensor for pain, cold and itch in humans and other mammals, as well as a sensor for environmental irritants giving rise to other protective responses (tears, airway resistance, and cough).
Function
TRPA1 is a member of the transient receptor potential channel family. TRPA1 contains 14 N-terminal ankyrin repeats and is believed to function as a mechanical and chemical stress sensor. One well-studied function of this protein is its role in the detection, integration and initiation of pain signals in the peripheral nervous system. It can be activated at sites of tissue injury or inflammation, directly by endogenous mediators or indirectly as a downstream target of signaling from a number of distinct G-protein coupled receptors (GPCRs), such as the bradykinin receptor.
The role of TRPA1 in pain sensing was first revealed when TRPA1 was identified as the receptor for mustard oil (allyl isothiocyanate), the pungent ingredient in mustard and wasabi. Recent studies indicate that TRPA1 is activated by a number of reactive compounds (cinnamaldehyde, farnesyl thiosalicylic acid, formalin, hydrogen peroxide, 4-hydroxynonenal, acrolein, and tear gases) and non-reactive compounds (nicotine, PF-4840154) and is thus considered a "chemosensor" in the body. TRPA1 is co-expressed with TRPV1 on nociceptive primary afferent C-fibers in humans. This sub-population of peripheral C-fibers is considered an important sensor of nociception in humans, and its activation under normal conditions gives rise to pain. Indeed, TRPA1 is considered an attractive pain target: TRPA1 knockout mice show near-complete attenuation of nocifensive behaviors in response to formalin, tear gas and other reactive chemicals, and TRPA1 antagonists are effective in blocking pain behaviors induced by inflammation (complete Freund's adjuvant and formalin).
Although it is not fully confirmed whether noxious cold sensation is mediated by TRPA1 in vivo, several recent studies clearly demonstrated cold activation of TRPA1 channels in vitro.
In the heat-sensitive loreal pit organs of many snakes, TRPA1 is responsible for the detection of infrared radiation.
Structure
In 2016, cryo-electron microscopy was employed to obtain a three-dimensional structure of TRPA1. This work revealed that the channel assembles as a homotetramer and possesses several structural features that hint at its complex regulation by irritants, cytoplasmic second messengers (e.g., calcium), cellular co-factors (e.g., inorganic anions such as polyphosphates), and lipids (e.g., PIP2). Most notably, the site of covalent modification and activation by electrophilic irritants was localized to a tertiary structural feature on the membrane-proximal intracellular face of the channel, which has been termed the 'allosteric nexus' and which is composed of a cysteine-rich linker domain and the eponymous TRP domain. Later work combining cryo-electron microscopy and electrophysiology elucidated the molecular mechanism by which the channel functions as a broad-spectrum irritant detector. Electrophiles were shown to act step-wise, covalently modifying two critical cysteine residues in the allosteric nexus; upon covalent attachment, the allosteric nexus adopts a conformational change that is propagated to the channel's pore, dilating it to permit cation influx and subsequent cellular depolarization. With respect to activation by the second messenger calcium, the structure of the channel in complex with calcium localized the binding site for this ion, and functional studies demonstrated that this site controls the various effects of calcium on the channel, namely potentiation, desensitization, and receptor-operation.
Clinical significance
In 2008, it was observed that caffeine suppresses the activity of human TRPA1; in mice, by contrast, TRPA1 channels expressed in sensory neurons cause an aversion to drinking caffeine-containing water, suggesting that TRPA1 channels mediate the perception of caffeine.
TRPA1 has also been implicated in airway irritation by cigarette smoke, cleaning supplies and in the skin irritation experienced by some smokers trying to quit by using nicotine replacement therapies such as inhalers, sprays, or patches.
A missense mutation of TRPA1 was found to be the cause of a hereditary episodic pain syndrome. A family from Colombia suffers from debilitating upper-body pain starting in infancy that is usually triggered by fasting or fatigue (illness, cold temperature, and physical exertion being contributory factors). A gain-of-function mutation in the fourth transmembrane domain causes the channel to be overly sensitive to pharmacological activation.
Metabolites of paracetamol (acetaminophen) have been demonstrated to bind to the TRPA1 receptors, which may desensitize the receptors in the way capsaicin does in the spinal cord of mice, causing an antinociceptive effect. This is suggested as the antinociceptive mechanism for paracetamol.
Oxalate, a metabolite of the anticancer drug oxaliplatin, has been demonstrated to inhibit prolyl hydroxylase, which endows the normally cold-insensitive human TRPA1 with pseudo cold sensitivity (via reactive oxygen generation from mitochondria). This may cause a characteristic side effect of oxaliplatin: cold-triggered acute peripheral neuropathy.
Ligand binding
TRPA1 can be considered to be one of the most promiscuous TRP ion channels, as it seems to be activated by a large number of noxious chemicals found in many plants, food, cosmetics and pollutants.
Activation of the TRPA1 ion channel by the olive oil phenolic compound oleocanthal appears to be responsible for the pungent or "peppery" sensation in the back of the throat caused by olive oil.
Although several nonelectrophilic agents such as thymol and menthol have been reported as TRPA1 agonists, most of the known activators are electrophilic chemicals that have been shown to activate the TRPA1 receptor via the formation of a reversible covalent bond with cysteine residues present in the ion channel. Another example of a nonelectrophilic agent is the anesthetic propofol, which is known to cause pain on injection into a vein, a side effect attributed to TRPV1 and TRPA1 activation. A dibenz[b,f][1,4]oxazepine derivative substituted by a carboxylic methyl ester at position 10 has been reported to be a potent nonelectrophilic (thiol-unreactive) TRPA1 agonist (EC50 = 0.05 nM), while dibenzoxazepine (CR 'gas', 0.3 nM) itself, as well as several other tear gases (CN (30 nM), CS (0.9 nM), CA (10 nM) 'gases') were found to be thiol-reactive TRPA1 agonists. This study found that chemical reactivity with thiols in combination with lipophilicity enabling membrane permeation result in a potent TRPA1 agonistic effect, but thiol adduct formation is neither sufficient nor necessary for TRPA1 activation. The pyrimidine PF-4840154 is a potent, non-covalent activator of both the human (EC50 = 23 nM) and rat (EC50 = 97 nM) TRPA1 channels. This compound elicits nociception in a mouse model through TRPA1 activation. Furthermore, PF-4840154 is superior to allyl isothiocyanate, the pungent component of mustard oil, for screening purposes. Other TRPA1 channel activators include JT-010 and ASP-7663, while channel blockers include A-967079, HC-030031 and AM-0902.
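The EC50 values quoted above (for example, 23 nM for PF-4840154 at human TRPA1) are half-maximal activation concentrations, and their practical meaning is easiest to see through a dose-response curve. A minimal Hill-equation sketch follows; the Hill coefficient of 1 is an assumption made for illustration, not a measured value for any of these ligands.

    def hill_response(conc_nM, ec50_nM, hill_n=1.0):
        """Fractional activation under a simple Hill model."""
        return conc_nM**hill_n / (ec50_nM**hill_n + conc_nM**hill_n)

    # PF-4840154 at human TRPA1, EC50 = 23 nM (value quoted above).
    for c in (2.3, 23.0, 230.0):
        print(f"{c:6.1f} nM -> {hill_response(c, 23.0):.0%} of max response")
    # 10-fold below the EC50 gives ~9%, the EC50 gives 50%,
    # and 10-fold above gives ~91% of the maximal response.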
The eicosanoids formed in the ALOX12 (i.e. arachidonate 12-lipoxygenase) pathway of arachidonic acid metabolism, 12S-hydroperoxy-5Z,8Z,10E,14Z-eicosatetraenoic acid (i.e. 12S-HpETE; see 12-Hydroxyeicosatetraenoic acid) and the hepoxilins (Hx), HxA3 (i.e. 8R/S-hydroxy-11,12-oxido-5Z,9E,14Z-eicosatrienoic acid) and HxB3 (i.e. 10R/S-hydroxy-11,12-oxido-5Z,8Z,14Z-eicosatrienoic acid) (see Hepoxilin#Pain perception), directly activate TRPA1 and thereby contribute to the hyperalgesia and tactile allodynia responses of mice to skin inflammation. In this animal model of pain perception, the hepoxilins are released in the spinal cord, where they directly activate TRPA1 (and also TRPV1) receptors to augment the perception of pain. 12S-HpETE, the direct precursor of HxA3 and HxB3 in the ALOX12 pathway, may act only after being converted to these hepoxilins. The epoxide 5,6-epoxy-8Z,11Z,14Z-eicosatrienoic acid (5,6-EET), made by the metabolism of arachidonic acid by any one of several cytochrome P450 enzymes (see Epoxyeicosatrienoic acid), likewise directly activates TRPA1 to amplify pain perception.
Studies with mice, guinea pigs, and human tissues indicate that another arachidonic acid metabolite, prostaglandin E2, operates through its prostaglandin EP3 G protein coupled receptor to trigger cough responses. Its mechanism of action does not appear to involve direct binding to TRPA1 but rather the indirect activation and/or sensitization of TRPA1 as well as TRPV1 receptors. A genetic polymorphism in the EP3 receptor (rs11209716) has been associated with ACE inhibitor-induced cough in humans.
More recently, a peptide toxin termed the wasabi receptor toxin from the Australian black rock scorpion (Urodacus manicatus) was discovered; it was shown to bind TRPA1 non-covalently in the same region as electrophiles and act as a gating modifier toxin for the receptor, stabilizing the channel in an open conformation.
TRPA1 inhibition
A number of small-molecule inhibitors (antagonists) have been discovered that block the function of TRPA1. At the cellular level, assays that measure agonist-activated inhibition of TRPA1-mediated calcium fluxes, together with electrophysiological assays, have been used to characterize the potency, species specificity and mechanism of inhibition. The earliest inhibitors, such as HC-030031, had low potency (micromolar inhibition) and limited TRPA1 specificity. The more recent discovery of highly potent inhibitors with low-nanomolar inhibition constants, such as A-967079 and ALGX-2542, combined with high selectivity over other members of the TRP superfamily and a lack of interaction with other targets, has provided valuable tool compounds and candidates for future drug development.
Resolvin D1 (RvD1) and RvD2 (see resolvins) and maresin 1 are metabolites of the omega 3 fatty acid, docosahexaenoic acid. They are members of the specialized proresolving mediators (SPMs) class of metabolites that function to resolve diverse inflammatory reactions and diseases in animal models and, it is proposed, in humans. These SPMs also damp pain perception arising from various inflammation-based causes in animal models. The mechanism behind their pain-dampening effect involves the inhibition of TRPA1, probably (in at least certain cases) by an indirect effect wherein they activate another receptor located on neurons or nearby microglia or astrocytes. CMKLR1, GPR32, FPR2, and NMDA receptors have been proposed to be the receptors through which SPMs may operate to down-regulate TRPs and thereby pain perception.
Ligand examples
Agonists
4-Oxo-2-nonenal
Allicin
Allyl isothiocyanate
ASP-7663
Cannabidiol
Cannabichromene
Gingerol
Icilin
Polygodial
Propofol
Hepoxilins A3 and B3
12S-Hydroperoxy-5Z,8Z,10E,14Z-eicosatetraenoic acid
4,5-Epoxyeicosatrienoic acid
Cinnamaldehyde
PF-4840154
2-Arachidonoylglycerol
Anandamide
N-Arachidonoyl dopamine
Palmitoylethanolamide
Cannabidiolic acid
Cannabidivarin
Cannabigerol
Cannabigerolic acid
Cannabigerovarin
Tetrahydrocannabivarin
Tetrahydrocannabivarin acid
Gating modifiers
WaTx
Antagonists
HC-030031
GRC17536
A-967079
ALGX-2513
ALGX-2541
ALGX-2563
ALGX-2561
ALGX-2542
References
External links
Membrane proteins
Ion channels | TRPA1 | [
"Chemistry",
"Biology"
] | 3,041 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
13,530,834 | https://en.wikipedia.org/wiki/Rev-ErbA%20alpha | Rev-Erb alpha (Rev-Erbɑ), also known as nuclear receptor subfamily 1 group D member 1 (NR1D1), is one of two Rev-Erb proteins in the nuclear receptor (NR) family of intracellular transcription factors. In humans, REV-ERBɑ is encoded by the NR1D1 gene, which is highly conserved across animal species.
Rev-Erbɑ plays an important role in regulation of the core circadian clock through repression of the positive clock element Bmal1. It also regulates several physiological processes under circadian control, including metabolic and immune pathways. Rev-Erbɑ mRNA demonstrates circadian oscillation in its expression, and it is highly expressed in mammals in the brain and metabolic tissues such as skeletal muscle, adipose tissue, and liver.
Discovery
Rev-Erbɑ was discovered in 1989 by Nobuyuki Miyajima and colleagues, who identified two erbA homologs on human chromosome 17 that were transcribed from opposite DNA strands in the same locus. One of the genes encoded a protein that was highly similar to chicken thyroid hormone receptor, and the other, which they termed ear-1, would later be described as Rev-Erbɑ. The protein was first referenced by the name Rev-Erbɑ in 1990 by Mitchell A. Lazar, Karen E. Jones, and William W. Chin, who isolated Rev-Erbɑ complementary DNA from a human fetal skeletal muscle library. Similar to the gene in rats, they found that human Rev-Erbɑ was transcribed from the strand opposite human thyroid hormone receptor alpha (THRA, c-erbAα).
Rev-Erbɑ was first implicated in circadian control in 1998, when Aurelio Balsalobre, Francesca Damiola, and Ueli Schibler demonstrated that expression of Rev-Erbɑ in rat fibroblasts showed daily rhythms. Rev-Erbɑ was first identified as a key player in the transcription translation feedback loop (TTFL) in 2002, when experiments demonstrated that Rev-Erbɑ acted to repress transcription of the Bmal1 gene, and Rev-Erbɑ expression was controlled by other TTFL components. This established Rev-Erbɑ as the link between the positive and negative loops of the TTFL.
Genetics and evolution
The NR1D1 (nuclear receptor subfamily 1 group D member 1) gene, located on chromosome 17, encodes the protein REV-ERBɑ in humans. It is transcribed from the opposite strand of the human thyroid hormone receptor alpha (THRA, c-erbAα) so that NR1D1 and THRA cDNA are complementary on 269 bases. The gene consists of 7,797 bases with 8 exons, forming only 1 splice variant. The NR1D1 promoter itself contains a REV-ERB response element (RevRE), which allows for regulation of gene expression both through autoregulation and regulation by retinoic acid receptor-related orphan receptor alpha (RORɑ), another nuclear receptor transcription factor. NR1D1 also contains an E-box at its promoter, which allows for regulation by BMAL1. In humans, NR1D1 (REV-ERBɑ) is highly expressed in the brain and metabolic tissues, including skeletal muscle, adipose tissue, and the liver.
Genomic analysis suggests that the NR1D1 gene was present in the most recent common ancestor of all animals, with orthologs present in 378 species tested, including chimpanzees, dogs, mice, rats, chickens, zebrafish, frogs, and fruit flies. Comparison to the rat ortholog, Nr1d1, indicates high conservation in the DNA binding and carboxy-terminal domains, as well as conservation of transcription of c-erbA alpha-2 and Rev-Erbɑ on opposite strands. In humans, NR1D1 has only one paralog, NR1D2 (REV-ERBβ), which is located on chromosome 3 and likely arose from a duplication event. However, both NR1D1 and NR1D2 are members of the nuclear receptor family, indicating they share common ancestry. As such, NR1D1 is functionally related to other nuclear receptor genes, such as peroxisome proliferator activated receptor delta (PPARD) and retinoic acid receptor alpha (RARA). Furthermore, studies have shown that the NR1D1/THRA genetic locus is genetically linked to the RARA gene.
Protein structure
The human NR1D1 gene produces a protein product (REV-ERBα) of 614 amino acids. REV-ERBα has three major functional domains: a DNA-binding domain (DBD), a ligand-binding domain (LBD) at the C-terminus, and an N-terminal domain that allows for modulation of its activity. These three domains are a common feature of nuclear receptor proteins.
The Rev-Erb proteins are unique among nuclear receptors in that they lack the C-terminal helix that is necessary for coactivator recruitment and activation via the LBD. Instead, Rev-Erbα interacts via its LBD with Nuclear Receptor Co-Repressor (NCoR) and the closely related co-repressor Silencing Mediator of Retinoid and Thyroid Receptors (SMRT), although the interaction with NCoR is stronger due to its structural compatibility. Heme, an endogenous ligand of Rev-Erbα, further stabilizes the interaction with NCoR. Repression by Rev-Erbα also requires interaction with the class I histone deacetylase 3 (HDAC3)-NCoR complex. The catalytic activity of HDAC3 is activated only when it complexes with NCoR or SMRT, so Rev-Erbα must interact with this complex for gene repression to occur via histone deacetylation. It is still unknown whether other HDACs play a role in the function of Rev-Erbα. Rev-Erbα recruits the NCoR-HDAC3 complex by binding a specific DNA sequence commonly referred to as the RORE, owing to its interaction with the transcriptional activator Retinoic Acid Receptor-related Orphan Receptor (ROR). This sequence consists of an "AGGTCA" half-site preceded by an A/T-rich sequence. Rev-Erbα binds in the major groove of this sequence via its DBD, which contains two C4-type zinc fingers. Rev-Erbα can repress gene activation as a monomer through competitive binding at this RORE site, but two Rev-Erbα molecules are required for interaction with NCoR and active gene repression. This can occur through two Rev-Erbα molecules binding separate ROREs, or as a stronger interaction through binding a response element that is a direct repeat of the RORE (RevDR2).
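Because the RORE is defined by a short consensus, an "AGGTCA" half-site preceded by an A/T-rich stretch, candidate sites can be located with a simple pattern scan. The sketch below uses a deliberately simplified pattern and a made-up promoter sequence; the published consensus and its flanking requirements are more nuanced than this regular expression.

    import re

    # Simplified RORE-like pattern: 4-6 A/T bases then the AGGTCA half-site.
    RORE = re.compile(r"[AT]{4,6}AGGTCA")

    promoter = "GGCCTATTAAGGTCACGGTTAATAGGTCAGG"  # hypothetical sequence
    for m in RORE.finditer(promoter):
        print(f"RORE-like site at position {m.start()}: {m.group()}")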
In mice, it has been shown that the N-terminal regulatory domain contains an important site for phosphorylation by casein kinase 1 epsilon (Csnk1e), which aids in proper localization of Rev-Erbα, and furthermore, that this domain is necessary for activation of the gap junction protein 1 (GJA1) gene.
Function
Circadian oscillator
Rev-Erbα has been proposed to coordinate circadian metabolic responses. Circadian rhythms are driven by interlocking transcription/translation feedback regulatory loops (TTFLs) that generate and maintain these daily rhythms, and Rev-Erbα is involved in a secondary TTFL in mammals. The primary TTFL features transcriptional activator proteins CLOCK and BMAL1 that contribute to the rhythmic expression of genes within this loop, notably per and cry. The expression of these genes then act through negative feedback to inhibit CLOCK:BMAL1 transcription. The secondary TTFL, featuring Rev-Erbα working in conjunction with Rev-Erbβ and the orphan receptor RORα, is thought to strengthen this primary TTFL by further regulating BMAL1. RORα shares the same response elements as Rev-Erbα but exerts opposite effects on gene transcription; BMAL1 expression is repressed by Rev-Erbα and activated by RORα. CLOCK:BMAL1 expression activates the transcription of NR1D1, encoding the Rev-Erbα protein. Increased Rev-Erbα expression in turn, represses transcription of BMAL1, stabilizing the loop. The oscillating expression of RORα and Rev-Erbα in the suprachiasmatic nucleus, the principal circadian timekeeper in mammals, leads to the circadian pattern of BMAL1 expression. The occupancy of the BMAL1 promoter by these two receptors is key for proper timing of the core clock machinery in mammals.
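The architecture described above, CLOCK:BMAL1 driving Rev-Erbα expression while REV-ERBα represses Bmal1, is a delayed negative feedback loop, and such loops can sustain oscillations on their own. The Goodwin-style sketch below is a deliberate caricature of this loop: the variables, rate constants, and the high Hill coefficient (which stands in for the cooperativity and delays of the real clock) are all illustrative assumptions rather than fitted values.

    from scipy.integrate import solve_ivp

    def ttfl(t, y, n=10, k=1.0, d=0.15):
        m, b, r = y                   # Bmal1 mRNA, BMAL1, REV-ERBα (arb. units)
        dm = k / (1 + r**n) - d * m   # Bmal1 transcription, repressed by REV-ERBα
        db = k * m - d * b            # BMAL1 production
        dr = k * b - d * r            # BMAL1-driven Rev-Erbα expression
        return [dm, db, dr]

    sol = solve_ivp(ttfl, (0, 300), [0.05, 0.05, 0.05], max_step=0.5)
    # sol.y[0] settles into sustained oscillations; the period is set mainly
    # by the degradation rate d (roughly a day here if t is read in hours).

The activating limb (RORα) could be added as an extra production term in the first equation; the essential point of the toy model is that the repressive limb alone suffices for rhythmicity.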
Metabolism
Rev-erbα plays a role in the regulation of whole body metabolism through controlling lipid metabolism, bile acid metabolism, and glucose metabolism. Rev-Erbα relays circadian signals into metabolic and inflammatory regulatory responses and vice versa, although the precise mechanisms underlying this relationship are not entirely understood.
Rev-erbα regulates the expression of liver apolipoproteins, sterol regulatory element binding protein, and the fatty acid elongase Elovl3 through its repressive activity. In addition, silencing of Rev-erbα is associated with reduced fatty acid synthase, a key regulator of lipogenesis. Rev-erbα-deficient mice exhibit dyslipidemia due to elevated triglyceride levels, and Rev-erbα polymorphisms in humans have been associated with obesity. Rev-erbα also regulates adipogenesis of white and brown adipocytes: Rev-erbα transcription is induced during the adipogenic process, and over-expression of Rev-erbα enhances adipogenesis. Researchers have proposed that Rev-erbα's role in adipocyte function may affect the timing of processes such as lipid storage and lipolysis, contributing to long-term issues with BMI control. Rev-erbα also regulates bile acid metabolism by indirectly down-regulating Cyp7A1, which encodes the first and rate-controlling enzyme of the major bile acid biosynthetic pathway.
Rev-erbα plays both indirect and direct roles in glucose metabolism. BMAL1 heavily influences glucose production and glycogen synthesis; thus, through its regulation of BMAL1, Rev-erbα indirectly regulates glucose synthesis. More directly, Rev-erbα expression in the pancreas regulates the function of α-cells and β-cells, which produce glucagon and insulin, respectively.
Muscle and cartilage
Rev-erbα plays a role in myogenesis through interaction with the transcription complex Nuclear Factor-T. It also represses the expression of genes involved in muscle cell differentiation and is expressed in a circadian manner in mouse skeletal muscle. Loss of Rev-erbα function reduces mitochondrial content and function, leading to impaired exercise capacity, while over-expression leads to improvement.
This protein has also been implicated in the integrity of cartilage. Of all known nuclear receptors, Rev-erbα is the most highly expressed in osteoarthritic cartilage. One study found that cartilage from patients with osteoarthritis has reduced Rev-erbα levels compared to normal cartilage. Research on rheumatoid arthritis (RA) has suggested the potential of treating RA patients with Rev-erbα agonists, owing to their suppression of bone and cartilage destruction.
Immune system
Rev-erbα contributes to the inflammatory response in mammals. In mouse smooth muscle cells, the protein up-regulates expression of interleukin 6 (IL-6) and cyclooxygenase-2. In humans, it controls the lipopolysaccharide (LPS)-induced endotoxic response by repressing toll-like receptor 4 (TLR-4), which triggers the immune response to LPS. In the brain, Rev-erbα deletion disrupts the oscillation of microglial activation and increases the expression of pro-inflammatory transcripts.
Many immune and inflammatory proteins exhibit circadian oscillatory behavior, and research has shown that Rev-erbα-deficient mice no longer exhibit these oscillations, notably in IL-6, IL-12, CCL5, CXCL1, and CCL2. Rev-erbα has also been implicated in the development of group 3 innate lymphoid cells (ILC3), which play a role in regulating intestinal health and contribute to lymphoid development. REV-ERBα promotes RORγt expression, and RORγt is required for ILC3 development; Rev-erbα is highly expressed in ILC3 subsets.
Mood and behavior
Rev-erbα has been implicated in the regulation of memory and mood. Rev-erbα knockout mice are deficient in short term, long term, and contextual memories, showing deficits in the function of their hippocampus. In addition, Rev-erbα has been proposed to play a role in the regulation of midbrain dopamine production and mood-related behavior in mice through repression of tyrosine hydroxylase gene transcription. Dopamine related dysfunction is associated with mood disorders, notably major depressive disorder, seasonal affective disorder, and bipolar disorder. Genetic variations in human NR1D1 loci are also associated with bipolar disorder onset.
Rev-erbα has been proposed as a target in the treatment of bipolar disorder through lithium, which indirectly regulates the protein at a post-translational level. Lithium inhibits glycogen synthase kinase (GSK 3β), an enzyme that phosphorylates and stabilizes Rev-erbα. Lithium binding to GSK 3β then destabilizes and alters the function of Rev-erbα. This research has been implicated in the development of therapeutic agents for affective disorders, such as lithium for bipolar disorder.
References
Further reading
External links
Intracellular receptors
Transcription factors | Rev-ErbA alpha | [
"Chemistry",
"Biology"
] | 2,975 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,530,853 | https://en.wikipedia.org/wiki/TRPM5 | Transient receptor potential cation channel subfamily M member 5 (TRPM5), also known as long transient receptor potential channel 5 is a protein that in humans is encoded by the TRPM5 gene.
Function
TRPM5 is a calcium-activated non-selective cation channel that induces depolarization upon increases in intracellular calcium; it is a signal mediator in chemosensory cells. Channel activity is initiated by a rise in intracellular calcium, and the channel permeates monovalent cations such as K+ and Na+.
TRPM5 is a key component of bitter, sweet and umami taste transduction in the gustatory system, and it is activated by high levels of intracellular calcium. It has also been proposed as a contributor to fat taste signaling. The calcium-dependent opening of TRPM5 produces a depolarizing generator potential, which leads to an action potential.
TRPM5 is expressed in pancreatic β-cells where it is involved in the signaling mechanism for insulin secretion. The potentiation of TRPM5 in the β-cells leads to increased insulin secretion and protects against the development of type 2 diabetes in mice. Further expression of TRPM5 can be found in tuft cells, solitary chemosensory cells and several other cell types in the body that have a sensory role.
Drugs modulating TRPM5
The role of TRPM5 in the pancreatic β-cell makes it a target for the development of novel antidiabetic therapies.
Agonists
Steviol glycosides, the sweet compounds in the leaves of the Stevia rebaudiana plant, potentiate the calcium-induced activity of TRPM5. In this way they stimulate the glucose-induced insulin secretion from the pancreatic β-cell.
Rutamarin, a phytochemical found in Ruta graveolens has been identified as an activator of several TRP channels, including TRPM5 and TRPV1 and inhibits the activity of TRPM8.
Antagonists
Selective blocking agents of TRPM5 ion channels can be used to identify TRPM5 currents in primary cells. Most identified compounds show, however, a poor selectivity between TRPM4 and TRPM5 or other ion channels.
TPPO (triphenylphosphine oxide) is the most selective blocker of TRPM5; however, its application is limited by poor solubility.
Ketoconazole is an antifungal drug that inhibits TRPM5 activity.
Flufenamic acid is an NSAID that inhibits the activity of TRPM5 as well as TRPM4.
Clotrimazole is an antifungal drug and reduces the currents through TRPM5.
Nicotine inhibits the TRPM5 channel; this inhibition may explain the taste loss observed in habitual smokers.
See also
TRPM
References
Further reading
External links
IUPHAR
HGNC Gene families
Pfam
Ion channels | TRPM5 | [
"Chemistry"
] | 622 | [
"Neurochemistry",
"Ion channels"
] |
13,530,864 | https://en.wikipedia.org/wiki/TRPV2 | Transient receptor potential cation channel subfamily V member 2 is a protein that in humans is encoded by the TRPV2 gene. TRPV2 is a nonspecific cation channel that is a part of the TRP channel family. This channel allows the cell to communicate with its extracellular environment through the transfer of ions, and responds to noxious temperatures greater than 52 °C. It has a structure similar to that of potassium channels, and has similar functions throughout multiple species; recent research has also shown multiple interactions in the human body.
TRP subfamily
The vanilloid TRP subfamily (TRPV), named after vanilloid receptor 1, consists of six members, four of which (TRPV1-TRPV4) have been related to thermal sensation. TRPV2 shares 50% sequence identity with TRPV1. Unlike TRPV1 channels, TRPV2 channels do not open in response to vanilloids such as capsaicin or to thermal stimuli around 43 °C, which may be due to the composition of the ankyrin repeat domains in TRPV2, which differ from those in TRPV1. However, TRPV2 channels can be opened by noxious temperatures greater than 52 °C. TRPV2 was initially characterized as a noxious-heat sensor channel, but more evidence suggests its importance in various osmosensory and mechanosensory mechanisms. The channel can open in response to a variety of stimuli including hormones, growth factors, mechanical stretching, heat, osmotic swelling, lysophospholipids, and cannabinoids. These channels are expressed in medium- to large-diameter neurons, motor neurons, and non-neuronal tissues such as the heart and lungs, indicating versatile function. The channel has an important role in basic cell functions including contraction, cell proliferation, and cell death, and the same channel can have different functions depending on the type of tissue. Other roles of TRPV2 continue to be explored in an attempt to define the role of TRPV2 translocation by growth factors. SET2 is a TRPV2-selective antagonist.
Discovery
TRPV2 was independently discovered by two research groups and described in 1999. It was identified in the lab of David Julius as a close homolog of TRPV1, known as the first identified thermosensitive ion channel. Itaru Kojima from Gunma University was looking for a protein which is responsible for the entry of calcium into cells in response to insulin-like growth factor-1 (IGF-1). Upon stimulation of cells with IGF-1, it was discovered that TRPV2 translocates towards and integrates into the cell membrane and increases intracellular calcium concentrations.
Structure
The TRPV2 channel has a structure similar to that of potassium channels, the largest ion channel family. The channel is composed of six transmembrane-spanning regions (S1-S6) with a pore-forming loop between S5 and S6. The pore-forming loop also defines the selectivity filter, which determines the ions that can enter the channel. The S1-S4 region, as well as the N- and C-termini of the protein, is important for the gating of the channel. Although TRPV2 is a nonspecific cation channel, it is more permeable to calcium ions; calcium is an intracellular messenger and plays a very important role in a variety of cellular processes. At rest, the pore is closed; in the activated state, the channel opens, allowing an influx of sodium and calcium ions that initiates an action potential.
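Why opening a nonselective cation channel depolarizes a resting cell can be made concrete with Nernst equilibrium potentials. The sketch below uses textbook mammalian ion concentrations, which are illustrative assumptions rather than values from this article:

    import math

    def nernst_mV(z, out_mM, in_mM, temp_C=37.0):
        """Nernst equilibrium potential in millivolts."""
        rt_over_f_mV = 8.314 * (temp_C + 273.15) / 96485 * 1000
        return (rt_over_f_mV / z) * math.log(out_mM / in_mM)

    # Typical textbook concentrations (assumed for illustration).
    print(f"E_Na = {nernst_mV(1, 145, 12):+.0f} mV")      # about +67 mV
    print(f"E_K  = {nernst_mV(1, 4, 155):+.0f} mV")       # about -98 mV
    print(f"E_Ca = {nernst_mV(2, 1.5, 0.0001):+.0f} mV")  # about +128 mV

A conductance carrying Na+, K+, and Ca2+ reverses between E_K and E_Na, near 0 mV, so opening TRPV2 at a resting potential around -70 mV drives Na+ and Ca2+ inward and depolarizes the cell.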
Species homology
The TRPV subfamily of channels of 1 through 4 have unique functions. One important variation is that these channels trigger cellular signaling pathways via non-selective cation flux, making them unique. Specifically, the TRPV2 channel has structural similarities amongst the other members of the TRPV family. For instance, the channel consists of six transmembrane domains and a pore forming loop between S5 and S6. Within the human genome, putative homologs can be found. This suggests that the amino acids and proteins coded come from a common ancestor where their structures are conserved in function.
Among the subfamily, TRPV2 and TRPV1 share 50% of their sequence identity not only in humans, but in rats as well. The rat TRPV2 can be comparable to that of humans because they exhibit similar surface localization among one another. Each channel possesses ATP binding regions and the 50% sequence identity between TRPV1 and TRPV2 suggests that both channel's Ankyrin repeat domain (ARD) bind to different regulatory ligands as well. The channels structure can be observed as similar to that of potassium channels. In knockout mice, the physiological thermal responses show similar activation to wild-type mice. On top of that, humans, rats, and mice are considered orthologues.
Tissue distribution
Homo sapiens
In Homo sapiens, TRPV2 is broadly expressed in the lymph nodes, spleen, lung, appendix, and placenta, with the highest expression in the lungs. TRPV2 is found mainly in a subpopulation of medium-to-large sensory neurons and is also distributed in the brain and spinal cord. TRPV2 mRNA expression is also found in human pulmonary and umbilical vein endothelial cells. Based on mRNA expression of TRPV2 in mice, it is also speculated to be expressed in arterial muscle cells, which can be influenced by blood pressure; although TRPV2 expression was evidently localized to intracellular compartments, some growth factors relocalized it to the plasma membrane. In circulatory organs, studies and data suggest that TRPV2 may be a mechanosensor, meaning that it can sense changes in external stimuli; the mechanisms by which membrane stretching or hypoosmotic cell swelling open TRPV2 have not yet been determined.
Mus musculus
In Mus musculus (house mouse), TRPV2 functions as a protein-coding gene. TRPV2 is broadly expressed in the thymus, placenta, cerebellum, and spleen, and is most highly expressed in the thymus. The thymus is a lymphoid organ of the immune system in which T cells mature; T cells are an important component of the adaptive immune system, through which the body adapts to foreign substances, underscoring TRPV2's importance in immunity. TRPV2 in Mus musculus is also activated by hypo-osmolarity and cell stretching, indicating that TRPV2 plays a role in mechanotransduction in mice as well. Experiments with knockout mice (TRPV2KO mice) found that TRPV2 is expressed in brown adipocytes and in brown adipose tissue (BAT). Since a lack of TRPV2 impairs thermogenesis in BAT, it can be concluded that TRPV2 plays a role in BAT thermogenesis in mice; given these results, TRPV2 could be a target for human obesity therapy.
Rattus norvegicus
In Rattus norvegicus (Norway rat), TRPV2 is broadly expressed in the adrenal glands and the lungs, with the highest expression in the adrenal glands. TRPV2 is also present in the thymus and spleen, though not in high amounts. Even without external growth factors, TRPV2 localizes predominantly to the plasma membrane in adult rat dorsal root ganglia, cerebral cortex, and arterial muscle cells.
Clinical significance
Cancer
TRPV2 plays a role in negative homeostatic control of excess cell proliferation by inducing apoptosis (programmed cell death). This is accomplished predominantly through the Fas pathway, also known as the death-inducing signaling complex. Activation of TRPV2 by growth factors and hormones induces the receptor to translocate from intracellular compartments to the plasma membrane, which initiates the development of death signals. An example of the role of TRPV2 in apoptosis is its expression in the bladder cancer T24 cell line, in which TRPV2 leads to apoptosis through the influx of calcium ions through the TRPV2 channel. In some tumors, over-expression of TRPV2 can lead to abnormal signaling pathways that drive unchecked cell proliferation and resistance to apoptotic stimuli. Over-expression of TRPV2 has been linked to several cancer types and cell lines. TRPV2 is expressed in human HepG2 cells, a cell line of human liver carcinoma cells; heat allows calcium entry into these cells through TRPV2 channels, which aids in their maintenance. TRPV2 also negatively affects patients with gliomas: TRPV2 in carcinogenic glial cells leads to resistance to apoptotic cell death, promoting harmful carcinogenic cell survival.
Immunity
TRPV2 is expressed in the spleen, lymphocytes, and myeloid cells including granulocytes, macrophages and mast cells. Among these cell types, TRPV2 mediates cytokine release, phagocytosis, endocytosis, podosome assembly, and inflammation. The influx of calcium seems to play an important role in these functions. Mast cells are leukocytes (white blood cells) rich in histamine which are able to respond to a variety of stimuli, often initiating inflammatory and/or allergic responses. The responses generated by mast cells rely on the calcium influx in the plasma membrane with the help of channels. Surface localization of the TRPV2 protein along with coupling of the protein to calcium and proinflammatory degranulation have been found in mast cells. The activation of TRPV2 in high temperatures permits calcium ion influx, inducing the release of proinflammatory factors. Therefore, TRPV2 is essential in mast cell degranulation as a result of its response to heat.
Immune cells are also able to kill pathogens by binding to them and engulfing them in a process known as phagocytosis. In macrophages, TRPV2 recruitment toward the phagosome is regulated by PI3K signaling, protein kinase C, Akt kinase, and Src kinases. Macrophages locate these microbes through chemotaxis, which is TRPV2-mediated. When the pathogen is endocytosed, it is degraded and then presented on the membrane of antigen-presenting cells (i.e. macrophages). Macrophages present these antigens to T cells via a major histocompatibility complex (MHC). The region between the MHC-peptide and the T cell receptor is known as the immunosynapse, and TRPV2 channels are highly concentrated in this region. When these two cells interact, calcium diffuses through the TRPV2 channel. TRPV2 mRNA has been detected in CD4+ and CD8+ T cells as well as in human B lymphocytes. TRPV2 is one type of ion channel that directs T cell activation, proliferation, and defense mechanisms; if the TRPV2 channel were absent or not functioning properly in T cells, T cell receptor signaling would not be optimal. TRPV2 also acts as a transmembrane protein on the surface of B cells, negatively controlling B cell activation. Abnormal TRPV2 expression has been reported in hematological diseases including multiple myeloma, myelodysplastic syndrome, Burkitt lymphoma, and acute myeloid leukemia.
Metabolic
TRPV2 appears to be essential in glucose homeostasis. It is highly expressed in MIN6 cells, a pancreatic β-cell line. These cells release insulin, a hormone that functions to keep glucose levels low. Under unstimulated conditions, TRPV2 is localized in the cytoplasm; activation causes the channel to translocate to the plasma membrane, triggering an influx of calcium that results in insulin secretion.
Cardiovascular
TRPV2 is very important in the structure and function of cardiomyocytes (heart cells). TRPV2 is expressed roughly ten-fold higher in cardiomyocytes than in skeletal muscle and is important in current conduction. TRPV2 has been shown to be involved in stretch-dependent responses in heart cells. TRPV2 expression is concentrated in intercalated discs, which enable the synchronous contraction of cardiomyocytes. Abnormal expression of TRPV2 results in reduced shortening length, shortening rate, and lengthening rate, which ultimately compromise cardiac contractile function.
Ligands
Agonist
Agonists include:
Anandamide
2-Arachidonoylglycerol
Cannabidiol (CBD)
Cannabidiorcol (O-1821)
Cannabidivarin (CBDV)
Cannabinol (CBN)
Cannabigerol (CBG)
Cannabigerovarin (CBGV)
Tetrahydrocannabinol (THC)
Tetrahydrocannabinolic acid (THCA)
Tetrahydrocannabivarin (THCV)
Nabilone
See also
Endocannabinoid system
Transient receptor potential channel
TRPV family
Cannabinoids
References
External links
Ion channels | TRPV2 | [
"Chemistry"
] | 2,852 | [
"Neurochemistry",
"Ion channels"
] |
13,530,870 | https://en.wikipedia.org/wiki/TRPM4 | Transient receptor potential cation channel subfamily M member 4 (hTRPM4), also known as melastatin-4, is a protein that in humans is encoded by the TRPM4 gene.
TRPM4 channel blockers
9-Phenanthrol
TRPM4-IN-5
See also
TRPM
References
Further reading
External links
Ion channels | TRPM4 | [
"Chemistry"
] | 74 | [
"Neurochemistry",
"Ion channels"
] |
13,530,909 | https://en.wikipedia.org/wiki/ARNTL2 | Aryl hydrocarbon receptor nuclear translocator-like 2, also known as Arntl2, Mop9, Bmal2, or Clif, is a gene.
Arntl2 is a paralog of Arntl; both are homologs of the Drosophila gene cycle. Homologs have also been isolated in fish, birds and mammals such as mice and humans. Based on phylogenetic analyses, it has been proposed that Arntl2 arose from a duplication of the Arntl gene early in the vertebrate lineage, followed by rapid divergence of the duplicated copy. The protein product of the gene interacts with both CLOCK and NPAS2 to bind E-box sequences in regulated promoters and activate their transcription. Although Arntl2 is not required for normal function of the mammalian circadian oscillator, it may play an important role in mediating the output of the circadian clock. Perhaps because of this, there is relatively little published literature on the role of Arntl2 in the regulation of physiology.
Arntl2 is a candidate gene for human type 1 diabetes.
In overexpression studies, ARNTL2 protein forms a heterodimer with CLOCK to regulate E-box sequences in the Pai-1 promoter. Recent work suggest that this interaction may be in concert with ARNTL/CLOCK heterodimeric complexes.
History
The ARNTL2 gene was originally discovered in 2000 by John B. Hogenesch et al. under the name MOP9 as a part of the PAS domain superfamily of eukaryotic transcription factors and as a homolog to ARNTL/MOP3. Hogenesch’s initial characterization of MOP9 indicated the role of the MOP9 protein as a partner of the bHLH-PAS transcription factor CLOCK in that the MOP9 protein forms a transcriptionally-active heterodimer with the circadian CLOCK protein. The MOP9 protein, like the MOP3 protein, was also found to form heterodimers with MOP4 and hypoxia-inducible factors including HIF1α. The MOP9 gene was found to be coexpressed with CLOCK in the suprachiasmatic nucleus (SCN) in the hypothalamus, the site of the central mammalian circadian oscillator. Due to MOP9 exhibiting extensive sequence identity with genes such as MOP3 and CYCLE, its dimerization with CLOCK, and the brain-specific expression of MOP9, particularly its expression in the SCN, Hogenesch et al. proposed that MOP9 is involved in the regulation of locomotor activity as a part of the mammalian circadian system. Further studies on the MOP9 gene have adopted the names ARNTL2 and BMAL2 in the same style as the previously-discovered ARNTL gene. Like ARNTL/BMAL1, one of the earliest discovered functions of BMAL2 in the circadian system was through its formation of the BMAL2-CLOCK heterodimer, and the relative transactivation of BMAL2-CLOCK and BMAL1-CLOCK have also indicated that BMAL1 and BMAL2 have distinguishable and individually important roles in the circadian system. Knockout studies of BMAL1 and BMAL2 have also demonstrated the regulatory effect of BMAL1 on BMAL2 expression, and have indicated that BMAL2 may play a more significant role in the circadian system than previously appreciated, although the exact nature of the role of BMAL2 has not yet been fully elucidated.
Structure
The BMAL2 protein follows the basic helix-loop-helix structure of the PER-ARNT-SIM family and contains a bHLH-PAS domain in its N-terminal region and a variable C-terminus. The PAS domain acts as a dimerization and binding surface in the aryl hydrocarbon receptor (AHR). Overall, BMAL2 shares much of its structure with BMAL1. However, the location on Chromosome 12 of BMAL2 in humans suggests that the gene may have a different function in the embryo.
Function
BMAL2 forms a heterodimer with CLOCK, activates transcription, and plays a role in the molecular oscillator. BMAL1 and BMAL2 are positive regulators and activate transcription by binding to the proximal (–565 to –560 bp) and distal (–680 to –675 bp) E-box enhancers of the PAI-1 promoter. BMAL2 functions similarly to BMAL1, but a 2009 study found differences in the affinities of the homologs: the Per2 gene showed a stronger affinity for the BMAL2-CLOCK complex, while CRY2 had a stronger affinity for the BMAL1-CLOCK complex. Per2 and CRY2 both inhibit the complexes and negatively regulate transcription. The true function of Bmal2 is not yet fully understood. A 2010 study by Shi et al. showed that overexpression of BMAL2 in BMAL1-knockout mice rescues locomotor and metabolic rhythms; in the same study, rhythmicity was not rescued in peripheral tissues such as the liver and lung. Bmal2 cannot replace Bmal1, and the two are not interchangeable. The protein plays an active role in the oscillator, but Bmal2 is not required for circadian oscillations in mice.
Interactions
Species distribution
Orthologs of BMAL2 have been found in many mammals other than humans, including chimpanzees, dogs and cows (ARNTL2), mice (Arntl2 and Bmal2), and rats (ARNTL2), as well as in zebrafish. ARNTL2 genes differ significantly more between species than ARNTL genes: BMAL2 proteins have diverged 20 times as quickly as BMAL1 proteins since the genes diverged, suggesting an unidentified function in BMAL1 that does not exist in BMAL2. Human and zebrafish BMAL2 proteins share only 66% of their amino acids, compared with 85% between human and zebrafish BMAL1 proteins. Identifying the cause of these comparatively large differences across species in BMAL2 will be significant for understanding the function of BMAL2 in the circadian clock.
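Figures such as "66% of the same amino acids" come from counting matching residues over a pairwise alignment. A minimal sketch of that calculation, using short made-up pre-aligned fragments rather than real BMAL sequences:

    def percent_identity(aln_a: str, aln_b: str) -> float:
        """Percent identity over aligned, non-gap columns."""
        pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
        matches = sum(a == b for a, b in pairs)
        return 100 * matches / len(pairs)

    human = "MAD-TEMSVQSS"  # hypothetical aligned fragment
    fish  = "MSDQTELSVQ-S"  # hypothetical aligned fragment
    print(f"{percent_identity(human, fish):.0f}% identity")  # 80% identity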
Knockout Studies
Like many genes involved in the circadian system, BMAL2 is a paralog of BMAL1. However, a 2000 study by Bunger et al. demonstrated that unlike other paralog pairs in the circadian system, such as Per1/Per2, Cry1/Cry2, and Clock/Npas2, a knockout of BMAL1 alone is sufficient to confer arrhythmicity, rather than requiring a knockout of both paralogs; other studies have indicated that BMAL1-specific knockouts also have significant effects on metabolism and longevity. The same 2000 study by Bunger et al. also showed that knockouts of BMAL1 down-regulate expression of BMAL2. A 2010 study by Shi et al. found that BMAL2 expression, driven by a constitutively expressed promoter, can rescue both circadian rhythmicity in locomotion and metabolic phenotypes in Bmal1-knockout mice. Thus, BMAL1 and BMAL2 form a functionally redundant paralog pair, but in mice BMAL2 expression is regulated by BMAL1, such that knocking out BMAL1 effectively results in the knockout of both BMAL1 and BMAL2, indicating that BMAL2 may play a more important role in the circadian system than previously thought. However, this same study by Shi et al. also found that over-expression of BMAL2 is ultimately insufficient to drive circadian rhythms in the peripheral tissues of mice, suggesting that the behavioral rhythms observed in this study may come from weak molecular clocks fortified through networks with the suprachiasmatic nucleus (SCN).
The C-terminal region of the BMAL1 protein is crucial for generating sustained circadian oscillations at the cellular level. Two specific domains within this intrinsically unstructured C-terminus enable this function, distinguishing BMAL1 from BMAL2. The regulation of the BMAL1 transactivation domain (TAD) as a key mechanism in circadian timing is an ongoing area of research.
Clinical significance
BMAL1 and BMAL2 are known to have a role in glucose homeostasis. A 2015 research study used forward genetics to find a BMAL2 genotype associated with type 2 diabetes. BMAL2 rs7958822 is a polymorphism with the genotypes A/G, A/A, and G/G. The study found that obese men with the BMAL2 rs7958822 A/G or A/A genotypes had a higher prevalence of type 2 diabetes.
Prior research has found desynchronization of cortisol synthesis and body temperature in patients with Parkinson’s disease, suggesting a role for circadian genes in the disease. One study used an RT-PCR assay to track the BMAL2 gene in PD patients and found changes in expression, specifically at 21:00 and 00:00. More research is needed to identify the molecular mechanism behind this, but the results suggest that BMAL2 and the molecular clock play a role in Parkinson’s disease.
In colorectal cancer cells, the upregulation of BMAL2 has been associated with higher levels of tumor mutational burden (TMB) as a result of subsequent upregulation of PAI1. The relationship between BMAL2 and TMB has been investigated in many models, providing further evidence for a positive correlation between BMAL2 expression and the expression of promoters of TMB. However, there is still a gap in research investigating the predictive capacity of circadian gene expression, including BMAL2, relating to TMB levels.
See also
Arntl (Bmal1)
References
External links
Transcription factors
PAS-domain-containing proteins | ARNTL2 | [
"Chemistry",
"Biology"
] | 2,107 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,530,915 | https://en.wikipedia.org/wiki/TRPC7 | Transient receptor potential cation channel, subfamily C, member 7, also known as TRPC7, is a human gene encoding a protein of the same name.
See also
TRPC
Further reading
External links
Ion channels | TRPC7 | [
"Chemistry"
] | 44 | [
"Neurochemistry",
"Ion channels"
] |
13,530,922 | https://en.wikipedia.org/wiki/TRPV4 | Transient receptor potential cation channel subfamily V member 4 is an ion channel protein that in humans is encoded by the TRPV4 gene.
The TRPV4 gene encodes TRPV4, initially named "vanilloid-receptor related osmotically activated channel" (VR-OAC) and "OSM9-like transient receptor potential channel, member 4 (OTRPC4)", a member of the vanilloid subfamily in the transient receptor potential (TRP) superfamily of ion channels. The encoded protein is a Ca2+-permeable, nonselective cation channel that has been found involved in multiple physiologic functions, dysfunctions and also disease. It functions in the regulation of systemic osmotic pressure by the brain, in vascular function, in liver, intestinal, renal and bladder function, in skin barrier function and response of the skin to ultraviolet-B radiation, in growth and structural integrity of the skeleton, in function of joints, in airway- and lung function, in retinal and inner ear function, and in pain. The channel is activated by osmotic, mechanical and chemical cues. It also responds to thermal changes (warmth). Channel activation can be sensitized by inflammation and injury.
The TRPV4 gene was co-discovered by W. Liedtke et al. and R. Strotmann et al.
Clinical significance
Channelopathy mutations in the TRPV4 gene lead to skeletal dysplasias, premature osteoarthritis, and neurological motor function disorders and are associated with a range of disorders, including brachyolmia type 3, congenital distal spinal muscular atrophy, Familial digital arthropathy-brachydactyly (FDAB), scapuloperoneal spinal muscular atrophy, and subtype 2C of Charcot–Marie–Tooth disease.
Pharmacology
A number of TRPV4 agonists and antagonists have been identified since its discovery. The discovery of unselective modulators (e.g. the antagonist ruthenium red) was followed by the appearance of more potent (agonist 4aPDD) or more selective (antagonist RN-1734) compounds, including some with bioavailability suitable for in vivo pharmacology studies, such as the agonist GSK1016790A (with ~10-fold selectivity vs TRPV1), and the antagonists HC-067047 (with ~5-fold selectivity vs hERG and ~10-fold selectivity vs TRPM8) and RN-9893 (with ~50-fold selectivity vs TRPM8 and ~10-fold selectivity vs M1).
Resolvin D1 (RvD1), a metabolite of the omega 3 fatty acid, docosahexaenoic acid, is a member of the specialized proresolving mediators (SPMs) class of metabolites that function to resolve diverse inflammatory reactions and diseases in animal models and, it is proposed, humans. This SPM also dampens pain perception arising from various inflammation-based causes in animal models. The mechanism behind this pain-dampening effect involves the inhibition of TRPV4, probably (in at least certain cases) by an indirect effect wherein it activates another receptor located on neurons or nearby microglia or astrocytes. CMKLR1, GPR32, FPR2, and NMDA receptors have been proposed to be the receptors through which a SPM may operate to down-regulate TRPs and thereby pain perception.
Interactions
TRPV4 has been shown to interact with MAP7 and LYN.
Implication in Temperature-Dependent Sex Determination in Reptiles
TRPV4 has been proposed to be the thermal sensor in the gonads of Alligator mississippiensis, a species with temperature-dependent sex determination. However, the data were likely overinterpreted, and TRPV4 is probably not involved in temperature-dependent sex determination, given, for example, the large overlap in its expression at male-producing and female-producing temperatures.
See also
TRPV
References
External links
GeneReviews/NCBI/NIH/UW entry on Charcot-Marie-Tooth Neuropathy Type 2
Ion channels | TRPV4 | [
"Chemistry"
] | 880 | [
"Neurochemistry",
"Ion channels"
] |
13,530,930 | https://en.wikipedia.org/wiki/TRPM8 | Transient receptor potential cation channel subfamily M (melastatin) member 8 (TRPM8), also known as the cold and menthol receptor 1 (CMR1), is a protein that in humans is encoded by the TRPM8 gene. The TRPM8 channel is the primary molecular transducer of cold somatosensation in humans. In addition, mints can desensitize a region through the activation of TRPM8 receptors (the 'cold'/menthol receptor).
Structure
The TRPM8 channel is a homotetramer, composed of four identical subunits with a transmembrane domain with six helices (S1–6). The first four, S1–4, act as the voltage sensor and allow binding of menthol, icilin and similar channel agonists. S5 and S6 and a connecting loop, also part of the structure, make up the pore, a non-selective cation channel which consists of a highly conserved hydrophobic region. A range of diverse components are required for the high level of specificity in response to cold and menthol stimuli which eventually lead to ion flow through the protein channel.
Function
TRPM8 is an ion channel: upon activation, it allows the entry of Na+ and Ca2+ ions into the cell, which leads to depolarization and the generation of an action potential. The signal is conducted from primary afferents (type C- and A-delta) eventually leading to the sensation of cold and cold pain.
The TRPM8 protein is expressed in sensory neurons, and it is activated by cold temperatures and cooling agents, such as menthol and icilin whereas WS-12 and CPS-369 are the most selective agonists of TRPM8.
TRPM8 is also expressed in the prostate, lungs, and bladder where its function is not well understood.
Role in the nervous system
The transient receptor potential channel (TRP) superfamily, which includes the menthol (TRPM8) and capsaicin receptors (TRPV1), serve a variety of functions in the peripheral and central nervous systems. In the peripheral nervous system, TRPs respond to stimuli from temperature, pressure, inflammatory agents, and receptor activation. Central nervous system roles of the receptors include neurite outgrowth, receptor signaling, and excitotoxic cell death resulting from noxious stimuli.
McKemy et al., 2002 provided some of the first evidence for the existence of a cold-activated receptor in the mammalian somatosensory system. Using calcium imaging and patch-clamp approaches, they showed that exposing dorsal root ganglion (DRG) neurons to cold, 20 °C or cooler, led to an influx of calcium. This receptor was shown to respond to cold temperatures, menthol, and similar now-known agonists of the TRPM8 receptor. It works in conjunction with the TRPV1 receptor to maintain a threshold temperature range in which our cells are comfortable, and our perception of these stimuli occurs at the spinal cord and brain, which integrate signals from different fibers of varying sensitivity to temperature. Application of menthol to skin or mucous membranes results directly in membrane depolarization, followed by calcium influx via voltage-dependent calcium channels, providing evidence that TRPM8 and other TRP receptors mediate our sensory interaction with the environment in response to cold in the same way as in response to menthol.
Properties
pH-sensitivity
In contrast to the TRPV1 (capsaicin) receptor, which is potentiated by low pH, acidic conditions were shown to inhibit the TRPM8 Ca2+ response to menthol and icilin (an agonist of the menthol receptor). It is hypothesized that the TRPV1 and TRPM8 receptors act together in response to inflammatory conditions: TRPV1, by proton action, increases the burning sensation of pain, while the acidity inhibits TRPM8 to block the more pleasant sensation of coolness in more dire instances of pain.
Sensitization
Numerous studies have been published investigating the effect of L-menthol application as a model for TRPM8 sensitization. The primary consensus finding is that TRPM8 sensitization increases the sensation of cold pain, also known as cold hyperalgesia. In a double-blind, two-way crossover study, 40% L-menthol was applied to the forearm, with ethanol as a control. Activation of the TRPM8 receptor channel (the primary menthol receptor channel) resulted in increased sensitization to the menthol stimulus. To investigate the mechanisms of this sensitization, Wasner et al., 2004, performed an A-fiber conduction blockade of the superficial radial nerve in another group of subjects. The blockade reduced the menthol-induced cold sensation and hyperalgesia, because blocking A-fiber conduction inhibited a class of group C nerve fiber nociceptors needed to transduce the sensation of pain. They concluded that menthol sensitizes cold-sensitive peripheral C nociceptors and activates cold-specific A-delta fibers.
Desensitization
As is common in response to many other sensory stimuli, there is considerable experimental evidence for desensitization of the human TRPM8 response to menthol. Administration of menthol- and nicotine-containing cigarettes to non-smokers, which induced what was classified as an irritant response, showed, after initial sensitization, a declining response over time, consistent with desensitization. Ethanol, with similar irritant and desensitization properties, was used as a control for nicotine, to distinguish it from the menthol-induced response. The menthol receptor was seen to sensitize or desensitize depending on cellular conditions, and menthol produces increased activity in Ca2+ voltage-gated channels that is not seen with ethanol, cyclohexanol, and other irritant controls, suggestive of a specific molecular receptor. Dessirier et al., 2001, also report that cross-desensitization of menthol receptors can occur by unknown molecular mechanisms, though they hypothesize the importance of Ca2+ in reducing cell excitability in a way similar to that in the capsaicin receptor.
Mutagenesis of protein kinase C phosphorylation sites in TRPM8 (wild type serines and threonines replaced by alanine in mutants) reduces the desensitizing response.
Caryophyllene inhibits TRPM8, which helps mammals to improve cold tolerance at low ambient temperatures.
Cross-desensitization
Cliff et al., 1994, performed a study to discover more about the properties of the menthol receptor and whether menthol can cross-desensitize with other chemical irritant receptors. Capsaicin was known to cross-desensitize with other irritant agonists, whereas the same was not known about menthol. The study involved subjects swishing either menthol or capsaicin for an extended time at regular intervals. There were three significant conclusions about cross-desensitization: 1) both chemicals self-desensitize, 2) menthol receptors can desensitize in response to capsaicin, and, most notably, 3) capsaicin receptors are sensitized in response to menthol.
Ligands
Agonists
In a search for compounds that activate the TRPM8 cold receptor, compounds that produce a cooling sensation were sought from the fragrance industry. Of 70 relevant compounds, those listed here produced the associated [Ca2+]-increase response in the mTRPM8-transfected HEK293 cells used to identify agonists. Experimentally identified and commonly utilized agonists of the menthol receptor include linalool, geraniol, hydroxy-citronellal, icilin, WS-12, Frescolat MGA, Frescolat ML, PMD 38, Coolact P, M8-Ag and Cooling Agent 10. Traditionally used agonists include menthol and borneol.
Antagonists
BCTC, thio-BCTC, capsazepine and M8-An were identified as antagonists of the TRPM8 receptor. These antagonists physically block the receptor for cold and menthol, by binding to the S1-S4 voltage-sensing domain, preventing response.
AMG-333
RQ-00434739
RQ-00203078
PF-05105679 cas: [1398583-31-7].
M8 B
AMTB
5-benzyloxytryptamine
Anandamide
N-Arachidonoyl dopamine
Tetrahydrocannabinol
Tetrahydrocannabinolic acid
Cannabidiol
Cannabidiolic acid
Cannabigerol
Tetrahydrocannabivarin
Tetrahydrocannabivarin Acid
Cannabidivarin
Cannabigerolic acid
Cannabigerovarin
Cannabichromene
Cannabinol
Clinical significance
Cold patches have traditionally been used to induce analgesia, or pain relief, for pain caused as a result of traumatic injuries. The underlying mechanism of cold-induced analgesia remained obscure until the discovery of TRPM8.
One research group has reported that TRPM8 is activated by chemical cooling agents (such as menthol) or when ambient temperatures drop below approximately 26 °C, suggesting that it mediates the detection of cold thermal stimuli by primary afferent sensory neurons of afferent nerve fibers.
Three independent research groups have reported that mice lacking functional TRPM8 gene expression are severely impaired in their ability to detect cold temperatures. Remarkably, these animals are deficient in many diverse aspects of cold signaling, including cool and noxious cold perception, injury-evoked sensitization to cold, and cooling-induced analgesia. These animals provide a great deal of insight into the molecular signaling pathways that participate in the detection of cold and painful stimuli. Many research groups, both in universities and pharmaceutical companies, are now actively involved in looking for selective TRPM8 ligands to be used as new generation of neuropathic analgesic drugs.
Low concentrations of TRPM8 agonists such as menthol (or icilin) were found to be antihyperalgesic in certain conditions, whereas high concentrations of menthol caused both cold and mechanical hyperalgesia in healthy volunteers.
TRPM8 knockout mice not only indicated that TRPM8 is required for cold sensation but also revealed that TRPM8 mediates both cold and mechanical allodynia in rodent models of neuropathic pain. Furthermore, recently it was shown that TRPM8 antagonists are effective in reversing established pain in neuropathic and visceral pain models.
TRPM8 upregulation in bladder tissues correlates with pain in patients with painful bladder syndromes. Furthermore, TRPM8 is upregulated in many prostate cancer cell lines and Dendreon/Genentech are pursuing an agonist approach to induce apoptosis and prostate cancer cell death.
Role in cancer
TRPM8 channels may be a target for treating prostate cancer. TRPM8 is an androgen-dependent Ca2+ channel necessary for prostate cancer cells to survive and grow. Immunofluorescence showed expression of the TRPM8 protein in the ER and plasma membrane of the androgen-responsive LNCaP cell line. TRPM8 was expressed in androgen-insensitive cells, but was not shown to be needed for their survival. Knockdown of TRPM8 with siRNAs targeting TRPM8 mRNAs demonstrated that the receptor is necessary in the androgen-dependent cancer cells. This has useful implications for gene therapy, as there are few treatment options for men with prostate cancer. As an androgen-regulated protein whose function is lost as cancer develops in cells, the TRPM8 protein appears especially critical in regulating calcium levels and has recently been proposed as the focus of new drugs used to treat prostate cancer.
See also
Endocannabinoid system
TRPM
Ruthenium red
References
Further reading
External links
Ion channels | TRPM8 | [
"Chemistry"
] | 2,587 | [
"Neurochemistry",
"Ion channels"
] |
13,530,939 | https://en.wikipedia.org/wiki/TRPM3 | Transient receptor potential cation channel subfamily M member 3 is a protein that in humans is encoded by the TRPM3 gene.
Function
The product of this gene belongs to the family of transient receptor potential (TRP) channels. TRP channels are Ca2+-permeable, non-selective cation channels that play roles in a wide variety of physiological processes, including calcium signaling, heat and cold sensation, and calcium and magnesium homeostasis. TRPM3 mediates sodium and calcium entry, which induces depolarization and a cytoplasmic Ca2+ signal. Alternatively spliced transcript variants encoding different isoforms have been identified.
TRPM3 was shown to be activated by the neurosteroid pregnenolone sulfate as well as the synthetic compound CIM0216.
Peripheral heat sensation
TRPM3 is expressed in peripheral sensory neurons of the dorsal root ganglia, where it is activated by high temperatures. Genetic deletion of TRPM3 in mice reduces sensitivity to noxious heat, as well as inflammatory thermal hyperalgesia. Inhibitors of TRPM3 were also shown to reduce sensitivity to noxious heat and inflammatory heat hyperalgesia, as well as to reduce heat hyperalgesia and spontaneous pain in nerve-injury-induced neuropathic pain.
Receptor mediated inhibition
TRPM3 is robustly inhibited by the activation of cell surface receptors that couple to inhibitory heterotrimeric G-proteins (Gi) via direct binding of the Gβγ subunit of the G-protein to the channel. Gβγ was shown to bind to a short α-helical segment of the channel. Receptors that inhibit TRPM3 include opioid receptors and GABAB receptors.
TRPM3 in the brain
Mutations in TRPM3 in humans were recently shown to cause intellectual disability and epilepsy. The disease-associated mutations were shown to increase the sensitivity of the channel to agonists and to heat.
TRPM3 ligands, activators and modulators
Activators
Heat
Pregnenolone Sulfate
CIM-0216
Channel Blockers
Mefenamic acid
Citrus fruit flavonoids, e.g. naringenin, isosakuranetin and hesperetin, as well as ononetin (a deoxybenzoin).
Primidone, a clinically used antiepileptic medication also directly inhibits TRPM3.
Activity Modulator
pH
See also
TRPM
TRPM3-related neurodevelopmental disorders
References
Further reading
External links
Ion channels | TRPM3 | [
"Chemistry"
] | 523 | [
"Neurochemistry",
"Ion channels"
] |
13,530,960 | https://en.wikipedia.org/wiki/TRPV3 | Transient receptor potential cation channel, subfamily V, member 3, also known as TRPV3, is a human gene encoding the protein of the same name.
The TRPV3 protein belongs to a family of nonselective cation channels that function in a variety of processes, including temperature sensation and vasoregulation. The thermosensitive members of this family are expressed in subsets of human sensory neurons that terminate in the skin, and are activated at distinct physiological temperatures. This channel is activated at temperatures between 22 and 40 degrees C. The gene lies in close proximity to another family member (TRPV1) gene on chromosome 17, and the two encoded proteins are thought to associate with each other to form heteromeric channels.
Function
The TRPV3 channel has wide tissue expression, which is especially high in the skin (keratinocytes) but also present in the brain. It functions as a molecular sensor for innocuous warm temperatures. Mice lacking this protein are unable to sense elevated temperatures (>33 °C) but are able to sense cold and noxious heat. In addition to thermosensation, TRPV3 channels seem to play a role in hair growth, because mutations in the TRPV3 gene cause hair loss in mice. The role of TRPV3 channels in the brain is unclear, but they appear to play a role in mood regulation. The protective effects of the natural product incensole acetate were partially mediated by TRPV3 channels.
Modulation
The TRPV3 channel is directly activated by various natural compounds like carvacrol, thymol and eugenol. Several other monoterpenoids which cause either feeling of warmth or are skin sensitizers can also open the channel. Monoterpenoids also induce agonist-specific desensitization of TRPV3 channels in a calcium-independent manner.
Resolvin E1 (RvE1), RvD2, and 17R-RvD1 (see resolvins) are metabolites of the omega 3 fatty acids, eicosapentaenoic acid (for RvE1) or docosahexaenoic acid (for RvD2 and 17R-RvD1). These metabolites are members of the specialized proresolving mediators (SPMs) class of metabolites that function to resolve diverse inflammatory reactions and diseases in animal models and, it is proposed, humans. These SPMs also dampen pain perception arising from various inflammation-based causes in animal models. The mechanism behind their pain-dampening effects involves the inhibition of TRPV3, probably (in at least certain cases) by an indirect effect wherein they activate other receptors located on neurons or nearby microglia or astrocytes. CMKLR1, GPR32, FPR2, and NMDA receptors have been proposed to be the receptors through which these SPMs operate to down-regulate TRPV3 and thereby pain perception.
2-Aminoethoxydiphenyl borate (2-APB) is a mixed agonist-antagonist of the TRPV3 receptor, acting as an antagonist at low concentrations but showing agonist activity when used in larger amounts. Drofenine also acts as a TRPV3 agonist in addition to its other actions. Conversely, icilin has been shown to act as a TRPV3 antagonist, as well as a TRPM8 agonist. Forsythoside B acts as a TRPV3 inhibitor among other actions. Farnesyl pyrophosphate is an endogenous agonist of TRPV3, while incensole acetate from frankincense also acts as an agonist at TRPV3. TRPV3-74a is a selective TRPV3 antagonist.
Ligands
Agonists
Cannabidiol
Tetrahydrocannabivarin
Cannabigerolic acid
Cannabigerovarin
See also
TRPV
Endocannabinoid system
References
Further reading
External links
Ion channels | TRPV3 | [
"Chemistry"
] | 837 | [
"Neurochemistry",
"Ion channels"
] |
13,531,370 | https://en.wikipedia.org/wiki/Push%20start | Push starting, also known as roll starting, clutch starting, popping the clutch or crash starting, is a method of starting a motor vehicle with an internal combustion engine that has a manual transmission, a mechanical fuel pump, and a mechanically driven generator or alternator. By pushing the vehicle, or letting it roll downhill, and then engaging the clutch at the appropriate speed, the engine is turned over and started. The technique is most commonly employed when other starting methods (automobile self starter, kick start, jump start, etc.) are unavailable.
The most common way to push start a vehicle is to put the manual transmission in second gear, switch the ignition to on/run, depress the clutch, and push the vehicle until it is moving at sufficient speed, then quickly engage the clutch to make the engine rotate and start while keeping the gas pedal partially depressed, then quickly disengage the clutch again so the engine does not stall.
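How fast is "sufficient speed"? A rough sense comes from the gearing arithmetic: road speed sets wheel speed, and the gear and final-drive ratios multiply that up to engine speed. The sketch below is illustrative only; the tyre circumference and ratios are assumed typical values, not data for any particular vehicle.

```python
# Rough gearing arithmetic for push starting in second gear.
# All figures are illustrative assumptions, not data for any
# specific vehicle: tyre circumference ~1.9 m, final drive 4.0:1,
# second-gear ratio 2.0:1.

def engine_rpm(speed_kmh, tyre_circumference_m=1.9,
               final_drive=4.0, gear_ratio=2.0):
    wheel_rpm = (speed_kmh * 1000 / 60) / tyre_circumference_m
    return wheel_rpm * final_drive * gear_ratio

for v in (5, 8, 12):  # km/h
    print(f"{v} km/h -> ~{engine_rpm(v):.0f} engine rpm")
# At ~8 km/h this yields roughly 560 rpm, well above typical
# starter-motor cranking speeds of a few hundred rpm.
```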
Types
Push starting is most successful when the automobile has a gasoline engine, a carburetor, and either a capacitor discharge ignition (CDI) or an inductive discharge ignition system. Automobiles with other engine, ignition, and fuel delivery configurations may work, but may be more difficult to start. Some engines must have a battery providing some electricity, since fuel injection systems must have power to operate.
Automatic or manual gearbox
A vehicle equipped with an automatic transmission (including semi-automatic, clutchless manual, transmissions) is difficult to push start, since selection of transmission gears is possible only when the internals of such a gearbox are rotating. However, automatics with both front and rear hydraulic pumps can be push-started with no problems. The last American automobile with this type of transmission was the 1969 Chevrolet Corvair with a Powerglide automatic. Push starting can also cause more damage to a hydrolocked engine than the starter motor would, since the starter motor's torque is limited.
Petrol or diesel engines
A diesel engine uses heat and high compression (compression ratios commonly 16-23:1 versus 8-12:1 for gasoline) to ignite the fuel. When normally starting a modern diesel engine, it typically uses glowplugs to preheat the cylinder(s). If a battery is completely discharged then it may not provide the necessary power to heat the glowplugs, making the push starting of a diesel vehicle with a depleted battery almost impossible.
Fuel delivery systems
Fuel injection is most common for modern gasoline and diesel engines. Fuel injection needs electrical power to open and close the fuel injectors. If a battery is of a sufficiently discharged state that it cannot provide the power to turn an automobile self starter then it may also not be possible to activate the injectors. The most common method to start such a vehicle engine is to jump start it.
A fuel pump is used for fuel injection; it can be mechanically or electrically driven. If it is electrically driven, the same problem may arise: a heavily discharged battery cannot power the pump.
A carburetor only needs suction from the internal combustion engine to work best when push starting. Once the engine is running, a fuel pump (mechanical or electrical) will continue to supply fuel to the carburetor.
In carbureted engines fitted with a catalytic converter, prolonged cranking without the fuel igniting, or faults in any electrical controls of the carburetor, may allow unburned fuel to reach and damage the converter.
Ignition systems
A modern gasoline engine contains an electronic ignition system which precisely times the electrical pulse to the spark plug. The advantage of such a device is that it can deliver a full-power electrical pulse to the spark plugs even when the alternator is turning very slowly (as when push starting a motor). The drawback of the older, mechanically timed ignition system is that it cannot deliver a full electrical pulse at very low engine revolutions per minute (RPM), which may affect the ease of push starting an engine.
History
In the early 20th century, many motorcycles could only be push started; the 1908 Scott was distinguished by introducing a kick starter feature. Excelsior Motor Company's Welbike, intended to be carried by paratroopers in World War II, was designed to be started only by push starting.
References
Motorcycle engines
Starting systems | Push start | [
"Technology"
] | 846 | [
"Motorcycle engines",
"Engines"
] |
13,531,757 | https://en.wikipedia.org/wiki/Elon%20Lages%20Lima | Elon Lages Lima (July 9, 1929 – May 7, 2017) was a Brazilian mathematician whose research concerned differential topology, algebraic topology, and differential geometry. Lima was an influential figure in the development of mathematics in Brazil.
Lima was professor emeritus at the Instituto Nacional de Matemática Pura e Aplicada, of which he was the director during three separate periods. Lima twice received the Prêmio Jabuti from the Câmara Brasileira do Livro, for his textbooks Espaços Métricos and Álgebra Linear, and also received the Anísio Teixeira Prize from the Ministry of Education and Sports. His mathematical style was heavily influenced by Bourbaki's.
Biography
He began his career as a high school teacher in Fortaleza, Ceará. Lima graduated with a bachelor's degree in mathematics from the Universidade do Brasil (today UFRJ) in 1953. He obtained his doctorate in 1958 from the University of Chicago with Edwin Henry Spanier as advisor. He was a Guggenheim Fellow, and held memberships in the Academia Brasileira de Ciências (Brazilian Academy of Sciences) and TWAS, the Academy of Sciences for the Developing World. He was a professor honoris causa of the Universidade Federal do Ceará and of the University of Brasília. He was a member of the Upper Board of FAPERJ from 1987 to 1991. He was also a member of the National Board of Education.
He wrote over thirty books in mathematics, some of which were intended for secondary school teachers. Between 1990 and 1995, he coordinated the IMPA-VITAE project, which held skills improvement courses for mathematics teachers in eleven cities from eight states throughout Brazil.
He received the grã-cruz ("Great Cross") of the Ordem Nacional do Mérito Científico ("National Order of Scientific Merit") of Brazil.
Selected publications
Lima, E. L. (1964). "Common singularities of commuting vector fields on 2-manifolds". Commentarii Mathematici Helvetici. vol. 39, pp. 97–110.
Lima, E. L. (1965). "Commuting vector fields on S3". Annals of Mathematics. vol. 81, pp. 70–88.
do Carmo, M. and Lima, E. L. (1969). "Isometric immersions with semi-definite second quadratic forms". Archiv der Mathematik vol. 20, pp. 173–175.
Lima, E. L. (1987). "Orientability of smooth hypersurfaces and the Jordan-Brouwer separation theorem". Expositiones Mathematicae. vol. 5, pp. 283–286.
See also
Spectrum (topology), a notion introduced by Lima.
Notes
Silva, C. P. "Sobre o início e consolidação da pesquisa matemática no Brasil, Parte 2". Revista Brasileira de História da Matemática (RBHM), Vol. 6, n. 12, pp. 165–196, 2006
1929 births
2017 deaths
People from Maceió
University of Chicago alumni
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
Topologists
Textbook writers
Instituto Nacional de Matemática Pura e Aplicada researchers
Members of the Brazilian Academy of Sciences
21st-century Brazilian mathematicians
20th-century Brazilian mathematicians
20th-century Brazilian writers
20th-century Brazilian educators
21st-century Brazilian educators
21st-century Brazilian writers
Brazilian non-fiction writers | Elon Lages Lima | [
"Mathematics"
] | 730 | [
"Topologists",
"Topology"
] |
13,532,062 | https://en.wikipedia.org/wiki/Carbarsone | Carbarsone is an organoarsenic compound used as an antiprotozoal drug for treatment of amebiasis and other infections. It was available for amebiasis in the United States as late as 1991. Thereafter, it remained available as a turkey feed additive for increasing weight gain and controlling histomoniasis (blackhead disease).
Carbarsone is one of four arsenical animal drugs approved by the U.S. Food and Drug Administration for use in poultry and/or swine, along with nitarsone, arsanilic acid, and roxarsone. In September 2013, the FDA announced that Zoetis and Fleming Laboratories would voluntarily withdraw current roxarsone, arsanilic acid, and carbarsone approvals, leaving only nitarsone approvals in place. In 2015 FDA withdrew the approval of using nitarsone in animal feeds. The ban came into effect at the end of 2015.
References
Antiprotozoal agents
Arsonic acids
Ureas
Anilines | Carbarsone | [
"Chemistry",
"Biology"
] | 216 | [
"Organic compounds",
"Antiprotozoal agents",
"Biocides",
"Ureas"
] |
13,532,091 | https://en.wikipedia.org/wiki/Phi%20Piscium | Phi Piscium, Latinized from φ Piscium, is a quadruple star system approximately 380 light years away in the constellation Pisces. It consists of Phi Piscium A, with a spectral type of K0III, and Phi Piscium B. Phi Piscium A possesses a surface temperature of 3,500 to 5,000 kelvins. Some suggest the only visible companion in the Phi Piscium B sub-system is a late F dwarf star, while others suggest it is a K0 star. The invisible component of the Phi Piscium B sub-system is proposed to have a spectral type of M2V. The star system has a period of about 20½ years and has a notably high eccentricity of 0.815.
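The quoted eccentricity by itself conveys how lopsided the orbit is, since the ratio of widest to closest separation depends only on e. A minimal worked example (the relations r_apo = a(1 + e) and r_peri = a(1 − e) are standard two-body results; nothing beyond the eccentricity given above is assumed):

```python
# Ratio of apastron to periastron separation for eccentricity e:
# r_apo / r_peri = (1 + e) / (1 - e), independent of the semi-major axis.
e = 0.815
ratio = (1 + e) / (1 - e)
print(f"apastron/periastron separation ratio ~ {ratio:.1f}")
# ~9.8: at their widest the two components are nearly ten times
# farther apart than at closest approach.
```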
Naming
In Chinese, (), meaning Legs (asterism), refers to an asterism consisting of φ Piscium, η Andromedae, 65 Piscium, ζ Andromedae, ε Andromedae, δ Andromedae, π Andromedae, ν Andromedae, μ Andromedae, β Andromedae, σ Piscium, τ Piscium, 91 Piscium, υ Piscium, χ Piscium and ψ¹ Piscium. Consequently, φ Piscium itself is known as (, .)
References
K-type giants
Spectroscopic binaries
4
Pisces (constellation)
Piscium, Phi
Piscium, 085
007318
005742
0360
Durchmusterung objects | Phi Piscium | [
"Astronomy"
] | 337 | [
"Pisces (constellation)",
"Constellations"
] |
13,532,475 | https://en.wikipedia.org/wiki/Tunggal%20panaluan | A tunggal panaluan is a magic staff used by shamans of the Batak people, who live in the highlands of North Sumatra, Indonesia. Traditionally the tunggal panaluan is made from wood of a specific tree and carved with human figures and embellished with horsehair and cooked human brain, both procured from sacrificial victims.
Shape
The tunggal panaluan is carved from the wood of Cassia javanica, the only tree from which it may be made. The tree occupies a central place in the ancestral myth of the Batak people, as do the figures depicted on the tunggal panaluan. The tunggal panaluan is carved with human and animal figures arranged one above another. The human figures refer to a Batak myth that tells of incestuous twins. Animals depicted on the tunggal panaluan include snakes, dragons, geckos and water buffaloes. Another type of magic staff, known as the tunggal malehat, depicts a human riding either a horse or a mythical beast.
Use
The tunggal panaluan was used in ceremonies to ward off disaster and illness, as well as to cause them. To imbue the staff with magic, the datu (shaman) first has to create a hole in the staff into which a magical substance known as the pupuk is inserted. The pupuk's creation involved the putrefied remains of the mutilated body of a murdered child.
Variation
The tungkot malehat is another variation of the tunggal panaluan. Most tungkot malehat are manufactured and used by the Karo rather than the Toba. They have a simpler design than the tunggal panaluan: the tunggal panaluan is elaborately carved all the way to the bottom of the staff, whereas the body of the tungkot malehat is left plain and uncarved. The only carved part of the tungkot malehat is its top, usually carved with a human figure riding a singa or a horse. It is generally accepted that the tungkot malehat is a more recent development of the tunggal panaluan.
See also
Naga morsarang
Porhalaan
Pupuk
Notes
References
Kuiper, F. B. J., Cosmogony and Conception: A Query, History of Religions, Vol. 10, No. 2 (Nov., 1970), 91–138.
Winkler, J., Die Toba-Batak auf Sumatra in gesunden und kranken Tagen, American Anthropologist, New Series, Vol. 32, No. 4 (Oct.-Dec., 1930), 682–687.
External links
Magical Ancient Keris Antiques
Culture of Sumatra
Batak
Magic (supernatural) | Tunggal panaluan | [
"Physics"
] | 566 | [
"Magic items",
"Physical objects",
"Matter"
] |
13,532,795 | https://en.wikipedia.org/wiki/Location-based%20advertising | Location-based advertising (LBA) is a form of advertising that integrates mobile advertising with location-based services. The technology is used to pinpoint consumers' locations and provide location-specific advertisements on their mobile devices.
According to Bruner and Kumar, "LBA refers to marketer-controlled information specially tailored for the place where users access an advertising medium".
Types
There are two types of location-based services in general: push and pull.
The push approach is the more versatile of the two and is itself divided into two types. An unrequested (opt-out) service is the more common approach, as it allows advertisers to target users until the users choose not to receive the ads. By contrast, with the opt-in approach, users determine what types of advertisements or promotional material they receive from advertisers. In either case, advertisers must abide by the legal regulations in place and respect users' choices.
In contrast, using the LBA pull approach, users can directly search for information by entering certain keywords. The users look for specific information and not the other way around. For example, a traveler visiting New York could use a local search application such as WHERE on her device to find the nearest local Chinese restaurant in Manhattan. After she selects one of the restaurants, a map is provided as well as an offer of a free appetizer good for the next hour.
Location-based advertising is closely related to mobile advertising, which is divided into four types:
Messaging
Display
Search
Product placement
Process
For push-based LBA, users must opt in to the company's LBA program; this would most likely be done via the seller's website or at the store. Users are then asked to provide personal information, such as their mobile phone number and first name. Once the data are submitted, the company sends a text message requesting that users confirm the LBA subscription. After these steps have been completed, the company can use location-based technology to provide its customers with geographically based offers and incentives.
For pull-based LBA, users interact with local, typically mobile, sites or applications, and are presented offers in a standard pull advertising model. Location-based advertising companies like go2 Media aggregate local listings from yellow page companies, local directories, group discount businesses and others. Users are presented these ads as display advertising integrated with publisher content or search advertising in response to user queries.
In addition to directly opting in, users may see location-based display ads served from a location-based ad aggregator/network such as NAVTEQ or AdLocal by Cirius Technologies.
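At its core, a push-based LBA system performs a geofence check: has the user opted in, and is the device within a given radius of the store? The sketch below illustrates the idea with the haversine great-circle distance; the names, coordinates, and radius are hypothetical, and a real deployment would add consent management, rate limiting, and privacy safeguards.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_push_offer(user, store_lat, store_lon, radius_km=0.5):
    """Send an offer only to opted-in users inside the geofence."""
    if not user.get("opted_in"):
        return False
    d = haversine_km(user["lat"], user["lon"], store_lat, store_lon)
    return d <= radius_km

# Hypothetical opted-in user a few blocks from a Manhattan store.
user = {"opted_in": True, "lat": 40.7590, "lon": -73.9800}
print(should_push_offer(user, 40.7614, -73.9776))  # True: within 0.5 km
```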
Potential benefits
LBA, as a form of direct marketing, allows marketers to reach specific target audiences. Bruner and Kumar state that LBA enhances the ability to reach people in a much more targeted manner than was possible in the past. For example, if a customer has purchased a Harry Potter movie from a DVD/CD rental store and subscribed to the store's LBA program, he can expect to receive a message on his mobile phone about the release date of the next Harry Potter movie, including a movie sample, while he is on the train going back home.
Since LBA can improve advertising relevance by giving the customer control over what, when, where, and how they receive ads, it provides them with more relevant information, personalized messages, and targeted offers. Vidaille (2007) stated, “With a targeted message, we’ve reached about 20 percent response rate. That’s incredibly good”. The internet can do similar things, such as sending new information about products, promotional coupons, or asking consumers' opinions, but few people respond to e-mail marketing because it is no longer personal. In contrast, LBA gives consumers relevant information rather than spam; therefore, it increases the chances of getting higher responses.
Finally, unlike other traditional media, LBA, in addition to being used as advertising, can also be used to research consumers which can be used to tailor future offers. “Consumers are constantly providing information on their behavior through mobile internet activity”. With location-based service, surveys can take place in the real world, in real time, rather than in halls, in a focus group facility, or on a PC. Mobile survey can be integrated with a marketing campaign; the results of customer satisfaction research can be used iteratively to guide the next campaign. For example, a restaurant that is experiencing increased competition can use the specific database – a collection of small mobile surveys of customers who had used coupons from the LBA in the geographic area – to determine their dining preferences, times, and occasions. Marketers can also use customers' past consumption patterns to forecast future patterns and send special dining offers to the target population at the right place and time, in order to build interest, response, and interaction to the restaurant.
Concerns
Privacy issues
The mobile phone is an incredibly personal tool. However, as Darling pointed out, “The fact that mobile device is so personal can be both a strength and a weakness”. On one hand, marketers can entertain, inform, build brand awareness, create loyalty, and drive purchase decision among their target consumers through LBA. On the other hand, consumer privacy is still a concern. Therefore, the establishment of a well thought-out consumer privacy and preference management policy is critical to the long-term success of LBA. Marketers should inform their consumers on how their information is to be stored, secured, and used or combined with other purposes of marketing. If LBA can assist people in their everyday life, they will be more than happy to reveal their location. To conclude, in order to ensure continue success and long-term longevity of LBA, consumer trust must be established and maintained. LBA needs to be permission-based and marketers must take great strides in protecting customers' privacy and respecting their preferences. In International Journal of Mobile Marketing, Banerjee and Dholakia found that the response to LBA depends not only on the type of location but also the kind of activity the individual is engaged in. They are more likely to prefer LBA in public places and during leisure time.
Perception of spam
Another major concern for LBA is spam; consumers can easily perceive LBA as spam if it is done inappropriately. According to Fuller, spam is defined as “any unsolicited marketing message sent via electronic mail or to a mobile phone”. In short, spam is an unwanted message that is delivered even though a user has not requested for it. Since the customer is in control and all activities are voluntary, customers' objective, goals, and emotions must be taken into account. A recent survey showed that users spend only 8 to 10 seconds on mobile advertisements. Therefore, the interaction must be straightforward and simple. Marketers must also develop relevant and engaging advertising content that mobile users want to access at the right place and time. More importantly, marketers must make sure that their offer contains real value for the customer, and must follow strict opt-in policies. The best way for marketers to distance from spam is to give consumers choice, control, and confidentiality while insuring that they only received relevant information.
Potential breaches of advertising standards
Misuse of LBA can result in claims for a product or service that the advertiser cannot substantiate. Advertisements and advertorials that incorporate the geographical location of the customer have the potential to breach advertising rules, standards and codes of conduct in many legislatures. For instance, the UK Advertising Standards Authority (ASA) requires all advertisements to be honest, truthful and not mislead. Since the promoter will not know the final wording of the advertisement in every case, it cannot undergo a proper compliance check. A claim such as "[location] woman loses 10 pounds with our new diet plan" is clearly false, since it cannot be substantiated for the majority of geographies where the advertisement might appear. Claims based on LBA have been ruled misleading by the ASA on a case-by-case basis.
See also
Geomarketing
References
Works cited
Banerjee, Syagnik & Dholakia, Ruby Roy,(2008)"Does Location Based Advertising Work?" International Journal of Mobile Marketing, Dec, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2135087
Darling, A. (2007, May 9). Mobile starts to pay its way. Marketing. Retrieved July 29, 2007, from ABI-Inform database.
Ferris, M. (2007, March). Insight on mobile advertising, promotion, and research. Journal of Advertising Research. Retrieved July 29, 2007, from Business Source Premier database.
Fierce Markets Inc. (2007, March 1). IDC says don’t underestimate full potential of mobile marketing. Retrieved August 10, 2007, from https://web.archive.org/web/20070827183751/http://www.fiercemobilecontent.com/node/2941
Fuller, P. (2005, September 7). Why spam doesn’t have to happen on mobile device. Retrieved August 9, 2007.
Girgenti, D. (2007, April). Mobile marketing. Media. Retrieved July 29, 2007, from ABI-Inform database.
Halper, P. (2007, March 5). Advertising goes mobile. Fortune. Retrieved August 4, 2007, from Business Source Premier database.
Steiniger, S., Neun, M., & Edwardes, A. (2006). Foundations of location based service. Retrieved August 5, 2007, from https://web.archive.org/web/20070926211436/http://www.geo.unizh.ch/publications/cartouche/lbs_lecturenotes_steinigeretal2006.pdf
Further reading
App Marketing Agency (2013, June 1). Location Based Ads – Best Examples. Retrieved August 13, 2013, from http://www.appmarketingagentur.de/mobile-advertising/location-based-ads-best-examples
Fitzgerald, R. (2006, September 14). Technology: Follow you, follow me. The Guardian. Retrieved July 30, 2007, from ABI-Inform database.
Mobile Marketing Association. (2007). Media advertising guidelines. Retrieved August 5, 2007, from http://www.mmaglobal.com/mobileadvertising.pdf
Mobile Marketing Association. (2007). Mobile marketing industry glossary. Retrieved August 5, 2007, from https://web.archive.org/web/20070706131118/http://www.mmaglobal.com/glossary.pdf
Sharma, A., Delaney, K., Bryan-Low, C., Spencer, J., & Ramstad, E. (2007, August 2). Google pushes tailored phones to win lucrative ad market. Wall Street Journal. Retrieved August 4, 2007, from Business Source Premier database.
Advertising techniques
Mobile telecommunications
Geomarketing | Location-based advertising | [
"Technology"
] | 2,312 | [
"Mobile telecommunications"
] |
13,532,851 | https://en.wikipedia.org/wiki/J.%20H.%20Wilkinson%20Prize%20for%20Numerical%20Software | The James H. Wilkinson Prize for Numerical Software is awarded every four years to honor outstanding contributions in the field
of numerical software. The award is named to commemorate the outstanding contributions of James H. Wilkinson in the same field.
The prize was established by Argonne National Laboratory (ANL), the National Physical Laboratory (NPL), and the Numerical Algorithms Group (NAG). They sponsored the award every four years at the International Congress on Industrial and Applied Mathematics (ICIAM) beginning with the 1991 award. By agreement in 2015 among ANL, NPL, NAG, and SIAM, the prize will be administered by the Society for Industrial and Applied Mathematics (SIAM) starting with the 2019 award.
Eligibility and selection criteria
Candidates must have worked in the field for at most 12 years after receiving their PhD as of January 1 of the award year. Breaks in continuity are allowed, and the prize committee may make exceptions. The award is given on the basis of:
Clarity of the software implementation and documentation.
Clarity of the paper accompanying the entry.
Portability, reliability, efficiency and usability of the software implementation.
Depth of analysis of the algorithm and the software.
Importance of application addressed by the software.
Quality of the test software
Winners
1991
The first prize in 1991 was awarded to Linda Petzold for DASSL, a differential algebraic equation solver. This code is available in the public domain.
1995
The 1995 prize was awarded to Chris Bischof and Alan Carle for ADIFOR 2.0, an automatic differentiation tool for Fortran 77 programs. The code is available for educational and non-profit research.
1999
The 1999 prize was awarded to Matteo Frigo and Steven G. Johnson for FFTW, a C library for computing the discrete Fourier transform.
2003
The 2003 prize was awarded to Jonathan Shewchuk for Triangle, a two-dimensional mesh generator and Delaunay Triangulator. It is freely available.
2007
The 2007 prize was awarded to Wolfgang Bangerth, Guido Kanschat, and Ralf Hartmann for deal.II, a software library for computational solution of partial differential equations using adaptive finite elements. It is freely available.
2011
Andreas Waechter (IBM T. J. Watson Research Center) and Carl Laird (Texas A&M University) were awarded the 2011 prize for IPOPT, an object-oriented library for solving large-scale continuous optimization problems. It is freely available.
2015
The 2015 prize was awarded to Patrick Farrell (University of Oxford), Simon Funke (Simula Research Laboratory), David Ham (Imperial College London), and Marie Rognes (Simula Research Laboratory) for the development of dolfin-adjoint, a package which automatically derives and solves adjoint and tangent linear equations from high-level mathematical specifications of finite element discretisations of partial differential equations.
2019
The 2019 prize was awarded to Jeff Bezanson, Stefan Karpinski, and Viral B. Shah for their development of the Julia programming language.
2023
The 2023 prize was awarded to Field Van Zee and Devin Matthews for the development of BLIS, a portable open-source software framework for instantiating high-performance BLAS-like dense linear algebra libraries on modern CPUs.
See also
List of computer science awards
List of mathematics awards
References
External links
Official Website
Computer science awards
Awards established in 1991
Awards of the Society for Industrial and Applied Mathematics | J. H. Wilkinson Prize for Numerical Software | [
"Technology"
] | 693 | [
"Science and technology awards",
"Computer science",
"Computer science awards"
] |
13,534,563 | https://en.wikipedia.org/wiki/Nematophagous%20fungus | Nematophagous fungi are carnivorous fungi specialized in trapping and digesting nematodes. More than 700 species are known. Species exist that live inside the nematodes from the beginning and others that catch them, mostly with glue traps or in rings, some of which constrict on contact. Some species possess both types of traps. Another technique is to stun the nematodes using toxins, a method employed by Coprinus comatus, Stropharia rugosoannulata, and the family Pleurotaceae. The habit of feeding on nematodes has arisen many times among fungi, as is demonstrated by the fact that nematophagous species are found in all major fungal groups. Nematophagous fungi can be useful in controlling those nematodes that eat crops. Purpureocillium, for example, can be used as a bio-nematicide.
Types
Fungi that feed on nematodes (as the most abundant and convenient prey species) mostly live in nitrogen-deficient habitats. These fungi can be divided into four main groups according to the methods they use to catch their prey. Some use a mechanical means, an adhesive or a mechanical hyphal trap. Some produce a toxin and use it to immobilise the nematode. Some are parasitic, using their spores to gain entry into their prey, and some are egg parasites, inserting their hyphal tips into the eggs or cysts, or into females before the eggs are deposited.
Diversity
Nematophagous fungi have been found throughout the world in a wide range of habitats and climates, but few from extreme environments. Most studied have been the species that attack the nematodes of interest to farmers, horticulturists and foresters, but there are large numbers of species as yet undescribed. The sexual stage of Orbilia occurs on rotting wood on land or in fresh water, while the asexual stage occurs in marine, fresh water and terrestrial habitats. Arthrobotrys dactyloides was the first species to be discovered in brackish water, and other species have been found on mangroves.
Ecology
Nematode-trapping fungi are mostly concentrated in the upper part of the soil, in pastures, leaf litter, mangroves and certain shallow aquatic habitats. They employ techniques such as adhesive hyphal strands, adhesive knobs, adhesive nets formed from hyphal threads, loops of hyphae which tighten round any ensnared nematodes and non-constricting loops. When the nematode has been restrained, the hyphae penetrate the cuticle and the internal tissues of the nematode are devoured.
Arthrobotrys oligospora, a net-building species of fungus, can detect the presence of nematodes nearby in the soil and only builds its snares when they are present. This is presumably because building the net is a highly energy-consuming process; the fungus is alerted to the presence of the nematode by detecting the pheromones, such as ascarosides, with which the worms communicate. The fungus takes active steps to attract its prey by producing olfactory cues that mimic those used by the worm to find food and attract mates. Arthrobotrys dactyloides is a species that employs a loop of hypha to catch nematodes; when one tries to pass through the ring, the loop constricts with great rapidity, trapping the prey.
Some nematophagous fungi produce toxic substances which immobilise nematodes. For example, the hypha of the shaggy ink cap (Coprinus comatus) attacks the free-living soil nematode Panagrellus redivivus with a structure known as a spiny ball; this is used to damage the nematode cuticle to enable immobilisation, after which the hypha pierces the skin and digests the contents.
Most endoparasitic fungi have spores that are attracted to soil nematodes and tend to congregate in the mouth region. Having penetrated the cuticle, the hyphae grow throughout the nematode, absorbing its tissues. Escape tubes emerge from these and grow through the cuticle, and in due course, further motile spores exit through these, ready to infect other nematodes. In other species of fungi, it is conidia rather than spores which are encountered by the nematode and infect it in a similar way. In the case of Harposporium anguillulae, the sickle-shaped conidia are ingested by the nematode and lodge in the oesophagus or gut from where they invade the tissues.
In ovoparasitic species, the hypha flattens itself against an egg, with appressoria indicating that infection is imminent or underway. It then pierces the eggshell and devours the embryonic nematode before producing conidiophores and moving on to nearby eggs.
Biological control
Some species of nematophagous fungi are being investigated for use in biological pest control. Purpureocillium lilacinum, for example, infests the plant-parasitic Meloidogyne incognita, which attacks the roots of many cultivated plants. Trials have provided varying results, with some strains being aggressive and others less pathogenic, and some strains that appeared promising in the lab proved ineffective in the field. Arthrobotrys dactyloides shows promise at controlling the cosmopolitan plant-parasitic root-knot nematode Meloidogyne javanica.
See also
Fungus
Entomopathogenic fungus
Biological pest control
References
Bibliography
Carnivorous fungi
Fungal pest control agents | Nematophagous fungus | [
"Biology"
] | 1,198 | [
"Fungi",
"Fungal pest control agents"
] |
13,534,706 | https://en.wikipedia.org/wiki/Clean%20Energy%20Trends | Clean Energy Trends is a series of reports by Clean Edge which examine markets for solar, wind, geothermal, fuel cells, biofuels, and other clean energy technologies. Since the publication of the first Clean Energy Trends report in 2002, Clean Edge has provided an
annual snapshot of both the global and U.S. clean energy sector markets.
2006 trends
In 2006 most climate change deniers began to change their views. Scientists, investors, business leaders, and politicians moved the agenda from whether climate change was occurring to what should be done about it. The acceptance of climate change as “real” helped to unlock latent interest in clean energy technologies on the part of corporate and political leaders. In Washington and other capitals, clean energy became a bipartisan issue. In corporate boardrooms, it is said to be fast becoming an imperative. And clean energy markets are growing:
"We have reached the point where the steady and rapid growth of clean energy has become an old story. Each year, it seems, brings an ever-higher plateau of success. This appears to be the future of clean energy: a rolling series of technology breakthroughs, landmark corporate investments, industry consolidation, and the not-infrequent emergence of new and sometimes surprising players entering the field."
2007 trends
Clean Energy Trends 2007 shows markets for four benchmark technologies — solar photovoltaics, wind power, biofuels, and fuel cells — continuing their steady climb. Annual revenue for these four technologies increased nearly 39% in one year, to $55 billion in 2006 from $40 billion in 2005. Clean Edge forecasts that this trajectory will continue, producing a $226 billion market by 2016.
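The 2016 forecast implies a particular compound annual growth rate; the arithmetic, using only the figures quoted above, is a one-liner:

```python
# Compound annual growth rate implied by growing from $55 billion
# (2006) to a projected $226 billion (2016), per the figures above.
start, end, years = 55.0, 226.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth rate ~ {cagr:.1%} per year")  # ~15.2%
```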
Several developments have helped to strengthen clean energy markets in 2007:
"a near tripling in venture investments in energy technologies in the U.S. to more than $2.4 billion"
"a new level of commitment by U.S. politicians at the regional, state, and federal levels"
"significant corporate investments in clean energy acquisitions and expansion initiatives"
2008 Trends
2009 Trends
See also
List of energy storage projects
Renewable energy industry
References
External links
Climate for clean energy
Clean Energy Development Drives Job Creation
The Future Ain't What Is Used to Be
Energy development
Energy economics
Environmental reports
Technology forecasting
Renewable energy commercialization | Clean Energy Trends | [
"Environmental_science"
] | 463 | [
"Energy economics",
"Environmental social science"
] |
13,534,720 | https://en.wikipedia.org/wiki/Profisafe | Profisafe (usually styled as PROFIsafe, as a portmanteau for Profinet or Profibus safety)
is a standard for a communication protocol for the transmission of safety-relevant data in automation applications with functional safety. The standard was developed jointly by several automation device manufacturers in order to meet the requirements of legislators and the IFA for safe systems. The required safety function of the protocol has been tested and confirmed by TÜV Süd. The PROFIBUS Nutzerorganisation e.V. in Karlsruhe supervises standardization for the partner companies and organizes the promotion of this common interface.
System structure
Profisafe defines how safety-related devices (emergency stop buttons, light curtains, overfill prevention devices, ...) communicate safely with safety controllers via Profinet, Profibus or a backplane in such a way that they can be used in safety-related automation tasks up to SIL3 (Safety Integrity Level). Because Profisafe is a common specification, products of different manufacturers can be combined into a safe system.
Market relevance
The first version of Profisafe was released as early as 1998. A second version in 2005 also enabled use via the Ethernet-based Profinet. According to the PROFIBUS Nutzerorganisation e.V., by 2023 a total of almost 21.7 million devices with Profisafe had been placed on the market by the various manufacturers, with a further 2.8 million devices added each year. As of October 2022, 106 different products from 31 different manufacturers were listed in the database of the PROFIBUS Nutzerorganisation e.V.
Operating principle
With Profisafe, secure communication is implemented via a profile, i.e., via a special format of the user data and an additional protocol.
Safety-relevant data are transported with Profisafe as F-messages between an F-Host (safety controller) and its F-Device (safety device), as payload in a telegram of an industrial network. In the case of a modular F-Device with several F-modules, the payload consists of several F-messages. Profisafe places no further requirements on the transmission channel, which is treated as a "black channel". Therefore, different transport protocols such as Profibus or Profinet can be used. Different transmission channels such as copper cable, fiber-optic cable (FOC), a backplane bus, or wireless systems such as WLAN can also be used. Neither the transmission rates nor the respective error detection of the transport protocol play a role for safety.
The payload is formatted as a "Safety Protocol Data Unit" (SPDU).
The cyclic redundancy check (CRC signature) is calculated over all local security parameters, the transmitted data and the locally stored monitoring number of the SPDU. This ensures that all information from the sender and the receiver is consistent without having to always transmit all parameters.
The monitoring number enables the recipient to check whether it has received all messages in the correct sequence. With the acknowledgement, the monitoring number is returned to the sender for checking within a defined maximum delay time (timeout). Since some bus components, such as switches, have buffer memory, a 32-bit monitoring number was selected for Profisafe.
The 1:1 communication relationship between F-Host and F-Device simplifies the detection of misdirected F-messages. For this purpose, the sender and receiver require a unique identifier (code name) throughout the network, which is used to verify the authenticity of F-messages. In Profisafe, the code name is also called "F-Address".
Together, these measures cover the main classes of transmission errors: the CRC signature detects corrupted data and inconsistent safety parameters, the monitoring number detects lost, repeated, or out-of-sequence messages, the timeout detects unacceptable delays, and the F-Address detects misdirected messages.
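A minimal receiver-side sketch in Python may make the interplay of these elements concrete. It is illustrative only: the byte layout, the use of zlib's generic CRC-32, and the function and parameter names are assumptions, not the normative Profisafe algorithm, which prescribes its own CRC polynomials, telegram formats, and state machines.

```python
import zlib

def check_f_message(spdu: bytes, f_address: int, local_counter: int,
                    f_parameters: bytes) -> bytes:
    """Illustrative consistency check for an incoming F-message.

    Assumed layout (not the normative format):
    user data | 1 status byte | 4 CRC bytes.
    The 32-bit monitoring number is kept locally by both ends and
    folded into the CRC rather than transmitted in full.
    """
    payload, status = spdu[:-5], spdu[-5]
    crc_received = int.from_bytes(spdu[-4:], "big")

    # CRC over the local safety parameters (including the F-Address),
    # the transmitted data, and the locally stored monitoring number,
    # as described above.
    crc_input = (f_parameters
                 + f_address.to_bytes(2, "big")
                 + payload
                 + bytes([status])
                 + local_counter.to_bytes(4, "big"))
    if zlib.crc32(crc_input) != crc_received:
        # A mismatch may mean corrupted data, a misdirected message
        # (wrong F-Address), or a lost/repeated/out-of-sequence message
        # (monitoring numbers out of step).
        raise ValueError("F-message failed the safety consistency check")
    return payload
```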
Specification
The international standards IEC 61508 (Functional safety of electrical/electronic/programmable electronic safety-related systems), IEC 62061 (Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems), and ISO 13849 (Safety of machinery – Safety-related parts of control systems) also form the basis for Profisafe.
The international standard IEC 61784-3 defines different protocols for safe systems with comparable properties. Profisafe is part 3 of this collection of standards and is thus defined as IEC 61784-3-3:2021 CPF 3.
See also
Functional safety
IEC 61784-3 Industrial communication networks – Profiles – Functional safety fieldbuses
IEC 61508 Functional safety of electrical/electronic/programmable electronic safety-related systems
IEC 62061 Safety of machinery - Functional safety of safety-related electrical, electronic and programmable electronic control systems
ISO 13849 Safety of machinery - Safety-related parts of control systems
References
Industrial automation | Profisafe | [
"Engineering"
] | 979 | [
"Industrial automation",
"Automation",
"Industrial engineering"
] |
13,535,173 | https://en.wikipedia.org/wiki/Ammonium%20bisulfate | Ammonium bisulfate, also known as ammonium hydrogen sulfate, is a white, crystalline solid with the formula (NH4)HSO4. This salt is the product of the half-neutralization of sulfuric acid by ammonia.
Production
It is commonly collected as a byproduct of the "acetone cyanohydrin route" to the commodity chemical methyl methacrylate.
It can also be obtained by hydrolysis of sulfamic acid in aqueous solution, which produces the salt in high purity:

H2NSO3H + H2O → (NH4)HSO4
It also arises by the thermal decomposition of ammonium sulfate:

(NH4)2SO4 → NH3 + (NH4)HSO4
Applications
It can be further neutralized with ammonia to form ammonium sulfate, a valuable fertilizer. It can be used as a weaker alternative to sulfuric acid, although sodium bisulfate is much more common.
Natural occurrence
A related compound with the formula (NH4)3H(SO4)2 occurs as the rare mineral letovicite, known from coal fire environments.
References
Ammonium compounds
Sulfates | Ammonium bisulfate | [
"Chemistry"
] | 209 | [
"Sulfates",
"Ammonium compounds",
"Salts"
] |
13,535,375 | https://en.wikipedia.org/wiki/Mass%20spectrometry%20imaging | Mass spectrometry imaging (MSI) is a technique used in mass spectrometry to visualize the spatial distribution of molecules, such as biomarkers, metabolites, peptides or proteins, by their molecular masses. After collecting a mass spectrum at one spot, the sample is moved to reach another region, and so on, until the entire sample is scanned. By choosing a peak in the resulting spectra that corresponds to the compound of interest, the MS data is used to map its distribution across the sample. This results in pictures of the spatially resolved distribution of a compound pixel by pixel. Each data set contains a veritable gallery of pictures because any peak in each spectrum can be spatially mapped. Although MSI has generally been considered a qualitative method, the signal generated by this technique is proportional to the relative abundance of the analyte, so quantification is possible once its challenges are overcome. Although widely used traditional methodologies like radiochemistry and immunohistochemistry achieve the same goal as MSI, they are limited in their abilities to analyze multiple samples at once, and can prove to be lacking if researchers do not have prior knowledge of the samples being studied. The most common ionization technologies in the field of MSI are DESI imaging, MALDI imaging, secondary ion mass spectrometry imaging (SIMS imaging) and nanoscale SIMS (NanoSIMS).
History
MSI was introduced more than 50 years ago, when Castaing and Slodzian used secondary ion mass spectrometry (SIMS) to study semiconductor surfaces. However, it was the pioneering work of Richard Caprioli and colleagues in the late 1990s, demonstrating how matrix-assisted laser desorption/ionization (MALDI) could be applied to visualize large biomolecules (such as proteins and lipids) in cells and tissue, revealing the function of these molecules and how that function is changed by diseases like cancer, that led to the widespread use of MSI. Nowadays, different ionization techniques are in use, including SIMS, MALDI and desorption electrospray ionization (DESI), as well as other technologies. Still, MALDI is the current dominant technology with regard to clinical and biological applications of MSI.
Operation principle
MSI is based on resolving the spatial distribution of molecules across the sample, so the operating principle depends on the technique used to obtain the spatial information. The two techniques used in MSI are the microprobe and the microscope.
Microprobe
This technique uses a focused ionization beam to analyze a specific region of the sample by generating a mass spectrum. The mass spectrum is stored along with the spatial coordinates at which the measurement took place. A new region is then selected and analyzed by moving the sample or the ionization beam. These steps are repeated until the entire sample has been scanned. By combining all individual mass spectra, a map of intensities as a function of x and y position can be plotted. As a result, reconstructed molecular images of the sample are obtained.
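The reconstruction step can be sketched in a few lines of Python with NumPy. This is a schematic illustration, not software from any MSI vendor; the data layout (one peak-picked spectrum per pixel coordinate) and all names are assumptions:

```python
import numpy as np

def ion_image(spectra, width, height, target_mz, tol=0.1):
    """Map the intensity of one chosen m/z peak onto an (x, y) grid.

    spectra: dict mapping (x, y) pixel coordinates to a pair of arrays
             (m/z values, intensities) for the spectrum at that spot.
    target_mz: m/z of the compound of interest.
    tol: half-width of the m/z window used to integrate the peak.
    """
    image = np.zeros((height, width))
    for (x, y), (mzs, intensities) in spectra.items():
        window = np.abs(mzs - target_mz) <= tol  # pick the chosen peak
        image[y, x] = intensities[window].sum()
    return image  # one such image can be built for any peak in the spectra
```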
Microscope
In this technique, a 2D position-sensitive detector measures the spatial origin of the ions generated at the sample surface, as projected by the ion optics of the instrument. The resolution of the spatial information depends on the magnification of the microscope, the quality of the ion optics and the sensitivity of the detector. A new region still needs to be scanned, but the number of positions is drastically reduced. The limitation of this mode is the finite depth of field present in all microscopes.
Ion source dependence
The ionization techniques available for MSI are suited to different applications. Criteria for choosing the ionization method include the sample preparation requirements and the parameters of the measurement, such as resolution, mass range and sensitivity. On this basis, the most commonly used ionization methods are MALDI, SIMS and DESI, which are described below. Other, less common techniques are laser ablation electrospray ionization (LAESI), laser ablation inductively coupled plasma (LA-ICP) and nanospray desorption electrospray ionization (nano-DESI).
SIMS and NanoSIMS imaging
Secondary ion mass spectrometry (SIMS) is used to analyze solid surfaces and thin films by sputtering the surface with a focused primary ion beam and collecting and analyzing the ejected secondary ions. There are many different sources for a primary ion beam; however, the primary ion beam must contain ions at the higher end of the energy scale. Some common sources are Cs+, O2+, O−, Ar+ and Ga+. SIMS imaging is performed in a manner similar to electron microscopy: the primary ion beam is scanned across the sample while secondary-ion mass spectra are recorded. SIMS provides the highest image resolution, but only over small sample areas. Moreover, this technique is widely regarded as one of the most sensitive forms of mass spectrometry, as it can detect elements in concentrations as low as 10^12–10^16 atoms per cubic centimeter.
Multiplexed ion beam imaging (MIBI) is a SIMS method that uses metal isotope labeled antibodies to label compounds in biological samples.
Developments within SIMS: Some chemical modifications have been made within SIMS to increase the efficiency of the process. Two techniques are currently used to increase the sensitivity, and thus the overall efficiency, of SIMS measurements:
Matrix-enhanced SIMS (ME-SIMS) uses the same sample preparation as MALDI, as it simulates the chemical ionization properties of MALDI. ME-SIMS does not sample nearly as much material; however, if the analyte being tested has a low mass value, it can produce spectra similar in appearance to MALDI spectra. ME-SIMS has been so effective that it can detect low-mass chemicals at subcellular levels, which was not possible prior to the development of the technique.
Sample metallization (Meta-SIMS) is the addition of gold or silver to the sample. This forms a layer of gold or silver around the sample, normally no more than 1-3 nm thick. Using this technique increases the sensitivity for larger-mass samples. The metallic layer also converts insulating samples into conducting ones, so charge compensation in SIMS experiments is no longer required.
NanoSIMS enables subcellular (50 nm) resolution, allowing absolute quantitative analysis at the organelle level.
MALDI imaging
Matrix-assisted laser desorption ionization can be used as a mass spectrometry imaging technique for relatively large molecules. It has recently been shown that the most effective type of matrix to use for MALDI imaging of tissue is an ionic matrix. In this version of the technique the sample, typically a thin tissue section, is moved in two dimensions while the mass spectrum is recorded. Although MALDI has the benefit of being able to record the spatial distribution of larger molecules, it comes at the cost of lower resolution than the SIMS technique. The lateral resolution limit for most modern instruments using MALDI is 20 μm. MALDI experiments commonly use either an Nd:YAG (355 nm) or N2 (337 nm) laser for ionization.
Pharmacodynamics and toxicodynamics in tissue have been studied by MALDI imaging.
DESI imaging
Desorption electrospray ionization is a less destructive technique, which couples simplicity with rapid analysis of the sample. The sample is sprayed with an electrically charged solvent mist at an angle that causes the ionization and desorption of various molecular species. Two-dimensional maps of the abundance of the selected ions at the surface of the sample are then generated in relation to their spatial distribution. This technique is applicable to solid, liquid, frozen and gaseous samples. Moreover, DESI allows a wide range of organic and biological compounds to be analyzed, such as animal and plant tissues and cell culture samples, without complex sample preparation. Although this technique has the poorest resolution of the three, it can create high-quality images from large-area scans, such as a whole-body section scan.
Comparison of the ionization techniques
Combination of various MSI techniques and other imaging techniques
Combining various MSI techniques can be beneficial, since each technique has its own advantages. For example, when information about both proteins and lipids is needed from the same tissue section, DESI can be performed to analyze the lipids, followed by MALDI to obtain information about the peptides, finishing with a stain (haematoxylin and eosin) for medical diagnosis of the structural characteristics of the tissue. Among combinations of MSI with other imaging techniques, fluorescence staining with MSI and magnetic resonance imaging (MRI) with MSI can be highlighted. Fluorescence staining can give information about the presence of certain proteins in a process inside a tissue, while MSI may give information about the molecular changes in that process. Combining both techniques, multimodal pictures or even 3D images of the distribution of different molecules can be generated. In contrast, MRI with MSI combines the continuous 3D representation of the MRI image with the detailed structural representation given by the molecular information from MSI. Even though MSI itself can generate 3D images, the picture is only part of the reality due to the depth limitation of the analysis, while MRI provides, for example, detailed organ shape with additional anatomical information. This coupled technique can be beneficial for precise cancer diagnosis and neurosurgery.
Data processing
Standard data format for mass spectrometry imaging datasets
The imzML format was proposed for exchanging data in a standardized XML-based file derived from the mzML format. Several imaging MS software tools support it. The advantage of this format is the flexibility to exchange data between different instruments and data analysis software.
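As an example of this flexibility, the open-source pyimzML parser for Python iterates over an imzML file spectrum by spectrum; combined with the grid-filling sketch above, this yields ion images from any compliant file. This is a minimal sketch and the file name is a placeholder:

```python
from pyimzml.ImzMLParser import ImzMLParser  # pip install pyimzml

parser = ImzMLParser("tissue_section.imzML")  # placeholder file name
spectra = {}
for idx, (x, y, z) in enumerate(parser.coordinates):
    mzs, intensities = parser.getspectrum(idx)  # arrays for one pixel
    spectra[(x, y)] = (mzs, intensities)
# spectra can now be fed to an image-building routine such as ion_image()
```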
Software
There are many free software packages available for visualization and mining of imaging mass spectrometry data. Converters from Thermo Fisher format, Analyze format, GRD format and Bruker format to imzML format were developed by the Computis project. Some software modules are also available for viewing mass spectrometry images in imzML format: Biomap (Novartis, free), Datacube Explorer (AMOLF, free), EasyMSI (CEA), Mirion (JLU), MSiReader (NCSU, free) and SpectralAnalysis.
For processing .imzML files with the free statistical and graphics language R, a collection of R scripts is available, which permits parallel-processing of large files on a local computer, a remote cluster or on the Amazon cloud.
Another free statistical package for processing imzML and Analyze 7.5 data in R exists, Cardinal.
SPUTNIK is an R package containing various filters to remove peaks whose spatial distribution is uncorrelated with the sample location or is spatially random.
Applications
A remarkable ability of MSI is to determine the localization of biomolecules in tissues even when there is no prior information about them. This feature has made MSI a unique tool for clinical and pharmacological research. It provides information about biomolecular changes related to diseases by tracking proteins, lipids, and cell metabolism. For example, identifying biomarkers by MSI can support detailed cancer diagnosis. In addition, low-cost imaging for pharmaceutical studies can be acquired, such as images of molecular signatures that are indicative of treatment response for a specific drug or of the effectiveness of a particular drug delivery method.
Ion colocalization has been studied as a way to infer local interactions between biomolecules. Similarly to colocalization in microscopy imaging, correlation has been used to quantify the similarity between ion images and generate network models.
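A minimal sketch of such a correlation score, assuming two ion images of identical shape (e.g., produced as in the earlier example); the Pearson coefficient shown here is one common choice, not the only measure used in the literature:

```python
import numpy as np

def colocalization_score(image_a: np.ndarray, image_b: np.ndarray) -> float:
    """Pearson correlation between two ion images of the same shape."""
    return float(np.corrcoef(image_a.ravel(), image_b.ravel())[0, 1])
```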
Advantages, challenges and limitations
The main advantage of MSI for studying the location and distribution of molecules within tissue is that this analysis can provide greater selectivity, more information, or more accuracy than other methods. Moreover, this tool requires less investment of time and resources for similar results. The table below compares the advantages and disadvantages of some available techniques, including MSI, in relation to drug distribution analysis.
Notes
Further reading
"Imaging Trace Metals in Biological Systems" pp 81–134 in "Metals, Microbes and Minerals: The Biogeochemical Side of Life" (2021) pp xiv + 341. Authors Yu, Jyao; Harankhedkar, Shefali; Nabatilan, Arielle; Fahrni, Christopher; Walter de Gruyter, Berlin.
Editors Kroneck, Peter M.H. and Sosa Torres, Martha.
DOI 10.1515/9783110589771-004
References
Mass spectrometry | Mass spectrometry imaging | [
"Physics",
"Chemistry"
] | 2,625 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
13,535,429 | https://en.wikipedia.org/wiki/Vinylogy | In organic chemistry, vinylogy is the transmission of electronic effects through a conjugated organic bonding system. The concept was introduced in 1926 by Ludwig Claisen to explain the acidic properties of formylacetone and related ketoaldehydes. Formylacetone, technically CH3C(O)CH2CHO, exists only in its enolized forms, CH3C(O)CH=CHOH or CH3C(OH)=CHCHO. Its adjectival form, vinylogous, is used to describe functional groups in which the standard moieties of the group are separated by a carbon–carbon double bond.
For example, a carboxylic acid is defined as a carbonyl group (C=O) directly attached to a hydroxyl group (–OH): O=C–OH. A vinylogous carboxylic acid has a vinyl unit (–CH=CH–, vinylene) between the two groups that define the acid: O=C–C=C–OH. The usual resonance of a carboxylate can propagate through the alkene of a vinylogous carboxylate. Likewise, 3-dimethylaminoacrolein is the vinylogous-amide analog of dimethylformamide.
Due to the transmission of electronic information through conjugation, vinylogous functional groups often possess "analogous" reactivity or chemical properties compared to the parent functional group. Hence, vinylogy is a useful heuristic for the prediction of the behavior of systems that are structurally similar but contain intervening C=C bonds that are conjugated to the attached functional groups. For example, a key property of carboxylic acids is their Brønsted acidity. The simplest carboxylic acid, formic acid (), is a moderately strong organic acid with a pKa of 3.7. We would expect vinylogous carboxylic acids to have similar acidity. Indeed, the vinylog of formic acid, 2-formyl-1-ethen-1-ol, has a substantial Brønsted acidity, with an estimated pKa ~ 5–6. In particular, vinylogous carboxylic acids are substantially stronger acids than typical enols (pKa ~ 12). Vitamin C (ascorbic acid, see below) is a biologically important example of a vinylogous carboxylic acid.
The insertion of an o- or p-phenylene (i.e., a benzene ring in the 1,2- or 1,4-orientation) also results in some similarities in reactivity (called "phenylogy"), although the effect is generally weaker, as conjugation through the aryl ring requires consideration of resonance forms or intermediates in which aromaticity is disrupted.
Vinylogous reactions are believed to occur when orbitals of the double bonds of the vinyl group and of an attached electron-withdrawing group (EWG; the π orbitals) are aligned and so can overlap and mix (i.e., are conjugated). Electron delocalization enables the EWG to receive electron density through participation of the conjugated system.
Vinylogous reactivity
A classic example of vinylogy is the relatively high acidity of the γ-hydrogens of α,β-unsaturated carbonyl compounds: the acidity of the terminal (γ) methyl group in such a compound is similar to that of the α-methyl group of the corresponding methyl ketone.
Vinylogous reactions also include conjugate additions, where a nucleophile reacts at the vinyl terminus, akin to the addition of the nucleophile to the carbonyl of the methyl ketone. In a vinylogous variation of the aldol reaction, an electrophile is attacked by a nucleophilic vinylogous enolate (see first and following image). The vinylogous enolate reacts at the terminal position of the double bond system (the γ-carbon), rather than the α-carbon immediately adjacent to the carbonyl, as would a simple enolate. Allylic electrophiles often react by vinylogous attack of a nucleophile rather than direct addition.
A further example of vinylogous reactivity: ascorbic acid (Vitamin C) behaves as a vinylogous carboxylic acid by involvement of its carbonyl moiety, a vinyl group within the ring, and the lone pair on the hydroxyl group acting as the conjugated system. The acidity of the hydroxyl proton at the terminus of the vinyl group in ascorbic acid is more comparable to that of a typical carboxylic acid than to that of an alcohol, because two major resonance structures stabilize the negative charge on the conjugate base of ascorbic acid (center and right structures in the last image), analogous to the two resonance structures that stabilize the negative charge on the anion that results from removal of a proton from a simple carboxylic acid (cf. first image). Analogously, sorbic acid derivatives, extended by another "vinyl" moiety, show vinylogous behaviour as well.
Further reading
References
Physical organic chemistry | Vinylogy | [
"Chemistry"
] | 1,022 | [
"Physical organic chemistry"
] |
13,535,882 | https://en.wikipedia.org/wiki/Outlying%20territory | An outlying territory or separate area is a state territory that is geographically separated from its parent territory and lies beyond the Exclusive Economic Zone of the parent territory.
The tables below are lists of outlying territories which are marked by distinct, non-contiguous maritime boundaries or land boundaries:
Outlying geographical regions
Outlying territories outside the continent
Outlying uninhabited dependent territories
Outlying dependent territories and areas of special sovereignty
Notes
1. Enclaves are not included.
2. Disputed outlying territories in the Spratly Islands are not included.
See also
List of sovereign states
List of dependent territories
External links
Maritime boundaries
Countries’ EEZ
Wiktionary: outlying
A European outlying territory
Map of Spratly Islands
Borders
Dependent territories | Outlying territory | [
"Physics"
] | 133 | [
"Spacetime",
"Borders",
"Space"
] |
13,535,930 | https://en.wikipedia.org/wiki/Iteron | Iterons are directly repeated DNA sequences which play an important role in regulation of plasmid copy number in bacterial cells. They are one of three negative regulatory elements found in plasmids that control copy number; the others are antisense RNAs and ctRNAs. Iterons complex with cognate replication (Rep) initiator proteins to achieve the required regulatory effect.
Regulation of replication
Iterons have an important role in plasmid replication. An iteron-containing plasmid origin of replication typically contains about five iterons, each about 20 base pairs in length. These iterons provide a saturation site for Rep initiator proteins and promote replication, thus increasing plasmid copy number in a given cell.
Limiting factors of initiation
There are four main limiting factors that prevent initiation of replication at iterons:
Transcriptional autorepression
Initiator dimerization
Initiator titration
Handcuffing
Transcriptional auto-repression is thought to reduce initiator synthesis by repressing the formation of the Rep proteins. Since these proteins promote binding of the replication machinery, replication can be halted in this way. Another factor used to stop replication is dimerization, in which Rep proteins dimerize so that their monomers are no longer at a high enough concentration to initiate replication. Another limiting factor, titration, occurs after replication and works to prevent saturation by distributing monomers to daughter origins so that none are fully saturated. Finally, handcuffing refers to the pairing of origins, leading to inactivation; this is mediated by monomers, and the inactivation is due to steric hindrance between the origins.
Another less prevalent limitation thought to be present in these iterons is the presence of extra repeats. If a plasmid contains an extra supply of iterons outside of the saturation site, this can decrease plasmid copy number. In contrast, removing these extra iterons will increase copy number.
Replicon structure
Plasmids are known to have very similar structure when under the control of iterons. This structure consists of an origin of replication upstream of a gene that codes for a replication initiator protein. The iterons themselves are known to cover about half of the origin of replication. Usually, iterons on the same plasmid are highly conserved, whereas iterons on different plasmids still exhibit homology yet are not as highly conserved. This suggests that iterons could be evolutionarily related.
Replication initiator proteins
The replication initiator protein (Rep) plays a key role in initiation of replication in plasmids. In its monomer form, Rep binds an iteron and promotes replication. The protein itself is known to contain two independent N-terminal and C-terminal globular domains that subsequently bind to two domains of the iteron. The dimer version of the protein is generally inactive in iteron binding; however, it is known to bind to the repE operator. This operator contains half of the iteron sequence, making it able to bind the dimer and promote gene expression.
Plasmids containing iterons are all organized very similarly in structure. The gene for Rep proteins is usually found directly downstream of the origin of replication. This means that the iterons themselves are known to regulate the synthesis of the rep proteins.
References
Genetics techniques
Molecular biology | Iteron | [
"Chemistry",
"Engineering",
"Biology"
] | 720 | [
"Genetics techniques",
"Biochemistry",
"Genetic engineering",
"Molecular biology"
] |
3,083,229 | https://en.wikipedia.org/wiki/Amazon%20Mechanical%20Turk | Amazon Mechanical Turk (MTurk) is a crowdsourcing website with which businesses can hire remotely located "crowdworkers" to perform discrete on-demand tasks that computers are currently unable to do as economically. It is operated under Amazon Web Services, and is owned by Amazon. Employers, known as requesters, post jobs known as Human Intelligence Tasks (HITs), such as identifying specific content in an image or video, writing product descriptions, or answering survey questions. Workers, colloquially known as Turkers or crowdworkers, browse among existing jobs and complete them in exchange for a fee set by the requester. To place jobs, requesters use an open application programming interface (API), or the more limited MTurk Requester site. Requesters could register from 49 approved countries.
History
The service was conceived by Venky Harinarayan in a U.S. patent disclosure in 2001. Amazon coined the term artificial artificial intelligence for processes that outsource some parts of a computer program to humans, for those tasks carried out much faster by humans than computers. It is claimed that Jeff Bezos was responsible for proposing the development of Amazon's Mechanical Turk to realize this process.
The name Mechanical Turk was inspired by "The Turk", an 18th-century chess-playing automaton made by Wolfgang von Kempelen that toured Europe, and beat both Napoleon Bonaparte and Benjamin Franklin. It was later revealed that this "machine" was not an automaton, but a human chess master hidden in the cabinet beneath the board and controlling the movements of a humanoid dummy. Analogously, the Mechanical Turk online service uses remote human labor hidden behind a computer interface to help employers perform tasks that are not possible using a true machine.
MTurk launched publicly on November 2, 2005. Its user base grew quickly. In early- to mid-November 2005, there were tens of thousands of jobs, all uploaded to the system by Amazon itself for some of its internal tasks that required human intelligence. HIT types expanded to include transcribing, rating, image tagging, surveys, and writing.
In March 2007, there were reportedly more than 100,000 workers in over 100 countries. This increased to over 500,000 registered workers from over 190 countries in January 2011. That year, Techlist published an interactive map pinpointing the locations of 50,000 of their MTurk workers around the world. By 2018, research demonstrated that while over 100,000 workers were available on the platform at any time, only around 2,000 were actively working.
Overview
A user of Mechanical Turk can be either a "Worker" (contractor) or a "Requester" (employer). Workers have access to a dashboard that displays three sections: total earnings, HIT status, and HIT totals. Workers set their own hours and are not under any obligation to accept any particular task.
Amazon classifies Workers as contractors rather than employees and does not pay payroll taxes. Classifying Workers as contractors allows Amazon to avoid things like minimum wage, overtime, and workers compensation—this is a common practice among "gig economy" platforms. Workers are legally required to report their income as self-employment income.
In 2013, the average wage for the multiple microtasks assigned, if performed quickly, was about one dollar an hour, with each task averaging a few cents. However, calculating people's average hourly earnings on a microtask site is extremely difficult and several sources of data show average hourly earnings in the $5–$9 per hour range among a substantial number of Workers, while the most experienced, active, and proficient workers may earn over $20 per hour.
Workers can have a postal address anywhere in the world. Payment for completing tasks can be redeemed on Amazon.com via gift certificate (gift certificates are the only payment option available to international workers, apart from India) or can be transferred to a Worker's U.S. bank account.
Requesters can ask that Workers fulfill qualifications before engaging in a task, and they can establish a test designed to verify the qualification. They can also accept or reject the result sent by the Worker, which affects the Worker's reputation. Requesters paid Amazon a minimum 20% commission on the price of successfully completed jobs, with increased amounts for HITs with ten or more assignments. Requesters can use the Amazon Mechanical Turk API to programmatically integrate the results of the work directly into their business processes and systems. When employers set up a job, they must specify
how much they are paying for each HIT accomplished,
how many workers they want to work on each HIT,
the maximum time a worker has to work on a single task,
how much time the workers have to complete the work,
as well as the specific details about the job they want to be completed.
Location of Turkers
Workers have been primarily located in the United States since the platform's inception, with demographics generally similar to those of the overall Internet population in the U.S. Within the U.S., workers are fairly evenly spread across states, proportional to each state's share of the U.S. population. Between 15,000 and 30,000 people in the U.S. complete at least one HIT each month, and about 4,500 new people join MTurk each month.
Cash payments for Indian workers were introduced in 2010, which shifted the demographics of workers, though they remained primarily within the United States. A website showing worker demographics in May 2015 showed that 80% of workers were located in the United States, with the remaining 20% located elsewhere in the world, most of them in India. In May 2019, approximately 60% were in the U.S. and 40% elsewhere (approximately 30% in India). In early 2023 about 90% of workers were from the U.S., and about half of the remainder were from India.
Uses
Human-subject research
Numerous researchers have explored the viability of Mechanical Turk to recruit subjects for social science experiments. Researchers have generally found that while samples of respondents obtained through Mechanical Turk do not perfectly match all relevant characteristics of the U.S. population, they are also not wildly misrepresentative. As a result, thousands of papers that rely on data collected from Mechanical Turk workers are published each year, including hundreds in top-ranked academic journals.
A challenge with using MTurk for human-subject research has been maintaining data quality. A study published in 2021 found that the types of quality control approaches used by researchers (such as checking for bots, VPN users, or workers willing to submit dishonest responses) can meaningfully influence survey results. They demonstrated this via impact on three common behavioral/mental healthcare screening tools. Even though managing data quality requires work from researchers, there is a large body of research showing how to gather high quality data from MTurk. The cost of using MTurk is considerably lower than many other means of conducting surveys, so many researchers continue to use it.
The general consensus among researchers is that the service works best for recruiting a diverse sample; it is less successful with studies that require more precisely defined populations or that require a representative sample of the population as a whole. Many papers have been published on the demographics of the MTurk population. MTurk workers tend to be younger, more educated, more liberal, and slightly less wealthy than the U.S. population overall.
Machine Learning
Supervised Machine Learning algorithms require large amounts of human-annotated data to be trained successfully. Machine learning researchers have hired Workers through Mechanical Turk to produce datasets such as SQuAD, a question answering dataset.
Missing persons searches
The service has been used to search for prominent missing individuals. This use was first suggested during the search for James Kim, but his body was found before any technical progress was made. Shortly afterwards, computer scientist Jim Gray disappeared on his yacht and Amazon's Werner Vogels, a personal friend, made arrangements for DigitalGlobe, which provides satellite data for Google Maps and Google Earth, to put recent photography of the Farallon Islands on Mechanical Turk. A front-page story on Digg attracted 12,000 searchers who worked with imaging professionals on the same data. The search was unsuccessful.
In September 2007, a similar arrangement was repeated in the search for aviator Steve Fossett. Satellite data was divided into sections, and Mechanical Turk users were asked to flag images with "foreign objects" that might be a crash site or other evidence that should be examined more closely. This search was also unsuccessful. The satellite imagery was mostly within a 50-mile radius, but the crash site was eventually found by hikers about a year later, 65 miles away.
Artistic works
MTurk has also been used as a tool for artistic creation. One of the first artists to work with Mechanical Turk was xtine burrough, with The Mechanical Olympics (2008), Endless Om (2015), and Mediations on Digital Labor (2015). Another work was artist Aaron Koblin's Ten Thousand Cents (2008).
Third-party programming
Programmers have developed browser extensions and scripts designed to simplify the process of completing jobs. Amazon has stated that they disapprove of scripts that completely automate the process and preclude the human element. This is because of the concern that the task completion process—e.g. answering a survey—could be gamed with random responses, and the resultant collected data could be worthless. Accounts using so-called automated bots have been banned.
API
Amazon makes available an application programming interface (API) for the MTurk system. The MTurk API lets a programmer submit jobs, retrieve completed work, and approve or reject that work. In 2017, Amazon launched support for AWS Software Development Kits (SDKs), making nine new SDKs available to MTurk users. MTurk is accessible via API from the following languages: Python, JavaScript, Java, .NET, Go, Ruby, PHP, and C++. Web sites and web services can use the API to integrate MTurk work into other web applications, providing users with alternatives to the interface Amazon has built for these functions.
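A minimal sketch with the Python SDK (boto3) illustrates the cycle of posting a HIT, collecting submitted work, and approving it. The title, reward, durations, and question file are placeholder values, and error handling is omitted:

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Post a HIT with the parameters an employer must specify (see above).
hit = mturk.create_hit(
    Title="Categorize an image",                 # placeholder job
    Description="Choose the label that best fits the image.",
    Reward="0.05",                               # USD per assignment
    MaxAssignments=3,                            # workers per HIT
    AssignmentDurationInSeconds=600,             # max time per worker
    LifetimeInSeconds=86400,                     # how long the HIT is listed
    Question=open("question.xml").read(),        # placeholder question form
)

# Retrieve completed work and approve it, releasing payment.
submitted = mturk.list_assignments_for_hit(
    HITId=hit["HIT"]["HITId"],
    AssignmentStatuses=["Submitted"],
)
for assignment in submitted["Assignments"]:
    mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
```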
Use case examples
Processing photos / videos
Amazon Mechanical Turk provides a platform for processing images, a task well-suited to human intelligence. Requesters have created tasks that ask workers to label objects found in an image, select the most relevant picture in a group of pictures, screen inappropriate content, classify objects in satellite images, or digitize text from images such as scanned forms filled out by hand.
Data cleaning / verification
Companies with large online catalogues use Mechanical Turk to identify duplicates and verify details of item entries. For example: removing duplicates in yellow pages directory listings, checking restaurant details (e.g. phone number and hours), and finding contact information from web pages (e.g. author name and email).
Information collection
The diversity and scale of the Mechanical Turk workforce allow information to be collected at a scale that would be difficult outside of a crowd platform. Mechanical Turk allows Requesters to amass a large number of responses to various types of surveys, from basic demographics to academic research. Other uses include writing comments, descriptions, and blog entries for websites and searching for data elements or specific fields in large government and legal documents.
Data processing
Companies use Mechanical Turk's crowd labor to understand and respond to different types of data. Common uses include editing and transcription of podcasts, translation, and matching search engine results.
Research validity
The validity of research conducted with the Mechanical Turk worker pool has long been debated among experts. This is largely because questions of validity are complex: they involve not only questions of whether the research methods were appropriate and whether the study was well-executed, but also questions about the goal of the project, how the researchers used MTurk, who was sampled, and what conclusions were drawn.
Most experts agree that MTurk is better suited for some types of research than others. MTurk appears well-suited for questions that seek to understand whether two or more things are related to each other (called correlational research; e.g., are happy people more healthy?) and questions that attempt to show one thing causes another thing (experimental research; e.g., being happy makes people more healthy). Fortunately, these categories capture most of the research conducted by behavioral scientists, and most correlational and experimental findings found in nationally representative samples replicate on MTurk.
The type of research that is not well-suited for MTurk is often called "descriptive research." Descriptive research seeks to describe how or what people think, feel, or do; one example is public opinion polling. MTurk is not well-suited to such research because it does not select a representative sample of the general population. Instead, MTurk is a nonprobability, convenience sample. Descriptive research is best conducted with a probability-based, representative sample of the population researchers want to understand. When compared to the general population, people on MTurk are younger, more highly educated, more liberal, and less religious.
Labor issues
Mechanical Turk has been criticized by journalists and activists for its interactions with and use of labor.
Computer scientist Jaron Lanier noted how the design of Mechanical Turk "allows you to think of the people as software components" in a way that conjures "a sense of magic, as if you can just pluck results out of the cloud at an incredibly low cost". A similar point is made in the book Ghost Work by Mary L. Gray and Siddharth Suri.
Critics of MTurk argue that workers are forced onto the site by precarious economic conditions and then exploited by requesters with low wages and a lack of power when disputes occur. Journalist Alana Semuels’s article "The Internet Is Enabling a New Kind of Poorly Paid Hell" in The Atlantic is typical of such criticisms of MTurk.
Some academic papers have obtained findings that support or serve as the basis for such common criticisms, but others contradict them. A recent academic commentary argued that study participants on sites like MTurk should be clearly warned about the circumstances in which they might later be denied payment as a matter of ethics, even though such statements may not reduce the rate of careless responding.
A paper published by a team at CloudResearch shows that only about 7% of people on MTurk view completing HITs as something akin to a full-time job. Most people report that MTurk is a way to earn money during their leisure time or as a side gig. In 2019, the typical worker spent five to eight hours per week and earned around $7 per hour. The sampled workers did not report mistreatment at the hands of requesters; they reported trusting requesters more than employers outside of MTurk. Similar findings were presented in a review of MTurk by the Fair Crowd Work organization, a collective of crowd workers and unions.
Monetary compensation
The minimum payment that Amazon allows for a task is one cent. Because tasks are typically simple and repetitive, the majority of tasks pay only a few cents, but there are also well-paying tasks on the site.
Many criticisms of MTurk stem from the fact that a majority of tasks offer low wages. In addition, workers are considered independent contractors rather than employees. Independent contractors are not protected by the Fair Labor Standards Act or other legislation that protects workers' rights. Workers on MTurk must compete with others for good HIT opportunities, as well as spend uncompensated time searching for tasks and performing other actions.
The low payment offered for many tasks has fueled criticism of Mechanical Turk for exploiting and not compensating workers for the true value of the task they complete. One study of 3.8 million tasks completed by 2,767 workers showed that "workers earned a median hourly wage of about $2 an hour" with 4% of workers earning more than $7.25 per hour.
The Pew Research Center and the International Labour Office published data indicating people made around $5.00 per hour in 2015. A study focused on workers in the U.S. indicated average wages of at least $5.70 an hour, and data from the CloudResearch study found average wages of about $6.61 per hour. Some evidence suggests that very active and experienced people can earn $20 per hour or more.
Fraud
The Nation magazine reported in 2014 that some Requesters had taken advantage of Workers by having them do the tasks, then rejecting their submissions in order to avoid paying them. Available data indicates that rejections are fairly rare. Workers report having a small minority of their HITs rejected, perhaps as low as 1%.
In the Facebook–Cambridge Analytica data scandal, Mechanical Turk was one of the means of covertly gathering private information for a massive database. The system paid people a dollar or two to install a Facebook-connected app and answer personal questions. The survey task, as a work for hire, was not used for a demographic or psychological research project as it might have seemed. The purpose was instead to bait the worker to reveal personal information about the worker's identity that was not already collected by Facebook or Mechanical Turk.
Labor relations
Others have criticized the marketplace for not allowing workers to negotiate with employers. In response to criticisms of payment evasion and lack of representation, a group developed a third-party platform called Turkopticon which allows workers to give feedback on their employers, letting workers avoid potentially unscrupulous jobs and recommend superior employers. Another platform, Dynamo, allows workers to anonymously organize campaigns to better their work environment, such as the Guidelines for Academic Requesters and the Dear Jeff Bezos Campaign. Amazon made it harder for workers to enroll in Dynamo by closing the requester account that provided workers with a code required for Dynamo membership. Workers created third-party plugins to identify higher-paying tasks, but Amazon updated its website to prevent these plugins from working. Workers have complained that Amazon's payment system will on occasion stop working.
Related systems
Mechanical Turk is comparable in some respects to the now discontinued Google Answers service. However, the Mechanical Turk is a more general marketplace that can potentially help distribute any kind of work tasks all over the world. The Collaborative Human Interpreter (CHI) by Philipp Lenssen also suggested using distributed human intelligence to help computer programs perform tasks that computers cannot do well. MTurk could be used as the execution engine for the CHI.
In 2014, the Russian search giant Yandex launched Toloka, a system similar to Mechanical Turk.
See also
CAPTCHA, which challenges and verifies human work at a simple online task
Citizen science
Microwork
References
Further reading
Business Week article on Mechanical Turk by Rob Hof, November 4, 2005.
Wired Magazine story about "Crowdsourcing," June 2006.
Salon.com article on Mechanical Turk by Katharine Mieszkowski, July 24, 2006.
New York Times article on Mechanical Turk by Jason Pontin, March 25, 2007.
Technology Review article on Mechanical Turk, "How Mechanical Turk is Broken," by Christopher Mims, January 3, 2010.
(discusses labor relations)
External links
Requester Best Practices Guide, Updated February 2015.
Amazon (company)
Internet properties established in 2005
Crowdsourcing
Human-based computation
Social information processing
Web services | Amazon Mechanical Turk | [
"Technology"
] | 3,925 | [
"Information systems",
"Human-based computation"
] |
3,083,335 | https://en.wikipedia.org/wiki/Otoscope | An otoscope or auriscope is a medical device used by healthcare professionals to examine the ear canal and eardrum. This may be done as part of routine physical examinations, or for evaluating specific ear complaints, such as earaches, sense of fullness in the ear, or hearing loss.
Usage
Function
An otoscope enables viewing and examination of the ear canal and tympanic membrane (eardrum). Otoscopic examination can help diagnose conditions such as acute otitis media (infection of the middle ear), traumatic perforation of the eardrum, and cholesteatoma.
The presence of cerumen (earwax), shed skin, pus, canal skin edema, foreign bodies, and various ear diseases, can obscure the view of the eardrum and thus compromise the value of otoscopy done with a common otoscope, but can confirm the presence of obstructing symptoms.
Otoscopes can also be used to examine patients' noses (avoiding the need for a separate nasal speculum) and upper throats (by removing the speculum).
Method of use
The most common otoscopes consist of a handle and a head. The head contains a light source and a magnifying lens, to help illuminate and enlarge ear structures. The distal (front) end of the otoscope has an attachment for disposable plastic ear specula.
The examiner first pulls on the pinna (usually the earlobe, side or top) to straighten the ear canal, and then inserts the ear speculum side of the otoscope into the outer ear. It is important to brace the index or little finger of the hand holding the otoscope against the patient's head to avoid injuring the ear canal. The examiner then looks through the lens on the rear of the instrument to see inside the ear canal.
In many models, the examiner can remove the lens and insert instruments like specialized suction tips through the otoscope into the ear canal, such as for removing earwax. Most models also have an insertion point for a bulb that pushes air through the speculum (pneumatic otoscopy) for testing eardrum mobility.
Types
Many otoscopes for doctors' offices are wall-mounted, with an electrical cord providing power from an electric outlet. Portable otoscopes powered by batteries (usually rechargeable) in the handle are also available.
Otoscopes are often sold with ophthalmoscopes as a diagnostic set.
Monocular and binocular
Most otoscopes used in emergency rooms, pediatric offices, general practice, and by internists are monocular devices. These provide a two-dimensional view of the ear canal and its contents, and usually at least a portion of the eardrum.
Another method of performing otoscopy (visualization of the ear) is by using a binocular (two-eyed) microscope in conjunction with a larger plastic or metal ear speculum, which provides a much larger field of view. The microscope is suspended from a stand, which frees up both of the examiner's hands; the patient is placed in a supine position and their head is tilted, which keeps the head stable and enables better lighting. The binocular view enables depth perception, which makes removal of earwax or other obstructing materials easier and less hazardous. The microscope also has up to 40× magnification, allowing more detailed viewing of the entire ear canal, and of the entire eardrum (unless prevented by edema of the canal skin). Subtle changes in the anatomy can also be more easily detected and interpreted.
Traditionally, binocular microscopes are only used by otolaryngologists (ear, nose, and throat specialists) and otologists (subspecialty ear doctors). Their widespread adoption in general medicine is hindered by cost and lack of familiarity among pediatric and general medicine professors in physician training programs. Studies have shown that reliance on a monocular otoscope to diagnose ear disease results in a more than 50% chance of misdiagnosis, as compared to binocular microscopic otoscopy.
Pneumatic otoscope
The pneumatic otoscope is used to examine the eardrum for assessing the health of the middle ear. This is done by assessing the eardrum's contour (normal, retracted, full, or bulging), its color (gray, yellow, pink, amber, white, red, or blue), its translucency (translucent, semi-opaque, opaque), and its mobility (normal, increased, decreased, or absent). The pneumatic otoscope is the standard tool used in diagnosing otitis media (infection of the middle ear).
The pneumatic otoscope has a pneumatic (diagnostic) head, which contains a lens, an enclosed light source, and a nipple for attaching a rubber bulb and tubing. By gently squeezing and releasing the bulb in rapid succession, the degree of eardrum mobility in response to positive and negative pressure can be observed. The head is designed so that an airtight chamber is produced when a speculum is attached and fitted snugly into the patient's ear canal. Using a rubber-tipped speculum or adding a small sleeve of rubber tubing at the end of a plastic speculum, can help improve the airtight seal and also help avoid injuring the patient.
By replacing the pneumatic head with a surgical head, the pneumatic otoscope can also be used to clear earwax from the ear canal, and to perform diagnostic tympanocentesis (drainage of fluid from the middle ear) or myringotomy (creation of incision in the eardrum). The surgical head consists of an unenclosed light source and a lens that can swivel over a wide arc.
See also
References
External links
Phisick – Pictures and information about antique otoscopes
Ear procedures
Endoscopes
Medical equipment
French inventions | Otoscope | [
"Biology"
] | 1,256 | [
"Medical equipment",
"Medical technology"
] |
3,083,485 | https://en.wikipedia.org/wiki/ZETA%20%28fusion%20reactor%29 | ZETA, short for Zero Energy Thermonuclear Assembly, was a major experiment in the early history of fusion power research. Based on the pinch plasma confinement technique, and built at the Atomic Energy Research Establishment in the United Kingdom, ZETA was larger and more powerful than any fusion machine in the world at that time. Its goal was to produce large numbers of fusion reactions, although it was not large enough to produce net energy.
ZETA went into operation in August 1957 and by the end of the month it was giving off bursts of about a million neutrons per pulse. Measurements suggested the fuel was reaching between 1 and 5 million kelvins, a temperature that would produce nuclear fusion reactions, explaining the quantities of neutrons being seen. Early results were leaked to the press in September 1957, and the following January an extensive review was released. Front-page articles in newspapers around the world announced it as a breakthrough towards unlimited energy, a scientific advance for Britain greater than the recently launched Sputnik had been for the Soviet Union.
U.S. and Soviet experiments had also given off similar neutron bursts at temperatures that were not high enough for fusion. This led Lyman Spitzer to express his scepticism of the results, but his comments were dismissed by UK observers as jingoism. Further experiments on ZETA showed that the original temperature measurements were misleading; the bulk temperature was too low for fusion reactions to create the number of neutrons being seen. The claim that ZETA had produced fusion had to be publicly withdrawn, an embarrassing event that cast a chill over the entire fusion establishment. The neutrons were later explained as being the product of instabilities in the fuel. These instabilities appeared inherent to any similar design, and work on the basic pinch concept as a road to fusion power ended by 1961.
Despite ZETA's failure to achieve fusion, the device went on to have a long experimental lifetime and produced numerous important advances in the field. In one line of development, the use of lasers to more accurately measure the temperature was tested on ZETA, and was later used to confirm the results of the Soviet tokamak approach. In another, while examining ZETA test runs it was noticed that the plasma self-stabilised after the power was turned off. This has led to the modern reversed field pinch concept. More generally, studies of the instabilities in ZETA have led to several important theoretical advances that form the basis of modern plasma theory.
Conceptual development
The basic understanding of nuclear fusion was developed during the 1920s as physicists explored the new science of quantum mechanics. George Gamow's 1928 exploration of quantum tunnelling demonstrated that nuclear reactions could take place at lower energies than classical theory predicted. Using this theory, in 1929 Fritz Houtermans and Robert Atkinson demonstrated that expected reaction rates in the core of the Sun supported Arthur Eddington's 1920 suggestion that the Sun is powered by fusion.
In 1934, Mark Oliphant, Paul Harteck and Ernest Rutherford were the first to achieve fusion on Earth, using a particle accelerator to shoot deuterium nuclei into a metal foil containing deuterium, lithium or other elements. This allowed them to measure the nuclear cross section of various fusion reactions, and determined that the deuterium-deuterium reaction occurred at a lower energy than other reactions, peaking at about 100,000 electronvolts (100 keV).
This energy corresponds to the average energy of particles in a gas heated to thousands of millions of kelvins. Materials heated beyond a few tens of thousands of kelvins dissociate into their electrons and nuclei, producing a gas-like state of matter known as plasma. In any gas the particles have a wide range of energies, normally following the Maxwell–Boltzmann statistics. In such a mixture, a small number of particles will have much higher energy than the bulk.
This leads to an interesting possibility: even at temperatures well below 100,000 eV, some particles will randomly have enough energy to undergo fusion. Those reactions release huge amounts of energy. If that energy can be captured back into the plasma, it can heat other particles to that energy as well, making the reaction self-sustaining. In 1944, Enrico Fermi calculated this would occur at about 50,000,000 K.
Confinement
Taking advantage of this possibility requires the fuel plasma to be held together long enough that these random reactions have time to occur. Like any hot gas, the plasma has an internal pressure and thus tends to expand according to the ideal gas law. For a fusion reactor, the problem is keeping the plasma contained against this pressure; any known physical container would melt at these temperatures.
A plasma is electrically conductive, and is subject to electric and magnetic fields. In a magnetic field, the electrons and nuclei orbit the magnetic field lines. A simple confinement system is a plasma-filled tube placed inside the open core of a solenoid. The plasma naturally wants to expand outwards to the walls of the tube, as well as move along it, towards the ends. The solenoid creates a magnetic field running down the centre of the tube, which the particles will orbit, preventing their motion towards the sides. Unfortunately, this arrangement does not confine the plasma along the length of the tube, and the plasma is free to flow out the ends.
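The radius of these orbits, the Larmor or gyro radius, is a standard result; it shrinks as the field strengthens, which is why a stronger solenoid field ties the particles more tightly to the field lines:

\[
r_L = \frac{m v_\perp}{\lvert q \rvert B}
\]

where m is the particle's mass, v⊥ its velocity component perpendicular to the field, q its charge, and B the magnetic field strength.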
The obvious solution to this problem is to bend the tube around into a torus (a ring or doughnut shape). Motion towards the sides remains constrained as before, and while the particles remain free to move along the lines, in this case, they will simply circulate around the long axis of the tube. But, as Fermi pointed out, when the solenoid is bent into a ring, the electrical windings would be closer together on the inside than the outside. This would lead to an uneven field across the tube, and the fuel will slowly drift out of the centre. Some additional force needs to counteract this drift, providing long-term confinement.
Pinch concept
A potential solution to the confinement problem had been detailed in 1934 by Willard Harrison Bennett. Any electric current creates a magnetic field, and due to the Lorentz force, this causes an inward directed force. This was first noticed in lightning rods. Bennett showed that the same effect would cause a current to "self-focus" a plasma into a thin column. A second paper by Lewi Tonks in 1937 considered the issue again, introducing the name "pinch effect". It was followed by a paper by Tonks and William Allis.
Applying a pinch current in a plasma can be used to counteract expansion and confine the plasma. A simple way to do this is to put the plasma in a linear tube and pass a current through it using electrodes at either end, like a fluorescent lamp. This arrangement still produces no confinement along the length of the tube, so the plasma flows onto the electrodes, rapidly eroding them. This is not a problem for a purely experimental machine, and there are ways to reduce the rate. Another solution is to place a magnet next to the tube; when the magnetic field changes, the fluctuations cause an electric current to be induced in the plasma. The major advantage of this arrangement is that there are no physical objects within the tube, so it can be formed into a torus and allow the plasma to circulate freely.
The toroidal pinch concept as a route to fusion was explored in the UK during the mid-1940s, especially by George Paget Thomson of Imperial College London. With the formation of the Atomic Energy Research Establishment (AERE) at Harwell, Oxfordshire, in 1945, Thomson repeatedly petitioned the director, John Cockcroft, for funds to develop an experimental machine. These requests were turned down. At the time there was no obvious military use, so the concept was left unclassified. This allowed Thomson and Moses Blackman to file a patent on the idea in 1946, describing a device using just enough pinch current to ionise and briefly confine the plasma while being heated by a microwave source that would also continually drive the current.
As a practical device there is an additional requirement, that the reaction conditions last long enough to burn a reasonable amount of the fuel. In the original Thomson and Blackman design, it was the job of the microwave injection to drive the electrons to maintain the current and produce pinches that lasted on the order of one minute, allowing the plasma to reach 500 million K. The current in the plasma also heated it; if the current was also used as the heat source, the only limit to the heating was the power of the pulse. This led to a new reactor design where the system operated in brief but very powerful pulses. Such a machine would demand a very large power supply.
First machines
In 1947, Cockcroft arranged a meeting of several Harwell physicists to study Thomson's latest concepts, including Harwell's director of theoretical physics, Klaus Fuchs. Thomson's concepts were poorly received, especially by Fuchs. When this presentation also failed to gain funding, Thomson passed along his concepts to two graduate students at Imperial, Stanley (Stan) W. Cousins and Alan Alfred Ware (1924-2010). He added a report on a type of toroidal particle accelerator known as the "Wirbelrohr" ("whirl tube"), designed in Germany by Max Steenbeck. The Wirbelrohr consisted of a transformer with a torus-shaped vacuum tube as its secondary coil, similar in concept to the toroidal pinch devices.
Later that year, Ware built a small machine out of old radar equipment and was able to induce powerful currents. When it ran, the plasma gave off flashes of light, but he could not devise a way to measure its temperature. Thomson continued to pressure the government to allow him to build a full-scale device, using his considerable political currency to argue for the creation of a dedicated experimental station at the Associated Electrical Industries (AEI) lab that had recently been constructed at Aldermaston.
Ware discussed the experiments with anyone who was interested, including Jim Tuck of the Clarendon Laboratory at Oxford University. While working at Los Alamos during the war, Tuck and Stanislaw Ulam had attempted to build a fusion system using shaped-charge explosives, but it did not work. Tuck was joined by the Australian Peter Thonemann, who had worked on fusion theory, and the two arranged funding through Clarendon to build a small device like the one at Imperial. But before this work started, Tuck was offered a job in the U.S., eventually returning to Los Alamos.
Thonemann continued working on the idea and began a rigorous programme to explore the basic physics of plasmas in a magnetic field. Starting with linear tubes and mercury gas, he found that the current tended to expand outward through the plasma until it touched the walls of the container (see skin effect). He countered this with the addition of small electromagnets outside the tube, which pushed back against the current and kept it centred. By 1949, he had moved on from the glass tubes to a larger copper torus, in which he was able to demonstrate a stable pinched plasma. Frederick Lindemann and Cockcroft visited and were duly impressed.
Cockcroft asked Herbert Skinner to review the concepts, which he did in April 1948. He was sceptical of Thomson's ideas for creating a current in the plasma and thought Thonemann's ideas seemed more likely to work. He also pointed out that the behaviour of plasmas in a magnetic field was not well understood, and that "it is useless to do much further planning before this doubt is resolved."
Meanwhile, at Los Alamos, Tuck acquainted the U.S. researchers with the British efforts. In early 1951, Lyman Spitzer introduced his stellarator concept and was shopping the idea around the nuclear establishment looking for funding. Tuck was sceptical of Spitzer's enthusiasm and felt his development programme was "incredibly ambitious". He proposed a much less aggressive programme based on pinch. Both men presented their ideas in Washington in May 1951, which resulted in the Atomic Energy Commission giving Spitzer US$50,000. Tuck convinced Norris Bradbury, the director of Los Alamos, to give him US$50,000 from the discretionary budget, using it to build the Perhapsatron.
Early results
In 1950 Fuchs admitted to passing UK and U.S. atomic secrets to the USSR. As fusion devices generated high energy neutrons, which could be used to enrich nuclear fuel for bombs, the UK immediately classified all their fusion research. This meant the teams could no longer work in the open environment of the universities. The Imperial team under Ware moved to the AEI labs at Aldermaston and the Oxford team under Thonemann moved to Harwell.
By early 1952 there were numerous pinch devices in operation; Cousins and Ware had built several follow-on machines under the name Sceptre, and the Harwell team had built a series of ever-larger machines known as Mark I through Mark IV. In the U.S., Tuck built his Perhapsatron in January 1952. It was later learned that Fuchs had passed the UK work to the Soviets, and that they had started a fusion programme as well.
It was clear to all of these groups that something was seriously wrong in the pinch machines. As the current was applied, the plasma column inside the vacuum tube would become unstable and break up, ruining the compression. Further work identified two types of instabilities, nicknamed "kink" and "sausage". In the kink, the normally toroidal plasma would bend to the sides, eventually touching the edges of the vessel. In the sausage, the plasma would neck down at locations along the plasma column to form a pattern similar to a link of sausages.
Investigations demonstrated both were caused by the same underlying mechanism. When the pinch current was applied, any area of the gas that had a slightly higher density would create a slightly stronger magnetic field and collapse faster than the surrounding gas. This caused the localised area to have higher density, which created an even stronger pinch, and a runaway reaction would follow. The quick collapse in a single area would cause the whole column to break up.
Stabilised pinch
Early studies of the phenomenon suggested one solution to the problem was to increase the compression rate. In this approach, the compression would be started and stopped so rapidly that the bulk of the plasma would not have time to move; instead, a shock wave created by this rapid compression would be responsible for compressing the majority of the plasma. This approach became known as fast pinch. The Los Alamos team working on the Columbus linear machine designed an updated version to test this theory.
Others started looking for ways to stabilise the plasma during compression, and by 1953 two concepts had come to the fore. One solution was to wrap the vacuum tube in a sheet of thin but highly conductive metal. If the plasma column began to move, the current in the plasma would induce a magnetic field in the sheet, one that, due to Lenz's law, would push back against the plasma. This was most effective against large, slow movements, like the entire plasma torus drifting within the chamber.
The second solution used additional electromagnets wrapped around the vacuum tube. The magnetic fields from these magnets mixed with the pinch field created by the current in the plasma. The result was that the paths of the particles within the plasma tube were no longer purely circular around the torus, but twisted like the stripes on a barber's pole. In the U.S., this concept was known as giving the plasma a "backbone", suppressing small-scale, localised instabilities. Calculations showed that this stabilised pinch would dramatically improve confinement times, and the older concepts "suddenly seemed obsolete".
Marshall Rosenbluth, recently arrived at Los Alamos, began a detailed theoretical study of the pinch concept. With his wife Arianna W. Rosenbluth and Richard Garwin, he developed "motor theory", or "M-theory", published in 1954. The theory predicted that the heating effect of the electric current was greatly increased with the power of the electric field. This suggested that the fast pinch concept would be more likely to succeed, as it was easier to produce larger currents in these devices. When he incorporated the idea of stabilising magnets into the theory a second phenomenon appeared; for a particular, and narrow, set of conditions based on the physical size of the reactor, the power of the stabilising magnets and the amount of pinch, toroidal machines appeared to be naturally stable.
ZETA begins construction
U.S. researchers planned to test both fast pinch and stabilised pinch by modifying their existing small-scale machines. In the UK, Thomson once again pressed for funding for a larger machine. This time he was much more warmly received, and initial funding of £200,000 was provided in late 1954. Design work continued during 1955, and in July the project was named ZETA. The term "zero energy" was a phrase used in the nuclear industry to refer to small research reactors, like ZEEP, which had a role similar to ZETA's goal of producing reactions while releasing no net energy.
The ZETA design was finalised in early 1956. Metropolitan-Vickers was hired to build the machine, which included a 150 tonne pulse transformer, the largest built in Britain to that point. A serious issue arose when the required high-strength steels needed for the electrical components were in short supply, but a strike in the U.S. electrical industry caused a sudden glut of material, resolving the problem.
ZETA was the largest and most powerful fusion device in the world at the time of its construction. Its aluminium torus was over three times the size of any machine built to date. It was also the most powerful design, incorporating an induction magnet that was designed to induce currents of up to 100,000 amperes (amps) in the plasma. Later amendments to the design increased this to 200,000 amps. It included both types of stabilisation; its aluminium walls acted as the metal shield, and a series of secondary magnets ringed the torus. Windows placed in the gaps between the toroidal magnets allowed direct inspection of the plasma.
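The pinch field produced by currents on this scale can be estimated from Ampère's law. A rough sketch using the 200,000 amp figure above; the plasma column radius is an assumed illustrative value, not ZETA's actual bore:

```python
import math

mu_0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A

I = 200_000.0    # plasma current, A (the upgraded design figure)
r = 0.5          # assumed plasma column radius, m (illustrative only)

B_theta = mu_0 * I / (2 * math.pi * r)   # azimuthal pinch field at radius r
p_mag = B_theta ** 2 / (2 * mu_0)        # magnetic pressure squeezing the plasma
print(f"B ~ {B_theta:.2f} T, pressure ~ {p_mag / 1e3:.1f} kPa")  # ~0.08 T, ~2.5 kPa
```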
In July 1954, the AERE was reorganised into the United Kingdom Atomic Energy Authority (UKAEA). Modifications to Harwell's Hangar 7 in order to house the machine began that year. Despite its advanced design, the price tag was modest: about US$1 million. By late 1956 it was clear that ZETA was going to come online in mid-1957, beating the Model C stellarator and the newest versions of the Perhapsatron and Columbus. Because these projects were secret, based on the little information available the press concluded they were versions of the same conceptual device, and that the British were far ahead in the race to produce a working machine.
Soviet visit and the push to declassify
From 1953 the U.S. had increasingly concentrated on the fast pinch concept. Some of these machines had produced neutrons, and these were initially associated with fusion. There was so much excitement that several other researchers quickly entered the field as well. Among these was Stirling Colgate, but his experiments quickly led him to conclude that fusion was not taking place. The Spitzer resistivity relationship meant that the temperature of the plasma could be determined from the current flowing through it. When Colgate performed the calculation, the temperatures in the plasma were far below the requirements for fusion.
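Colgate's consistency check can be sketched numerically. The Spitzer resistivity falls steeply with electron temperature, so a measured plasma resistivity can be inverted to give the temperature. A minimal sketch using the approximate parallel Spitzer formula η ≈ 5.2×10⁻⁵·Z·lnΛ/T_e^(3/2) Ω·m (with T_e in eV); the coefficient is approximate and all inputs are illustrative:

```python
# Invert the approximate Spitzer resistivity to estimate electron
# temperature from a measured resistivity. All values are illustrative.
Z = 1             # hydrogenic plasma
ln_lambda = 10.0  # typical Coulomb logarithm

def temperature_from_resistivity(eta_ohm_m):
    """Electron temperature in eV from the parallel Spitzer resistivity."""
    return (5.2e-5 * Z * ln_lambda / eta_ohm_m) ** (2.0 / 3.0)

eta_measured = 5e-6   # assumed measured resistivity, ohm-m (illustrative)
T_e = temperature_from_resistivity(eta_measured)
print(f"T_e ~ {T_e:.0f} eV")   # ~20 eV: far below fusion conditions
```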
This being the case, some other effect had to be creating the neutrons. Further work demonstrated that these were the result of instabilities in the fuel. The localised areas of high magnetic field acted as tiny particle accelerators, causing reactions that ejected neutrons. Modifications attempting to reduce these instabilities failed to improve the situation and by 1956 the fast pinch concept had largely been abandoned. The U.S. labs began turning their attention to the stabilised pinch concept, but by this time ZETA was almost complete and the US was well behind.
In 1956, while planning a well publicised state visit by Nikita Khrushchev and Nikolai Bulganin to the UK, the Harwell researchers received an offer from Soviet scientist Igor Kurchatov to give a talk. They were surprised when he began his talk on "the possibility of producing thermonuclear reactions in a gaseous discharge". Kurchatov's speech revealed the Soviet efforts to produce fast pinch devices similar to the American designs, and their problems with instabilities in the plasmas. Kurchatov noted that they had also seen neutrons being released, and had initially believed them to be from fusion. But as they examined the numbers, it became clear the plasma was not hot enough and they concluded the neutrons were from other interactions.
Kurchatov's speech made it apparent that the three countries were all working on the same basic concepts and had all encountered the same sorts of problems. Cockcroft missed Kurchatov's visit because he had left for the U.S. to press for declassification of the fusion work to avoid this duplication of effort. There was a widespread belief on both sides of the Atlantic that sharing their findings would greatly improve progress. Now that it was known the Soviets were at the same basic development level, and that they were interested in talking about it publicly, the US and UK began considering releasing much of their information as well. This developed into a wider effort to release all fusion research at the second Atoms for Peace conference in Geneva in September 1958.
In June 1957 the UK and U.S. finalised their agreement to release data to each other sometime prior to the conference, which both the UK and the U.S. planned on attending "in force". The final terms were reached on 27 November 1957, opening the projects to mutual inspection, and calling for a wide public release of all the data in January 1958.
Promising results
ZETA started operation in mid-August 1957, initially with hydrogen. These runs demonstrated that ZETA was not suffering from the same stability problems that earlier pinch machines had seen, and its plasmas lasted for milliseconds, up from microseconds, a full three orders of magnitude improvement. The length of the pulses allowed the plasma temperature to be measured using spectrographic means; although the light emitted was broadband, the Doppler shifting of the spectral lines of slight impurities in the gas (oxygen in particular) led to calculable temperatures.
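The spectrographic estimate relies on thermal Doppler broadening: the ions' thermal motion smears an impurity line, and the line width gives the ion temperature via Δλ/λ₀ = √(8·ln2·k_B·T/(m·c²)). A minimal sketch of the inversion; the line wavelength and measured width are assumed values chosen only for illustration:

```python
import math

# Ion temperature from the thermal Doppler width (FWHM) of a spectral line.
c = 3.0e8              # m/s
k_B = 1.381e-23        # J/K
m_O = 16 * 1.661e-27   # oxygen ion mass, kg

lambda_0 = 500e-9      # assumed impurity line wavelength, m (illustrative)
d_lambda = 0.09e-9     # assumed measured FWHM, m (illustrative)

T = (m_O * c**2 / (8 * math.log(2) * k_B)) * (d_lambda / lambda_0) ** 2
print(f"T ~ {T:.1e} K")   # ~1e6 K, in the range quoted for ZETA
```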
Even in early experimental runs, the team started introducing deuterium gas into the mix and began increasing the current to 200,000 amps. On the evening of 30 August the machine produced huge numbers of neutrons, on the order of one million per experimental pulse, or "shot". An effort to duplicate the results and eliminate possible measurement failure followed.
Much depended on the temperature of the plasma; if the temperature was low, the neutrons would not be fusion related. Spectrographic measurements suggested plasma temperatures between 1 and 5 million K; at those temperatures the predicted rate of fusion was within a factor of two of the number of neutrons being seen. It appeared that ZETA had reached the long-sought goal of producing small numbers of fusion reactions, as it was designed to do.
U.S. efforts had suffered a string of minor technical setbacks that delayed their experiments by about a year; both the new Perhapsatron S-3 and Columbus II did not start operating until around the same time as ZETA in spite of being much smaller experiments. Nevertheless, as these experiments came online in mid-1957, they too began generating neutrons. By September, both these machines and a new design, DCX at Oak Ridge National Laboratory, appeared so promising that Edward Gardner reported that:
Prestige politics
The news was too good to keep bottled up. Tantalising leaks started appearing in September. In October, Thonemann, Cockcroft and William P. Thompson hinted that interesting results would be following. In November a UKAEA spokesman noted "The indications are that fusion has been achieved". Based on these hints, the Financial Times dedicated an entire two-column article to the issue. Between then and early 1958, the British press published an average of two articles a week on ZETA. Even the U.S. papers picked up the story; on 17 November The New York Times reported on the hints of success.
Although the British and the U.S. had agreed to release their data in full, at this point the overall director of the U.S. program, Lewis Strauss, decided to hold back the release. Tuck argued that the field looked so promising that it would be premature to release any data before the researchers knew that fusion was definitely taking place. Strauss agreed, and announced that they would withhold their data for a period to check their results.
As the matter became better known in the press, on 26 November the publication issue was raised in the House of Commons. Responding to a question from the opposition, the leader of the house announced the results publicly while explaining that the delay in publication was due to the UK–U.S. agreement. The UK press interpreted this differently, claiming that the U.S. was dragging its feet because it was unable to replicate the British results.
Things came to a head on 12 December when a former member of parliament, Anthony Nutting, wrote a New York Herald Tribune article claiming:
The article resulted in a flurry of activity in the Macmillan administration. The government had originally planned to release the results at a scheduled meeting of the Royal Society, and there was great concern over whether to invite the Americans and Soviets, especially as they believed the Americans would be greatly upset if the Soviets arrived, but just as upset if they were not invited and the event was all-British. The affair eventually led to the UKAEA making a public announcement that the U.S. was not holding back the ZETA results, but this infuriated the local press, which continued to claim the U.S. was delaying to allow it to catch up.
Early concerns
When the information-sharing agreement was signed in November a further benefit was realised: teams from the various labs were allowed to visit each other. The U.S. team, including Stirling Colgate, Lyman Spitzer, Jim Tuck and Arthur Edward Ruark, all visited ZETA and concluded there was a "major probability" the neutrons were from fusion.
On his return to the U.S., Spitzer calculated that something was wrong with the ZETA results. He noticed that the apparent temperature, 5 million K, would not have time to develop during the short firing times. ZETA did not discharge enough energy into the plasma to heat it to those temperatures so quickly. If the temperature was increasing at the relatively slow rate his calculations suggested, fusion would not be taking place early in the reaction, and could not be adding energy that might make up the difference. Spitzer suspected the temperature reading was not accurate. Since it was the temperature reading that suggested the neutrons were from fusion, if the temperature was lower, it implied the neutrons were non-fusion in origin.
Colgate had reached similar conclusions. In early 1958, he, Harold Furth and John Ferguson started an extensive study of the results from all known pinch machines. Instead of inferring temperature from neutron energy, they used the conductivity of the plasma itself, based on the well-understood relationships between temperature and conductivity. They concluded that the machines were producing temperatures only a fraction of what the neutrons were suggesting, nowhere near hot enough to explain the number of neutrons being produced, regardless of their energy.
By this time the latest versions of the US pinch devices, Perhapsatron S-3 and Columbus S-4, were producing neutrons of their own. The fusion research world reached a high point. In January, results from pinch experiments in the U.S. and UK both announced that neutrons were being released, and that fusion had apparently been achieved. The misgivings of Spitzer and Colgate were ignored.
Public release, worldwide interest
The long-planned release of fusion data was announced to the public in mid-January. Considerable material from the UK's ZETA and Sceptre devices was released in depth in the 25 January 1958 edition of Nature, which also included results from Los Alamos' Perhapsatron S-3, Columbus II and Columbus S-2. The UK press was livid. The Observer wrote that "Admiral Strauss' tactics have soured what should be an exciting announcement of scientific progress so that it has become a sordid episode of prestige politics."
The results were presented in the normally sober scientific language, and although the neutrons were noted, there were no strong claims as to their source. The day before the release, Cockcroft, the overall director at Harwell, called a press conference to introduce the British press to the results. Some indication of the importance of the event can be seen in the presence of a BBC television field crew, a rare occurrence at that time. He began by introducing the fusion programme and the ZETA machine, and then noted:
The reporters at the meeting were not satisfied with this assessment and continued to press Cockcroft on the neutron issue. After being asked several times, he eventually stated that in his opinion, he was "90 percent certain" they were from fusion. This was unwise; a statement of opinion from a Nobel prize winner was taken as a statement of fact. The next day, the Sunday newspapers were covered with the news that fusion had been achieved in ZETA, often with claims about how the UK was now far in the lead in fusion research. Cockcroft further hyped the results on television following the release, stating: "To Britain this discovery is greater than the Russian Sputnik."
As planned, the U.S. also released a large batch of results from their smaller pinch machines. Many of them were also giving off neutrons, although ZETA was stabilised for much longer periods and generating more neutrons, by a factor of about 1000. When questioned about the success in the UK, Strauss denied that the U.S. was behind in the fusion race. When reporting on the topic, The New York Times chose to focus on Los Alamos' Columbus II, only mentioning ZETA later in the article, and then concluded the two countries were "neck and neck." Other reports from the U.S. generally gave equal support to both programmes. Newspapers from the rest of the world were more favourable to the UK; Radio Moscow went so far as to publicly congratulate the UK while failing to mention the U.S. results at all.
As ZETA continued to generate positive results, plans were made to build a follow-on machine. The new design was announced in May; ZETA II would be a significantly larger US$14 million machine whose explicit goal would be to reach 100 million K and generate net power. This announcement gathered praise even in the U.S.; The New York Times ran a story about the new version. Machines similar to ZETA were being announced around the world; Osaka University announced their pinch machine was even more successful than ZETA, the Aldermaston team announced positive results from their Sceptre machine costing only US$28,000, and a new reactor built at Uppsala University was presented publicly later that year. The Efremov Institute in Leningrad began construction of a smaller version of ZETA, although still larger than most, known as Alpha.
Further scepticism, retraction of claims
Spitzer had already concluded that known theory suggested that the ZETA was nowhere near the temperatures the team was claiming, and during the publicity surrounding the release of the work, he suggested that "Some unknown mechanism would appear to be involved". Other researchers in the U.S., notably Furth and Colgate, were far more critical, telling anyone who would listen that the results were bunk. In the Soviet Union, Lev Artsimovich rushed to have the Nature article translated, and after reading it, declared "Chush sobachi!" (bullshit).
Cockcroft had stated that they were receiving too few neutrons from the device to measure their spectrum or their direction. Failing to do so meant they could not eliminate the possibility that the neutrons were being released due to electrical effects in the plasma, the sorts of reactions that Kurchatov had pointed out earlier.
In fact, such measurements would have been easy to make. In the same converted hangar that housed ZETA was the Harwell Synchrocyclotron effort run by Basil Rose. This project had built a sensitive high-pressure diffusion cloud chamber as the cyclotron's main detector. Rose was convinced it would be able to directly measure the neutron energies and trajectories.
In a series of experiments in February and March 1958, he showed that the neutrons had a high directionality, at odds with a fusion origin, which would be expected to be randomly directed. To further demonstrate this he had the machine run with the discharge current running "backwards". Had the neutrons been from fusion, their net velocity should have been zero, that is, they should have been travelling in random directions. The measurements showed this was not the case; not only was there a clear directionality to their release, it reversed when the current was reversed. This suggested the neutrons were a result of the electric current itself, not fusion reactions inside the plasma. They also noted that the energy of the neutrons was extremely close to that of a D-D fusion reaction, which suggested that the source was deuterons colliding with solid material in the reactor.
This was followed by similar experiments on Perhapsatron and Columbus, demonstrating the same problems. The issue was a new form of instability, the "microinstabilities" or MHD instabilities, that were caused by wave-like signals in the plasma. These had been predicted, but whereas the kink was on the scale of the entire plasma and could be easily seen in photographs, the microinstabilities were too small and rapidly moving to detect easily, and had simply not been noticed before. But like the kink, when these instabilities developed, areas of enormous electrical potential formed, rapidly accelerating protons in the area. These sometimes collided with nuclei in the plasma or the container walls, ejecting neutrons through spallation. This is the same physical process that had been creating neutrons in earlier designs, the problem Cockcroft had mentioned during the press releases, but their underlying cause was more difficult to see, and in ZETA they were much more powerful. The promise of stabilised pinch disappeared.
Cockcroft was forced to publish a humiliating retraction on 16 May 1958, claiming "It is doing exactly the job we expected it would do and is functioning exactly the way we hoped it would." Le Monde raised the issue to a front-page headline in June, noting "Contrary to what was announced six months ago at Harwell – British experts confirm that thermonuclear energy has not been 'domesticated'". The event cast a chill over the entire field; it was not only the British who looked foolish, every other country involved in fusion research had been quick to jump on the bandwagon.
Harwell in turmoil, ZETA soldiers on
Beginning in 1955, Cockcroft had pressed for the establishment of a new site for the construction of multiple prototype power-producing fission reactors. This was strongly opposed by Christopher Hinton, and a furious debate broke out within the UKAEA over the issue. Cockcroft eventually won the debate, and in late 1958 the UKAEA formed AEE Winfrith in Dorset, where they eventually built several experimental reactor designs.
Cockcroft had also pressed for the ZETA II reactor to be housed at the new site. He argued that Winfrith would be better suited to build the large reactor, and the unclassified site would better suit the now-unclassified research. This led to what has been described as "as close to a rebellion that the individualistic scientists at Harwell could possibly mount". Thonemann made it clear he was not interested in moving to Dorset and suggested that several other high-ranking members would also quit rather than move. He then went on sabbatical to Princeton University for a year. The entire affair was a major strain on Basil Schonland, who took over the Research division when Cockcroft left in October 1959 to become the Master of the newly formed Churchill College, Cambridge.
While this was taking place, the original ZETA II proposal had been growing ever-larger, eventually specifying currents as powerful as the Joint European Torus that was built years later. As it seemed this was beyond the state-of-the-art, the project was eventually cancelled in February 1959. A new proposal soon took its place, the Intermediate-Current Stability Experiment (ICSE). ICSE was designed to take advantage of further stabilising effects noticed in M-theory, which suggested that very fast pinches would cause the current to flow only in the outer layer of the plasma, which should be much more stable. Over time, this machine grew to be about the same size as ZETA; ICSE had a 6 m major diameter and 1 m minor diameter, powered by a bank of capacitors storing 10 MJ at 100 kV.
Harwell was as unsuited to ICSE as it was for ZETA II, so Schonland approached the government with the idea of a new site for fusion research located close to Harwell. He was surprised to find they were happy with the idea, as this would limit employment at Harwell, whose payroll roster was becoming too complex to manage. Further study demonstrated that the cost of building a new site would be offset by the savings of keeping the site near Harwell; if ICSE were built at Winfrith, the travel costs between the sites would be considerable. In May 1959, the UKAEA purchased RNAS Culham, not far from Harwell. ICSE construction began later that year, starting with a one-acre building to house it, known as "D-1".
Meanwhile, work continued on ZETA to better understand what was causing the new forms of instabilities. New diagnostic techniques demonstrated that the electron energies were very low, on the order of 10 eV (approximately 100,000 K) while ion temperatures were somewhat higher at 100 eV. Both of these pointed to a rapid loss of energy in the plasma, which in turn suggested the fuel was turbulent and escaping confinement to hit the walls of the chamber where it rapidly cooled. A full presentation of the results was made at the Salzburg Conference in 1961, where the Soviet delegation presented very similar results on their ZETA-clone, Alpha.
The source of this turbulence was not clearly identified at that time, but the team suggested it was due to current-driven resistive modes; if one did not use the simplifying assumption that the plasma had no macroscopic resistance, new instabilities would naturally appear. When the new head of the UKAEA, William Penney, heard that the ICSE design was also based on the resistance-free assumption, he cancelled the project in August 1960. Parts for the partially-assembled reactor were scavenged by other teams.
Thonemann had returned by this point and found much to disagree with on ICSE. He demanded to be allowed to set up a new fusion group to remain at Harwell on ZETA. ZETA remained the largest toroidal machine in the world for some time, and went on to have a productive career for just over a decade, but in spite of its later successes ZETA was always known as an example of British folly.
Thomson scattering and tokamaks
ZETA's failure was due to limited information; using the best available measurements, ZETA was returning several signals that suggested the neutrons were due to fusion. The original temperature measurements were made by examining the Doppler shifting of the spectral lines of the atoms in the plasma. The inaccuracy of the measurement, and spurious results caused by electron impacts with the container, led to misleading readings based on the impurities rather than the plasma itself. Over the next decade, ZETA was used continuously in an effort to develop better diagnostic tools to resolve these problems.
This work eventually developed a method that is used to this day. The introduction of lasers provided a new solution, through a British discovery known as Thomson scattering. Lasers have extremely accurate and stable frequency control, and the light they emit interacts strongly with free electrons. A laser shone into the plasma is scattered by the electrons, and during this process is Doppler shifted by the electrons' movement. The speed of the electrons is a function of their temperature, so by comparing the frequency of the light before and after scattering, the temperature of the electrons can be measured with an extremely high degree of accuracy. By "reversing" the system, the temperature of the ions can also be directly measured.
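As a sketch of why this works: the width of the Thomson-scattered spectrum reflects the electron thermal speed, and the fractional Doppler shift is of order v/c. With an assumed electron temperature (illustrative only):

```python
import math

# Electron thermal speed and the fractional Doppler broadening it produces
# in Thomson-scattered laser light.
c = 3.0e8          # m/s
k_B = 1.381e-23    # J/K
m_e = 9.109e-31    # electron mass, kg

T_e = 1.16e7       # assumed electron temperature, K (~1 keV, illustrative)

v_th = math.sqrt(2 * k_B * T_e / m_e)
print(f"v_th ~ {v_th:.2e} m/s, d_lambda/lambda ~ {v_th / c:.3f}")
# A few percent shift: easily resolved against a laser's stable frequency,
# and inverting the relation yields T_e from the measured spectral width.
```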
Through the 1960s ZETA was not the only experiment to suffer from unexpected performance problems. Problems with plasma diffusion across the magnetic fields plagued both the magnetic mirror and stellarator programs, at rates that classical theory could not explain. Adding more fields did not appear to correct the problems in any of the existing designs. Work slowed dramatically as teams around the world tried to better understand the physics of the plasmas in their devices. Pfirsch and Schluter were the first to make a significant advance, suggesting that much larger and more powerful machines would be needed to correct these problems. An attitude of pessimism took root across the entire field.
In 1968 a meeting of fusion researchers took place in Novosibirsk, where, to everyone's astonishment, the Soviet hosts introduced their work on their tokamak designs which had performance numbers that no other experiment was even close to matching. The latest of their designs, the T-3, was producing electron energies of 1000 eV, compared to about 10 eV in ZETA. This corresponded to a plasma temperature of about 10 million K. Although the Soviet team was highly respected, the results were so good that there was serious concern their indirect temperature measurements might be unreliable and they had fallen prey to a measurement problem like the one that had occurred with ZETA. Spitzer, once again, expressed his scepticism rather strongly, sparking off an acrimonious debate with Artsimovich.
The Soviets were equally concerned about this, and even though it was the height of the Cold War, Artsimovich invited UKAEA to bring their laser system to the Kurchatov Institute and independently measure the performance. Artsimovich had previously called their system "brilliant." The team became known as "the Culham five", performing a series of measurements in late 1968 and early 1969. The resulting paper was published in November 1969 and convinced the fusion research field that the tokamak was indeed reaching the levels of performance the Soviets claimed. The result was a "veritable stampede" of tokamak construction around the world, and it remains the most studied device in the fusion field.
Tokamaks are toroidal pinch machines. The key difference is the relative strengths of the fields. In the stabilised pinch machines, most of the magnetic field in the plasma was generated by the current induced in it. The strength of the external stabilisation fields was much lower and only penetrated into the outer layers of the plasma mass. The tokamak reversed this; the external magnets were much more powerful and the plasma current greatly reduced in comparison. Artsimovich put it this way:
This difference is today part of a general concept known as the safety factor, denoted q. It has to be greater than one to maintain stability during a discharge; in ZETA it was well below this. A ZETA-type machine could reach this q, but would require enormously powerful external magnets to match the equally large fields being generated by the current. The tokamak approach resolved this by using less pinch current; this made the system stable but meant the current could no longer be used to heat the plasma. Tokamak designs require some form of external heating.
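In the large-aspect-ratio limit the safety factor can be approximated as q ≈ (r·B_toroidal)/(R·B_poloidal), which makes the trade-off concrete: a strong external toroidal field and a modest plasma current push q above one. A minimal sketch with illustrative tokamak-like numbers, not the parameters of any particular machine:

```python
import math

# Large-aspect-ratio estimate of the safety factor q.
mu_0 = 4 * math.pi * 1e-7

r = 0.3        # minor radius, m (illustrative)
R = 1.0        # major radius, m (illustrative)
B_tor = 2.0    # externally applied toroidal field, T (illustrative)
I_p = 200e3    # plasma current, A (illustrative)

B_pol = mu_0 * I_p / (2 * math.pi * r)   # poloidal field from the plasma current
q = (r * B_tor) / (R * B_pol)
print(f"B_pol ~ {B_pol:.2f} T, q ~ {q:.1f}")   # q ~ 4.5 > 1: tokamak-stable
```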
Reversed field pinch
In 1965, the newly opened Culham laboratory hosted what had become a periodic meeting of international fusion researchers. Of all the work presented, only two papers concerned stabilised pinch, both on ZETA. Spitzer did not mention them during the opening comments.
Normally, the pulse of electricity sent into ZETA formed a current pulse with a shape similar to a Poisson distribution, ramping up quickly then trailing off. One of the papers noted that the plasma stability reached a maximum just after the current began to taper off, and then lasted longer than the current pulse itself. Given that the current was there to provide confinement, that the plasma would actually increase in confinement as the current was reduced was entirely unexpected. This phenomenon was dubbed "quiescence".
Three years later, at the same meeting where the Soviet results with the T-3 tokamak were first released, a paper by Robinson and King examined the quiescence period. They determined it was due to the original toroidal magnetic field reversing itself, creating a more stable configuration. At the time, the impact of the T-3 results overshadowed this finding.
John Bryan Taylor took up the issue and began a detailed theoretical study of the concept, publishing a groundbreaking 1974 article on the topic. He demonstrated that as the magnetic field that generated the pinch was relaxing, it interacted with the pre-existing stabilising fields, creating a self-stable magnetic field. The phenomenon was driven by the system's desire to preserve magnetic helicity, which suggested a number of ways to improve the confinement time.
Although the stabilising force was lower than the force available in the pinch, it lasted considerably longer. It appeared that a reactor could be built that would approach the Lawson criterion from a different direction, using extended confinement times rather than increased density. This was similar to the stellarator approach in concept, and although it would have lower field strength than those machines, the energy needed to maintain the confinement was much lower. Today this approach is known as the reversed field pinch (RFP) and has been a field of continued study.
Taylor's study of the relaxation into the reversed state led to his development of a broader theoretical understanding of the role of magnetic helicity and minimum energy states, greatly advancing the understanding of plasma dynamics. The minimum-energy state, known as the "Taylor state", is particularly important in the understanding of new fusion approaches in the compact toroid class. Taylor went on to study the ballooning transformation, a problem that was occurring in the latest high-performance toroidal machines as large-scale waveforms formed in the plasma. His work in fusion research won him the 1999 James Clerk Maxwell Prize for Plasma Physics.
Demolition
Culham officially opened in 1965, and various teams began leaving the former sites through this period. A team kept ZETA operational until September 1968. Hangar 7, which housed ZETA and other machines, was demolished during financial year 2005/2006.
Notes
References
Citations
Bibliography
External links
Britain's Sputnik – BBC Radio 4 programme on ZETA first broadcast on 16 January 2008
ZETA – Peace Atoms, contemporary newsreel story on the reactor.
Magnetic confinement fusion devices
Nuclear power in the United Kingdom
Nuclear research institutes in the United Kingdom
Nuclear research reactors
Nuclear technology in the United Kingdom
Research institutes in Oxfordshire
Vale of White Horse | ZETA (fusion reactor) | [
"Chemistry"
] | 9,682 | [
"Particle traps",
"Magnetic confinement fusion devices"
] |
3,083,822 | https://en.wikipedia.org/wiki/Razdow%20Telescope | Razdow Laboratories, Inc. was founded by Austrian-born physicist Dr. Adolph Razdow (1908–1985). A refugee of the Holocaust, he emigrated to the United States in July 1946. In the early 1960s Razdow was awarded a contract by NASA to develop and deploy a series of solar monitoring telescopes at major observatories around the globe. These devices automatically tracked the Sun across the sky, recording and transmitting television images of the solar disk in the hydrogen-alpha spectrum. NASA astronauts, who would soon be traversing the space around the Earth, would be vulnerable to radiation storms associated with solar flares, and these telescopes were commissioned to provide a 24-hour watch on solar activity. A few of these telescopes are still in operation.
References
Telescope manufacturers
Solar telescopes | Razdow Telescope | [
"Astronomy"
] | 159 | [
"Telescope manufacturers",
"People associated with astronomy"
] |
3,084,027 | https://en.wikipedia.org/wiki/Targeted%20therapy | Targeted therapy or molecularly targeted therapy is one of the major modalities of medical treatment (pharmacotherapy) for cancer, others being hormonal therapy and cytotoxic chemotherapy. As a form of molecular medicine, targeted therapy blocks the growth of cancer cells by interfering with specific targeted molecules needed for carcinogenesis and tumor growth, rather than by simply interfering with all rapidly dividing cells (e.g. with traditional chemotherapy). Because most agents for targeted therapy are biopharmaceuticals, the term biologic therapy is sometimes synonymous with targeted therapy when used in the context of cancer therapy (and thus distinguished from chemotherapy, that is, cytotoxic therapy). However, the modalities can be combined; antibody-drug conjugates combine biologic and cytotoxic mechanisms into one targeted therapy.
Another form of targeted therapy involves the use of nanoengineered enzymes to bind to a tumor cell such that the body's natural cell degradation process can digest the cell, effectively eliminating it from the body.
Targeted cancer therapies are expected to be more effective than older forms of treatments and less harmful to normal cells. Many targeted therapies are examples of immunotherapy (using immune mechanisms for therapeutic goals) developed by the field of cancer immunology. Thus, as immunomodulators, they are one type of biological response modifiers.
The most successful targeted therapies are chemical entities that target or preferentially target a protein or enzyme that carries a mutation or other genetic alteration that is specific to cancer cells and not found in normal host tissue. One of the most successful molecular targeted therapeutics is imatinib, marketed as Gleevec, a kinase inhibitor with exceptional affinity for the oncofusion protein BCR-Abl, which is a strong driver of tumorigenesis in chronic myelogenous leukemia. Although employed in other indications, imatinib is most effective targeting BCR-Abl. Other examples of molecular targeted therapeutics targeting mutated oncogenes include PLX27892, which targets mutant B-raf in melanoma.
There are targeted therapies for lung cancer, colorectal cancer, head and neck cancer, breast cancer, multiple myeloma, lymphoma, prostate cancer, melanoma and other cancers.
Biomarkers are usually required to aid the selection of patients who will likely respond to a given targeted therapy.
Co-targeted therapy involves the use of one or more therapeutics aimed at multiple targets, for example PI3K and MEK, in an attempt to generate a synergistic response and prevent the development of drug resistance.
The definitive experiments showing that targeted therapy could reverse the malignant phenotype of tumor cells involved treating Her2/neu-transformed cells with monoclonal antibodies in vitro and in vivo; they were carried out in Mark Greene's laboratory and reported from 1985.
Some have challenged the use of the term, stating that drugs usually associated with the term are insufficiently selective. The phrase occasionally appears in scare quotes: "targeted therapy". Targeted therapies may also be described as "chemotherapy" or "non-cytotoxic chemotherapy", as "chemotherapy" strictly means only "treatment by chemicals". But in typical medical and general usage "chemotherapy" is now mostly used specifically for "traditional" cytotoxic chemotherapy.
Types
The main categories of targeted therapy are currently small molecules and monoclonal antibodies.
Small molecules
Many are tyrosine-kinase inhibitors.
Imatinib (Gleevec, also known as STI-571) is approved for chronic myelogenous leukemia, gastrointestinal stromal tumor and some other types of cancer. Early clinical trials indicate that imatinib may be effective in the treatment of dermatofibrosarcoma protuberans.
Gefitinib (Iressa, also known as ZD1839) targets the epidermal growth factor receptor (EGFR) tyrosine kinase and is approved in the U.S. for non-small-cell lung cancer.
Erlotinib (marketed as Tarceva) inhibits the epidermal growth factor receptor and works through a similar mechanism to gefitinib. Erlotinib has been shown to increase survival in metastatic non-small-cell lung cancer when used as second-line therapy. Because of this finding, erlotinib has replaced gefitinib in this setting.
Sorafenib (Nexavar)
Sunitinib (Sutent)
Dasatinib (Sprycel)
Lapatinib (Tykerb)
Nilotinib (Tasigna)
Bosutinib (Bosulif)
Ponatinib (Iclusig)
Asciminib (Scemblix)
Bortezomib (Velcade) is an apoptosis-inducing proteasome inhibitor that causes cancer cells to undergo cell death by interfering with the breakdown of proteins. It is approved in the U.S. to treat multiple myeloma that has not responded to other treatments.
The selective estrogen receptor modulator tamoxifen has been described as the foundation of targeted therapy.
Janus kinase inhibitors, e.g. FDA approved tofacitinib
ALK inhibitors, e.g. crizotinib
Bcl-2 inhibitors (e.g. FDA approved venetoclax, obatoclax in clinical trials, navitoclax, and gossypol).
PARP inhibitors (e.g. FDA approved olaparib, rucaparib, niraparib and talazoparib)
PI3K inhibitors (e.g. perifosine in a phase III trial)
Apatinib is a selective VEGF Receptor 2 inhibitor which has shown encouraging anti-tumor activity in a broad range of malignancies in clinical trials. Apatinib is currently in clinical development for metastatic gastric carcinoma, metastatic breast cancer and advanced hepatocellular carcinoma.
Zoptarelin doxorubicin (AN-152), doxorubicin linked to [D-Lys(6)]-LHRH, with Phase II results for ovarian cancer.
Braf inhibitors (vemurafenib, dabrafenib, LGX818) used to treat metastatic melanoma that harbors BRAF V600E mutation
MEK inhibitors (trametinib, MEK162) are used in experiments, often in combination with BRAF inhibitors to treat melanoma
CDK inhibitors, e.g. PD-0332991, LEE011 in clinical trials
Hsp90 inhibitors, some in clinical trials
Hedgehog pathway inhibitors (e.g. FDA approved vismodegib and sonidegib).
Salinomycin has demonstrated potency in killing cancer stem cells in both laboratory-created and naturally occurring breast tumors in mice.
VAL-083 (dianhydrogalactitol), a “first-in-class” DNA-targeting agent with a unique bi-functional DNA cross-linking mechanism. NCI-sponsored clinical trials have demonstrated clinical activity against a number of different cancers including glioblastoma, ovarian cancer, and lung cancer. VAL-083 is currently undergoing Phase 2 and Phase 3 clinical trials as a potential treatment for glioblastoma (GBM) and ovarian cancer. As of July 2017, four different trials of VAL-083 are registered.
Ibrutinib blocks Bruton's tyrosine kinase (BTK) and is used to treat mantle cell lymphoma, chronic lymphocytic leukemia, and Waldenström's macroglobulinemia.
Small molecule drug conjugates
Vintafolide is a small molecule drug conjugate consisting of a small molecule targeting the folate receptor. It is currently in clinical trials for platinum-resistant ovarian cancer (PROCEED trial) and a Phase 2b study (TARGET trial) in non-small-cell lung carcinoma (NSCLC).
Serine/threonine kinase inhibitors (small molecules)
Temsirolimus (Torisel)
Everolimus (Afinitor)
Vemurafenib (Zelboraf)
Trametinib (Mekinist)
Dabrafenib (Tafinlar)
Monoclonal antibodies
Several are in development and a few have been licensed by the FDA and the European Commission. Examples of licensed monoclonal antibodies include:
Pembrolizumab (Keytruda) binds to PD-1 proteins found on T cells. Pembrolizumab blocks PD-1 and helps the immune system kill cancer cells. It is used to treat melanoma, Hodgkin's lymphoma, non-small cell lung carcinoma and several other types of cancer.
Rituximab targets CD20 found on B cells. It is used in non-Hodgkin lymphoma.
Trastuzumab targets the Her2/neu (also known as ErbB2) receptor expressed in some types of breast cancer.
Alemtuzumab
Cetuximab targets the epidermal growth factor receptor (EGFR). It is approved for use in the treatment of metastatic colorectal cancer and squamous cell carcinoma of the head and neck.
Panitumumab also targets the EGFR. It is approved for the use in the treatment of metastatic colorectal cancer.
Bevacizumab targets circulating VEGF ligand. It is approved for use in the treatment of colon cancer, breast cancer, and non-small cell lung cancer, and is investigational in the treatment of sarcoma. Its use for the treatment of brain tumors has been recommended.
Ipilimumab (Yervoy)
Brentuximab targets CD30 and is useful in some types of lymphoma.
Many antibody-drug conjugates (ADCs) are being developed. See also antibody-directed enzyme prodrug therapy (ADEPT).
Progress and future
In the U.S., the National Cancer Institute's Molecular Targets Development Program (MTDP) aims to identify and evaluate molecular targets that may be candidates for drug development.
See also
History of cancer chemotherapy#Targeted therapy
Targeted drug delivery
Targeted molecular therapy for neuroblastoma
Targeted therapy of lung cancer
Treatment of lung cancer#Targeted therapy
Targeted covalent inhibitors
References
External links
Targeted Therapy Database (TTD) from the Melanoma Molecular Map Project
Targeted therapy Fact sheet from the U.S. National Cancer Institute
Molecular Oncology: Receptor-Based Therapy Special issue of Journal of Clinical Oncology (April 10, 2005) dedicated to targeted therapies in cancer treatment
Targeting Targeted Therapy New England Journal of Medicine (2004)
Targeting tumors with medicinal cannabis oil – publication list from Spain
Antineoplastic drugs
Drugs | Targeted therapy | [
"Chemistry"
] | 2,281 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
3,084,295 | https://en.wikipedia.org/wiki/Linear%20network%20coding | In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations.
Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as its resilience to attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network.
It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source. However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding. Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard and even undecidable.
Encoding and decoding
In a linear network coding problem, a group of nodes are involved in moving the data from source nodes to sink nodes. Each node generates new packets which are linear combinations of past received packets, multiplying them by coefficients chosen from a finite field, typically a binary extension field such as GF(2) or GF(2^8).
More formally, each node with indegree $S$ generates a new message $X$ from the linear combination of the received messages $M_1, M_2, \ldots, M_S$ by the formula:

$X = \sum_{i=1}^{S} g_i \cdot M_i$

where the values $g_i$ are coefficients selected from the finite field. Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value $X$ along with the coefficients $(g_1, \ldots, g_S)$ used to generate it.
Sink nodes receive these network-coded messages and collect them in a matrix, with each row holding a received packet's coefficient vector and payload. The original messages can be recovered by performing Gaussian elimination on the matrix. In reduced row echelon form, decoded packets correspond to rows whose coefficient vector is a standard unit vector $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$; each such row yields one original message.
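A minimal sketch of the decoding step over GF(2), where addition is XOR; packets are stored as (coefficient vector, payload) pairs and Gaussian elimination reduces them. The packet contents here are made up for illustration:

```python
# Gaussian elimination over GF(2) to decode network-coded packets.
def decode_gf2(packets, k):
    rows = [(coeffs[:], payload) for coeffs, payload in packets]
    for col in range(k):
        # Find a row with a 1 in this column and swap it into place.
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            # Eliminate this column from every other row (XOR = GF(2) addition).
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [payload for _, payload in rows[:k]]

# Originals A = 0b1010 and B = 0b0110; the sink received A and A + B.
received = [([1, 0], 0b1010), ([1, 1], 0b1010 ^ 0b0110)]
print([bin(m) for m in decode_gf2(received, k=2)])  # ['0b1010', '0b110']
```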
Background
A network is represented by a directed graph $G = (V, E, C)$, where $V$ is the set of nodes (or vertices), $E$ is the set of directed links (or edges), and $C$ gives the capacity of each link of $E$. Let $T(s, t)$ be the maximum possible throughput from node $s$ to node $t$. By the max-flow min-cut theorem, $T(s, t)$ is upper bounded by the minimum capacity over all cuts between these two nodes, where the capacity of a cut is the sum of the capacities of its edges.
Karl Menger proved that there is always a set of edge-disjoint paths achieving the upper bound in a unicast scenario, a result closely related to the max-flow min-cut theorem. Later, the Ford–Fulkerson algorithm was proposed to find such paths in polynomial time. Then, Edmonds proved in the paper "Edge-Disjoint Branchings" that the upper bound in the broadcast scenario is also achievable, and proposed a polynomial-time algorithm.
However, the situation in the multicast scenario is more complicated, and in fact, such an upper bound can't be reached using traditional routing ideas. Ahlswede et al. proved that it can be achieved if additional computing tasks (incoming packets are combined into one or several outgoing packets) can be done in the intermediate nodes.
The Butterfly Network
The butterfly network is often used to illustrate how linear network coding can outperform routing. Two source nodes (at the top of the picture) have information A and B that must be transmitted to the two destination nodes (at the bottom). Each destination node wants to know both A and B. Each edge can carry only a single value (we can think of an edge transmitting a bit in each time slot).
If only routing were allowed, then the central link would only be able to carry A or B, but not both. Suppose we send A through the center; then the left destination would receive A twice and not know B at all. Sending B poses a similar problem for the right destination. We say that routing is insufficient because no routing scheme can transmit both A and B to both destinations simultaneously. With routing alone, it takes four time slots in total for both destination nodes to know A and B.
Using a simple code, as shown, A and B can be transmitted to both destinations simultaneously by sending the sum of the symbols through the two relay nodes – encoding A and B using the formula "A+B". The left destination receives A and A + B, and can calculate B by subtracting the two values. Similarly, the right destination will receive B and A + B, and will also be able to determine both A and B. Therefore, with network coding, it takes only three time slots and improves the throughput.
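When the symbols are single bits, this coding reduces to one XOR. The following toy sketch (illustrative only) checks the recovery argument above:

```python
# The butterfly example over GF(2): the bottleneck carries A XOR B and each
# destination recovers the missing symbol with one more XOR.
A, B = 1, 0                # one-bit symbols from the two sources

centre = A ^ B             # the single coded transmission on the central link

left_B = A ^ centre        # left destination: A ^ (A ^ B) == B
right_A = B ^ centre       # right destination: B ^ (A ^ B) == A

assert (left_B, right_A) == (B, A)
```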
Random Linear Network Coding
Random linear network coding (RLNC) is a simple yet powerful encoding scheme, which in broadcast transmission schemes allows close to optimal throughput using a decentralized algorithm. Nodes transmit random linear combinations of the packets they receive, with coefficients chosen uniformly at random from a Galois field. If the field size is sufficiently large, the probability that the receiver(s) will obtain linearly independent combinations (and therefore obtain innovative information) approaches 1. Note, however, that although random linear network coding has excellent throughput performance, a receiver that obtains an insufficient number of packets is extremely unlikely to recover any of the original packets. This can be addressed by sending additional random linear combinations until the receiver obtains the appropriate number of packets.
Operation and key parameters
There are three key parameters in RLNC. The first one is the generation size. In RLNC, the original data transmitted over the network is divided into packets. The source and intermediate nodes in the network can combine and recombine the set of original and coded packets. The original packets form a block, usually called a generation. The number of original packets combined and recombined together is the generation size. The second parameter is the packet size. Usually, the size of the original packets is fixed. In the case of unequally-sized packets, these can be zero-padded if they are shorter or split into multiple packets if they are longer. In practice, the packet size can be the size of the maximum transmission unit (MTU) of the underlying network protocol. For example, it can be around 1500 bytes in an Ethernet frame. The third key parameter is the Galois field used. In practice, the most commonly used Galois fields are binary extension fields, and the most commonly used sizes are the binary field GF(2) and the so-called binary-8 field GF(2^8). In the binary field, each element is one bit long, while in the binary-8 field, it is one byte long. Since the packet size is usually larger than the field size, each packet is seen as a set of elements from the Galois field (usually referred to as symbols) appended together. The packets have a fixed number of symbols (Galois field elements), and since all the operations are performed over Galois fields, the size of the packets does not change with subsequent linear combinations.
The sources and the intermediate nodes can combine any subset of the original and previously coded packets using linear operations. To form a coded packet in RLNC, the original and previously coded packets are multiplied by randomly chosen coefficients and added together. Since each packet is just an appended set of Galois field elements, the operations of multiplication and addition are performed symbol-wise over each of the individual symbols of the packets, as illustrated in the example below.
To preserve the statelessness of the code, the coding coefficients used to generate the coded packets are appended to the packets transmitted over the network. Therefore, each node in the network can see what coefficients were used to generate each coded packet. One novelty of linear network coding over traditional block codes is that it allows the recombination of previously coded packets into new and valid coded packets. This process is usually called recoding. After a recoding operation, the size of the appended coding coefficients does not change. Since all the operations are linear, the state of the recoded packet can be preserved by applying the same operations of addition and multiplication to the payload and the appended coding coefficients. In the following example, we will illustrate this process.
Any destination node must collect enough linearly independent coded packets to be able to reconstruct the original data. Each coded packet can be understood as a linear equation whose coefficients are known, since they are appended to the packet. In these equations, each of the original packets is an unknown. To solve the linear system of equations, the destination needs at least as many linearly independent equations (packets) as there are original packets in the generation.
Example
As an example, consider two packets linearly combined into a new coded packet. The generation size of the example is two. We know this because each packet has two coding coefficients appended. The appended coefficients can take any value from the Galois field. However, an original, uncoded data packet would have the coding coefficients (1, 0) or (0, 1) appended, meaning that it is constructed as a linear combination of one times itself plus zero times the other packet. Any coded packet would have other coefficients appended. Since network coding can be applied at any layer of the communication protocol, these packets can have a header from the other layers, which is ignored in the network coding operations.
Now, let's assume that a network node wants to produce a new coded packet combining the two packets. In RLNC, it will randomly choose two coding coefficients. The node will multiply each symbol of the first packet by the first coefficient, and each symbol of the second packet by the second coefficient. Then, it will add the results symbol-wise to produce the new coded data. It will perform the same operations of multiplication and addition on the coding coefficients of the coded packet.
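Below is a hedged sketch of this symbol-wise combination over GF(2^8). The reduction polynomial (the AES polynomial) and the helper names gf256_mul and combine are assumptions for illustration, not fixed by the text; note how the same multipliers are applied to the payload symbols and to the appended coefficient vectors, which is what makes recoding possible:

```python
import random

def gf256_mul(a, b):
    """Carry-less multiplication in GF(2^8), reduced by x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r & 0xFF

def combine(coded_packets):
    """Recombine (coefficient vector, payload symbols) pairs with random
    multipliers; the same multiplier hits coefficients and payload alike."""
    g = len(coded_packets[0][0])           # generation size
    n = len(coded_packets[0][1])           # symbols per packet
    out_coeffs, out_payload = [0] * g, [0] * n
    for coeffs, payload in coded_packets:
        m = random.randrange(1, 256)       # random nonzero multiplier
        for i in range(g):
            out_coeffs[i] ^= gf256_mul(m, coeffs[i])
        for j in range(n):
            out_payload[j] ^= gf256_mul(m, payload[j])
    return out_coeffs, out_payload

# Uncoded packets carry the unit coefficient vectors (1, 0) and (0, 1).
pkt_1 = ([1, 0], [0x0A, 0x17, 0xFF])
pkt_2 = ([0, 1], [0x21, 0x00, 0x3C])
coded = combine([pkt_1, pkt_2])            # a coded packet
recoded = combine([coded, pkt_1])          # recoding keeps it self-describing
```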
Misconceptions
Linear network coding is still a relatively young subject, although it has been researched extensively over the last twenty years. Nevertheless, some early concerns persist as misconceptions, even though they no longer hold:
Decoding computational complexity: Network coding decoders have been improved over the years. Nowadays, the algorithms are highly efficient. In 2016, with Intel Core i5 processors with SIMD instructions enabled, the decoding goodput of network coding was 750 MB/s for a generation size of 16 packets and 250 MB/s for a generation size of 64 packets. Furthermore, today's algorithms are highly parallelizable, increasing the encoding and decoding goodput even further.
Transmission Overhead: It is usually thought that the transmission overhead of network coding is high due to the need to append the coding coefficients to each coded packet. In reality, this overhead is negligible in most applications. The overhead due to coding coefficients can be computed as follows. Each coded packet has one appended coding coefficient per original packet in the generation. The size of each coefficient is the number of bits needed to represent one element of the Galois field. In practice, most network coding applications use a generation size of no more than 32 packets per generation and Galois fields of 256 elements (binary-8). With these numbers, each packet needs 32 coefficients of one byte each, i.e. 32 bytes of appended overhead. If each packet is 1500 bytes long (i.e. the Ethernet MTU), then 32 bytes represent an overhead of only 2%.
Overhead due to linear dependencies: Since the coding coefficients are chosen randomly in RLNC, there is a chance that some transmitted coded packets are not beneficial to the destination because they are formed using a linearly dependent combination of packets. However, this overhead is negligible in most applications. The linear dependencies depend on the size $q$ of the Galois field and are practically independent of the generation size used. We can illustrate this with the following example. Let us assume we are using a Galois field of $q$ elements and a generation size of $g$ packets. If the destination has not received any coded packet, we say it is missing $g$ degrees of freedom, and then almost any coded packet will be useful and innovative. In fact, only the zero-packet (only zeroes in the coding coefficients) will be non-innovative. The probability of generating the zero-packet is equal to the probability of each of the $g$ coding coefficients being the zero element of the Galois field, i.e., the probability of a non-innovative packet is $q^{-g}$. With each successive innovative reception, it can be shown that the exponent of the probability of a non-innovative packet is reduced by one. When the destination has received $g-1$ innovative packets (i.e., it needs only one more packet to fully decode the data), the probability of a non-innovative packet is $q^{-1}$. We can use this knowledge to calculate the expected number of linearly dependent packets per generation. In the worst-case scenario, when the Galois field used contains only two elements ($q = 2$), the expected number of linearly dependent packets per generation is about 1.6 extra packets. If our generation size is 32 or 64 packets, this represents an overhead of 5% or 2.5%, respectively. If we use the binary-8 field ($q = 2^8$), then the expected number of linearly dependent packets per generation is practically zero. Since the last packets of a generation are the major contributors to the overhead due to linear dependencies, there are RLNC-based protocols, such as tunable sparse network coding, that exploit this knowledge. These protocols introduce sparsity (zero elements) in the coding coefficients at the beginning of the transmission to reduce the decoding complexity, and reduce the sparsity at the end of the transmission to reduce the overhead due to linear dependencies.
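The figures quoted above can be reproduced with a short computation. Assuming, as in the text, that a packet generated while $d$ degrees of freedom are missing is non-innovative with probability $q^{-d}$, the expected number of wasted transmissions per generation is $\sum_{d=1}^{g} 1/(q^d - 1)$:

```python
# Reproducing the overhead figures above: the expected number of linearly
# dependent transmissions per generation is the sum over the missing degrees
# of freedom d of 1 / (q**d - 1).
def expected_dependent_packets(q, generation_size):
    return sum(1.0 / (q**d - 1) for d in range(1, generation_size + 1))

print(expected_dependent_packets(2, 32))    # ~1.60 extra packets for GF(2)
print(expected_dependent_packets(256, 32))  # ~0.004: negligible for GF(2^8)
```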
Applications
Over the years, multiple researchers and companies have integrated network coding solutions into their applications. We can list some of the applications of network coding in different areas:
VoIP: The performance of streaming services such as VoIP over wireless mesh networks can be improved with network coding by reducing the network delay and jitter.
Video and audio streaming and conferencing: The performance of MPEG-4 traffic in terms of delay, packet loss, and jitter over wireless networks prone to packet erasures can be improved with RLNC. In the case of audio streaming over wireless mesh networks, the packet delivery ratio, latency, and jitter performance of the network can be significantly increased when using RLNC instead of packet forwarding-based protocols such as simplified multicast forwarding and partial dominant pruning. The performance improvements of network coding for video conferencing are not only theoretical. In 2016, the authors of one study built a real-world testbed of 15 wireless Android devices to evaluate the feasibility of network-coding-based video conference systems. Their results showed large improvements in packet delivery ratio and overall user experience, especially over poor quality links compared to multicasting technologies based on packet forwarding.
Software-defined wide area networks (SD-WAN): Large industrial IoT wireless networks can benefit from network coding. Researchers showed that network coding and its channel bundling capabilities improved the performance of SD-WANs with a large number of nodes with multiple cellular connections. Nowadays, companies such as Barracuda are employing RLNC-based solutions due to their advantages in low latency, small footprint on computing devices, and low overhead.
Channel bundling: Due to the stateless characteristics of RLNC, it can be used to efficiently perform channel bundling, i.e., the transmission of information through multiple network interfaces. Since the coded packets are randomly generated, and the state of the code traverses the network together with the coded packets, a source can achieve bundling without much planning just by sending coded packets through all its network interfaces. The destination can decode the information once enough coded packets arrive, irrespective of the network interface. A video demonstrating the channel bundling capabilities of RLNC is available online.
5G private networks: RLNC can be integrated into the 5G NR standard to improve the performance of video delivery over 5G systems. In 2018, a demo presented at the Consumer Electronics Show demonstrated a practical deployment of RLNC with NFV and SDN technologies to improve video quality against packet loss due to congestion at the core network.
Remote collaboration.
Augmented reality remote support and training.
Remote vehicle driving applications.
Connected cars networks.
Gaming applications such as low latency streaming and multiplayer connectivity.
Healthcare applications.
Industry 4.0.
Satellite networks.
Agricultural sensor fields.
In-flight entertainment networks.
Major security and firmware updates for mobile product families.
Smart city infrastructure.
Information-centric networking and named data networking: Linear network coding can improve the network efficiency of information-centric networking solutions by exploiting the multi-source multicast nature of such systems. It has been shown that RLNC can be integrated into distributed content delivery networks such as IPFS to increase data availability while reducing storage resources.
Alternative to forward error correction and automatic repeat requests in traditional and wireless networks with packet loss, such as Coded TCP and Multi-user ARQ
Protection against network attacks such as snooping, eavesdropping, replay, or data corruption.
Digital file distribution and P2P file sharing, e.g. Avalanche filesystem from Microsoft
Distributed storage
Throughput increase in wireless mesh networks, e.g.: COPE, CORE, Coding-aware routing, and B.A.T.M.A.N.
Buffer and delay reduction in spatial sensor networks: Spatial buffer multiplexing
Wireless broadcast: RLNC can reduce the number of packet transmissions for a single-hop wireless multicast network, and hence improve network bandwidth
Distributed file sharing
Low-complexity video streaming to mobile devices
Device-to-device extensions
See also
Secret sharing protocol
Homomorphic signatures for network coding
Triangular network coding
References
Fragouli, C.; Le Boudec, J. & Widmer, J. "Network coding: An instant primer" in Computer Communication Review, 2006. https://doi.org/10.1145/1111322.1111337
Ali Farzamnia, Sharifah K. Syed-Yusof, Norsheila Fisa "Multicasting Multiple Description Coding Using p-Cycle Network Coding", KSII Transactions on Internet and Information Systems, Vol 7, No 12, 2013.
External links
Network Coding Homepage
A network coding bibliography
Raymond W. Yeung, Information Theory and Network Coding, Springer 2008, http://iest2.ie.cuhk.edu.hk/~whyeung/book2/
Raymond W. Yeung et al., Network Coding Theory, now Publishers, 2005, http://iest2.ie.cuhk.edu.hk/~whyeung/netcode/monograph.html
Christina Fragouli et al., Network Coding: An Instant Primer, ACM SIGCOMM 2006, http://infoscience.epfl.ch/getfile.py?mode=best&recid=58339.
Avalanche Filesystem, http://research.microsoft.com/en-us/projects/avalanche/default.aspx
Random Network Coding, https://web.archive.org/web/20060618083034/http://www.mit.edu/~medard/coding1.htm
Digital Fountain Codes, http://www.icsi.berkeley.edu/~luby/
Coding-Aware Routing, https://web.archive.org/web/20081011124616/http://arena.cse.sc.edu/papers/rocx.secon06.pdf
MIT offers a course: Introduction to Network Coding
Network coding: Networking's next revolution?
Coding-aware protocol design for wireless networks: http://scholarcommons.sc.edu/etd/230/
Coding theory
Information theory
Finite fields
Network performance
Wireless sensor network | Linear network coding | [
"Mathematics",
"Technology",
"Engineering"
] | 4,081 | [
"Discrete mathematics",
"Coding theory",
"Telecommunications engineering",
"Applied mathematics",
"Wireless networking",
"Wireless sensor network",
"Computer science",
"Information theory"
] |
3,084,362 | https://en.wikipedia.org/wiki/Party%20wall | A party wall (occasionally parti-wall or parting wall, shared wall, also known as common wall or as a demising wall) is a wall shared by two adjoining properties. Typically, the builder lays the wall along a property line dividing two terraced houses, so that one half of the wall's thickness lies on each side. This type of wall is usually structural. Party walls can also be formed by two abutting walls built at different times. The term can be also used to describe a division between separate units within a multi-unit apartment complex. Very often the wall in this case is non-structural but designed to meet established criteria for sound and/or fire protection, i.e. a firewall.
England and Wales
While party walls are effectively in common ownership of two or more immediately adjacent owners, there are various possibilities for legal ownership: the wall may belong to both tenants (in common), to one tenant or the other, or partly to one, partly to the other. In cases where the ownership is not shared, both parties have use of the wall, if not ownership. Other party structures can exist, such as floors dividing flats or apartments.
Apart from special statutory definitions, the term "Party Wall" may be used in four different legal senses.
It may mean:
a wall of which the adjoining owners are tenants in common;
a wall divided longitudinally into two strips, one belonging to each of the neighbouring owners;
a wall which belongs entirely to one of the adjoining owners, but is subject to an easement or right in the other to have it maintained as a dividing wall between the two tenements;
a wall divided longitudinally into two moieties, each moiety being subject to a cross easement, in favour of the owner of the other moiety.
In English law a party wall does not fix the boundary at its median point: there are instances where the legal boundary between adjoining lands actually runs along one face or the other of the wall, or part of it, and sometimes at some odd measurement within the thickness of the wall. The legal position is, however, clear insofar as a party using or benefiting from a party wall or structure abutting, on or in its land has rights to use the wall and to have it retained should the other side no longer wish it to be there. For this reason, expert surveyors are generally used to issue notices, deal with the response from someone receiving a notice, and settle any dispute by an award. Details can be obtained from the Royal Institution of Chartered Surveyors.
Originating in London as early as the 11th century, requirements for terraced houses to have a dividing wall substantially capable of acting as a fire break have been applied in some form or other. Evidently, this was not enough to prevent the several great fires of London, the most famous of which was the Great Fire of 1666.
In England and Wales, the Party Wall etc. Act 1996, confers rights on those whose property adjoins a party wall or other 'party structure' irrespective of ownership of the wall or structure.
Paris
The principles of the party wall in Paris under the common law in 1765 are the following:
If a man when building his home does not leave a sufficient space on his property, he cannot prevent his wall from becoming a party wall with his neighbor, who may build his own home against the wall, paying half the cost of the materials and of the land that the wall resides on.
Nothing can be done to the party wall without legal consent of both neighboring parties.
Repairs to the wall are split equally between the two neighboring parties.
Neither party may place beams (of a home) deeper into the wall than one-half of the wall's thickness. If a party does wish to install beams deeper than one-half of the wall's thickness, then supports such as jambs or chains must also be added in order to carry the beams.
United States
In the United States, the term may refer to a fire wall that separates and is used in common by two adjoining buildings (condominium, row house), or the wall between two adjacent commercial buildings that were often built using common walls, or built walls onto existing walls. Rights and obligations are governed by state statutes, and common law.
The wall starts at the foundation and continues up to a parapet, creating two separate and structurally independent buildings on either side.
See also
Right to light
Architectural acoustics
Property law
Semi-detached housing
Condominium
Partition wall
References
Party wall guidance Royal Institution of Chartered Surveyors (RICS)
"Paris Party Wall", The Encyclopedia of Diderot & d'Alembert Collaborative Translation Project
Surveying
Types of wall | Party wall | [
"Engineering"
] | 954 | [
"Structural engineering",
"Surveying",
"Civil engineering",
"Types of wall"
] |
3,084,444 | https://en.wikipedia.org/wiki/Macroglobulin | Macroglobulins are large globular proteins and are found in the blood and other body fluids. Various physiological processes, including immunity, coagulation, and chemical transport, rely on these proteins. A macroglobulin is a plasma globulin of high molecular weight. Elevated levels of macroglobulins (macroglobulinemia) may cause manifestations of excess blood viscosity (as is the case for IgM antibodies in Waldenström macroglobulinemia) and/or precipitate within blood vessels when temperature drops (as in cryoglobulinaemia). Other macroglobulins include α2-macroglobulin, which is elevated in nephrotic syndrome, diabetes, severe burns, and other conditions, while a deficiency is associated with chronic obstructive pulmonary disease.
Structure
Macroglobulins range in molecular weight from 400,000 to 720,000 daltons. They are made up of four distinguishing subunits that each possess multiple domains. Disulfide bonds and non-covalent interactions allow the subunits to stay together. One zinc atom also occupies a position in each subunit, helping to maintain the stability of the tetramers. The macroglobulin tetramers can be irreversibly dissociated into dimers by metal counterions as well as chaotropic substances like urea and guanidine hydrochloride. Despite being similar in structure to immunoglobulins (Ig), macroglobulins have a distinct Y-shaped conformation. A sequence of 16 amino acids at the C-terminus of each subunit provides a highly polar, hydrophobic, binding site that can bind any sterically accessible molecule.
Types of macroglobulins, with an emphasis on alpha-2 macroglobulin
There are four main types of macroglobulin: alpha-2 macroglobulin, beta-2 macroglobulin, pregnancy-associated plasma protein-A, and complement component C4-binding protein. Alpha-2 macroglobulin is the most studied. It is a notable plasma component with a molecular mass of 820 kDa, present at about 300 mg/100 ml, and carrying around 10% carbohydrate in 31 glycans. Alpha-2 macroglobulin is a tetrameric protein, meaning that it is made up of four identical subunits, each containing 1451 amino acid residues. Each subunit has several domains, including a bait region that engages proteases and a receptor-binding domain that interacts with surface receptors on cells. Alpha-2 macroglobulin is important in regulating protease activity in the blood. No full oligosaccharide structures are known. Alpha-2 macroglobulin is also the largest non-immunoglobulin molecule found among the several highly abundant proteins in peripheral blood circulation.
Endothelial cells and hepatocytes interact to produce alpha-2 macroglobulin, which resides mostly in the liver. A wide range of serine, threonine, and metalloproteases, as well as pro-inflammatory cytokines, can be inhibited by alpha-2 macroglobulin. Additionally, it can activate a number of genes involved in cell oncogenesis, atherosclerosis, and proliferation/hypertrophy.
Function of alpha-2 macroglobulin
In the plasma and tissues of vertebrates, alpha-2 macroglobulin and similar proteins act as humoral defensive barriers against pathogens by binding onto host or foreign peptides and particles. Human alpha-2 macroglobulin can facilitate the reversible or irreversible capture of proteins with a multitude of biological activities via reactive sites generated by activated molecules, such as transglutaminase cross-linking sites, thioester sites, and high-affinity zinc sites. The fact that alpha-2 macroglobulin interacts with and engulfs nearly any proteinase it comes across, whether native or foreign, suggests that it has been assigned a special role as a "panproteinase inhibitor". De novo binding sites are then created by the activation of alpha-2 macroglobulin and are used to facilitate and organize the formation of complexes with cytokines and other peptides. The direct physical interaction of cytokines with activated alpha-2 macroglobulin in cell cultures suggests that it functions as a biological response modulator.
Evolutionary History
The earliest known members of the macroglobulin family of proteins initially emerged roughly 500–700 million years ago. Presently, members of the macroglobulin family have been identified in crustaceans, molluscs, fish, amphibians, reptiles, ticks, insects, birds, and mammals. The blood of some species shows that multiple members of the macroglobulin family have different molecular weights and partially redundant functions. Today, macroglobulins can be found as monomers, dimers, or tetramers in a large range of species. Every protein member can be characterized by having a "trap", which is composed of a cyclic thioether on the bottom and a sizable hydrophobic region. Each representative can perform a variety of regulatory tasks, since complexes can be built with various regulatory chemicals via covalent or hydrophobic interactions. The macroglobulin family of proteins can be referred to as the primary regulating macromolecules of organism fluid media because of their lengthy evolutionary history, broad distribution, inherent conservation, and variety of regulatory roles. The family is most expanded in mammals. Rats are a particularly interesting case: the alpha-2 macroglobulin that predominates in rats differs from human alpha-2 macroglobulin by having an extra disulfide bond within the subunit; as a result, it is effectively made up of eight subunits. Fish are the first group in which two macroglobulins with identical qualities occur, both represented by tetramer forms: one is found exclusively in blood and the other only in eggs. This can be explained by the divergence of the gene's ancestor and the direct linkage of the egg alpha-2 macroglobulin gene to the reproductive process, which necessitates the mobilization of proliferative components. It should be emphasized that complement system proteins first appeared in fish and are structurally and functionally similar to related human proteins.
Waldenström Macroglobulinemia
Waldenström macroglobulinemia is a slow, silent disease that typically develops in people who are around 65 or older, male, Caucasian, and with a family history of lymphoma. The condition is called Waldenström macroglobulinemia because the abnormal cells generate excessive levels of IgM, the largest immunoglobulin protein, and this excess is responsible for many manifestations of the condition. Interestingly, some affected people will exhibit these elevated IgM and lymphoplasmacytic cell levels but display no symptoms of the disease; in these people, the illness is typically discovered by chance after a blood test that was performed for an entirely different medical reason. Smoldering Waldenström macroglobulinemia is the diagnosis for these people who remain asymptomatic. Before a person with the illness exhibits observable signs and symptoms, it can take many years. Waldenström macroglobulinemia is an uncommon cancer of the blood cells that is distinguished by the proliferation of abnormal white blood cells within the bone marrow. These aberrant cells resemble both B cells, which are white blood cells also known as lymphocytes, and plasma cells, which are B cells that have undergone a further stage of development. The term "lymphoplasmacytic cells" refers to these irregular cells that display both lymphocyte and plasma-cell features. Waldenström macroglobulinemia is categorized as a lymphoplasmacytic lymphoma as a result of these cells.
References
External links
Blood proteins | Macroglobulin | [
"Chemistry"
] | 1,707 | [
"Biochemistry stubs",
"Protein stubs"
] |
3,084,650 | https://en.wikipedia.org/wiki/Plated%20ware | Plated ware refers to articles chiefly intended for tableware consisting of a base metal or alloy covered by one of the precious metals, with the object of giving them the appearance of gold or silver. Historically, the standard amount of precious metal used was an ounce of silver per square foot of surface area (roughly 28 g per 930 cm2). Although items hand-plated with metal leaf date back to ancient times, large scale production dates to 1742 when Thomas Boulsover, of Sheffield, England developed a process by which silver plates were fused to base metal (generally copper) ingots by heating them in a furnace with borax. The ingots were then rolled down to a sheet, and from these sheets silver-plated articles were made.
Large articles such as dish covers were originally only silver-plated on one side, and after being worked into shape were tinned inside. The process varied regionally; in the West Midlands, bar-copper was the base metal used, which when bare of silver appeared dark red, whilst in Sheffield copper mixed with brass (an alloy of copper and zinc) was used. The Sheffield process resulted in a harder and stronger end product ("Sheffield plate") and was consequently more popular, and Sheffield became the world's leading producer of metal tableware and cutlery. Following John Wright and George Elkington's development of commercial electroplating in 1840 (the process still in use today), the traditional method of production fell into rapid decline, although it continues to be used for some items subject to very heavy wear (notably buttons).
References
Serving and dining
Metal plating | Plated ware | [
"Chemistry"
] | 330 | [
"Metallurgical processes",
"Coatings",
"Metal plating"
] |
3,084,709 | https://en.wikipedia.org/wiki/Semi-differentiability | In calculus, the notions of one-sided differentiability and semi-differentiability of a real-valued function f of a real variable are weaker than differentiability. Specifically, the function f is said to be right differentiable at a point a if, roughly speaking, a derivative can be defined as the function's argument x moves to a from the right, and left differentiable at a if the derivative can be defined as x moves to a from the left.
One-dimensional case
In mathematics, a left derivative and a right derivative are derivatives (rates of change of a function) defined for movement in one direction only (left or right; that is, to lower or higher values) by the argument of a function.
Definitions
Let f denote a real-valued function defined on a subset I of the real numbers.
If $a \in I$ is a limit point of $I \cap [a, \infty)$ and the one-sided limit

$$\partial_+ f(a) = \lim_{\substack{x \to a^+ \\ x \in I}} \frac{f(x) - f(a)}{x - a}$$

exists as a real number, then f is called right differentiable at a and the limit $\partial_+ f(a)$ is called the right derivative of f at a.

If $a \in I$ is a limit point of $I \cap (-\infty, a]$ and the one-sided limit

$$\partial_- f(a) = \lim_{\substack{x \to a^- \\ x \in I}} \frac{f(x) - f(a)}{x - a}$$

exists as a real number, then f is called left differentiable at a and the limit $\partial_- f(a)$ is called the left derivative of f at a.

If $a \in I$ is a limit point of both $I \cap [a, \infty)$ and $I \cap (-\infty, a]$, and if f is left and right differentiable at a, then f is called semi-differentiable at a.
If the left and right derivatives are equal, then they have the same value as the usual ("bidirectional") derivative. One can also define a symmetric derivative, which equals the arithmetic mean of the left and right derivatives (when they both exist), so the symmetric derivative may exist when the usual derivative does not.
Remarks and examples
A function is differentiable at an interior point a of its domain if and only if it is semi-differentiable at a and the left derivative is equal to the right derivative.
An example of a semi-differentiable function, which is not differentiable, is the absolute value function $f(x) = |x|$, at a = 0. We easily find $\partial_- f(0) = -1$ and $\partial_+ f(0) = 1$.
If a function is semi-differentiable at a point a, then it is continuous at a.
The indicator function 1[0,∞) is right differentiable at every real a, but discontinuous at zero (note that this indicator function is not left differentiable at zero).
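One-sided derivatives are easy to probe numerically with one-sided difference quotients. The following illustrative sketch reproduces the absolute-value and indicator-function examples above:

```python
# Numerical one-sided difference quotients: |x| has left derivative -1 and
# right derivative +1 at 0, and the indicator 1_[0, inf) is right
# differentiable at 0 but not left differentiable there.
def right_derivative(f, a, h=1e-8):
    return (f(a + h) - f(a)) / h

def left_derivative(f, a, h=1e-8):
    return (f(a) - f(a - h)) / h

print(left_derivative(abs, 0.0), right_derivative(abs, 0.0))   # -1.0  1.0

indicator = lambda x: 1.0 if x >= 0 else 0.0
print(right_derivative(indicator, 0.0))  # 0.0: the right derivative exists
print(left_derivative(indicator, 0.0))   # ~1e8: the quotient diverges as h -> 0
```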
Application
If a real-valued, differentiable function f, defined on an interval I of the real line, has zero derivative everywhere, then it is constant, as an application of the mean value theorem shows. The assumption of differentiability can be weakened to continuity and one-sided differentiability of f. The version for right differentiable functions is given below; the version for left differentiable functions is analogous.

Theorem: Let f be a real-valued, continuous function defined on an arbitrary interval I of the real line. If f is right differentiable at every point $a \in I$ which is not the supremum of the interval, and if this right derivative is always zero, then f is constant.
Differential operators acting to the left or the right
Another common use is to describe derivatives treated as binary operators in infix notation, in which the derivatives are to be applied either to the left or right operands. This is useful, for example, when defining generalizations of the Poisson bracket. For a pair of functions f and g, the left and right derivatives are respectively defined as

$$f \overleftarrow{\partial_x} g = (\partial_x f)\, g, \qquad f \overrightarrow{\partial_x} g = f\, (\partial_x g).$$

In bra–ket notation, the derivative operator can act on the right operand as the regular derivative or on the left as the negative derivative.
Higher-dimensional case
This above definition can be generalized to real-valued functions f defined on subsets of Rn using a weaker version of the directional derivative. Let a be an interior point of the domain of f. Then f is called semi-differentiable at the point a if for every direction u ∈ Rn the limit

$$\partial_u f(a) = \lim_{h \to 0^+} \frac{f(a + h\,u) - f(a)}{h}$$

with $h \in \mathbb{R}$ exists as a real number.
Semi-differentiability is thus weaker than Gateaux differentiability, for which one takes in the limit above h → 0 without restricting h to only positive values.
For example, the Euclidean norm $f(x, y) = \sqrt{x^2 + y^2}$ is semi-differentiable at $(0, 0)$, but not Gateaux differentiable there. Indeed, for every direction $u = (u_1, u_2)$,

$$\lim_{h \to 0^+} \frac{f(h\,u) - f(0, 0)}{h} = \sqrt{u_1^2 + u_2^2},$$

which is not linear in $u$, so no linear map can serve as the Gateaux derivative.
(Note that this generalization is not equivalent to the original definition for n = 1 since the concept of one-sided limit points is replaced with the stronger concept of interior points.)
Properties
Any convex function on a convex open subset of Rn is semi-differentiable.
While every semi-differentiable function of one variable is continuous, this is no longer true for several variables.
Generalization
Instead of real-valued functions, one can consider functions taking values in Rn or in a Banach space.
See also
Derivative
Directional derivative
Partial derivative
Gradient
Gateaux derivative
Fréchet derivative
Derivative (generalizations)
Dini derivatives
References
Real analysis
Differential calculus
Articles containing proofs
Functions and mappings | Semi-differentiability | [
"Mathematics"
] | 947 | [
"Mathematical analysis",
"Functions and mappings",
"Calculus",
"Mathematical objects",
"Mathematical relations",
"Differential calculus",
"Articles containing proofs"
] |
3,084,771 | https://en.wikipedia.org/wiki/Electric%20bicycle%20laws | Many countries have enacted electric vehicle laws to regulate the use of electric bicycles, also termed e-bikes. Some jurisdictions have regulations governing safety requirements and standards of manufacture. The members of the European Union and other regions have wider-ranging legislation covering use and safety.
Laws and terminology are diverse. Some countries have national regulations with additional regional regulations for each state, province, or municipality. Systems of classification and nomenclature may vary. Jurisdictions may address "power-assisted bicycles" (Canada) or "electric pedal-assisted cycles" (European Union and United Kingdom) or simply "electric bicycles". Some classify pedelecs as being distinct from other bicycles using electric power. Consequently, any particular e-bike may be subject to different classifications and regulations in different jurisdictions.
Australia
In Australia, the e-bike is defined by the Australian Vehicle Standards as a bicycle that has an auxiliary motor with a maximum power output not exceeding 250 W without consideration for speed limits or pedal sensors. Each state is responsible for deciding how to treat such a vehicle and currently all states agree that such a vehicle does not require licensing or registration. Some states have their own rules such as no riding under electric power on bike paths and through built-up areas so riders should view the state laws regarding their use. There is no license and no registration required for e-bike use.
Since 30 May 2012, Australia has an additional new e-bike category using the European Union model of a pedelec as per the CE EN15194 standard. This means the e-bike can have a motor of 250 W of continuous rated power which can only be activated by pedalling (if above 6 km/h) and must cut out over 25 km/h – if so it is classed as a normal bicycle. The state of Victoria is the first to amend its local road rules, see below.
Road vehicles in Australia must comply with all applicable Australian Design Rules (ADRs) before they can be supplied to the market for use in transport (Motor Vehicle Standards Act 1989 Cwth).
The ADRs contain the following definitions for bicycles and mopeds:
4.2. Two-Wheeled and Three-Wheeled Vehicles
4.2.1. PEDAL CYCLE (AA)
A vehicle designed to be propelled through a mechanism solely by human power.
4.2.2. POWER-ASSISTED PEDAL CYCLE (AB)
A pedal cycle to which is attached one or more auxiliary propulsion motors having a combined maximum power output not exceeding 200 watts.
4.2.3. MOPED - 2 Wheels (LA)
A 2-wheeled motor vehicle, not being a power-assisted pedal cycle, with an engine cylinder capacity not exceeding 50 ml and a "Maximum Motor Cycle Speed" not exceeding 50 km/h; or a 2-wheeled motor vehicle with a power source other than a piston engine and a "Maximum Motor Cycle Speed" not exceeding 50 km/h.
(Vehicle Standard (Australian Design Rule – Definitions and Vehicle Categories 2005 Compilation 3 19 September 2007).
There are no ADRs applicable to AA or AB category vehicles. There are ADRs for lighting, braking, noise, controls and dimensions for LA category vehicles, mostly referencing the equivalent UN ECE Regulations. An approval is required to supply to the market any road vehicle to which ADRs apply and an import approval is required to import any road vehicle into Australia.
New South Wales
In New South Wales, there are two types of power-assisted pedal cycle. For the first type, the electric motor's maximum power output must not exceed 200 watts, and the pedal cycle cannot be propelled exclusively by the motor. For the second type, known as a "pedalec", the vehicle must comply with the European Standard for Power Assisted Pedal Cycles (EN15194).
Since October 2014 all petrol powered cycles are explicitly banned.
Since February 2023 pedelec bikes in NSW can have a 500 watt motor.
Victoria
A bicycle designed to be propelled by human power using pedals may have an electric or petrol powered motor attached provided the motor's maximum power output does not exceed 200 watts.
As of 18 September 2012, the Victorian road rules have changed to enable a pedelec to be used as a bicycle in Victoria. The change allows more types of power assisted pedal cycles under bicycle laws.
A pedelec is defined as meeting EU standard EN15194, has a motor of no more than 250w of continuous rated power and which is only to be activated by pedalling when travelling at speeds of between 6 km/h and 25 km/h.
Queensland
In Queensland, the situation is similar to Victoria. There are two types of legal motorised bicycle. For the first type, the electric motor must not be capable of generating more than 200 watts of power. For the second type, known as a "pedalec", the vehicle must comply with the European Standard for Power Assisted Pedal Cycles (EN15194).
The pedals on a motorised bicycle must be the primary source of power for the vehicle. If the motor is the primary source of power then the device cannot be classed as a motorised bicycle. For example, a device where the rider can twist a throttle and complete a journey using motor power only without using the pedals, would not be classed as a motorised bicycle.
Motorised bicycles can be ridden on all roads and paths, except where bicycles are specifically excluded. Riders do not need to have a driver licence to ride a motorised bicycle.
Canada
Eight provinces of Canada allow electric power-assisted bicycles. In all eight provinces, e-bikes are limited to 500 W output, and cannot travel faster than 32 km/h on motor power alone on level ground. In Alberta prior to July 1, 2009, the power limit was higher at 750 W, but the limits presently match federal legislation. Age restrictions vary in Canada. All require an approved helmet. Regulations may or may not require an interlock to prevent the use of power when the rider is not pedaling. Some versions (e.g., if capable of operating without pedaling) of e-bikes require drivers' licenses in some provinces and have age restrictions. Vehicle licenses and liability insurance are not required. Generally, they are considered vehicles (like motorcycles and pedal cycles), so are subject to the same rules of the road as regular bicycles. In some cases, regulatory requirements have been complicated by lobbying in respect of the Segway PT.
Bicycles assisted by a gasoline motor or other fuel are regulated differently from e-bikes. These are classified as motorcycles, regardless of the power output of the motor and maximum attainable speed.
Note that in Canada, the term "assist bicycle" is the technical term for an e-bike and "power-assisted bicycle" is used in the Canadian Federal Legislation, but is carefully defined to only apply to electric motor assist, and specifically excludes internal combustion engines (though this is not the case in the United States).
Federal requirements
Since 2000, Canada's Motor Vehicle Safety Regulations (MVSR) have defined Power Assisted bicycles (PABs) as a separate category, and which require no license to operate. PABs are currently defined as a two- or three-wheeled bicycle equipped with handlebars and operable pedals, an attached electric motor of 500W or less, and a maximum speed capability of 32 km/h from the motor over level ground. Other requirements include a permanently affixed label from the manufacturer in a conspicuous location stating the vehicle is a power-assisted bicycle under the statutory requirements in force at the time of manufacture. All power-assisted bicycles must utilize an electric motor for assisted propulsion.
A power-assisted bicycle may be imported and exported freely within Canada without the same restrictions placed on automobiles or a moped. Under federal law, power-assisted bicycles may be restricted from operation on some roads, lanes, paths, or thoroughfares by the local municipality.
Bicycle-style PABs are permitted on National Capital Commission's (NCC) Capital Pathway network, but scooter-style PABs are prohibited. All PABs (bicycle- and scooter-style) are permitted on dedicated NCC bike lanes. All PABs are prohibited in Gatineau Park's natural surface trails.
Provincial requirements for use
Alberta
Alberta identifies e-bikes as "power bicycles" and is consistent with the federal definition of "power-assisted bicycle" in MVSR CRC, c 1038 s 2. Motor output must not exceed 500 W and e-bikes cannot travel faster than 32 km/h. Fully operable pedals are required. No driver's license, vehicle insurance, or vehicle registration is required. Operators must be 12 years of age or older. All operators are required to wear a motorcycle helmet meeting the standards set in AR 122/2009 s 112(2). A passenger is permitted only if the e-bike is equipped with a seat designated for that passenger.
British Columbia
An e-bike is identified as a "motor-assisted cycle" (MAC) in British Columbia, which differs from electric mopeds and scooters, which are "limited-speed motorcycles". Motor-assisted cycles must: have an electric motor of no more than 500 W; have fully operable pedals; and not be capable of propelling the device at a speed greater than 32 km/h. The engine must disengage when (a) the operator stops pedaling, (b) an accelerator controller is released, or (c) a brake is applied. A driver's license, vehicle registration, and insurance are not required. Riders must be 16 years old or more, and a bike helmet must be worn.
E-bikes in British Columbia must comply with all standards outlined in Motor Assisted Cycle Regulation, BC Reg 151/2002.
Ontario
Ontario is one of the last provinces in Canada to move toward legalizing power-assisted bicycles (PABs) for use on roads, even though they have been federally defined and legal in Canada since early 2001. In November 2005, "Bill 169" received royal assent allowing the Ministry of Transportation of Ontario (MTO) to place any vehicle on road. On October 4, 2006, the Minister of Transportation for Ontario Donna Cansfield announced the Pilot Project allowing PABs which meet the federal standards definition for operation on road. PAB riders must follow the rules and regulations of a regular bicycles, wear an approved bicycle helmet and be at least 16 years or older. There are still a number of legal considerations for operating any bicycle in Ontario.
On October 5, 2009, the Government of Ontario brought in laws regulating electric bikes in the province.
E-bikes, which can reach a speed of 32 kilometres per hour, are allowed to share the road with cars, pedestrians and other traffic throughout the province.
The new rules limit the maximum weight of an e-bike to 120 kilograms, require a maximum braking distance of nine metres and prohibit any modifications to the bike's motor that would create speeds greater than 32 kilometres per hour.
Also, riders must be at least 16 years of age, wear approved bicycle or motorcycle helmets and follow the same traffic laws as bicyclists.
Municipalities are also specifically permitted by the legislation to restrict where e-bikes may be used on their streets, bike lanes and trails, as well as restricting certain types of e-bike (e.g. banning "scooter-style" e-bikes from bicycle trails).
E-bikes are not permitted on 400-series highways, expressways or other areas where bicycles are not allowed.
Riding an e-bike under the age of 16 or riding an e-bike without an approved helmet are new offences in the legislation, carrying fines of between $60 and $500. E-bike riders are subject to the same penalties as other cyclists for all other traffic offences.
Manitoba
E-bikes are legal in Manitoba, so long as certain stipulations are met: the bike must not be designed to have more than three wheels touching the ground; the motor must stop providing motive power if the bike exceeds 32 km/h for any reason; the motor must be smaller than 500 W; the bike must have functioning pedals; if the motor is engaged by a throttle, it must immediately stop providing motive power when the driver activates a brake; and if it is engaged by the driver applying muscle power to the pedals, it must immediately stop providing motive power when the driver stops pedalling. The bike must also have either a mechanism to turn the electric motor on and off that can be operated by the driver and, if the vehicle has a throttle, is separate from the throttle, or a mechanism that prevents the motor from engaging until the vehicle is travelling at 3 km/h or more. The rider must be at least 14 years of age to operate an e-bike. All other Manitoba laws regarding cycling also apply.
New Brunswick
To be allowed on the road, a bicycle needs wheel rims larger than 9 inches, a headlight for night riding, and a seat at least 27 inches off the ground.
New Brunswick's Policy on Electric Motor Driven Cycles and Electric Bicycles
The Registrar will permit an electric motor driven cycle to be registered if it meets Canada Motor Vehicle Safety Standards (CMVSS) as a Limited Speed Motorcycle, or Scooter as is done with gas powered motor driven cycles. If the vehicle was manufactured after 1988, it will bear a compliance label stating that it meets these standards. The operator will be subject to all the requirements placed on operators of motor driven cycles.
If the vehicle is able to be powered by human force and has a motor of 500 W or less, and the motor is not capable of assisting when the vehicle is traveling at a speed greater than 32 km/h, then it can be considered a bicycle and all the requirements placed on bicyclists are applicable.
It is important to note that a vehicle with an electric motor greater than 500 watts that is capable of powering the vehicle at a speed greater than 32 km/h, and that does not have a CMVSS compliance label, cannot be registered unless the owner can prove, by having the vehicle certified by an engineer, that it is safe for operation on NB highways. Also, not all vehicles are suitable for operation on NB highways; it may be that the vehicle in question is neither a motor driven cycle nor a bicycle and cannot be operated on the highway at all.
Power Assisted Bicycle Label:
Manufacturers of e-bikes must permanently affix a label, in a conspicuous location, stating in both official languages that the vehicle is a power-assisted bicycle as defined in the regulations under the federal Motor Vehicle Safety Act. Homemade e-bikes will not have this label.
NOTE 1: The previous version of the policy had a section on it needing to "look like a bike" or a "bike style frame" but never defined what those were. That has been dropped and is no longer part of the new policy.
NOTE 2: The top speed of the bike if propelled by human power is the posted speed limit, but the motor is only allowed to get up to and keep at 32 km/h. If the posted limit is under 32 then the posted limit is the limit allowed.
NOTE 3: There is no maximum weight limit.
NOTE 4: Ebikes are allowed to use cargo trailers/kid trailers.
NOTE 5: There is no minimum age set.
NOTE 6: DUI – If you have a DUI conviction the restrictions of the DUI override the ebike policy definition of an ebike as a bicycle and put it into the motor vehicle category.
Newfoundland
Nova Scotia
In Nova Scotia power-assisted bicycles are classified similarly to standard pedal bicycles. The Nova Scotia Motor Vehicle Act defines a power-assisted bicycle as a bicycle with an electric motor of 500 watts or less, with two wheels (one of which is at least 350 mm) or four wheels (two of which are at least 350mm). PABs are permitted on the road in the province of Nova Scotia as long as you wear an approved bicycle helmet with the chinstrap engaged. They do not have to meet the conditions defined within the Canadian Motor Vehicle Safety Regulations for a motorcycle (they are not classed as "motor vehicles"), but they do have to comply with federal regulations that define Power Assisted Bicycles.
Prince Edward Island
E-bikes are treated as mopeds and will need to pass inspection as a moped.
Quebec
In Quebec power-assisted bicycles are often classified similarly to standard pedal bicycles. They do not have to meet the conditions defined within the Canadian Motor Vehicle Safety Regulations (they are not classed as "motor vehicles"), but they do have to comply with federal regulations that define Power Assisted Bicycles. The Quebec Highway Safety Code defines a power-assisted bicycle as a bicycle (2 or 3 wheels that touch the ground) with an electric motor with a maximum power of 500W and a top speed of 32 km/h bearing a specific compliance label permanently attached by the manufacturer. PABs are permitted on the road in the province of Quebec, but riders have to be 14 and over to ride the electric bicycle and if they are under the age of 18, must have a moped or scooter license.
Saskatchewan
Power assisted bicycles are classified in two categories in Saskatchewan. An electric assist bicycle is a 2 or 3-wheeled bicycle that uses pedals and a motor at the same time only. A power cycle uses either pedals and motor or motor only. Both must have motors of 500 watts or less, and must not be able to exceed 32 km/h, i.e., the electric motor cuts out at this speed or the cycle is unable to go this fast on a level surface. The power cycle has to meet the Canadian Motor Vehicle Safety Standards (CMVSS) for a power-assisted bicycle. The power cycle requires at least a learner's driving licence (class 7), and all of the other classes 1–5 may operate these also. The electric assist bicycle does not require a licence. Helmets are required for each. Both are treated as bicycles regarding rules of the road. Gas powered or assisted bicycles are classified as motorcycles regardless of engine size or if using pedals plus motor. Stickers identifying the bicycle's compliance with the Federal classification may be required for power cycles by some cities or municipalities.
China
Mainland
In China, e-bikes currently come under the same classification as bicycles and hence do not require a driver's license to operate. Previously users were required to register their bike so that it could be recovered if stolen, although this requirement has recently been abolished. Due to a recent rise in electric-bicycle-related accidents, caused mostly by inexperienced riders who ride on the wrong side of the road, run red lights, do not use headlights at night etc., the Chinese government plans to change the legal status of electric bicycles so that vehicles above a specified unladen weight or with a top speed of 30 km/h or more will require a motorcycle license to operate, while lighter vehicles slower than 30 km/h can be ridden unlicensed. In the southern Chinese cities of Guangzhou, Dongguan and Shenzhen, e-bikes, like all motorcycles, are banned from certain downtown districts. There are also bans in place in small areas of Shanghai, Hangzhou and Beijing. Bans of "Scooter-Style Electric Bikes" (SSEB) were however cancelled and in Shenzhen e-bikes may be seen on the streets nowadays (2010–11).
Electric powered bicycles slower than 20 km/h without pedaling are legally recognized as a non-mechanically operated vehicle in China.
According to "TECHNOLOGY WATCH", this should help promote its widespread use. Electric bicycles were banned in some areas of Beijing from August 2002 to January 2006 due to concerns over environmental, safety and city image issues. Beijing has re-allowed use of approved electric bicycles as of January 4, 2006. Some cities in China still ban electric bikes.
Hong Kong
Hong Kong has independent traffic laws from mainland China.
Electric bikes are considered motorcycles in Hong Kong, and therefore need type approval from the Transport Department, just as automobiles. All electric bikes available in Hong Kong fail to meet the type approval requirement, and the Transport Department has never granted any type approval for an electric bike, making all electric bikes effectively illegal in Hong Kong. Even if they got type approval, the driver would need a motorcycle driving licence to ride. As a side note, Hong Kong does not have a moped vehicle class (and therefore no moped driving license), and mopeds are considered motorcycles too.
Electric bicycles are not allowed in any public area, meaning an area where there is full or partial public access. Any kind of pedal assist, electric bike, scooter, moped or vehicle which has any form of propulsion, whether in full or as assist, other than human power, must be approved as either a car, motorcycle, van, truck, bus or similar. This makes pedelecs and tilt-controlled two-wheel personal vehicles illegal in all practical ways, as they cannot be registered as motorcycles.
Europe
European Union definition
Regulation (EU) No 168/2013 of the European Parliament and of the Council, which replaced 2002/24/EC on 1 January 2016 but is substantially the same, exempts vehicles with the following definition from the requirement for type approval: "pedal cycles with pedal assistance which are equipped with an auxiliary electric motor having a maximum continuous rated power of less than or equal to 250 W, where the output of the motor is cut off when the cyclist stops pedalling and is otherwise progressively reduced and finally cut off before the vehicle speed reaches 25 km/h". This is the de facto definition of an electrically assisted pedal cycle in the EU. As with all EU directives, individual member countries of the EU are left to implement the requirements in national legislation. The EU specification does not require a helmet to be worn when riding this class of bicycle.
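For illustration only, the assistance rule in this definition can be modelled as a small control function; the 20 km/h taper point below is an assumption, since the regulation only requires that power be "progressively reduced" before the 25 km/h cut-off:

```python
# A toy sketch of the assistance rule in the EU pedelec definition; not taken
# from any standard or real motor controller.
MAX_POWER_W = 250.0
CUTOFF_KMH = 25.0
TAPER_START_KMH = 20.0  # assumed start of the taper band

def assist_power(speed_kmh: float, pedalling: bool) -> float:
    """Motor power allowed: zero without pedalling or at/above the cut-off."""
    if not pedalling or speed_kmh >= CUTOFF_KMH:
        return 0.0
    if speed_kmh <= TAPER_START_KMH:
        return MAX_POWER_W
    # linear taper between 20 km/h and 25 km/h
    return MAX_POWER_W * (CUTOFF_KMH - speed_kmh) / (CUTOFF_KMH - TAPER_START_KMH)

assert assist_power(10.0, True) == 250.0
assert assist_power(26.0, True) == 0.0
assert assist_power(10.0, False) == 0.0
```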
European product safety standard EN 15194 was published in 2009. The aim of EN 15194 is "to provide a standard for the assessment of electrically powered cycles of a type which are excluded from type approval by Directive 2002/24/EC".
National requirements
Belgium
In Belgium, technical laws passed on 09/09/2016 and 17/11/2017 allow for three types of e-bikes:
250 W 25 km/h limited "e-bikes" for all ages without a helmet.
1000 W 25 km/h limited "motorized-bikes", over 16 years, with conformity certificate, without a helmet.
4000 W 45 km/h limited "speed pedelecs", which are classed as mopeds for all requirements.
Denmark
In Denmark, Parliament has decided to approve the speed pedelec – a type of super electric bike that can reach speeds of up to 45 km/h – for riding on cycle paths. The Danish Parliament has decided that as of 1 July 2018, those operating these bikes need only have turned 15 and wear an approved helmet, and must hold a license if between 15 and 18.
"The regulations for experimental schemes with Speed Pedelecs (45 km/h) and the rules for e-bikes (25 km/h) are as follows:
- A Speed Pedelec must be EU type-approved, and a physical type plate containing data from the approval must be present on the bicycle. It must not exceed 20 km/h without pedal assistance.
- An e-bike can achieve a maximum speed of 6 km/h without pedal assistance (referred to as the 'walk function') and a maximum of 25 km/h with pedal assistance. The walk function can be operated via a button or twist/gas handle.
- There is no specific requirement that an e-bike with a maximum 250 W motor must be EU type-approved. However, it must comply with the Machinery Directive and carry the CE mark.
- Regarding how the law applies to configuring different restrictions via a computer/display, there is no specific information available."
Finland
In Finland, bicycles meeting the European Union definition can be used without regulation. Bicycles with 250–1000 W electric motors, or which allow assistance without pedalling, are classified as L1e-A-class motorised bicycles under EU regulation; they must be insured for use on public roads and are limited to 25 km/h.
Latvia
In Latvia, the laws do not set any additional provisions specifically for electric bicycles other than defining a "bicycle" for the Road Traffic Law as a human-powered vehicle that may be equipped with an electric motor with power of no more than 250 W.
Norway
As a member of the European Economic Area (EEA), Norway implemented the European Union definition. As in the EU, pedelec e-bikes are classified as ordinary bicycles, according to the Vehicle Regulation (kjøretøyforskriften) § 4–1, 5g, not registered in the Vehicle Registry, and with no requirement for a driving license.
Sweden
Sweden uses the European Union definition, according to the Swedish Vehicle Regulation (Trafikverket).
Switzerland
Regulations updated in 2012 categorize electric-assisted pedal bikes as "light" motorized bicycles, usable without further regulation if their motor power does not exceed 500 W and their maximum speed is 25 km/h with pedal assistance (20 km/h without).
Switzerland (not an EU member) has more liberal standards for fast electric bicycles than most of Europe, with an easy process to obtain a license to use 45 km/h e-bikes.
Turkey
Laws are similar to those in the EU.
United Kingdom
Laws were amended in 2015 to match much of the EU regulation detail, including a 250W power limit and 25 km/h speed limit. A minimum rider age of 14 years old applies.
India
Indian law requires that all electric vehicles have ARAI approval. Vehicles with motors below 250 W and speeds under 25 km/h do not require certification and so skip the full testing process, but must obtain an exemption report from ARAI, whereas more powerful vehicles must complete the full testing process under CMVR rules. This takes time and costs money but assures a safe and reliable design for electric vehicles. Such vehicles are not regulated by the Regional Transport Offices, and riders are not required to obtain a license, carry insurance, or wear a helmet. In India, all-electric cycles which do not require a licence and registration are made in accordance with the guidelines issued by ARAI.
Israel
In Israel, persons above 16 years old are allowed to use pedal-assisted bicycles with power of up to 250 W and a speed limit of 25 km/h. The bicycle must satisfy the European Standard EN15194 and be approved by the Standards Institution of Israel. A new law, effective January 10, 2019, states that riders under 18 who have no automobile license will need a special permit. Otherwise, no license or insurance is required. Other motorized bicycles are considered to be motorcycles and should be licensed and insured as such. The maximum weight of the e-bike itself cannot exceed 30 kg.
The Israeli Ministry of Transportation passed legislation in 2009 and again in 2018. The 2018 law is effective from January 1, 2019, and regards a bicycle permit:
Israeli authorities passed legislation, as of December 2009, that allows electric bicycles to be legal for street use in the country under the following criteria:
The maximum power of the electric engine is not higher than 250W.
The electric motor is activated by the rider's pedalling effort and it has to cut out completely when the rider stops pedalling.
The electric motor power decreases with the advance of the bicycle speed and it must cut out completely whenever the bicycle reaches a speed of 25 km/h.
The electric bicycle has to comply with the European standard BS EN 15194.
Japan
In Japan, electric-assisted bicycles are treated as human-powered bicycles, while bicycles capable of propulsion by electric power alone face additional registration and regulatory requirements as mopeds. Requirements include electric power generation by a motor that cannot be easily modified, along with a power-assist mechanism that operates safely and smoothly. In December 2008, the maximum assist ratio (motor power relative to pedaling power) was updated as follows (see the sketch after this list):
Under 10 km/h: 2
10–24 km/h: 2 - (running speed - 10) / 7
Over 24 km/h: 0
(See Moped#Individual countries/regions)
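The piecewise assist-ratio limit above can be written directly as a function of speed. The following is a minimal illustrative sketch in Python; the function name and the example speed are illustrative, not part of the regulation:

```python
def max_assist_ratio(speed_kmh: float) -> float:
    """Maximum motor-to-pedal power ratio permitted in Japan (post-2008 rule).

    Up to twice the rider's power below 10 km/h; the cap then tapers
    linearly to zero at 24 km/h, above which no assistance is allowed.
    """
    if speed_kmh < 10:
        return 2.0
    if speed_kmh < 24:
        return 2.0 - (speed_kmh - 10) / 7
    return 0.0

# Example: at 17 km/h the motor may contribute at most 1x the rider's power.
assert max_assist_ratio(17) == 1.0
```

Note that the pieces meet continuously: the linear segment evaluates to 2 at 10 km/h and to 0 at 24 km/h.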
New Zealand
In New Zealand, the regulations read: "AB (Power-assisted pedal cycle) A pedal cycle to which is attached one or more auxiliary propulsion motors having a combined maximum power output not exceeding 300 watts." This is explained by NZTA as: "A power-assisted cycle is a cycle that has a motor of up to 300 watts. The law treats these as ordinary cycles rather than motorcycles. This means that it is not necessary to register or license them." Note that the phrase "maximum power output" found in the regulation (but omitted in the explanation) may create confusion, because some e-bike motor manufacturers advertise and print on the motor their "maximum input power", since that number is larger (motors typically run at about 80% efficiency), giving the impression that the buyer is getting a more powerful motor. This can cause misunderstandings with law enforcement officers who do not necessarily understand the difference and who, when stopping a rider on an e-bike, may look at the number printed on the motor to determine whether the e-bike is legal.
Vehicles with electric power of less than 300 W are classified as "not a motor vehicle". Such electric bicycles must comply with the same rules as bicycles, and a helmet must be worn even on a scooter or bike under 300 W. If the power is over 300 W or a combustion engine is used, it is a "low-powered vehicle" and the moped rules apply; specifically, a driver's license and registration are required.
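The labelling confusion described above is plain arithmetic: a motor advertised by its electrical input power delivers less mechanical output power. A rough sketch, assuming the roughly 80% motor efficiency mentioned above (actual efficiency varies by motor and load):

```python
def output_power_watts(input_power_watts: float, efficiency: float = 0.8) -> float:
    """Approximate mechanical output power from labelled electrical input power."""
    return input_power_watts * efficiency

# A motor labelled "350 W" (input) delivers roughly 280 W of output,
# which would still sit under New Zealand's 300 W output limit.
print(output_power_watts(350))  # 280.0
```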
Philippines
In the Philippines, the Land Transportation Office issued Memorandum Circular 721-2006 stating that registration is not needed for electric bicycles (i.e. electric motor assisted bicycles with working pedals) and even extended the exemption to "bicycle-like" vehicles.
Russian Federation
According to Russian law, bicycles can have electric motors with a nominal output power of 250 watts or less which automatically switch off at speeds above 25 km/h. No driver's license is required.
Comparison
[Comparison table of regulations by country/region; footnotes: (*) allowed on bike paths when electric systems are turned off; (**) e-bikes are illegal in this region.]
Comparison of US rules and regulations
Identity: How exactly does legislation identify the electric bicycle?
Type: How does the law define vehicle type?
Max Speed: Maximum speed when powered solely by the motor.
Max Power: Maximum motor power, or engine size, permitted.
Helmet: Is use of a helmet mandatory?
Minimum Age: Operator's minimum age.
Driver's License: Is a license or endorsement required for the driver?
United States
Federal laws and regulations on sales
The U.S. Consumer Product Safety Act states that electric bicycles and tricycles meeting the definition of low-speed electric bicycles will be considered consumer products. The Consumer Product Safety Commission (CPSC) has regulatory authority to assure, through guidelines and standards, that the public will be protected from unreasonable risks of injury or death associated with the use of electric bicycles.
In addition to federal and state electric bicycle regulations, people with certain mobility disabilities may be granted use of Class I and Class II electric bicycles per Title 28 Chapter 1 Part 36 at certain locations where electric bicycles are not normally permitted, so long as they can be used reasonably safely.
Defined
The federal Consumer Product Safety Act defines a "low-speed electric bicycle" as a two- or three-wheeled vehicle with fully operable pedals, a top speed when powered solely by the motor of under 20 mph, and an electric motor that produces less than 750 W. The Act authorizes the Consumer Product Safety Commission to protect people who ride low-speed electric vehicles by issuing necessary safety regulations. The rules for e-bikes on public roads, sidewalks, and pathways are under state jurisdiction and vary.
In conformance with legislation adopted by the U.S. Congress defining this category of electric-power bicycle (15 U.S.C. 2085(b)), CPSC rules stipulate that low-speed electric bicycles (including two- and three-wheel vehicles) are exempt from classification as motor vehicles provided they have fully operable pedals, an electric motor of less than 750 W, and a top motor-powered speed of less than 20 mph when operated by a rider weighing 170 pounds. An electric bike remaining within these specifications is subject to the CPSC consumer product regulations for a bicycle. Commercially manufactured e-bikes exceeding these power and speed limits are regulated by the federal DOT and NHTSA as motor vehicles and must meet additional safety requirements. The legislation enacting this amendment to the CPSC is also known as HR 727. The text of HR 727 includes the statement: "This section shall supersede any State law or requirement concerning low-speed electric bicycles to the extent that such State law or requirement is more stringent than the Federal law or requirements." (Note that this refers to consumer product regulations enacted under the Consumer Product Safety Act. Preemption of more stringent state consumer product regulations does not limit state authority to regulate the use of electric bicycles, or bicycles in general, under state vehicle codes.)
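The federal definition reduces to a handful of yes/no checks. A minimal sketch; the function and parameter names are illustrative, and real compliance also involves the 170-pound-rider test condition noted above:

```python
def is_federal_low_speed_ebike(wheels: int,
                               has_operable_pedals: bool,
                               motor_watts: float,
                               motor_only_top_speed_mph: float) -> bool:
    """Check the 15 U.S.C. 2085(b) definition of a low-speed electric bicycle:
    two or three wheels, fully operable pedals, a motor under 750 W, and a
    motor-only top speed under 20 mph (measured with a 170 lb rider)."""
    return (wheels in (2, 3)
            and has_operable_pedals
            and motor_watts < 750
            and motor_only_top_speed_mph < 20)

print(is_federal_low_speed_ebike(2, True, 749, 19.9))  # True: consumer product
print(is_federal_low_speed_ebike(2, True, 1000, 28))   # False: DOT/NHTSA territory
```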
State requirements for use
While Federal law governs consumer product regulations for "low-speed electric bicycles", as with motor vehicles and bicycles, regulation of how these products are used on public streets is subject to state vehicle codes. There is significant variation from state to state, as summarized below.
Alabama
Every bicycle with a motor attached is defined as a motor-driven cycle. The operation of a motor-driven cycle requires a class M driver license. Restricted class M driver licenses are available for those as young as 14 years of age.
Arizona
Under Arizona law, motorized electric bicycles and tricycles meeting the definition under the applicable statute are not subject to title, licensing, insurance, or registration requirements, and may be used upon any roadway authorized for use by conventional bicycles, including use in bike lanes integrated with motor vehicle roadways. Unless specifically prohibited, electric bicycles may be operated on multi-use trails designated for hiking, biking, equestrian, or other non-motorized use, and upon paths designated for the exclusive use of bicycles. No operator's license is required, but anyone operating a bicycle on Arizona roads must carry proof of identity. A "motorized electric bicycle or tricycle" is legally defined as a bicycle or tricycle that is equipped with a helper motor that may be self-propelled, which is operated at speeds of less than twenty miles per hour. Electric bicycles operated at speeds of twenty miles an hour or more, but less than twenty-five miles per hour may be registered for legal use on the roadways as mopeds, and above twenty-five miles per hour as a registered moped with an 'M' endorsement on the operator's driving license. However, mopeds in Arizona are prohibited from using bike lanes on motor vehicle roadways. The Arizona statute governing motorized electric bicycles does not prohibit local jurisdictions from adopting an ordinance that further regulates or prohibits the operation of motorized electric bicycles or tricycles.
Arkansas
Arkansas does not define e-bikes. The state defines a "motorized bicycle" as "a bicycle with an automatic transmission and a motor of less than 50cc"; this definition describes a combustion engine, and since e-bikes, being electric, have no cylinder capacity, the law is not technically applicable to them. Riders require either a certificate to operate a motorized bicycle, a motorcycle license, a motor-driven cycle license, or a license of class A, B, C or D. Certificates cannot be issued to riders under 10 years of age.
California
Electric Bicycles are defined by the California Vehicle Code.
New legislation became effective January 2016. The current regulations define an "electric bicycle" as a bicycle equipped with fully operable pedals and an electric motor of less than 750 watts, separated into three classes: a "class 1 electric bicycle" has a motor that provides assistance only when the rider is pedaling and that ceases to provide assistance when the bicycle reaches 20 miles per hour; a "class 2 electric bicycle" has a motor that may be used exclusively to propel the bicycle and that is not capable of providing assistance when the bicycle reaches 20 miles per hour; and a "class 3 electric bicycle" has a motor that provides assistance only when the rider is pedaling, ceases to provide assistance when the bicycle reaches 28 miles per hour, and is equipped with a speedometer.
Beginning January 1, 2017, manufacturers and distributors of electric bicycles will be required to apply a label that is permanently affixed, in a prominent location, to each electric bicycle, indicating its class. Should a user "tamper with or modify" an electric bicycle, changing the speed capability, they must replace the label indicating the classification.
Driver's licenses, registration, insurance and license plate requirements do not apply. An electric bicycle is not a motor vehicle. Drinking and driving laws apply. Additional laws or ordinances may apply to the use of electric bicycles by each city or county.
Colorado
The e-bike definition in Colorado follows the HR 727 national law: 750 watts maximum motor power, 20 mph maximum motor-powered speed, two or three wheels, and pedals that work. Legal low-powered e-bikes are allowed on roads and bike lanes, and are prohibited from using their motors on bike and pedestrian paths unless overridden by local ordinance. The city of Boulder was the first to do so, banning e-bikes over 400 W from bike lanes. Bicycles and e-bikes are disallowed on certain high-speed highways and all Interstates unless signed as "Allowed" in certain rural stretches where the Interstate is the only means of travel.
Connecticut
Section 14-1 of Connecticut state law classifies electric bicycles as "motor-driven cycles" if they have a seat height of not less than 26 inches and a motor which produces brake horsepower of 2 or less.
Motor-driven cycles may be operated on the roadway without registration, but the operator must have a driver's license. The cycle may not be operated on any sidewalk, limited access highway or turnpike. If the maximum speed of the cycle is less than the speed limit of the road, the cycle must operate in the right hand lane available for traffic or upon a usable shoulder on the right side of the road unless the operator is making a left turn.
District of Columbia
Electric-assist and other "motorized bicycles" do not need to be inspected, do not require a license, and do not require registration. The vehicle must meet all of the following criteria: a post mounted seat for each person it is designed to carry, two or three wheels which contact the ground, fully operative pedals, wheels at least 16 inches in diameter and a motor not capable of propelling the device at more than 20 mph on level ground. The driver does not need a license, but must be at least 16 years old. DC law prohibits motorized bicycles from traveling anywhere on the sidewalk or in the bike lanes. DC Regulation 18–1201.18 provides: "Except as otherwise permitted for a motor vehicle, no person shall operate a motorized bicycle on any sidewalk or any off-street bikepath or bicycle route within the District. This prohibition shall apply even though the motorized bicycle is being operated solely by human power." So, if cars are prohibited in a particular place, motor-assisted bikes are also prohibited.
Florida
Florida DMV Procedure RS-61 II. "(B.) Dirt bikes noted for off road use, motorized bicycles and Go-Peds are not registered."
Electric Helper-Motor Bicycles
A person who is at least 16 years old may ride, without a driver license, a bicycle that is propelled by a combination of human power (pedals) and an electric helper-motor that cannot go faster than 20 mph on level ground.
Motorized Bicycles and Motorized Scooters
Under Title 23, Chapter 316 of the code, bicycles and motorized bicycles are defined as follows: Bicycle—Every vehicle propelled solely by human power, and every motorized bicycle propelled by a combination of human power and an electric helper motor capable of propelling the vehicle at a speed of not more than 20 miles per hour on level ground upon which any person may ride, having two tandem wheels, and including any device generally recognized as a bicycle though equipped with two front or two rear wheels. The term does not include such a vehicle with a seat height of no more than 25 inches from the ground when the seat is adjusted to its highest position or a scooter or similar device. No person under the age of 16 may operate or ride upon a motorized bicycle.
Motorized Scooter—Any vehicle not having a seat or saddle for the use of the rider, designed to travel on not more than three wheels, and not capable of propelling the vehicle at a speed greater than 30 miles per hour on level ground.
In addition to the statutory language, there are several judicial rulings on the subject.
Georgia
Georgia Code 40-1-1 Part 15.3
Hawaii
A Federal agency, the Consumer Product Safety Commission (CPSC), has exclusive jurisdiction over electric bicycles as to consumer product regulations, but this does not change state regulation of the use of electric bicycles on streets and highways.
"Bicycle" means every vehicle "propelled solely by human power"
upon which any person may ride, having two tandem wheels, and including any vehicle generally recognized as a bicycle though equipped with two front or two rear wheels except a toy bicycle.
As of an update on September 20, 2019, the Hawaii State DOT has announced the normalization of electric bicycles on city roads (registration fee of $30) under HB 812: any two- or three-wheel electric bike with a DC motor of up to 750 W qualifies as a bicycle, and the minimum age to ride an e-bike is 15. HB 812 passed both House and Senate floors in March 2019 and was signed into effect by Governor David Ige in July 2019.
"Moped" means a device upon which a person may ride which is DOT Approved.
Under the statute, mopeds must be registered. To be registered under Hawaii law a moped must bear a certification label from the manufacturer stating that it complies with federal motor vehicle safety standards (FMVSS). A moped must also possess the following equipment approved by the D.O.T. under Chapter 91: approved braking, fuel, and exhaust system components; approved steering system and handlebars; wheel rims; fenders; a guard or protective covering for drive belts, chains and rotating components; seat or saddle; lamps and reflectors; equipment controls; speedometer; retracting support stand; horn; and identification markings.
Illinois
Two relevant laws for Illinois are 625 ILCS 5/11-1517 & 625 ILCS 5/1-140.10.
625 ILCS 5/11-1517
Each low-speed electric bicycle operating in Illinois should comply with requirements adopted by the United States Consumer Product Safety Commission under 16 CFR 1512. Class 3 low-speed electric bicycles need an accurate speedometer in miles per hour.
After January 1, 2018, every manufacturer and distributor of low-speed electric bicycles needs a permanent and prominent label on the bicycle detailing:
(1) the classification number for the bicycle from 625 ILCS 5/1-140.10
(2) the bicycle's top assisted speed
(3) the bicycle's motor wattage
in Arial font in at least 9-point type.
No person shall knowingly tamper with or modify the speed capability or engagement of a low-speed electric bicycle without replacing the original label with the accurate class, assisted top speed, and motor wattage.
A Class 2 low-speed electric bicycle's electric motor should disengage or cease to function when the brakes are applied. For Class 1 and Class 3 low-speed electric bicycles, the electric motor should disengage or cease to function when the rider stops pedaling.
Low-speed electric bicycles can go on any highway, street, or roadway authorized for use by bicycles, including, but not limited to, bicycle lanes.
A municipality, county, or local authority with jurisdiction can prohibit the use of low-speed electric bicycles or a specific class of low-speed electric bicycles on a bicycle path. Otherwise, low-speed electric bicycles are allowed on a bicycle path.
Low-speed electric bicycles cannot go on sidewalks.
Class 3 low-speed electric bicycle drivers need to be 16 years or older. If the Class 3 low-speed electric bicycle is designed to accommodate passengers, there are no age restrictions on passengers.
Low-speed electric bicycle & class is defined by 625 ILCS 5/1-140.10.
A "low-speed electric bicycle" is not a moped or a motor driven cycle.
All "low-speed electric bicycle"s must have fully operable pedals and an electric motor of less than 750 watts. Furthermore, they must qualify as a class 1,2 or 3.
Class 1 low-speed electric bicycles have a motor that provides assistance only when the rider is pedaling and does not assist over 20 miles per hour.
Class 2 low-speed electric bicycles have a motor that may be used exclusively to propel the bicycle and does not assist over 20 miles per hour.
Class 3 low-speed electric bicycles have a motor that provides assistance only when the rider is pedaling and that ceases to provide assistance when the bicycle reaches a speed of 28 miles per hour.
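The three-class scheme defined above lends itself to a small decision rule. The sketch below is illustrative only: the type and field names are invented here, and actual statutes turn on additional details such as labelling and local ordinances:

```python
from dataclasses import dataclass

@dataclass
class EBike:
    motor_watts: int           # motor power rating
    has_operable_pedals: bool  # fully operable pedals required for all classes
    throttle_only: bool        # motor can propel the bike without pedaling
    assist_cutoff_mph: int     # speed at which assistance ceases

def classify(bike: EBike) -> str:
    """Classify under the Class 1/2/3 scheme used by Illinois and many other
    states; 'unclassified' means the vehicle falls outside the low-speed
    electric bicycle definition entirely."""
    if not bike.has_operable_pedals or bike.motor_watts >= 750:
        return "unclassified"
    if bike.throttle_only:
        return "Class 2" if bike.assist_cutoff_mph <= 20 else "unclassified"
    if bike.assist_cutoff_mph <= 20:
        return "Class 1"
    if bike.assist_cutoff_mph <= 28:
        return "Class 3"
    return "unclassified"

print(classify(EBike(500, True, False, 28)))  # Class 3
print(classify(EBike(500, True, True, 20)))   # Class 2
```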
Indiana
In Indiana, the law for e-bikes was changed and e-bikes are now regulated like bicycles. The same rules of the road apply to e-bikes as to traditional human-powered bicycles. During the 2019 update to the Indiana Code of Motor Vehicles, e-bikes were put into three classes.
Iowa
In 2006 a bill was passed that changed the definition of a bicycle to include a bicycle that has an electric motor of less than 1 hp (750 watts). The new definition, found in Iowa Code section 321.1(40)c states:
"Bicycle" means either of the following: (1) A device having two wheels and having at least one saddle or seat for the use of a rider which is propelled by human power. (2) A device having two or three wheels with fully operable pedals and an electric motor of less than 750 watts (one horsepower), whose maximum speed on a paved level surface, when powered solely by such a motor while ridden, is less than 20 miles per hour.
Kentucky
An electric bicycle fits under the definition of "moped" under Kentucky law. No tag or insurance is required, but a driver's license is required. "Moped" means either a motorized bicycle whose frame design may include one (1) or more horizontal crossbars supporting a fuel tank so long as it also has pedals, or a motorized bicycle with a step-through type frame which may or may not have pedals, rated at no more than two (2) brake horsepower, with a cylinder capacity not exceeding fifty (50) cubic centimeters, an automatic transmission not requiring clutching or shifting by the operator after the drive system is engaged, and capable of a maximum speed of not more than thirty (30) miles per hour. Helmets are required.
Louisiana
Louisiana Revised Statute R.S. 32:1(41) defines a motorized bicycle as a pedal bicycle which may be propelled by human power or helper motor, or by both, with a motor rated no more than one and one-half brake horsepower, a cylinder capacity not exceeding fifty cubic centimeters, an automatic transmission, and which produces a maximum design speed of no more than twenty-five miles per hour on a flat surface. Motorized bicycles falling within this definition must be registered and titled under Louisiana law. Additionally, a person fifteen years of age or older operating a motorized bicycle producing more than five horsepower upon Louisiana roadways or highways must possess a valid driver's license with a motorcycle endorsement and adhere to laws governing the operation of a motorcycle, including the wearing of approved eye protectors or a windshield and the wearing of a helmet. The statute also states that "Motorized bicycles such as pocket bikes and scooters that do not meet the requirements of this policy shall not be registered."
As R.S. 32:1(41) refers to motorized bicycles using "an automatic transmission" with helper motors rated in horsepower and cylinder capacity, not by watts or volts, the statute arguably does not cover bicycles powered by an electric motor(s), whether self-propelled or pedal-assist designs.
Maryland
Maryland defines an "electric bicycle" as a vehicle that (1) is designed to be operated by human power with the assistance of an electric motor, (2) is equipped with fully operable pedals, (3) has two or three wheels, (4) has a motor with a rating of 500 watts or less, (5) and is capable of a maximum speed of 20 miles per hour on a level surface when powered by the motor. (Senate Bill 379, approved by the Governor 5/5/2014, Chapter 294.) This legislation excludes "electric bicycle" from the definition of "moped", "motorized minibike", and "motor vehicle", and removes the titling and insurance requirements required for electric bicycles under prior Maryland law.
Before September 20, 2014, Maryland law had classified an electric bicycle as a moped. Mopeds are specifically excluded from the definition of "motor vehicle" per § 11-135 of the Maryland Transportation Code. Mopeds may not be operated on sidewalks, trails, roadways with posted speeds in excess of 50 mph, or limited-access highways.
Standard requirements for bicycle lighting, acceptable bicycle parking locations, and prohibitions on wearing earplugs or headsets over both ears apply.
Recent legislation has passed putting Maryland e-bike laws in line with the popular class 1, 2, 3 system previously implemented in states such as California. This legislation becomes effective October 2019. The most significant portion of this change is the increased maximum limit on power and speed, raised from 500 W / 20 mph to 750 W / 28 mph (assuming the e-bike in question meets class 3 criteria).
Massachusetts
Massachusetts has the following two definitions for electric bicycles since November 8, 2022:
Class I electric bicycle: Bicycle equipped with a motor that provides assistance only when the rider is pedaling, and that ceases to provide assistance when the electric bicycle reaches 20 mph, with an electric motor of 750 watts or less.
Class II electric bicycle: Bicycle equipped with a throttle-actuated motor that ceases to provide assistance when the electric bicycle reaches 20 mph, with an electric motor of 750 watts or less.
Riders of these electric bicycles do not require a license and are afforded all the rights and privileges related to all bicycle riders except as noted: These electric bicycles are allowed most places traditional bicycles are allowed: roadways, bike lanes, bike paths, paved trails, and if specifically allowed by local jurisdiction and signage, natural surface trails. Electric bicycles are prohibited from sidewalks and most natural surface trails. Local jurisdictions may prohibit the use of electric bicycles on bikeways and bike paths, but this first requires a public notice, public hearing, and posted signage prohibiting electric bicycles. Manufacturers and distributors of electric bicycles must apply a prominent fixed label specifying the classification number, top assisted speed, and motor wattage. Persons who modify the motor-powered speed capability or engagement of the electric bicycle must appropriately replace this label.
Massachusetts does not explicitly use the Federal definition of Class III, 28 mph pedal assist e-bikes. Massachusetts General Laws defines other classes of motorized two-wheeled vehicles that are not Class I or Class II electric bicycles: Motorcycle, Motorized bicycle, and Motorized scooter. Although the definition of motorized scooter includes two-wheeled vehicles propelled by electric motors with or without human power, motorized scooter specifically excludes anything which falls under the definitions of Class I or Class II electric bicycles, motorized bicycle, and motorcycle. Motorized bicycle is a pedal bicycle which has a helper motor, or a non-pedal bicycle which has a motor, with a cylinder capacity not exceeding fifty cubic centimeters, an automatic transmission, which is capable of a maximum speed of no more than thirty miles per hour, and does not fall under the specific definition of a Class I or Class II electric bicycles. Motorcycle includes any bicycle with a motor or driving wheel attached, with the exception of vehicles that fall under the specific definition of motorized bicycle. Thus, a pedal bicycle with an electric motor or a non-pedal bicycle with an electric motor, automatic transmission, maximum speed of 30 miles an hour, and not a Class I or Class II electric bicycle, would fall under the definition of motorized bicycle. A non-Class I or II electric bicycle that did not meet those restrictions would be either a motorized scooter or motorcycle, depending on specific characteristics.
A motorized bicycle cannot be operated by any person under sixteen years of age. Motorized bicycles also cannot be driven at a speed exceeding twenty-five miles per hour within the commonwealth, and they are explicitly prohibited from being driven on public highways, public walkways or other public land as designated by the parks department. A motorized bicycle cannot be operated by any person not possessing a valid driver's license or learner's permit. Every person operating a motorized bicycle has the right to use all public ways in the commonwealth except limited access or express state highways where signs specifically prohibiting bicycles have been posted, and is subject to the traffic laws and regulations of the commonwealth. Motorized bicycles may be operated on bicycle lanes adjacent to the various ways, but are excluded from off-street recreational bicycle paths. Every person operating a motorized bicycle or riding as a passenger on one must wear protective headgear, and no person operating a motorized bicycle may permit any other person to ride as a passenger unless that passenger is wearing such protective headgear.
Michigan
An electric bicycle (or e-bike) is a bicycle that has a small rechargeable electric motor that can give a boost to the pedaling rider or can take over pedaling completely. To qualify as an e-bike in Michigan, the bike must meet the following requirements:
It must have a seat or saddle for the rider to sit on.
There must be fully operational pedals.
It must have an electric motor of no more than 750 watts (or 1 horsepower).
Whether you can ride an e-bicycle on a trail depends on several factors, including the e-bike's class, the type of trail and whether the authority that manages or oversees the trail allows the use. To learn more, read the full legislation or review the provided summary.
Minnesota
Electric-assisted bicycles, also referred to as "e-bikes", are a subset of bicycles that are equipped with a small attached motor. To be classified as an "electric-assisted bicycle" in Minnesota, the bicycle must have a saddle and operable pedals, two or three wheels, and an electric motor of up to 750 watts, as well as meet certain federal motor vehicle safety standards. The motor must disengage during braking and have a maximum speed of 20 miles per hour (whether assisted by human power or not). Minn. Stat. §169.011, subd. 27.
Legislative changes in 2012 significantly altered the classification and regulatory structure for e-bikes. The general effect was to establish electric-assisted bicycles as a subset of bicycles and regulate e-bikes in roughly the same manner as bicycles instead of other motorized devices with two (or three) wheels. Laws 2012, ch. 287, art. 3, §§ 15–17, 21, 23–26, 30, 32–33, and 41. The 2012 Legislature also modified and clarified regulation of e-bikes on bike paths and trails. Laws 2012, ch. 287, art. 4, §§ 1–4, 20.
Following the 2012 change, electric-assisted bicycles are regulated similarly to other bicycles. Most of the same laws apply. Minn. Stat. § §169.011, subd. 27; 169.222.
The bicycle does not need to be registered, and a title is no longer necessary. Minn. Stat. §§ 168.012, subd. 2d;168A.03, subd. 1 clause(11)
A license plate is no longer required to be displayed on the rear. See Minn. Stat. § 169.79, subd. 3. It is not subject to motor vehicle sales tax (the general sales tax would instead be owed on e-bike purchases).
A driver's license or permit is not required. Unlike a non-powered bicycle, the minimum operator age is 15 years old. Minn. Stat. § 169.222, subd. 6a.
The device does not need to be insured. See Minn. Stat. § 65B.43, subds. 2, 13.
Electric-assisted bicycle operators must follow the same traffic laws as operators of motor vehicles (except those that by their nature would not be relevant). The bicycles may be operated two abreast. Operators must generally ride as close as is practical to the right-hand side of the road (exceptions include when overtaking another vehicle, preparing for a left turn, and to avoid unsafe conditions). The bicycle must be ridden within a single lane. Travel on the shoulder of a road must be in the same direction as the direction of adjacent traffic.
Some prohibitions also apply, such as on: carrying cargo that prevents keeping at least one hand on the handlebars or prevents proper use of brakes, riding more than two abreast on a roadway or shoulder, and attaching the bicycle to another vehicle. Minn. Stat. § 169.222, subds. 3–5. The vehicles may be operated on a sidewalk except in a business district or when prohibited by a local unit of government, and must yield to pedestrians on the sidewalk. Minn. Stat. § 169.223, subd. 3. By default, electric-assisted bicycles are allowed on road shoulders as well as on bicycle trails, bicycle paths, and bicycle lanes.
A local unit of government having jurisdiction over a road or bikeway (including the Department of Natural Resources in the case of state bike trails) is authorized to restrict e-bike use if: the use is not consistent with the safety or general welfare of others; or the restriction is necessary to meet the terms of any legal agreements concerning the land on which a bikeway has been established.
Electric-assisted bicycles can be parked on a sidewalk unless restricted by local government (although they cannot impede normal movement of pedestrians) and can be parked on streets where parking of other motor vehicles is allowed. Minn. Stat. § 169.222, subd. 9.
During nighttime operation, the bicycle must be equipped with a front headlamp, a rear-facing red reflector, and reflectors on the front and rear of pedals, and the bicycle or rider must have reflective surfaces on each side. Minn. Stat. §169.222, subd. 6.
An electric-assisted bicycle can be equipped with a front-facing headlamp that emits a flashing white light, a rear-facing lamp that has a flashing red light, or both. The bicycle can carry studded tires designed for traction (such as in snowy or icy conditions).
Helmets are no longer required for e-bike use.
Mississippi
In Opinion No. 2007-00602 of the Attorney General, Jim Hood clarified that a "bicycle with a motor attached" does not satisfy the definition of "motor vehicle" under Section 63-3-103. He stated that it is up to the authority creating the bike lane to determine if a bicycle with a motor attached can be ridden in bike lanes. No specifications about the motor were made.
In Opinion No. 2011-00095 of the Attorney General, Jim Hood stated that an operator's license, helmet, safety insurance, title, registration, and safety inspection are all not required of bicycles with a motor attached.
Missouri
The rights and privileges of electric bicycle riders can be found in 307.194 RSMo. Generally, electric bicycle riders have all the rights and responsibilities as riders of bicycles. Electric bicycles are not subject to all laws covering motor vehicles meaning they do not require "vehicle registration, certificates of title, drivers' licenses, [or] financial responsibility."
Electric bicycles are divided into 3 different classes under 301.010(15), RSMo. Class 1 includes electric bicycles with a motor that assists the rider when pedaling and ceases at 20 mph. Class 2 includes electric bicycles that use a motor to propel the bicycle instead of a rider pedaling and "is not capable of providing assistance" when the bicycle reaches 20 mph. Class 3 includes electric bicycles with a motor that provides assistance when the rider is pedaling and stops when the bicycle reaches 28 mph.
Persons under the age of 16 are not permitted to operate class 3 electric bicycles.
In 2022, the Missouri Department of Conservation expanded the areas where bicycles, including electric bicycles, are allowed to be used.
Montana
The law uses a three-part definition where the first two parts describe a human-powered bicycle and one with an independent power source respectively, while the third describes a "moped" with both a motor and pedal assist. (Montana Code 61-8-102).
As of April 21, 2015, mopeds were reclassified to be treated as bicycles in Montana, not requiring a driver's license.
The definition as written does not specify the motor's power in watts, as is conventional for electric bicycles, but in brake horsepower. For an electric bicycle, motor kit, or motor that the manufacturer rates in watts rather than brake horsepower, a unit conversion must therefore be made, and no conversion factor is given in the code; a court would have to settle on a factor that is not directly encoded in the law. The industry-standard conversion for electric motors is 1 horsepower = 746 watts, but acceptance of that industry factor as an interpretation of the law is subject to the process of the courts, since it is not defined specifically in the statute.
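Under the industry convention just cited, the conversion itself is trivial arithmetic; whether a court would adopt this factor is, as noted, a separate question. A minimal sketch:

```python
WATTS_PER_BHP = 746  # industry-standard conversion for electric motors

def watts_to_bhp(watts: float) -> float:
    """Convert a motor's watt rating to brake horsepower."""
    return watts / WATTS_PER_BHP

# A common 750 W e-bike motor is almost exactly one brake horsepower.
print(round(watts_to_bhp(750), 3))  # 1.005
```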
In addition, the specific wording of the law may or may not prohibit a "mid-drive" or "crank-drive" motor set-up, in which the motor drives the rear wheel through the bicycle's existing multi-gear chain drive. The ambiguity lies in the phrase "does not require clutching or shifting by the operator after the drive system is engaged": a mid-drive set-up allows the operator to change gears in the power drive system between the motor and the rear wheel, but does not require it. Just as "shall issue" and "may issue" carry different meanings in licensing laws, whether "does not require shifting" outlaws electric bicycles on which shifting is possible but not required is a matter of legal interpretation by the courts. The legality of mid-drive electric bicycles in the U.S. state of Montana is therefore not clearly defined.
Nebraska
Nebraska defines a Moped as "a bicycle with fully operative pedals for propulsion by human power, an automatic transmission, and a motor with a cylinder capacity not exceeding fifty cubic centimeters which produces no more than two brake horsepower and is capable of propelling the bicycle at a maximum design speed of no more than thirty miles per hour on level ground."
However, under a bill passed February 20, 2015 electric bicycles are explicitly defined.
Bicycle shall mean (1) every device propelled solely by human power, upon which any person may ride, and having two tandem wheels either of which is more than fourteen inches in diameter or (2) a device with two or three wheels, fully operative pedals for propulsion by human power, and an electric motor with a capacity not exceeding seven hundred fifty watts which produces no more than one brake horsepower and is capable of propelling the bicycle at a maximum design speed of no more than twenty miles per hour on level ground.
Nevada
As of May 19, 2009, Nevada amended its state transportation laws to explicitly permit electric bicycles to use any "trail or pedestrian walkway" intended for use with bicycles and constructed with federal funding, and otherwise generally permits electric bicycles to be operated in cases where a regular bicycle could be. An electric bicycle is defined as a two- or three-wheeled vehicle with fully operable pedals with an electric motor producing up to 1 gross brake horsepower and up to 750 watts final output, and with a maximum speed of up to 20 miles per hour on flat ground with a rider when powered only by that engine.
New Jersey
As of May 14, 2019, a new vehicle class ("low-speed electric bicycle") was added to NJRS Title 39, described as "a two or three-wheeled vehicle with fully operable pedals and an electric motor of less than 750 watts, whose maximum speed on a paved level surface, when powered solely by a motor, while operated by a person weighing 170 pounds, is less than 20 miles per hour."
Additionally, the existing class of "motorized bicycles" has been expanded to include, in addition to gas-powered vehicles such as mopeds, electric bicycles that can achieve speeds between 20 and 28 mph. For these vehicles, a driver's license and registration are still required.
Under previous regulations, all e-bikes were classified as motorized bicycles (mopeds) and required registration, but could not actually be registered since the law was written only for gas-powered vehicles. The new legislation, which applies to both "pedal-assist" and "throttle" bicycles, removes e-bikes from that legal gray area.
New Mexico
New Mexico has no specific laws concerning electric or motorized bicycles. MVD rules treat motorized bicycles the same as bicycles, requiring no registration or driver's license.
Prior to this clarification by the MVD, electric bicycles were often treated as mopeds, which require a standard driver's license, but no registration.
New York
New York State (NYS) included "motor-assisted bicycles" in its list of vehicles which cannot be registered. A Federal agency, the Consumer Product Safety Commission (CPSC), has exclusive jurisdiction over electric bicycles as to consumer product regulations.
Until a statewide law took effect in 2020, enforcement of this illegal status varied at the local level. New York City enforced the bike ban with fines and vehicle confiscation for throttle-activated electric bikes; however, Mayor Bill de Blasio changed the city's policy to legalize pedal-assist electric bikes that have a maximum speed limited to 20 mph. In contrast, Tompkins County supported electric bike use, even providing grant money to fund electric bike share/rental projects.
Several bills were sponsored to legalize electric bicycles for use on NYS roads, and several passed overwhelmingly at the committee level, but none of these initiatives was heard and passed in the New York State Senate until 2015. Bill S3997, "An act to amend the vehicle and traffic law, in relation to the definition of electric assisted bicycle", which would define electric assisted bicycles, establish that they are bicycles rather than motor vehicles, and establish safety and operational criteria for their use, passed in the Senate in 2015. The related Assembly bill A233 was not brought to a vote in the Assembly even though it had passed with little issue in prior years. A legalization bill passed in 2019 was vetoed by the Governor.
The New York Bicycle Coalition has supported efforts to define electric bicycles in New York State. New York City has repeatedly drawn media attention for its enforcement of a ban on electric bicycles in certain neighborhoods, with fines of up to $3,000. A law was passed in April 2020 defining and legalizing three classes of electric bicycle. In September 2023, additional regulations were introduced in New York City to restrict the sale of electric bicycles and other battery-powered mobility devices to only those that are UL certified.
Ohio
The Ohio Revised Code 4511.01 distinguishes motorized bicycles and mopeds from motorcycles or scooters by describing them as "...any vehicle having either two tandem wheels or one wheel in the front and two wheels in the rear, that is capable of being pedaled and is equipped with a helper motor of not more than fifty cubic centimeters piston displacement that produces no more than one brake horsepower and is capable of propelling the vehicle at a speed of no greater than twenty miles per hour on a level surface." One brake horsepower converts to approximately 746 W, commonly rounded to 750 W. Thus, a bicycle with an electric helper motor producing under 750 W, and not propelling the bicycle over 20 mph, does not need to be registered under Ohio state law. Local jurisdictions may have other regulations.
Oklahoma
Oklahoma defines an Electric-Assisted Bicycle in 47 O.S. 1-104 as "Two or three wheels; and Fully operative pedals for human propulsion and equipped with an electric motor with a power output not to exceed one thousand (1,000) watts, incapable of propelling the device at a speed of more than thirty (30) miles per hour on level ground, and incapable of further increasing the speed of the device when human power alone is used to propel the device at a speed of thirty (30) miles per hour or more. An electric-assisted bicycle shall meet the requirements of the Federal Motor Vehicle Safety Standards as set forth in federal regulations and shall operate in such a manner that the electric motor disengages or ceases to function when the brakes are applied."
Oklahoma sets out the following provisions for the operation of electric-assisted bicycles in 47 O.S. 11-805.2, under which operators shall:
1. Possess a Class A, B, C or D license, but shall be exempt from a motorcycle endorsement;
2. Not be subject to motor vehicle liability insurance requirements only as they pertain to the operation of electric-assisted bicycles;
3. Be authorized to operate an electric-assisted bicycle wherever bicycles are authorized to be operated;
4. Be prohibited from operating an electric-assisted bicycle wherever bicycles are prohibited from operating; and
5. Wear a properly fitted and fastened bicycle helmet which meets the standards of the American National Standards Institute or the Snell Memorial Foundation Standards for protective headgear for use in bicycling, provided such operator is eighteen (18) years of age or less.
Oregon
Oregon law (ORS 801.258) defines an electric assisted bicycle as an electric motor-driven vehicle equipped with operable pedals, a seat or saddle for the rider, and no more than three wheels in contact with the ground during travel. In addition, the vehicle must be equipped with an electric motor that is capable of applying a power output of no greater than 1,000 watts and that is incapable of propelling the vehicle at a speed greater than 20 miles per hour on level ground.
In general, electric bicycles are considered "bicycles", rather than motor vehicles, for purposes of the code. This implies that all bicycle regulations apply to electric bicycles including operation in bike lanes. Exceptions to this include a restriction of operation on sidewalks and that a license or permit is required if the rider is younger than 17 years of age.
Pennsylvania
State law defines a motorized pedalcycle as a motor-driven cycle equipped with operable pedals, a motor rated at no more than 1.5 brake horsepower, a cylinder capacity not exceeding 50 cubic centimeters, an automatic transmission, and a maximum design speed of no more than 25 miles per hour. Subchapter J of Publication 45 spells out the vehicle requirements in full.
As of 2008 a standard class C license, proof of insurance, and registration (annual fee: $9.00) are required for operation of any motorized pedalcycle in Pennsylvania. Additionally, there are strict equipment standards that must be met for operation, including: handlebars, brakes, tires/wheels, electrical systems/lighting, mirrors, speedometer, and horns/warning devices.
The definition was clearly written with gasoline-powered pedalcycles in mind. The requirement of an automatic transmission is troublesome for those who just want to add an electric-assist motor to a bicycle, since almost all bicycles have transmissions consisting of chains and manually shifted sprockets. The registration form asks for a VIN, making it difficult to register some foreign-made e-bikes. The fine for riding an unregistered electric bike was approximately $160.00 per event as of 2007.
On February 4, 2014, SB997 was introduced by Senator Matt Smith, seeking to amend the PA Vehicle Code to include "Pedalcycle with Electric Assist". In a memo addressed to all senate members, Smith said the definition shall include "bicycles equipped with an electric motor not exceeding 750 watts, weighing not more than 100 pounds, capable of a maximum speed of not more than 20 mph, and having operable pedals."
On October 22, 2014, PA House Bill 573 passed into law as Act 154, which changes the definition of "pedalcycle" (bicycle) in the PA state vehicle code. "Pedalcycle" is now defined as a vehicle propelled solely by human-powered pedals or a pedalcycle with electric assist (a vehicle weighing not more than 100 pounds, with two or three wheels more than 11 inches in diameter, manufactured or assembled with an electric motor rated at no more than 750 watts, equipped with operational pedals, and with a maximum speed of 20 mph). Pedal-assisted bicycles meeting this definition may be used without regulation in PA.
Tennessee
Electric Bicycles are defined in Tennessee Code Annotated 55-8-301 – 307
This legislation passed in 2016 and defines an "electric bicycle" as a bicycle or tricycle equipped with fully operable pedals and an electric motor of less than 750 watts, separated into three classes:
(1) A "class 1 electric bicycle," or "low-speed pedal-assisted electric bicycle," is a bicycle equipped with a motor that provides assistance only when the rider is pedaling, and that ceases to provide assistance when the bicycle reaches the speed of 20 miles per hour.
(2) A "class 2 electric bicycle," or "low-speed throttle-assisted electric bicycle," is a bicycle equipped with a motor that may be used exclusively to propel the bicycle, and that is not capable of providing assistance when the bicycle reaches the speed of 20 miles per hour.
(3) A "class 3 electric bicycle," or "speed pedal-assisted electric bicycle," is a bicycle equipped with a motor that provides assistance only when the rider is pedaling, (no throttle) and that ceases to provide assistance when the bicycle reaches the speed of 28 miles per hour, and equipped with a speedometer.
Electric bicycles are governed by the same law as other bicycles, subject to any local restrictions. They may be operated on any part of a street or highway where bicycles are authorized to travel, including a bicycle lane or other portion of a roadway designated for exclusive use by bicyclists.
Class 1 and 2 electric bicycles are allowed on greenways and multi-use paths unless the local government bans their use by ordinance. Class 3 bikes are banned unless the local city council passes an ordinance to allow their use.
Beginning January 1, 2017, manufacturers and distributors of electric bicycles were required to apply a label that is permanently affixed, in a prominent location, to each electric bicycle, indicating its class.
Driver's licenses, registration, insurance and license plate requirements do not apply. An electric bicycle is not a motor vehicle. Drinking and driving laws apply. Additional laws or ordinances may apply to the use of electric bicycles by each city or county.
Texas
"Bicycles" and "Electric Bicycles" are legally defined in the Texas Transportation Code Title 7, Chapter 664 entitled "Operation of Bicycles, Mopeds, and Play Vehicles" in Subchapter G. Under Chapter 541.201 (24), "Electric bicycle" means a bicycle that is (A) designed to be propelled by an electric motor, exclusively or in combination with the application of human power, (B) cannot attain a speed of more than 20 miles per hour without the application of human power, and (C) does not exceed a weight of . The department or a local authority may not prohibit the use of an electric bicycle on a highway that is used primarily by motor vehicles. The department or a local authority may prohibit the use of an electric bicycle on a highway used primarily by pedestrians.
"Medical Exemptions" are also a standard right in the State of Texas for motorcycles & even bicyclists. Through Texas's motorcycle helmet law (bicycle helmet laws from city ordinances), it is only required for those 21 years old or younger to wear a helmet. However, a medical exemption, written by a certified licensed medical physician or licensed chiropractor, which exempts one from wearing a helmet, can be used for bicyclists if helmets are required.
Utah
According to Utah Code 41-6a-102 (17), an electric assisted bicycle is equipped with an electric motor with a power output of not more than 750 watts and is not capable of further assistance at a speed of more than 20 mph, or 28 mph while pedaling and using a speedometer. New laws specifically exclude electric pedal-assisted bicycles from "motorized vehicles", and such bicycles are permitted on all state land (but not necessarily on Indian reservations, nor in restrictive municipalities such as Park City, whose Code 10-1-4.5 generally disallows electric bicycles on bike paths; see note 2) if the motor is not more than 750 watts and the assistance shuts off at 20 mph (Utah Traffic Code 53-3-202-17-a; see note 1). E-bikes sold in Utah are required to have a sticker that details the performance capacity. Children under 14 can operate an electric bicycle if accompanied by a parent/guardian, but children under 8 may not (Utah Code 41-6a-1115.5). No license, registration, or insurance is required by the state, but some municipalities may require these measures (Salt Lake City and Provo require registration).
1 Utah Traffic Code, Utah Code Section 41-6a-102
2 Park City, Utah Municipal Code
Vermont
"Motor-driven cycle" means any vehicle equipped with two or three wheels, a power source providing up to a maximum of two brake horsepower and having a maximum piston or rotor displacement of 50 cubic centimeters if a combustion engine is used, which will propel the vehicle, unassisted, at a speed not to exceed on a level road surface, which does not require clutching or shifting by the operator. The designation is a replacement for "scooter" and "moped;" Vermont does not seem to have laws specifically for e-bikes.
Operators of motor-driven cycles are required to have a valid driver's license but not a motorcycle endorsement.
Virginia
Virginia laws that cover electric bicycles include Va. Code § 46.2-100; § 46.2-903; § 46.2-904; § 46.2-908.1; § 46.2-906.1.
E-bikes are allowed on sidewalks and bike paths, but are subject to local city or county restrictions. E-bikes are not subject to the registration, licensing or insurance requirements that apply to motor vehicles.
Washington
A law that came into effect on June 7, 2018, defines an electric-assisted bicycle as a bicycle with two or three wheels, a saddle, fully operative pedals for human propulsion, and an electric motor of no more than 750 watts. The law divides electric-assisted bicycles into three classes:
Class 1 — "an electric assisted bicycle in which the motor provides assistance only when the rider is pedaling and ceases to provide assistance when the bicycle reaches the speed of twenty miles per hour";
Class 2 — "an electric assisted bicycle in which the motor may be used exclusively to propel the bicycle and is not capable of providing assistance when the bicycle reaches the speed of twenty miles per hour";
Class 3 — "an electric assisted bicycle in which the motor provides assistance only when the rider is pedaling and ceases to provide assistance when the bicycle reaches the speed of twenty-eight miles per hour and is equipped with a speedometer."
No driver's license is required and there is no age restriction for operation of Class 1 and 2 e-bikes, but one must be at least 16 years old to use a Class 3 bike.
All classes of electric-assisted bicycles may be operated on a fully controlled limited access highway. Class 1 and 2 electric bicycles can be used on sidewalks, but Class 3 bicycles "may not be used on a sidewalk unless there is no alternative to travel over a sidewalk as part of a bicycle or pedestrian path." Generally a person may not operate an electric-assisted bicycle on a trail that is designated as non-motorized and that has a natural surface, unless otherwise authorized.
Since July 1, 2018, manufacturers or distributors offering new electric-assisted bicycles in Washington state must affix a permanent label in a prominent place on the bike containing the classification number, top assisted speed, and motor wattage of the bike.
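Because the three classes are defined by just two attributes (whether the motor works without pedaling, and the assist cut-off speed), the scheme can be expressed as a short decision rule. A minimal illustrative sketch in Python; the function and parameter names are hypothetical, and this is not legal advice:

def classify_ebike(throttle_only, assist_cutoff_mph):
    """Return the Washington e-bike class (1, 2 or 3) for a given bike.

    throttle_only: True if the motor can propel the bike without pedaling.
    assist_cutoff_mph: speed at which motor assistance ceases.
    """
    if throttle_only and assist_cutoff_mph <= 20:
        return 2  # motor may work alone, cuts out at 20 mph
    if not throttle_only and assist_cutoff_mph <= 20:
        return 1  # pedal-assist only, cuts out at 20 mph
    if not throttle_only and assist_cutoff_mph <= 28:
        return 3  # pedal-assist only, cuts out at 28 mph; speedometer required
    raise ValueError("outside the electric-assisted bicycle definition")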
See also
Outline of cycling
Personal transporter (International regulation section)
References
External links
Regulations of E-Bikes in North America, National Institute for Transportation and Communities, August 2014.
Bicycle law
Electric bicycles
Bicycle
Vehicle law | Electric bicycle laws | [
"Engineering"
] | 16,669 | [
"Electrical engineering",
"Electrical-engineering-related lists"
] |
3,085,073 | https://en.wikipedia.org/wiki/Nabta%20Playa | Nabta Playa was once a large endorheic basin in the Nubian Desert, located approximately 800 kilometers south of modern-day Cairo or about 100 kilometers west of Abu Simbel in southern Egypt, 22.51° north, 30.73° east. Today the region is characterized by numerous archaeological sites. The Nabta Playa archaeological site, one of the earliest of the Egyptian Neolithic Period, is dated to circa 7500 BC.
Early history
Although today the western Egyptian desert is totally dry, this was not always the case. There is good evidence that there were several humid periods in the past (when up to 500 mm of rain would fall per year), the most recent one during the last interglacial and early last glaciation periods, which stretched between 130,000 and 70,000 years ago. During this time, the area was a savanna and supported numerous animals such as extinct buffalo and large giraffes, and varieties of antelope and gazelle. Beginning around the 10th millennium BC, this region of the Nubian Desert began to receive more rainfall, filling a lake. Early people may have been attracted to the region by this source of water.
Archaeological findings indicate the presence of small seasonal camps in the region dating to the 9th–8th millennia BC. Fred Wendorf, the site's discoverer, and ethno-linguist Christopher Ehret have suggested that the people who occupied this region at that time may have been early pastoralists, or, like the Saami, practiced semi-pastoralism. This is disputed by other sources, as the cattle remains found at Nabta have been shown to be morphologically wild in several studies, and hunter-gatherers at the nearby Saharan site of Uan Afada in Libya were penning wild Barbary sheep, an animal that was never domesticated. According to Michael Brass (2018), early cattle remains from Nabta Playa were wild hunted aurochs, whilst domesticated cattle were introduced to northeast Africa in the late 7th millennium BC, originating from cattle domesticated in the Euphrates valley.
Larger settlements began to appear at Nabta Playa by the 7th millennium BC, relying on deep wells for sources of water. Small huts were constructed in straight rows. Sustenance included wild plants, such as legumes, millets, sorghum, tubers, and fruit. Around 6800 BC they began to make pottery locally. In the late 7th millennium BC goats and sheep, apparently imported from Western Asia, appear. Many large hearths also appear.
Early pottery from the Nabta Playa-Bir Kiseiba area has characteristics unlike pottery from surrounding regions. This is followed by pottery with characteristics found only in the Western Desert. Later pottery from c. 5500 BC (Al Jerar phase) has similarities with pottery from the Sudanese region. Pottery decorations included complex patterns of impressions applied with a comb in a rocking motion.
Joel D. Irish (2001), reporting in "Holocene Settlement of the Egyptian Sahara", suggested on the basis of osteological and dental data a mainly sub-Saharan African affinity and origin at Nabta (with sub-Saharan tendencies most commonly detected), but also possible North African tendencies, concluding: "Henneberg et al. suggest that the Nabta Playa people may have been most similar to Negroes from south of the Sahara. The present qualitative dental comparison tentatively supports this conclusion."
Some researchers, including Christopher Ehret, have suggested a Nilo-Saharan linguistic affinity for the Nabta people.
Organisation
Archaeological discoveries reveal that these New Stone Age peoples seem to have lived more organized lives than their contemporaries nearer to and in the Nile Valley. The people of Nabta Playa had villages with 'planned' layouts, with deep wells that held water year-round.
Findings also indicate that the region was occupied only seasonally, most likely only in the summer, when the local lake had adequate water for grazing cattle. Comparative research suggests the indigenous inhabitants may have had a significantly more advanced knowledge of astronomy than previously thought possible.
Religious ties to ancient Egypt
By the 6th millennium BC, evidence of a prehistoric religion or cult appears. From 5500 BC the Late Neolithic period began, with "a new group that had a complex social system expressed in a degree of organisation and control not previously seen." These new people were responsible for sacrificial cattle burials in clay-lined and roofed chambers covered by rough stone tumuli.
It has been suggested that the associated cattle cult indicated in Nabta Playa marks an early evolution of Ancient Egypt's Hathor cult. For example, Hathor was worshipped as a nighttime protector in desert regions (see Serabit el-Khadim). To directly quote professors Wendorf and Schild:
Rough megalithic stone structures buried underground are also found in Nabta Playa, one of which included evidence of what Wendorf described as perhaps "the oldest known sculpture in Egypt."
Astronomical observation
In the 5th millennium BC these peoples fashioned what may be among the world's earliest known archeoastronomical devices (roughly contemporary to the Goseck circle in Germany and the Mnajdra megalithic temple complex in Malta). These include alignments of stones that may have indicated the rising of certain stars and a "calendar circle" that indicates the approximate direction of summer solstice sunrise. "Calendar circle" may be a misnomer, as the spaces between the pairs of stones in the gates are a bit too wide, and the distances between the gates too short, for accurate calendar measurements. An inventory of Egyptian archaeoastronomical sites for the UNESCO World Heritage Convention evaluated Nabta Playa as having "hypothetical solar and stellar alignments."
Claims for early alignments and star maps
Astrophysicist Thomas G. Brophy suggests the hypothesis that the southerly line of three stones inside the Calendar Circle represented the three stars of Orion’s Belt and the other three stones inside the calendar circle represented the shoulders and head stars of Orion as they appeared in the sky. These correspondences were for two dates – circa 4800 BC and at precessional opposition – representing how the sky "moves" long term. Brophy proposes that the circle was constructed and used circa the later date, and the dual date representation was a conceptual representation of the motion of the sky over a precession cycle.
Near the Calendar Circle, which is made of smaller stones, there are alignments of large megalithic stones. The southerly lines of these megaliths, Brophy argues, aligned to the same stars as represented in the Calendar Circle, all at the same epoch, circa 6270 BC. Brophy argues that the Calendar Circle correlation with Orion's belt occurred between 6400 BC and 4900 BC, matching radio-carbon dates of some campfires in the area.
Recent research
A 2007 article by a team of University of Colorado archaeoastronomers and archaeologists (Malville, Schild, Wendorf and Brenmer, three of whom had been involved in the original discovery of the site and its astronomical alignment) responded to the work of Brophy and Rosen, in particular their claims for an alignment with Sirius in 6088 BC and other alignments which they dated to 6270 BC, saying that these dates "are about 1500 years earlier than our best estimates for the Terminal Neolithic and the construction of megalithic structures" at Nabta Playa.
The Sirius alignment in question was originally proposed by Wendorf and Malville, for one of the most prominent alignments of megaliths labelled the "C-line", which they said aligned to the rising of Sirius circa 4820 BC. Brophy and Rosen stated in 2005 that megalith orientations and star positions reported by Wendorf and Malville were in error, noting that "Given these corrected data, we see that Sirius actually aligned with the C-line circa 6000 BC. We estimate that 6088 BC Sirius had a declination of −36.51 degrees, for a rising azimuth exactly on the C-line average". However, according to Malville, Schild et al. (2007) the dates proposed by Brophy are inconsistent with the archaeological evidence, and "inference in archaeoastronomy must always be guided and informed by archaeology, especially when substantial field work has been performed in the region". They also concluded that, on closer inspection, the C-line of megaliths "consists of stones resting on the sides and tops of dunes and may not represent an original set of aligned stele".
They also criticised suggestions made by Brophy in his 2002 book The Origin Map that there was a representation of the Milky Way as it was in 17,500 BC and maps of Orion at 16,500 BC, saying "These extremely early dates as well as the proposition that the nomads had contact with extra-galactic aliens are inconsistent with the archaeological record."
They propose that the area was first used as what they call a "regional ceremonial centre" around 6100 BC to 5600 BC with people coming from various locations to gather on the dunes surrounding the playa where there is archaeological evidence for gatherings that involved large numbers of cattle bones, as cattle were normally only killed on important occasions. Around 5500 BC a new, more organised group began to use the site, burying cattle in clay-lined chambers and building other tumuli. Around 4800 BC a stone circle was constructed, with narrow slabs approximately aligned with the summer solstice, near the beginning of the rainy season.
More complex structures followed during a megalith period the researchers dated to between about 4500 BC to 3600 BC. Using their original measurements, complemented by satellite imagery and GPS measurements by Brophy and Rosen, they confirmed possible alignments with Sirius, Arcturus, Alpha Centauri, and the Belt of Orion. They suggest that there are three pieces of evidence suggesting astronomical observations by the herdsmen using the site, which may have functioned as a necropolis. "The repetitive orientation of megaliths, stele, human burials and cattle burials reveals a very early symbolic connection to the north." Secondly, there is the orientation of the cromlech mentioned above. The third piece of evidence is the fifth millennium alignments of stele to bright stars.
They conclude their report by writing that "The symbolism embedded in the archaeological record of Nabta Playa in the Fifth Millennium BC is very basic, focussed on issues of major practical importance to the nomads: cattle, water, death, earth, sun and stars."
In 2011, Maciej Jórdeczka, Halina Królik, Mirosław Masojć and Romuald Schild, a team of archaeologists, excavated Holocene pottery from Nabta Playa representing the earliest phase of ceramic production in the Saharan region, described as "relatively sophisticated bowls decorated with a toothed wheel". They also argued that the pottery from the region had an important role in shaping the cultural development of the Eastern Sahara during the early Holocene period. The authors concluded that it is "likely that the Early Holocene colonisers of the southern Western Desert, the El Adam hunter-gatherer-cattle keepers, came to the south-eastern fringes of the Sahara from the Nile Valley" and shared an "almost identical" output of technology with the Arkinian culture in Lower Nubia.
Relative chronology
See also
Prehistoric Egypt
List of archaeoastronomical sites by country
Bir Kiseiba
Notes
External links
Article in Scientific American
Article in Nature
Ancient Astronomy in Africa
Megalithic Astronomy at the Ceremonial Center of Nabta Playa
Archaeoastronomy
Archaeological sites in Egypt
Kingdom of Kush
Egyptian calendar
Prehistoric Egypt
8th-millennium BC establishments | Nabta Playa | [
"Astronomy"
] | 2,433 | [
"Archaeoastronomy",
"Astronomical sub-disciplines"
] |
3,085,195 | https://en.wikipedia.org/wiki/Roughcast | Roughcast or pebbledash is a coarse plaster surface used on outside walls that consists of lime and sometimes cement mixed with sand, small gravel and often pebbles or shells. The materials are mixed into a slurry and are then thrown at the working surface with a trowel or scoop. The idea is to maintain an even spread, free from lumps, ridges or runs and without missing any background. Roughcasting incorporates the stones in the mix, whereas pebbledashing adds them on top.
According to the Encyclopædia Britannica Eleventh Edition (1910–1911), roughcast had been a widespread exterior coating given to the walls of common dwellings and outbuildings, but it was then frequently employed for decorative effect on country houses, especially those built using timber framing (half timber). Variety can be obtained on the surface of the wall by small pebbles of different colours, and in the Tudor period fragments of glass were sometimes embedded.
Though it has occasionally been a home-design fad, its general unpopularity in the UK was estimated to reduce the value of a property by up to 5%. However, roughcasting remains very popular in Scotland and rural Ireland, with a high percentage of new houses being built with roughcasting.
This exterior wall finish was made popular in England and Wales during the 1920s, when housing was in greater demand and house builders were forced to cut costs wherever they could; pebbledash was used to cover poor-quality brickwork while also adding rudimentary weather protection.
Pebbles were dredged from the seabed to provide the building material needed, although most modern pebbledash is actually not pebbles at all, but small and sharp flint chips, known as spar dash or spa dash.
There are several varieties of this spar dash such as Canterbury spar, sharp-dash, sharpstone dash, thrown dash, pebble stucco, Derbyshire Spar, yellow spar, golden gravel, black and white, and also sunflower.
According to the Encyclopædia Britannica Eleventh Edition, the central tower of St Albans Cathedral, built with Roman tiles from Verulamium, was covered with roughcast believed to be as old as the building. The roughcast was removed around 1870.
See also
Harl
Plasterwork
Stone cladding
References
External links
Building
Building materials
Plastering | Roughcast | [
"Physics",
"Chemistry",
"Engineering"
] | 479 | [
"Building",
"Building engineering",
"Coatings",
"Architecture",
"Construction",
"Materials",
"Plastering",
"Matter",
"Building materials"
] |
3,085,232 | https://en.wikipedia.org/wiki/Baxter%20Althane%20disaster | The Baxter Althane disaster in autumn 2001 was a series of 53 sudden deaths of kidney failure patients in Spain, Croatia, Italy, Germany, Taiwan, Colombia and the USA (mainly Nebraska and Texas). All had received hospital treatment with Althane hemodialysis equipment, a product range manufactured by Baxter International, USA.
Although official investigations initially found no link between the cases, Baxter eventually published its own findings, admitting that a perfluorohydrocarbon-based cleaning fluid had not been properly removed from the tubing during manufacture. Baxter also announced the discontinuation and permanent recall of all Althane equipment. Families of most non-US victims were compensated by Baxter voluntarily, while US plaintiffs settled via a class action lawsuit. The company continues to manufacture dialysis machines of a newer design.
References
External links
Baxter News Release "Baxter Corporation responds to Croatian Investigators", 7 Jan 2002
Baxter News Release "Baxter announces agreement with families... in Spain", 28 Nov 2001
2001 health disasters
2001 industrial disasters
2001 disasters in the United States
2001 disasters in Europe
Baxter International
Medical scandals
Drug safety | Baxter Althane disaster | [
"Chemistry"
] | 223 | [
"Drug safety"
] |
3,085,304 | https://en.wikipedia.org/wiki/Nitrostarch | Nitrostarch is a secondary explosive similar to nitrocellulose. Much like starch, it is made up of two components, nitrated amylose and nitrated amylopectin. Nitrated amylopectin generally has a greater solubility than amylose; however, it is less stable than nitrated amylose.
The solubility, detonation velocity, and impact sensitivity depend heavily on the level of nitration.
Synthesis
Nitrostarch is made by dissolving starch in red fuming nitric acid. It is then precipitated by adding the solution to concentrated sulfuric acid.
Nitrostarch can be stabilized by refluxing it in ethanol to drive off the leftover nitric acid.
History
Nitrostarch was first discovered by French chemist and pharmacist Henri Braconnot.
After stabilizers (such as ammonium oxalate) were devised in the early 1900s to prolong its shelf life, it began to be used as an industrial explosive.
During World War I, it was used as a filler in hand grenades.
References
Explosive chemicals | Nitrostarch | [
"Chemistry"
] | 239 | [
"Explosive chemicals"
] |
3,085,316 | https://en.wikipedia.org/wiki/Commonly%20used%20gamma-emitting%20isotopes | Radionuclides which emit gamma radiation are valuable in a range of different industrial, scientific and medical technologies. This article lists some common gamma-emitting radionuclides of technological importance, and their properties.
Fission products
Many artificial radionuclides of technological importance are produced as fission products within nuclear reactors. A fission product is a nucleus with approximately half the mass of a uranium or plutonium nucleus which is left over after such a nucleus has been "split" in a nuclear fission reaction.
Caesium-137 is one such radionuclide. It has a half-life of 30 years, and decays by beta decay, without gamma ray emission, to a metastable state of barium-137 (barium-137m). Barium-137m has a half-life of about 2.6 minutes and is responsible for all of the gamma ray emission in this decay sequence. The ground state of barium-137 is stable.
The photon energy (energy of a single gamma ray) of barium-137m is about 662 keV. These gamma rays can be used, for example, in radiotherapy, such as for the treatment of cancer, in food irradiation, or in industrial gauges or sensors. Caesium-137 is not widely used for industrial radiography, as other nuclides, such as cobalt-60 or iridium-192, offer higher radiation output for a given volume.
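As a quantitative aside (only the 30-year half-life above comes from the source text), the surviving activity of such a source follows the usual exponential decay law N(t) = N0 · 2^(−t/T½); a minimal Python sketch:

def remaining_fraction(t_years, half_life_years=30.0):
    """Fraction of the original Cs-137 activity remaining after t years."""
    return 0.5 ** (t_years / half_life_years)

# After two half-lives (60 years), one quarter of the activity remains:
assert abs(remaining_fraction(60.0) - 0.25) < 1e-12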
Iodine-131 is another important gamma-emitting radionuclide produced as a fission product. With a short half-life of 8 days, this radioisotope is not of practical use in radioactive sources in industrial radiography or sensing. However, since iodine is a component of biological molecules such as thyroid hormones, iodine-131 is of great importance in nuclear medicine, and in medical and biological research as a radioactive tracer.
Lanthanum-140 is a decay product of barium-140, a common fission product. It is a potent gamma emitter. It was used in high quantities during the Manhattan Project for the RaLa Experiments.
Activation products
Some radionuclides, such as cobalt-60 and iridium-192, are made by the neutron irradiation of normal non-radioactive cobalt and iridium metal in a nuclear reactor, creating radioactive nuclides of these elements which contain extra neutrons, compared to the original stable nuclides.
In addition to their uses in radiography, both cobalt-60 and iridium-192 are used in the radiotherapy of cancer. Cobalt-60 tends to be used in teletherapy units as a higher photon energy alternative to caesium-137, while iridium-192 tends to be used in a different mode of therapy, internal radiotherapy or brachytherapy. The iridium wires for brachytherapy are a palladium-coated iridium/palladium alloy wire made radioactive by neutron activation. This wire is then inserted into a tumor, such as a breast tumor, and the tumor is irradiated by gamma ray photons from the wire. At the end of the treatment the wire is removed.
A rare but notable gamma source is sodium-24; this has a fairly short half-life of 15 hours, but it emits photons with very high energies (>2 MeV). It could be used for radiography of thick steel objects if the radiography occurred close to the point of production. Similarly to cobalt-60 and iridium-192, it is formed by the neutron activation of the commonly found stable isotope, sodium-23.
Minor actinides
Americium-241 has been used as a source of low energy gamma photons, in applications such as portable X-ray fluorescence equipment (XRF) and common household ionizing smoke detectors. Americium-241 is produced from plutonium-239 in nuclear reactors through multiple neutron captures and subsequent beta decays, with the plutonium-239 itself being produced mostly from neutron capture and subsequent beta decays by uranium-238 (99% of natural uranium and usually roughly 97% of low enriched uranium or MOX fuel).
Natural radioisotopes
Many years ago radium-226 and radon-222 were used as gamma-ray sources for industrial radiography: for instance, a radon-222 source was used to examine the mechanisms inside an unexploded V-1 flying bomb, while some of the early Bathyspheres could be examined using radium-226 to check for cracks. Because both radium and radon are very radiotoxic and very expensive due to their natural rarity, these natural radioisotopes have fallen out of use over the last half-century, replaced by artificially created radioisotopes. Radon therapy sits on the edge of radioactive quackery and genuine radiotherapy in part due to the lack of reliable data on the stated health benefits.
Table of some useful gamma-emitting isotopes
Note that only half-lives between 100 min and 5,000 yr are listed, as shorter half-lives are usually not practical to use and longer half-lives usually mean extremely low specific activity. d = day, hr = hour, yr = year.
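The lower bound on useful half-life reflects handling time, while the upper bound reflects specific activity, which scales as (ln 2 / T½) · (N_A / M): halving the half-life doubles the activity per gram. A minimal Python sketch of that relation (the molar mass here is an approximate illustrative value):

import math

AVOGADRO = 6.02214076e23

def specific_activity_bq_per_g(half_life_s, molar_mass_g):
    """Decays per second (Bq) per gram of a pure radionuclide."""
    decay_constant = math.log(2) / half_life_s  # lambda = ln 2 / half-life
    return decay_constant * AVOGADRO / molar_mass_g

# Caesium-137 (half-life ~30 yr, molar mass ~137 g/mol): roughly 3e12 Bq/g.
print(specific_activity_bq_per_g(30 * 365.25 * 24 * 3600, 137.0))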
See also
Isotopes of caesium
Common beta emitters
http://www.iem-inc.com/information/tools/radiation-energies/gamma-emitters – a useful radioisotope search tool
References
Nuclear physics
Nuclear chemistry
Radioactivity
Isotopes
Nuclear materials
Gamma rays | Commonly used gamma-emitting isotopes | [
"Physics",
"Chemistry"
] | 1,102 | [
"Spectrum (physical sciences)",
"Nuclear chemistry",
"Electromagnetic spectrum",
"Isotopes",
"Materials",
"Nuclear materials",
"Gamma rays",
"Radioactivity",
"Nuclear physics",
"nan",
"Matter"
] |
3,085,681 | https://en.wikipedia.org/wiki/Framatome | Framatome () is a French nuclear reactor business. It is owned by Électricité de France (EDF) (80.5%) and Mitsubishi Heavy Industries (19.5%).
The company first formed in 1958 to license Westinghouse's pressurized water reactor (PWR) designs for use in France. Similar agreements had been put in place with other European countries, and this led to a 1962 contract for a complete plant at Chooz. Westinghouse sold its stake to engineering firm Creusot-Loire in 1976, and the company became solely French owned.
In 2001, Siemens sold its reactor business to Framatome. As part of a larger series of mergers with Cogema and Technicatome, Framatome became the Areva NP division of the new Areva. It changed its name back to Framatome in 2018 after a major investment by utility operator EDF.
While originally a licensing and construction business, today Framatome supplies the entire reactor life-cycle, including design of the European Pressurized Reactor (EPR), construction, fuel management and many related tasks.
History
Framatome was founded in 1958 by several companies of the French industrial giant Schneider Group along with Empain, Merlin Gérin, and the American Westinghouse, in order to license Westinghouse's pressurized water reactor (PWR) technology and develop a bid for Chooz A (in France). Called Franco-Américaine de Constructions Atomiques (Framatome), the original company consisted of four engineers, one from each of the parent companies.
The original mission of the company was to act as a nuclear engineering firm and to develop a nuclear power plant that was to be identical to Westinghouse's existing product specifications. The first European plant of Westinghouse design was by then already under construction in Italy. A formal contract was signed in September 1961 for Framatome to deliver a turnkey system, that is, not only the reactor, but an entire, ready-to-use system of piping, cabling, supports, and other auxiliary systems, propelling Framatome from a nuclear engineering firm to an industrial contractor.
In January 1976, Westinghouse agreed to sell its remaining 15% share to Creusot-Loire, which now owned 66%, and to cede complete marketing independence to Framatome. In February, the Belgian Édouard-Jean Empain sold his 35% interest in Creusot-Loire to Paribas, a French government-linked banking group.
A January 1982 company reorganization simultaneously strengthened French public and private control of the company by allowing Creusot-Loire to increase its share of the company while increasing CEA say in the running of the firm. In 2001, German company Siemens' nuclear business was merged into Framatome. Framatome and Siemens had been officially cooperating since 1989 on the development of the European Pressurized Reactor (EPR).
In 2001, after a merger with Cogema (now Orano) and Technicatome, a new nuclear conglomerate called Areva was formed, and Framatome became Areva NP. In 2007, Areva and Mitsubishi Heavy Industries created a joint venture named Atmea, for marketing the ATMEA1 reactor design. In 2009, Areva NP acquired 30% stake in the Mitsubishi Nuclear Fuel company.
In 2009, Siemens sold its remaining shares in Areva NP. In 2018, after the restructuring of Areva, Areva NP was sold to Électricité de France; Mitsubishi Heavy Industries (19.5%) and Assystem (5%) also became shareholders. As a result of the restructuring, Électricité de France and Mitsubishi Heavy Industries became equal shareholders of Atmea, each holding 50% of its shares, while Framatome owns a special share in Atmea.
Operations
Framatome designs, manufactures, and installs components, fuel, and instrumentation and control systems for nuclear power plants, and offers a full range of reactor services. It is responsible for the Flamanville 3, Taishan 1 and 2, and Hinkley Point C projects. In addition, Framatome is conducting a preliminary study for the construction of six reactors at the Jaitapur Nuclear Power Project in the Indian state of Maharashtra.
Framatome provides the EPR, a third-generation pressurised water reactor (PWR) design, and the Kerena, a 1,250 MWe Generation III+ boiling water reactor (BWR) design provisionally known as SWR-1000. The Kerena design was developed from that of the Gundremmingen Nuclear Power Plant by Areva, with extensive German input and using operating experience from Generation II BWRs to simplify systems engineering.
In 2016, following a discovery at Flamanville 3, about 400 large steel forgings manufactured by Framatome's Le Creusot Forge operation since 1965 were found to have carbon-content irregularities that weakened the steel. A widespread programme of French reactor checks was started involving a progressive programme of reactor shutdowns, continued over the winter high electricity demand period into 2017. In December 2016 the Wall Street Journal characterised the problem as a "decades long coverup of manufacturing problems", with Framatome executives acknowledging that Le Creusot had been falsifying documents. Le Creusot Forge was out of operation from December 2015 to January 2018 while improvements to process controls, the quality management system, organisation and safety culture were made.
In 2020 Framatome won an order to deliver reactor protection systems for the Russian VVER-TOI design nuclear reactors at Kursk II.
Locations
France
18 sites spread throughout the country
7500+ employees
Germany
3 locations: Erlangen, Karlstein and Lingen
3000+ employees
China
8 sites : Beijing, Lianyungang, Shanghai, Qinshan, Fuqing, Daya Bay, Yangjiang and Taishan
4,000 experts worldwide providing vital support (ACNS)
USA
14 sites: Benicia, CA, Charlotte, NC, Cranberry Township, PA, Fort Worth, TX, Foxborough, MA, Houston, TX, Jacksonville, FL, Lake Forest, CA, Lynchburg, VA (3 locations), Richland, WA, and Washington, D.C.
2,320 employees
Canada
3 sites (Kincardine, ON, Montreal, QC, and Pickering, ON)
UK
3 sites (Bristol and Cranfield)
References
External links
Areva
French companies established in 1958
Energy companies established in 1958
Nuclear technology companies of France
Technology companies established in 1958
Engineering companies of France
Electrical engineering companies
Électricité de France | Framatome | [
"Engineering"
] | 1,345 | [
"Electrical engineering companies",
"Electrical engineering organizations",
"Engineering companies"
] |
3,085,825 | https://en.wikipedia.org/wiki/Serre%20spectral%20sequence | In mathematics, the Serre spectral sequence (sometimes Leray–Serre spectral sequence to acknowledge earlier work of Jean Leray in the Leray spectral sequence) is an important tool in algebraic topology. It expresses, in the language of homological algebra, the singular (co)homology of the total space X of a (Serre) fibration in terms of the (co)homology of the base space B and the fiber F. The result is due to Jean-Pierre Serre in his doctoral dissertation.
Cohomology spectral sequence
Let $f \colon X \to B$ be a Serre fibration of topological spaces, and let F be the (path-connected) fiber. The Serre cohomology spectral sequence is the following:
$$E_2^{p,q} = H^p(B;\, H^q(F)) \Rightarrow H^{p+q}(X).$$
Here, at least under standard simplifying conditions, the coefficient group in the $E_2$-term is the q-th integral cohomology group of F, and the outer group is the singular cohomology of B with coefficients in that group. The differential on the kth page is $d_k \colon E_k^{p,q} \to E_k^{p+k,\,q-k+1}$.
Strictly speaking, what is meant is cohomology with respect to the local coefficient system on B given by the cohomology of the various fibers. Assuming, for example, that B is simply connected, this collapses to the usual cohomology. For a path-connected base, all the different fibers are homotopy equivalent. In particular, their cohomology groups are isomorphic, so the choice of "the" fiber does not give any ambiguity.
The abutment $H^{p+q}(X)$ means the integral cohomology of the total space X.
This spectral sequence can be derived from an exact couple built out of the long exact sequences of the cohomology of the pairs $(X_p, X_{p-1})$, where $X_p$ is the restriction of the fibration over the p-skeleton of B. More precisely, using this notation,
$$D = \bigoplus_{p,q} H^{p+q}(X_p), \qquad E = \bigoplus_{p,q} H^{p+q}(X_p, X_{p-1}),$$
f is defined by restricting each piece on $X_p$ to $X_{p-1}$, g is defined using the coboundary map in the long exact sequence of the pair $(X_p, X_{p-1})$, and h is defined by restricting $(X_p, X_{p-1})$ to $X_p$.
There is a multiplicative structure
$$E_r^{p,q} \times E_r^{s,t} \to E_r^{p+s,\,q+t},$$
coinciding on the $E_2$-term with $(-1)^{qs}$ times the cup product, and with respect to which the differentials $d_r$ are (graded) derivations inducing the product on the $E_{r+1}$-page from the one on the $E_r$-page.
Homology spectral sequence
Similarly to the cohomology spectral sequence, there is one for homology:
$$E^2_{p,q} = H_p(B;\, H_q(F)) \Rightarrow H_{p+q}(X),$$
where the notations are dual to the ones above; in particular, the differential on the kth page is a map $d^k \colon E^k_{p,q} \to E^k_{p-k,\,q+k-1}$.
Example computations
Hopf fibration
Recall that the Hopf fibration is given by $S^1 \hookrightarrow S^3 \to S^2$. The $E_2$-page of the Leray–Serre spectral sequence has entries $E_2^{p,q} = H^p(S^2;\, H^q(S^1))$, written out in the display below.
The differential goes down and right. Thus the only differential which is not necessarily zero is $d_2 \colon E_2^{0,1} \to E_2^{2,0}$, because the rest have domain or codomain 0 (since those entries are 0 on the $E_2$-page). In particular, this sequence degenerates at $E_3 = E_\infty$.
The spectral sequence abuts to $H^{p+q}(S^3)$, i.e. $E_3^{p,q} \cong H^{p+q}(S^3)$. Evaluating at the interesting parts, we have $E_3^{0,1} = \ker(d_2)$ and $E_3^{2,0} = \operatorname{coker}(d_2)$. Knowing that the cohomology groups $H^1(S^3)$ and $H^2(S^3)$ are both zero, the differential $d_2$ must be an isomorphism.
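A restatement of the two pages in LaTeX, as referenced above (only the layout is new; the entries follow from $H^q(S^1) = \mathbb{Z}$ for $q = 0, 1$):
$$E_2^{p,q} = H^p\big(S^2;\, H^q(S^1;\mathbb{Z})\big) \cong \begin{cases} \mathbb{Z} & (p,q) \in \{(0,0),\,(0,1),\,(2,0),\,(2,1)\}, \\ 0 & \text{otherwise,} \end{cases}$$
and the isomorphism $d_2 \colon E_2^{0,1} \to E_2^{2,0}$ cancels the two middle entries, so that
$$E_3^{p,q} = E_\infty^{p,q} \cong \begin{cases} \mathbb{Z} & (p,q) \in \{(0,0),\,(2,1)\}, \\ 0 & \text{otherwise,} \end{cases}$$
recovering $H^0(S^3) = H^3(S^3) = \mathbb{Z}$ and $H^1(S^3) = H^2(S^3) = 0$.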
Sphere bundle on a complex projective variety
Given a complex n-dimensional projective variety $X$ there is a canonical family of line bundles $\mathcal{O}_X(k)$ for $k \in \mathbb{Z}$ coming from the embedding $X \hookrightarrow \mathbb{CP}^N$. These are the pullbacks of the line bundles $\mathcal{O}_{\mathbb{CP}^N}(k)$ along the embedding.
If we construct a rank-$r$ vector bundle $E$ which is a finite Whitney sum of vector bundles, we can construct a sphere bundle $S(E) \to X$ whose fibers are the spheres $S^{2r-1}$. Then, we can use the Serre spectral sequence along with the Euler class to compute the integral cohomology of $S(E)$. The $E_2$-page is given by $E_2^{p,q} = H^p(X;\, H^q(S^{2r-1}))$. We see that the only non-trivial differentials are given on the $E_{2r}$-page and are defined by cupping with the Euler class $e(E)$, which in this case is given by the top Chern class of $E$. For example, consider the vector bundle for a K3 surface. Then, the spectral sequence reads as
The differential for is the square of the Lefschetz class. In this case, the only non-trivial differential is then
We can finish this computation by noting the only nontrivial cohomology groups are
Basic pathspace fibration
We begin first with a basic example; consider the path space fibration
$$\Omega S^{n+1} \hookrightarrow P S^{n+1} \to S^{n+1}.$$
We know the homology of the base and total space, so our intuition tells us that the Serre spectral sequence should be able to tell us the homology of the loop space. This is an example of a case where we can study the homology of a fibration by using the $E^\infty$ page (the homology of the total space) to control what can happen on the $E^2$ page. So recall that
$$E^2_{p,q} = H_p\big(S^{n+1};\, H_q(\Omega S^{n+1})\big).$$
Thus we know when q = 0, we are just looking at the regular integer-valued homology groups $H_p(S^{n+1})$, which are $\mathbb{Z}$ in degrees 0 and n+1 and 0 everywhere else. However, since the path space is contractible, we know that by the time the sequence gets to $E^\infty$, everything becomes 0 except for the group at p = q = 0. The only way this can happen is if every other nonzero group is cancelled by an isomorphism. However, the only places a group can be nonzero are in the columns p = 0 and p = n+1, so this isomorphism must occur on the page $E^{n+1}$, whose differential carries the column p = n+1 to the column p = 0. Cancelling the $\mathbb{Z}$ at (n+1, 0) puts a $\mathbb{Z}$ at (0, n), and that in turn means there must be a $\mathbb{Z}$ at $H_{n+1}(S^{n+1}; H_n(F))$, which must itself be cancelled. Inductively repeating this process shows that $H_i(\Omega S^{n+1})$ is $\mathbb{Z}$ at integer multiples of n and 0 everywhere else.
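In LaTeX, the conclusion just derived reads:
$$H_i\big(\Omega S^{n+1};\,\mathbb{Z}\big) \cong \begin{cases} \mathbb{Z} & \text{if } i \equiv 0 \pmod{n},\ i \ge 0, \\ 0 & \text{otherwise.} \end{cases}$$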
Cohomology ring of complex projective space
We compute the cohomology of $\mathbb{CP}^n$ using the fibration:
$$S^1 \hookrightarrow S^{2n+1} \to \mathbb{CP}^n.$$
Now, on the $E_2$ page, in the (0,0) coordinate we have the identity of the ring. In the (0,1) coordinate, we have an element i that generates $H^1(S^1)$. However, we know that by the limit page, there can only be nontrivial generators in degree 2n+1, telling us that the generator i must transgress to some element x in the (2,0) coordinate. Now, this tells us that there must be an element ix in the (2,1) coordinate. We then see that $d(ix) = x^2$ by the Leibniz rule, telling us that the (4,0) coordinate must be $x^2$, since there can be no nontrivial homology until degree 2n+1. Repeating this argument inductively until 2n+1 gives $ix^n$ in coordinate (2n,1), which must then be the only generator of $H^{2n+1}(S^{2n+1})$ in that degree, thus telling us that the (2n+1,0) coordinate must be 0. Reading off the horizontal bottom row of the spectral sequence gives us the cohomology ring of $\mathbb{CP}^n$, and it tells us that the answer is
$$H^\bullet(\mathbb{CP}^n;\,\mathbb{Z}) \cong \mathbb{Z}[x]/(x^{n+1}), \qquad \deg x = 2.$$
In the case of infinite complex projective space, taking limits gives the answer
$$H^\bullet(\mathbb{CP}^\infty;\,\mathbb{Z}) \cong \mathbb{Z}[x], \qquad \deg x = 2.$$
Fourth homotopy group of the three-sphere
A more sophisticated application of the Serre spectral sequence is the computation of $\pi_4(S^3)$. This particular example illustrates a systematic technique which one can use in order to deduce information about the higher homotopy groups of spheres. Consider the following map, which is an isomorphism on $\pi_3$:
$$S^3 \to K(\mathbb{Z}, 3),$$
where $K(\mathbb{Z}, 3)$ is an Eilenberg–MacLane space. We then further convert the map to a fibration; it is general knowledge that the iterated fiber is the loop space of the base space, so if $X$ denotes the homotopy fiber, we get that the fiber of the fibration $X \to S^3$ is $\Omega K(\mathbb{Z},3) = K(\mathbb{Z},2)$. But we know that $K(\mathbb{Z},2) = \mathbb{CP}^\infty$. Now we look at the cohomological Serre spectral sequence: we suppose we have a generator for the degree 3 cohomology of $S^3$, called $h$. Since there is nothing in degree 3 in the total cohomology (X is 3-connected), we know this must be killed by an isomorphism. But the only element that can map to it is the generator a of the cohomology ring of $\mathbb{CP}^\infty$, so we have $d_3(a) = h$. Therefore by the cup product structure, the generator in degree 4, $a^2$, maps to the generator $ah$ by multiplication by 2, and the generator of cohomology in degree 6, $a^3$, maps to $a^2 h$ by multiplication by 3, etc. In particular we find that $H^4(X) = 0$ and $H^5(X) = \mathbb{Z}/2$, hence (by the universal coefficient theorem) $H_4(X) = \mathbb{Z}/2$. But now since we killed off the lower homotopy groups of X (i.e., the groups in degrees less than 4) by using the iterated fibration, we know that $\pi_4(X) \cong H_4(X)$ by the Hurewicz theorem, telling us that $\pi_4(S^3) \cong \mathbb{Z}/2$.
Corollary: $\pi_4(S^2) \cong \mathbb{Z}/2$.
Proof: Take the long exact sequence of homotopy groups for the Hopf fibration $S^1 \hookrightarrow S^3 \to S^2$; since $\pi_n(S^1) = 0$ for $n \ge 2$, it gives $\pi_4(S^3) \cong \pi_4(S^2)$.
See also
Gysin sequence
References
The Serre spectral sequence is covered in most textbooks on algebraic topology, e.g.
Allen Hatcher, Spectral Sequences
Edwin Spanier, Algebraic topology, Springer
Also
James Davis, Paul Kirk, Lecture notes in algebraic topology gives many nice applications of the Serre spectral sequence.
An elegant construction is due to
Andreas Dress, Zur Spektralsequenz einer Faserung, Inventiones Mathematicae 3 (1967), 172–178, EuDML.
The case of simplicial sets is treated in
Paul Goerss, Rick Jardine, Simplicial homotopy theory, Birkhäuser
Algebraic topology
Spectral sequences | Serre spectral sequence | [
"Mathematics"
] | 1,848 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
3,085,898 | https://en.wikipedia.org/wiki/System%20File%20Checker | System File Checker (SFC) is a utility in Microsoft Windows that allows users to scan for and restore corrupted Windows system files.
Overview
Microsoft ships this utility with Windows 98, Windows 2000 and all subsequent versions of the Windows NT family of operating systems. In Windows Vista, Windows 7 and Windows 10, System File Checker is integrated with Windows Resource Protection (WRP), which protects registry keys and folders as well as critical system files. Under Windows Vista, sfc.exe can be used to check specific folder paths, including the Windows folder and the boot folder.
Windows File Protection (WFP) works by registering for notification of file changes in Winlogon. If any changes are detected to a protected system file, the modified file is restored from a cached copy located in a compressed folder at %WinDir%\System32\dllcache.
Windows Resource Protection (WRP) works by setting discretionary access-control lists (DACLs) and access control lists (ACLs) defined for protected resources. If any changes are detected to a protected system file, the modified file is restored from a cached copy located in a folder at %WinDir%\WinSxS\Backup. Permission for full access to modify WRP-protected resources is restricted to the processes using the Windows Modules Installer service (TrustedInstaller.exe). Administrators no longer have full rights to system files.
History
Due to problems with Windows applications being able to overwrite system files in Windows 95, Microsoft has since implemented a number of security measures to protect system files from malicious attacks, corruptions, or problems such as DLL Hell.
System File Checker was first introduced on Windows 98 as a GUI utility. It offered scanning and restoration of corrupted system files by matching the version number against a database containing the original version number of the files in a fresh Windows 98 installation. This method of file protection was basic. It determined system files by file extension and file path. It was able to restore files from the installation media or a source specified by the user. Windows 98 did not offer real-time system file protection beyond file attributes; therefore, no preventive or reactive measure was available.
All Windows NT-based operating systems since Windows 2000 introduced real-time file protection, called Windows File Protection (WFP).
In addition, the System File Checker utility (sfc.exe) was reimplemented as a more robust command-line utility that integrated with WFP. Unlike the Windows 98 SFC utility, the new utility forces a scan of protected system files using Windows File Protection and allows the immediate silent restoration of system files from the DLLCache folder or installation media.
SFC did not appear on Windows ME, as it was replaced with System File Protection (SFP). Similar to WFP, SFP offered real-time protection.
Issues
The System File Checker component included with versions of Windows 2000 earlier than Service Pack 4 overrode patches distributed by Microsoft; this was rectified in Windows 2000 Service Pack 4.
Usage
In Windows NT-based operating systems, System File Checker can be invoked via Windows Command Prompt (with Admin privilege), with the following command:
sfc /scannow (to repair problems)
or sfc /verifyonly (no repair)
If it finds a problem, it will attempt to replace the problematic files from the DLL Cache (%WinDir%\System32\dllcache). If the file is not in the DLL Cache or the DLL Cache is corrupted, the user will be prompted to insert the Windows installation media or provide the network installation path. System File Checker determines the Windows installation source path from the registry values SourcePath and ServicePackSourcePath. It may keep prompting for the installation media even if the user supplies it if these values are not correctly set.
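Scripted invocation follows the same pattern; a minimal Python sketch, assuming an elevated session on Windows (sfc.exe emits UTF-16 encoded output on most versions, hence the explicit decode):

import subprocess

# Run System File Checker in verify-only mode; requires administrator rights.
result = subprocess.run(["sfc", "/verifyonly"], capture_output=True)

# sfc.exe typically writes UTF-16LE text to stdout.
print(result.stdout.decode("utf-16-le", errors="replace"))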
In Windows Vista and onwards, files are protected using access control lists (ACLs), and if it finds a problem, it will attempt to replace the problematic files from the Windows Side-by-side Backup (%WinDir%\WinSxS\Backup). However, the above command has not changed.
System File Checker in Windows Vista and later Windows operating systems can scan specified files. Also, scans can be performed against an offline Windows installation folder to replace corrupt files, in case the Windows installation is not bootable. For performing offline scans, System File Checker must be run from another working installation of Windows Vista or a later operating system or from the Windows setup DVD or a recovery drive which gives access to the Windows Recovery Environment.
In cases where the component store is corrupted, the "System Update Readiness tool" (CheckSUR) can be installed on Windows 7, Windows Vista, Windows Server 2008 R2 or Windows Server 2008, replaced by "Deployment Image Service and Management Tool" (DISM) for Windows 10, Windows 8.1, Windows 8, Windows Server 2012 R2 or Windows Server 2012. This tool checks the store against its own payload and repairs the corruptions that it detects by downloading required files through Windows update.
References
Further reading
External links
sfc | Microsoft Docs
Use the System File Checker tool to repair missing or corrupted system files
Description of Windows XP and Windows Server 2003 System File Checker (Sfc.exe)
Windows administration
Windows components | System File Checker | [
"Technology"
] | 1,084 | [
"Windows commands",
"Computing commands"
] |
3,085,914 | https://en.wikipedia.org/wiki/Gibbs%20measure | In physics and mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is a generalization of the canonical ensemble to infinite systems.
The canonical ensemble gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as
$$P(X = x) = \frac{1}{Z(\beta)} \exp\!\big(-\beta E(x)\big).$$
Here, $E(x)$ is a function from the space of states to the real numbers; in physics applications, $E(x)$ is interpreted as the energy of the configuration x. The parameter $\beta$ is a free parameter; in physics, it is the inverse temperature. The normalizing constant $Z(\beta) = \sum_x \exp(-\beta E(x))$ is the partition function. However, in infinite systems, the total energy is no longer a finite number and cannot be used in the traditional construction of the probability distribution of a canonical ensemble. Traditional approaches in statistical physics studied the limit of intensive properties as the size of a finite system approaches infinity (the thermodynamic limit). When the energy function can be written as a sum of terms that each involve only variables from a finite subsystem, the notion of a Gibbs measure provides an alternative approach. Gibbs measures were proposed by probability theorists such as Dobrushin, Lanford, and Ruelle and provided a framework to directly study infinite systems, instead of taking the limit of finite systems.
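For a finite state space the formula above can be evaluated directly; a minimal Python sketch (the function name and the example energies are illustrative only):

import math

def gibbs_probabilities(energies, beta):
    """Canonical-ensemble probabilities P(x) = exp(-beta * E(x)) / Z(beta)."""
    weights = [math.exp(-beta * e) for e in energies]
    Z = sum(weights)  # partition function
    return [w / Z for w in weights]

# Larger beta (lower temperature) concentrates mass on low-energy states:
print(gibbs_probabilities([0.0, 1.0, 2.0], beta=5.0))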
A measure is a Gibbs measure if the conditional probabilities it induces on each finite subsystem satisfy a consistency condition: if all degrees of freedom outside the finite subsystem are frozen, the canonical ensemble for the subsystem subject to these boundary conditions matches the probabilities in the Gibbs measure conditional on the frozen degrees of freedom.
The Hammersley–Clifford theorem implies that any probability measure that satisfies a Markov property is a Gibbs measure for an appropriate choice of (locally defined) energy function. Therefore, the Gibbs measure applies to widespread problems outside of physics, such as Hopfield networks, Markov networks, Markov logic networks, and boundedly rational potential games in game theory and economics.
A Gibbs measure in a system with local (finite-range) interactions maximizes the entropy density for a given expected energy density; or, equivalently, it minimizes the free energy density.
The Gibbs measure of an infinite system is not necessarily unique, in contrast to the canonical ensemble of a finite system, which is unique. The existence of more than one Gibbs measure is associated with statistical phenomena such as symmetry breaking and phase coexistence.
Statistical physics
The set of Gibbs measures on a system is always convex, so there is either a unique Gibbs measure (in which case the system is said to be "ergodic"), or there are infinitely many (and the system is called "nonergodic"). In the nonergodic case, the Gibbs measures can be expressed as the set of convex combinations of a much smaller number of special Gibbs measures known as "pure states" (not to be confused with the related but distinct notion of pure states in quantum mechanics). In physical applications, the Hamiltonian (the energy function) usually has some sense of locality, and the pure states have the cluster decomposition property that "far-separated subsystems" are independent. In practice, physically realistic systems are found in one of these pure states.
If the Hamiltonian possesses a symmetry, then a unique (i.e. ergodic) Gibbs measure will necessarily be invariant under the symmetry. But in the case of multiple (i.e. nonergodic) Gibbs measures, the pure states are typically not invariant under the Hamiltonian's symmetry. For example, in the infinite ferromagnetic Ising model below the critical temperature, there are two pure states, the "mostly-up" and "mostly-down" states, which are interchanged under the model's symmetry.
Markov property
An example of the Markov property can be seen in the Gibbs measure of the Ising model. The probability for a given spin $\sigma_k$ to be in state s could, in principle, depend on the states of all other spins in the system. Thus, we may write the probability as
$$P(\sigma_k = s \mid \sigma_j,\ j \ne k).$$
However, in an Ising model with only finite-range interactions (for example, nearest-neighbor interactions), we actually have
$$P(\sigma_k = s \mid \sigma_j,\ j \ne k) = P(\sigma_k = s \mid \sigma_j,\ j \in N_k),$$
where $N_k$ is a neighborhood of the site $k$. That is, the probability at site $k$ depends only on the spins in a finite neighborhood. This last equation is in the form of a local Markov property. Measures with this property are sometimes called Markov random fields. More strongly, the converse is also true: any positive probability distribution (nonzero density everywhere) having the Markov property can be represented as a Gibbs measure for an appropriate energy function. This is the Hammersley–Clifford theorem.
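For the nearest-neighbour Ising model this local conditional probability has a simple closed form, a logistic function of the local field; a minimal Python sketch, assuming coupling J, external field h, and spins valued ±1 (all parameter names are illustrative):

import math

def prob_spin_up(neighbour_spins, beta, J=1.0, h=0.0):
    """P(sigma_k = +1 | neighbours) for a nearest-neighbour Ising model.

    Only terms involving site k survive in the conditional of the Gibbs
    form, leaving a logistic function of the local field J*sum + h.
    """
    local_field = J * sum(neighbour_spins) + h
    return 1.0 / (1.0 + math.exp(-2.0 * beta * local_field))

# With all four neighbours up on the square lattice, spin-up is strongly favoured:
print(prob_spin_up([+1, +1, +1, +1], beta=0.5))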
Formal definition on lattices
What follows is a formal definition for the special case of a random field on a lattice. The idea of a Gibbs measure is, however, much more general than this.
The definition of a Gibbs random field on a lattice requires some terminology:
The lattice: A countable set $\mathbb{L}$.
The single-spin space: A probability space $(S, \mathcal{S}, \lambda)$.
The configuration space: $(\Omega, \mathcal{F})$, where $\Omega = S^{\mathbb{L}}$ and $\mathcal{F} = \mathcal{S}^{\mathbb{L}}$.
Given a configuration $\omega \in \Omega$ and a subset $\Lambda \subset \mathbb{L}$, the restriction of $\omega$ to $\Lambda$ is $\omega_\Lambda = (\omega(t))_{t \in \Lambda}$. If $\Lambda_1 \cap \Lambda_2 = \emptyset$ and $\Lambda_1 \cup \Lambda_2 = \mathbb{L}$, then the configuration $\omega_{\Lambda_1} \omega_{\Lambda_2}$ is the configuration whose restrictions to $\Lambda_1$ and $\Lambda_2$ are $\omega_{\Lambda_1}$ and $\omega_{\Lambda_2}$, respectively.
The set $\mathcal{L}$ of all finite subsets of $\mathbb{L}$.
For each subset $\Lambda \subset \mathbb{L}$, $\mathcal{F}_\Lambda$ is the $\sigma$-algebra generated by the family of functions $(\sigma(t))_{t \in \Lambda}$, where $\sigma(t)(\omega) = \omega(t)$. The union of these $\sigma$-algebras as $\Lambda$ varies over $\mathcal{L}$ is the algebra of cylinder sets on the lattice.
The potential: A family $\Phi = (\Phi_A)_{A \in \mathcal{L}}$ of functions $\Phi_A \colon \Omega \to \mathbb{R}$ such that
For each $A \in \mathcal{L}$, $\Phi_A$ is $\mathcal{F}_A$-measurable, meaning it depends only on the restriction $\omega_A$ (and does so measurably).
For all $\Lambda \in \mathcal{L}$ and $\omega \in \Omega$, the following series exists:
$$H_\Lambda^\Phi(\omega) = \sum_{A \in \mathcal{L},\, A \cap \Lambda \neq \emptyset} \Phi_A(\omega).$$
We interpret $\Phi_A$ as the contribution to the total energy (the Hamiltonian) associated to the interaction among all the points of finite set A.
Then $H_\Lambda^\Phi(\omega)$ is the contribution to the total energy of all the finite sets A that meet $\Lambda$. Note that the total energy is typically infinite, but when we "localize" to each $\Lambda \in \mathcal{L}$ it may be finite, we hope.
The Hamiltonian in $\Lambda$ with boundary conditions $\bar\omega$, for the potential $\Phi$, is defined by
$$H_\Lambda^\Phi(\omega \mid \bar\omega) = H_\Lambda^\Phi\big(\omega_\Lambda \bar\omega_{\Lambda^c}\big),$$
where $\omega_\Lambda \bar\omega_{\Lambda^c}$ denotes the configuration taking the values of $\omega$ in $\Lambda$, and those of $\bar\omega$ in $\Lambda^c = \mathbb{L} \setminus \Lambda$.
The partition function in $\Lambda$ with boundary conditions $\bar\omega$ and inverse temperature $\beta > 0$ (for the potential $\Phi$ and $\lambda$) is defined by
$$Z_\Lambda^\beta(\bar\omega) = \int \lambda^\Lambda(\mathrm{d}\omega)\, \exp\!\big(-\beta H_\Lambda^\Phi(\omega \mid \bar\omega)\big),$$
where
$$\lambda^\Lambda(\mathrm{d}\omega) = \prod_{t \in \Lambda} \lambda(\mathrm{d}\omega(t))$$
is the product measure.
A potential $\Phi$ is $\lambda$-admissible if $Z_\Lambda^\beta(\bar\omega)$ is finite for all $\Lambda \in \mathcal{L}$, $\bar\omega \in \Omega$ and $\beta > 0$.
A probability measure $\mu$ on $(\Omega, \mathcal{F})$ is a Gibbs measure for a $\lambda$-admissible potential $\Phi$ if it satisfies the Dobrushin–Lanford–Ruelle (DLR) equation
$$\int \mu(\mathrm{d}\bar\omega)\, Z_\Lambda^\beta(\bar\omega)^{-1} \int \lambda^\Lambda(\mathrm{d}\omega)\, \exp\!\big(-\beta H_\Lambda^\Phi(\omega \mid \bar\omega)\big)\, 1_A\big(\omega_\Lambda \bar\omega_{\Lambda^c}\big) = \mu(A),$$
for all $A \in \mathcal{F}$ and $\Lambda \in \mathcal{L}$.
An example
To help understand the above definitions, here are the corresponding quantities in the important example of the Ising model with nearest-neighbor interactions (coupling constant $J$) and a magnetic field ($h$), on $\mathbb{Z}^d$:
The lattice is simply $\mathbb{L} = \mathbb{Z}^d$.
The single-spin space is $S = \{-1, +1\}$.
The potential is given by
$$\Phi_A(\omega) = \begin{cases} -J\,\omega(t_1)\omega(t_2) & \text{if } A = \{t_1, t_2\} \text{ with } \|t_1 - t_2\|_1 = 1, \\ -h\,\omega(t) & \text{if } A = \{t\}, \\ 0 & \text{otherwise.} \end{cases}$$
See also
Boltzmann distribution
Exponential family
Gibbs algorithm
Gibbs sampling
Interacting particle system
Potential game
Softmax
Stochastic cellular automata
References
Further reading
Measures (measure theory)
Statistical mechanics
Game theory equilibrium concepts | Gibbs measure | [
"Physics",
"Mathematics"
] | 1,433 | [
"Physical quantities",
"Measures (measure theory)",
"Quantity",
"Size",
"Game theory",
"Statistical mechanics",
"Game theory equilibrium concepts"
] |
3,086,043 | https://en.wikipedia.org/wiki/Dacryphilia | Dacryphilia (also known as dacrylagnia) is a form of paraphilia in which one is aroused by tears or sobbing.
The term comes from the Greek words δάκρυον (dákryon), meaning "tears", and φιλία (philía), meaning "love".
Dacryphilia is an underexplored aspect of non-normative sexual interests. Psychologists Richard Greenhill and Mark D. Griffiths from Nottingham Trent University conducted the first empirical study on dacryphilia, published in March 2015. The study, comprising online interviews, included six females and two males, three of whom were also involved in BDSM. The researchers identified three themes: compassion, curled lips, and dominance/submission. The paraphilia may be experienced by those who do not consider themselves dominant or submissive and who are motivated by compassion. Half of the participants, all women, associated dacryphilia with arousal from comforting a crier out of compassion, and shared a fantasy of meeting someone who has faced hardships and providing them comfort. Individuals with dacryphilia may find arousal when their partner cries during a movie, or from the normal emotional vulnerability and strong feelings of love that may make a partner cry during intercourse.
References
Paraphilias
Sexual fetishism
Crying | Dacryphilia | [
"Biology"
] | 258 | [
"Behavior",
"Sexuality stubs",
"Sexuality",
"Crying",
"Human behavior"
] |
3,086,101 | https://en.wikipedia.org/wiki/Burr%E2%80%93Erd%C5%91s%20conjecture | In mathematics, the Burr–Erdős conjecture was a problem concerning the Ramsey number of sparse graphs. The conjecture is named after Stefan Burr and Paul Erdős, and is one of many conjectures named after Erdős; it states that the Ramsey number of graphs in any sparse family of graphs should grow linearly in the number of vertices of the graph.
The conjecture was proved by Choongbum Lee; thus it is now a theorem.
Definitions
If G is an undirected graph, then the degeneracy of G is the minimum number p such that every subgraph of G contains a vertex of degree p or smaller. A graph with degeneracy p is called p-degenerate. Equivalently, a p-degenerate graph is a graph that can be reduced to the empty graph by repeatedly removing a vertex of degree p or smaller.
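The equivalent characterisation just given yields a simple greedy algorithm: repeatedly delete a vertex of minimum degree and record the largest minimum degree encountered. A minimal Python sketch (the adjacency-dictionary representation and names are illustrative):

def degeneracy(adj):
    """Degeneracy of an undirected graph given as {vertex: set of neighbours}."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # a vertex of minimum degree
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

# A cycle is 2-degenerate:
assert degeneracy({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}) == 2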
It follows from Ramsey's theorem that for any graph G there exists a least integer $r(G)$, the Ramsey number of G, such that any complete graph on at least $r(G)$ vertices whose edges are coloured red or blue contains a monochromatic copy of G. For instance, the Ramsey number of a triangle is 6: no matter how the edges of a complete graph on six vertices are colored red or blue, there is always either a red triangle or a blue triangle.
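This small case can be checked by exhaustive search over all two-colourings; a minimal Python sketch (names illustrative; the search over K6 covers 2^15 colourings and runs in well under a second):

from itertools import combinations, product

def has_mono_triangle(n, colour):
    """colour maps each edge (i, j) with i < j to 0 (red) or 1 (blue)."""
    return any(
        colour[(a, b)] == colour[(b, c)] == colour[(a, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, bits)))
        for bits in product((0, 1), repeat=len(edges))
    )

assert every_colouring_has_mono_triangle(6)      # every 2-coloured K6 works
assert not every_colouring_has_mono_triangle(5)  # some 2-coloured K5 escapes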
The conjecture
In 1973, Stefan Burr and Paul Erdős made the following conjecture:
For every integer p there exists a constant cp so that any p-degenerate graph G on n vertices has Ramsey number at most cp n.
That is, if an n-vertex graph G is p-degenerate, then a monochromatic copy of G should exist in every two-edge-colored complete graph on cp n vertices.
Known results
Before the full conjecture was proved, it was first settled in some special cases. It was proven for bounded-degree graphs by Chvátal, Rödl, Szemerédi and Trotter (1983); their proof led to a very high value of cp, and improvements to this constant were made by Eaton (1998) and by Graham, Rödl and Ruciński (2000). More generally, the conjecture is known to be true for p-arrangeable graphs, which includes graphs with bounded maximum degree, planar graphs and graphs that do not contain a subdivision of Kp. It is also known for subdivided graphs, graphs in which no two adjacent vertices both have degree greater than two.
For arbitrary graphs, the Ramsey number is known to be bounded by a function that grows only slightly superlinearly. Specifically, Fox and Sudakov (2009) showed that there exists a constant cp such that, for any p-degenerate n-vertex graph G,
$$r(G) \leq 2^{c_p \sqrt{\log n}}\, n.$$
Notes
References
Graph theory
Ramsey theory
Conjectures that have been proved
Burr–Erdős conjecture
Theorems in graph theory | Burr–Erdős conjecture | [
"Mathematics"
] | 546 | [
"Mathematical theorems",
"Theorems in discrete mathematics",
"Combinatorics",
"Conjectures that have been proved",
"Mathematical problems",
"Theorems in graph theory",
"Ramsey theory"
] |
3,086,117 | https://en.wikipedia.org/wiki/Masayoshi%20Nagata | Masayoshi Nagata (Japanese: 永田 雅宜 Nagata Masayoshi; February 9, 1927 – August 27, 2008) was a Japanese mathematician, known for his work in the field of commutative algebra.
Work
Nagata's compactification theorem shows that algebraic varieties can be embedded in complete varieties. The Chevalley–Iwahori–Nagata theorem describes the quotient of a variety by a group.
In 1959, he introduced a counterexample to the general case of Hilbert's fourteenth problem on invariant theory. His 1962 book on local rings contains several other counterexamples he found, such as a commutative Noetherian ring that is not catenary, and a commutative Noetherian ring of infinite dimension.
Nagata's conjecture on curves concerns the minimum degree of a plane curve specified to have given multiplicities at given points; see also Seshadri constant. Nagata's conjecture on automorphisms concerns the existence of wild automorphisms of polynomial algebras in three variables. Recent work has solved this latter problem in the affirmative.
Selected works
References
1927 births
2008 deaths
20th-century Japanese mathematicians
21st-century Japanese mathematicians
People from Ōbu, Aichi
Academic staff of Kyoto University
Nagoya University alumni
Algebraists | Masayoshi Nagata | [
"Mathematics"
] | 266 | [
"Algebra",
"Algebraists"
] |
3,086,195 | https://en.wikipedia.org/wiki/Seshadri%20constant | In algebraic geometry, a Seshadri constant is an invariant of an ample line bundle L at a point P on an algebraic variety. It was introduced by Demailly to measure a certain rate of growth of the tensor powers of L, in terms of the jets of the sections of \(L^k\). The motivation was the study of the Fujita conjecture.
The name is in honour of the Indian mathematician C. S. Seshadri.
It is known that Nagata's conjecture on algebraic curves is equivalent to the assertion that for more than nine general points, the Seshadri constants of the projective plane are maximal. There is a general conjecture for algebraic surfaces, the Nagata–Biran conjecture.
Definition
Let \(X\) be a smooth projective variety, \(L\) an ample line bundle on it, \(P\) a point of \(X\), and \(\mathcal{C}\) = { all irreducible curves passing through \(P\) }.

\[ \varepsilon(L; P) = \inf_{C \in \mathcal{C}} \frac{L \cdot C}{\operatorname{mult}_P C}. \]

Here, \(L \cdot C\) denotes the intersection number of \(L\) and \(C\), and \(\operatorname{mult}_P C\) measures how many times \(C\) passes through \(P\).

Definition: One says that \(\varepsilon(L; P)\) is the Seshadri constant of \(L\) at the point \(P\), a real number. When \(X\) is an abelian variety, it can be shown that \(\varepsilon(L; P)\) is independent of the point chosen, and it is written simply \(\varepsilon(L)\).
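For illustration (a standard worked example, not part of the article text): taking \(X = \mathbb{P}^2\) and \(L = \mathcal{O}(1)\),

\[ \varepsilon(L; P) = \inf_{C \ni P} \frac{\deg C}{\operatorname{mult}_P C} = 1, \]

since a line through \(P\) attains \(\deg C = \operatorname{mult}_P C = 1\), while every irreducible curve of degree \(d\) through \(P\) satisfies \(\operatorname{mult}_P C \le d\) (intersect \(C\) with a line through \(P\) and apply Bézout's theorem).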
References
Algebraic varieties
Vector bundles
Mathematical constants | Seshadri constant | [
"Mathematics"
] | 251 | [
"Mathematical constants",
"Mathematical objects",
"Numbers",
"nan"
] |
3,086,255 | https://en.wikipedia.org/wiki/Nagata%E2%80%93Biran%20conjecture | In mathematics, the Nagata–Biran conjecture, named after Masayoshi Nagata and Paul Biran, is a generalisation of Nagata's conjecture on curves to arbitrary polarised surfaces.
Statement
Let X be a smooth algebraic surface and L be an ample line bundle on X of degree d. The Nagata–Biran conjecture states that for sufficiently large r the Seshadri constant satisfies

\[ \varepsilon(p_1, \ldots, p_r; X, L) = \sqrt{\frac{d}{r}}. \]
References
Algebraic surfaces
Conjectures | Nagata–Biran conjecture | [
"Mathematics"
] | 102 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
3,086,309 | https://en.wikipedia.org/wiki/Fujita%20conjecture | In mathematics, Fujita's conjecture is a problem in the theories of algebraic geometry and complex manifolds. It is named after Takao Fujita, who formulated it in 1985.
Statement
In complex geometry, the conjecture states that for a positive holomorphic line bundle L on a compact complex manifold M, the line bundle KM ⊗ L⊗m (where KM is the canonical line bundle of M) is
spanned by sections when m ≥ n + 1;
very ample when m ≥ n + 2,
where n is the complex dimension of M.
Note that for large m the line bundle KM ⊗ L⊗m is very ample by the standard Serre vanishing theorem (and its complex analytic variant). The Fujita conjecture provides an explicit bound on m, which is optimal for projective spaces.
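For illustration (a standard computation, not part of the article text): on \(M = \mathbb{P}^n\) with \(L = \mathcal{O}(1)\),

\[ K_{\mathbb{P}^n} = \mathcal{O}(-(n+1)), \qquad K_{\mathbb{P}^n} \otimes L^{\otimes m} = \mathcal{O}(m - n - 1), \]

which is spanned by sections exactly when \(m - n - 1 \ge 0\), i.e. \(m \ge n + 1\), and very ample exactly when \(m - n - 1 \ge 1\), i.e. \(m \ge n + 2\), so neither bound in the conjecture can be improved.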
Known cases
For surfaces the Fujita conjecture follows from Reider's theorem. For three-dimensional algebraic varieties, Ein and Lazarsfeld in 1993 proved the first part of the Fujita conjecture, i.e. that m≥4 implies global generation.
See also
Ohsawa–Takegoshi L2 extension theorem
References
Algebraic geometry
Complex manifolds
Conjectures
Unsolved problems in geometry | Fujita conjecture | [
"Mathematics"
] | 248 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Fields of abstract algebra",
"Conjectures",
"Algebraic geometry",
"Mathematical problems"
] |
3,086,375 | https://en.wikipedia.org/wiki/Alkermes%20plc | Alkermes plc is a fully integrated biopharmaceutical company that focuses on developing medicines for psychiatric and neurological disorders. The company was founded in 1987 by Michael Wall. In September 2011, Alkermes, Inc. merged with Elan Drug Technologies (EDT), the former drug formulation and manufacturing division of Élan Corporation, plc. The company is headquartered in Dublin, and has an R&D center in Waltham, Massachusetts, and a manufacturing facility in Wilmington, Ohio.
Products
Alkermes has four proprietary commercial drug products approved for the treatment of schizophrenia, bipolar I disorder, alcohol dependence and opioid dependence. These include olanzapine and samidorphan (Lybalvi), an atypical antipsychotic and opioid modulator combination intended for the treatment of schizophrenia and bipolar I disorder; aripiprazole lauroxil (Aristada), a long-acting injectable for schizophrenia; and naltrexone for extended-release injectable suspension (Vivitrol) for alcohol and opioid dependence.
Other products utilizing Alkermes' proprietary technologies include: diroximel fumarate (Vumerity) for multiple sclerosis; risperidone (microspheres) long-acting injectable (Risperdal Consta) for schizophrenia and bipolar I disorder; and paliperidone palmitate (Invega Sustenna, Invega Trinza and Invega Hafyera in the U.S.; Xeplion, Trevicta and Byannli in Europe) for schizophrenia.
In October 2023, Alkermes announced its first data related to its orexin 2 receptor (OXR2) agonist, ALKS 2680. ALKS 2680 is in development for the treatment of narcolepsy.
In November 2023, Alkermes completed the planned separation of its oncology business into a new company, Mural Oncology, which plans to continue to work on the investigational interleukin-2 (IL-2) drug, nemvaleukin alfa.
In May 2024, Alkermes completed the sale of its Athlone, Ireland facility to Novo Nordisk.
References
External links
Pharmaceutical companies of Ireland
Manufacturing companies based in Dublin (city)
Companies listed on the Nasdaq
Life sciences industry
Tax inversions | Alkermes plc | [
"Biology"
] | 492 | [
"Life sciences industry"
] |
3,086,453 | https://en.wikipedia.org/wiki/Core%20sample | A core sample is a cylindrical section of (usually) a naturally-occurring substance. Most core samples are obtained by drilling with special drills into the substance, such as sediment or rock, with a hollow steel tube, called a core drill. The hole made for the core sample is called the "core hole". A variety of core samplers exist to sample different media under different conditions; there is continuing development in the technology. In the coring process, the sample is pushed more or less intact into the tube. Removed from the tube in the laboratory, it is inspected and analyzed by different techniques and equipment depending on the type of data desired.
Core samples can be taken to test the properties of manmade materials, such as concrete, ceramics, some metals and alloys, especially the softer ones. Core samples can also be taken of living things, including human beings, especially of a person's bones for microscopic examination to help diagnose diseases.
Methods
The composition of the subject materials can vary from almost liquid to the strongest materials found in nature or technology, and the location of the subject materials can vary from on the laboratory bench to over 10 km from the surface of the Earth in a borehole. The range of equipment and techniques applied to the task is correspondingly great. Core samples are most often taken with their long axis oriented roughly parallel to the axis of a borehole, or parallel to the gravity field for the gravity-driven tools. However it is also possible to take core samples from the wall of an existing borehole. Taking samples from an exposure, be it an overhanging rock face or on a different planet, is almost trivial. (The Mars Exploration Rovers carry a Rock Abrasion Tool, which is logically equivalent to the "rotary sidewall core" tool described below.)
Some common techniques include:
gravity coring, in which the core sampler is dropped into the sample, usually the bed of a water body, but essentially the same technique can also be done on soft materials on land. The penetration forces, if recorded, give information about the strength of different depths in the material, which may be the only information required, with samples as an incidental benefit. This technique is common in both civil engineering site investigations (where the technique shades into pile driving) and geological studies of recent aquatic deposits. The low strength of the materials penetrated means that cores have to be relatively small.
vibrating, in which the sampler is vibrated to allow penetration into thixotropic media. Again, the physical strength of the subject material limits the size of the core that can be retrieved.
drilling, such as exploration diamond drilling, where a rotating annular tool backed up by a cylindrical core sample storage device is pressed against the subject materials to cut out a cylinder of the subject material. A mechanism is normally needed to retain the cylindrical sample in the coring tool. Depending on circumstances, particularly the consistency and composition of the subject materials, different arrangements may be needed within the core tools to support and protect the sample on its way to the surface; it is often also necessary to control or reduce the contact between the drilling fluid and the core sample, to reduce changes from the coring process. The mechanical forces imposed on the core sample by the tool frequently lead to fracture of the core and loss of less-competent intervals, which can greatly complicate the interpretation of the core. Cores can routinely be cut as small as a few millimeters in diameter (in wood, for dendrochronology) up to over 150 millimeters in diameter (routine in oil exploration). The lengths of samples can range from less than a meter (again, in wood, for dendrochronology) up to around 200 meters in one run, though 27 to 54 meters is more usual (in oil exploration), and many runs can be made in succession if "quick look" analysis in the field suggests that the zone of interest is continuing.
percussion sidewall coring uses robust cylindrical "bullets" explosively propelled into the wall of a borehole to retrieve a (relatively) small, short core sample. These tend to be heavily shattered, rendering porosity/permeability measurements dubious, but are often sufficient for lithological and micropalaeontological study. Many samples can be attempted in a single run of the tools, which are typically configured with 20 to 30 "bullets" and propulsive charges along the length of a tool. Several tools can often be ganged together for a single run. The success rates for firing a particular bullet, having it penetrate the borehole wall, recovering the bullet from the borehole wall with the retention system, and retaining the sample in the bullet are all relatively low, so it is not uncommon for only half the samples attempted to be successful. This is an important consideration in planning sample programs.
rotary sidewall coring where a miniaturized automated rotary drilling tool is applied to the side of the borehole to cut a sample similar in size to a percussion sidewall core (described above). These tend to suffer less deformation than percussion cores. However, the core-cutting process takes longer and jams are common in the ancillary equipment which retrieves the sample from the drill bit and stores it within the tool body.
coring by hand can be done with a variety of instruments, such as a Russian Peat Corer, a handheld piston corer or simply a hollow tube. The advantages of coring by hand are low costs and quick operation, but a handheld corer will only reach limited depths, ranging from a few decimeters in clay soils to a few meters in soft lake sediments.
Management of cores and data
Although often neglected, core samples always degrade to some degree in the process of cutting the core, handling it, and studying it. Non-destructive techniques are increasingly common, e.g., the use of MRI scanning to characterize grains, pore fluids, pore spaces (porosity) and their interactions (constituting part of permeability) but such expensive subtlety is likely wasted on a core that has been shaken on an unsprung lorry for 300 km of dirt road. What happens to cores between the retrieval equipment and the final laboratory (or archive) is an often neglected part of record keeping and core management.
Coring has come to be recognized as an important source of data, and more attention and care is being put on preventing damage to the core during various stages of its transportation and analysis. The usual way to do this is to freeze the core completely using liquid nitrogen, which is cheaply sourced. In some cases, special polymers are also used to preserve and seat/cushion the core from damage.
Equally, a core sample which cannot be related to its context (where it was before it became a core sample) has lost much of its benefit. The identification of the borehole, and the position and orientation ("way up") of the core in the borehole is critical, even if the borehole is in a tree trunk – dendrochronologists always try to include a bark surface in their samples so that the date of most-recent growth of the tree can be unambiguously determined.
If these data become separated from core samples, it is generally impossible to regain that data. The cost of a coring operation can vary from a few currency units (for a hand-caught core from a soft soil section) to tens of millions of currency units (for sidewall cores from a remote-area offshore borehole many kilometres deep). Inadequate recording of such basic data has ruined the utility of both types of core.
Different disciplines have different local conventions of recording these data, and the user should familiarize themselves with their area's conventions. For example, in the oil industry, orientation of the core is typically recorded by marking the core with two longitudinal colour streaks, with the red one on the right when the core is being retrieved and marked at surface. Cores cut for mineral mining may have their own, different, conventions. Civil engineering or soil studies may have their own, different, conventions as their materials are often not competent enough to make permanent marks on.
It is becoming increasingly common to retain core samples in cylindrical packaging which forms part of the core-cutting equipment, and to make the marks of record on these "inner barrels" in the field prior to further processing and analysis in the laboratory. Sometimes core is shipped from the field to the laboratory in as long a length as it comes out of the ground; other times it is cut into standard lengths (5m or 1m or 3 ft) for shipping, then reassembled in the laboratory. Some of the "inner barrel" systems are capable of being reversed on the core sample, so that in the laboratory the sample goes "wrong way up" when the core is reassembled. This can complicate interpretation.
If the borehole has petrophysical measurements made of the wall rocks, and these measurements are repeated along the length of the core and the two data sets are correlated, one will almost universally find that the depth "of record" for a particular piece of core differs between the two methods of measurement. Which set of measurements to believe then becomes a matter of policy for the client (in an industrial setting) or of great controversy (in a context without an overriding authority). Recording that there are discrepancies, for whatever reason, retains the possibility of correcting an incorrect decision at a later date; destroying the "incorrect" depth data makes it impossible to correct a mistake later. Any system for retaining and archiving data and core samples needs to be designed so that dissenting opinion like this can be retained.
If core samples from a campaign are competent, it is common practice to "slab" them – cut the sample into two or more samples longitudinally – quite early in laboratory processing so that one set of samples can be archived early in the analysis sequence as a protection against errors in processing. "Slabbing" the core into a 2/3 and a 1/3 set is common. It is also common for one set to be retained by the main customer while the second set goes to the government (who often impose a condition for such donation as a condition of exploration/ exploitation licensing). "Slabbing" also has the benefit of preparing a flat, smooth surface for examination and testing of profile permeability, which is very much easier to work with than the typically rough, curved surface of core samples when they're fresh from the coring equipment. Photography of raw and "slabbed" core surfaces is routine, often under both natural and ultra-violet light.
A unit of length occasionally used in the literature on seabed cores is cmbsf, an abbreviation for centimeters below sea floor.
History of coring
The technique of coring long predates attempts to drill into the Earth’s mantle by the Deep Sea Drilling Program. The value to oceanic and other geologic history of obtaining cores over a wide area of sea floors soon became apparent. Core sampling by many scientific and exploratory organizations expanded rapidly. To date hundreds of thousands of core samples have been collected from floors of all the planet's oceans and many of its inland waters.
Access to many of these samples is facilitated by the Index to Marine & Lacustrine Geological Samples.
Informational value of core samples
Coring began as a method of sampling surroundings of ore deposits and oil exploration. It soon expanded to oceans, lakes, ice, mud, soil and wood. Cores on very old trees give information about their growth rings without destroying the tree.
Cores indicate variations of climate, species and sedimentary composition during geologic history. The dynamic phenomena of the Earth's surface are for the most part cyclical in a number of ways, especially temperature and rainfall.
There are many ways to date a core. Once dated, it gives valuable information about changes of climate and terrain. For example, cores in the ocean floor, soil and ice have altered the view of the geologic history of the Pleistocene entirely.
Alternatives
Reverse circulation drilling is a method in which rock cuttings are continuously extracted through the hollow drill rod and can be sampled for analysis. The method may be faster and use less water than core drilling, but does not produce cores of relatively undisturbed material, so less information on the rock structure can be derived from analysis. If compressed air is used for cutting extraction the sample remains uncontaminated, is available almost immediately, and the method has a low environmental impact.
See also
Core drill
Ice core
Integrated Ocean Drilling Program
Scientific drilling
References
External links
Defining Coring
Core from Walden Pond
Ice Core Dating
John Branner Newsom's boring machine
Glaciology
Geological techniques
Well logging
Environmental science
Geophysics
Petrology
Stratigraphy | Core sample | [
"Physics",
"Engineering",
"Environmental_science"
] | 2,585 | [
"Applied and interdisciplinary physics",
"Petroleum engineering",
"nan",
"Well logging",
"Geophysics"
] |
3,086,721 | https://en.wikipedia.org/wiki/Frost%20flower | A frost flower or ice flower is formed when thin layers of ice are extruded from long-stemmed plants in autumn or early winter. The thin layers of ice are often formed into exquisite patterns, curling into "petals" which resemble flowers.
Types
Frost flower formations are also referred to as frost faces, ice castles, ice blossoms, or crystallofolia.
Types of frost flowers include needle ice, frost pillars, or frost columns, extruded from pores in the soil, and ice ribbons, rabbit frost, or rabbit ice, extruded from linear fissures in plant stems. The term "ice flower" is also used as synonym for ice ribbons, but it may be used to describe the unrelated phenomenon of window frost as well.
Formation
The formation of frost flowers is dependent on a freezing weather condition occurring when the ground is not already frozen. The sap in the stem of the plants will expand (water expands when frozen), causing long, thin cracks to form along the length of the stem. Water is then drawn through these cracks via capillary action and freezes upon contact with the air. As more water is drawn through the cracks it pushes the thin ice layers further from the stem, causing a thin "petal" to form.
The petals of frost flowers are very delicate and will break when touched. They usually melt or sublime when exposed to sunlight and are usually visible in the early morning or in shaded areas.
Examples of plants that often form frost flowers are white crownbeard (Verbesina virginica), commonly called frostweed, yellow ironweed (Verbesina alternifolia), dittany (Cunila origanoides), and Helianthemum canadense.
See also
Hair ice
Needle ice
Frostweed
References
External links
Website about frost flowers
Frost and rime
Hydrology
Water ice
Plant physiology | Frost flower | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 376 | [
"Plant physiology",
"Hydrology",
"Plants",
"Environmental engineering"
] |
3,086,747 | https://en.wikipedia.org/wiki/KRP%20%28biochemistry%29 | KRP stands for kinesin related proteins. bimC is a subfamily of KRPs and its function is to separate the duplicated centrosomes during mitosis.
Role in mitotic repair
Kinesin-13 MCAK (Mitotic Centromere-Associated Kinesin) is a KRP involved in resolving kinetochore–microtubule attachment errors during mitosis. This process is associated with Aurora B Protein Kinase. When Aurora B's function is disrupted, MCAK's ability to locate centromeres, which play a critical role in the separation of chromosomes during mitosis, is suppressed. There are also environments in which MCAK's function is impaired without any impact on its associated kinase. For example, alpha-tubulin detyrosination has been demonstrated to impact MCAK's mitotic repair capabilities, suggesting a potential cause of chromosomal instability.
References
Proteins | KRP (biochemistry) | [
"Chemistry",
"Biology"
] | 185 | [
"Biomolecules by chemical classification",
"Biotechnology stubs",
"Biochemistry stubs",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
3,086,963 | https://en.wikipedia.org/wiki/Computation%20in%20the%20limit | In computability theory, a function is called limit computable if it is the limit of a uniformly computable sequence of functions. The terms computable in the limit, limit recursive and recursively approximable are also used. One can think of limit computable functions as those admitting an eventually correct computable guessing procedure at their true value. A set is limit computable just when its characteristic function is limit computable.
If the sequence is uniformly computable relative to D, then the function is limit computable in D.
Formal definition
A total function \(r(x)\) is limit computable if there is a total computable function \(\hat{r}(x,t)\) such that

\[ r(x) = \lim_{t \to \infty} \hat{r}(x,t). \]

The total function \(r(x)\) is limit computable in D if there is a total function \(\hat{r}(x,t)\) computable in D also satisfying

\[ r(x) = \lim_{t \to \infty} \hat{r}(x,t). \]
A set of natural numbers is defined to be computable in the limit if and only if its characteristic function is computable in the limit. In contrast, the set is computable if and only if it is computable in the limit by a function \(\hat{r}(x,t)\) and there is a second computable function that takes input \(x\) and returns a value of \(t\) large enough that \(\hat{r}(x,t)\) has stabilized.
Limit lemma
The limit lemma states that a set of natural numbers is limit computable if and only if the set is computable from \(0'\) (the Turing jump of the empty set). The relativized limit lemma states that a set is limit computable in \(D\) if and only if it is computable from \(D'\).
Moreover, the limit lemma (and its relativization) hold uniformly. Thus one can go from an index for the approximating function \(\hat{r}(x,t)\) to an index for \(r(x)\) relative to \(0'\). One can also go from an index for \(r(x)\) relative to \(0'\) to an index for some \(\hat{r}(x,t)\) that has limit \(r(x)\).
Proof
As \(0'\) is a computably enumerable set, it must be computable in the limit itself, as the computable function can be defined

\[ \hat{r}(x,t) = \begin{cases} 1 & \text{if by stage } t, \; x \text{ has been enumerated into } 0' \\ 0 & \text{otherwise} \end{cases} \]

whose limit as \(t\) goes to infinity is the characteristic function of \(0'\).
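The same stage-based guessing works for any computably enumerable set, given an effective enumeration of it. A minimal sketch in Python (the function names and the toy enumeration are our own illustrative choices, not from the source):

```python
def limit_guess(x, t, enum):
    """Stage-t guess for 'x belongs to the c.e. set enumerated by enum':
    1 if x appears among the first t enumerated elements, else 0.
    The guess can switch from 0 to 1 at most once, so it always converges."""
    return 1 if x in (enum(s) for s in range(t)) else 0

enum = lambda s: s * s  # toy c.e. set: the perfect squares
print([limit_guess(9, t, enum) for t in (1, 2, 3, 4, 5)])  # [0, 0, 0, 1, 1]
```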
It therefore suffices to show that limit computability is preserved by Turing reduction, as this will show that all sets computable from \(0'\) are limit computable. Fix sets \(X, Y\) which are identified with their characteristic functions and a computable function \(\hat{X}(x,t)\) with limit \(X(x)\). Suppose that \(Y = \varphi^{X}\) for some Turing reduction \(\varphi\) and define a computable function \(\hat{Y}\) as follows

\[ \hat{Y}(x,t) = \begin{cases} \varphi^{\hat{X}(\cdot,t)}(x) & \text{if this computation converges in at most } t \text{ steps} \\ 0 & \text{otherwise.} \end{cases} \]

Now suppose that the computation \(\varphi^{X}(x)\) converges in \(s\) steps and only looks at the first \(s\) bits of \(X\). Now pick \(N > s\) such that \(\hat{X}(i,t) = X(i)\) for all \(i < s\) and \(t \ge N\). If \(t > N\) then the computation \(\varphi^{\hat{X}(\cdot,t)}(x)\) converges in at most \(s < t\) steps to \(\varphi^{X}(x)\). Hence \(\hat{Y}(x,t)\) has a limit of \(\varphi^{X}(x) = Y(x)\), so \(Y\) is limit computable.
As the \(\Delta^0_2\) sets are just the sets computable from \(0'\) by Post's theorem, the limit lemma also entails that the limit computable sets are the \(\Delta^0_2\) sets.
An early result foreshadowing the equivalence of limit-computability with \(\Delta^0_2\)-ness was anticipated by Mostowski in 1954, using a hierarchy of formulas built from limits of arbitrary primitive recursive functions.
Extension
Iteration of limit computability can be used to climb up the arithmetical hierarchy. Namely, an \(m\)-ary function \(f(x_1, \ldots, x_m)\) is \(\Delta^0_{n+1}\) iff it can be written in the form

\[ f(x_1, \ldots, x_m) = \lim_{t_1 \to \infty} \lim_{t_2 \to \infty} \cdots \lim_{t_n \to \infty} g(x_1, \ldots, x_m, t_1, \ldots, t_n) \]

for some \((m+n)\)-ary recursive function \(g\), under the assumption that all limits exist.
Limit computable real numbers
A real number x is computable in the limit if there is a computable sequence of rational numbers (or, which is equivalent, computable real numbers) which converges to x. In contrast, a real number is computable if and only if there is a sequence of rational numbers which converges to it and which has a computable modulus of convergence.
When a real number is viewed as a sequence of bits, the following equivalent definition holds. An infinite sequence \(\omega\) of binary digits is computable in the limit if and only if there is a total computable function \(\hat{r}(i,t)\) taking values in the set \(\{0,1\}\) such that for each \(i\) the limit \(\lim_{t \to \infty} \hat{r}(i,t)\) exists and equals \(\omega(i)\). Thus for each \(i\), as \(t\) increases the value of \(\hat{r}(i,t)\) eventually becomes constant and equals \(\omega(i)\). As with the case of computable real numbers, it is not possible to effectively move between the two representations of limit computable reals.
Examples
The real whose binary expansion encodes the halting problem is computable in the limit but not computable.
The real whose binary expansion encodes the truth set of first-order arithmetic is not computable in the limit.
Chaitin's constant.
Set-theoretic extension
There is a modified version of the limit lemma for α-recursion theory via functions in the \(\alpha\)-arithmetical hierarchy, which is a hierarchy defined relative to some admissible ordinal \(\alpha\).
For a given admissible ordinal \(\alpha\), define the \(\alpha\)-arithmetical hierarchy:
A relation \(R\) on \(\alpha\) is \(\Sigma_0\) if it is \(\alpha\)-recursive.
\(R\) is \(\Sigma_{n+1}\) if it is the projection of a \(\Pi_n\) relation.
\(R\) is \(\Pi_n\) if its complement is \(\Sigma_n\).
Let \(f\) be a partial function from \(\alpha\) to \(\alpha\). The following are equivalent:
(The graph of) \(f\) is \(\Sigma_2\).
\(f\) is weakly \(\alpha\)-recursive in \(0'\), the \(\alpha\)-jump of \(\emptyset\) using indices of \(\alpha\)-computable functions.
There is an \(\alpha\)-recursive function \(\hat{f}(x,t)\) approximating \(f\) such that \(f(x) \simeq \lim_{t \to \alpha} \hat{f}(x,t)\).
Here \(f(x) \simeq \lim_{t \to \alpha} \hat{f}(x,t)\) denotes that either both sides are undefined, or they are both defined and equal.
See also
Specker sequence
References
J. Schmidhuber, "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit", International Journal of Foundations of Computer Science, 2002, .
R. Soare. Recursively Enumerable Sets and Degrees. Springer-Verlag 1987.
V. Brattka. A Galois connection between Turing jumps and limits. Log. Methods Comput. Sci., 2018, .
Computability theory
Theory of computation | Computation in the limit | [
"Mathematics"
] | 1,190 | [
"Computability theory",
"Mathematical logic"
] |
3,087,323 | https://en.wikipedia.org/wiki/Myrmecochory | Myrmecochory (sometimes myrmechory; from Ancient Greek mýrmēx ("ant") and khoreíā ("circular dance")) is seed dispersal by ants, an ecologically significant ant–plant interaction with worldwide distribution. Most myrmecochorous plants produce seeds with elaiosomes, a term encompassing various external appendages or "food bodies" rich in lipids, amino acids, or other nutrients that are attractive to ants. The seed with its attached elaiosome is collectively known as a diaspore. Seed dispersal by ants is typically accomplished when foraging workers carry diaspores back to the ant colony, after which the elaiosome is removed or fed directly to ant larvae. Once the elaiosome is consumed, the seed is usually discarded in an underground midden or ejected from the nest. Although diaspores are seldom distributed far from the parent plant, myrmecochores also benefit from this predominantly mutualistic interaction through dispersal to favourable locations for germination, as well as escape from seed predation.
Distribution and diversity
Myrmecochory is exhibited by more than 3,000 plant species worldwide and is present in every major biome on all continents except Antarctica. Seed dispersal by ants is particularly common in the dry heath and sclerophyll woodlands of Australia (1,500 species) and the South African fynbos (1,000 species). Both regions have a Mediterranean climate and largely infertile soils (characterized by low phosphorus availability), two factors that are often cited to explain the distribution of myrmecochory. Myrmecochory is also present in mesic forests in temperate regions of the Northern Hemisphere (i.e. in Europe and in eastern North America), as well as in tropical forests and dry deserts, though to a lesser degree. Estimates for the true biodiversity of myrmecochorous plants range from 11,000 to as high as 23,000 species worldwide, or about 5% of all flowering plant species.
Evolutionary history
Myrmecochory has evolved independently many times in a large number of plant families. A recent phylogenetic study identified more than 100 separate origins of myrmecochory in 55 families of flowering plants. With many independent evolutionary origins, elaiosomes have evolved from a wide variety of parent tissues. Strong selective pressure or the relative ease with which elaiosomes can develop from parent tissues may explain the multiple origins of myrmecochory. These findings identify myrmecochory as a prime example of convergent evolution. In addition, phylogenetic comparison of myrmecochorous plant groups reveals that more than half of the lineages in which myrmecochory evolved are more species-rich than their nonmyrmecochorous sister groups. Not only is myrmecochory a convergent trait, but it also promotes diversification in multiple flowering plant lineages.
Ecology
Myrmecochory is usually classified as a mutualism, but this is contingent on the degree to which participating species benefit from the interaction. Several different factors likely combine to create mutualistic conditions. Myrmecochorous plants may derive benefit from increased dispersal distance, directed dispersal to nutrient-enriched or protected microsites, and/or seed predator avoidance. Costs incurred by myrmecochorous plants include the energy required to provision diaspores, particularly when a disproportionate investment is made of growth-limiting mineral nutrients. For instance, some Australian Acacia species invest a significant portion of their yearly phosphorus uptake in producing diaspores. Diaspores must also be protected from outright predation by ants. This is typically accomplished by the production of a hard, smooth testa, or seed coat.
Few studies have examined the costs and benefits to ants participating in myrmecochory. Much remains to be understood about the selective advantages conferred upon myrmecochorous ants.
No single hypothesis explains the evolution and persistence of myrmecochory. Instead, a combination of beneficial effects working at different spatiotemporal scales likely contribute to the viability of this predominantly mutualistic interaction. Three commonly cited advantages to myrmecochorous plants are increased dispersal distance, directed dispersal, and seed predator avoidance.
Dispersal distance
Increasing dispersal distance from the parent plant is likely to reduce seed mortality resulting from density-dependent effects. Ants can transport seeds as far as 180 m but the average is less than 2 m, and values between 0.5 and 1.5 m are most common. Perhaps due to the relatively limited distance that ants disperse seeds, many myrmecochores exhibit diplochory, a two-staged dispersal mechanism, often with ballistic projection as the initial mechanism, that can increase dispersal distance by as much as 50%. In some cases, ballistic dispersal distance regularly exceeds that of transport by ants. The dispersal distance achieved through myrmecochory is likely to provide an advantage proportionate to the spatial scale of density-dependent effects acting on individual plants. As such, the relatively modest distances ants transport seeds are likely to be more advantageous for myrmecochorous shrubs, forbs, and other plants of small stature.
Directed dispersal
Myrmecochorous plants may benefit when ants disperse seeds to nutrient-rich or protected microsites that enhance germination and establishment of seedlings. Ants disperse seeds in fairly predictable ways, either by disposing of them in underground middens or by ejecting them from the nest. These patterns of ant dispersal are predictable enough to permit plants to manipulate animal behaviour and influence seed fate, effectively directing the dispersal of seeds to desirable sites. For example, myrmecochores can influence seed fate by producing rounder, smoother diaspores that inhibit ants from redispersing seeds after elaiosome removal. This increases the likelihood that seeds will remain underground instead of being ejected from the nest.
Nest chemistry is ideally suited for seed germination given that ant colonies are typically enriched with plant nutrients such as phosphorus and nitrate. This is likely to be advantageous in areas with infertile soils and less important in areas with more favourable soil chemistry, as in fertile forests. In fire-prone areas, depth of burial is an important factor for successful post-burn germination. This, in turn, is influenced by the nesting habits of the myrmecochorous ants. As such, the value of directed dispersal is largely context-dependent.
Seed predator avoidance
Myrmecochorous plants escape or avoid seed predation by granivores when ants remove and sequester diaspores. This benefit is particularly pronounced in areas where myrmecochorous plants are subject to heavy seed predation, which may be common. In mesic forest habitats, seed predators remove around 60% of all dispersed seeds within a few days, and eventually remove all seeds not removed by ants. In addition to attracting ants, elaiosomes also appeal to granivores, and their presence can increase seed predation rates.
Nature of the interaction
Myrmecochory is traditionally thought to be a diffuse or facultative mutualism with low specificity between myrmecochores and individual ant species. This assertion has been challenged in a study of Iberian myrmecochores, demonstrating the disproportionate importance of specific ant species in dispersing seeds. Ant-plant interactions with a single species of myrmecochore were recorded for 37 species of ants, but only two of these were found to disperse diaspores to any significant degree; the rest were seed predators or “cheaters” opportunistically feeding on elaiosomes in situ without dispersing seeds. Larger diaspores are hypothesized to increase the degree of specialization, since ant mutualists need to be larger to successfully carry the diaspore back to the nest.
Ants, however, do not appear to form obligate relationships with myrmecochorous plants. Since no known ant species relies entirely on elaiosomes for their nutritional needs, ants remain generalist foragers even when entering into relationships with a more specialized myrmecochore.
As with many other facultative mutualisms, cheating is present on both sides of the interaction. Ants cheat by consuming elaiosomes without transporting seeds or through outright seed predation. Myrmecochorous plants can also cheat, either by producing diaspores with nonremovable elaiosomes or by simulating the presence of a nonexistent reward with chemical cues. Ants are sometimes capable of discriminating between cheaters and mutualists as shown by studies demonstrating preference for the diaspores of noncheating myrmecochores. Cheating is also inhibited by ecological interactions external to the myrmecochorous interaction; simple models suggest that predation exerts a stabilizing influence on a mutualism such as myrmecochory.
Myrmecochory and invasive species
Myrmecochores are threatened by invasive species in some ecosystems. For instance, the Argentine ant is an aggressive invader capable of displacing native ant populations. Since Argentine ants do not disperse seeds, invasions may lead to a breakdown in the myrmecochory mutualism, inhibiting the dispersal ability of myrmecochores and causing long-term alterations in plant community dynamics. Invasive ant species can also maintain seed dispersal in their introduced range, as is the case with the red fire ant in the Southeastern United States. Some invasive ants are also seed-dispersers in their native range, such as the European fire ant, and can act as a high-quality disperser in their introduced range.
In South Africa, the Argentine ant has in some cases displaced native ants that disperse the seeds of Fynbos plants like Mimetes cucullatus. The Argentine ants don't take the seeds underground and leave them on the surface, resulting in ungerminated plants and the dwindling of Fynbos seed reserves after veld fires.
Myrmecochorous plants are also capable of invading ecosystems. These invaders may gain an advantage in areas where native ants disperse invasive seeds. Similarly, the spread of myrmecochorous invaders may be inhibited by limitations in the ranges of native ant populations.
See also
Myrmecophily
Myrmecophyte
References
External links
Insect ecology
Myrmecology
Seeds
Mutualism (biology) | Myrmecochory | [
"Biology"
] | 2,137 | [
"Biological interactions",
"Behavior",
"Symbiosis",
"Mutualism (biology)"
] |
3,087,357 | https://en.wikipedia.org/wiki/Inquiline | In zoology, an inquiline (from Latin inquilinus, "lodger" or "tenant") is an animal that lives commensally in the nest, burrow, or dwelling place of an animal of another species. For example, some organisms, such as insects, may live in the homes of gophers or the garages of humans and feed on debris, fungi, roots, etc. The most widely distributed types of inquiline are those found in association with the nests of social insects, especially ants and termites – a single colony may support dozens of different inquiline species. The distinctions between parasites, social parasites, and inquilines are subtle, and many species may fulfill the criteria for more than one of these, as inquilines do exhibit many of the same characteristics as parasites. However, parasites are specifically not inquilines, because by definition they have a deleterious effect on the host species, while inquilines have not been confirmed to do so.
In the specific case of termites, the term "inquiline" is restricted to termite species that inhabit other termite species' nests whereas other arthropods cohabiting termitaria are called "termitophiles". It is important to reiterate that inquilinism in termites (Blattodea, formerly Isoptera) contrasts with the inquilinism observed in other eusocial insects such as ants and bees (Hymenoptera), even though the term "inquiline" has been adopted in both cases. A major distinction is that, while in the former the species mostly resemble forms of commensalism, the latter includes species currently confirmed as social parasites, thus, being closely related to parasitism.
Inquilines are known especially among the gall wasps (Cynipidae family). In the sub-family Synerginae, this mode of life predominates. These insects are similar in structure to the true gall-inducing wasps but do not produce galls; instead, they deposit their eggs within the galls of other species. They infest certain species of galls, such as those of the blackberry and some oak galls, in large numbers, and sometimes more than one kind occurs in a single gall. Perhaps the most remarkable feature of these inquilines is their frequent close resemblance to the insect that produces the gall they infest.
The term inquiline has also been applied to aquatic invertebrates that spend all or part of their life cycles in phytotelmata, water-filled structures produced by plants. For example, Wyeomyia smithii, Metriocnemus knabi, and Habrotrocha rosa are three invertebrates that make up part of the microecosystem within the pitchers of Sarracenia purpurea. Some species of pitcher plants like the Nepenthes and Cephalotus produce acidic, toxic or digestive fluids and host a limited diversity of inquilines. Other pitcher plant species like the Sarracenia or Heliamphora host diverse organisms and depend to a large extent on their symbionts for prey utilization.
See also
Mutualism (biology)
References
Symbiosis | Inquiline | [
"Biology"
] | 663 | [
"Biological interactions",
"Behavior",
"Symbiosis"
] |
3,087,385 | https://en.wikipedia.org/wiki/Scalar%20theories%20of%20gravitation | Scalar theories of gravitation are field theories of gravitation in which the gravitational field is described using a scalar field, which is required to satisfy some field equation.
Note: This article focuses on relativistic classical field theories of gravitation. The best known relativistic classical field theory of gravitation, general relativity, is a tensor theory, in which the gravitational interaction is described using a tensor field.
Newtonian gravity
The prototypical scalar theory of gravitation is Newtonian gravitation. In this theory, the gravitational interaction is completely described by the potential \(\Phi\), which is required to satisfy the Poisson equation (with the mass density acting as the source of the field). To wit:

\[ \nabla^2 \Phi = 4 \pi G \rho, \]

where

G is the gravitational constant and
\(\rho\) is the mass density.

This field theory formulation leads directly to the familiar law of universal gravitation, \(F = G m_1 m_2 / r^2\).
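For illustration (a standard derivation, not spelled out in the article): for a point mass \(M\) at the origin, the spherically symmetric solution of the Poisson equation outside the source is

\[ \Phi(r) = -\frac{GM}{r}, \qquad F = -m \frac{d\Phi}{dr} = -\frac{G M m}{r^2}, \]

an attractive inverse-square force on a test mass \(m\), which is Newton's law of universal gravitation.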
Nordström's theories of gravitation
The first attempts to present a relativistic (classical) field theory of gravitation were also scalar theories. Gunnar Nordström created two such theories.
Nordström's first idea (1912) was to simply replace the Laplacian operator in the field equation of Newtonian gravity with the d'Alembertian operator \(\Box\). This gives the field equation

\[ \Box \Phi = 4 \pi G \rho. \]
However, several theoretical difficulties with this theory quickly arose, and Nordström dropped it.
A year later, Nordström tried again, presenting the field equation

\[ \Phi \, \Box \Phi = -4 \pi G \, T, \]

where \(T\) is the trace of the stress–energy tensor.
Solutions of Nordström's second theory are conformally flat Lorentzian spacetimes. That is, the metric tensor can be written as \(g_{\mu\nu} = \Phi^2 \eta_{\mu\nu}\), where

ημν is the Minkowski metric, and
\(\Phi\) is a scalar which is a function of position.
This suggestion signifies that the inertial mass should depend on the scalar field.
Nordström's second theory satisfies the weak equivalence principle. However:
The theory fails to predict any deflection of light passing near a massive body (contrary to observation)
The theory predicts an anomalous perihelion precession of Mercury, but this disagrees in both sign and magnitude with the observed anomalous precession (the part which cannot be explained using Newtonian gravitation).
Despite these disappointing results, Einstein's critiques of Nordström's second theory played an important role in his development of general relativity.
Einstein's scalar theory
In 1913, Einstein (erroneously) concluded from his hole argument that general covariance was not viable. Inspired by Nordström's work, he proposed his own scalar theory. This theory employs a massless scalar field coupled to the stress–energy tensor, which is the sum of two terms. The first,

\[ T^{\mu\nu}_{(\phi)} = \partial^\mu \phi \, \partial^\nu \phi - \tfrac{1}{2} \eta^{\mu\nu} \, \partial_\lambda \phi \, \partial^\lambda \phi, \]

represents the stress–momentum–energy of the scalar field itself. The second represents the stress–momentum–energy of any matter which may be present:

\[ T^{\mu\nu}_{(m)} = \rho \, u^\mu u^\nu, \]

where \(u^\mu\) is the velocity vector of an observer, or tangent vector to the world line of the observer. (Einstein made no attempt, in this theory, to take account of possible gravitational effects of the field energy of the electromagnetic field.)
Unfortunately, this theory is not diffeomorphism covariant. This is an important consistency condition, so Einstein dropped this theory in late 1914. Associating the scalar field with the metric leads to Einstein's later conclusions that the theory of gravitation he sought could not be a scalar theory. Indeed, the theory he finally arrived at in 1915, general relativity, is a tensor theory, not a scalar theory, with a 2-tensor, the metric, as the potential. Unlike his 1913 scalar theory, it is generally covariant, and it does take into account the field energy–momentum–stress of the electromagnetic field (or any other nongravitational field).
Additional variations
Kaluza–Klein theory involves the use of a scalar gravitational field in addition to the electromagnetic field potential in an attempt to create a five-dimensional unification of gravity and electromagnetism. Its generalization with a 5th variable component of the metric that leads to a variable gravitational constant was first given by Pascual Jordan.
Brans–Dicke theory is a scalar-tensor theory, not a scalar theory, meaning that it represents the gravitational interaction using both a scalar field and a tensor field. We mention it here because one of the field equations of this theory involves only the scalar field and the trace of the stress–energy tensor, as in Nordström's theory. Moreover, the Brans–Dicke theory is equal to the independently derived theory of Jordan (hence it is often referred to as the Jordan–Brans–Dicke or JBD theory). The Brans–Dicke theory couples a scalar field with the curvature of space-time and is self-consistent and, assuming appropriate values for a tunable constant, this theory has not been ruled out by observation. The Brans–Dicke theory is generally regarded as a leading competitor of general relativity, which is a pure tensor theory. However, observations require the theory's tunable parameter to be very large, which favours general relativity.
Zee combined the idea of the BD theory with the Higgs mechanism of symmetry breakdown for mass generation, which led to a scalar-tensor theory with the Higgs field as scalar field, in which the scalar field is massive (short-ranged). An example of this theory was proposed by H. Dehnen and H. Frommert in 1991, starting from the nature of a Higgs field interacting gravitationally and Yukawa-like (long-ranged) with the particles that acquire mass through it.
The Watt–Misner theory (1999) is a recent example of a scalar theory of gravitation. It is not intended as a viable theory of gravitation (since, as Watt and Misner point out, it is not consistent with observation), but as a toy theory which can be useful in testing numerical relativity schemes. It also has pedagogical value.
See also
Nordström's theory of gravitation
References
External links
Goenner, Hubert F. M., "On the History of Unified Field Theories"; Living Rev. Relativ. 7(2), 2004, lrr-2004-2. Retrieved August 10, 2005.
P. Jordan, Schwerkraft und Weltall, Vieweg (Braunschweig) 1955.
Theories of gravity | Scalar theories of gravitation | [
"Physics"
] | 1,336 | [
"Theoretical physics",
"Theories of gravity"
] |
3,087,410 | https://en.wikipedia.org/wiki/Turbulence%20modeling | In fluid dynamics, turbulence modeling is the construction and use of a mathematical model to predict the effects of turbulence. Turbulent flows are commonplace in most real-life scenarios. In spite of decades of research, there is no analytical theory to predict the evolution of these turbulent flows. The equations governing turbulent flows can only be solved directly for simple cases of flow. For most real-life turbulent flows, CFD simulations use turbulence models to predict the evolution of turbulence. These turbulence models are simplified constitutive equations that predict the statistical evolution of turbulent flows.
Closure problem
The Navier–Stokes equations govern the velocity and pressure of a fluid flow. In a turbulent flow, each of these quantities may be decomposed into a mean part and a fluctuating part. Averaging the equations gives the Reynolds-averaged Navier–Stokes (RANS) equations, which govern the mean flow. However, the nonlinearity of the Navier–Stokes equations means that the velocity fluctuations still appear in the RANS equations, in the nonlinear term from the convective acceleration. This term is known as the Reynolds stress, \(R_{ij} = \overline{u_i' u_j'}\). Its effect on the mean flow is like that of a stress term, such as from pressure or viscosity.
To obtain equations containing only the mean velocity and pressure, we need to close the RANS equations by modelling the Reynolds stress term as a function of the mean flow, removing any reference to the fluctuating part of the velocity. This is the closure problem.
Eddy viscosity
Joseph Valentin Boussinesq was the first to attack the closure problem, by introducing the concept of eddy viscosity. In 1877 Boussinesq proposed relating the turbulence stresses to the mean flow to close the system of equations. Here the Boussinesq hypothesis is applied to model the Reynolds stress term. Note that a new proportionality constant \(\nu_t > 0\), the (kinematic) turbulence eddy viscosity, has been introduced. Models of this type are known as eddy viscosity models (EVMs).

\[ -\overline{u_i' u_j'} = \nu_t \left( \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i} \right) - \frac{2}{3} k \delta_{ij} \]

which can be written in shorthand as

\[ -\overline{u_i' u_j'} = 2 \nu_t S_{ij} - \frac{2}{3} k \delta_{ij} \]

where

\(S_{ij} = \frac{1}{2} \left( \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i} \right)\) is the mean rate of strain tensor
\(\nu_t\) is the (kinematic) turbulence eddy viscosity
\(k = \frac{1}{2} \overline{u_i' u_i'}\) is the turbulence kinetic energy
and \(\delta_{ij}\) is the Kronecker delta.
In this model, the additional turbulence stresses are given by augmenting the molecular viscosity with an eddy viscosity. This can be a simple constant eddy viscosity (which works well for some free shear flows such as axisymmetric jets, 2-D jets, and mixing layers).
The Boussinesq hypothesis – although not explicitly stated by Boussinesq at the time – effectively consists of the assumption that the Reynolds stress tensor is aligned with the strain tensor of the mean flow (i.e.: that the shear stresses due to turbulence act in the same direction as the shear stresses produced by the averaged flow). It has since been found to be significantly less accurate than most practitioners would assume. Still, turbulence models which employ the Boussinesq hypothesis have demonstrated significant practical value. In cases with well-defined shear layers, this is likely due to the dominance of streamwise shear components, so that considerable relative errors in flow-normal components are still negligible in absolute terms. Beyond this, most eddy viscosity turbulence models contain coefficients which are calibrated against measurements, and thus produce reasonably accurate overall outcomes for flow fields of similar type as used for calibration.
Prandtl's mixing-length concept
Later, Ludwig Prandtl introduced the additional concept of the mixing length, along with the idea of a boundary layer. For wall-bounded turbulent flows, the eddy viscosity must vary with distance from the wall, hence the addition of the concept of a 'mixing length'. In the simplest wall-bounded flow model, the eddy viscosity is given by the equation:

\[ \nu_t = l_m^2 \left| \frac{\partial u}{\partial y} \right| \]

where

\(\frac{\partial u}{\partial y}\) is the partial derivative of the streamwise velocity (u) with respect to the wall normal direction (y)
\(l_m\) is the mixing length.
This simple model is the basis for the "law of the wall", which is a surprisingly accurate model for wall-bounded, attached (not separated) flow fields with small pressure gradients.
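For illustration, a minimal numerical sketch in Python (our own; the near-wall mixing length \(l_m = \kappa y\), the von Kármán constant \(\kappa \approx 0.41\), and the log-law profile are standard assumptions, not taken from this article):

```python
import numpy as np

kappa, u_tau = 0.41, 0.05                # von Karman constant, friction velocity [m/s]
y = np.linspace(1e-4, 1e-2, 50)          # wall-normal distances [m], log-law region

dudy = u_tau / (kappa * y)               # derivative of the log-law velocity profile
l_m = kappa * y                          # Prandtl mixing length near a wall
nu_t = l_m**2 * np.abs(dudy)             # eddy viscosity: nu_t = l_m^2 |du/dy|

print(np.allclose(nu_t, kappa * u_tau * y))  # True: nu_t grows linearly with y
```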
More general turbulence models have evolved over time, with most modern turbulence models given by field equations similar to the Navier–Stokes equations.
Smagorinsky model for the sub-grid scale eddy viscosity
Joseph Smagorinsky was the first who proposed a formula for the eddy viscosity in Large Eddy Simulation models, based on the local derivatives of the velocity field and the local grid size:

\[ \nu_t = (C_s \Delta)^2 \sqrt{2 \, \bar{S}_{ij} \bar{S}_{ij}}, \]

where \(\Delta\) is the local grid size, \(C_s\) is the Smagorinsky constant, and \(\bar{S}_{ij}\) is the rate-of-strain tensor of the resolved velocity field.
In the context of Large Eddy Simulation, turbulence modeling refers to the need to parameterize the subgrid scale stress in terms of features of the filtered velocity field. This field is called subgrid-scale modeling.
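A minimal sketch in Python/NumPy of evaluating such a subgrid eddy viscosity on a uniform 2-D grid (our own illustration; the value \(C_s = 0.17\) and the toy shear-layer field are assumptions, not from the article):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky eddy viscosity on a uniform 2-D grid.
    u, v: resolved velocity components, shape (ny, nx); dx: grid spacing."""
    dudx = np.gradient(u, dx, axis=1); dudy = np.gradient(u, dx, axis=0)
    dvdx = np.gradient(v, dx, axis=1); dvdy = np.gradient(v, dx, axis=0)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)                              # strain-rate tensor
    S_mag = np.sqrt(2.0 * (S11**2 + 2 * S12**2 + S22**2))  # |S| = sqrt(2 S_ij S_ij)
    return (Cs * dx) ** 2 * S_mag

# Toy shear layer: u = tanh(y), v = 0 on a [-2, 2]^2 grid
dx = 4.0 / 64
y, x = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64), indexing="ij")
nu_t = smagorinsky_nu_t(np.tanh(y), np.zeros_like(y), dx)
print(nu_t.max())  # largest eddy viscosity sits at the layer centre, y = 0
```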
Spalart–Allmaras, k–ε and k–ω models
The Boussinesq hypothesis is employed in the Spalart–Allmaras (S–A), k–ε (k–epsilon), and k–ω (k–omega) models and offers a relatively low cost computation for the turbulence viscosity . The S–A model uses only one additional equation to model turbulence viscosity transport, while the k–ε and k–ω models use two.
Common models
The following is a brief overview of commonly employed models in modern engineering applications.
References
Notes
Other
Absi, R. (2019) "Eddy Viscosity and Velocity Profiles in Fully-Developed Turbulent Channel Flows" Fluid Dyn (2019) 54: 137. https://doi.org/10.1134/S0015462819010014
Absi, R. (2021) "Reinvestigating the Parabolic-Shaped Eddy Viscosity Profile for Free Surface Flows" Hydrology 2021, 8(3), 126. https://doi.org/10.3390/hydrology8030126
Townsend, A. A. (1980) "The Structure of Turbulent Shear Flow" 2nd Edition (Cambridge Monographs on Mechanics),
Bradshaw, P. (1971) "An introduction to turbulence and its measurement" (Pergamon Press),
Wilcox, C. D. (1998), "Turbulence Modeling for CFD" 2nd Ed., (DCW Industries, La Cañada),
Turbulence
Turbulence models | Turbulence modeling | [
"Chemistry"
] | 1,292 | [
"Turbulence",
"Fluid dynamics"
] |
3,087,602 | https://en.wikipedia.org/wiki/Topological%20order | In physics, topological order is a kind of order in the zero-temperature phase of matter (also known as quantum matter). Macroscopically, topological order is defined and described by robust ground state degeneracy and quantized non-abelian geometric phases of degenerate ground states. Microscopically, topological orders correspond to patterns of long-range quantum entanglement. States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition.
Various topologically ordered states have interesting properties, such as (1) topological degeneracy and fractional statistics or non-abelian group statistics that can be used to realize a topological quantum computer; (2) perfect conducting edge states that may have important device applications; (3) emergent gauge field and Fermi statistics that suggest a quantum information origin of elementary particles; (4) topological entanglement entropy that reveals the entanglement origin of topological order, etc. Topological order is important in the study of several physical systems such as spin liquids, and the quantum Hall effect, along with potential applications to fault-tolerant quantum computation.
Topological insulators and topological superconductors (beyond 1D) do not have topological order as defined above, their entanglements being only short-ranged, but are examples of symmetry-protected topological order.
Background
Matter composed of atoms can have different properties and appear in different forms, such as solid, liquid, superfluid, etc. These various forms of matter are often called states of matter or phases. According to condensed matter physics and the principle of emergence, the different properties of materials generally arise from the different ways in which the atoms are organized in the materials. Those different organizations of the atoms (or other particles) are formally called the orders in the materials.
Atoms can organize in many ways which lead to many different orders and many different types of materials. Landau symmetry-breaking theory provides a general understanding of these different orders. It points out that different orders really correspond to different symmetries in the organizations of the constituent atoms. As a material changes from one order to another order (i.e., as the material undergoes a phase transition), what happens is that the symmetry of the organization of the atoms changes.
For example, atoms have a random distribution in a liquid, so a liquid remains the same as we displace atoms by an arbitrary distance. We say that a liquid has a continuous translation symmetry. After a phase transition, a liquid can turn into a crystal. In a crystal, atoms organize into a regular array (a lattice). A lattice remains unchanged only when we displace it by a particular distance (integer times a lattice constant), so a crystal has only discrete translation symmetry. The phase transition between a liquid and a crystal is a transition that reduces the continuous translation symmetry of the liquid to the discrete symmetry of the crystal. Such a change in symmetry is called symmetry breaking. The essence of the difference between liquids and crystals is therefore that the organizations of atoms have different symmetries in the two phases.
Landau symmetry-breaking theory has been very successful. For a long time, physicists believed that it described all possible orders in materials, and all possible (continuous) phase transitions.
Discovery and characterization
However, since the late 1980s, it has become gradually apparent that Landau symmetry-breaking theory may not describe all possible orders. In an attempt to explain high temperature superconductivity the chiral spin state was introduced. At first, physicists still wanted to use Landau symmetry-breaking theory to describe the chiral spin state. They identified the chiral spin state as a state that breaks the time reversal and parity symmetries, but not the spin rotation symmetry. This should be the end of the story according to Landau's symmetry breaking description of orders. However, it was quickly realized that there are many different chiral spin states that have exactly the same symmetry, so symmetry alone was not enough to characterize different chiral spin states. This means that the chiral spin states contain a new kind of order that is beyond the usual symmetry description. The proposed, new kind of order was named "topological order". The name "topological order" is motivated by the low energy effective theory of the chiral spin states which is a topological quantum field theory (TQFT). New quantum numbers, such as ground state degeneracy (which can be defined on a closed space or an open space with gapped boundaries, including both Abelian topological orders and non-Abelian topological orders) and the non-Abelian geometric phase of degenerate ground states, were introduced to characterize and define the different topological orders in chiral spin states. More recently, it was shown that topological orders can also be characterized by topological entropy.
But experiments soon indicated that chiral spin states do not describe high-temperature superconductors, and the theory of topological order became a theory with no experimental realization. However, the similarity between chiral spin states and quantum Hall states allows one to use the theory of topological order to describe different quantum Hall states. Just like chiral spin states, different quantum Hall states all have the same symmetry and are outside the Landau symmetry-breaking description. One finds that the different orders in different quantum Hall states can indeed be described by topological orders, so the topological order does have experimental realizations.
The fractional quantum Hall (FQH) state was discovered in 1982, before the introduction of the concept of topological order in 1989. However, the FQH state is not the first experimentally discovered topologically ordered state: the superconductor, discovered in 1911, came first, and it has Z2 topological order.
Although topologically ordered states usually appear in strongly interacting boson/fermion systems, a simple kind of topological order can also appear in free fermion systems. This kind of topological order corresponds to the integer quantum Hall state, which can be characterized by the Chern number of the filled energy band if we consider an integer quantum Hall state on a lattice. Theoretical calculations have proposed that such Chern numbers can be measured for a free fermion system experimentally.
It is also well known that such a Chern number can be measured (maybe indirectly) by edge states.
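To make the lattice Chern number concrete, the following sketch computes it for the filled (lower) band of a simple two-band model, the Qi–Wu–Zhang model, used here purely as an illustrative stand-in for an integer-quantum-Hall-like band; the model choice, grid size, and parameter values are assumptions for illustration, not taken from any reference above. It uses the standard Fukui–Hatsugai–Suzuki lattice method:

```python
import numpy as np

# Illustrative sketch: Chern number of the lower band of the assumed
# Qi-Wu-Zhang two-band model, via the Fukui-Hatsugai-Suzuki method.

def bloch_h(kx, ky, m):
    """Bloch Hamiltonian H(k) = d(k) . sigma."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def lower_band_state(kx, ky, m):
    _, vecs = np.linalg.eigh(bloch_h(kx, ky, m))
    return vecs[:, 0]                      # eigenvector of the filled band

def chern_number(m, n=40):
    ks = 2 * np.pi * np.arange(n) / n      # discretized Brillouin zone
    u = np.array([[lower_band_state(kx, ky, m) for ky in ks] for kx in ks])
    total = 0.0
    for i in range(n):
        for j in range(n):
            i2, j2 = (i + 1) % n, (j + 1) % n
            # Gauge-invariant Berry flux through one BZ plaquette:
            loop = (np.vdot(u[i, j], u[i2, j]) *
                    np.vdot(u[i2, j], u[i2, j2]) *
                    np.vdot(u[i2, j2], u[i, j2]) *
                    np.vdot(u[i, j2], u[i, j]))
            total += np.angle(loop)
    return int(round(total / (2 * np.pi)))

for m in (-3.0, -1.0, 1.0, 3.0):
    print(f"m = {m:+.1f}  ->  Chern number C = {chern_number(m)}")
```

The plaquette construction is gauge invariant, so the arbitrary phases returned by the eigensolver drop out, and the sum converges to an exact integer: 0 in the trivial regime |m| > 2 and ±1 in the topological regimes.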
The most important characterization of topological orders is the set of underlying fractionalized excitations (such as anyons) together with their fusion statistics and braiding statistics (which can go beyond the quantum statistics of bosons or fermions). Current research shows that loop- and string-like excitations exist for topological orders in 3+1 dimensional spacetime, and that their multi-loop/string-braiding statistics are the crucial signatures for identifying 3+1 dimensional topological orders. The multi-loop/string-braiding statistics of 3+1 dimensional topological orders can be captured by the link invariants of particular topological quantum field theories in 4 spacetime dimensions.
Mechanism
A large class of 2+1D topological orders is realized through a mechanism called string-net condensation. This class of topological orders can have a gapped edge and are classified by unitary fusion category (or monoidal category) theory. One finds that string-net condensation can generate infinitely many different types of topological orders, which may indicate that there are many different new types of materials remaining to be discovered.
The collective motions of condensed strings give rise to excitations above the string-net condensed states. Those excitations turn out to be gauge bosons. The ends of strings are defects which correspond to another type of excitation. Those excitations are the gauge charges and can carry Fermi or fractional statistics.
The condensations of other extended objects such as "membranes", "brane-nets", and fractals also lead to topologically ordered phases and "quantum glassiness".
Mathematical formulation
We know that group theory is the mathematical foundation of symmetry-breaking orders. What is the mathematical foundation of topological order?
It was found that a subclass of 2+1D topological orders—Abelian topological orders—can be classified by a K-matrix approach. String-net condensation suggests that tensor categories (such as fusion categories or monoidal categories) are part of the mathematical foundation of topological order in 2+1D. More recent research suggests that (up to invertible topological orders, which have no fractionalized excitations):
2+1D bosonic topological orders are classified by unitary modular tensor categories.
2+1D bosonic topological orders with symmetry G are classified by G-crossed tensor categories.
2+1D bosonic/fermionic topological orders with symmetry G are classified by unitary braided fusion categories over a symmetric fusion category that have modular extensions; the symmetric fusion category is Rep(G) for bosonic systems and sRep(G) for fermionic systems.
Topological order in higher dimensions may be related to n-category theory. Quantum operator algebra is a very important mathematical tool in studying topological orders.
Some also suggest that topological order is mathematically described by extended quantum symmetry.
Applications
The materials described by Landau symmetry-breaking theory have had a substantial impact on technology. For example, ferromagnetic materials that break spin rotation symmetry can be used as the media of digital information storage. A hard drive made of ferromagnetic materials can store gigabytes of information. Liquid crystals that break the rotational symmetry of molecules find wide application in display technology. Crystals that break translation symmetry lead to well defined electronic bands which in turn allow us to make semiconducting devices such as transistors. Different types of topological orders are even richer than different types of symmetry-breaking orders. This suggests their potential for exciting, novel applications.
One theorized application would be to use topologically ordered states as media for quantum computing in a technique known as topological quantum computing. A topologically ordered state is a state with complicated non-local quantum entanglement. The non-locality means that the quantum entanglement in a topologically ordered state is distributed among many different particles. As a result, the pattern of quantum entanglements cannot be destroyed by local perturbations. This significantly reduces the effect of decoherence. This suggests that if we use different quantum entanglements in a topologically ordered state to encode quantum information, the information may last much longer. The quantum information encoded by the topological quantum entanglements can also be manipulated by dragging the topological defects around each other. This process may provide a physical apparatus for performing quantum computations. Therefore, topologically ordered states may provide natural media for both quantum memory and quantum computation. Such realizations of quantum memory and quantum computation may potentially be made fault tolerant.
Topologically ordered states in general have a special property: they contain non-trivial boundary states. In many cases, those boundary states become perfect conducting channels that can conduct electricity without generating heat. This can be another potential application of topological order in electronic devices.
Similarly to topological order, topological insulators also have gapless boundary states. The boundary states of topological insulators play a key role in the detection and the application of topological insulators.
This observation naturally leads to a question:
are topological insulators examples of topologically ordered states?
In fact topological insulators are different from topologically ordered states defined in this article.
Topological insulators only have short-ranged entanglements and have no topological order, while the topological order defined in this article is a pattern of long-range entanglement. Topological order is robust against any perturbations. It has emergent gauge theory, emergent fractional charge, and fractional statistics. In contrast, topological insulators are robust only against perturbations that respect time-reversal and U(1) symmetries. Their quasi-particle excitations have no fractional charge or fractional statistics. Strictly speaking, the topological insulator is an example of symmetry-protected topological (SPT) order, the first example of which is the Haldane phase of the spin-1 chain. (The Haldane phase of the spin-2 chain, by contrast, has no SPT order.)
Potential impact
Landau symmetry-breaking theory is a cornerstone of condensed matter physics. It is used to define the territory of condensed matter research. The existence of topological order appears to indicate that nature is much richer than Landau symmetry-breaking theory has so far indicated. So topological order opens up a new direction in condensed matter physics—a new direction of highly entangled quantum matter. We realize that quantum phases of matter (i.e. the zero-temperature phases of matter) can be divided into two classes: long range entangled states and short range entangled states. Topological order is the notion that describes the long range entangled states: topological order = pattern of long range entanglements. Short range entangled states are trivial in the sense that they all belong to one phase. However, in the presence of symmetry, even short range entangled states are nontrivial and can belong to different phases. Those phases are said to contain SPT order. SPT order generalizes the notion of topological insulator to interacting systems.
Some suggest that topological order (or more precisely, string-net condensation) in local bosonic (spin) models has the potential to provide a unified origin for photons, electrons and other elementary particles in our universe.
See also
AKLT model
Fractionalization
Herbertsmithite
Implicate order
Quantum topology
Spin liquid
String-net liquid
Symmetry-protected topological order
Topological defect
Topological degeneracy
Topological entropy in physics
Topological quantum field theory
Topological quantum number
Topological string theory
Notes
References
References by categories
Fractional quantum Hall states
Chiral spin states
Early characterization of FQH states
Off-diagonal long-range order, oblique confinement, and the fractional quantum Hall effect, S. M. Girvin and A. H. MacDonald, Phys. Rev. Lett., 58, 1252 (1987)
Effective-Field-Theory Model for the Fractional Quantum Hall Effect, S. C. Zhang and T. H. Hansson and S. Kivelson, Phys. Rev. Lett., 62, 82 (1989)
Topological order
Xiao-Gang Wen, Phys. Rev. B, 40, 7387 (1989), "Vacuum Degeneracy of Chiral Spin State in Compactified Spaces"
Xiao-Gang Wen, Quantum Field Theory of Many Body Systems – From the Origin of Sound to an Origin of Light and Electrons, Oxford Univ. Press, Oxford, 2004.
Characterization of topological order
D. Arovas and J. R. Schrieffer and F. Wilczek, Phys. Rev. Lett., 53, 722 (1984), "Fractional Statistics and the Quantum Hall Effect"
Effective theory of topological order
Mechanism of topological order
Quantum computing
Ady Stern and Bertrand I. Halperin, Phys. Rev. Lett., 96, 016802 (2006), Proposed Experiments to probe the Non-Abelian nu=5/2 Quantum Hall State
Emergence of elementary particles
Xiao-Gang Wen, Phys. Rev. D68, 024501 (2003), Quantum order from string-net condensations and origin of light and massless fermions
See also
Zheng-Cheng Gu and Xiao-Gang Wen, gr-qc/0606100, A lattice bosonic model as a quantum theory of gravity.
Quantum operator algebra
Landsman, N. P. and Ramazan, B., Quantization of Poisson algebras associated to Lie algebroids, in Proc. Conf. on Groupoids in Physics, Analysis and Geometry (Boulder CO, 1999), Editors J. Kaminker et al., 159–192, Contemp. Math. 282, Amer. Math. Soc., Providence RI, 2001 (also math-ph/0001005).
Baianu, I. C., Non-Abelian Quantum Algebraic Topology (NAQAT), 20 Nov. 2008, 87 pages.
Levin A. and Olshanetsky M., Hamiltonian Algebroids and deformations of complex structures on Riemann curves, hep-th/0301078v1.
Xiao-Gang Wen, Yong-Shi Wu and Y. Hatsugai, Chiral operator product algebra and edge excitations of a FQH droplet (pdf), Nucl. Phys. B422, 476 (1994): Used chiral operator product algebra to construct the bulk wave function, characterize the topological orders and calculate the edge states for some non-Abelian FQH states.
Xiao-Gang Wen and Yong-Shi Wu, Chiral operator product algebra hidden in certain FQH states (pdf), Nucl. Phys. B419, 455 (1994): Demonstrated that non-Abelian topological orders are closely related to chiral operator product algebra (instead of conformal field theory).
Non-Abelian theory.
R. Brown, P.J. Higgins, P. J. and R. Sivera, "Nonabelian Algebraic Topology: filtered spaces, crossed complexes, cubical homotopy groupoids" EMS Tracts in Mathematics Vol 15 (2011),
A Bibliography for Categories and Algebraic Topology Applications in Theoretical Physics
Quantum Algebraic Topology (QAT)
Quantum phases
Condensed matter physics
Statistical mechanics | Topological order | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,592 | [
"Quantum phases",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
3,088,136 | https://en.wikipedia.org/wiki/Drawing%20down%20the%20Moon%20%28ritual%29 | Drawing down the Moon (also known as drawing down the Goddess) is a central ritual in many contemporary Wiccan traditions. During the ritual, a coven's High Priestess enters a trance and requests that the Goddess or Triple Goddess, symbolized by the Moon, enter her body and speak through her. The High Priestess may be aided by the High Priest, who invokes the spirit of the Goddess. During her trance, the Goddess is supposed to speak through the High Priestess.
History
The name most likely comes from a depiction of two women and the moon on an ancient Greek vase, believed to date from the second century BCE.
It could also come from line 145 of Claudian's First Book Against Rufinus, in which Megaera, one of the Erinyes, in the guise of an old man, speaks to Rufinus.
In classical times, the Greek astronomer Aglaonice of Thessaly and ancient Thessalian witches were believed to control the moon, according to the tract: "If I command the moon, it will come down; and if I wish to withhold the day, night will linger over my head; and again, if I wish to embark on the sea, I need no ship, and if I wish to fly through the air, I am free from my weight."
The drawing down of the moon derives from the Vangelo, in which a poem describing the drawing down of the moon appears; this poem has been used as the basis for the rite by various Wiccan groups. The practice forms part of both Gardnerian and Cochranian rites, and is also referenced in Reginald Scot's "The Discoverie of Witchcraft".
Though a number of Wiccan traditions may practice a variation of the ritual, the modern form likely originated in Gardnerian Wicca, and it is considered a central element of Gardnerian and Alexandrian Wiccan ceremonies. During the modern rite, the High Priestess may recite the Charge of the Goddess, a text based on a mixture of writings by Gerald Gardner and Aleister Crowley, though now often used in the recension by Doreen Valiente, a High Priestess in the Gardnerian tradition.
Mel D. Faber explains the ritual in psychoanalytical terms of attempting to re-unite with the protective-mother archetype.
In modern traditions, some solitary Wiccans also perform the ritual, usually within a circle and performed under the light of a full Moon. The solitary will stand in the Goddess Pose (both arms held high, palms up, body and arms forming a 'Y') and recite a charge, or chant.
The ritual in print
"Drawing Down the Moon" is also the title of a book by National Public Radio reporter, Margot Adler— Drawing Down the Moon: Witches, Druids, Goddess-Worshippers, and Other Pagans in America Today—originally published in 1979. Adler writes:
...in this ritual, one of the most serious and beautiful in the modern Craft, the priest invokes into the priestess (or, depending on your point of view, she evokes from within herself) the Goddess or Triple Goddess, symbolized by the phases of the moon. She is known by a thousand names, and among them were those I had used as a child. In some Craft rituals the priestess goes into a trance and speaks; in other traditions the ritual is a more formal dramatic dialogue, often of intense beauty, in which, again, the priestess speaks, taking the role of the Goddess. In both instances, the priestess functions as the Goddess incarnate, within the circle.
See also
Adorcism
References
Further reading
Margot Adler Drawing Down the Moon, Revised and Expanded ed., Viking Press, 1997,
Ed Fitch Magical Rites From the Crystal Well, Llewellyn Publications, 1984,
Starhawk, The Spiral Dance, 20th Anniversary Edition, Harper, San Francisco, 1999,
External links
Drawing Down the Moon from Gardnerian Book of Shadows
Ancient Greek religion
Culture of ancient Thessaly
Magic rituals
Moon myths
Wiccan terminology
Spirit possession | Drawing down the Moon (ritual) | [
"Astronomy"
] | 834 | [
"Astronomical myths",
"Moon myths"
] |
3,088,223 | https://en.wikipedia.org/wiki/Superlens | A superlens, or super lens, is a lens which uses metamaterials to go beyond the diffraction limit. The diffraction limit is a feature of conventional lenses and microscopes that limits the fineness of their resolution depending on the illumination wavelength and the numerical aperture (NA) of the objective lens. Many lens designs have been proposed that go beyond the diffraction limit in some way, but constraints and obstacles face each of them.
History
In 1873 Ernst Abbe reported that conventional lenses are incapable of capturing some fine details of any given image. The superlens is intended to capture such details. This limitation of conventional lenses has inhibited progress in the biological sciences. This is because a virus or DNA molecule cannot be resolved with the highest powered conventional microscopes. This limitation extends to the minute processes of cellular proteins moving alongside microtubules of a living cell in their natural environments. Additionally, computer chips and the interrelated microelectronics continue to be manufactured at progressively smaller scales. This requires specialized optical equipment, which is also limited because these use conventional lenses. Hence, the principles governing a superlens show that it has potential for imaging DNA molecules, cellular protein processes, and aiding in the manufacture of even smaller computer chips and microelectronics.
Conventional lenses capture only the propagating light waves. These are waves that travel from a light source or an object to a lens, or the human eye. This can alternatively be studied as the far field. In contrast, a superlens captures propagating light waves and waves that stay on top of the surface of an object, which, alternatively, can be studied as both the far field and the near field.
In the early 20th century the term "superlens" was used by Dennis Gabor to describe something quite different: a compound lenslet array system.
Theory
Image formation
An image of an object can be defined as a tangible or visible representation of the features of that object. A requirement for image formation is interaction with fields of electromagnetic radiation. Furthermore, the level of feature detail, or image resolution, is limited by the wavelength of the radiation. For example, with optical microscopy, image production and resolution depend on the wavelength of visible light. However, with a superlens, this limitation may be removed, and a new class of image generated.
Electron beam lithography can overcome this resolution limit. Optical microscopy, on the other hand, cannot, being limited to some value just above 200 nanometers. However, new technologies combined with optical microscopy are beginning to allow increased feature resolution (see sections below).
One definition of being constrained by the resolution barrier is a resolution cut-off at half the wavelength of light. The visible spectrum has a range that extends from 390 nanometers to 750 nanometers. Green light, halfway in between, is around 500 nanometers. Microscopy takes into account parameters such as lens aperture, distance from the object to the lens, and the refractive index of the observed material. This combination defines the resolution cut-off, or microscopy optical limit, which comes to approximately 200 nanometers. Therefore, conventional lenses, which literally construct an image of an object by using "ordinary" light waves, discard the information describing the very fine, minuscule details of the object that is carried in evanescent waves. These dimensions are less than 200 nanometers. For this reason, conventional optical systems, such as microscopes, have been unable to accurately image very small, nanometer-sized structures or nanometer-sized organisms in vivo, such as individual viruses or DNA molecules.
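As a rough worked example (the objective parameters here are assumed, typical values rather than figures from any particular instrument), the Abbe criterion for green light of wavelength λ ≈ 550 nm and an oil-immersion objective of numerical aperture NA ≈ 1.4 gives

$$d=\frac{\lambda}{2\,\mathrm{NA}}\approx\frac{550\ \text{nm}}{2\times 1.4}\approx 196\ \text{nm},$$

consistent with the roughly 200 nanometer cut-off quoted above.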
The limitations of standard optical microscopy (bright-field microscopy) lie in three areas:
The technique can only image dark or strongly refracting objects effectively.
Diffraction limits the object, or cell's, resolution to approximately 200 nanometers.
Out-of-focus light from points outside the focal plane reduces image clarity.
Live biological cells in particular generally lack sufficient contrast to be studied successfully, because the internal structures of the cell are mostly colorless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes, but often this involves killing and fixing the sample. Staining may also introduce artifacts, apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen.
Conventional lens
The conventional glass lens is pervasive throughout our society and in the sciences. It is one of the fundamental tools of optics simply because it interacts with various wavelengths of light. At the same time, the wavelength of light is analogous to the width of the pencil used to draw an image: the limit intrudes in all kinds of ways. For example, the laser used in a digital video system cannot read details from a DVD that are smaller than the wavelength of the laser. This limits the storage capacity of DVDs.
Thus, when an object emits or reflects light there are two types of electromagnetic radiation associated with this phenomenon. These are the near field radiation and the far field radiation. As implied by its description, the far field escapes beyond the object. It is then easily captured and manipulated by a conventional glass lens. However, useful (nanometer-sized) resolution details are not observed, because they are hidden in the near field. They remain localized, staying much closer to the light emitting object, unable to travel, and unable to be captured by the conventional lens. Controlling the near field radiation, for high resolution, can be accomplished with a new class of materials not easily obtained in nature. These are unlike familiar solids, such as crystals, which derive their properties from atomic and molecular units. The new material class, termed metamaterials, obtains its properties from its artificially larger structure. This has resulted in novel properties, and novel responses, which allow for details of images that surpass the limitations imposed by the wavelength of light.
Subwavelength imaging
This has led to the desire to view live biological cell interactions in a real time, natural environment, and the need for subwavelength imaging. Subwavelength imaging can be defined as optical microscopy with the ability to see details of an object or organism below the wavelength of visible light (see discussion in the above sections). In other words, to have the capability to observe, in real time, below 200 nanometers. Optical microscopy is a non-invasive technique and technology because everyday light is the transmission medium. Imaging below the optical limit in optical microscopy (subwavelength) can be engineered for the cellular level, and nanometer level in principle.
For example, in 2007 a technique was demonstrated where a metamaterials-based lens coupled with a conventional optical lens could manipulate visible light to see (nanoscale) patterns that were too small to be observed with an ordinary optical microscope. This has potential applications not only for observing a whole living cell, or for observing cellular processes, such as how proteins and fats move in and out of cells. In the technology domain, it could be used to improve the first steps of photolithography and nanolithography, essential for manufacturing ever smaller computer chips.
Focusing at subwavelength has become a unique imaging technique which allows visualization of features on the viewed object which are smaller than the wavelength of the photons in use. A photon is the minimum unit of light. While previously thought to be physically impossible, subwavelength imaging has been made possible through the development of metamaterials. This is generally accomplished using a layer of metal such as gold or silver a few atoms thick, which acts as a superlens, or by means of 1D and 2D photonic crystals. There is a subtle interplay between propagating waves, evanescent waves, near field imaging and far field imaging discussed in the sections below.
Early subwavelength imaging
Metamaterial lenses (superlenses) are able to reconstruct nanometer-sized images by producing a negative refractive index in each instance. This compensates for the swiftly decaying evanescent waves. Prior to metamaterials, numerous other techniques had been proposed, and even demonstrated, for creating super-resolution microscopy. As far back as 1928, the Irish physicist Edward Hutchinson Synge is given credit for conceiving and developing the idea of what would ultimately become near-field scanning optical microscopy.
In 1974 proposals for two-dimensional fabrication techniques were presented. These proposals included contact imaging to create a pattern in relief, photolithography, electron-beam lithography, X-ray lithography, or ion bombardment, on an appropriate planar substrate. The shared technological goals of the metamaterial lens and the variety of lithography aim to optically resolve features having dimensions much smaller than that of the vacuum wavelength of the exposing light. In 1981 two different techniques of contact imaging of planar (flat) submicroscopic metal patterns with blue light (400 nm) were demonstrated. One demonstration resulted in an image resolution of 100 nm and the other a resolution of 50 to 70 nm.
In 1995, John Guerra combined a transparent grating having 50 nm lines and spaces (the "metamaterial") with a conventional microscope immersion objective. The resulting "superlens" resolved a silicon sample also having 50 nm lines and spaces, far beyond the classical diffraction limit imposed by the illumination having 650 nm wavelength in air.
Since at least 1998 near field optical lithography was designed to create nanometer-scale features. Research on this technology continued as the first experimentally demonstrated negative index metamaterial came into existence in 2000–2001. The effectiveness of electron-beam lithography was also being researched at the beginning of the new millennium for nanometer-scale applications. Imprint lithography was shown to have desirable advantages for nanometer-scaled research and technology.
Advanced deep UV photolithography can now offer sub-100 nm resolution, yet the minimum feature size and spacing between patterns are determined by the diffraction limit of light. Its derivative technologies such as evanescent near-field lithography, near-field interference lithography, and phase-shifting mask lithography were developed to overcome the diffraction limit.
In the year 2000, John Pendry proposed using a metamaterial lens to achieve nanometer-scaled imaging for focusing below the wavelength of light.
Analysis of the diffraction limit
The original problem of the perfect lens: The general expansion of an EM field emanating from a source consists of both propagating waves and near-field, or evanescent, waves. For example, a 2-D line source whose electric field has S-polarization produces plane waves consisting of propagating and evanescent components, which advance parallel to the interface. As both the propagating and the smaller evanescent waves advance in a direction parallel to the medium interface, the evanescent waves decay in the direction of propagation. Ordinary (positive index) optical elements can refocus the propagating components, but the exponentially decaying inhomogeneous components are always lost, leading to the diffraction limit for focusing to an image.
A superlens is a lens which is capable of subwavelength imaging, allowing for magnification of near field rays. Conventional lenses have a resolution on the order of one wavelength due to the so-called diffraction limit. This limit hinders imaging very small objects, such as individual atoms, which are much smaller than the wavelength of visible light. A superlens is able to beat the diffraction limit. An example is the initial lens described by Pendry, which uses a slab of material with a negative index of refraction as a flat lens. In theory, a perfect lens would be capable of perfect focus – meaning that it could perfectly reproduce the electromagnetic field of the source plane at the image plane.
The diffraction limit as restriction on resolution
The performance limitation of conventional lenses is due to the diffraction limit. Following Pendry (2000), the diffraction limit can be understood as follows. Consider an object and a lens placed along the z-axis so the rays from the object are traveling in the +z direction. The field emanating from the object can be written, using the angular spectrum method, as a superposition of plane waves:

$$E(x,y,z,t)=\sum_{\sigma,k_x,k_y}E_\sigma(k_x,k_y)\,e^{i(k_x x+k_y y+k_z z-\omega t)}$$

where kz is the following function of kx and ky:

$$k_z=\sqrt{\frac{\omega^2}{c^2}-k_x^2-k_y^2}$$

Only the positive square root is taken, as the energy is going in the +z direction. All of the components of the angular spectrum of the image for which kz is real are transmitted and re-focused by an ordinary lens. However, if

$$k_x^2+k_y^2>\frac{\omega^2}{c^2},$$

then kz becomes imaginary, and the wave is an evanescent wave, whose amplitude decays as the wave propagates along the z axis. This results in the loss of the high-angular-frequency components of the wave, which contain information about the high-frequency (small-scale) features of the object being imaged. The highest resolution that can be obtained can be expressed in terms of the wavelength:

$$\Delta\approx\frac{2\pi}{k_{\max}}=\frac{2\pi c}{\omega}=\lambda$$

A superlens overcomes the limit. A Pendry-type superlens has an index of n = −1 (ε = −1, μ = −1), and in such a material, transport of energy in the +z direction requires the z-component of the wave vector to have the opposite sign:

$$k_z'=-\sqrt{\frac{\omega^2}{c^2}-k_x^2-k_y^2}$$
For large transverse wavenumbers, the evanescent wave now grows, so with the proper lens thickness, all components of the angular spectrum can be transmitted through the lens undistorted. There are no problems with conservation of energy, as evanescent waves carry none in the direction of growth: the Poynting vector is oriented perpendicularly to the direction of growth. For traveling waves inside a perfect lens, the Poynting vector points in the direction opposite to the phase velocity.
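The net effect can be checked numerically. The sketch below is an illustration of the argument above, not code from any reference; the wavelength, slab thickness, and sampled spatial frequencies are arbitrary assumptions. It compares the amplitude transfer of spatial-frequency components across a vacuum gap with the combined transfer across the gap plus an ideal lossless n = −1 slab of equal thickness:

```python
import numpy as np

# Illustrative sketch: evanescent decay across a vacuum gap versus
# restoration by an ideal lossless n = -1 (Pendry) slab.

wavelength = 365e-9            # assumed illumination wavelength, metres
k0 = 2 * np.pi / wavelength    # free-space wavenumber
d = 40e-9                      # assumed gap and slab thickness, metres

# Transverse wavenumbers from below to well above the cutoff k0
kx = np.linspace(0.1, 4.0, 8) * k0

# kz is real (propagating) for kx < k0 and imaginary (evanescent) for kx > k0
kz = np.sqrt(np.asarray(k0**2 - kx**2, dtype=complex))

# Across the vacuum gap: propagating parts keep |t| = 1,
# evanescent parts decay as exp(-|kz| d)
t_gap = np.exp(1j * kz * d)

# An ideal n = -1 slab amplifies evanescent parts as exp(+|kz| d),
# exactly undoing the decay accumulated in the gap
t_slab = np.exp(-1j * kz * d)

for q, tg, tt in zip(kx / k0, np.abs(t_gap), np.abs(t_gap * t_slab)):
    print(f"kx/k0 = {q:4.2f}   gap only |t| = {tg:.3f}   gap + slab |t| = {tt:.3f}")
```

Components with kx < k0 pass with unit magnitude in both cases, while components with kx > k0 decay in the gap alone but are restored to unit magnitude by the ideal slab; this restoration of the evanescent spectrum is the essence of Pendry's argument.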
Effects of negative index of refraction
Normally, when a wave passes through the interface of two materials, the wave appears on the opposite side of the normal. However, if the interface is between a material with a positive index of refraction and another material with a negative index of refraction, the wave will appear on the same side of the normal. Pendry's idea of a perfect lens is a flat material where n=−1. Such a lens allows near-field rays, which normally decay due to the diffraction limit, to focus once within the lens and once outside the lens, allowing subwavelength imaging.
Development and construction
Superlens construction was at one time thought to be impossible. In 2000, Pendry claimed that a simple slab of left-handed material would do the job. The experimental realization of such a lens took, however, some more time, because it is not that easy to fabricate metamaterials with both negative permittivity and permeability. Indeed, no such material exists naturally, and construction of the required metamaterials is non-trivial. Furthermore, it was shown that the parameters of the material are extremely sensitive (the index must equal −1); small deviations make the subwavelength resolution unobservable. Due to the resonant nature of metamaterials, on which many (proposed) implementations of superlenses depend, metamaterials are highly dispersive. The sensitive nature of the superlens with respect to the material parameters causes superlenses based on metamaterials to have a limited usable frequency range. This initial theoretical superlens design consisted of a metamaterial that compensated for wave decay and reconstructed images in the near field. Both propagating and evanescent waves could contribute to the resolution of the image.
Pendry also suggested that a lens having only one negative parameter would form an approximate superlens, provided that the distances involved are also very small and provided that the source polarization is appropriate. For visible light this is a useful substitute, since engineering metamaterials with a negative permeability at the frequency of visible light is difficult. Metals are then a good alternative, as they have negative permittivity (but not negative permeability). Pendry suggested using silver due to its relatively low loss at the predicted wavelength of operation (356 nm). In 2003, Pendry's theory was first experimentally demonstrated at RF/microwave frequencies. In 2005, two independent groups verified Pendry's lens in the UV range, both using thin layers of silver illuminated with UV light to produce "photographs" of objects smaller than the wavelength. Negative refraction of visible light was experimentally verified in an yttrium orthovanadate (YVO4) bicrystal in 2003.
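The reason metals are candidates at all is that their permittivity is negative below the plasma frequency. A minimal Drude-model sketch illustrates this for a silver-like metal; the parameters are assumed textbook values, and a bare Drude model neglects silver's interband transitions, so the numbers are qualitative only:

```python
import numpy as np

# Illustrative Drude-model sketch: sign of Re(epsilon) for a silver-like
# metal at optical/UV wavelengths. Parameters are assumed round values.

hbar_wp = 9.0    # plasma energy in eV (assumed)
hbar_g = 0.02    # collision broadening in eV (assumed)

def eps_drude(energy_ev):
    """epsilon(w) = 1 - wp^2 / (w^2 + i*gamma*w), with energies in eV."""
    w = energy_ev
    return 1.0 - hbar_wp**2 / (w**2 + 1j * hbar_g * w)

for wavelength_nm in (365.0, 436.0, 550.0):
    energy = 1239.84 / wavelength_nm      # photon energy in eV
    eps = eps_drude(energy)
    print(f"lambda = {wavelength_nm:5.1f} nm  ->  "
          f"Re(eps) = {eps.real:7.2f}, Im(eps) = {eps.imag:6.3f}")
```

In this model the real part of the permittivity is negative throughout the visible and near UV, which is the property the single-negative silver superlens exploits for TM-polarized near fields.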
It was discovered that a simple superlens design for microwaves could use an array of parallel conducting wires.
This structure was shown to be able to improve the resolution of MRI imaging.
In 2004, the first superlens with a negative refractive index provided resolution three times better than the diffraction limit and was demonstrated at microwave frequencies. In 2005, the first near-field superlens was demonstrated by N. Fang et al., but the lens did not rely on negative refraction. Instead, a thin silver film was used to enhance the evanescent modes through surface plasmon coupling. Almost at the same time Melville and Blaikie succeeded with a near-field superlens. Other groups followed. Two developments in superlens research were reported in 2008. In one of these, a metamaterial was formed from silver nanowires which were electrochemically deposited in porous aluminium oxide. The material exhibited negative refraction. The imaging performance of such isotropic negative-dielectric-constant slab lenses was also analyzed with respect to the slab material and thickness. Subwavelength imaging opportunities with planar uniaxial anisotropic lenses, where the dielectric tensor components are of opposite sign, have also been studied as a function of the structure parameters.
The superlens has not yet been demonstrated at visible or near-infrared frequencies (Nielsen, R. B.; 2010). Furthermore, as dispersive materials, these are limited to functioning at a single wavelength. Proposed solutions are metal–dielectric composites (MDCs) and multilayer lens structures. The multi-layer superlens appears to have better subwavelength resolution than the single-layer superlens. Losses are less of a concern with the multi-layer system, but so far it appears to be impractical because of impedance mismatch.
While the evolution of nanofabrication techniques continues to push the limits in fabrication of nanostructures, surface roughness remains an inevitable source of concern in the design of nano-photonic devices. The impact of this surface roughness on the effective dielectric constants and subwavelength image resolution of multilayer metal–insulator stack lenses has also been studied.
Perfect lenses
When the world is observed through conventional lenses, the sharpness of the image is determined by and limited to the wavelength of light. Around the year 2000, a slab of negative index metamaterial was theorized to create a lens with capabilities beyond conventional (positive index) lenses. Pendry proposed that a thin slab of negative refractive metamaterial might overcome known problems with common lenses to achieve a "perfect" lens that would focus the entire spectrum, both the propagating as well as the evanescent spectra.
A slab of silver was proposed as the metamaterial. More specifically, such a thin silver film can be regarded as a metasurface. As light moves away (propagates) from the source, it acquires an arbitrary phase. Through a conventional lens the phase remains consistent, but the evanescent waves decay exponentially. In the flat metamaterial DNG slab, normally decaying evanescent waves are instead amplified. Furthermore, as the evanescent waves are now amplified, the phase is reversed.
Therefore, a type of lens was proposed, consisting of a metal film metamaterial. When illuminated near its plasma frequency, the lens could be used for superresolution imaging that compensates for wave decay and reconstructs images in the near-field. In addition, both propagating and evanescent waves contribute to the resolution of the image.
Pendry suggested that left-handed slabs allow "perfect imaging" if they are completely lossless, impedance matched, and their refractive index is −1 relative to the surrounding medium. Theoretically, this would be a breakthrough in that the optical version could resolve objects as minuscule as nanometers across. Pendry predicted that double-negative (DNG) metamaterials with a refractive index of n = −1 can act, at least in principle, as a "perfect lens" allowing imaging resolution which is limited not by the wavelength, but rather by material quality.
Other studies concerning the perfect lens
Further research demonstrated that Pendry's theory behind the perfect lens was not exactly correct. The analysis of the focusing of the evanescent spectrum (equations 13–21 in reference) was flawed. In addition, this applies to only one (theoretical) instance, and that is one particular medium that is lossless, nondispersive and the constituent parameters are defined as:
ε(ω)/ε0 = μ(ω)/μ0 = −1, which in turn results in a negative index of refraction, n = −1
However, the final intuitive result of this theory, that both the propagating and evanescent waves are focused, producing one converging focal point within the slab and another beyond it, turned out to be correct.
If the DNG metamaterial medium has a large negative index or becomes lossy or dispersive, Pendry's perfect lens effect cannot be realized. As a result, the perfect lens effect does not exist in general. According to FDTD simulations at the time (2001), the DNG slab acts like a converter from a pulsed cylindrical wave to a pulsed beam. Furthermore, in reality (in practice), a DNG medium must be and is dispersive and lossy, which can have either desirable or undesirable effects, depending on the research or application. Consequently, Pendry's perfect lens effect is inaccessible with any metamaterial designed to be a DNG medium.
Another analysis, in 2002, of the perfect lens concept showed it to be in error while using the lossless, dispersionless DNG as the subject. This analysis mathematically demonstrated that subtleties of evanescent waves, restriction to a finite slab and absorption had led to inconsistencies and divergencies that contradict the basic mathematical properties of scattered wave fields. For example, this analysis stated that absorption, which is linked to dispersion, is always present in practice, and absorption tends to transform amplified waves into decaying ones inside this medium (DNG).
A third analysis of Pendry's perfect lens concept, published in 2003, used the recent demonstration of negative refraction at microwave frequencies as confirming the viability of the fundamental concept of the perfect lens. In addition, this demonstration was thought to be experimental evidence that a planar DNG metamaterial would refocus the far field radiation of a point source. However, the perfect lens would require significantly different values for permittivity, permeability, and spatial periodicity than the demonstrated negative refractive sample.
This study agrees that any deviation from conditions where ε=μ=−1 results in the normal, conventional, imperfect image that degrades exponentially i.e., the diffraction limit. The perfect lens solution in the absence of losses is again, not practical, and can lead to paradoxical interpretations.
It was determined that although resonant surface plasmons are undesirable for imaging, these turn out to be essential for recovery of decaying evanescent waves. This analysis discovered that metamaterial periodicity has a significant effect on the recovery of types of evanescent components. In addition, achieving subwavelength resolution is possible with current technologies. Negative refractive indices have been demonstrated in structured metamaterials. Such materials can be engineered to have tunable material parameters, and so achieve the optimal conditions. Losses up to microwave frequencies can be minimized in structures utilizing superconducting elements. Furthermore, consideration of alternate structures may lead to configurations of left-handed materials that can achieve subwavelength focusing. Such structures were being studied at the time.
An effective approach for the compensation of losses in metamaterials, called the plasmon injection scheme, has recently been proposed. The plasmon injection scheme has been applied theoretically to imperfect negative-index flat lenses with reasonable material losses and in the presence of noise, as well as to hyperlenses. It has been shown that even imperfect negative-index flat lenses assisted by the plasmon injection scheme can enable subdiffraction imaging of objects, which is otherwise not possible because of the losses and noise. Although the plasmon injection scheme was originally conceptualized for plasmonic metamaterials, the concept is general and applicable to all types of electromagnetic modes. The main idea of the scheme is the coherent superposition of the lossy modes in the metamaterial with an appropriately structured external auxiliary field. This auxiliary field accounts for the losses in the metamaterial, hence effectively reducing the losses experienced by the signal beam or object field in the case of a metamaterial lens. The plasmon injection scheme can be implemented either physically or equivalently through a deconvolution post-processing method. However, the physical implementation has been shown to be more effective than deconvolution. Physical construction of convolution and selective amplification of the spatial frequencies within a narrow bandwidth are the keys to the physical implementation of the plasmon injection scheme. This loss-compensation scheme is ideally suited for metamaterial lenses, since it does not require a gain medium, nonlinearity, or any interaction with phonons. Experimental demonstration of the plasmon injection scheme has not yet been shown, partly because the theory is rather new.
Near-field imaging with magnetic wires
Pendry's theoretical lens was designed to focus both propagating waves and the near-field evanescent waves. From the permittivity "ε" and magnetic permeability "μ" an index of refraction "n" is derived. The index of refraction determines how light is bent on traversing from one material to another. In 2003, it was suggested that a metamaterial constructed with alternating, parallel layers of n = −1 materials and n = +1 materials would be a more effective design for a metamaterial lens. It is an effective medium made up of a multi-layer stack, which exhibits birefringence, with n_z = ∞ and n_x = 0. The effective refractive indices are then perpendicular and parallel, respectively.
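These extreme effective indices can be motivated by a simple layered-medium average, given here as a heuristic estimate under the idealized assumption of equal-thickness lossless layers with permittivities ε± = ±1:

$$\varepsilon_\parallel=\tfrac{1}{2}\left(\varepsilon_{+}+\varepsilon_{-}\right)=\tfrac{1}{2}(1-1)=0,\qquad\varepsilon_\perp=\left(\tfrac{1}{2}\varepsilon_{+}^{-1}+\tfrac{1}{2}\varepsilon_{-}^{-1}\right)^{-1}\rightarrow\infty,$$

so the effective response parallel to the layers vanishes while the response perpendicular to them diverges, matching the quoted birefringence n_x = 0 and n_z = ∞.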
Like a conventional lens, the z-direction is along the axis of the roll. The resonant frequency (ω0) – close to 21.3 MHz – is determined by the construction of the roll. Damping is achieved by the inherent resistance of the layers and the lossy part of the permittivity.
Simply put, as the field pattern is transferred from the input to the output face of a slab, so the image information is transported across each layer. This was experimentally demonstrated. To test the two-dimensional imaging performance of the material, an antenna was constructed from a pair of anti-parallel wires in the shape of the letter M. This generated a line of magnetic flux, so providing a characteristic field pattern for imaging. It was placed horizontally, and the material, consisting of 271 Swiss rolls tuned to 21.5 MHz, was positioned on top of it. The material does indeed act as an image transfer device for the magnetic field. The shape of the antenna is faithfully reproduced in the output plane, both in the distribution of the peak intensity, and in the "valleys" that bound the M.
A consistent characteristic of the very near (evanescent) field is that the electric and magnetic fields are largely decoupled. This allows for nearly independent manipulation of the electric field with the permittivity and the magnetic field with the permeability.
Furthermore, this is a highly anisotropic system. Therefore, the transverse (perpendicular) components of the EM field that illuminate the material, that is, the wavevector components kx and ky, are decoupled from the longitudinal component kz. So the field pattern should be transferred from the input to the output face of a slab of material without degradation of the image information.
Optical superlens with silver metamaterial
In 2003, a group of researchers showed that optical evanescent waves would be enhanced as they passed through a silver metamaterial lens. This was referred to as a diffraction-free lens. Although a coherent, high-resolution, image was not intended, nor achieved, regeneration of the evanescent field was experimentally demonstrated.
By 2003 it had been known for decades that evanescent waves could be enhanced by producing excited states at the interface surfaces. However, the use of surface plasmons to reconstruct evanescent components was not attempted until Pendry's proposal (see "Perfect lens" above). By studying films of varying thickness, it was noted that a rapidly growing transmission coefficient occurs under the appropriate conditions. This demonstration provided direct evidence that the foundation of superlensing is solid, and suggested the path that would enable the observation of superlensing at optical wavelengths.
In 2005, a coherent, high-resolution image was produced (based on the 2003 results). A thinner slab of silver (35 nm) was better for sub-diffraction-limited imaging, yielding a resolution of one-sixth of the illumination wavelength. This type of lens was used to compensate for wave decay and reconstruct images in the near field. Prior attempts to create a working superlens had used a slab of silver that was too thick.
Objects were imaged as small as 40 nm across. In 2005 the imaging resolution limit for optical microscopes was at about one tenth the diameter of a red blood cell. With the silver superlens this results in a resolution of one hundredth of the diameter of a red blood cell.
Conventional lenses, whether man-made or natural, create images by capturing the propagating light waves all objects emit and then bending them. The angle of the bend is determined by the index of refraction and has always been positive until the fabrication of artificial negative index materials. Objects also emit evanescent waves that carry details of the object, but are unobtainable with conventional optics. Such evanescent waves decay exponentially and thus never become part of the image resolution, an optics threshold known as the diffraction limit. Breaking this diffraction limit, and capturing evanescent waves are critical to the creation of a 100-percent perfect representation of an object.
In addition, conventional optical materials suffer a diffraction limit because only the propagating components are transmitted (by the optical material) from a light source. The non-propagating components, the evanescent waves, are not transmitted. Moreover, lenses that improve image resolution by increasing the index of refraction are limited by the availability of high-index materials, and point by point subwavelength imaging of electron microscopy also has limitations when compared to the potential of a working superlens. Scanning electron and atomic force microscopes are now used to capture detail down to a few nanometers. However, such microscopes create images by scanning objects point by point, which means they are typically limited to non-living samples, and image capture times can take up to several minutes.
With current optical microscopes, scientists can only make out relatively large structures within a cell, such as its nucleus and mitochondria. With a superlens, optical microscopes could one day reveal the movements of individual proteins traveling along the microtubules that make up a cell's skeleton, the researchers said. Optical microscopes can capture an entire frame with a single snapshot in a fraction of a second. With superlenses this opens up nanoscale imaging to living materials, which can help biologists better understand cell structure and function in real time.
Advances in magnetic coupling in the THz and infrared regimes provided the realization of a possible metamaterial superlens. However, in the near field, the electric and magnetic responses of materials are decoupled. Therefore, for transverse magnetic (TM) waves, only the permittivity needs to be considered. Noble metals then become natural selections for superlensing, because negative permittivity is easily achieved.
By designing the thin metal slab so that the surface current oscillations (the surface plasmons) match the evanescent waves from the object, the superlens is able to substantially enhance the amplitude of the field. Superlensing results from the enhancement of evanescent waves by surface plasmons.
The key to the superlens is its ability to significantly enhance and recover the evanescent waves that carry information at very small scales. This enables imaging well below the diffraction limit. No lens is yet able to completely reconstitute all the evanescent waves emitted by an object, so the goal of a 100-percent perfect image will persist. However, many scientists believe that a true perfect lens is not possible because there will always be some energy absorption loss as the waves pass through any known material. In comparison, the superlens image is substantially better than the one created without the silver superlens.
50-nm flat silver layer
In February 2004, an electromagnetic radiation focusing system, based on a negative-index metamaterial plate, accomplished subwavelength imaging in the microwave domain. This showed that obtaining separated images at much less than the wavelength of light is possible. Also, in 2004, a silver layer was used for sub-micrometre near-field imaging. Super-high resolution was not achieved, nor was it the aim: the silver layer was too thick to allow significant enhancement of evanescent field components.
In early 2005, feature resolution was achieved with a different silver layer. Though this was not an actual image, it was intended as a resolution test: dense feature resolution down to 250 nm was produced in a 50 nm thick photoresist using illumination from a mercury lamp. Using simulations (FDTD), the study noted that resolution improvements could be expected for imaging through silver lenses, rather than by other methods of near-field imaging.
Building on this prior research, super resolution was achieved at optical frequencies using a 50 nm flat silver layer. The capability of resolving an image beyond the diffraction limit, for far-field imaging, is defined here as superresolution.
The image fidelity is much improved over the earlier results of the previous experimental lens stack. Imaging of sub-micrometre features has been greatly improved by using thinner silver and spacer layers, and by reducing the surface roughness of the lens stack. The ability of the silver lenses to image gratings has been used as the ultimate resolution test, as there is a concrete limit on the ability of a conventional (far-field) lens to image a periodic object – in this case the object is a diffraction grating. For normal-incidence illumination the minimum spatial period that can be resolved with wavelength λ through a medium with refractive index n is λ/n. Zero contrast would therefore be expected in any (conventional) far-field image below this limit, no matter how good the imaging resist might be.
The (super)lens stack here yields a computed diffraction-limited resolution of 243 nm. Gratings with periods from 500 nm down to 170 nm were imaged, with the depth of the modulation in the resist reducing as the grating period reduces. All of the gratings with periods above the diffraction limit (243 nm) are well resolved. The key results of this experiment are the sub-diffraction super-imaging of the 200 nm and 170 nm periods. In both cases the gratings are resolved, even though the contrast is diminished, giving experimental confirmation of Pendry's superlensing proposal.
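As a consistency check on the 243 nm figure, assume mercury i-line illumination at λ = 365 nm and a resist refractive index of about n ≈ 1.5 (both values are assumptions here, not stated in this section):

$$\frac{\lambda}{n}\approx\frac{365\ \text{nm}}{1.5}\approx 243\ \text{nm},$$

which places the 200 nm and 170 nm period gratings below the reach of any conventional far-field image.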
Negative index GRIN lenses
Gradient Index (GRIN) – The larger range of material response available in metamaterials should lead to improved GRIN lens design. In particular, since the permittivity and permeability of a metamaterial can be adjusted independently, metamaterial GRIN lenses can presumably be better matched to free space. The GRIN lens is constructed by using a slab of NIM with a variable index of refraction in the y direction, perpendicular to the direction of propagation z.
Far-field superlens
In 2005, a group proposed a theoretical way to overcome the near-field limitation using a new device termed a far-field superlens (FSL), which is a properly designed periodically corrugated metallic slab-based superlens.
Imaging was experimentally demonstrated in the far field, taking the next step after near-field experiments. The key element is termed as a far-field superlens (FSL) which consists of a conventional superlens and a nanoscale coupler.
Focusing beyond the diffraction limit with far-field time reversal
An approach is presented for subwavelength focusing of microwaves using both a time-reversal mirror placed in the far field and a random distribution of scatterers placed in the near field of the focusing point.
Hyperlens
Once capability for near-field imaging was demonstrated, the next step was to project a near-field image into the far-field. This concept, including technique and materials, is dubbed "hyperlens".
In May 2012, calculations showed an ultraviolet (1200–1400 THz) hyperlens can be created using alternating layers of boron nitride and graphene.
In February 2018, a mid-infrared (~5–25 μm) hyperlens was introduced, made from a variably doped indium arsenide multilayer, which offered drastically lower losses.
The capability of a metamaterial-hyperlens for sub-diffraction-limited imaging is shown below.
Sub-diffraction imaging in the far field
With conventional optical lenses, the far field is too distant for evanescent waves to arrive intact; when imaging an object, this limits the optical resolution of lenses to the order of the wavelength of light. These non-propagating waves carry detailed information at high spatial frequencies, and recovering them would overcome this limitation. Therefore, projecting image details normally lost to diffraction into the far field requires recovery of the evanescent waves.
In essence, the step leading up to this investigation and demonstration was the employment of an anisotropic metamaterial with a hyperbolic dispersion. The effect was such that ordinarily evanescent waves propagate along the radial direction of the layered metamaterial. On a microscopic level, the large-spatial-frequency waves propagate through coupled surface plasmon excitations between the metallic layers.
In 2007, just such an anisotropic metamaterial was employed as a magnifying optical hyperlens. The hyperlens consisted of a curved periodic stack of thin silver and alumina layers (35 nanometers thick) deposited on a half-cylindrical cavity, and fabricated on a quartz substrate. The radial and tangential permittivities have different signs.
Upon illumination, the scattered evanescent field from the object enters the anisotropic medium and propagates along the radial direction. Combined with another effect of the metamaterial, a magnified image at the outer diffraction limit-boundary of the hyperlens occurs. Once the magnified feature is larger than (beyond) the diffraction limit, it can then be imaged with a conventional optical microscope, thus demonstrating magnification and projection of a sub-diffraction-limited image into the far field.
The hyperlens magnifies the object by transforming the scattered evanescent waves into propagating waves in the anisotropic medium, projecting a high-resolution image into the far field. This type of metamaterials-based lens, paired with a conventional optical lens, is therefore able to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterials, the microscope showed only one thick line.
In a control experiment, the line pair object was imaged without the hyperlens. The line pair could not be resolved because the diffraction limit of the (optical) aperture was 260 nm. Because the hyperlens supports the propagation of a very broad spectrum of wave vectors, it can magnify arbitrary objects with sub-diffraction-limited resolution.
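The control experiment gives enough numbers for a back-of-the-envelope check: to lift the 150 nm line spacing above the 260 nm aperture limit, the hyperlens must magnify by at least 260/150 ≈ 1.73×. The sketch below works through this arithmetic; the radius values are illustrative placeholders, since in a cylindrical hyperlens the magnification is commonly approximated by the ratio of outer to inner radius, a standard geometric result not stated in this section.

```python
# Back-of-the-envelope check of the hyperlens control experiment.
object_spacing_nm = 150.0     # line spacing at the object (from the text)
diffraction_limit_nm = 260.0  # aperture-limited resolution (from the text)

# Minimum magnification needed before a conventional microscope can
# resolve the magnified line pair.
min_magnification = diffraction_limit_nm / object_spacing_nm
print(f"Required magnification: >= {min_magnification:.2f}x")  # ~1.73x

# For a cylindrical hyperlens the magnification is roughly the ratio of
# outer to inner radius (standard geometric result; the radii below are
# illustrative placeholders, not the experimental values).
r_inner_nm, r_outer_nm = 200.0, 600.0
magnification = r_outer_nm / r_inner_nm
image_spacing_nm = object_spacing_nm * magnification
resolvable = image_spacing_nm > diffraction_limit_nm
print(f"Image-plane spacing: {image_spacing_nm:.0f} nm (resolvable: {resolvable})")
```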
Although this work appears to be limited by being only a cylindrical hyperlens, the next step is to design a spherical lens. That lens will exhibit three-dimensional capability. Near-field optical microscopy uses a tip to scan an object. In contrast, this optical hyperlens magnifies an image that is sub-diffraction-limited. The magnified sub-diffraction image is then projected into the far field.
The optical hyperlens shows a notable potential for applications, such as real-time biomolecular imaging and nanolithography. Such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips. The hyperlens also has applications for DVD technology.
In 2010, a spherical hyperlens for two-dimensional imaging at visible frequencies was demonstrated experimentally. The spherical hyperlens was based on silver and titanium oxide in alternating layers and had strong anisotropic hyperbolic dispersion allowing super-resolution in the visible spectrum. The resolution was 160 nm in the visible spectrum. It will enable biological imaging at the cellular and DNA level, with the strong benefit of projecting sub-diffraction resolution into the far field.
Plasmon-assisted microscopy
Super-imaging in the visible frequency range
In 2007, researchers demonstrated super-resolution imaging using materials that create a negative refractive index, with lensing achieved in the visible range.
Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology. Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit which is on the order of 200 nanometers (wavelength). This means that viruses, proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit will still be unavailable.
Another approach to achieving super-resolution at visible wavelengths is the recently developed spherical hyperlens based on alternating layers of silver and titanium oxide. It has strong anisotropic hyperbolic dispersion, achieving super-resolution by converting evanescent waves into propagating waves. This method is non-fluorescence-based super-resolution imaging, which yields real-time imaging without any reconstruction of images or information.
Super resolution far-field microscopy techniques
By 2008, the diffraction limit had been surpassed and lateral imaging resolutions of 20 to 50 nm had been achieved by several "super-resolution" far-field microscopy techniques, including stimulated emission depletion (STED) and its related RESOLFT (reversible saturable optically linear fluorescent transitions) microscopy; saturated structured illumination microscopy (SSIM); stochastic optical reconstruction microscopy (STORM); photoactivated localization microscopy (PALM); and other methods using similar principles.
Cylindrical superlens via coordinate transformation
This began with a proposal by Pendry in 2003. Magnifying the image required a new design concept in which the surface of the negatively refracting lens is curved. One cylinder touches another cylinder, resulting in a curved cylindrical lens which reproduces the contents of the smaller cylinder in magnified but undistorted form outside the larger cylinder. Coordinate transformations are required to curve the original perfect lens into the cylindrical lens structure.
This was followed by a 36-page conceptual and mathematical proof in 2005, that the cylindrical superlens works in the quasistatic regime. The debate over the perfect lens is discussed first.
In 2007, a superlens utilizing coordinate transformation was again the subject. However, in addition to image transfer other useful operations were discussed; translation, rotation, mirroring and inversion as well as the superlens effect. Furthermore, elements that perform magnification are described, which are free from geometric aberrations, on both the input and output sides while utilizing free space sourcing (rather than waveguide). These magnifying elements also operate in the near and far field, transferring the image from near field to far field.
The cylindrical magnifying superlens was experimentally demonstrated in 2007 by two groups, Liu et al. and Smolyaninov et al.
Nano-optics with metamaterials
Nanohole array as a lens
Work in 2007 demonstrated that a quasi-periodic array of nanoholes in a metal screen was able to focus the optical energy of a plane wave to form subwavelength spots (hot spots). The distance to the spots was a few tens of wavelengths on the far side of the array, in other words, opposite the side of the incident plane wave. The quasi-periodic array of nanoholes functioned as a light concentrator.
In June 2008, this was followed by the demonstrated capability of an array of quasi-crystal nanoholes in a metal screen. Rather than merely concentrating hot spots, this array displayed an image of the point source a few tens of wavelengths from the array, on the other side of the array (the image plane). This type of array also exhibited a 1-to-1 linear displacement – from the location of the point source to its respective, parallel, location on the image plane; in other words, from x to x + δx. For example, other point sources were similarly displaced from x′ to x′ + δx′, from x″ to x″ + δx″, and so on. Instead of functioning as a light concentrator, this performs the function of conventional lens imaging with a 1-to-1 correspondence, albeit with a point source.
However, resolution of more complicated structures can be achieved as constructions of multiple point sources. The fine details, and brighter image, that are normally associated with the high numerical apertures of conventional lenses can be reliably produced. Notable applications for this technology arise when conventional optics is not suitable for the task at hand. For example, this technology is better suited for X-ray imaging, or nano-optical circuits, and so forth.
Nanolens
In 2010, a nano-wire array prototype, described as a three-dimensional (3D) metamaterial-nanolens and consisting of bulk nanowires deposited in a dielectric substrate, was fabricated and tested.
The metamaterial nanolens was constructed of millions of nanowires, each 20 nanometers in diameter. These were precisely aligned in a packed configuration. The lens is able to depict a clear, high-resolution image of nano-sized objects because it uses both normally propagating EM radiation and evanescent waves to construct the image. Super-resolution imaging was demonstrated over a distance of 6 times the wavelength (λ), in the far field, with a resolution of at least λ/4. This is a significant improvement over previous research and demonstrations of other near-field and far-field imaging, including the nanohole arrays discussed above.
Light transmission properties of holey metal films
2009–12. The light transmission properties of holey metal films in the metamaterial limit, where the unit length of the periodic structures is much smaller than the operating wavelength, are analyzed theoretically.
Transporting an image through a subwavelength hole
Theoretically it appears possible to transport a complex electromagnetic image through a tiny subwavelength hole with diameter considerably smaller than the diameter of the image, without losing the subwavelength details.
Nanoparticle imaging – quantum dots
When observing the complex processes in a living cell, significant processes (changes) or details are easy to overlook. This can more easily occur when watching changes that take a long time to unfold and require high-spatial-resolution imaging. However, recent research offers a solution to scrutinize activities that occur over hours or even days inside cells, potentially solving many of the mysteries associated with molecular-scale events occurring in these tiny organisms.
A joint research team, working at the National Institute of Standards and Technology (NIST) and the National Institute of Allergy and Infectious Diseases (NIAID), has discovered a method of using nanoparticles to illuminate the cellular interior to reveal these slow processes. Nanoparticles, thousands of times smaller than a cell, have a variety of applications. One type of nanoparticle called a quantum dot glows when exposed to light. These semiconductor particles can be coated with organic materials, which are tailored to be attracted to specific proteins within the part of a cell a scientist wishes to examine.
Notably, quantum dots last longer than many organic dyes and fluorescent proteins that were previously used to illuminate the interiors of cells. They also have the advantage of monitoring changes in cellular processes, whereas most high-resolution techniques, like electron microscopy, only provide images of cellular processes frozen at one moment. Using quantum dots, cellular processes involving the dynamic motions of proteins become observable.
The research focused primarily on characterizing quantum dot properties, contrasting them with other imaging techniques. In one example, quantum dots were designed to target a specific type of human red blood cell protein that forms part of a network structure in the cell's inner membrane. When these proteins cluster together in a healthy cell, the network provides mechanical flexibility to the cell so it can squeeze through narrow capillaries and other tight spaces. But when the cell gets infected with the malaria parasite, the structure of the network protein changes.
Because the clustering mechanism is not well understood, it was decided to examine it with the quantum dots. If a technique could be developed to visualize the clustering, then the progress of a malaria infection, which has several distinct developmental stages, could be understood.
Research efforts revealed that as the membrane proteins bunch up, the quantum dots attached to them are induced to cluster themselves and glow more brightly, permitting real time observation as the clustering of proteins progresses. More broadly, the research discovered that when quantum dots attach themselves to other nanomaterials, the dots' optical properties change in unique ways in each case. Furthermore, evidence was discovered that quantum dot optical properties are altered as the nanoscale environment changes, offering greater possibility of using quantum dots to sense the local biochemical environment inside cells.
Some concerns remain over toxicity and other properties. However, the overall findings indicate that quantum dots could be a valuable tool to investigate dynamic cellular processes.
The abstract from the related published research paper states (in part): Results are presented regarding the dynamic fluorescence properties of bioconjugated nanocrystals or quantum dots (QDs) in different chemical and physical environments. A variety of QD samples was prepared and compared: isolated individual QDs, QD aggregates, and QDs conjugated to other nanoscale materials...
See also
Acoustic metamaterials
History of metamaterials
Metamaterial absorber
Metamaterial antennas
Metamaterial cloaking
Negative index metamaterials
Nonlinear metamaterials
Photonic crystal
Photonic metamaterials
Plasmonic metamaterials
Seismic metamaterials
Split-ring resonator
Terahertz metamaterials
Theories of cloaking
Transformation optics
Tunable metamaterials
Plasmonic lens
Academic journals
Metamaterials (journal)
Metamaterials books
Metamaterials Handbook
Metamaterials: Physics and Engineering Explorations
Metamaterials scientists
Nader Engheta
Ulf Leonhardt
John Pendry
Vladimir Shalaev
David R. Smith
Sergei Tretyakov (scientist)
Richard W. Ziolkowski
References
External links
"The Quest for the Superlens" by John B. Pendry and David R. Smith. Scientific American. July 2006. PDF Imperial College.
Subwavelength imaging
Professor Sir John Pendry at MIT – "The Perfect Lens: Resolution Beyond the Limits of Wavelength"
"Surface plasmon subwavelength optics" 2009-12-05
"Superlenses to overcome the diffraction limit "
"Breaking the diffracion limit " Overview of superlens theory
"Flat Superlens Simulation" EM Talk
"Superlens microscope gets up close "
"Superlens breakthrough "
"Superlens breaks optical barrier "
"Materials with negative index of refraction" by V.A. Podolskiy
"Optimizing the superlens: Manipulating geometry to enhance the resolution" by V.A. Podolskiy and Nicholas A. Kuhta
"Now you see it, now you don't: cloaking device is not just sci-fi"
"Initial page describes first demonstration of negative refraction in a natural material"
"Negative-index materials made easy "
"Simple 'superlens' sharpens focusing power" – A lens able to focus 10 times more intensely than any conventional design could significantly enhance wireless power transmission and photolithography (New Scientist, 24 April 2008)
"Far-Field Optical Nanoscopy" by Stefan W. Hell. Vol. 316. Science. 25 May 2007
"Ultraviolet dielectric hyperlens with layered graphene and boron nitride", 22 May 2012
Lenses
Metamaterials
2000 in science
21st century in science | Superlens | [
"Materials_science",
"Engineering"
] | 11,176 | [
"Metamaterials",
"Materials science"
] |
17,474,234 | https://en.wikipedia.org/wiki/Chromium%28III%29%20sulfide | Chromium(III) sulfide is the inorganic compound with the formula Cr2S3. It is a brown-black solid. Chromium sulfides are usually nonstoichiometric compounds, with formulas ranging from CrS to Cr0.67S (corresponding to Cr2S3).
Preparation
Chromium(III) sulfide can be prepared through the reaction of a stoichiometric mixture of the elements at 1000 °C:
2 Cr + 3 S → Cr2S3
It is a solid that is insoluble in water. According to X-ray crystallography, its structure is a combination of that of nickel arsenide (1:1 stoichiometry) and Cd(OH)2 (1:2 stoichiometry). Some metal-metal bonding is indicated by the short Cr-Cr distance of 2.78 Å.
See also
Brezinaite, a mineral with the formula Cr3S4
References
Chromium(III) compounds
Sesquisulfides | Chromium(III) sulfide | [
"Chemistry"
] | 204 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
17,474,607 | https://en.wikipedia.org/wiki/Resource%20Description%20and%20Access | Resource Description and Access (RDA) is a standard for descriptive cataloging initially released in June 2010, providing instructions and guidelines on formulating bibliographic data. Intended for use by libraries and other cultural organizations such as museums and archives, RDA is the successor to Anglo-American Cataloguing Rules, Second Edition (AACR2).
Background
RDA emerged from the International Conference on the Principles & Future Development of AACR held in Toronto in 1997. It is published jointly by the American Library Association, the Canadian Federation of Library Associations, and the Chartered Institute of Library and Information Professionals (CILIP) in the United Kingdom. Maintenance of RDA is the responsibility of the RDA Steering Committee (RSC). As of 2015, RSC is undergoing a transition to an international governance structure, expected to be in place in 2019.
RDA instructions and guidelines are available through RDA Toolkit, an online subscription service, and in a print format.
RDA training materials and texts are available online and in print.
Features
RDA is a package of data elements, guidelines, and instructions for creating library and cultural heritage resource metadata that are well-formed according to international models for user-focused linked data applications. The underlying conceptual models for RDA are the Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and Functional Requirements for Subject Authority Data (FRSAD) maintained by IFLA, and will be compliant with the Library Reference Model, the IFLA standard that consolidates them.
RDA Vocabularies
RDA Vocabularies is a representation of the RDA entities, elements, relationship designators, and controlled terms in RDF (Resource Description Framework). The Vocabularies are intended to support linked data applications using RDA. They are maintained in the Open Metadata Registry, a metadata registry, and released via GitHub and the RDA Registry.
The human-readable labels, definitions, and other textual annotations in the Vocabularies are known as RDA Reference. The RDA Reference data are used in the production of RDA Toolkit content.
The RDA Vocabularies and RDA Reference are available under an open license.
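Because the Vocabularies are published as RDF, they can be consumed with any standard RDF toolkit. The sketch below uses Python's rdflib to attach an RDA-style element to a single work; the namespace, class, and property IRIs are illustrative placeholders only, and real identifiers should be taken from the RDA Registry itself.

```python
# Minimal sketch of using RDA-style RDF vocabulary data with rdflib.
# The namespace, class, and property IRIs below are illustrative
# placeholders; real identifiers come from the RDA Registry.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

RDAW = Namespace("http://example.org/rda/Elements/w/")  # placeholder namespace

g = Graph()
work = URIRef("http://example.org/works/1")

# Type the resource as an RDA-style Work and attach a title element.
g.add((work, RDF.type, RDAW.Work))                      # placeholder class
g.add((work, RDAW.titleOfTheWork, Literal("Example")))  # placeholder property

print(g.serialize(format="turtle"))
```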
Internationalization
RDA is in step with the Statement of International Cataloguing Principles published by IFLA in 2009, and updated in 2016.
The Committee of Principals for RDA, now the RDA Board, announced its commitment to internationalization of RDA in 2015. This is reflected in the new governance structure with representation based on the United Nations Regional Groups, comprising, Africa, Asia, Europe, Latin America and the Caribbean, North America, and Oceania.
As of May 2017, the RDA Toolkit has been translated from English into Catalan, Chinese, Finnish, French, German, Italian, and Spanish. RDA Reference is currently being translated into these languages as well as others including Arabic, Danish, Dutch, Greek, Hebrew, Swedish, and Vietnamese.
Adoption of RDA
In March 2012 the Library of Congress announced that it would fully implement RDA cataloging by the end of March 2013. Library and Archives Canada fully implemented the standard in September 2013. The British Library, the National Library of Australia, the Deutsche Nationalbibliothek, and other national libraries have since implemented RDA.
Opposition
In the United States, the cataloguing community expressed reservations about the new standard in regard to both the business case for RDA in a depressed economy and the value of the standard's stated goals. Michael Gorman, one of the authors of AACR2, was particularly vocal in expression of his opposition to the new guidelines, claiming that RDA was poorly written and organized, and that the plan for RDA unnecessarily abandoned established cataloging practices. Others felt that RDA was too rooted in past practices and therefore was not a vision for the future. In response to these concerns, the three United States national libraries (Library of Congress, National Library of Medicine, and the National Agricultural Library) organized a nationwide test of the new standard.
On 13 June 2011, the Library of Congress, the National Agricultural Library, and the National Library of Medicine released the results of their testing. The test found that RDA to some degree met most of the goals that the JSC (Joint Steering Committee for Development of RDA) put forth for the new code, but failed to meet a few of those goals. The Coordinating Committee admitted that they "wrestled with articulating a business case for implementing RDA"; nevertheless, the report recommended that RDA be adopted by the three national libraries, contingent on several improvements being made. The earliest possible date for implementation was given as January 2013, as the consensus emerging from the analysis of the test data showed that while there were discernible benefits to implementing RDA, these benefits would not be realized without further changes to current cataloging practices, including developing a successor to the MARC format.
Several other institutions were involved in the RDA test. Many of these institutions documented their findings in a special issue of Cataloging & Classification Quarterly.
See also
International Standard Bibliographic Description (ISBD)
Bibliographic Framework Initiative (BIBFRAME)
Anglo-American Cataloguing Rules (AACR)
Functional Requirements for Bibliographic Records (FRBR)
Functional Requirements for Authority Data (FRAD)
Functional Requirements for Subject Authority Data (FRSAD)
International Cataloguing Principles (ICP)
MAchine-Readable Cataloging (MARC)
Regeln für die alphabetische Katalogisierung (RAK)
Dublin Core
Library Reference Model
References
External links
Official website of the RDA Steering Committee
RDA Toolkit
European RDA Interest Group (EURIG)
IFLA – Statement of International Cataloguing Principles
Computer-related introductions in 2010
Library cataloging and classification
Metadata | Resource Description and Access | [
"Technology"
] | 1,171 | [
"Metadata",
"Data"
] |
17,475,077 | https://en.wikipedia.org/wiki/Edgar%20Bright%20Wilson | Edgar Bright Wilson Jr. (December 18, 1908 – July 12, 1992) was an American chemist.
Wilson was a prominent and accomplished chemist and teacher, recipient of the National Medal of Science in 1975, Guggenheim Fellowships in 1949 and 1970, the Elliott Cresson Medal in 1982, and a number of honorary doctorates. He was a member of the American Academy of Arts and Sciences, the American Philosophical Society, and the United States National Academy of Sciences. He was also the Theodore William Richards Professor of Chemistry, Emeritus at Harvard University. One of his sons, Kenneth G. Wilson, was awarded the Nobel Prize in physics in 1982.
E. B. Wilson was a student and protégé of Nobel laureate Linus Pauling and was a coauthor with Pauling of Introduction to Quantum Mechanics, a graduate level textbook in Quantum Mechanics. Wilson was also the thesis advisor of Nobel laureate Dudley Herschbach. Wilson was elected to the first class of the Harvard Society of Fellows.
Wilson made major contributions to the field of molecular spectroscopy. He developed the first rigorous quantum mechanical Hamiltonian in internal coordinates for a polyatomic molecule. He developed the theory of how rotational spectra are influenced by centrifugal distortion during rotation. He pioneered the use of group theory for the analysis and simplification of normal mode analysis, particularly for high-symmetry molecules such as benzene. In 1955, Wilson published Molecular Vibrations along with J.C. Decius and Paul C. Cross. Following the Second World War, Wilson was a pioneer in the application of microwave spectroscopy to the determination of molecular structure. Wilson wrote an influential introductory text, Introduction to Scientific Research, which provided an introduction to all the steps of scientific research, from defining a problem through the archival of data after publication.
Starting in 1997, the American Chemical Society has annually awarded the E. Bright Wilson Award in Spectroscopy, named in honor of Wilson.
Scientific career
Bright started his higher education at Princeton in 1926, where he received his bachelor's and master's degrees in 1930 and 1931, respectively. He then went to the California Institute of Technology, where he worked with Linus Pauling on crystal structure determinations and finished his PhD. During this time, he also wrote a textbook with Pauling, called Introduction to Quantum Mechanics, which was published in 1935. This textbook was still in print in the year 2000, some 65 years after its initial publication.
In 1934, Bright was elected to the Society of Fellows at Harvard for his work done at the California Institute of Technology. His election gave him a 3-year junior fellowship at Harvard, during which he studied molecular motion and symmetry analysis. In 1936, the Harvard Chemistry department appointed Bright as an assistant professor during the third year of his fellowship. He taught courses in chemistry and quantum mechanics and was promoted to associate professor with tenure after three years. From 1934 to 1941, Bright, along with Harold Gershinowitz, constructed an automatic infrared spectrometer which was used to measure vibrational absorption spectra of various molecules.
After the start of World War II, Bright began research on explosives with the National Defense Research Committee (NDRC), where he studied shock waves in water. In 1942, an Underwater Explosives Research Laboratory (UERL) was opened at the Woods Hole Oceanographic Institution, which Bright led. The US Navy, exasperated by the continual harassment of Allied shipping vessels by Nazi U-boats, had a strong interest in the UERL and its research with depth charges and other anti-submarine weapons. To facilitate this research, the laboratory acquired an old fishing vessel, the Reliance, which was fitted to record electronic signals from pressure sensors deep underwater.
After the end of the war Bright returned to Harvard. In 1947 Bright and Richard Hughes invented and built a Stark-effect microwave spectrometer which could measure different radio waves and became an important tool in spectroscopy. From 1949 to 1950, Bright took a sabbatical in Oxford during which he mainly worked on his book Introduction to Scientific Research which was published in 1952.
In 1952–1953, during the Korean War, Bright became the research director and deputy director of the Weapons Systems Evaluation Group (WSEG), where he stayed for only 18 months. He later accepted further assignments in Washington in the mid-1960s, during the Vietnam War.
In 1955, Bright published the book Molecular Vibrations along with co-authors J.C. Decius and P.C. Cross, which discussed infrared and Raman spectra of polyatomic molecules. Also in 1955, Bright studied the internal rotation of single bonds in molecules using microwave spectroscopy. In 1965, Bright studied rotational energy transfer in inelastic molecular collisions. In 1970, Bright began to study hydrogen bonding and the structure of hydrogen bonds using low-resolution microwave spectroscopy.
In 1979, Bright retired and was named an emeritus professor. The E. Bright Wilson Award in Spectroscopy was established in 1994 by the American Chemical Society.
Personal life
Wilson was born in Gallatin, Tennessee to mother Alma Lackey and father Edgar Bright Wilson, a lawyer. His family soon moved to Yonkers, New York.
He was married to Emily Buckingham from 1935 until her death in 1954. In 1955, he married Therese Bremer, a distinguished photochemist. Wilson had a total of 4 sons and 2 daughters, one of whom was Kenneth Wilson, a Nobel laureate in physics.
In his final years, Wilson suffered from Parkinson's disease. He died on July 12, 1992, in Cambridge, Massachusetts of pneumonia.
References
External links
1908 births
1992 deaths
20th-century American chemists
Theoretical chemists
Harvard University faculty
National Medal of Science laureates
Spectroscopists
Members of the American Philosophical Society | Edgar Bright Wilson | [
"Chemistry"
] | 1,133 | [
"Theoretical chemists",
"American theoretical chemists"
] |
17,476,149 | https://en.wikipedia.org/wiki/Emotional%20self-regulation | The self-regulation of emotion or emotion regulation is the ability to respond to the ongoing demands of experience with the range of emotions in a manner that is socially tolerable and sufficiently flexible to permit spontaneous reactions as well as the ability to delay spontaneous reactions as needed. It can also be defined as extrinsic and intrinsic processes responsible for monitoring, evaluating, and modifying emotional reactions. The self-regulation of emotion belongs to the broader set of emotion regulation processes, which includes both the regulation of one's own feelings and the regulation of other people's feelings.
Emotion regulation is a complex process that involves initiating, inhibiting, or modulating one's state or behavior in a given situation — for example, the subjective experience (feelings), cognitive responses (thoughts), emotion-related physiological responses (for example heart rate or hormonal activity), and emotion-related behavior (bodily actions or expressions). Functionally, emotion regulation can also refer to processes such as the tendency to focus one's attention to a task and the ability to suppress inappropriate behavior under instruction. Emotion regulation is a highly significant function in human life.
Every day, people are continually exposed to a wide variety of potentially arousing stimuli. Inappropriate, extreme or unchecked emotional reactions to such stimuli could impede functional fit within society; therefore, people must engage in some form of emotion regulation almost all of the time. Generally speaking, emotion dysregulation has been defined as difficulties in controlling the influence of emotional arousal on the organization and quality of thoughts, actions, and interactions. Individuals who are emotionally dysregulated exhibit patterns of responding in which there is a mismatch between their goals, responses, and/or modes of expression, and the demands of the social environment. For example, there is a significant association between emotion dysregulation and symptoms of depression, anxiety, eating pathology, and substance abuse. Higher levels of emotion regulation are likely to be related to both high levels of social competence and the expression of socially appropriate emotions.
Theory
Process model
The process model of emotion regulation is based upon the modal model of emotion. The modal model of emotion suggests that the emotion generation process occurs in a particular sequence over time. This sequence occurs as follows:
Situation: the sequence begins with a situation (real or imagined) that is emotionally relevant.
Attention: attention is directed towards the emotional situation.
Appraisal: the emotional situation is evaluated and interpreted.
Response: an emotional response is generated, giving rise to loosely coordinated changes in experiential, behavioral, and physiological response systems.
Because an emotional response (4.) can cause changes to a situation (1.), this model involves a feedback loop from (4.) Response to (1.) Situation. This feedback loop suggests that the emotion generation process can occur recursively, is ongoing, and dynamic.
The process model contends that each of these four points in the emotion generation process can be subjected to regulation. From this conceptualization, the process model posits five different families of emotion regulation that correspond to the regulation of a particular point in the emotion generation process. They occur in the following order:
Situation selection
Situation modification
Attentional deployment
Cognitive change
Response modulation
The process model also divides these emotion regulation strategies into two categories: antecedent-focused and response-focused. Antecedent-focused strategies (i.e., situation selection, situation modification, attentional deployment, and cognitive change) occur before an emotional response is fully generated. Response-focused strategies (i.e., response modulation) occur after an emotional response is fully generated.
Strategies
Situation selection
Situation selection is an emotional regulation strategy that involves choosing to avoid or approach a future emotional situation. If a person selects to avoid or disengage from an emotionally relevant situation, they are decreasing the likelihood of experiencing an emotion. Alternatively, if a person selects to approach or engage with an emotionally relevant situation, they are increasing the likelihood of experiencing an emotion.
Typical examples of situation selection may be seen interpersonally, such as when a parent removes his or her child from an emotionally unpleasant situation. Use of situation selection may also be seen in psychopathology. For example, avoidance of social situations to regulate emotions is particularly pronounced for those with social anxiety disorder and avoidant personality disorder.
Effective situation selection is not always an easy task. For instance, humans display difficulties predicting their emotional responses to future events. Therefore, they may have trouble making accurate and appropriate decisions about which emotionally relevant situations to approach or to avoid.
Situation modification
Situation modification involves efforts to modify a situation so as to change its emotional impact. Situation modification refers specifically to altering one's external, physical environment. Altering one's "internal" environment to regulate emotion is called cognitive change. Younger and older adults also appear to regulate their emotions through situation modification in differing ways, reflected in the type of material they choose to consume, ranging from negative to neutral to positive. Livingstone and Isaacowitz conducted research in which they observed younger and older adults viewing different types of material with the option to skip content; older adults were better able to regulate away from negative material, focusing mainly on the positive, whereas younger adults also consumed positive material but modified their situation less. This suggests that as people age, they gain a better command of situation modification and of the ability to emotionally self-regulate.
Examples of situation modification may include injecting humor into a speech to elicit laughter or extending the physical distance between oneself and another person.
Attentional deployment
Attentional deployment involves directing one's attention towards or away from an emotional situation.
Distraction
Distraction, an example of attentional deployment, is an early selection strategy, which involves diverting one's attention away from an emotional stimulus and towards other content. Distraction has been shown to reduce the intensity of painful and emotional experiences, to decrease facial responding and neural activation in the amygdala associated with emotion, as well as to alleviate emotional distress. As opposed to reappraisal, individuals show a relative preference to engage in distraction when facing stimuli of high negative emotional intensity. This is because distraction easily filters out high-intensity emotional content, which would otherwise be relatively difficult to appraise and process.
Rumination
Rumination, an example of attentional deployment, is defined as the passive and repetitive focusing of one's attention on one's symptoms of distress and the causes and consequences of these symptoms. Rumination is generally considered a maladaptive emotion regulation strategy, as it tends to exacerbate emotional distress. It has also been implicated in a host of disorders including major depression.
Worry
Worry, an example of attentional deployment, involves directing attention to thoughts and images concerned with potentially negative events in the future. By focusing on these events, worrying serves to aid in the down-regulation of intense negative emotion and physiological activity. While worry may sometimes involve problem solving, incessant worry is generally considered maladaptive, being a common feature of anxiety disorders, particularly generalized anxiety disorder.
Thought suppression
Thought suppression, an example of attentional deployment, involves efforts to redirect one's attention from specific thoughts and mental images to other content so as to modify one's emotional state. Although thought suppression may provide temporary relief from undesirable thoughts, it may ironically end up spurring the production of even more unwanted thoughts. This strategy is generally considered maladaptive, being most associated with obsessive-compulsive disorder.
Cognitive change
Cognitive change involves changing how one appraises a situation so as to alter its emotional meaning.
Reappraisal
Reappraisal, an example of cognitive change, is a late selection strategy, which involves a change of the meaning of an event that alters its emotional impact. It encompasses different substrategies, such as positive reappraisal (creating and focusing on a positive aspect of the stimulus), decentering (reinterpreting an event by broadening one's perspective to see "the bigger picture"), or fictional reappraisal (adopting or emphasizing the belief that the event is not real, that it is, for instance, "just a movie" or "just my imagination"). Reappraisal has been shown to effectively reduce physiological, subjective, and neural emotional responding. As opposed to distraction, individuals show a relative preference to engage in reappraisal when facing stimuli of low negative emotional intensity because these stimuli are relatively easy to appraise and process.
Reappraisal is generally considered to be an adaptive emotion regulation strategy. Compared to suppression (including both thought suppression and expressive suppression), which is positively correlated with many psychological disorders, reappraisal can be associated with better interpersonal outcomes and can be positively related to well-being. However, some researchers argue that context is important when evaluating the adaptiveness of a strategy, suggesting that in some contexts reappraisal may be maladaptive. Furthermore, some research has shown that reappraisal does not affect physiological responses to recurrent stress.
Distancing
Distancing, an example of cognitive change, involves taking on an independent, third-person perspective when evaluating an emotional event. Distancing has been shown to be an adaptive form of self-reflection, facilitating the emotional processing of negatively valenced stimuli, reducing emotional and cardiovascular reactivity to negative stimuli, and increasing problem-solving behavior.
Humour
Humour, an example of cognitive change, has been shown to be an effective emotion regulation strategy. Specifically, positive, good-natured humour has been shown to effectively up-regulate positive emotion and down-regulate negative emotion. On the other hand, negative, mean-spirited humour is less effective in this regard.
Response modulation
Response modulation involves attempts to directly influence experiential, behavioral, and physiological response systems.
Expressive suppression
Expressive suppression, an example of response modulation, involves inhibiting emotional expressions. It has been shown to effectively reduce facial expressivity, subjective feelings of positive emotion, heart rate, and sympathetic activation. However, the research findings are mixed regarding whether this strategy is effective for down-regulating negative emotion. Research has also shown that expressive suppression may have negative social consequences, correlating with reduced personal connections and greater difficulties forming relationships.
Expressive suppression is generally considered to be a maladaptive emotion regulation strategy. Compared to reappraisal, it is positively correlated with many psychological disorders, associated with worse interpersonal outcomes, is negatively related to well-being, and requires the mobilization of a relatively substantial amount of cognitive resources. However, some researchers argue that context is important when evaluating the adaptiveness of a strategy, suggesting that in some contexts suppression may be adaptive.
Drug use
Drug use, an example of response modulation, can be used to alter emotion-associated physiological responses. For example, alcohol can produce sedative and anxiolytic effects and beta blockers can affect sympathetic activation.
Exercise
Exercise, an example of response modulation, can be used to down-regulate the physiological and experiential effects of negative emotions. Regular physical activity has also been shown to reduce emotional distress and improve emotional control, in part through hormonal regulation.
Sleep
Sleep plays a role in emotion regulation, although stress and worry can also interfere with sleep. Studies have shown that sleep, specifically REM sleep, down-regulates reactivity of the amygdala, a brain structure known to be involved in the processing of emotions, in response to previous emotional experiences. On the flip side, sleep deprivation is associated with greater emotional reactivity or overreaction to negative and stressful stimuli. This is a result of both increased amygdala activity and a disconnect between the amygdala and the prefrontal cortex, which regulates the amygdala through inhibition, together resulting in an overactive emotional brain. Due to the subsequent lack of emotional control, sleep deprivation may be associated with depression, impulsivity, and mood swings. Additionally, there is some evidence that sleep deprivation may reduce emotional reactivity to positive stimuli and events and impair emotion recognition in others.
Borderline Personality Disorder (BPD)
In its extreme form, problems with response modulation are correlated with borderline personality disorder (BPD). BPD is characterized by an enduring instability in regulating emotions, relationships with others, self-image, and behavior. This can lead to self-sabotage, risk-taking, impulsivity, and aggression. Research has indicated that the heightened emotional response can be due to an exaggerated amygdala response and an impaired anterior cingulate cortex, which is responsible for modulating emotions.
In psychotherapy
Emotion regulation strategies are taught, and emotion regulation problems are treated, in a variety of counseling and psychotherapy approaches, including cognitive behavioral therapy (CBT), dialectical behavior therapy (DBT), emotion-focused therapy (EFT), and mindfulness-based cognitive therapy (MBCT).
For example, a relevant mnemonic formulated in DBT is "ABC PLEASE":
Accumulate positive experiences.
Build mastery by being active in activities that make one feel competent and effective to combat helplessness.
Cope ahead, preparing an action plan, researching, and rehearsing (with a skilled helper if necessary).
Physical illness treatment and prevention through checkups.
Low vulnerability to diseases, managed with health care professionals.
Eating healthy.
Avoiding (non-prescribed) mood-altering drugs.
Sleep healthy.
Exercise regularly.
Developmental process
Infancy
Intrinsic emotion regulation efforts during infancy are believed to be guided primarily by innate physiological response systems. These systems usually manifest as an approach towards and an avoidance of pleasant or unpleasant stimuli. At three months, infants can engage in self-soothing behaviors like sucking and can reflexively respond to and signal feelings of distress. For instance, infants have been observed attempting to suppress anger or sadness by knitting their brow or compressing their lips.
Between three and six months, basic motor functioning and attentional mechanisms begin to play a role in emotion regulation, allowing infants to more effectively approach or avoid emotionally relevant situations. Infants may also engage in self-distraction and help-seeking behaviors for regulatory purposes. At one year, infants are able to navigate their surroundings more actively and respond to emotional stimuli with greater flexibility due to improved motor skills. They also begin to appreciate their caregivers' abilities to provide them regulatory support. For instance, infants generally have difficulties regulating fear. As a result, they often find ways to express fear in ways that attract the comfort and attention of caregivers.
Extrinsic emotion regulation efforts by caregivers, including situation selection, modification, and distraction, are particularly important for infants. The emotion regulation strategies employed by caregivers to attenuate distress or to up-regulate positive affect in infants can impact the infants' emotional and behavioral development, teaching them particular strategies and methods of regulation. The type of attachment style between caregiver and infant can therefore play a meaningful role in the regulatory strategies infants may learn to use.
Recent evidence supports the idea that maternal singing has a positive effect on affect regulation in infants. Singing play-songs can have a visible affect-regulatory consequence of prolonged positive affect and even alleviation of distress. In addition to proven facilitation of social bonding, when combined with movement and/or rhythmic touch, maternal singing for affect regulation has possible applications for infants in the NICU and for adult caregivers with serious personality or adjustment difficulties.
Toddler-hood
By the end of the first year, toddlers begin to adopt new strategies to decrease negative arousal. These strategies can include rocking themselves, chewing on objects, or moving away from things that upset them. At two years, toddlers become more capable of actively employing emotion regulation strategies. They can apply certain emotion regulation tactics to influence various emotional states. Additionally, maturation of brain functioning and language and motor skills permits toddlers to manage their emotional responses and levels of arousal more effectively.
Extrinsic emotion regulation remains important to emotional development in toddlerhood. Toddlers can learn ways from their caregivers to control their emotions and behaviors. For example, caregivers help teach self-regulation methods by distracting children from unpleasant events (like a vaccination shot) or helping them understand frightening events.
Childhood
Emotion regulation knowledge becomes more substantial during childhood. For example, children aged six to ten begin to understand display rules. They come to appreciate the contexts in which certain emotional expressions are socially most appropriate and therefore ought to be regulated. For example, children may understand that upon receiving a gift they should display a smile, irrespective of their actual feelings about the gift. During childhood, there is also a trend towards the use of more cognitive emotion regulation strategies, taking the place of more basic distraction, approach, and avoidance tactics.
Regarding the development of emotion dysregulation in children, one robust finding suggests that children who are frequently exposed to negative emotion at home will be more likely to display, and have difficulties regulating, high levels of negative emotion.
Adolescence
Adolescents show a marked increase in their capacities to regulate their emotions, and emotion regulation decision making becomes more complex, depending on multiple factors. In particular, the significance of interpersonal outcomes increases for adolescents. When regulating their emotions, adolescents are therefore likely to take into account their social context. For instance, adolescents show a tendency to display more emotion if they expect a sympathetic response from their peers.
Additionally, spontaneous use of cognitive emotion regulation strategies increases during adolescence, which is evidenced both by self-report data and neural markers.
Adulthood
Social losses increase and health tends to decrease as people age. As people get older their motivation to seek emotional meaning in life through social ties tends to increase. Autonomic responsiveness decreases with age, and emotion regulation skill tends to increase.
Emotional regulation in adulthood can also be examined in terms of positive and negative affectivity. Positive and negative affectivity refers to the types of emotions felt by an individual as well as the way those emotions are expressed. With adulthood comes an increased ability to maintain both high positive affectivity and low negative affectivity “more rapidly than adolescents.” This response to life's challenges seems to become “automatized” as people progress throughout adulthood. Thus, as individuals age, their capability of self-regulating emotions and responding to their emotions in healthy ways improves.
Additionally, emotional regulation may vary between young adults and older adults. Younger adults have been found to be more successful than older adults in practicing “cognitive reappraisal” to decrease negative internal emotions. On the other hand, older adults have been found to be more successful in the following emotional regulation areas:
Predicting the level of “emotional arousal” in possible situations
Having a higher focus on positive information rather than negative
Maintaining healthy levels of “hedonic well-being” (subjective well-being based on increased pleasure and decreased pain)
Overview of perspectives
Neuropsychological perspective
Affective
As people age, their affect (the way they react to emotions) changes, either positively or negatively. Studies show that positive affect increases as a person grows from adolescence to their mid-70s. Negative affect, on the other hand, decreases until the mid-70s. Studies also show that emotions differ in adulthood, particularly affect (positive or negative). Although some studies found that individuals experience less affect as they grow older, other studies have concluded that adults in their middle age experience more positive affect and less negative affect than younger adults. Positive affect was also higher for men than for women, while negative affect was higher for women than for men, and also for single people.
One reason that older people (those in middle adulthood) might have less negative affect is that, having overcome "the trials and vicissitudes of youth, they may increasingly experience a more pleasant balance of affect, at least up until their mid-70s". Positive affect might rise during middle age, but towards the later years of life (the 70s) it begins to decline, while negative affect does the same. This might be due to failing health, reaching the end of life, and the deaths of friends and relatives.
In addition to baseline levels of positive and negative affect, studies have found individual differences in the time-course of emotional responses to stimuli. The temporal dynamics of emotion regulation, also known as affective chronometry, include two key variables in the emotional response process: rise time to peak emotional response, and recovery time to baseline levels of emotion. Studies of affective chronometry typically separate positive and negative affect into distinct categories, as previous research has shown (despite some correlation) the ability of humans to experience changes in these categories independently of one another. Affective chronometry research has been conducted on clinical populations with anxiety, mood, and personality disorders, but is also utilized as a measurement to test the effectiveness of different therapeutic techniques (including mindfulness training) on emotional dysregulation.
Neurological
The development of functional magnetic resonance imaging has allowed for the study of emotion regulation on a biological level. Specifically, research over the last decade strongly suggests that there is a neural basis for emotion regulation. Sufficient evidence has correlated emotion regulation to particular patterns of prefrontal activation. These regions include the orbital prefrontal cortex, the ventromedial prefrontal cortex, and the dorsolateral prefrontal cortex. Two additional brain structures that have been found to contribute are the amygdala and the anterior cingulate cortex. Each of these structures is involved in various facets of emotion regulation, and irregularities in one or more regions and/or interconnections among them are affiliated with failures of emotion regulation. An implication of these findings is that individual differences in prefrontal activation predict the ability to perform various tasks in aspects of emotion regulation.
Sociological
People intuitively mimic facial expressions; it is a fundamental part of healthy functioning. Similarities across cultures in regard to nonverbal communication have prompted the debate that it is in fact a universal language. It can be argued that emotion regulation plays a key role in the ability to generate the correct responses in social situations. Humans have control over facial expressions both consciously and unconsciously: an intrinsic emotion program is generated as the result of a transaction with the world, which immediately results in an emotional response and usually a facial reaction. It is a well documented phenomenon that emotions have an effect on facial expression, but recent research has provided evidence that the opposite may also be true.
This notion would give rise to the belief that a person may not only control their emotions but in fact influence them as well. Emotion regulation focuses on providing the appropriate emotion in the appropriate circumstances. Some theories allude to the thought that each emotion serves a specific purpose in coordinating organismic needs with environmental demands (Cole, 1994). This skill, although apparent across all nationalities, has been shown to vary in successful application at different age groups. In experiments comparing younger and older adults' responses to the same unpleasant stimuli, older adults were able to regulate their emotional reactions in a way that seemed to avoid negative confrontation. These findings support the theory that with time people develop a better ability to regulate their emotions. This ability found in adults seems to better allow individuals to react in what would be considered a more appropriate manner in some social situations, permitting them to avoid adverse situations that could be seen as detrimental.
Expressive regulation (in solitary conditions)
In solitary conditions, emotion regulation can include a minimization-miniaturization effect, in which common outward expressive patterns are replaced with toned down versions of expression. Unlike other situations, in which physical expression (and its regulation) serve a social purpose (i.e. conforming to display rules or revealing emotion to outsiders), solitary conditions require no reason for emotions to be outwardly expressed (although intense levels of emotion can bring out noticeable expression anyway). The idea behind this is that as people get older, they learn that the purpose of outward expression (to appeal to other people), is not necessary in situations in which there is no one to appeal to. As a result, the level of emotional expression can be lower in these solitary situations.
Stress
The way an individual reacts to stress can directly overlap with their ability to regulate emotion. Although the two concepts differ in a multitude of ways, "both coping [with stress] and emotion regulation involve affect modulation and appraisal processes" that are necessary for healthy relationships and self-identity.
According to Yu. V. Shcherbatykh, emotional stress in situations like school examinations can be reduced by engaging in self-regulating activities prior to the task being performed. To study the influence of self-regulation on mental and physiological processes under exam stress, Shcherbatykh conducted a test with an experimental group of 28 students (of both sexes) and a control group of 102 students (also of both sexes).
In the moments before the examination, situational stress levels were raised in both groups from what they were in quiet states. In the experimental group, participants engaged in three self-regulating techniques (concentration on respiration, general body relaxation, and the creation of a mental image of successfully passing the examination). During the examination, the anxiety levels of the experimental group were lower than that of the control group. Also, the percent of unsatisfactory marks in the experimental group was 1.7 times less than in the control group. From this data, Shcherbatykh concluded that the application of self-regulating actions before examinations helps to significantly reduce levels of emotional strain, which can help lead to better performance results.
Emotion regulation has also been associated with physiological responses to stress during laboratory stress paradigms.
Decision making
Identification of our emotional self-regulating processes can facilitate the decision-making process. Current literature on emotion regulation identifies that humans characteristically make efforts to control emotion experiences. There is then a possibility that our present-state emotions can be altered by emotion regulation strategies, with the result that different regulation strategies could have different decision implications.
Digital emotion regulation
Following widespread adoption in the 21st century of digital devices and services for use in everyday life, evidence is mounting that people are increasingly using these tools to manage and regulate moods and emotions. A wide range of digital resources are used for emotion regulation including smartphones, social media, streaming services, online shopping, and videogames. Such spontaneous forms of digital emotion regulation can be distinguished from the use of digital interventions such as smartphone apps that have been explicitly designed to support emotional regulation or teach emotion regulation skills in clinical and non-clinical populations. Digital implementation of emotion regulation strategies can occur at all stages of the process model and in all strategy families, including interpersonal emotion regulation.
Effects of low self-regulation
With a failure in emotion regulation, there is a rise in psychosocial and emotional dysfunctions and traumatic events. These traumatic experiences typically happen in grade school and are sometimes associated with bullying. Children who cannot properly self-regulate express their volatile emotions in a variety of ways, including screaming if they do not have their way, lashing out with their fists, throwing objects (such as chairs), or bullying other children. Such behaviors often elicit negative reactions from the social environment, which, in turn, can exacerbate or maintain the original regulation problems over time, a process termed cumulative continuity. These children are more likely to have conflict-based relationships with their teachers and other children. This can lead to more severe problems such as an impaired ability to adjust to school, and it predicts school dropout many years later. Children who fail to properly self-regulate grow into teenagers with more emerging problems. Their peers begin to notice this "immaturity", and these children are often excluded from social groups and teased and harassed by their peers. This "immaturity" causes some teenagers to become social outcasts in their respective social groups, leading them to lash out in angry and potentially violent ways. Being teased or being an outcast in childhood is especially damaging because it could lead to psychological symptoms such as depression and anxiety (in which dysregulated emotions play a central role), which, in turn, could lead to more peer victimization. This is why it is recommended to foster emotional self-regulation in children as early as possible.
Occupational therapy in schools
Occupational therapists (OTs) are integrated educators in most public and private schools across the United States. They are trained in mental health and activity analysis to assess the needs of their clients. OTs and students work together to create meaningful and healthy habits for stress management, social skills, emotional labeling, coping strategies, awareness, problem-solving, self-monitoring, judgment, and emotional control, among others, in the school and home environment. OTs can complete formal assessments of emotional regulation and provide client-centered treatment for each student. In addition, they can create individualized home programs for carryover with students' families. For example, OTs can work with students on the occupational therapist-developed curriculum The Zones of Regulation, which utilizes evidence-based knowledge, formal assessment, and in-classroom treatment to improve self-regulation of emotional behaviors and create long-lasting changes in habits.
Early childhood access to education on emotional regulation mitigates risk factors for increased anxiety, depression, and negative behaviors. It allows the student to create healthy habits for school and home environments. Children should be able to learn to regulate their feelings for full participation in activities, including social skills, play, sports, and school.
See also
References
Emotional issues
Life skills
Mindfulness (psychology)
Occupational therapy | Emotional self-regulation | [
"Biology"
] | 6,006 | [
"Emotion",
"Behavior",
"Human behavior"
] |
17,476,439 | https://en.wikipedia.org/wiki/Mobile%20High-Definition%20Link | Mobile High-Definition Link (MHL) is an industry standard for a mobile audio/video interface that allows the connection of smartphones, tablets, and other portable consumer electronics devices to high-definition televisions (HDTVs), audio receivers, and projectors. The standard was designed to share existing mobile device connectors, such as Micro-USB, and avoid the need to add video connectors on devices with limited space for them.
MHL connects to display devices either directly through special HDMI inputs that are MHL-enabled, or indirectly through standard HDMI inputs using MHL-to-HDMI adapters. MHL was developed by a consortium of five companies: Nokia, Samsung, Silicon Image, Sony and Toshiba.
History
Silicon Image, one of the founding companies of the HDMI standard, originally demonstrated a mobile interconnect at the January 2008 Consumer Electronics Show (CES), based on its transition-minimized differential signaling (TMDS) technology. This interface was termed "Mobile High Definition Link" at the time of the demonstration, and is a direct precursor of the implementation announced by the MHL Consortium. The company is quoted as saying it did not ship that original technology in any volume, but used it as a way to get a working group started.
The working group was announced in September 2009, and the MHL Consortium founded in April 2010 by Nokia, Samsung, Silicon Image, Sony and Toshiba. The MHL specification version 1.0 was released in June 2010, and the Compliance Test Specification (CTS) was released in December 2010. May 2011 marked the first retail availability of MHL-enabled products.
The first mobile device to feature the MHL standard was the Samsung Galaxy S II, announced at the 2011 Mobile World Congress.
MHL announced in 2014 that more than half a billion MHL-capable products had been shipped since the standard was created.
Overview
MHL is an adaptation of HDMI intended for mobile devices such as smartphones and tablets. Unlike DVI, which is compatible with HDMI using only passive cables and adapters, MHL requires that the HDMI socket be MHL-enabled. (To deliver an MHL signal to a non-MHL HDMI socket, one can use an adapter device that receives the signal on an MHL-enabled socket, converts it to HDMI, and transmits the HDMI signal to the non-MHL socket.) It has several aspects in common with HDMI, such as the ability to carry uncompressed HDCP-encrypted high-definition video and eight-channel surround sound, and to control remote devices with Consumer Electronics Control (CEC).
There are a total of five pins used in MHL rather than the 19 used in HDMI, namely: a differential pair for data, a bi-directional control channel (CBUS), power charging supply, and ground. This permits a much lighter cable and a much smaller connector on the mobile device, as a typical MHL source will be shared with USB 2.0 on a standard 5-pin Micro-USB receptacle. (Although MHL ports can be dedicated to MHL alone, the standard is designed to permit port sharing with the most commonly used ports.) The USB port switches from USB to MHL when it recognizes an MHL-qualified sink (e.g., a TV) detected on the control wire. A typical MHL sink will be shared with HDMI on a standard 19-pin HDMI receptacle.
Because the same five-pin Micro-USB port is also typically used for charging the device, the sink is required to provide power to maintain the state of charge (or even recharge) while it is being used (although this depends on sufficient power being available; e.g., MHL 2 & 3 provide a minimum of 4.5 W / 900 mA, while superMHL can provide up to 40 W). The use of the power line in this way differs from HDMI, which expects the source to provide 55 mA for the purpose of reading the EDID of a display.
Because of the low pin count of MHL versus HDMI, functions that are carried on separate dedicated pins in HDMI, namely the Display Data Channel (DDC) (pins 15 & 16) and CEC (pin 13), are instead carried on the bi-directional control bus (CBUS). The CBUS both emulates the function of the DDC bus and carries an MHL sideband channel (MSC), which emulates the CEC bus function and allows a TV remote to control the media player on a phone with its Remote Control Protocol (RCP).
Bandwidth
MHL uses the same transition-minimized differential signaling (TMDS) as HDMI to carry video, audio, and auxiliary data. However, MHL differs from HDMI in that there is only one differential pair to carry the TMDS data lane, compared to HDMI's four (three data lanes, plus the clock). Therefore, the three logical data channels are time-division multiplexed onto the single physical MHL data lane (i.e., the logical channels are sent sequentially), and the clock signal is carried as a common-mode signal of this pair. From MHL 3 onwards, the clock signal is instead carried separately on the MHL CBUS pin.
The normal (24-bit) mode operates at 2.25 Gbit/s and multiplexes the same three-channel, 24-bit color signal as HDMI, at a pixel clock rate of up to 75 MHz, sufficient for 1080i and 720p at 60 Hz. One period of the MHL clock equals one period of the pixel clock, and each period of the MHL clock transmits three 10-bit TMDS characters (i.e., a 24-bit pixel, where each 10-bit TMDS character represents an encoded 8-bit byte).
MHL can also operate in PackedPixel mode at 3 Gbit/s, catering for 1080p. In this mode only two channels are multiplexed, as the color signal is changed to a chroma-subsampled (YCbCr 4:2:2) pair of adjacent 16-bit pixels (i.e., two adjacent pixels share chroma values and are together represented with only 32 bits), and the pixel clock is doubled to 150 MHz. One clock period of the MHL clock now equals two periods of the pixel clock, so each period of the MHL clock transmits twice the number of channels, i.e., four 10-bit TMDS characters (a pair of 16-bit pixels).
Version 3 of MHL changed from a frame-based to a packet-based technology and operates at 6 Gbit/s. superMHL extends this by carrying the data signal over more than one differential pair (up to four with USB Type-C, or a total of six using a superMHL cable), allowing up to 36 Gbit/s.
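The bandwidth figures above follow directly from the clock rates and the 10-bit TMDS character size. A minimal sketch in Python (the function name and structure are illustrative, not part of any MHL specification):

```python
# Raw MHL lane bandwidth from the MHL clock rate and TMDS characters per clock.
TMDS_CHAR_BITS = 10  # each TMDS character encodes one 8-bit byte into 10 bits

def lane_rate_gbps(mhl_clock_mhz: float, chars_per_clock: int) -> float:
    """Raw lane bandwidth in Gbit/s."""
    return mhl_clock_mhz * 1e6 * chars_per_clock * TMDS_CHAR_BITS / 1e9

print(lane_rate_gbps(75, 3))  # normal 24-bit mode: 2.25 Gbit/s
print(lane_rate_gbps(75, 4))  # PackedPixel mode: 3.0 Gbit/s
print(6 * 6.0)                # superMHL, six 6 Gbit/s lanes: 36.0 Gbit/s
```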
Versions
All MHL specifications are backward compatible to previous versions of the standard. MHL is connection agnostic (i.e., not tied to a specific type of hardware connector). The first implementations used the 5-pin MHL-USB connector described below, and all are supported over USB Type-C MHL Alternate Mode. Other proprietary and custom connections are also allowed.
MHL 1
Version 1.0 was introduced in June 2010, supporting uncompressed HD video up to 720p/1080i 60 Hz (with RGB and YCbCr 4:2:2/4:4:4 pixel encoding). Support for 1080p 60 Hz (YCbCr 4:2:2) was introduced in version 1.3. The specification supports standard SD (Rec. 601) and HD (Rec. 709) color spaces, as well as those introduced in HDMI 1.3 and 1.4 (xvYCC, sYCC601, Adobe RGB, and AdobeYCC601). Other features include 192 kHz 24-bit LPCM 8-channel surround sound audio, HDCP 1.4 content protection, and a minimum of 2.5 W (500 mA) power between sink (e.g., TV) and source (e.g., mobile phone) for charging. The MHL sideband channel (MSC) includes a built-in Remote Control Protocol (RCP) function allowing the remote control of the TV to operate the MHL mobile device through TV's Consumer Electronics Control (CEC) function, or allowing a mobile device to manage the playback of its content on the TV.
MHL 2
Version 2.0 was introduced in April 2012, and raised the minimum charging supply to 4.5 W (900 mA), with an optional 7.5 W (1.5 A) maximum allowed. Support for 3D video was also introduced, permitting 720p/1080i 60 Hz, and 1080p 24 Hz 3D video modes. The specification also included additional MHL sideband channel (MSC) commands.
MHL 3
Version 3.0 was introduced in August 2013, and added support for 4K Ultra HD (3840 × 2160) 30 Hz video, increasing the maximum bandwidth from 3 Gbit/s to 6 Gbit/s. An additional YCbCr 4:2:0 pixel encoding for 4K resolution was also introduced, while the maximum charging supply was increased to 10 W (2 A). Support for compressed lossless audio formats was added with support for Dolby TrueHD and DTS-HD Master Audio.
The specification increased the speed of the bi-directional data channel from 1 Mbit/s to 75 Mbit/s to enable concurrent 4K video and human interface device (HID) support, such as mice, keyboards, touchscreens, and game controllers. Other features include support for simultaneous multiple displays, improved Remote Control Protocol (RCP) with new commands, and HDCP 2.2 content protection.
superMHL
superMHL 1.0 was introduced in January 2015, supporting 8K Ultra HD (7680 × 4320) 120 Hz High Dynamic Range (HDR) video with wide color gamut (Rec. 2020) and 48-bit deep color. Support for object-based audio formats was added, such as Dolby Atmos and DTS:X, with an audio-only mode also available. The Remote Control Protocol (RCP) was also extended to link multiple MHL devices together (e.g., TV, AVR, Blu-ray Disc player) and control them via one remote.
The specification introduces a reversible 32-pin superMHL connector, which (along with USB Type-C) supports a higher charging power of up to 40 W (20 V / 2 A), and is designed for future bandwidth expansion. The increase in bandwidth over previous MHL versions is achieved by using multiple A/V lanes, each operating at 6 Gbit/s, with a maximum of six A/V lanes supported depending on device and connector type. For example, Micro-USB and HDMI Type-A support one A/V lane, USB Type-C supports up to four A/V lanes, and the superMHL connector supports up to six A/V lanes (36 Gbit/s).
In addition to supporting a variable number of lanes, the specification supports VESA Display Stream Compression (DSC) 1.1, a "visually lossless" (but mathematically lossy) video compression standard. In cases when the bandwidth of the available lane(s) is unable to meet the rate of the uncompressed video stream, bandwidth savings of up to 3:1 can be achieved with a DSC compression rate of 3.0×. For example, 4K 60 Hz is possible using a single lane (e.g., Micro-USB / HDMI Type-A) with a DSC rate of 3.0×.
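The single-lane 4K 60 Hz claim can be sanity-checked with back-of-the-envelope arithmetic. The sketch below ignores blanking intervals and encoding overhead, so real link budgets differ somewhat; it is illustrative only:

```python
# Approximate uncompressed video rate vs. a single 6 Gbit/s MHL lane.
def video_rate_gbps(width: int, height: int, fps: int, bpp: int = 24) -> float:
    """Raw pixel-data rate in Gbit/s, ignoring blanking and overhead."""
    return width * height * fps * bpp / 1e9

uncompressed = video_rate_gbps(3840, 2160, 60)  # ~11.9 Gbit/s
with_dsc = uncompressed / 3.0                   # DSC at 3.0x: ~4.0 Gbit/s
print(with_dsc < 6.0)                           # True: fits one 6 Gbit/s lane
```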
superMHL can use a variety of source and sink connectors with certain limitations: micro-USB or proprietary connectors can be used for the source only, HDMI Type-A for the sink only, while the USB Type-C and the superMHL connectors can be used for the source or sink.
Connectors
Micro-USB–to–HDMI (five-pin)
The first implementations used the most common connection for non-Apple mobile phones at the time, (Micro-USB), and the most common TV connection (HDMI). There are two types of connection, depending on whether the display device directly supports MHL.
Passive cable
Passive cables allow MHL devices to connect directly to MHL-enabled TVs (i.e. display devices or AV receivers with an MHL-enabled HDMI port) while providing charging power upstream to the mobile device. Other than the physical connectors, no USB or HDMI technology is being used. Exclusively MHL signaling is used through the connectors and over the cable.
Active adapter
With an active adapter, MHL devices are able to connect to HDMI display devices that do not have MHL capability by actively converting the signal to HDMI. These adapters often feature an additional Micro-USB port on them to provide charging power to the mobile device because standard HDMI ports do not supply sufficient current.
Samsung Micro-USB–to–HDMI adapter and tip (eleven-pin)
The Samsung Galaxy S III, and the later Galaxy Note II and Galaxy S4, use an 11-pin connector, with six additional connector pins, in order to achieve functional improvements over the 5-pin design (such as simultaneous USB-OTG use). However, if consumers have a standard MHL-to-HDMI adapter, all they need to purchase is a tip. With the launch of the Samsung Galaxy S4, Samsung also released an MHL 2.0 smart adapter with a built-in 11-pin connector. The first Samsung MHL 1.0 smart adapter, released with the Galaxy S III, requires external power and works with HDMI TVs at 1080p at 24 Hz. The MHL 2.0 adapter released with the Galaxy S4 can output 1080p at 60 Hz and does not need external power.
USB Type-C (MHL Alternate Mode)
The MHL Alternate Mode for USB 3.1 specification allows MHL enabled source and display devices to be connected through a USB Type-C port. The standard was released on November 17, 2014, and is backward compatible with existing MHL specifications: supporting MHL 1, 2, 3 and superMHL. The standard supports the simultaneous transfer of data (at least USB 2.0, and depending on video resolution: USB 3.1 Gen 1 or 2) and power charging (up to 40 W via USB Power Delivery), in addition to MHL audio/video. This allows the connection to be used with mobile docks, allowing devices to connect to other peripherals while charging. The use of passive cables is possible when both devices support the standard, e.g., when connecting to superMHL, USB Type-C, and MHL-enabled HDMI, otherwise, an active cable adapter is necessary to connect to standard HDMI devices.
Depending on the bandwidth requirement, the standard makes use of a variable number of USB Type-C's four SuperSpeed differential pairs to carry each TMDS lane: a single lane is required for resolutions up to 4K/60 Hz, two lanes for 4K/120 Hz, and all four lanes for 8K/60 Hz. The MHL eCBUS signal is sent over a side-band (SBU) pin on the USB Type-C connector. When one or two lanes are used, USB 3.1 data transfer is supported.
In common MHL Alternate Mode implementations, the video from the GPU will be converted to an MHL signal by using an MHL transmitter chip. The transmitter chips often accept video in MIPI (DSI/DPI) or HDMI format and convert it to MHL format. The USB Type-C port controller functions as a switch and multiplexer, passing the MHL signal through to the external devices. The dock or display device may use an MHL bridge chip to convert the MHL signal to HDMI signal format.
superMHL (32-pin)
In conjunction with the release of the superMHL specification in January 2015, MHL introduced a reversible 32-pin superMHL connector. The connector can carry six A/V lanes over six differential pairs, catering for the full 36 Gbit/s bandwidth available from the superMHL standard. The connector also enables 40 W of charging power at a higher voltage and current.
Alternatives
SlimPort is a proprietary alternative to MHL, based on the DisplayPort standard integrated into common Micro-USB ports, and supports up to 1080p60 or 1080p30 with 3D content over HDMI 1.4 (up to 5.4 Gbit/s of bandwidth), in addition to support for DVI, VGA (up to 1920 × 1080 at 60 Hz), and DisplayPort.
See also
SlimPort (Mobility DisplayPort), also known as MyDP
Miracast (wireless display technology)
Chromecast (proprietary media broadcast over IP: Google Cast for audio or audiovisual playback)
AirPlay (proprietary IP-based)
Digital Living Network Alliance (DLNA) (IP-based)
WiDi version 3.5 to 6.0 supports Miracast; discontinued
Wireless HDMI
WirelessHD proprietary
Wireless Home Digital Interface
References
External links
Computer-related introductions in 2008
Computer-related introductions in 2010
Digital display connectors
High-definition television
Standards
Television technology
Ultra-high-definition television
Serial buses | Mobile High-Definition Link | [
"Technology"
] | 3,692 | [
"Information and communications technology",
"Television technology"
] |
17,477,796 | https://en.wikipedia.org/wiki/Issue-based%20information%20system | The issue-based information system (IBIS) is an argumentation-based approach to clarifying wicked problems—complex, ill-defined problems that involve multiple stakeholders. Diagrammatic visualization using IBIS notation is often called issue mapping.
IBIS was invented by Werner Kunz and Horst Rittel in the 1960s. According to Kunz and Rittel, "Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse."
Subsequently, the understanding of planning and design as a process of argumentation (of the designer with himself or with others) has led to the use of IBIS in design rationale, where IBIS notation is one of a number of different kinds of rationale notation. The simplicity of IBIS notation, and its focus on questions, makes it especially suited for representing conversations during the early exploratory phase of problem solving, when a problem is relatively ill-defined.
The basic structure of IBIS is a graph. It is therefore quite suitable to be manipulated by computer, as in a graph database.
Overview
The elements of IBIS are: issues (questions that need to be answered), each of which is associated with (answered by) alternative positions (possible answers or ideas), which are associated with arguments that support or object to a given position; arguments that support a position are called "pros", and arguments that object to a position are called "cons". In the course of the treatment of issues, new issues come up, which are treated likewise.
IBIS elements are usually represented as nodes, and the associations between elements are represented as directed edges (arrows).
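Because IBIS is structurally a graph of typed nodes and directed edges, it maps naturally onto simple data structures. A minimal sketch in Python (the node kinds follow the element types above; the class design and the example conversation are illustrative, not any IBIS tool's API):

```python
from dataclasses import dataclass, field

@dataclass
class IbisNode:
    kind: str   # "issue", "position", "pro", or "con"
    text: str
    children: list = field(default_factory=list)  # directed edges to responses

    def add(self, kind: str, text: str) -> "IbisNode":
        node = IbisNode(kind, text)
        self.children.append(node)
        return node

root = IbisNode("issue", "How should the new park be funded?")
bonds = root.add("position", "Issue municipal bonds")
bonds.add("pro", "Spreads cost over many years")
bonds.add("con", "Adds to city debt load")
# New issues raised while treating a position hang off that position,
# mirroring how IBIS lets follow-up questions arise anywhere in the map.
bonds.add("issue", "What interest rate would the bonds carry?")
```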
In 1988, Douglas E. Noble and Horst Rittel described the overall purpose of IBIS as follows:
IBIS notation has been used, along with function analysis diagram (FAD) notation, as an aid for root cause analysis.
Issue mapping
IBIS notation is used in issue mapping, an argument visualization technique closely related to argument mapping. An issue map aims to comprehensively diagram the rhetorical structure of a conversation (or a series of conversations) as seen by the participants in the conversation, as opposed to an ideal conceptual structure such as, for example, a causal loop diagram, flowchart, or structure chart.
Dialogue mapping
Issue mapping is the basis of a meeting facilitation technique called dialogue mapping. In dialogue mapping, a person called a facilitator uses IBIS notation to record a group conversation, while it is happening, on a "shared display" (usually a video projector). The facilitator listens to the conversation, and summarizes the ideas mentioned in the conversation on the shared display using IBIS notation, and if possible "validates" the map often by checking with the group to make sure each recorded element accurately represents the group's thinking. Dialogue mapping, like a few other facilitation methods, has been called "nondirective" because it does not require participants or leaders to agree on an agenda or a problem definition. Users of dialogue mapping have reported that dialogue mapping, under certain conditions, can improve the efficiency of meetings by reducing unnecessary redundancy and digressions in conversations, among other benefits.
A dialogue map does not aim to be as formal as, for example, a logic diagram or decision tree, but rather aims to be a comprehensive display of all the ideas that people shared during a conversation. Other decision algorithms can be applied to a dialogue map after it has been created, although dialogue mapping is also well suited to situations that are too complex and context-dependent for an algorithmic approach to decision-making. Some researchers and practitioners have combined IBIS with numerical decision-making software based on multi-criteria decision making.
History
Rittel's interest lay in the area of public policy and planning, which is also the context in which he and his colleagues defined wicked problems. So it is no surprise that Kunz and Rittel envisaged IBIS as the "type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision".
When Kunz and Rittel's paper was written, there were three manual, paper-based IBIS-type systems in use—two in government agencies and one in a university.
A renewed interest in IBIS-type systems came about in the following decade, when advances in technology made it possible to design relatively inexpensive, computer-based IBIS-type systems. By 1983, Raymond McCall and colleagues had implemented a version of IBIS called PHIBIS (procedurally hierarchical IBIS) in personal computer software called MIKROPLIS (microcomputer-based planning and information system), which was described as an information system for "professional problem solvers—including planners, designers and scientists". In 1987, Douglas E. Noble completed a computer-supported IBIS program as part of his doctoral dissertation. Another IBIS computer program developed in the late 1980s was called HyperIBIS. Jeff Conklin and co-workers adapted the IBIS structure for use in software engineering, creating the gIBIS (graphical IBIS) hypertext system in the late 1980s. Around 1990, a program called Author's Argumentation Assistant (AAA) combined the PHIBIS model with the Toulmin model of argumentation in "a hypertext-based authoring tool for argumentative texts". In the 1990s, architecture researchers experimented with enhancing IBIS with a fuzzy reasoning system.
Several other graphical IBIS-type systems were developed once it was realised that such systems facilitated collaborative design and problem solving. These efforts culminated in the creation of the open source Compendium (software) tool which supports—among other things—a graphical IBIS notation. Another IBIS tool that integrates with Microsoft SharePoint is called Glyma. Similar tools which do not rely on a database for storage include DRed (Design Rationale editor) and designVUE.
Since the mid-2000s, there has been a renewed interest in IBIS-type systems, particularly in the context of sensemaking and collaborative problem solving in a variety of social and technical contexts. Of particular note is the facilitation method called dialogue mapping which uses the IBIS notation to map out a design (or any other) dialogue as it evolves.
In 2021, researchers reported that IBIS notation is used in D-Agree, a discussion support platform with artificial intelligence–based facilitation. The discussion trees in D-Agree, inspired by IBIS, contain a combination of four types of elements: issues, ideas, pros, and cons. The software extracts a discussion's structure in real time based on IBIS, automatically classifying all the sentences.
See also
References
Argument mapping
Information systems
Justification (epistemology)
Problem structuring methods | Issue-based information system | [
"Technology"
] | 1,449 | [
"Information systems",
"Information technology"
] |
17,478,272 | https://en.wikipedia.org/wiki/Louis%20Robinson | Louis Robinson (1857–1928) was a 19th-century English physician, paediatrician and author. An ardent evolutionist, he helped pioneer modern child medicine during the later Victorian era, writing prolifically in journals on the emerging science of paediatrics. Active in scientific debate, Robinson was critiqued in some parts of the press for his outspoken evolutionary views in the wider debate between scientific theories of human origin and the religious view.
Early life
Born 8 August 1857 to a Quaker family in Saddlescombe near Brighton, Sussex, Robinson was educated at Quaker schools in Ackworth and York. His younger sister was the English novelist Maude Robinson. He went on to study medicine in London (at St Bartholomew's Hospital) and Newcastle upon Tyne, before graduating top of his class in 1889. He was married the previous year to Edith Aline Craddock, with whom he went on to have four children.
Medical career
Drawing on his extensive research, Robinson expressed his interest in evolution in a series of articles, which led to an appearance before the British Association at Edinburgh to present his paper "The Prehensile Power of Infants". A keen practitioner as well as theorist, Robinson was one of the first doctors of his era to conduct experiments with young babies, testing over sixty subjects immediately after birth on their power of grip. This echoed the approach of the pioneering German physician Adolph Kussmaul.
Later years
Following a series of lectures at Oxford on vestigial reflexes, he was sought after to teach in both British and American universities, and was increasingly noticed by prominent scientists such as Huxley, Burdon-Sanderson and Flower. However, Robinson opted to focus on his work as a doctor in Streatham. Nonetheless, he continued his research, employing several assistants, which led to his publication of a volume on evolution focused on animal behaviour.
He died as a result of an accidental gunshot wound in Folkestone, Kent on 5 February 1928 aged 70.
See also
Royal College of Paediatrics and Child Health
Charles Darwin
Thomas Henry Huxley
References
1857 births
1928 deaths
19th-century English medical doctors
British paediatricians
Ethologists
People from Newtimber | Louis Robinson | [
"Biology"
] | 448 | [
"Ethology",
"Behavior",
"Ethologists"
] |
17,479,962 | https://en.wikipedia.org/wiki/Autoclitic | Autoclitics are verbal responses that modify the effect on the listener of the primary operants that comprise B.F. Skinner's classification of Verbal Behavior.
Autoclitics
An autoclitic is a verbal behavior that modifies the functions of other verbal behaviors. For example, "I think it is raining" possesses the autoclitic "I think," which moderates the strength of the statement "it is raining." Research that involves autoclitics includes Lodhi & Greer (1989).
Descriptive autoclitics
A speaker may acquire verbal behaviour that describes their own behaviour. "I said Noam C. Hayes is wrong" is a descriptive autoclitic that describes the behaviour of talking about one's own behaviour. Descriptive autoclitics may also describe strength of response, as the emission of "I think" is often used to indicate some level of weakness, as in "Noam Chomsky is smart, I think." Descriptive autoclitics modify the listener's reaction by specifying something about the circumstances of the emission of a response or the condition of the speaker providing the verbal response. For example, the "I guess" in "I guess he is here" describes the strength of the statement "he is here." It does so because "I guess" specifies that the speaker is not sure he is here, just guessing, thus showing weakness in the strength of the response "he is here." In describing something about a response, descriptive autoclitics specify some condition of a response, such as "I said" in "I said 'Hello.'" The "I said" describes the condition under which "Hello" was said. Descriptive autoclitics can include information regarding the type of verbal operant they accompany, the strength of the verbal response, the relation between responses, or the emotional or motivational conditions of the speaker. In addition, negative autoclitics qualify or cancel the responses they accompany. For example, the "not" in "it is not raining" cancels the response "it is raining." Descriptive autoclitics can also indicate that a response is being omitted, or that the omitted response is subordinate in relation to what has been said, e.g., "for example." Qualifying autoclitics modify the listener's behaviour by qualifying tacts in intensity or direction. Negation is a common qualifying autoclitic: in "it is not raining", the "not" qualifies "it is raining". Without the "not", the listener's behaviour would be inappropriate. "No!" also serves to cancel a response, while "Yes!" encourages a response, as qualifying autoclitics can serve to assert a response.
Quantifying autoclitics modify the reaction of the listener, in that "all", "some", and "no" affect the responses they accompany. "A" and "the" narrow a listener in on the response that follows and its relation to the controlling stimulus. For example, the circumstances under which we say "book" vary from those in which we say "the book", with "the" functioning to modify the listener's reaction. Relational autoclitics are different from descriptive autoclitics in that they affect the behavior of the listener. For example, "above" in "the book is above the shelf" tells the listener where to find the book, thereby altering where the listener looks for the book. Another way to look at relational autoclitics is that they describe the relation between verbal operants, and modify the listener's behavior in that way. For example, in the statement "the book is black", the "is" tells the listener there is a relation between "book" and "black"; "is" specifies what is black.
Grammar and syntax as autoclitic processes
Skinner describes grammatical manipulations, such as the ordering or grouping of responses, as autoclitic. The ordering of patterns may be a function of relative strength, temporal ordering, or other factors. Skinner speaks to the use of predication and the use of tags, contrasting the Latin forms, which use tags, with English, which uses grouping and ordering. Skinner proposes the relational autoclitic as a descriptor for these kinds of relationships.
Composition and its effect
Composition represents a special class of autoclitic responding, because the responding is itself a response to previously existing verbal responses. The autoclitic is controlled not only by its effects on the listener but also by its effects upon the speaker as a listener to their own responses. Skinner notes that "emotional and imaginal" behavior has little to do with grammar and syntax. Obscene words and poetry are likely to be effective even when emitted non-grammatically.
Self-editing
Self-editing as a compositional process follows the autoclitic process of manipulating responses. After the responses are changed with autoclitics they are examined for their effects and then "rejected or released." Conditions may prevent self-editing, such as a very high response strength.
Rejection
The physical topography of the rejection of verbal behavior in the process of editing varies from the partial emission of a written word to the apparent non-emission of a vocal response. It may include ensuring that responses simply do not reach a listener, as in not delivering a manuscript or letter. Manipulative autoclitics can revoke words by striking them out, as in a court of law. Similar effects may arise from expression like "Forget it."
Defective feedback
A speaker may fail to react as a listener to their own speech under conditions where the emission of verbal responses is very quick. The speed may be a function of strength or of differential reinforcement. Physical interruption may arise as in the case of those who are hearing impaired, or under conditions of mechanical impairment such as ambient noise. Skinner argues the Ouija board may operate to mask feedback and so produce unedited verbal behavior.
References
Behaviorism
Psycholinguistics | Autoclitic | [
"Biology"
] | 1,193 | [
"Behavior",
"Behaviorism"
] |
17,481,271 | https://en.wikipedia.org/wiki/Fluorine | Fluorine is a chemical element; it has symbol F and atomic number 9. It is the lightest halogen and exists at standard conditions as pale yellow diatomic gas. Fluorine is extremely reactive as it reacts with all other elements except for the light inert gases. It is highly toxic.
Among the elements, fluorine ranks 24th in cosmic abundance and 13th in crustal abundance. Fluorite, the primary mineral source of fluorine, which gave the element its name, was first described in 1529; as it was added to metal ores to lower their melting points for smelting, the Latin verb fluo, meaning "to flow", gave the mineral its name. Proposed as an element in 1810, fluorine proved difficult and dangerous to separate from its compounds, and several early experimenters died or sustained injuries from their attempts. Only in 1886 did French chemist Henri Moissan isolate elemental fluorine using low-temperature electrolysis, a process still employed for modern production. Industrial production of fluorine gas for uranium enrichment, its largest application, began during the Manhattan Project in World War II.
Owing to the expense of refining pure fluorine, most commercial applications use fluorine compounds, with about half of mined fluorite used in steelmaking. The rest of the fluorite is converted into hydrogen fluoride en route to various organic fluorides, or into cryolite, which plays a key role in aluminium refining. The carbon–fluorine bond is usually very stable. Organofluorine compounds are widely used as refrigerants, electrical insulation, and PTFE (Teflon). Pharmaceuticals such as atorvastatin and fluoxetine contain C−F bonds. The fluoride ion from dissolved fluoride salts inhibits dental cavities and so finds use in toothpaste and water fluoridation. Global fluorochemical sales amount to more than US$15 billion a year.
Fluorocarbon gases are generally greenhouse gases with global-warming potentials 100 to 23,500 times that of carbon dioxide, and SF6 has the highest global warming potential of any known substance. Organofluorine compounds often persist in the environment due to the strength of the carbon–fluorine bond. Fluorine has no known metabolic role in mammals; a few plants and marine sponges synthesize organofluorine poisons (most often monofluoroacetates) that help deter predation.
Characteristics
Electron configuration
Fluorine atoms have nine electrons, one fewer than neon, and electron configuration 1s2 2s2 2p5: two electrons in a filled inner shell and seven in an outer shell requiring one more to be filled. The outer electrons are ineffective at nuclear shielding, and experience a high effective nuclear charge of 9 − 2 = 7; this affects the atom's physical properties.
Fluorine's first ionization energy is third-highest among all elements, behind helium and neon, which complicates the removal of electrons from neutral fluorine atoms. It also has a high electron affinity, second only to chlorine, and tends to capture an electron to become isoelectronic with the noble gas neon; it has the highest electronegativity of any reactive element. Fluorine atoms have a small covalent radius of around 60 picometers, similar to those of its period neighbors oxygen and neon.
Reactivity
The bond energy of difluorine is much lower than that of either Cl2 or Br2 and similar to that of the easily cleaved peroxide bond; this, along with high electronegativity, accounts for fluorine's easy dissociation, high reactivity, and strong bonds to non-fluorine atoms. Conversely, bonds to other atoms are very strong because of fluorine's high electronegativity. Even normally unreactive substances like powdered steel, glass fragments, and asbestos fibers react quickly with cold fluorine gas; wood and water spontaneously combust under a fluorine jet.
Reactions of elemental fluorine with metals require varying conditions. Alkali metals cause explosions and alkaline earth metals display vigorous activity in bulk; to prevent passivation from the formation of metal fluoride layers, most other metals such as aluminium and iron must be powdered, and noble metals require pure fluorine gas at 300–450 °C. Some solid nonmetals (sulfur, phosphorus) react vigorously in liquid fluorine. Hydrogen sulfide and sulfur dioxide combine readily with fluorine, the latter sometimes explosively; sulfuric acid exhibits much less activity, requiring elevated temperatures.
Hydrogen, like some of the alkali metals, reacts explosively with fluorine. Carbon, as lamp black, reacts at room temperature to yield tetrafluoromethane. Graphite combines with fluorine above 400 °C to produce non-stoichiometric carbon monofluoride; higher temperatures generate gaseous fluorocarbons, sometimes with explosions. Carbon dioxide and carbon monoxide react at or just above room temperature, whereas paraffins and other organic chemicals generate strong reactions: even completely substituted haloalkanes such as carbon tetrachloride, normally incombustible, may explode. Although nitrogen trifluoride is stable, nitrogen requires an electric discharge at elevated temperatures for reaction with fluorine to occur, owing to the very strong triple bond in elemental nitrogen; ammonia may react explosively. Oxygen does not combine with fluorine under ambient conditions, but can be made to react using electric discharge at low temperatures and pressures; the products tend to disintegrate into their constituent elements when heated. The heavier halogens react readily with fluorine, as does the noble gas radon; of the other noble gases, only xenon and krypton react, and only under special conditions. Argon does not react with fluorine gas; however, it does form a compound with fluorine, argon fluorohydride.
Phases
At room temperature, fluorine is a gas of diatomic molecules, pale yellow when pure (sometimes described as yellow-green). It has a characteristic halogen-like pungent and biting odor detectable at 20 ppb. Fluorine condenses into a bright yellow liquid at −188 °C, a transition temperature similar to those of oxygen and nitrogen.
Fluorine has two solid forms, α- and β-fluorine. The latter crystallizes at −220 °C and is transparent and soft, with the same disordered cubic structure as freshly crystallized solid oxygen, unlike the orthorhombic systems of other solid halogens. Further cooling to −228 °C induces a phase transition into opaque and hard α-fluorine, which has a monoclinic structure with dense, angled layers of molecules. The transition from β- to α-fluorine is more exothermic than the condensation of fluorine, and can be violent.
Isotopes
Only one isotope of fluorine occurs naturally in abundance, the stable isotope 19F. It has a high magnetogyric ratio and exceptional sensitivity to magnetic fields; because it is also the only stable isotope, it is used in magnetic resonance imaging. Eighteen radioisotopes with mass numbers 13–31 have been synthesized, of which 18F is the most stable with a half-life of 109.734 minutes. 18F is a natural trace radioisotope produced by cosmic ray spallation of atmospheric argon as well as by reaction of protons with natural oxygen: 18O + p → 18F + n. Other radioisotopes have half-lives less than 70 seconds; most decay in less than half a second. The isotopes 17F and 18F undergo β+ decay and electron capture, lighter isotopes decay by proton emission, and those heavier than 19F undergo β− decay (the heaviest ones with delayed neutron emission). Two metastable isomers of fluorine are known, 18mF, with a half-life of 162(7) nanoseconds, and 26mF, with a half-life of 2.2(1) milliseconds.
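As a short worked example of the half-life figure quoted above (illustrative only), exponential decay gives the fraction of an 18F sample remaining after a given time:

```python
import math

HALF_LIFE_MIN = 109.734  # 18F half-life in minutes, as quoted above

def fraction_remaining(minutes: float) -> float:
    """Fraction of 18F activity left after the given elapsed time."""
    return math.exp(-math.log(2) * minutes / HALF_LIFE_MIN)

print(round(fraction_remaining(120), 3))  # after two hours: ~0.469
```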
Occurrence
Universe
Among the lighter elements, fluorine's abundance value of 400 ppb (parts per billion) – 24th among elements in the universe – is exceptionally low: other elements from carbon to magnesium are twenty or more times as common. This is because stellar nucleosynthesis processes bypass fluorine, and any fluorine atoms otherwise created have high nuclear cross sections, allowing collisions with hydrogen or helium to generate oxygen or neon respectively.
Beyond this transient existence, three explanations have been proposed for the presence of fluorine:
during type II supernovae, bombardment of neon atoms by neutrinos could transmute them to fluorine;
the solar wind of Wolf–Rayet stars could blow fluorine away from any hydrogen or helium atoms; or
fluorine is borne out on convection currents arising from fusion in asymptotic giant branch stars.
Earth
Fluorine is the 13th most abundant element in Earth's crust at 600–700 ppm (parts per million) by mass. Though elemental fluorine was long believed not to occur naturally, it has been shown to be present as an occlusion in antozonite, a variant of fluorite. Most fluorine exists as fluoride-containing minerals. Fluorite, fluorapatite and cryolite are the most industrially significant. Fluorite (CaF2), also known as fluorspar, abundant worldwide, is the main source of fluoride, and hence fluorine. China and Mexico are the major suppliers. Fluorapatite (Ca5(PO4)3F), which contains most of the world's fluoride, is an inadvertent source of fluoride as a byproduct of fertilizer production. Cryolite (Na3AlF6), used in the production of aluminium, is the most fluorine-rich mineral. Economically viable natural sources of cryolite have been exhausted, and most is now synthesised commercially.
Other minerals such as topaz contain fluorine. Fluorides, unlike other halides, are insoluble and do not occur in commercially favorable concentrations in saline waters. Trace quantities of organofluorines of uncertain origin have been detected in volcanic eruptions and geothermal springs. The existence of gaseous fluorine in crystals, suggested by the smell of crushed antozonite, is contentious; a 2012 study reported the presence of 0.04% by weight in antozonite, attributing these inclusions to radiation from the presence of tiny amounts of uranium.
History
Early discoveries
In 1529, Georgius Agricola described fluorite as an additive used to lower the melting point of metals during smelting. He penned the Latin word fluorēs (fluor, flow) for fluorite rocks. The name later evolved into fluorspar (still commonly used) and then fluorite. The composition of fluorite was later determined to be calcium difluoride.
Hydrofluoric acid was used in glass etching from 1720 onward. Andreas Sigismund Marggraf first characterized it in 1764 when he heated fluorite with sulfuric acid, and the resulting solution corroded its glass container. Swedish chemist Carl Wilhelm Scheele repeated the experiment in 1771, and named the acidic product fluss-spats-syran (fluorspar acid). In 1810, the French physicist André-Marie Ampère suggested that hydrogen and an element analogous to chlorine constituted hydrofluoric acid. He also proposed in a letter to Sir Humphry Davy dated August 26, 1812 that this then-unknown substance may be named fluorine from fluoric acid and the -ine suffix of other halogens. This word, often with modifications, is used in most European languages; however, Greek, Russian, and some others, following Ampère's later suggestion, use the name ftor or derivatives, from the Greek φθόριος (phthorios, destructive). The New Latin name fluorum gave the element its current symbol F; Fl was used in early papers.
Isolation
Initial studies on fluorine were so dangerous that several 19th-century experimenters were deemed "fluorine martyrs" after misfortunes with hydrofluoric acid. Isolation of elemental fluorine was hindered by the extreme corrosiveness of both elemental fluorine itself and hydrogen fluoride, as well as the lack of a simple and suitable electrolyte. Edmond Frémy postulated that electrolysis of pure hydrogen fluoride to generate fluorine was feasible and devised a method to produce anhydrous samples from acidified potassium bifluoride; instead, he discovered that the resulting (dry) hydrogen fluoride did not conduct electricity. Frémy's former student Henri Moissan persevered, and after much trial and error found that a mixture of potassium bifluoride and dry hydrogen fluoride was a conductor, enabling electrolysis. To prevent rapid corrosion of the platinum in his electrochemical cells, he cooled the reaction to extremely low temperatures in a special bath and forged cells from a more resistant mixture of platinum and iridium, and used fluorite stoppers. In 1886, after 74 years of effort by many chemists, Moissan isolated elemental fluorine.
In 1906, two months before his death, Moissan received the Nobel Prize in Chemistry, with the following citation:
Later uses
The Frigidaire division of General Motors (GM) experimented with chlorofluorocarbon refrigerants in the late 1920s, and Kinetic Chemicals was formed as a joint venture between GM and DuPont in 1930, hoping to market Freon-12 (CCl2F2) as one such refrigerant. It replaced earlier and more toxic compounds, increased demand for kitchen refrigerators, and became profitable; by 1949 DuPont had bought out Kinetic and marketed several other Freon compounds. Polytetrafluoroethylene (Teflon) was serendipitously discovered in 1938 by Roy J. Plunkett while working on refrigerants at Kinetic, and its superlative chemical and thermal resistance led to its accelerated commercialization and mass production by 1941.
Large-scale production of elemental fluorine began during World War II. Germany used high-temperature electrolysis to make tons of the planned incendiary chlorine trifluoride, and the Manhattan Project used huge quantities to produce uranium hexafluoride for uranium enrichment. Since uranium hexafluoride is as corrosive as fluorine, gaseous diffusion plants required special materials: nickel for membranes, fluoropolymers for seals, and liquid fluorocarbons as coolants and lubricants. This burgeoning nuclear industry later drove post-war fluorochemical development.
Compounds
Fluorine has a rich chemistry, encompassing organic and inorganic domains. It combines with metals, nonmetals, metalloids, and most noble gases. Fluorine's high electron affinity results in a preference for ionic bonding; when it forms covalent bonds, these are polar, and almost always single.
Oxidation states
In compounds, fluorine almost exclusively assumes an oxidation state of −1. Fluorine in F2 is defined to have oxidation state 0. A few unstable species with intermediate oxidation states, which decompose at around 40 K, have been observed, and some related species are predicted to be stable.
Metals
Alkali metals form ionic and highly soluble monofluorides; these have the cubic arrangement of sodium chloride and analogous chlorides. Alkaline earth difluorides possess strong ionic bonds but are insoluble in water, with the exception of beryllium difluoride, which also exhibits some covalent character and has a quartz-like structure. Rare earth elements and many other metals form mostly ionic trifluorides.
Covalent bonding first comes to prominence in the tetrafluorides: those of zirconium, hafnium and several actinides are ionic with high melting points, while those of titanium, vanadium, and niobium are polymeric, melting or decomposing at no more than 350 °C. Pentafluorides continue this trend with their linear polymers and oligomeric complexes. Thirteen metal hexafluorides are known, all octahedral; most are volatile solids, except for liquid MoF6 and ReF6 and gaseous WF6. Rhenium heptafluoride, the only characterized metal heptafluoride, is a low-melting molecular solid with pentagonal bipyramidal molecular geometry. Metal fluorides with more fluorine atoms are particularly reactive.
Hydrogen
Hydrogen and fluorine combine to yield hydrogen fluoride, in which discrete molecules form clusters by hydrogen bonding, resembling water more than hydrogen chloride. It boils at a much higher temperature than heavier hydrogen halides and unlike them is miscible with water. Hydrogen fluoride readily hydrates on contact with water to form aqueous hydrogen fluoride, also known as hydrofluoric acid. Unlike the other hydrohalic acids, which are strong, hydrofluoric acid is a weak acid at low concentrations. However, it can attack glass, something the other acids cannot do.
Other reactive nonmetals
Binary fluorides of metalloids and p-block nonmetals are generally covalent and volatile, with varying reactivities. Period 3 and heavier nonmetals can form hypervalent fluorides.
Boron trifluoride is planar and possesses an incomplete octet. It functions as a Lewis acid and combines with Lewis bases like ammonia to form adducts. Carbon tetrafluoride is tetrahedral and inert; its group analogues, silicon and germanium tetrafluoride, are also tetrahedral but behave as Lewis acids. The pnictogens form trifluorides that increase in reactivity and basicity with higher molecular weight, although nitrogen trifluoride resists hydrolysis and is not basic. The pentafluorides of phosphorus, arsenic, and antimony are more reactive than their respective trifluorides, with antimony pentafluoride among the strongest neutral Lewis acids known, behind only gold pentafluoride.
Chalcogens have diverse fluorides: unstable difluorides have been reported for oxygen (the only known compound with oxygen in an oxidation state of +2), sulfur, and selenium; tetrafluorides and hexafluorides exist for sulfur, selenium, and tellurium. The latter are stabilized by more fluorine atoms and lighter central atoms, so sulfur hexafluoride is especially inert. Chlorine, bromine, and iodine can each form mono-, tri-, and pentafluorides, but only iodine heptafluoride has been characterized among possible interhalogen heptafluorides. Many of them are powerful sources of fluorine atoms, and industrial applications using chlorine trifluoride require precautions similar to those using fluorine.
Noble gases
Noble gases, having complete electron shells, defied reaction with other elements until 1962 when Neil Bartlett reported synthesis of xenon hexafluoroplatinate; xenon difluoride, tetrafluoride, hexafluoride, and multiple oxyfluorides have been isolated since then. Among other noble gases, krypton forms a difluoride, and radon and fluorine generate a solid suspected to be radon difluoride. Binary fluorides of lighter noble gases are exceptionally unstable: argon and hydrogen fluoride combine under extreme conditions to give argon fluorohydride. Helium has no long-lived fluorides, and no neon fluoride has ever been observed; helium fluorohydride has been detected for milliseconds at high pressures and low temperatures.
Organic compounds
The carbon–fluorine bond is organic chemistry's strongest, and gives stability to organofluorines. It is almost non-existent in nature, but is used in artificial compounds. Research in this area is usually driven by commercial applications; the compounds involved are diverse and reflect the complexity inherent in organic chemistry.
Discrete molecules
The substitution of hydrogen atoms in an alkane by progressively more fluorine atoms gradually alters several properties: melting and boiling points are lowered, density increases, solubility in hydrocarbons decreases and overall stability increases. Perfluorocarbons, in which all hydrogen atoms are substituted, are insoluble in most organic solvents, reacting at ambient conditions only with sodium in liquid ammonia.
The term perfluorinated compound is used for what would otherwise be a perfluorocarbon if not for the presence of a functional group, often a carboxylic acid. These compounds share many properties with perfluorocarbons such as stability and hydrophobicity, while the functional group augments their reactivity, enabling them to adhere to surfaces or act as surfactants. Fluorosurfactants, in particular, can lower the surface tension of water more than their hydrocarbon-based analogues. Fluorotelomers, which have some unfluorinated carbon atoms near the functional group, are also regarded as perfluorinated.
Polymers
Polymers exhibit the same stability increases afforded by fluorine substitution (for hydrogen) in discrete molecules; their melting points generally increase too. Polytetrafluoroethylene (PTFE), the simplest fluoropolymer and perfluoro analogue of polyethylene with structural unit –CF2–, demonstrates this change as expected, but its very high melting point makes it difficult to mold. Various PTFE derivatives are less temperature-tolerant but easier to mold: fluorinated ethylene propylene replaces some fluorine atoms with trifluoromethyl groups, perfluoroalkoxy alkanes do the same with trifluoromethoxy groups, and Nafion contains perfluoroether side chains capped with sulfonic acid groups. Other fluoropolymers retain some hydrogen atoms; polyvinylidene fluoride has half the fluorine atoms of PTFE and polyvinyl fluoride has a quarter, but both behave much like perfluorinated polymers.
Production
Elemental fluorine and virtually all fluorine compounds are produced from hydrogen fluoride or its aqueous solution, hydrofluoric acid. Hydrogen fluoride is produced in kilns by the endothermic reaction of fluorite (CaF2) with sulfuric acid:
CaF2 + H2SO4 → 2 HF(g) + CaSO4
The gaseous HF can then be absorbed in water or liquefied.
About 20% of manufactured HF is a byproduct of fertilizer production, which produces hexafluorosilicic acid (H2SiF6), which can be degraded to release HF thermally and by hydrolysis:
H2SiF6 → 2 HF + SiF4
SiF4 + 2 H2O → 4 HF + SiO2
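For a sense of scale, the kiln reaction fixes the theoretical HF output per ton of fluorite. A minimal sketch of the arithmetic (assuming pure CaF2 feed and complete conversion; the purity assumption is mine):

```python
# Theoretical HF yield from CaF2 + H2SO4 -> 2 HF + CaSO4, assuming pure fluorite.
M_CAF2 = 78.07  # g/mol, molar mass of CaF2
M_HF = 20.01    # g/mol, molar mass of HF

kg_hf_per_tonne = 1000.0 * (2 * M_HF) / M_CAF2  # kg HF per 1000 kg CaF2
print(f"{kg_hf_per_tonne:.0f} kg HF per tonne of fluorite")  # ~513 kg
```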
Industrial routes to F2
Moissan's method is used to produce industrial quantities of fluorine, via the electrolysis of a potassium bifluoride/hydrogen fluoride mixture: hydrogen ions are reduced at a steel container cathode and fluoride ions are oxidized at a carbon block anode, under 8–12 volts, to generate hydrogen and fluorine gas respectively. Temperatures are elevated, KF·2HF melting at about 70 °C and being electrolyzed at 70–130 °C. KF, which acts to provide electrical conductivity, is essential since pure HF cannot be electrolyzed because it is virtually non-conductive. Fluorine can be stored in steel cylinders that have passivated interiors, at temperatures below 200 °C; otherwise nickel can be used. Regulator valves and pipework are made of nickel, the latter possibly using Monel instead. Frequent passivation, along with the strict exclusion of water and greases, must be undertaken. In the laboratory, glassware may carry fluorine gas under low pressure and anhydrous conditions; some sources instead recommend nickel–Monel–PTFE systems.
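In the style of the reactions above, the cell chemistry can be summarized by the textbook half-reactions below (a simplified scheme; the actual melt species are bifluoride complexes rather than bare ions):

2 H+ + 2 e− → H2 (at the steel cathode)
2 F− → F2 + 2 e− (at the carbon anode)
2 HF → H2 + F2 (net)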
Laboratory routes
While preparing for a 1986 conference to celebrate the centennial of Moissan's achievement, Karl O. Christe reasoned that chemical fluorine generation should be feasible since some metal fluoride anions have no stable neutral counterparts; their acidification potentially triggers oxidation instead. He devised a method which evolves fluorine at high yield and atmospheric pressure:
2 KMnO4 + 2 KF + 10 HF + 3 H2O2 → 2 K2MnF6 + 8 H2O + 3 O2↑
2 K2MnF6 + 4 SbF5 → 4 KSbF6 + 2 MnF3 + F2↑
Christe later commented that the reactants "had been known for more than 100 years and even Moissan could have come up with this scheme." As late as 2008, some references still asserted that fluorine was too reactive for any chemical isolation.
Industrial applications
Fluorite mining, which supplies most global fluorine, peaked in 1989 when 5.6 million metric tons of ore were extracted. Chlorofluorocarbon restrictions lowered this to 3.6 million tons in 1994; production has since been increasing. Around 4.5 million tons of ore and revenue of US$550 million were generated in 2003; later reports estimated 2011 global fluorochemical sales at $15 billion and predicted 2016–18 production figures of 3.5 to 5.9 million tons, and revenue of at least $20 billion. Froth flotation separates mined fluorite into two main metallurgical grades of equal proportion: 60–85% pure metspar is almost all used in iron smelting whereas 97%+ pure acidspar is mainly converted to the key industrial intermediate hydrogen fluoride.
At least 17,000 metric tons of fluorine are produced each year. It costs only $5–8 per kilogram when sold as uranium hexafluoride or sulfur hexafluoride, but many times more as an element because of handling challenges. Most processes using free fluorine in large amounts employ in situ generation under vertical integration.
The largest application of fluorine gas, consuming up to 7,000 metric tons annually, is in the preparation of uranium hexafluoride (UF6) for the nuclear fuel cycle. Fluorine is used to fluorinate uranium tetrafluoride, itself formed from uranium dioxide and hydrofluoric acid. Fluorine is monoisotopic, so any mass differences between UF6 molecules are due to the presence of uranium-235 or uranium-238, enabling uranium enrichment via gaseous diffusion or gas centrifuge. About 6,000 metric tons per year go into producing sulfur hexafluoride, the inert dielectric for high-voltage transformers and circuit breakers, eliminating the need for the hazardous polychlorinated biphenyls associated with older devices. Several fluorine compounds are used in electronics: rhenium and tungsten hexafluoride in chemical vapor deposition, tetrafluoromethane in plasma etching and nitrogen trifluoride in cleaning equipment. Fluorine is also used in the synthesis of organic fluorides, but its reactivity often necessitates conversion first to the gentler ClF3, BrF3, or IF5, which together allow calibrated fluorination. Fluorinated pharmaceuticals use sulfur tetrafluoride instead.
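To make the enrichment point concrete, Graham's law of effusion gives the ideal single-stage separation factor for gaseous diffusion directly from the masses of the two UF6 isotopologues. A minimal sketch (standard textbook arithmetic, not a description of any plant):

```python
# Ideal single-stage separation factor for gaseous diffusion of UF6,
# alpha = sqrt(m_heavy / m_light) from Graham's law of effusion.
import math

M_F = 19.0                 # g/mol (fluorine is monoisotopic)
m_235 = 235.04 + 6 * M_F   # 235-UF6, ~349 g/mol
m_238 = 238.05 + 6 * M_F   # 238-UF6, ~352 g/mol

alpha = math.sqrt(m_238 / m_235)
print(f"alpha = {alpha:.4f}")  # ~1.0043, hence thousands of cascade stages
```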
Inorganic fluorides
In steelmaking, around 3 kg of metspar is added to each metric ton of steel; the fluoride ions lower its melting point and viscosity. Alongside its role as an additive in materials like enamels and welding rod coats, most acidspar is reacted with sulfuric acid to form hydrofluoric acid, which is used in steel pickling, glass etching and alkane cracking. One-third of HF goes into synthesizing cryolite and aluminium trifluoride, both fluxes in the Hall–Héroult process for aluminium extraction; replenishment is necessitated by their occasional reactions with the smelting apparatus. Each metric ton of aluminium requires about 23 kg of flux. Fluorosilicates consume the second largest portion, with sodium fluorosilicate used in water fluoridation and laundry effluent treatment, and as an intermediate en route to cryolite and silicon tetrafluoride. Other important inorganic fluorides include those of cobalt, nickel, and ammonium.
Organic fluorides
Organofluorides consume over 20% of mined fluorite and over 40% of hydrofluoric acid, with refrigerant gases dominating and fluoropolymers increasing their market share. Surfactants are a minor application but generate over $1 billion in annual revenue. Because direct hydrocarbon–fluorine reactions are dangerously violent above −150 °C, industrial fluorocarbon production is indirect, mostly through halogen exchange reactions such as Swarts fluorination, in which the chlorines of chlorocarbons are replaced by fluorine using hydrogen fluoride over catalysts. Electrochemical fluorination subjects hydrocarbons to electrolysis in hydrogen fluoride, and the Fowler process treats them with solid fluorine carriers like cobalt trifluoride.
Refrigerant gases
Halogenated refrigerants, termed Freons in informal contexts, are identified by R-numbers that denote the amount of fluorine, chlorine, carbon, and hydrogen present. Chlorofluorocarbons (CFCs) like R-11, R-12, and R-114 once dominated organofluorines, peaking in production in the 1980s. Used for air conditioning systems, propellants and solvents, their production was below one-tenth of this peak by the early 2000s, after widespread international prohibition. Hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs) were designed as replacements; their synthesis consumes more than 90% of the fluorine in the organic industry. Important HCFCs include R-22, chlorodifluoromethane, and R-141b. The main HFC is R-134a, while HFO-1234yf, a hydrofluoroolefin (HFO), is coming to prominence owing to its global warming potential of less than 1% that of HFC-134a.
Polymers
About 180,000 metric tons of fluoropolymers were produced in 2006 and 2007, generating over $3.5 billion revenue per year. The global market was estimated at just under $6 billion in 2011. Fluoropolymers can only be formed by free-radical polymerization.
Polytetrafluoroethylene (PTFE), sometimes called by its DuPont name Teflon, represents 60–80% by mass of the world's fluoropolymer production. The largest application is in electrical insulation since PTFE is an excellent dielectric. It is also used in the chemical industry where corrosion resistance is needed, in coating pipes, tubing, and gaskets. Another major use is in PTFE-coated fiberglass cloth for stadium roofs. The major consumer application is for non-stick cookware. When jerked (stretched rapidly), PTFE film becomes expanded PTFE (ePTFE), a fine-pored membrane sometimes referred to by the brand name Gore-Tex and used for rainwear, protective apparel, and filters; ePTFE fibers may be made into seals and dust filters. Other fluoropolymers, including fluorinated ethylene propylene, mimic PTFE's properties and can substitute for it; they are more moldable, but also more costly and have lower thermal stability. Films from two different fluoropolymers replace glass in solar cells.
The chemically resistant (but expensive) fluorinated ionomers are used as electrochemical cell membranes, of which the first and most prominent example is Nafion. Developed in the 1960s, it was initially deployed as fuel cell material in spacecraft and then replaced mercury-based chloralkali process cells. Recently, the fuel cell application has reemerged with efforts to install proton exchange membrane fuel cells into automobiles. Fluoroelastomers such as Viton are crosslinked fluoropolymer mixtures mainly used in O-rings; perfluorobutane (C4F10) is used as a fire-extinguishing agent.
Surfactants
Fluorosurfactants are small organofluorine molecules used for repelling water and stains. Although expensive (comparable to pharmaceuticals at $200–2000 per kilogram), they yielded over $1 billion in annual revenues by 2006; Scotchgard alone generated over $300 million in 2000. Fluorosurfactants are a minority in the overall surfactant market, most of which is taken up by much cheaper hydrocarbon-based products. Applications in paints are burdened by compounding costs; this use was valued at only $100 million in 2006.
Agrichemicals
About 30% of agrichemicals contain fluorine, most of them herbicides and fungicides with a few crop regulators. Fluorine substitution, usually of a single atom or at most a trifluoromethyl group, is a robust modification with effects analogous to fluorinated pharmaceuticals: increased biological stay time, membrane crossing, and altering of molecular recognition. Trifluralin is a prominent example, with large-scale use in the U.S. as a weedkiller, but it is a suspected carcinogen and has been banned in many European countries. Sodium monofluoroacetate (1080) is a mammalian poison in which one sodium acetate hydrogen is replaced with fluorine; it disrupts cell metabolism by replacing acetate in the citric acid cycle. First synthesized in the late 19th century, it was recognized as an insecticide in the early 20th century, and was later deployed in its current use. New Zealand, the largest consumer of 1080, uses it to protect kiwis from the invasive Australian common brushtail possum. Europe and the U.S. have banned 1080.
Medicinal applications
Dental care
Population studies from the mid-20th century onwards show topical fluoride reduces dental caries. This was first attributed to the conversion of tooth enamel hydroxyapatite into the more durable fluorapatite, but studies on pre-fluoridated teeth refuted this hypothesis, and current theories involve fluoride aiding enamel growth in small caries. After studies of children in areas where fluoride was naturally present in drinking water, controlled public water supply fluoridation to fight tooth decay began in the 1940s and is now applied to water supplying 6 percent of the global population, including two-thirds of Americans. Reviews of the scholarly literature in 2000 and 2007 associated water fluoridation with a significant reduction of tooth decay in children. Despite such endorsements and evidence of no adverse effects other than mostly benign dental fluorosis, opposition still exists on ethical and safety grounds. The benefits of fluoridation have lessened, possibly due to other fluoride sources, but are still measurable in low-income groups. Sodium monofluorophosphate and sometimes sodium or tin(II) fluoride are often found in fluoride toothpastes, first introduced in the U.S. in 1955 and now ubiquitous in developed countries, alongside fluoridated mouthwashes, gels, foams, and varnishes.
Pharmaceuticals
Twenty percent of modern pharmaceuticals contain fluorine. One of these, the cholesterol-reducer atorvastatin (Lipitor), made more revenue than any other drug until it became generic in 2011. The combination asthma prescription Seretide, a top-ten revenue drug in the mid-2000s, contains two active ingredients, one of which – fluticasone – is fluorinated. Many drugs are fluorinated to delay inactivation and lengthen dosage periods because the carbon–fluorine bond is very stable. Fluorination also increases lipophilicity because the bond is more hydrophobic than the carbon–hydrogen bond, and this often helps in cell membrane penetration and hence bioavailability.
Tricyclics and other pre-1980s antidepressants had several side effects due to their non-selective interference with neurotransmitters other than the serotonin target; the fluorinated fluoxetine was selective and one of the first to avoid this problem. Many current antidepressants receive this same treatment, including the selective serotonin reuptake inhibitors: citalopram, its enantiomer escitalopram, and fluvoxamine and paroxetine. Quinolones are artificial broad-spectrum antibiotics that are often fluorinated to enhance their effects. These include ciprofloxacin and levofloxacin. Fluorine also finds use in steroids: fludrocortisone is a blood pressure-raising mineralocorticoid, and triamcinolone and dexamethasone are strong glucocorticoids. The majority of inhaled anesthetics are heavily fluorinated; the prototype halothane is much more inert and potent than its contemporaries. Later compounds such as the fluorinated ethers sevoflurane and desflurane are better than halothane and are almost insoluble in blood, allowing faster waking times.
PET scanning
Fluorine-18 is often found in radioactive tracers for positron emission tomography, as its half-life of almost two hours is long enough to allow for its transport from production facilities to imaging centers. The most common tracer is fluorodeoxyglucose which, after intravenous injection, is taken up by glucose-requiring tissues such as the brain and most malignant tumors; computer-assisted tomography can then be used for detailed imaging.
Oxygen carriers
Liquid fluorocarbons can hold large volumes of oxygen or carbon dioxide, more so than blood, and have attracted attention for their possible uses in artificial blood and in liquid breathing. Because fluorocarbons do not normally mix with water, they must be mixed into emulsions (small droplets of perfluorocarbon suspended in water) to be used as blood. One such product, Oxycyte, has been through initial clinical trials. These substances can aid endurance athletes and are banned from sports; one cyclist's near death in 1998 prompted an investigation into their abuse. Applications of liquid breathing (which uses pure perfluorocarbon liquid rather than a water emulsion) include assisting burn victims and premature babies with deficient lungs. Partial and complete lung filling have been considered, though only the former has had any significant tests in humans. An Alliance Pharmaceuticals effort reached clinical trials but was abandoned because the results were not better than normal therapies.
Biological role
Fluorine is not essential for humans and other mammals, but small amounts are known to be beneficial for the strengthening of dental enamel (where the formation of fluorapatite makes the enamel more resistant to attack from acids produced by bacterial fermentation of sugars). Small amounts of fluorine may be beneficial for bone strength, but this has not been definitively established. Both the WHO and the Institute of Medicine of the US National Academies publish recommended daily allowances (RDA) and upper tolerated intakes for fluorine, which vary with age and gender.
Natural organofluorines have been found in microorganisms, plants and, recently, animals. The most common is fluoroacetate, which is used as a defense against herbivores by at least 40 plants in Africa, Australia and Brazil. Other examples include terminally fluorinated fatty acids, fluoroacetone, and 2-fluorocitrate. An enzyme that binds fluorine to carbon – adenosyl-fluoride synthase – was discovered in bacteria in 2002.
Toxicity
Elemental fluorine is highly toxic to living organisms. Its effects in humans start at concentrations lower than hydrogen cyanide's 50 ppm and are similar to those of chlorine: significant irritation of the eyes and respiratory system as well as liver and kidney damage occur above 25 ppm, which is the immediately dangerous to life and health value for fluorine. The eyes and nose are seriously damaged at 100 ppm, and inhalation of 1,000 ppm fluorine will cause death in minutes, compared to 270 ppm for hydrogen cyanide.
Hydrofluoric acid
Hydrofluoric acid is the weakest of the hydrohalic acids, having a pKa of 3.2 at 25 °C. Pure hydrogen fluoride is a volatile liquid due to the presence of hydrogen bonding, while the other hydrogen halides are gases. It is able to attack glass, concrete, metals, and organic matter.
Hydrofluoric acid is a contact poison with greater hazards than many strong acids like sulfuric acid even though it is weak: because it remains largely un-ionized (neutral) in aqueous solution, it penetrates tissue faster, whether through inhalation, ingestion or the skin, and at least nine U.S. workers died in such accidents from 1984 to 1994. It reacts with calcium and magnesium in the blood, leading to hypocalcemia and possible death through cardiac arrhythmia. Insoluble calcium fluoride formation triggers strong pain, and burns larger than 160 cm2 (25 in2) can cause serious systemic toxicity.
Exposure may not be evident for eight hours for 50% HF, rising to 24 hours for lower concentrations, and a burn may initially be painless as hydrogen fluoride affects nerve function. If skin has been exposed to HF, damage can be reduced by rinsing it under a jet of water for 10–15 minutes and removing contaminated clothing. Calcium gluconate is often applied next, providing calcium ions to bind with fluoride; skin burns can be treated with 2.5% calcium gluconate gel or special rinsing solutions. Hydrofluoric acid absorption requires further medical treatment; calcium gluconate may be injected or administered intravenously. Using calcium chloride – a common laboratory reagent – in lieu of calcium gluconate is contraindicated, and may lead to severe complications. Excision or amputation of affected parts may be required.
Fluoride ion
Soluble fluorides are moderately toxic: 5–10 g sodium fluoride, or 32–64 mg fluoride ions per kilogram of body mass, represents a lethal dose for adults. One-fifth of the lethal dose can cause adverse health effects, and chronic excess consumption may lead to skeletal fluorosis, which affects millions in Asia and Africa, and, in children, to reduced intelligence. Ingested fluoride forms hydrofluoric acid in the stomach which is easily absorbed by the intestines, where it crosses cell membranes, binds with calcium and interferes with various enzymes, before urinary excretion. Exposure limits are determined by urine testing of the body's ability to clear fluoride ions.
Historically, most cases of fluoride poisoning have been caused by accidental ingestion of insecticides containing inorganic fluorides. Most current calls to poison control centers for possible fluoride poisoning come from the ingestion of fluoride-containing toothpaste. Malfunctioning water fluoridation equipment is another cause: one incident in Alaska affected almost 300 people and killed one person. Dangers from toothpaste are aggravated for small children, and the Centers for Disease Control and Prevention recommends supervising children below six brushing their teeth so that they do not swallow toothpaste. One regional study examined a year of pre-teen fluoride poisoning reports totaling 87 cases, including one death from ingesting insecticide. Most had no symptoms, but about 30% had stomach pains. A larger study across the U.S. had similar findings: 80% of cases involved children under six, and there were few serious cases.
Environmental concerns
Atmosphere
The Montreal Protocol, signed in 1987, set strict regulations on chlorofluorocarbons (CFCs) and bromofluorocarbons due to their ozone depletion potential (ODP). The high stability which suited them to their original applications also meant that they were not decomposing until they reached higher altitudes, where liberated chlorine and bromine atoms attacked ozone molecules. Even with the ban, and early indications of its efficacy, predictions warned that several generations would pass before full recovery. With one-tenth the ODP of CFCs, hydrochlorofluorocarbons (HCFCs) are the current replacements, and are themselves scheduled for substitution by 2030–2040 by hydrofluorocarbons (HFCs) with no chlorine and zero ODP. In 2007 this date was brought forward to 2020 for developed countries; the Environmental Protection Agency had already prohibited one HCFC's production and capped those of two others in 2003. Fluorocarbon gases are generally greenhouse gases with global-warming potentials (GWPs) of about 100 to 10,000; sulfur hexafluoride has a value of around 20,000. An outlier is HFO-1234yf, a new type of refrigerant called a hydrofluoroolefin (HFO), which has attracted global demand due to its GWP of less than 1, compared to 1,430 for the current refrigerant standard HFC-134a.
Biopersistence
Organofluorines exhibit biopersistence due to the strength of the carbon–fluorine bond. Perfluoroalkyl acids (PFAAs), which are sparingly water-soluble owing to their acidic functional groups, are noted persistent organic pollutants; perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) are most often researched. PFAAs have been found in trace quantities worldwide from polar bears to humans, with PFOS and PFOA known to reside in breast milk and the blood of newborn babies. A 2013 review showed a slight correlation between groundwater and soil PFAA levels and human activity; there was no clear pattern of one chemical dominating, and higher amounts of PFOS were correlated to higher amounts of PFOA. In the body, PFAAs bind to proteins such as serum albumin; they tend to concentrate within humans in the liver and blood before excretion through the kidneys. Dwell time in the body varies greatly by species, with half-lives of days in rodents, and years in humans. High doses of PFOS and PFOA cause cancer and death in newborn rodents but human studies have not established an effect at current exposure levels.
See also
Argon fluoride laser
Electrophilic fluorination
Fluoride selective electrode, which measures fluoride concentration
Fluorine absorption dating
Fluorous chemistry, a process used to separate reagents from organic solvents
Krypton fluoride laser
Radical fluorination
Notes
Sources
Citations
Indexed references
External links
Chemical elements
Halogens
Reactive nonmetals
Diatomic nonmetals
Fluorinating agents
Oxidizing agents
Industrial gases
Gases with color | Fluorine | [
"Physics",
"Chemistry",
"Materials_science"
] | 9,864 | [
"Chemical elements",
"Redox",
"Reactive nonmetals",
"Diatomic nonmetals",
"Nonmetals",
"Oxidizing agents",
"Fluorinating agents",
"Industrial gases",
"Reagents for organic chemistry",
"Chemical process engineering",
"Atoms",
"Matter"
] |
17,482,765 | https://en.wikipedia.org/wiki/Barreira%20do%20Inferno%20Launch%20Center | The Barreira do Inferno Launch Center (, ) is a rocket launch base of the Brazilian Space Agency. It was created in 1965, and is located near Ponta Negra beach, near Natal, the capital of the state of Rio Grande do Norte. It has been used for 233 launches from 1965 to 2007, reaching up to 1100 kilometers in altitude.
It provides tracking support for launches from the Alcântara Launch Center and Guiana Space Centre.
Launches
The following rockets have been launched from CLBI:
Loki-Dart
Nike-Cajun
Orion
Nike-Apache
Aerobee 150
Javelin
Nike-Tomahawk
Black Brant 4A
Nike-Iroquois
Boosted Dart
Super Arcas
Rocketsonde
Black Brant 5C
Black Brant 4B
Paiute Tomahawk
Castor Lance
Black Brant 8B
Sonda 3
Skylark 12
Cuckoo 4
Nike Orion
Sonda 4
VLS-R1
VS-30
Projected
Operação São Lourenço - VS-40/SARA Suborbital I
Gallery
References
External links
Official site
Spaceports
Rocket launch sites in Brazil
Space program of Brazil | Barreira do Inferno Launch Center | [
"Astronomy"
] | 213 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
17,482,772 | https://en.wikipedia.org/wiki/U.S.%20Bank%20Building%20%28Chicago%29 |
U.S. Bank Building, formerly 190 South LaSalle Street, is a skyscraper in Chicago, Illinois.
History
It was completed in 1987 and has 40 floors. Johnson/Burgee Architects designed the building, which is the 57th tallest building in Chicago.
From 1988 to 2016, the lobby of the building featured a tapestry by Helena Hernmarck titled "The 1909 Plan of Chicago", depicting the Civic Center Plaza proposed in the Burnham Plan of Chicago. The tapestry is now in the collection of the Art Institute of Chicago.
In May 2013, U.S. Bank announced it had agreed to increase its leased space in the structure. The terms of the lease also gave the bank naming rights for the building through 2026.
Gallery
See also
List of tallest buildings in Chicago
References
Notes
External links
Skyscraper office buildings in Chicago
Office buildings completed in 1987
Philip Johnson buildings
Leadership in Energy and Environmental Design certified buildings
1987 establishments in Illinois | U.S. Bank Building (Chicago) | [
"Engineering"
] | 187 | [
"Building engineering",
"Leadership in Energy and Environmental Design certified buildings"
] |
17,482,912 | https://en.wikipedia.org/wiki/Relations%20between%20heat%20capacities | In thermodynamics, the heat capacity at constant volume, , and the heat capacity at constant pressure, , are extensive properties that have the magnitude of energy divided by temperature.
Relations
The laws of thermodynamics imply the following relations between these two heat capacities (Gaskell 2003:23):

$C_P - C_V = \frac{V T \alpha^2}{\beta_T}$

$\frac{C_P}{C_V} = \frac{\beta_T}{\beta_S}$

Here $\alpha$ is the thermal expansion coefficient:

$\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P$

$\beta_T$ is the isothermal compressibility (the inverse of the bulk modulus):

$\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T$

and $\beta_S$ is the isentropic compressibility:

$\beta_S = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_S$

A corresponding expression for the difference in specific heat capacities (intensive properties) at constant volume and constant pressure is:

$c_p - c_v = \frac{T \alpha^2}{\rho \beta_T}$

where ρ is the density of the substance under the applicable conditions.

The corresponding expression for the ratio of specific heat capacities remains the same, since the system size-dependent quantities, whether on a per-mass or per-mole basis, cancel out in the ratio because specific heat capacities are intensive properties. Thus:

$\gamma = \frac{c_p}{c_v} = \frac{C_P}{C_V} = \frac{\beta_T}{\beta_S}$
The difference relation allows one to obtain the heat capacity for solids at constant volume which is not readily measured in terms of quantities that are more easily measured. The ratio relation allows one to express the isentropic compressibility in terms of the heat capacity ratio.
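As a numerical illustration of the difference relation, the sketch below evaluates it for liquid water near 20 °C; the property values are approximate handbook figures I am assuming, good to a few percent only:

```python
# c_p - c_v = T * alpha**2 / (rho * beta_T), evaluated for liquid water
# near 20 C with approximate handbook values (assumed inputs).
T = 293.0          # K
alpha = 2.1e-4     # 1/K, volumetric thermal expansion coefficient (approx.)
beta_T = 4.6e-10   # 1/Pa, isothermal compressibility (approx.)
rho = 998.0        # kg/m^3, density

diff = T * alpha**2 / (rho * beta_T)
print(f"c_p - c_v ~ {diff:.0f} J/(kg K)")  # ~28, under 1% of water's c_p
```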
Derivation
If an infinitesimally small amount of heat $\delta Q$ is supplied to a system in a reversible way, then, according to the second law of thermodynamics, the entropy change of the system is given by:

$dS = \frac{\delta Q}{T}$

Since

$\delta Q = C \, dT,$

where C is the heat capacity, it follows that:

$T \, dS = C \, dT$

The heat capacity depends on how the external variables of the system are changed when the heat is supplied. If the only external variable of the system is the volume, then we can write:

$dS = \left(\frac{\partial S}{\partial T}\right)_V dT + \left(\frac{\partial S}{\partial V}\right)_T dV$

From this follows:

$C_V = T \left(\frac{\partial S}{\partial T}\right)_V$

Expressing dS in terms of dT and dP similarly as above leads to the expression:

$C_P = T \left(\frac{\partial S}{\partial T}\right)_P$

One can find the above expression for $C_P - C_V$ by expressing dV in terms of dP and dT in the above expression for dS. Substituting

$dV = \left(\frac{\partial V}{\partial T}\right)_P dT + \left(\frac{\partial V}{\partial P}\right)_T dP$

results in

$dS = \left[\left(\frac{\partial S}{\partial T}\right)_V + \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P\right] dT + \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial P}\right)_T dP$

and it follows:

$C_P = T \left(\frac{\partial S}{\partial T}\right)_P = C_V + T \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P$

Therefore,

$C_P - C_V = T \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P$

The partial derivative $\left(\frac{\partial S}{\partial V}\right)_T$ can be rewritten in terms of variables that do not involve the entropy using a suitable Maxwell relation. These relations follow from the fundamental thermodynamic relation:

$dE = T \, dS - P \, dV$

It follows from this that the differential of the Helmholtz free energy $F = E - TS$ is:

$dF = -S \, dT - P \, dV$

This means that

$-S = \left(\frac{\partial F}{\partial T}\right)_V$

and

$-P = \left(\frac{\partial F}{\partial V}\right)_T$

The symmetry of second derivatives of F with respect to T and V then implies

$\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V,$

allowing one to write:

$C_P - C_V = T \left(\frac{\partial P}{\partial T}\right)_V \left(\frac{\partial V}{\partial T}\right)_P$

The r.h.s. contains a derivative at constant volume, which can be difficult to measure. It can be rewritten as follows. In general,

$dV = \left(\frac{\partial V}{\partial P}\right)_T dP + \left(\frac{\partial V}{\partial T}\right)_P dT$

Since the partial derivative $\left(\frac{\partial P}{\partial T}\right)_V$ is just the ratio of dP and dT for dV = 0, one can obtain this by putting dV = 0 in the above equation and solving for this ratio:

$\left(\frac{\partial P}{\partial T}\right)_V = -\frac{\left(\frac{\partial V}{\partial T}\right)_P}{\left(\frac{\partial V}{\partial P}\right)_T} = \frac{\alpha}{\beta_T}$

which yields the expression:

$C_P - C_V = \frac{V T \alpha^2}{\beta_T}$

The expression for the ratio of the heat capacities can be obtained as follows:

$\frac{C_P}{C_V} = \frac{\left(\frac{\partial S}{\partial T}\right)_P}{\left(\frac{\partial S}{\partial T}\right)_V}$

The partial derivative in the numerator can be expressed as a ratio of partial derivatives of the pressure w.r.t. temperature and entropy. If in the relation

$dS = \left(\frac{\partial S}{\partial T}\right)_P dT + \left(\frac{\partial S}{\partial P}\right)_T dP$

we put $dS = 0$ and solve for the ratio $\frac{dP}{dT}$, we obtain $\left(\frac{\partial P}{\partial T}\right)_S$. Doing so gives:

$\left(\frac{\partial S}{\partial T}\right)_P = -\left(\frac{\partial S}{\partial P}\right)_T \left(\frac{\partial P}{\partial T}\right)_S$

One can similarly rewrite the partial derivative $\left(\frac{\partial S}{\partial T}\right)_V$ by expressing dV in terms of dS and dT, putting dV equal to zero and solving for the ratio, which gives $\left(\frac{\partial S}{\partial T}\right)_V = -\left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_S$. When one substitutes that expression in the heat capacity ratio expressed as the ratio of the partial derivatives of the entropy above, it follows:

$\frac{C_P}{C_V} = \frac{\left(\frac{\partial S}{\partial P}\right)_T \left(\frac{\partial P}{\partial T}\right)_S}{\left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_S} = \frac{\left(\frac{\partial P}{\partial T}\right)_S}{\left(\frac{\partial V}{\partial T}\right)_S} \cdot \frac{\left(\frac{\partial S}{\partial P}\right)_T}{\left(\frac{\partial S}{\partial V}\right)_T}$

Taking together the two derivatives at constant S:

$\frac{\left(\frac{\partial P}{\partial T}\right)_S}{\left(\frac{\partial V}{\partial T}\right)_S} = \left(\frac{\partial P}{\partial V}\right)_S$

Taking together the two derivatives at constant T:

$\frac{\left(\frac{\partial S}{\partial P}\right)_T}{\left(\frac{\partial S}{\partial V}\right)_T} = \left(\frac{\partial V}{\partial P}\right)_T$

From this one can write:

$\frac{C_P}{C_V} = \left(\frac{\partial P}{\partial V}\right)_S \left(\frac{\partial V}{\partial P}\right)_T = \frac{\beta_T}{\beta_S}$
Ideal gas
This is a derivation to obtain an expression for $C_P - C_V$ for an ideal gas.

An ideal gas has the equation of state:

$P V = n R T$
where
P = pressure
V = volume
n = number of moles
R = universal gas constant
T = temperature
The ideal gas equation of state can be arranged to give:

$V = \frac{n R T}{P}$ or $P = \frac{n R T}{V}$

The following partial derivatives are obtained from the above equation of state:

$\left(\frac{\partial V}{\partial T}\right)_P = \frac{n R}{P} = \frac{V}{T}$

$\left(\frac{\partial V}{\partial P}\right)_T = -\frac{n R T}{P^2} = -\frac{V}{P}$
The following simple expressions are obtained for the thermal expansion coefficient $\alpha$:

$\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P = \frac{1}{T}$

and for the isothermal compressibility $\beta_T$:

$\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T = \frac{1}{P}$
One can now calculate $C_P - C_V$ for ideal gases from the previously obtained general formula:

$C_P - C_V = \frac{V T \alpha^2}{\beta_T} = V T \cdot \frac{1}{T^2} \cdot P = \frac{P V}{T}$

Substituting from the ideal gas equation gives finally:

$C_P - C_V = n R$

where n = number of moles of gas in the thermodynamic system under consideration and R = universal gas constant. On a per-mole basis, the expression for the difference in molar heat capacities becomes simply R for ideal gases:

$C_{P,m} - C_{V,m} = \frac{C_P - C_V}{n} = R$

The same result is obtained if the specific difference is derived directly from the general expression for $c_p - c_v$.
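A quick numerical cross-check of this result is straightforward to script. The sketch below (the test-point values for n, T, and P are arbitrary assumptions) recovers α and βT from the equation of state by finite differences and confirms that the general formula reduces to nR:

```python
# Verify C_P - C_V = n*R for an ideal gas using finite differences.
R = 8.314                      # J/(mol K), universal gas constant
n, T, P = 2.0, 300.0, 1.0e5    # mol, K, Pa (arbitrary test point)

def V(T, P):
    """Ideal-gas volume in m^3."""
    return n * R * T / P

dT, dP = 1e-3, 1.0  # finite-difference step sizes
alpha = (V(T + dT, P) - V(T - dT, P)) / (2 * dT) / V(T, P)    # (1/V)(dV/dT)_P
beta_T = -(V(T, P + dP) - V(T, P - dP)) / (2 * dP) / V(T, P)  # -(1/V)(dV/dP)_T

diff = V(T, P) * T * alpha**2 / beta_T  # general formula for C_P - C_V
print(diff, n * R)  # both ~16.628 J/K
```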
See also
Heat capacity ratio
References
David R. Gaskell (2008), Introduction to the thermodynamics of materials, Fifth Edition, Taylor & Francis. .
Thermodynamics | Relations between heat capacities | [
"Physics",
"Chemistry",
"Mathematics"
] | 906 | [
"Thermodynamics",
"Dynamical systems"
] |
17,483,997 | https://en.wikipedia.org/wiki/Overhang%20%28vehicles%29 | Overhangs are the lengths of a road vehicle which extend beyond the wheelbase at the front and rear. They are normally described as front overhang and rear overhang. Practicality, style, and performance are affected by the size and weight of overhangs.
Characterization
Along with ground clearance, the length of the overhangs affects the approach and departure angles, which measure the vehicle's ability to overcome steep obstacles and rough terrain. The longer the front overhang, the smaller the approach angle, and thus the less able the car is to climb or descend steep ramps without damaging the front bumper. Typically, the rear overhang is larger on rear-wheel drive cars, while the front overhang is larger on front-wheel drive cars.
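In the simplest geometry, the approach angle follows from the ratio of bumper ground clearance to front overhang. The sketch below (a deliberate simplification I am assuming, ignoring tire contact-patch position and bumper shape, not an SAE definition) shows the inverse relationship described above:

```python
# Simplified approach angle: atan(clearance / front_overhang).
import math

def approach_angle_deg(clearance_m, front_overhang_m):
    """Approximate approach angle in degrees for a flat-bottomed bumper."""
    return math.degrees(math.atan2(clearance_m, front_overhang_m))

print(approach_angle_deg(0.20, 0.80))  # long overhang -> ~14 degrees
print(approach_angle_deg(0.20, 0.40))  # short overhang -> ~27 degrees
```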
Overhangs in the case of rolling stock are the lengths from the bogie pivots to the ends of the car, or in the case of two axles the distances outside of the wheelbase to the ends of the car.
Journalist Paul Niedermeyer has proposed an overhang ratio (OHR) to characterize the combined size of the front and rear overhangs, normalized to vehicle length: the sum of the front and rear overhangs (i.e., overall length minus wheelbase) divided by the overall length. Because most vehicles are styled so the wheelbase is typically equal to four wheel+tyre diameters, the minimum OHR (with no bodywork projecting beyond the front or rear wheels) is approximately 20%.
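A short sketch of that ratio and its floor (the dimensions are illustrative assumptions, not measured vehicles):

```python
# Overhang ratio: (overall length - wheelbase) / overall length.
def overhang_ratio(length_m, wheelbase_m):
    return (length_m - wheelbase_m) / length_m

# Minimum case from the text: wheelbase = 4 tyre diameters and the body
# ends flush with the wheels, so overall length = wheelbase + 1 diameter.
d = 0.65  # m, assumed wheel+tyre diameter
print(overhang_ratio(5 * d, 4 * d))  # 0.20, the ~20% floor
print(overhang_ratio(4.80, 2.70))    # ~0.44 for a typical sedan (assumed)
```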
Advantages
Large overhangs contribute to large vehicle dimensions, and the associated advantages of size. On front-engined saloon/sedans, measuring rear overhang is helpful in predicting the size of the trunk. For these same vehicles, large front overhangs can accommodate larger engines. Large overhangs also contribute to safety due to increased bulk, as well as space for crumple zones that provide defense for passengers in frontal and rear collisions.
The Porsche 911, produced since 1964, has always contained its entire flat-6 engine within its rear overhang, with the center of mass of the engine outside of the wheelbase. In the case of the 911, the rear-mounted engine allows for increased practicality in the form of a small rear row of seats that would be impossible with a mid-engined sports car.
Disadvantages
Excessive weight that is concentrated outside of the wheelbase can interfere with accurate negotiation of corners at high speed. The rear-engined Porsche 911, with its engine far in the rear, was notorious for dangerous oversteer in its early days, and cars with engines far in the front often suffered from the opposite problem of understeer, for which many old American cars with heavy V8 engines were infamous. Front-engined Ferraris, such as the Ferrari 612 Scaglietti place their engines within the wheelbase, so as to avoid the problem of understeer. Reducing overhanging weight in sports cars is usually a priority, with the notable exception of the 911.
In contrast, the excellent handling of the Mini, with its wheels pushed far out at each corner, can be partly credited to its small overhang. The classic Mini and New MINI are both automobiles with very little overhang and thus handle very well under extreme conditions. The minimal overhang gives the Mini its bulldog-like stance.
Rear overhang may present a problem in large vehicles such as buses. Long rear overhang would require the driver to pay attention to nearby vehicles when turning at 90 degrees. Since the rear overhang is outside the wheelbase, it may hit a vehicle in the adjacent lane, especially when turning 90 degrees right (in a right-hand drive country).
Also, some specialized vehicles (such as the AM General HMMWV and the related Hummer H1) are designed with no front overhang, allowing them to climb very steep, near-vertical obstacles. This does, however, place these vehicles' front wheels as the furthest forward point of the vehicle, which can lead to disastrous results in the event of a frontal collision.
See also
Approach and departure angles
Cantilever
Idler flatcar
Minimum railway curve radius
Ride height
Turning diameter
Wheelbase
References
External links
Ideal handling characteristics
Vehicle design | Overhang (vehicles) | [
"Engineering"
] | 822 | [
"Vehicle design",
"Design"
] |
17,484,717 | https://en.wikipedia.org/wiki/Ultrasonic%20nozzle | Ultrasonic nozzles are a type of spray nozzle that use high frequency vibrations produced by piezoelectric transducers acting upon the nozzle tip that create capillary waves in a liquid film. Once the amplitude of the capillary waves reaches a critical height (due to the power level supplied by the generator), they become too tall to support themselves and tiny droplets fall off the tip of each wave resulting in atomization.
The primary factors influencing the initial droplet size produced are frequency of vibration, surface tension, and viscosity of the liquid. Frequencies are commonly in the range of 20–180 kHz, beyond the range of human hearing, where the highest frequencies produce the smallest drop size.
History
Ultrasonic atomization traces back to Lord Rayleigh's 19th-century analysis of capillary waves. In 1962, Dr. Robert Lang followed up on this work, essentially proving a correlation between atomized droplet size and Rayleigh's capillary wavelength. Ultrasonic nozzles were first commercialized by Dr. Harvey L. Berger.
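Lang's correlation is simple enough to evaluate directly. The sketch below uses a commonly cited form, d = 0.34·(8πσ/ρf²)^(1/3); the water property values and the 120 kHz drive frequency are my assumed inputs, not from the source:

```python
# Commonly cited Lang correlation: median droplet diameter is ~0.34 times
# the Kelvin capillary wavelength on the vibrating liquid film.
import math

def lang_median_droplet_diameter(sigma, rho, freq_hz):
    """Median droplet diameter (m): 0.34 * (8*pi*sigma / (rho*f^2))**(1/3)."""
    capillary_wavelength = (8 * math.pi * sigma / (rho * freq_hz**2)) ** (1 / 3)
    return 0.34 * capillary_wavelength

# Water (surface tension ~0.0728 N/m, density ~1000 kg/m^3) at 120 kHz:
d = lang_median_droplet_diameter(0.0728, 1000.0, 120e3)
print(f"{d * 1e6:.1f} um")  # ~17 um; raising the frequency shrinks the drops
```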
Applications
Subsequent uses of the technology include coating blood collection tubes, spraying flux onto printed circuit boards, coating implantable drug eluting stents and balloon/catheters, float glass manufacturing coatings, anti-microbial coatings onto food, precision semiconductor coatings and alternative energy coatings for solar cell and fuel cell manufacturing, among others.
Drug eluting stents and drug-coated balloons
Pharmaceuticals such as Sirolimus (also called rapamycin) and paclitaxel are coated on the surface of drug eluting stents (DES) and drug-coated balloons (DCB). These devices benefit greatly from ultrasonic spray nozzles for their ability to apply coatings with little to no loss. Medical devices such as DES and DCB require very narrow spray patterns, a low-velocity atomized spray and low-pressure air because of their small size.
Fuel cells
Research has shown that ultrasonic nozzles can be effectively used to manufacture proton exchange membrane fuel cells. The inks typically used are a platinum-carbon suspension, where the platinum acts as a catalyst inside the cell. Traditional methods to apply the catalyst to the proton exchange membrane typically involve screen printing or doctor blades. However, these methods can degrade cell performance: the catalyst tends to form agglomerations, causing non-uniform gas flow in the cell and preventing the catalyst from being fully exposed, and the solvent or carrier liquid may be absorbed into the membrane, both of which impede proton exchange efficiency. When ultrasonic nozzles are used, the spray can be made to be as dry as necessary by the nature of the small and uniform droplet size, by varying the distance the droplets travel and by applying low heat to the substrate such that the droplets dry in the air before reaching the substrate. Process engineers have finer control over these types of variables as opposed to other technologies. Additionally, because the ultrasonic nozzle imparts energy to the suspension just prior to and during atomization, possible agglomerates in the suspension are broken up, resulting in homogeneous distribution of the catalyst, which improves the efficiency of the catalyst and, in turn, the fuel cell.
Transparent conductive films
Ultrasonic spray nozzle technology has been used to create films of indium tin oxide (ITO) in the formation of transparent conductive films (TCF). ITO has excellent transparency and low sheet resistance, however it is a scarce material and prone to cracking, which does not make it a good candidate for the new flexible TCFs. Graphene on the other hand can be made into a flexible film, extremely conductive and has high transparency. Ag nanowires (AgNWs) when combined with Graphene have been reported to be a promising superior TCF alternative to ITO. Prior studies focus on spin and bar coating methods which are not suitable for large area TCFs. A multi-step process utilizing ultrasonic spray of graphene oxide and conventional spray of AgNWs followed by a hydrazine vapor reduction, followed by the application of polymethylmethacrylate (PMMA) topcoat resulted in a peelable TCF that can be scaled to a large size.
Carbon nanotubes
CNT thin films are used as alternative materials to create transparent conducting films (TCO layers) for touch panel displays or other glass substrates, as well as organic solar cell active layers.
Photoresist spray onto MEMs wafers
Microelectromechanical systems (MEMs) are small microfabricated devices that combine electrical and mechanical components. Devices vary in size from below one micron to millimeters in size, functioning individually or in arrays to sense, control, and activate mechanical processes on the micro scale. Examples include pressure sensors, accelerometers, and microengines. Fabrication of MEMs involves depositing a uniform layer of photoresist onto the Si wafer. Photoresist has traditionally been applied to wafers in IC manufacturing using a spin coating technique. In complex MEMs devices that have etched areas with high aspect ratios, it can be difficult to achieve uniform coverage along the top, side walls, and bottoms of deep grooves and trenches using spin coating techniques due to the high rate of spin needed to remove excess liquid. Ultrasonic spray techniques are used to spray uniform coatings of photoresist onto high aspect ratio MEMs devices and can minimize usage and overspray of photoresist.
Printed circuit boards
The non-clogging nature of ultrasonic nozzles, the small and uniform droplet size they create, and the fact that the spray plume can be shaped by tightly controlled air shaping devices make the technology quite successful in wave soldering processes. The viscosity of nearly all fluxes on the market fits well within the capabilities of the technology. In soldering, "no-clean" flux is highly preferred, but if excessive quantities are applied the process will leave corrosive residues on the bottom of the circuit assembly.
Solar cells
Photovoltaic and dye-sensitized solar technology both need the application of liquids and coatings during the manufacturing process. With most of these substances being very expensive, any losses due to over-spray or quality control are minimized with the use of ultrasonic nozzles. In efforts to reduce the manufacturing costs of solar cells, traditionally produced using the batch-based phosphoryl chloride (POCl3) method, it has been shown that using ultrasonic nozzles to lay a thin aqueous-based dopant film onto silicon wafers can serve as an effective diffusion process to create N-type layers with uniform surface resistance.
Ultrasonic spray pyrolysis
Ultrasonic spray pyrolysis is a chemical vapor deposition (CVD) method utilized in the formation of a variety of materials in thin film or nanoparticle form. Precursor materials are often fabricated through sol-gel methods and examples include the formation of aqueous silver nitrate, synthesis of zirconia particles, and fabrication of solid oxide fuel cell SOFC cathodes.
An atomized spray produced from an ultrasonic nozzle is directed at a heated substrate, typically at 300–400 °C. Because of the high temperatures of the spray chamber, high-temperature ultrasonic nozzles use extensions such as a removable tip that can withstand the heat while protecting the body of the nozzle, which contains temperature-sensitive piezoelectric elements, typically by keeping the body outside the spray chamber or isolating it by other means.
References
Berger, Harvey L. Ultrasonic Liquid Atomization: Theory and Application. 2nd ed. Hyde Park: Partrige Hill, 2006. 1-177.
Lefebvre, Arthur, Atomization and Sprays, Hemisphere, 1989,
External links
Further explanation of how an ultrasonic nozzle works
Fluid mechanics
Tools
Articles containing video clips | Ultrasonic nozzle | [
"Engineering"
] | 1,628 | [
"Civil engineering",
"Fluid mechanics"
] |
17,484,978 | https://en.wikipedia.org/wiki/Cold%20shock%20response | Cold shock response is a series of neurogenic cardio-respiratory responses caused by sudden immersion in cold water.
In cold water immersions, such as by falling through thin ice, cold shock response is perhaps the most common cause of death. Also, the abrupt contact with very cold water may cause involuntary inhalation, which, if underwater, can result in fatal drowning.
Death which occurs in such scenarios is complex to investigate, and there are several possible causes and phenomena that can take part. The cold water can cause a heart attack due to severe vasoconstriction, where the heart has to work harder to pump the same volume of blood throughout the arteries. For people with pre-existing cardiovascular disease, the additional workload can result in myocardial infarction and/or acute heart failure, which ultimately may lead to a cardiac arrest. A vagal response to such an extreme stimulus may, in very rare cases, itself cause cardiac arrest. Hypothermia and extreme stress can both precipitate fatal tachyarrhythmias. A more modern view suggests that an autonomic conflict, in which the sympathetic system (activated by stress) and the parasympathetic system (activated by the diving reflex) fire simultaneously, may be responsible for some cold water immersion deaths. The gasp reflex and uncontrollable tachypnea can severely increase the risk of water inhalation and drowning.
Some people are much better prepared to survive sudden exposure to very cold water due to body and mental characteristics and due to conditioning. In fact, cold water swimming (also known as ice swimming or winter swimming) is a sport and an activity that reportedly can lead to several health benefits when done regularly.
Physiological response
Cold water immersion syndromefour-stage model
The physiological response to a sudden immersion in cold water may be divided into three or four discrete stages, with different risks and physiological changes, all being part of an entity labelled Cold Water Immersion Syndrome. Although this process is a continuum, the four phases, first described in the 1980s, are usually given as: (1) the initial cold shock response; (2) short-term cold incapacitation, with loss of effective swimming ability; (3) hypothermia from long-term immersion; and (4) circum-rescue collapse.
The first stage of cold water immersion syndrome, the cold shock response, includes a group of reflexes lasting under 5 min in laboratory volunteers and initiated by thermoreceptors sensing rapid skin cooling. Water has a thermal conductivity 25 times and a volume-specific heat capacity over 3000 times that of air; subsequently, surface cooling is precipitous. The primary components of the cold shock reflex include gasping, tachypnea, reduced breath-holding time, and peripheral vasoconstriction, the latter effect highlighting the presumed physiologic principle (i.e., warmth preservation via central blood shunting). The magnitude of the cold shock response parallels the cutaneous cooling rate, and its termination is likely due to reflex baroreceptor responses or thermoreceptor habituation.
Diving reflex
The diving reflex is a set of physiological responses that occur in response to cold water immersion, particularly when the face or body is exposed to cold water. It is an evolutionary adaptation that helps mammals, including humans, manage the challenges of being submerged in cold water. The diving reflex is more pronounced in aquatic mammals and is thought to have originated as a way to conserve oxygen and enhance the ability to stay underwater for longer periods.
Key components of the diving reflex include:
Bradycardia: The heart rate decreases significantly when the face is exposed to cold water. This helps to conserve oxygen by slowing down the heartbeat. The degree of bradycardia can vary among individuals, but it is a common and well-documented response.
Peripheral Vasoconstriction: Blood vessels in the extremities constrict, reducing blood flow to the limbs. This shunting of blood helps to redirect it to essential organs, such as the heart and brain, preserving oxygen for vital functions.
Apnea: The diving reflex triggers an involuntary breath-holding response (apnea). This allows individuals to hold their breath for longer periods, enhancing their ability to stay submerged without the immediate need to breathe.
Blood Redistribution: The body redistributes blood flow, prioritizing essential organs and minimizing blood flow to non-essential areas, such as the skin and muscles. This redistribution helps to conserve heat and oxygen.
While the diving reflex is more pronounced in some mammals, its presence in humans is well-documented, particularly in cold water situations. The reflex is more prominent in infants and young children but can be observed in individuals of all ages.
Cardiac arrhythmias and autonomic conflict
Early models of cold water immersion syndrome focused primarily on sympathetic responses, but recent research suggests that sympathetic and parasympathetic coactivation (a conflict in the autonomic response) may be responsible for some cold water immersion deaths. Although reciprocal activation between the sympathetic (cold shock) and parasympathetic (diving response) systems is commonly adaptive (one follows the other), simultaneous activation appears to be associated with arrhythmia. Cold water induced rhythm disturbances are common, albeit frequently asymptomatic. In most humans, head-out cold-water immersion results in sympathetically driven tachycardia with variable rhythm disturbances. These cold water immersion induced arrhythmias appear to be accentuated by parasympathetic stimulation resulting from facial submersion or breath holding. Even vagally dominant diving bradycardia caused by isolated cold water facial immersion frequently is interrupted by supraventricular arrhythmias or premature beats. In theory, atrioventricular blockade or sinus arrest due to profound parasympathetic dominance might result in syncope or sudden cardiac death, but these rhythms tend to be rapidly reversed by lung stretch receptor activation associated with breathing. As such, a vagally produced arrest scenario is likelier during entrapment submersion than in flush drowning.
Conditioning against cold shock
It is possible to undergo physiological conditioning to reduce the cold shock response, and some people are naturally better suited to swimming in very cold water. Beneficial adaptations include the following:
having an insulating layer of body fat covering the limbs and torso;
ability to experience immersion without involuntary physical shock or mental panic;
ability to resist shivering;
ability to raise metabolism (and, in some cases, increase blood temperature slightly above the normal level);
a generalized delaying of metabolic shutdown (including slipping into unconsciousness) as central and peripheral body temperatures fall.
Cold shock response in other organisms
Cold shock in mammals
Cold shock has been described in several species and at least part of the physiology is similar, as described above in the Diving Reflex.
Cold shock in bacteria
A cold shock occurs when bacteria undergo a significant reduction in temperature, typically because their environment drops in temperature. To constitute a cold shock, the temperature reduction needs to be both significant, for example dropping from 37 °C to 20 °C, and rapid, traditionally occurring in under 24 hours. Both prokaryotic and eukaryotic cells are capable of undergoing a cold shock response. The effects of a cold shock in bacteria include:
Decreased cell membrane fluidity
Decreased enzyme activity
Decreased efficiency of transcription and translation
Decreased efficiency of protein folding
Decreased ribosome function
The bacterium uses the cytoplasmic membrane, RNA/DNA, and ribosomes as cold sensors in the cell, placing them in charge of monitoring the cell's temperature. Once these sensors send the signal that a cold shock is occurring, the bacterium pauses the majority of protein synthesis in order to redirect its focus to producing what are called cold shock proteins (Csp). The quantity of cold shock proteins produced depends on the severity of the temperature decrease. The function of these cold shock proteins is to assist the cell in adapting to the sudden temperature change, allowing it to maintain as close to a normal level of function as possible.
One way cold shock proteins are thought to function is by acting as nucleic acid chaperones. These cold shock proteins will block the formation of secondary structures in the mRNA during the cold shock, leaving the bacteria with only single strand RNA. Single strand is the most efficient form of RNA for the facilitation of transcription and translation. This will help to counteract the decreased efficiency of transcription and translation brought about by the cold shock. Cold shock proteins also affect the formation of hairpin structures in the RNA, blocking them from being formed. The function of these hairpin structures is to slow down or decrease the transcription of RNA. So by removing them, this will also help to increase the efficiency of transcription and translation.
Once the initial shock of the temperature decrease has been dealt with, the production of cold shock proteins is slowly tapered off. Instead, other proteins are synthesized in their place as the cell continues to grow at this new lower temperature. However, the rate of growth seen by these bacterial cells at colder temperatures is often lower than the rates of growth they exhibit at warmer temperatures.
Transcriptional response of Escherichia coli to cold shock
Cold shocks cause the repression of several hundred genes in the bacterium E. coli. Many of these genes are repressed quickly after the decrease in temperature, while others are only affected several hours after this event. In short, the repression mechanism works as follows: during cold shock, cellular energy levels decrease. This hampers the efficiency with which DNA gyrases remove the positive supercoils produced by transcription events, whose accumulation eventually blocks further transcription.
Many of the genes repressed during cold shock are involved in cell metabolism. Knowing the mechanism by which these genes respond, one could potentially tune it in genetically modified bacteria to change the temperature at which the cold shock response is activated. This modification could reduce the energy costs of bioreactors.
See also
References
Sources
Introduction to Frozen Mythbusters and Myth #1. Wilderness Medicine Newsletter. Sourced 2008-05-17.
Effects of external causes
Physiology
Thermal medicine
Wilderness medical emergencies
Causes of death | Cold shock response | [
"Biology"
] | 2,035 | [
"Physiology"
] |
17,484,994 | https://en.wikipedia.org/wiki/Elonex | Elonex is a British Digital Out of Home media owner and supplier of LED screens.
History
Elonex was founded in Finchley, London in 1986 by the German-born Israel Wetrin. The name was derived from the last two letters of Wetrin's two sons, Daniel and Gideon, and the first two letters of "export". Today, the company is based in Birmingham.
Elonex was the official LED supplier to the Super League, the UK's top tier Rugby league competition. The contract was mutually terminated in June 2012.
Products
In addition to their well-known Elonex ONE, the Elonex eBook debuted in July 2009.
See also
Mesh Computers
Viglen
References
External links
Manufacturing companies based in Birmingham, West Midlands
Companies established in 1986
Computer hardware companies
Computer companies of the United Kingdom
Defunct computer systems companies
1986 establishments in England
British brands | Elonex | [
"Technology"
] | 178 | [
"Computer hardware companies",
"Computers"
] |
17,486,518 | https://en.wikipedia.org/wiki/Neutron%20electric%20dipole%20moment | The neutron electric dipole moment (nEDM), denoted dn, is a measure for the distribution of positive and negative charge inside the neutron. A nonzero electric dipole moment can only exist if the centers of the negative and positive charge distribution inside the particle do not coincide. So far, no neutron EDM has been found. The current best measured limit for dn is .
Theory
A permanent electric dipole moment of a fundamental particle violates both parity (P) and time reversal symmetry (T). These violations can be understood by examining the neutron's magnetic dipole moment and hypothetical electric dipole moment. Under time reversal, the magnetic dipole moment changes its direction, whereas the electric dipole moment stays unchanged. Under parity, the electric dipole moment changes its direction but not the magnetic dipole moment. As the resulting system under P and T is not symmetric with respect to the initial system, these symmetries are violated in the case of the existence of an EDM. Given CPT symmetry, the combined symmetry CP is violated as well.
Standard Model prediction
As depicted above, generating a nonzero nEDM requires processes that violate CP symmetry. CP violation has been observed in weak interactions and is included in the Standard Model of particle physics via the CP-violating phase in the CKM matrix. However, the amount of CP violation is very small, and therefore so is the contribution to the nEDM: the Standard Model prediction is of order 10⁻³²–10⁻³¹ e·cm.
Matter–antimatter asymmetry
From the asymmetry between matter and antimatter in the universe, one suspects that there must be a sizeable amount of CP-violation. Measuring a neutron electric dipole moment at a much higher level than predicted by the Standard Model would therefore directly confirm this suspicion and improve our understanding of CP-violating processes.
Strong CP problem
As the neutron is built up of quarks, it is also susceptible to CP violation stemming from strong interactions. Quantum chromodynamics – the theoretical description of the strong force – naturally includes a term that breaks CP symmetry. The strength of this term is characterized by the angle θ. The current limit on the nEDM constrains this angle to be less than 10^−10 radians. This fine-tuning of the angle θ, which would naturally be expected to be of order 1, is the strong CP problem.
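As a rough illustration of how the experimental bound translates into the constraint on θ, a commonly quoted order-of-magnitude estimate can be used (the exact prefactor is model-dependent and is an assumption here, not a figure from the source):

```latex
% Order-of-magnitude estimate relating the nEDM to the QCD angle theta:
%   d_n ~ 10^{-16} * theta  [e cm]
% Combining with the experimental bound |d_n| < 1.8 x 10^{-26} e cm:
\begin{align}
  d_\text{n} &\sim 10^{-16}\,\theta \; e\cdot\text{cm} \\
  |d_\text{n}| &< 1.8\times10^{-26}\; e\cdot\text{cm}
  \;\Rightarrow\; \theta \lesssim 10^{-10}
\end{align}
```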
SUSY CP problem
Supersymmetric extensions to the Standard Model, such as the Minimal Supersymmetric Standard Model, generally lead to large CP violation. Typical predictions for the neutron EDM arising from such theories range between 10^−25 e·cm and 10^−28 e·cm. As in the case of the strong interaction, the limit on the neutron EDM is already constraining the CP-violating phases. The required fine-tuning is, however, not yet as severe.
Experimental technique
In order to extract the neutron EDM, one measures the Larmor precession of the neutron spin in the presence of parallel and antiparallel magnetic and electric fields. The precession frequency for each of the two cases is given by

$h\nu_{\uparrow\uparrow,\uparrow\downarrow} = 2\mu_\text{n} B \pm 2 d_\text{n} E,$

the addition or subtraction of the frequencies stemming from the precession of the magnetic moment around the magnetic field and the precession of the electric dipole moment around the electric field. From the difference of those two frequencies one readily obtains a measure of the neutron EDM:

$d_\text{n} = \frac{h\,\Delta\nu}{4E}$
The biggest challenge of the experiment (and at the same time the source of the biggest systematic false effects) is to ensure that the magnetic field does not change during these two measurements.
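To illustrate the scale of the challenge, here is a short numerical sketch (hypothetical but representative values; the field strength E = 10 kV/cm and an EDM at the current limit are assumptions, not figures from the source) of the frequency shift such an EDM would produce:

```python
# Frequency shift produced by a neutron EDM at the current limit,
# using delta_nu = 4 * d_n * E / h (rearranged from d_n = h * delta_nu / (4 E)).
# Assumed, representative values: d_n = 1e-26 e.cm, E = 10 kV/cm.
h = 6.62607015e-34       # Planck constant, J s
e = 1.602176634e-19      # elementary charge, C

d_n = 1e-26 * e * 1e-2   # 1e-26 e.cm converted to C m
E = 10e3 / 1e-2          # 10 kV/cm converted to V/m

delta_nu = 4 * d_n * E / h
print(f"frequency shift: {delta_nu:.1e} Hz")  # ~1e-7 Hz, i.e. about 100 nHz
```

A shift of roughly 100 nHz must be resolved against a Larmor frequency of about 30 Hz in a typical field of order 1 μT, which is why keeping the magnetic field stable between the two measurements dominates the systematics.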
History
The first experiments searching for the electric dipole moment of the neutron used beams of thermal (and later cold) neutrons to conduct the measurement. These began with the experiment by James Smith, Purcell, and Ramsey, performed in 1951 at ORNL's Graphite Reactor and published in 1957 (as the three researchers were from Harvard University, the experiment is often referred to as ORNL/Harvard), which obtained a limit of |d_n| < 5×10^−20 e·cm. Neutron beams were used for nEDM experiments until 1977, at which point systematic effects related to the high velocities of the neutrons in the beam became insurmountable. The final limit obtained with a neutron beam amounts to |d_n| < 3×10^−24 e·cm.
After that, experiments with ultracold neutrons (UCN) took over. These began in 1980 with an experiment at the Leningrad Nuclear Physics Institute (LNPI) that obtained an improved limit. This experiment, and especially the experiment starting in 1984 at the Institut Laue-Langevin (ILL), pushed the limit down by another two orders of magnitude, yielding the best upper limit of |d_n| < 2.9×10^−26 e·cm in 2006, revised to |d_n| < 3.0×10^−26 e·cm in 2015.
During these 70 years of experiments, six orders of magnitude have been covered, thereby putting stringent constraints on theoretical models.
The latest best limit, |d_n| < 1.8×10^−26 e·cm (90% C.L.), was published in 2020 by the nEDM collaboration at the Paul Scherrer Institute (PSI).
Current experiments
Currently, there are at least six experiments aiming to improve the current limit on (or measure for the first time) the neutron EDM, with sensitivities down to the 10^−27 e·cm range over the next 10 years, thereby covering the range of predictions coming from supersymmetric extensions to the Standard Model.
n2EDM of the nEDM collaboration, under construction at the UCN source at the Paul Scherrer Institute. As of February 2022, the apparatus was being set up at PSI, with commissioning with neutrons expected in late 2022. The apparatus is expected to reach a sensitivity of 1×10^−27 e·cm after 500 days of operation.
TUCAN, a UCN nEDM experiment under construction at TRIUMF
nEDM@SNS experiment under construction (as of 2022) at the Spallation Neutron Source
PNPI nEDM experiment awaiting operation approval at the Institut Laue-Langevin
PanEDM experiment being built at the Institut Laue-Langevin
LANL Electric Dipole Moment (LANL nEDM) at Los Alamos National Laboratory
Beam EDM at University of Bern, Switzerland
The Cryogenic neutron EDM experiment or CryoEDM was under development at the Institut Laue-Langevin but its activities were stopped in 2013/2014.
See also
Anomalous electric dipole moment
Anomalous magnetic dipole moment
Axion – a hypothetical particle proposed to explain the strong force's unexpected preservation of CP
Electric dipole spin resonance
Electron electric dipole moment – another electric dipole moment which should exist, but is also expected to be too small to have been measured yet
Electron magnetic moment
Nucleon magnetic moment – the corresponding magnetic property, which has been measured
References
Electric dipole moment
Electromagnetism
Particle physics | Neutron electric dipole moment | [
"Physics",
"Mathematics"
] | 1,326 | [
"Physical phenomena",
"Electromagnetism",
"Electric dipole moment",
"Physical quantities",
"Quantity",
"Fundamental interactions",
"Particle physics",
"Moment (physics)"
] |
16,234,805 | https://en.wikipedia.org/wiki/Cray%20S-MP | The Cray S-MP was a multiprocessor server computer sold by Cray Research from 1992 to 1993. It was based on the Sun SPARC microprocessor architecture and could be configured with up to eight 66 MHz BIT B5000 processors. Optionally, a Cray APP matrix co-processor cluster could be added to an S-MP system.
The S-MP was originally designed by FPS Computing as the FPS Model 500EA. FPS was acquired by Cray Research in 1991, becoming Cray Research Superservers Inc., and the Model 500EA was relaunched by Cray in 1992 as the S-MP.
The S-MP was a short-lived model, and was superseded by the Cray CS6400.
References
New Computer by Cray Research Uses Sun Processor, New York Times
Cockcroft, Adrian and Pettit, Richard (1998), Sun Performance and Tuning: Java and the Internet, Sun Microsystems.
Smp
Supercomputers
32-bit computers | Cray S-MP | [
"Technology"
] | 209 | [
"Supercomputers",
"Computing stubs",
"Supercomputing",
"Computer hardware stubs"
] |