El Gordo is a massive interacting galaxy cluster in the early Universe. The extreme properties of El Gordo in terms of its redshift, mass, and collision velocity lead to strong tension with the ΛCDM model. The properties of El Gordo are, however, consistent with cosmological simulations in the framework of MOND, due to more rapid structure formation.
KBC void
The KBC void is an immense, comparatively empty region of space containing the Milky Way, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at recombination or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model.
Hubble tension
Statistically significant differences remain between values of the Hubble constant derived by matching the ΛCDM model to "early universe" data, such as the cosmic microwave background radiation, and values derived from stellar distance measurements in the "late universe". While systematic error in the measurements remains a possibility, many different kinds of observations agree with one of these two values of the constant. This difference, called the Hubble tension, is widely acknowledged to be a major problem for the ΛCDM model.
Dozens of proposals to modify ΛCDM, or entirely new models, have been published to explain the Hubble tension. Among them are models that change the properties of dark energy or dark matter over time, introduce interactions between dark energy and dark matter, unify dark energy and dark matter, add other forms of dark radiation such as sterile neutrinos, modify the properties of gravity, alter the effects of inflation, or change the properties of elementary particles in the early universe, among others. None of these models explains the breadth of other cosmological data as well as ΛCDM does.
S8 tension
The S8 tension in cosmology is another major problem for the ΛCDM model. The S8 parameter quantifies the amplitude of matter fluctuations in the late universe and is defined as

$$S_8 \equiv \sigma_8 \sqrt{\Omega_m / 0.3},$$

where $\sigma_8$ is the amplitude of matter fluctuations on the scale of $8\,h^{-1}$ Mpc and $\Omega_m$ is the matter density parameter.
Early-time measurements (e.g. from CMB data collected using the Planck observatory) and late-time measurements (e.g. of weak gravitational lensing events) provide increasingly precise values of $S_8$. However, these two categories of measurement differ by more standard deviations than their uncertainties would allow. This discrepancy is called the S8 tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction.
Values for $S_8$ have been reported by Planck (2020), KiDS (2021), DES (2022), a joint DES and KiDS analysis (2023), HSC-SSP (2023), and eROSITA (2024). Values have also been obtained using peculiar velocities (2020), among other methods. The late-time measurements consistently prefer a lower $S_8$ than the early-time (Planck) value.
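As a worked example of the definition above, inserting values close to the Planck 2018 best fit ($\sigma_8 \approx 0.811$, $\Omega_m \approx 0.315$; quoted here as representative figures for illustration, not taken from the surveys listed above) gives

$$S_8 = \sigma_8 \sqrt{\Omega_m / 0.3} \approx 0.811 \times \sqrt{0.315 / 0.3} \approx 0.811 \times 1.025 \approx 0.83.$$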
Axis of evil
Cosmological lithium problem
The observed amount of lithium in the universe is less than the amount calculated from the ΛCDM model by a factor of 3–4. If the calculations are correct, then solutions beyond the existing ΛCDM model might be needed.
Shape of the universe
The ΛCDM model assumes that the shape of the universe has zero curvature (i.e. that it is flat) and an undetermined topology. In 2019, interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe, rather than the universe actually being globally a 3-manifold of positive curvature.
Violations of the strong equivalence principle
The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analysis of other galaxies.
Cold dark matter discrepancies
Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Some of these problems have proposed solutions, but it remains unclear whether they can be solved without abandoning the ΛCDM model.
Milgrom, McGaugh, and Kroupa have criticized the dark matter portions of the theory from the perspective of galaxy formation models, supporting instead the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations, as seen in proposals such as the modified gravity (MOG) theory or the tensor–vector–scalar gravity (TeVeS) theory. Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as Galileon theories (see Galilean invariance), brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity.
Cuspy halo problem
The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more sharply peaked than what is inferred from the observed rotation curves of galaxies.
Dwarf galaxy problem
Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way.
Satellite disk problem
Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures, whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, recent research suggests this seemingly anomalous alignment may be a transient quirk that will dissolve over time.
High-velocity galaxy problem
Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy.
Galaxy morphology problem
If galaxies grew hierarchically, then massive galaxies required many mergers, and major mergers inevitably create a classical bulge. By contrast, about 80% of observed galaxies show no evidence of such bulges, and giant pure-disc galaxies are commonplace. The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years.
Fast galaxy bar problem
If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast.
Small scale crisis
Comparison of the model with observations may reveal problems on sub-galaxy scales: the model possibly predicts too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem lies in the simulations, in non-standard properties of dark matter, or in a more radical error in the model.
High redshift galaxies
Observations from the James Webb Space Telescope have yielded various galaxies spectroscopically confirmed at high redshift, such as JADES-GS-z13-0 at a cosmological redshift of 13.2. Other candidate galaxies that have not been confirmed by spectroscopy include CEERS-93316 at an estimated cosmological redshift of 16.4.
The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with the parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, by a yet-unknown force or particle outside the Standard Model through which dark matter interacts, by more efficient accumulation of baryonic matter in dark matter halos, by early dark energy models, or by the hypothesized, long-sought Population III stars.
Missing baryon problem
Massimo Persic and Paolo Salucci first estimated the baryonic density present today in ellipticals, spirals, and groups and clusters of galaxies.
They did so by integrating the baryonic mass-to-light ratio over luminosity, weighted with the luminosity function of each of the previously mentioned classes of astrophysical objects. The result was an estimate of the baryonic density parameter $\Omega_b$ contributed by stars and gas.
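A schematic form of this integral is the following (the notation is assumed here for illustration and is not taken from the original paper):

$$\Omega_b = \frac{1}{\rho_c} \sum_j \int_0^\infty \left(\frac{M_b}{L}\right)_j\!(L)\; L\,\phi_j(L)\, dL,$$

where the sum runs over the object classes (ellipticals, spirals, groups, clusters), $\phi_j(L)$ is the luminosity function of class $j$, $(M_b/L)_j$ is its baryonic mass-to-light ratio, and $\rho_c$ is the critical density.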
Note that this estimate is much lower than the prediction of standard cosmic nucleosynthesis, so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10% of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons".
The missing baryon problem is claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90% of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside and around galaxies, the total amount of baryons in the late-time Universe is compatible with early-Universe measurements.
Unfalsifiability
It has been argued that the ΛCDM model is built upon a foundation of conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper.
Extended models
Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature ($\Omega_{tot}$ may be different from 1), or quintessence rather than a cosmological constant, where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted $r$), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models.
Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value.
Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio should be between 0 and 0.3, and the latest results are within those limits.
In condensed matter physics, the Fermi surface is the surface in reciprocal space which separates occupied electron states from unoccupied electron states at zero temperature. The shape of the Fermi surface is derived from the periodicity and symmetry of the crystalline lattice and from the occupation of electronic energy bands. The existence of a Fermi surface is a direct consequence of the Pauli exclusion principle, which allows a maximum of one electron per quantum state. The study of the Fermi surfaces of materials is called fermiology.
Theory
Consider a spin-less ideal Fermi gas of $N$ particles. According to Fermi–Dirac statistics, the mean occupation number of a state with energy $\epsilon_i$ is given by

$$\langle n_i \rangle = \frac{1}{e^{(\epsilon_i - \mu)/k_B T} + 1},$$

where
$\langle n_i \rangle$ is the mean occupation number of the $i$th state
$\epsilon_i$ is the kinetic energy of the $i$th state
$\mu$ is the chemical potential (at zero temperature, this is the maximum kinetic energy the particle can have, i.e. the Fermi energy $\epsilon_F$)
$T$ is the absolute temperature
$k_B$ is the Boltzmann constant
Suppose we consider the limit $T \to 0$. Then we have

$$\langle n_i \rangle = \begin{cases} 1 & (\epsilon_i < \mu) \\ 0 & (\epsilon_i > \mu). \end{cases}$$
By the Pauli exclusion principle, no two fermions can be in the same state. Additionally, at zero temperature the total energy of the electrons must be minimal, meaning that they cannot change state: if, for a particle in some state, there existed an unoccupied lower state that it could occupy, then moving to that state would lower the total energy, so the original configuration would not have been minimal. Therefore, at zero temperature all the lowest energy states must be saturated. For a large ensemble the Fermi level will be approximately equal to the chemical potential of the system, and hence every state below this energy must be occupied. Thus, particles fill up all energy levels below the Fermi level at absolute zero, which is equivalent to saying that $\mu$ is the energy level below which there are exactly $N$ states.
In momentum space, these particles fill up a ball of radius $p_F$ (the Fermi momentum), the surface of which is called the Fermi surface.
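A standard counting argument (a sketch added here, not taken from the original text) fixes this radius for the spin-less gas considered above: each single-particle state occupies a volume $(2\pi\hbar)^3/V$ of momentum space, so the $N$ lowest-energy states fill a ball satisfying

$$N = \frac{V}{(2\pi\hbar)^3} \cdot \frac{4}{3}\pi p_F^3 \qquad\Longrightarrow\qquad p_F = \hbar \left( 6\pi^2 \frac{N}{V} \right)^{1/3}.$$

For electrons, the two-fold spin degeneracy doubles the number of particles each momentum state can hold, replacing $6\pi^2$ by $3\pi^2$ in the electron-gas formula of the next paragraph.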
The linear response of a metal to an electric, magnetic, or thermal gradient is determined by the shape of the Fermi surface, because currents are due to changes in the occupancy of states near the Fermi energy. In reciprocal space, the Fermi surface of an ideal Fermi gas is a sphere of radius

$$p_F = \hbar k_F = \hbar \left(3\pi^2 n\right)^{1/3},$$

determined by the valence electron concentration $n$, where $\hbar$ is the reduced Planck constant. A material whose Fermi level falls in a gap between bands is an insulator or semiconductor depending on the size of the bandgap. When a material's Fermi level falls in a bandgap, there is no Fermi surface.
Materials with complex crystal structures can have quite intricate Fermi surfaces. Figure 2 illustrates the anisotropic Fermi surface of graphite, which has both electron and hole pockets in its Fermi surface due to multiple bands crossing the Fermi energy. Often in a metal, the Fermi surface radius is larger than the size of the first Brillouin zone, which results in a portion of the Fermi surface lying in the second (or higher) zones. As with the band structure itself, the Fermi surface can be displayed in an extended-zone scheme, where $k$ is allowed to have arbitrarily large values, or a reduced-zone scheme, where wavevectors are shown modulo $2\pi/a$ (in the 1-dimensional case), $a$ being the lattice constant. In the three-dimensional case, the reduced-zone scheme means that from any wavevector $k$ an appropriate number of reciprocal lattice vectors $K$ is subtracted, so that the new $k$ is closer to the origin in $k$-space than to any $K$. Solids with a large density of states at the Fermi level become unstable at low temperatures and tend to form ground states where the condensation energy comes from opening a gap at the Fermi surface. Examples of such ground states are superconductors, ferromagnets, Jahn–Teller distortions and spin density waves.
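The one-dimensional reduced-zone mapping mentioned above (wavevectors taken modulo 2π/a) can be sketched in a few lines of code; the lattice constant below is a placeholder value chosen for illustration:

```python
import math

def reduce_to_first_zone(k: float, a: float) -> float:
    """Map a 1-D wavevector into the first Brillouin zone, |k| <= pi/a.

    Subtracts the integer number of reciprocal-lattice vectors G = 2*pi/a
    needed to bring k closer to the origin than to any other lattice point.
    """
    g = 2.0 * math.pi / a
    return k - round(k / g) * g

a = 3.6e-10  # m, placeholder lattice constant
k = 5.0e10   # 1/m, a wavevector well outside the first zone
print(reduce_to_first_zone(k, a))  # result lies within +/- pi/a ~ 8.7e9 1/m
```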
The state occupancy of fermions like electrons is governed by Fermi–Dirac statistics, so at finite temperatures the Fermi surface is accordingly broadened. In principle all fermion energy level populations are bound by a Fermi surface, although the term is not generally used outside of condensed-matter physics.
Experimental determination
Electronic Fermi surfaces have been measured through observation of the oscillation of transport properties in magnetic fields $H$, for example the de Haas–van Alphen effect (dHvA) and the Shubnikov–de Haas effect (SdH). The former is an oscillation in magnetic susceptibility and the latter in resistivity. The oscillations are periodic versus $1/H$ and occur because of the quantization of energy levels in the plane perpendicular to a magnetic field, a phenomenon first predicted by Lev Landau. The new states are called Landau levels and are separated by an energy $\hbar\omega_c$, where $\omega_c = eH/m^* c$ is called the cyclotron frequency, $e$ is the electronic charge, $m^*$ is the electron effective mass and $c$ is the speed of light. In a famous result, Lars Onsager proved that the period of oscillation is related to the cross-section of the Fermi surface $A_\perp$ (typically given in Å−2) perpendicular to the magnetic field direction by the equation

$$\Delta\!\left(\frac{1}{H}\right) = \frac{2\pi e}{\hbar c\, A_\perp}.$$

Thus the determination of the periods of oscillation for various applied field directions allows mapping of the Fermi surface. Observation of the dHvA and SdH oscillations requires magnetic fields large enough that the circumference of the cyclotron orbit is smaller than a mean free path. Therefore, dHvA and SdH experiments are usually performed at high-field facilities like the High Field Magnet Laboratory in the Netherlands, the Grenoble High Magnetic Field Laboratory in France, the Tsukuba Magnet Laboratory in Japan, or the National High Magnetic Field Laboratory in the United States.
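A small sketch of the Onsager relation in SI form (the in-text formula above is written in Gaussian units with the factor of $c$; in SI units the dHvA frequency $F$ in tesla satisfies $F = (\hbar/2\pi e)A_\perp$). The orbit area used in the sanity check is a free-electron estimate, not a measured value:

```python
import math

HBAR = 1.054_571_817e-34      # J*s
E_CHARGE = 1.602_176_634e-19  # C

def dhva_frequency_tesla(area_m2: float) -> float:
    """Onsager relation (SI form): F = (hbar / 2*pi*e) * A_perp,

    where A_perp is the extremal Fermi-surface cross-section (1/m^2)
    perpendicular to the field and F is the dHvA frequency in tesla.
    """
    return HBAR / (2.0 * math.pi * E_CHARGE) * area_m2

def cross_section_from_frequency(f_tesla: float) -> float:
    """Invert the relation: extremal cross-section (1/m^2) from F (tesla)."""
    return 2.0 * math.pi * E_CHARGE / HBAR * f_tesla

# Sanity check with a free-electron-like orbit: k_F ~ 1.36e10 1/m
area = math.pi * (1.36e10) ** 2                   # ~5.8e20 1/m^2
print(f"F ~ {dhva_frequency_tesla(area):.3e} T")  # ~6e4 T, roughly a copper belly orbit
```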
The most direct experimental technique to resolve the electronic structure of crystals in momentum-energy space (see reciprocal lattice), and consequently the Fermi surface, is angle-resolved photoemission spectroscopy (ARPES). An example of the Fermi surface of superconducting cuprates measured by ARPES is shown in Figure 3.
With positron annihilation it is also possible to determine the Fermi surface, as the annihilation process conserves the momentum of the initial particle. Since a positron in a solid will thermalize prior to annihilation, the annihilation radiation carries information about the electron momentum. The corresponding experimental technique is called angular correlation of electron positron annihilation radiation (ACAR), as it measures the angular deviation from 180° of the two annihilation quanta. In this way it is possible to probe the electron momentum density of a solid and determine the Fermi surface. Furthermore, using spin-polarized positrons, the momentum distribution for the two spin states in magnetized materials can be obtained. ACAR has many advantages and disadvantages compared to other experimental techniques: it does not rely on UHV conditions, cryogenic temperatures, high magnetic fields or fully ordered alloys. However, ACAR needs samples with a low vacancy concentration, as vacancies act as effective traps for positrons. In this way, the first determination of a smeared Fermi surface in a 30% alloy was obtained in 1978.
A hair dryer (the handheld type also referred to as a blow dryer) is an electromechanical device that blows ambient air in hot or warm settings for styling or drying hair. Hair dryers enable better control over the shape and style of hair, by accelerating and controlling the formation of temporary hydrogen bonds within each strand. These bonds are powerful, but are temporary and extremely vulnerable to humidity. They disappear with a single washing of the hair.
Hairstyles using hair dryers usually have volume and discipline, which can be further improved with styling products, hairbrushes, and combs during drying to add tension, hold and lift. Hair dryers were invented in the late 19th century; the first patent in the United States was granted to Gabriel Kazanjian in 1911. Handheld, household hair dryers first appeared around 1920. Hair dryers are used in beauty salons by professional stylists, as well as by consumers at home.
History
In 1888 the first hair dryer was invented by French stylist Alexandre Godefroy. His invention was a large, seated version that consisted of a bonnet that attached to the chimney pipe of a gas stove. Godefroy invented it for use in his hair salon in France, and it was not portable or handheld. It could only be used by having the person sit underneath it.
Armenian American inventor Gabriel Kazanjian was the first to patent a hair dryer in the United States, in 1911. Around 1920, hair dryers began to go on the market in handheld form. This was due to innovations by National Stamping and Electricworks under the White Cross brand, and later the U.S. Racine Universal Motor Company and the Hamilton Beach Co., which allowed the dryer to be small enough to be held by hand. Even in the 1920s, the new dryers were often heavy and difficult to use, and they caused many instances of overheating and electrocution. Hair dryers were only capable of using 100 watts, which increased the amount of time needed to dry hair (the average dryer today can use up to 2000 watts of heat).
Since the 1920s, development of the hair dryer has mainly focused on improving wattage and on superficial changes to the exterior and materials; the basic mechanism has not changed significantly since its inception. One of the more important changes was the switch to plastic housings, which made dryers more lightweight. This caught on in the 1960s with the introduction of better electrical motors and improved plastics. Another important change happened in 1954, when GEC changed the design of the dryer to move the motor inside the casing.
The bonnet dryer was introduced to consumers in 1951. This type worked by having the dryer, usually in a small portable box, connected to a tube that went into a bonnet with holes in it that could be placed on top of a person's head. This worked by giving an even amount of heat to the whole head at once.
The 1950s also saw the introduction of the rigid-hood hair dryer, which is the type most frequently seen in salons. It has a hard plastic helmet that wraps around the person's head. This dryer works similarly to the bonnet dryer of the 1950s, but at a much higher wattage.
In the 1970s, the U.S. Consumer Product Safety Commission set up guidelines that hair dryers had to meet to be considered safe to manufacture. Since 1991 the CPSC has mandated that all dryers must use a ground fault circuit interrupter so that they cannot electrocute a person if they get wet. By 2000, deaths by blow dryers had dropped to fewer than four people a year, a stark contrast to the hundreds of electrocution accidents during the mid-20th century.
Function
Most hair dryers consist of electric heating coils and a fan that blows the air (usually powered by a universal motor). The heating element in most dryers is a bare, coiled nichrome wire that is wrapped around mica insulators. Nichrome is used due to its high resistivity, and low tendency to corrode when heated.
A survey of stores in 2007 showed that most hair dryers had ceramic heating elements (like ceramic heaters) because of their "instant heat" capability. This means that it takes less time for the dryers to heat up and for the hair to dry.
Many of these dryers have a button that turns off the heater and blows room-temperature air while the button is pressed (often marketed as a "cool shot"). This function helps to maintain the hairstyle by setting it. The colder air reduces frizz and can help to promote shine in the hair.
Many feature "ionic" operation, to reduce the build-up of static electricity in the hair, though the efficacy of ionic technology is of some debate. Manufacturers claim this makes the hair "smoother".
Hair dryers are available with attachments, such as diffusers, airflow concentrators, and comb nozzles.
A diffuser is an attachment that is used on hair that is fine, colored, permed or naturally curly. It diffuses the jet of air, so that the hair is not blown around while it dries. The hair dries more slowly, at a cooler temperature, and with less physical disturbance. This makes it so that the hair is less likely to frizz and it gives the hair more volume.
An airflow concentrator does the opposite of a diffuser. It makes the end of the hair dryer narrower and thus helps to concentrate the heat into one spot to make it dry rapidly.
The comb nozzle attachment is the same as the airflow concentrator, but it ends with comb-like teeth so that the user can dry the hair using the dryer without a brush or comb.
Hair dryers have been cited as an effective treatment for head lice.
Types
Today there are two major types of hair dryers: the handheld and the rigid-hood dryer.
A hood dryer has a hard plastic dome that fits over a person's head to dry their hair. Hot air is blown out through tiny openings around the inside of the dome so the hair is dried evenly. Hood dryers are mainly found in hair salons.
Hair dryer brush
A hair dryer brush (also called a "hot air brush", "round brush hair dryer", or "hair styler") has the shape of a brush and is also used as a volumizer.
There are two types of round brush hair dryers: rotating and static. Rotating models have barrels that rotate automatically, while static models do not.
Cultural references
The British historical drama television series Downton Abbey made note of the invention of the portable hair dryer when a character purchased one in Series 6 Episode 9, set in the year 1925.
A drug test (also often called a toxicology screen or tox screen) is a technical analysis of a biological specimen (for example urine, hair, blood, breath, sweat, or oral fluid/saliva) to determine the presence or absence of specified parent drugs or their metabolites. Major applications of drug testing include the detection of performance-enhancing steroids in sport, screening by employers and parole/probation officers for drugs prohibited by law (such as cocaine, methamphetamine, and heroin), and testing by police officers for the presence and concentration of alcohol (ethanol) in the blood, commonly referred to as BAC (blood alcohol content). BAC tests are typically administered via a breathalyzer, while urinalysis is used for the vast majority of drug testing in sports and the workplace. Numerous other methods with varying degrees of accuracy, sensitivity (detection threshold/cutoff), and detection periods exist.
A drug test may also refer to a test that provides quantitative chemical analysis of an illegal drug, typically intended to help with responsible drug use.
Detection periods
The detection windows depend upon multiple factors: drug class, amount and frequency of use, metabolic rate, body mass, age, overall health, and urine pH. For ease of use, the detection times listed for each parent drug incorporate those of its metabolites. For example, heroin and cocaine can only be detected for a few hours after use, but their metabolites can be detected for several days in urine; the chart depicts the longer detection times of the metabolites. In the case of hair testing, the metabolites are permanently embedded in the hair, and the detection time is determined by the length of the hair sample used in the analysis. The standard length of head hair used in the test is 1.5", which corresponds to about 3 months. Body/pubic hair grows more slowly, and the same 1.5" would result in a longer detection time.
Oral fluid or saliva testing results for the most part mimic those of blood. The only exceptions are THC (tetrahydrocannabinol) and benzodiazepines. Oral fluid will likely detect THC from ingestion up to a maximum period of 6–12 hours. This short window continues to cause difficulty in oral fluid detection of THC and benzodiazepines.
Breath air for the most part mimics blood tests as well. Due to the very low levels of substances in breath air, liquid chromatography–mass spectrometry has to be used to analyze the sample, according to a recent publication in which 12 analytes were investigated.
Rapid oral fluid products are not approved for use in workplace drug testing programs and are not FDA cleared. Using rapid oral fluid drug tests in the workplace is prohibited only in:
California
Kansas
Maine
Minnesota
New York
Vermont
The following chart gives approximate detection periods for each substance by test type.
Types
Urine drug screen
Urine analysis is primarily used because of its low cost, and urine drug testing is one of the most common testing methods used. The enzyme multiplied immunoassay technique (EMIT) is the most frequently used urinalysis method. Complaints have been made about its relatively high rate of false positives.
Urine drug tests screen the urine for the presence of a parent drug or its metabolites. The level of drug or its metabolites is not predictive of when the drug was taken or how much the patient used.
Urine drug testing is an immunoassay based on the principle of competitive binding. Drugs which may be present in the urine specimen compete against their respective drug conjugate for binding sites on their specific antibody. During testing, a urine specimen migrates upward by capillary action. A drug, if present in the urine specimen below its cut-off concentration, will not saturate the binding sites of its specific antibody. The antibody will then react with the drug-protein conjugate and a visible colored line will show up in the test line region of the specific drug strip.
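A toy sketch of the readout logic described above; the drug names and cutoff concentrations below are illustrative placeholders, not an actual test panel. Note the inverted logic of competitive binding: a visible test line means the drug is below its cutoff.

```python
# Toy model of a competitive-binding lateral-flow strip: the test line is
# VISIBLE when the drug is below its cutoff (the antibody remains free to
# bind the drug-protein conjugate), and ABSENT when the drug saturates the
# antibody (a presumptive positive).
CUTOFFS_NG_PER_ML = {  # illustrative cutoff concentrations only
    "THC-COOH": 50.0,
    "benzoylecgonine": 300.0,
    "morphine": 2000.0,
}

def read_strip(drug: str, concentration_ng_ml: float) -> str:
    cutoff = CUTOFFS_NG_PER_ML[drug]
    line_visible = concentration_ng_ml < cutoff
    return "negative (line visible)" if line_visible else "presumptive positive (no line)"

print(read_strip("THC-COOH", 20.0))    # negative (line visible)
print(read_strip("morphine", 2500.0))  # presumptive positive (no line)
```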
A common misconception is that a drug test that is testing for a class of drugs, for example, opioids, will detect all drugs of that class. However, most opioid tests will not reliably detect oxycodone, oxymorphone, meperidine, or fentanyl. Likewise, most benzodiazepine drug tests will not reliably detect lorazepam. However, urine drug screens that test for a specific drug, rather than an entire class, are often available.
When an employer requests a drug test from an employee, or a physician requests a drug test from a patient, the employee or patient is typically instructed to go to a collection site or their home. The urine sample goes through a specified 'chain of custody' to ensure that it is not tampered with or invalidated through lab or employee error. The patient or employee's urine is collected at a remote location in a specially designed secure cup, sealed with tamper-resistant tape, and sent to a testing laboratory to be screened for drugs (typically the Substance Abuse and Mental Health Services Administration 5-panel). The first step at the testing site is to split the urine into two aliquots. One aliquot is first screened for drugs using an analyzer that performs an immunoassay as the initial screen. To ensure specimen integrity and to detect possible adulterants, additional parameters are tested: some test the properties of normal urine, such as urine creatinine, pH, and specific gravity, while others are intended to catch substances added to the urine to alter the test result, such as oxidants (including bleach), nitrites, and glutaraldehyde. If the urine screen is positive, another aliquot of the sample is used to confirm the findings by gas chromatography–mass spectrometry (GC-MS) or liquid chromatography–mass spectrometry methodology. If requested by the physician or employer, certain drugs are screened for individually; these are generally drugs that are part of a chemical class considered, for one of many reasons, more habit-forming or of concern. For instance, oxycodone and diamorphine, both sedative analgesics, may be tested for individually. If such a test is not requested specifically, the more general test (in the preceding case, the test for opioids) will detect most of the drugs of the class, but the employer or physician will not have the benefit of the identity of the drug.
Employment-related test results are relayed to a medical review office (MRO) where a medical physician reviews the results. If the result of the screen is negative, the MRO informs the employer that the employee has no detectable drug in the urine, typically within 24 hours. However, if the test results of the immunoassay and GC-MS are non-negative and show a concentration of parent drug or metabolite above the established limit, the MRO contacts the employee to determine if there is any legitimate reason, such as a medical treatment or prescription.
On-site instant drug testing is a more cost-efficient method of effectively detecting substance use amongst employees, as well as in rehabilitation programs to monitor patient progress. These instant tests can be used for both urine and saliva testing. Although the accuracy of such tests varies with the manufacturer, some kits have rates of accuracy correlating closely with laboratory test results.
Breath test
Breath testing is a widespread method for quickly determining alcohol intoxication. A breath test measures the alcohol concentration in the body from a deep-lung breath. There are different instruments used for measuring the alcohol content of an individual through their breath. The Breathalyzer is a widely known instrument, developed in 1954, which contained chemicals, unlike other breath-testing instruments. More modern instruments are the infrared light-absorption devices and fuel cell detectors; these two testers are microprocessor-controlled, meaning the operator only has to press the start button.
To get accurate readings on a breath-testing device, the individual must blow for approximately 6 seconds, producing roughly 1.1 to 1.5 liters of breath. For a breath test to give a true and accurate result, the operator must take steps such as avoiding measuring "mouth alcohol", which results from regurgitation, belching, or recent intake of an alcoholic beverage. To avoid measuring mouth alcohol, the operator must not allow the individual taking the test to consume anything for at least fifteen minutes before the breath test. In the United States, if an individual pulled over for a driving violation refuses to take a breath test, that individual's driver's license can be suspended for a period of 6 to 12 months.
Hair testing
Hair analysis to detect addictive substances has been used by court systems in the United States, United Kingdom, Canada, and other countries worldwide. In the United States, hair testing has been accepted in court cases as forensic evidence following the Frye Rule, the Federal Rules of Evidence, and the Daubert Rule. As such, hair testing results are legally and scientifically recognized as admissible evidence. Hair testing is commonly used in the USA as a pre-employment drug test. The detection time for this test is roughly 3 months, which is the time it takes head hair to grow the ca. 1.5 inches collected as a specimen. Longer detection times are possible with longer hair samples.
A 2014 collaborative US study of 359 adults with moderate-risk drug use found that a large number of participants who reported drug use in the last 3 months had negative hair tests. The tests were done using an immunoassay followed by confirmatory GC-MS.
For marijuana, only about half of self-disclosed users had a positive hair test. Under-identification of drug use by hair testing (or over-reporting of use) was also widespread for cocaine, amphetamines, and opioids. Because such under-identification was more common among participants who self-reported infrequent use, the authors suggested that the immunoassay did not have the sensitivity required for such infrequent use. It is worth noting that most earlier studies reported that hair tests found a ca. 50-fold higher prevalence of illicit drug use than self-reports.
In late 2022 the US Federal Motor Carrier Safety Administration denied a petition to recognize hair samples as an alternative drug-testing method for truckers (urine samples are currently used). The agency did not comment on the test's validity, but rather stated that it lacks the statutory authority to adopt new analytical methods.
Although some lower courts may have accepted hair test evidence, there is no controlling judicial ruling in either the federal or any state system declaring any type of hair test as reliable.
Hair testing is now recognized in both the UK and US judicial systems. Guidelines for hair testing, published by the Society of Hair Testing (a private organization in France), specify the markers to be tested for and the cutoff concentrations to be used. Addictive substances that can be detected include cannabis, cocaine, amphetamines and drugs new to the UK such as mephedrone.
Alcohol
In contrast to other drugs consumed, alcohol is deposited directly in the hair. For this reason the investigation procedure looks for direct products of ethanol metabolism. The main part of the alcohol is oxidized in the human body, meaning it is released as water and carbon dioxide. One part of the alcohol reacts with fatty acids to produce esters. The sum of the concentrations of four of these fatty acid ethyl esters (FAEEs: ethyl myristate, ethyl palmitate, ethyl oleate and ethyl stearate) is used as an indicator of alcohol consumption. The amounts found in hair are measured in nanograms (one nanogram equals only one billionth of a gram); with the benefit of modern technology, it is possible to detect such small amounts. In the detection of ethyl glucuronide, or EtG, testing can detect amounts in picograms (one picogram equals 0.001 nanograms).
However, there is one major difference between most drugs and alcohol metabolites in the way in which they enter into the hair. On the one hand, like other drugs, FAEEs enter into the hair via the keratinocytes, the cells responsible for hair growth. These cells form the hair in the root and then grow through the skin surface, taking any substances with them. On the other hand, the sebaceous glands produce FAEEs in the scalp, and these migrate together with the sebum along the hair shaft (Auwärter et al., 2001; Pragst et al., 2004). So these glands lubricate not only the part of the hair that is just growing at 0.3 mm per day at the skin surface, but also the more mature hair growth, providing it with a protective layer of fat.
The relevant order of magnitude for FAEE concentrations in hair is the nanogram (one billionth of a gram), whereas EtG is measured in picograms (one trillionth of a gram). It has been technically possible to measure FAEEs since 1993, and the first study reporting the detection of EtG in hair was done by Sachs in 1993.
In practice, most hair which is sent for analysis has been cosmetically treated in some way (bleached, permed etc.). It has been proven that FAEEs are not significantly affected by such treatments (Hartwig et al., 2003a). FAEE concentrations in hair from other body sites can be interpreted in a similar fashion as scalp hair (Hartwig et al., 2003b).
Presumptive substance testing
Presumptive substance tests attempt to identify a suspicious substance, material or surface where traces of drugs are thought to be, instead of testing individuals through biological methods such as urine or hair testing. The test involves mixing the suspicious material with a chemical in order to trigger a color change to indicate if a drug is present. Most are now available over-the-counter for consumer use, and do not require a lab to read results.
Benefits to this method include that the person who is suspected of drug use does not need to be confronted or aware of testing. Only a very small amount of material is needed to obtain results, and can be used to test powder, pills, capsules, crystals, or organic material. There is also the ability to detect illicit material when mixed with other non-illicit materials. The tests are used for general screening purposes, offering a generic result for the presence of a wide range of drugs, including Heroin, Cocaine, Methamphetamine, Amphetamine, Ecstasy/MDMA, Methadone, Ketamine, PCP, PMA, DMT, MDPV, and may detect rapidly evolving synthetic designer drugs. Separate tests for Marijuana/Hashish are also available.
There are five primary color-test reagents used for general screening purposes. The Marquis reagent turns a variety of colors in the presence of different substances. The Dille-Koppanyi reagent uses two chemical solutions which turn violet-blue in the presence of barbiturates. The Duquenois-Levine reagent is a series of chemical solutions that turn purple when marijuana vegetation is added. The Van Urk reagent turns blue-purple in the presence of LSD. The Scott test's chemical solution shows up as a faint blue for cocaine base.
In recent years, the use of presumptive test kits in the criminal justice system has come under great scrutiny due to the lack of forensic studies, questioned reliability, the rendering of false positives with legal substances, and wrongful arrests.
Saliva drug screen / Oral fluid-based drug screen
Saliva / oral fluid-based drug tests can generally detect use during the previous few days. It is better at detecting very recent use of a substance. THC may only be detectable for 2–24 hours in most cases. On site drug tests are allowed per the Department of Labor.
Detection in saliva tests begins almost immediately upon use of the following substances, and lasts for approximately the following times:
Alcohol: 6–12 hours
Marijuana: 1–24 hours
A disadvantage of saliva based drug testing is that it is not approved by FDA or SAMHSA for use with DOT / Federal Mandated Drug Testing. Oral fluid is not considered a bio-hazard unless there is visible blood; however, it should be treated with care.
Sweat drug screen
Sweat patches are attached to the skin to collect sweat over a long period of time (up to 14 days). These are used by child protective services, parole departments, and other government institutions concerned with drug use over long periods, when urine testing is not practical.
There are also surface drug tests that test for the metabolite of parent drug groups in the residue of drugs left in sweat. An example of a rapid, non-invasive, sweat-based drug test is fingerprint drug screening. This 10-minute fingerprint test is in use by a variety of organisations in the UK and beyond, including within workplaces, drug treatment and family safeguarding services, at airport border control (to detect drug mules), and in mortuaries to assist in investigations into cause of death.
Blood
Drug-testing a blood sample measures whether or not a drug or a metabolite is in the body at a particular time. These types of tests are considered to be the most accurate way of telling if a person is intoxicated. Blood drug tests are not used very often because they need specialized equipment and medically trained administrators.
Depending on how much marijuana was consumed, it can usually be detected in blood tests within six hours of consumption. After six hours has passed, the concentration of marijuana in the blood decreases significantly. It generally disappears completely within 30 days.
Random drug testing
Random drug testing can occur at any time, usually when the investigator has reason to believe, based on behavior, that a substance is possibly being used by the subject, or immediately after an employee-related incident occurs during work hours. Testing protocol typically conforms to the national medical standard; candidates are given up to 120 minutes to reasonably produce a urine sample from the time of commencement (in some instances this time frame may be extended at the examiner's discretion).
Diagnostic screening
In the case of life-threatening symptoms, unconsciousness, or bizarre behavior in an emergency situation, screening for common drugs and toxins may help find the cause, called a toxicology test or tox screen to denote the broader area of possible substances beyond just self-administered drugs. These tests can also be done post-mortem during an autopsy in cases where a death was not expected. The test is usually done within 96 hours (4 days) after the desire for the test is realized. Both a urine sample and a blood sample may be tested. A blood sample is routinely used to detect ethanol/methanol and ASA/paracetamol intoxication. Various panels are used for screening urine samples for common substances, e.g. triage 8 that detects amphetamines, benzodiazepines, cocaine, methadone, opiates, cannabis, barbiturates and tricyclic antidepressants. Results are given in 10–15 min.
Similar screenings may be used to evaluate the possible use of date rape drugs. This is usually done on a urine sample.
Optional harm reduction scheme
Drug checks/tests (also known as pill testing) are provided at some events such as concerts and music festivals. Attendees can voluntarily hand over a sample of any drug or drugs in their possession to be tested to check what the drug is and its purity. The scheme is used as a harm reduction technique so people are more aware of what they are taking and the potential risks.
Occupational harm reduction strategies
Drug and alcohol impairment while at work increases the risk of workplace accidents and decreases productivity. Employers such as the commercial driving and airline industries may conduct random drug tests on employees with the goal of deterring use and improving safety. There is some evidence that increasing the use of random drug testing in the airline industry reduces the percentage of people who test positive; however, it is unclear if this decrease is associated with a corresponding decrease in fatal or non-fatal injuries, other accidents, or the number of days absent from work. It is also not clear if there are other unwanted side effects that may result from random drug and alcohol testing in the workplace.
Commonly tested substances
Anabolic steroids
Anabolic steroids are used to enhance performance in sports and as they are prohibited in most high-level competitions drug testing is used extensively in order to enforce this prohibition. This is particularly so in individual (rather than team) sports such as athletics and cycling.
Methodologies
Before testing samples, the tamper-evident seal is checked for integrity. If it appears to have been tampered with or damaged, the laboratory rejects the sample and does not test it.
Next, the sample must be made testable. Urine and oral fluid can be used "as is" for some tests, but other tests require the drugs to be extracted from urine. Strands of hair, patches, and blood must be prepared before testing. Hair is washed in order to eliminate second-hand sources of drugs on the surface of the hair, then the keratin is broken down using enzymes. Blood plasma may need to be separated by centrifuge from blood cells prior to testing. Sweat patches are opened and the sweat collection component is removed and soaked in a solvent to dissolve any drugs present.
Laboratory-based drug testing is done in two steps. The first step is the screening test, which is an immunoassay-based test applied to all samples. The second step, known as the confirmation test, is usually undertaken by a laboratory using highly specific chromatographic techniques and only applied to samples that test positive during the screening test. Screening tests are usually done by immunoassay (EMIT, ELISA, and RIA are the most common). A "dipstick" drug testing method which could provide screening test capabilities to field investigators has been developed at the University of Illinois.
After a suspected positive sample is detected during screening, the sample is tested using a confirmation test. Samples that are negative on the screening test are discarded and reported as negative. The confirmation test in most laboratories (and all SAMHSA certified labs) is performed using mass spectrometry, and is precise but expensive. False positive samples from the screening test will almost always be negative on the confirmation test. Samples testing positive during both screening and confirmation tests are reported as positive to the entity that ordered the test. Most laboratories save positive samples for some period of months or years in the event of a disputed result or lawsuit. For workplace drug testing, a positive result is generally not confirmed without a review by a Medical Review Officer who will normally interview the subject of the drug test.
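A minimal sketch of the screen-then-confirm flow described above; the class fields, function names, and cutoff value are hypothetical placeholders, not a real laboratory interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sample:
    sample_id: str
    immunoassay_positive: bool                   # step 1: screening result
    gcms_concentration: Optional[float] = None   # step 2: run only on screen-positives

def process_sample(sample: Sample, confirmation_cutoff: float) -> str:
    """Mirror the laboratory flow: screen all samples, confirm only positives."""
    if not sample.immunoassay_positive:
        return "negative (screen)"          # discarded and reported negative
    if sample.gcms_concentration is None:
        return "pending confirmation"       # awaiting GC-MS / LC-MS analysis
    if sample.gcms_concentration >= confirmation_cutoff:
        return "positive (confirmed)"       # reported; sample retained, MRO review follows
    return "negative (screening false positive)"

print(process_sample(Sample("A1", False), confirmation_cutoff=15.0))
print(process_sample(Sample("B2", True, gcms_concentration=42.0), confirmation_cutoff=15.0))
```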
Urine drug testing
Urine drug test kits are available as on-site tests or laboratory analysis. Urinalysis is the most common test type; it is used by federally mandated drug testing programs and is considered the gold standard of drug testing. Urine-based tests have been upheld in most courts for more than 30 years. However, urinalysis conducted by the Department of Defense has been challenged for the reliability of testing the metabolite of cocaine. There are two associated metabolites of cocaine, benzoylecgonine (BZ) and ecgonine methyl ester (EME): the first (BZ) is created by the presence of cocaine in an aqueous solution with a pH greater than 7.0, while the second (EME) results from the actual human metabolic process. The presence of EME confirms actual ingestion of cocaine by a human being, while the presence of BZ is indicative only. BZ without EME is evidence of sample contamination; however, the US Department of Defense has chosen not to test for EME in its urinalysis program.
A number of different analytes (the unknown substances being tested for) are available on urine drug screens.
Spray drug testing
Spray (sweat) drug test kits are non-invasive. It is a simple process to collect the required specimen, no bathroom is needed, no laboratory is required for analysis, and the tests themselves are difficult to manipulate and relatively tamper-resistant. The detection window is long and can detect recent drug use within several hours. | Drug test | Wikipedia | 466 | 986871 | https://en.wikipedia.org/wiki/Drug%20test | Biology and health sciences | Diagnostics | Health |
There are also some disadvantages to spray or sweat testing. There is not much variety in these drug tests, only a limited number of drugs can be detected, prices tend to be higher, and inconclusive results can be produced by variations in sweat production rates in donors. They also have a relatively long specimen collection period and are more vulnerable to contamination than other common forms of testing.
Hair drug testing
Hair drug testing is a method that can detect drug use over a much longer period of time than saliva, sweat or urine tests. Hair testing is also more robust with respect to tampering. Thus, hair sampling is preferred by the US military and by many large corporations, which are subject to the Drug-Free Workplace Act of 1988.
Head hair normally grows at a rate of about 0.5 inches per month. Thus, the most common hair sample, 1.5 inches measured from the scalp, would detect drug use within the last 90–100 days. 80–120 strands of hair are sufficient for the test. In the absence of hair on the head, body hair can be used as an acceptable substitute, including facial hair, underarm, arm, and leg hair, or even pubic hair. Because body hair usually grows more slowly than head hair, drugs can often be detected in body hair for longer periods, e.g. up to 12 months. Currently, most entities that use hair testing have prescribed consequences for individuals who remove hair to avoid a hair drug test.
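The detection-window arithmetic above is simple enough to sketch directly; the head-hair figures come from this section, while the slower body-hair growth rate in the second call is an assumed value used only for illustration:

```python
def detection_window_days(sample_length_in: float, growth_in_per_month: float = 0.5) -> float:
    """Approximate look-back period for a hair sample, in days.

    Uses the rough rule quoted above: head hair grows ~0.5 inch per month,
    so a 1.5 inch sample covers about 3 months (~90 days).
    """
    months = sample_length_in / growth_in_per_month
    return months * 30.0  # approximate days per month

print(detection_window_days(1.5))                            # ~90 days (head hair)
print(detection_window_days(1.5, growth_in_per_month=0.35))  # assumed slower body-hair rate -> longer window
```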
Most drugs are analysed in hair samples not as the original psychoactive molecules but as their metabolites. For example, ethanol is determined as ethyl glucuronide, while cocaine use is confirmed using ecgonine. Testing for metabolites reduces the likelihood of false positive results due to contamination. One disadvantage of hair testing is that it cannot detect recent drug use, because it takes at least a week after intake for the metabolites to show up in growing hair above the skin. Urine tests are better suited for detecting recent (within a week) drug use.
In a practical test, the hair sample is usually washed with a low-polarity solvent (such as dichloromethane) to remove surface contamination. Then the sample is pulverized and extracted with a more polar solvent, such as methanol.
Although a thousand different substances can in principle be determined in a single gas chromatography–mass spectrometry or liquid chromatography–mass spectrometry experiment, due to the low concentration of analytes, practical measurements (see selective ion monitoring) are limited to a smaller number (10–20) of analytes. Designer drugs are usually missed in such measurements, because the analyst must know in advance what chemicals to look for.
Most hair testing laboratories use the aforementioned chromatography–mass spectrometry methods only for confirmation or for rarely tested drugs. Mass screening (preliminary or final) is usually done with immunoassays because of their lower cost.
Legality, ethics and politics
The results of federally mandated drug testing were similar to the effects of simply extending to the trucking industry the right to perform drug tests, and it has been argued that the latter approach would have been equally effective at lower cost.
Psychologist Tony Buon has criticized the use of workplace drug testing on a number of grounds, including:
Flawed technology: the real-world performance of testing is much lower than that claimed by its promoters. Buon suggests that tests are probably adequate for rehabilitation and treatment situations, possibly adequate for pre-employment situations, but not for dismissing employees.
Ethical issues: because of the fairly simple ways that an employee can invalidate the test, drug testing must be strictly monitored, which means that the specimen must be observed leaving the body. Many legal objections currently being raised in the courts about drug testing point to legal requirements of prior notice, consent, due process, and cause.
Wrong focus: As has been shown with Employee Assistance Programs, the focus of management concern should be on work performance decline. Buon suggests effective management practices are an infinitely better approach to managing workplace alcohol and other drug issues.
Tony Buon has also been reported by the CIPD as stating that "drug testing captures the stupid—experienced drug users know how to beat the tests".
From a penological standpoint, one purpose of drug testing is to help classify the people taking the drug test within risk groups, so that those who pose more of a danger to the public can be incapacitated through incarceration or other restrictions on liberty. Thus, drug testing serves a crime-control purpose even if there is no expectation of rehabilitating the drug user through treatment, deterring drug use through sanctions, or sending a message that drug use is a deviant behavior that will not be tolerated.
United Kingdom
A study in 2004 by the Independent Inquiry into Drug Testing at Work found that attempts by employers to force employees to take drug tests could potentially be challenged as a violation of privacy under the Human Rights Act 1998 and Article 8 of the European Convention on Human Rights. However, this does not apply to industries where drug testing is a matter of personal and public safety or security rather than productivity.
United States
In consultation with Dr. Carlton Turner, President Ronald Reagan issued Executive Order 12564. In doing so, he instituted mandatory drug-testing for all safety-sensitive executive-level and civil-service Federal employees. This was challenged in the courts by the National Treasury Employees Union. In 1988, this challenge was considered by the US Supreme Court. A similar challenge resulted in the Court extending the drug-free workplace concept to the private sector. These decisions were then incorporated into the White House Drug Control Strategy directive issued by President George H.W. Bush in 1989. All defendants serving on federal probation or federal supervised release are required to submit to at least three drug tests. Failing a drug test can be construed as possession of a controlled substance, resulting in mandatory revocation and imprisonment.
There have been inconsistent evaluation results as to whether continued pretrial drug testing has beneficial effects.
Testing positive can lead to bail not being granted, or if bail has already been granted, to bail revocation or other sanctions. Arizona also adopted a law in 1987 authorizing mandatory drug testing of felony arrestees for the purpose of informing the pretrial release decision, and the District of Columbia has had a similar law since the 1970s. It has been argued that one of the problems with such testing is that there is often not enough time between the arrest and the bail decision to confirm positive results using GC/MS technology. It has also been argued that such testing potentially implicates the Fifth Amendment privilege against self-incrimination, the right to due process (including the prohibition against gathering evidence in a manner that shocks the conscience or constitutes outrageous government conduct), and the prohibition against unreasonable searches and seizures contained in the Fourth Amendment.
According to Henriksson, the anti-drug appeals of the Reagan administration "created an environment in which many employers felt compelled to implement drug testing programs because failure to do so might be perceived as condoning drug use. This fear was easily exploited by aggressive marketing and sales forces, who often overstated the value of testing and painted a bleak picture of the consequences of failing to use the drug testing product or service being offered." On March 10, 1986, the Commission on Organized Crime asked all U.S. companies to test employees for drug use. By 1987, nearly 25% of the Fortune 500 companies used drug tests.
According to an uncontrolled self-report study done by DATIA and Society for Human Resource Management in 2012 (sample of 6,000 randomly selected human resource professionals), human resource professionals reported the following results after implementing a drug testing program: 19% of companies reported a subjective increase in employee productivity, 16% reported a decrease in employee turnover (8% reported an increase), and unspecified percentages reported decreases in absenteeism and improvement of workers' compensation incidence rates.
According to the US Chamber of Commerce, 70% of all illicit drug users are employed. Some industries have high rates of employee drug use, such as construction (12.8%), repair (11.1%), and hospitality (7.9-16.3%).
Australia
A person conducting a business or undertaking (PCBU—the new term that includes employers) has duties under the work health and safety (WHS) legislation to ensure a worker affected by alcohol or other drugs does not place themselves or other persons at risk of injury while at work. Workplace policies and prevention programs can help change the norms and culture around substance use.
All organisations—large and small—can benefit from an agreed policy on alcohol and drug misuse that applies to all workers. Such a policy should form part of an organisation's overall health and safety management system. PCBUs are encouraged to establish a policy and procedure, in consultation with workers, to constructively manage alcohol and other drug related hazards in their workplace. A comprehensive workplace alcohol and other drug policy should apply to everyone in the workplace and include prevention, education, counselling and rehabilitation arrangements. In addition, the roles and responsibilities of managers and supervisors should be clearly outlined.
All Australian workplace drug testing must comply with Australian standard AS/NZS 4308:2008.
In Victoria, roadside saliva tests detect the following drugs:
THC (Delta-9 tetrahydrocannabinol), the active component in cannabis.
methamphetamine, also known as "ice", "crystal" and "crank".
MDMA (Methylenedioxymethamphetamine), which is known as ecstasy.
In February 2016 a New South Wales magistrate "acquitted a man who tested positive for cannabis". He had been arrested and charged after testing positive during a roadside drug test, despite not having smoked for nine days. He was relying on advice previously given to him by police.
Refusal
In the United States federal criminal system, refusing to take a drug test triggers an automatic revocation of probation or supervised release.
In Victoria, Australia, the driver of the car has the option to refuse the drug test. Refusing to undergo a drug test, or refusing to undergo a secondary drug test after the first one, triggers an automatic suspension and disqualification for a period of two years and a fine of AUD$1000. A second refusal triggers an automatic suspension and disqualification for a period of four years and an even larger fine.
Historical cases
In 2000, an Australian mining company, South Blackwater Coal Ltd, with 400 employees, imposed drug-testing procedures, and the trade unions advised their members to refuse to take the tests, partly because a positive result does not necessarily indicate present impairment; the workers were stood down by the company without pay for a week.
In 2003, sixteen members of the Chicago White Sox considered refusing to take a drug test, in hopes of making steroid testing mandatory.
In 2006, Levy County, Florida, volunteer librarians resigned en masse rather than take drug tests.
In 2010, Iranian super heavyweight class weightlifters refused to submit to a drug test authorized by the Iran Weightlifting League.
The Siberian Traps are a large region of volcanic rock, known as a large igneous province, in Siberia, Russia. The massive eruptive event that formed the traps is one of the largest known volcanic events in the last 500 million years.
The eruptions continued for roughly two million years and spanned the Permian–Triassic boundary, or P–T boundary, which occurred around 251.9 million years ago. The Siberian Traps are believed to be the primary cause of the Permian–Triassic extinction event, the most severe extinction event in the geologic record. Subsequent periods of Siberian Traps activity have been linked to a number of smaller biotic crises, including the Smithian-Spathian, Olenekian-Anisian, Middle-Late Anisian, and Anisian-Ladinian extinction events.
Large volumes of basaltic lava covered a large expanse of Siberia in a flood basalt event. Today, the area remains covered by an immense expanse of basaltic rock.
Etymology
The term "trap" has been used in geology since 1785–1795 for such rock formations. It is derived from the Swedish word for stairs ("trappa") and refers to the step-like hills forming the landscape of the region.
Formation
The source of the Siberian Traps basaltic rock has been attributed to a mantle plume, which rose until it reached the bottom of the Earth's crust, producing volcanic eruptions through the Siberian Craton. It has been suggested that, as the Earth's lithospheric plates moved over the mantle plume (the Iceland plume), the plume produced the Siberian Traps in the Permian and Triassic periods, after earlier producing the Viluy Traps to the east, and later going on to produce volcanic activity on the floor of the Arctic Ocean in the Jurassic and Cretaceous, and then generating volcanic activity in Iceland. Other plate tectonic causes have also been suggested. Another possible cause may be the impact that formed the Wilkes Land crater in Antarctica, which is estimated to have occurred around the same time and been nearly antipodal to the traps.
The main source of rock in this formation is basalt, but both mafic and felsic rocks are present, so this formation is officially called a Flood Basalt Province. The inclusion of mafic and felsic rock indicates multiple other eruptions that occurred and coincided with the one-million-year-long set of eruptions that created the majority of the basaltic layers. The traps are divided into sections based on their chemical, stratigraphical, and petrographical composition.
The Siberian Traps are underlain by the Tungus Syneclise, a large sedimentary basin containing thick sequences of early to middle Paleozoic carbonate and evaporite deposits, as well as Carboniferous–Permian coal-bearing clastic rocks. When heated, for example by igneous intrusions, such rocks are capable of emitting large amounts of toxic and greenhouse gases.
Effects on prehistoric life
One of the major questions is whether the Siberian Traps were directly responsible for the Permian–Triassic mass extinction event that occurred 250 million years ago, or if they were themselves caused by some other, larger event, such as an asteroid impact. One hypothesis put forward is that the volcanism triggered the growth of Methanosarcina, a microbe that then emitted large amounts of methane into Earth's atmosphere, ultimately altering the Earth's carbon cycle based on observations such as a significant increase of inorganic carbon reservoirs in marine environments. Recent research has highlighted the impact of vegetative deposition in the preceding Carboniferous period on the severity of the disruption to the carbon cycle.
This extinction event, also colloquially called the Great Dying, affected all life on Earth and is estimated to have killed about 81% of all marine species and 70% of terrestrial vertebrate species living at the time. Some of the disastrous events that affected the Earth continued to repeat themselves five to six million years after the initial extinction occurred. Over time, a small portion of the life that survived the extinction was able to repopulate and expand, starting with low trophic levels (producers) until the higher trophic levels (consumers) could be re-established. Calculations of sea water temperature from δ18O measurements indicate that at the peak of the extinction the Earth underwent lethally hot global warming, in which equatorial ocean temperatures exceeded 40 °C. It took roughly eight to nine million years for a diverse ecosystem to be re-established; however, new classes of animals that had not existed before appeared after the extinction.
Palaeontological evidence further indicates that the global distribution of tetrapods vanished between latitudes approximating 40° south to 30° north, with very rare exceptions in the region of Pangaea that is today Utah. This tetrapod gap of equatorial Pangaea coincides with an end-Permian to Middle Triassic global "coal gap" that indicates the loss of peat swamps. Peat formation, a product of high plant productivity, was reestablished only in the Anisian stage of the Triassic, and even then only in high southern latitudes, although gymnosperm forests appeared earlier (in the Early Spathian), but again only in northern and southern higher latitudes. In equatorial Pangaea, the establishment of conifer-dominated forests was not until the end of the Spathian, and the first coals at these latitudes did not appear until the Carnian, around 15 million years after their end-Permian disappearance. These signals suggest equatorial temperatures exceeded their thermal tolerance for many marine vertebrates at least during two thermal maxima, whereas terrestrial equatorial temperatures were sufficiently severe to suppress plant and animal abundance during most of the Early Triassic.
Dating
The volcanism that occurred in the Siberian Traps resulted in copious amounts of magma being ejected from the Earth's crust, leaving permanent traces of rock from the same time period as the mass extinction that can be examined today. More specifically, zircon is found in some of the volcanic rocks. To improve the accuracy of the ages, zircon grains of varying ages were organized into a timeline based on when they crystallized. The CA-TIMS technique, a chemical-abrasion age-dating method that eliminates inaccuracy due to lead loss from zircon over time, was then used to determine the ages of the zircons found in the Siberian Traps. With that variability eliminated, the uranium within the zircon became the central focus in linking the voluminous Siberian Traps magmatism to the Permian–Triassic mass extinction.
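For context on how such a zircon date is computed (a standard radiometric-dating relation, not a detail given in the source), the uranium–lead age follows from the exponential decay law, using the measured ratio of radiogenic lead to remaining uranium:

$$ t = \frac{1}{\lambda_{238}} \ln\!\left(1 + \frac{{}^{206}\mathrm{Pb}^{*}}{{}^{238}\mathrm{U}}\right), \qquad \lambda_{238} \approx 1.55125 \times 10^{-10}\,\mathrm{yr}^{-1}, $$

where 206Pb* denotes radiogenic lead. Chemical abrasion improves the measured ratio by removing crystal domains that have lost lead, which is why CA-TIMS dates are less scattered.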
To strengthen the connection with the Permian–Triassic extinction event, other catastrophic events from the same period, such as sea-level changes, meteor impacts and volcanism, have also been examined. Focusing on volcanism, rock samples from the Siberian Traps and other southern regions were obtained and compared. Basalt and gabbro samples from several regions close to and within the Siberian Traps were dated by the argon–argon (40Ar/39Ar) method. Feldspar and biotite were specifically used to constrain the samples' ages and the duration of magmatism in the Siberian Traps. The majority of the basalt and gabbro samples dated to 250 million years ago, covered a surface area of five million square kilometres across the Siberian Traps, and formed within a short period of time with rapid rock solidification and cooling. Studies confirmed that gabbro and basalt samples of the same age from the other southern regions matched the age of samples within the Siberian Traps, supporting the link between the volcanic rocks of the Siberian Traps, along with rock samples from the other southern regions, and the Permian–Triassic mass extinction event.
Mineral deposits
The giant Norilsk-Talnakh nickel–copper–palladium deposit formed within the magma conduits in the most complete part of the Siberian Traps. It has been linked to the Permian–Triassic extinction event, which occurred approximately 251.4 million years ago, based on large amounts of nickel and other elements found in rock beds that were laid down after the extinction occurred. The method used to correlate the extinction event with the surplus amount of nickel located in the Siberian Traps compares the timeline of the magmatism within the traps and the timeline of the extinction itself. Before the linkage between magmatism and the extinction event was discovered, it was hypothesized that the mass extinction and volcanism occurred at the same time due to the linkages in rock composition.
"NFPA 704: Standard System for the Identification of the Hazards of Materials for Emergency Response" is a standard maintained by the U.S.-based National Fire Protection Association. First "tentatively adopted as a guide" in 1960, and revised several times since then, it defines the "Safety Square" or "Fire Diamond" which is used to quickly and easily identify the risks posed by hazardous materials. This helps determine what, if any, special equipment should be used, procedures followed, or precautions taken during the initial stages of an emergency response. It is an internationally accepted safety standard, and is crucial while transporting chemicals.
Codes
The four divisions are typically color-coded, with red on top indicating flammability, blue on the left indicating level of health hazard, yellow on the right for chemical reactivity, and white containing codes for special hazards. Each of health, flammability and reactivity is rated on a scale from 0 (no hazard) to 4 (severe hazard). Sections 5 through 8 of the latest version of NFPA 704 give the specifications for each classification. The numeric values are designated in the standard by "Degree of Hazard" using Arabic numerals (0, 1, 2, 3, 4), not to be confused with other classification systems, such as that in the NFPA 30 Flammable and Combustible Liquids Code, where flammable and combustible liquid categories are designated by "Class", using Roman numerals (I, II, III).
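As a concrete illustration of the coding scheme (a hypothetical sketch, not an official NFPA data format), a fire-diamond label reduces to three 0-4 ratings plus a set of special-hazard symbols:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FireDiamond:
    """Toy model of an NFPA 704 label: three 0-4 ratings plus special codes."""
    health: int        # blue quadrant
    flammability: int  # red quadrant
    reactivity: int    # yellow quadrant
    special: frozenset = field(default_factory=frozenset)  # white quadrant

    def __post_init__(self):
        for rating in (self.health, self.flammability, self.reactivity):
            if not 0 <= rating <= 4:
                raise ValueError("ratings run from 0 (no hazard) to 4 (severe hazard)")

# Example: metallic sodium is commonly labelled 3-3-2 with the special
# code W (reacts dangerously with water).
sodium = FireDiamond(health=3, flammability=3, reactivity=2, special=frozenset({"W"}))
```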
History
The development of NFPA 704 is credited to the Charlotte Fire Department after a fire at the Charlotte Chemical Company in 1959 led to severe injuries to many of the firefighters. Upon arrival, the fire crew found a fire burning inside a vat that firefighters assumed to be burning kerosene. The crew tried to suppress the fire, which resulted in the vat exploding due to metallic sodium being stored in the kerosene. Thirteen firefighters were injured, several of whom had critical injuries while one lost both ears and most of his face from the incident.
At the time, such vats were not labelled with the materials they contained, so the firefighters did not have the necessary information to recognize that hazardous materials were present, which would have required a specific response. In this case, the sodium reacted with water to release hydrogen gas and large amounts of heat, a combination with the potential to explode.
The Charlotte Fire Department developed training to respond to fires involving hazardous materials, ensured that protective clothing was available to those responding, and expanded the fire prevention inspection program. Fire Marshal J. F. Morris developed the diamond shaped placard as a marking system to indicate when a building contained hazardous materials, with their levels of flammability, reactivity and health effects.
The Harmattan is a season in West Africa that occurs between the end of November and the middle of March. It is characterized by the dry and dusty northeasterly trade wind, of the same name, which blows from the Sahara over West Africa into the Gulf of Guinea. The name is related to the word haramata in the Twi language. Temperatures are mostly cold at night in some places but can be very hot during the day in others; in general, temperature contrasts depend on local circumstances.
The Harmattan blows during the dry season, which occurs during the months with the lowest sun. In this season, the subtropical ridge of high pressure stays over the central Sahara and the low-pressure Intertropical Convergence Zone (ITCZ) stays over the Gulf of Guinea. On its passage over the Sahara, the Harmattan picks up fine dust and sand particles (between 0.5 and 10 microns). It is also known as the "doctor wind" because of its invigorating dryness compared with humid tropical air.
Effects
This season differs from winter in that it is characterized by a cold, dry, dust-laden wind and by wide fluctuations in ambient temperature between day and night. Temperatures can stay low all day, yet in the afternoon they can also soar, while the relative humidity drops under 5%. It can also be hot in some regions, such as the Sahara.
The air is particularly dry and desiccating when the Harmattan blows over the region. The Harmattan brings desert-like weather conditions: it lowers the humidity, dissipates cloud cover, prevents rainfall formation and sometimes creates big clouds of dust which can result in dust storms or sandstorms. The wind can increase fire risk and cause severe crop damage. The interaction of the Harmattan with monsoon winds can cause tornadoes.
Harmattan haze
In some countries in West Africa, the heavy amount of dust in the air can severely limit visibility and block the sun for several days, comparable to a heavy fog. This effect is known as the Harmattan haze. It costs airlines millions of dollars in cancelled and diverted flights each year. When the haze is weak, the skies are clear. The extreme dryness of the air may cause branches of trees to die.
Health
A 2024 study found that dust carried by the Harmattan increases infant and child mortality and has persistent adverse health impacts on surviving children.
Humidity can drop below 15%, which can result in spontaneous nosebleeds for some people. Other health effects may include dryness of the skin, chapped lips, and irritation of the eyes and respiratory system, including aggravation of asthma.
In computing, a plug and play (PnP) device or computer bus is one with a specification that facilitates the recognition of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts. The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies.
Expansion devices are controlled and exchange data with the host system through defined memory or I/O space port addresses, direct memory access channels, interrupt request lines and other mechanisms, which must be uniquely associated with a particular device to operate. Some computers provided unique combinations of these resources to each slot of a motherboard or backplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses. As microprocessors made mass-market computers affordable, software configuration of I/O devices was advantageous to allow installation by non-specialist users. Early systems for software configuration of devices included the MSX standard, NuBus, Amiga Autoconfig, and IBM Microchannel. Initially all expansion cards for the IBM PC required physical selection of I/O configuration on the board with jumper straps or DIP switches, but increasingly ISA bus devices were arranged for software configuration. By 1995, Microsoft Windows included a comprehensive method of enumerating hardware at boot time and allocating resources, which was called the "Plug and Play" standard.
Plug and play devices can have resources allocated at boot-time only, or may be hotplug systems such as USB and IEEE 1394 (FireWire).
History of device configuration
Some early microcomputer peripheral devices required the end user physically to cut some wires and solder together others in order to make configuration changes; such changes were intended to be largely permanent for the life of the hardware.
As computers became more accessible to the general public, the need developed for more frequent changes to be made by computer users unskilled with using soldering irons. Rather than cutting and soldering connections, configuration was accomplished by jumpers or DIP switches.
Later, this configuration process was automated: Plug and Play.
MSX
The MSX system, released in 1983, was designed to be plug and play from the ground up, achieving this through a system of slots and subslots, each with its own virtual address space, thus eliminating device addressing conflicts at their very source. No jumpers or other manual configuration were required, and the independent address space for each slot allowed very cheap and commonplace chips to be used, alongside cheap glue logic.
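A toy model (hypothetical code, not MSX firmware) of why per-slot address spaces prevent conflicts: the host's slot-select signal decides which device sees a memory access, so two cartridges may decode the same internal address without clashing.

```python
# Hypothetical toy model of per-slot address spaces; real MSX slot selection
# is done in hardware, with up to four slots each expandable into subslots.
class Slot:
    def __init__(self, name):
        self.name = name
        self.memory = {}              # this slot's private address space

    def write(self, addr, value):
        self.memory[addr] = value

    def read(self, addr):
        return self.memory.get(addr, 0xFF)

slots = {0: Slot("cartridge A"), 1: Slot("cartridge B")}
slots[0].write(0x4000, 0x11)          # both devices use address 0x4000...
slots[1].write(0x4000, 0x22)          # ...yet never conflict, because the host
assert slots[0].read(0x4000) == 0x11  # routes each access to a selected slot
assert slots[1].read(0x4000) == 0x22
```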
On the software side, the drivers and extensions were supplied in the card's own ROM, thus requiring no disks or any kind of user intervention to configure the software. The ROM extensions abstracted any hardware differences and offered standard APIs as specified by ASCII Corporation.
NuBus
In 1984, the NuBus architecture was developed by the Massachusetts Institute of Technology (MIT) as a platform-agnostic peripheral interface that fully automated device configuration. The specification was sufficiently intelligent that it could work with both big-endian and little-endian computer platforms that had previously been mutually incompatible. However, this agnostic approach increased interfacing complexity and required support chips on every device, which was expensive in the 1980s; apart from its use in Apple Macintoshes and NeXT machines, the technology was not widely adopted.
Amiga Autoconfig and Zorro bus
In 1984, Commodore developed the Autoconfig protocol and the Zorro expansion bus for its Amiga line of expandable computers. The first public appearance was in the CES computer show at Las Vegas in 1985, with the so-called "Lorraine" prototype. Like NuBus, Zorro devices had absolutely no jumpers or DIP switches. Configuration information was stored on a read-only device on each peripheral, and at boot time the host system allocated the requested resources to the installed card. The Zorro architecture did not spread to general computing use outside of the Amiga product line, but was eventually upgraded as Zorro II and Zorro III for the later iteration of Amiga computers.
Micro-Channel Architecture
In 1987, IBM released an update to the IBM PC known as the Personal System/2 line of computers using the Micro Channel Architecture. The PS/2 was capable of totally automatic self-configuration. Every piece of expansion hardware was issued with a floppy disk containing a special file used to auto-configure the hardware to work with the computer. The user would install the device, turn on the computer, load the configuration information from the disk, and the hardware automatically assigned interrupts, DMA, and other needed settings.
However, the disks posed a problem if they were damaged or lost, as the only options at the time to obtain replacements were via postal mail or IBM's dial-up BBS service. Without the disks, any new hardware would be completely useless and the computer would occasionally not boot at all until the unconfigured device was removed.
Micro Channel did not gain widespread support, because IBM wanted to exclude clone manufacturers from this next-generation computing platform. Anyone developing for MCA had to sign non-disclosure agreements and pay royalties to IBM for each device sold, putting a price premium on MCA devices. End-users and clone manufacturers revolted against IBM and developed their own open standards bus, known as EISA. Consequently, MCA usage languished except in IBM's mainframes.
ISA and PCI self-configuration
In time, many Industry Standard Architecture (ISA) cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often, the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software-configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set, while avoiding great expense for certain settings, e.g. nonvolatile registers for a base address setting. The problems of required jumpers continued on, but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware. However, these efforts still did not solve the problem of making sure the end-user has the appropriate software driver for the hardware.
ISA PnP or (legacy) Plug & Play ISA was a plug-and-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. It was superseded by the PCI bus during the mid-1990s.
PCI plug and play (autoconfiguration) was based on the PCI BIOS Specification in the 1990s; that specification was superseded by ACPI in the 2000s.
Legacy Plug and Play
In 1995, Microsoft released Windows 95, which tried to automate device detection and configuration as much as possible but could still fall back to manual settings if necessary. During its initial installation, Windows 95 would attempt to automatically detect all devices installed in the system. Since full auto-detection of everything was a new process without full industry support, the installer constantly wrote to a progress-tracking log file during detection. In the event that device probing failed and the system froze, the end user could reboot the computer and restart the detection process, and the installer would use the tracking log to skip past the point that caused the previous freeze.
At the time, there could be a mix of devices in a system, some capable of automatic configuration, and some still using fully manual settings via jumpers and DIP switches. The old world of DOS still lurked underneath Windows 95, and systems could be configured to load devices in three different ways:
through Windows 95 Device Manager drivers only
using DOS drivers loaded in the CONFIG.SYS and AUTOEXEC.BAT configuration files
using a combination of DOS drivers and Windows 95 Device Manager drivers
Microsoft could not assert full control over all device settings, so configuration files could include a mix of driver entries inserted by the Windows 95 automatic configuration process, and could also include driver entries inserted or modified manually by the computer users themselves. The Windows 95 Device Manager also could offer users a choice of several semi-automatic configurations to try to free up resources for devices that still needed manual configuration.
Also, although some later ISA devices were capable of automatic configuration, it was common for PC ISA expansion cards to limit themselves to a very small number of choices for interrupt request lines. For example, a network interface might limit itself to only interrupts 3, 7, and 10, while a sound card might limit itself to interrupts 5, 7, and 12. This results in few configuration choices if some of those interrupts are already used by some other device.
The hardware of PC computers additionally limited device expansion options because interrupts could not be shared, and some multifunction expansion cards would use multiple interrupts for different card functions, such as a dual-port serial card requiring a separate interrupt for each serial port.
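The allocation problem these constraints create can be viewed as a small constraint-satisfaction puzzle: each card accepts only a few interrupt lines, interrupts cannot be shared, and every card needs a distinct line. A minimal backtracking sketch (hypothetical device names and interrupt sets, echoing the examples above):

```python
# Hypothetical sketch of IRQ arbitration as backtracking search; device names
# and allowed-interrupt sets are illustrative, echoing the text above.
def assign_irqs(devices, assigned=None):
    assigned = assigned or {}
    if len(assigned) == len(devices):
        return assigned                       # every device has a line
    name, allowed = next(d for d in devices if d[0] not in assigned)
    for irq in sorted(allowed):
        if irq not in assigned.values():      # interrupts cannot be shared
            result = assign_irqs(devices, {**assigned, name: irq})
            if result is not None:
                return result
    return None                               # conflict: no valid configuration

devices = [("network card", {3, 7, 10}),
           ("sound card",   {5, 7, 12}),
           ("serial port",  {3, 4})]
print(assign_irqs(devices))  # {'network card': 3, 'sound card': 5, 'serial port': 4}
```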
Because of this complex operating environment, the autodetection process sometimes produced incorrect results, especially in systems with large numbers of expansion devices. This led to device conflicts within Windows 95, resulting in devices which were supposed to be fully self-configuring failing to work. The unreliability of the device installation process led to Plug and Play being sometimes referred to as Plug and Pray.
Until approximately 2000, PC computers could still be purchased with a mix of ISA and PCI slots, so it was still possible that manual ISA device configuration might be necessary. But with successive releases of new operating systems like Windows 2000 and Windows XP, Microsoft had sufficient clout to say that drivers would no longer be provided for older devices that did not support auto-detection. In some cases, the user was forced to purchase new expansion devices or a whole new system to support the next operating system release.
Current plug and play interfaces
Several completely automated computer interfaces are currently used, each of which requires no device configuration or other action on the part of the computer user, apart from software installation, for the self-configuring devices. These interfaces include:
IEEE 1394 (FireWire)
PCI, Mini PCI
PCI Express, Mini PCI Express, Thunderbolt
PCMCIA, PC Card, ExpressCard
SATA, Serial Attached SCSI
USB
DVI, HDMI
For most of these interfaces, very little technical information is available to the end user about the performance of the interface. Although both FireWire and USB have bandwidth that must be shared by all devices, most modern operating systems are unable to monitor and report the amount of bandwidth being used or available, or to identify which devices are currently using the interface.
The chlorite ion, or chlorine dioxide anion, is the halite with the chemical formula ClO2−. A chlorite (compound) is a compound that contains this group, with chlorine in the oxidation state of +3. Chlorites are also known as salts of chlorous acid.
Compounds
The free acid, chlorous acid HClO2, is the least stable oxoacid of chlorine and has only been observed as an aqueous solution at low concentrations. Since it cannot be concentrated, it is not a commercial product. The alkali metal and alkaline earth metal compounds are all colorless or pale yellow, with sodium chlorite (NaClO2) being the only commercially important chlorite. Heavy metal chlorites (Ag+, Hg+, Tl+, Pb2+, and also Cu2+ and NH4+) are unstable and decompose explosively with heat or shock.
Sodium chlorite is derived indirectly from sodium chlorate, NaClO3. First, the explosively unstable gas chlorine dioxide, ClO2 is produced by reducing sodium chlorate with a suitable reducing agent such as methanol, hydrogen peroxide, hydrochloric acid or sulfur dioxide.
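In a second step (standard industrial practice, though not spelled out in the excerpt above), the chlorine dioxide is absorbed into an alkaline solution and reduced, typically with hydrogen peroxide, to give sodium chlorite. A representative balanced equation is:

2 ClO2 + 2 NaOH + H2O2 → 2 NaClO2 + O2 + 2 H2O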
Structure and properties
The chlorite ion adopts a bent molecular geometry, due to the effects of the lone pairs on the chlorine atom, with an O–Cl–O bond angle of 111° and Cl–O bond lengths of 156 pm.
Chlorite is the strongest oxidiser of the chlorine oxyanions on the basis of standard half cell potentials.
Uses
The most important chlorite is sodium chlorite (NaClO2), used in the bleaching of textiles, pulp, and paper. However, despite its strongly oxidizing nature, it is often not used directly, being instead used to generate the neutral species chlorine dioxide (ClO2), normally via a reaction with HCl:
5 NaClO2 + 4 HCl → 5 NaCl + 4 ClO2 + 2 H2O
Health risks
In 2009, the California Office of Environmental Health Hazard Assessment (OEHHA) released a public health goal of keeping chlorite in drinking water below 50 parts per billion, after scientists in the state reported that exposure to higher levels of chlorite affected sperm and thyroid function, caused stomach ulcers, and caused red blood cell damage in laboratory animals. Some studies have indicated that at certain levels chlorite may also be carcinogenic.
The federal legal limit in the United States allows chlorite up to levels of 1,000 parts per billion in drinking water, 20 times as much chlorite as California’s public health goal.
Other oxyanions
Several oxyanions of chlorine exist, in which it can assume oxidation states of −1, +1, +3, +5, or +7 within the corresponding anions Cl−, ClO−, ClO2−, ClO3−, or ClO4−, known commonly and respectively as chloride, hypochlorite, chlorite, chlorate, and perchlorate. These are part of a greater family of other chlorine oxides.
Arsenopyrite (IMA symbol: Apy) is an iron arsenic sulfide (FeAsS). It is a hard (Mohs 5.5–6) metallic, opaque, steel grey to silver white mineral with a relatively high specific gravity of 6.1.
When dissolved in nitric acid, it releases elemental sulfur. When arsenopyrite is heated, it produces sulfur and arsenic vapor. With 46% arsenic content, arsenopyrite, along with orpiment, is a principal ore of arsenic. When deposits of arsenopyrite become exposed to the atmosphere, the mineral slowly converts into iron arsenates. Arsenopyrite is generally an acid-consuming sulfide mineral, unlike iron pyrite which can lead to acid mine drainage.
The crystal habit, hardness, density, and garlic odour when struck are diagnostic. Arsenopyrite in older literature may be referred to as mispickel, a name of German origin. It is also sometimes referred to as mundic, a word derived from Cornish dialect and which also refers to a copper ore, as well as a form of deterioration in aggregate concrete made with mine tailings.
Arsenopyrite also can be associated with significant amounts of gold. Consequently, it serves as an indicator of gold bearing reefs. Many arsenopyrite gold ores are refractory, i.e. the gold is not easily cyanide leached from the mineral matrix.
Arsenopyrite is found in high temperature hydrothermal veins, in pegmatites, and in areas of contact metamorphism or metasomatism.
Crystallography
Arsenopyrite crystallizes in the monoclinic crystal system and often shows prismatic crystal or columnar forms with striations and twinning common. Arsenopyrite may be referred to in older references as orthorhombic, but it has been shown to be monoclinic. In terms of its atomic structure, each Fe center is linked to three As atoms and three S atoms. The material can be described as Fe3+ with the diatomic trianion AsS3−. The connectivity of the atoms is more similar to that in marcasite than pyrite. The ion description is imperfect because the material is semiconducting and the Fe-As and Fe-S bonds are highly covalent.
Related minerals
Various transition group metals can substitute for iron in arsenopyrite. The arsenopyrite group includes the following rare minerals:
Clinosafflorite: (Co,Fe,Ni)As2
Gudmundite: FeSbS
Glaucodot or alloclasite: (Co,Fe)AsS (orthorhombic and monoclinic dimorphs, respectively)
Iridarsenite: (Ir,Ru)As2
Osarsite or ruarsite: (Os,Ru)AsS or (Ru,Os)AsS
Chalcopyrite ( ) is a copper iron sulfide mineral and the most abundant copper ore mineral. It has the chemical formula CuFeS2 and crystallizes in the tetragonal system. It has a brassy to golden yellow color and a hardness of 3.5 to 4 on the Mohs scale. Its streak is diagnostic as green-tinged black.
On exposure to air, chalcopyrite tarnishes to a variety of oxides, hydroxides, and sulfates. Associated copper minerals include the sulfides bornite (Cu5FeS4), chalcocite (Cu2S), covellite (CuS), digenite (Cu9S5); carbonates such as malachite and azurite, and rarely oxides such as cuprite (Cu2O). It is rarely found in association with native copper. Chalcopyrite is a conductor of electricity.
Copper can be extracted from chalcopyrite ore using various methods. The two predominant methods are pyrometallurgy and hydrometallurgy, the former being the most commercially viable.
Etymology
The name chalcopyrite comes from the Greek words chalkos (χαλκός), which means copper, and pyritēs (πυρίτης), which means striking fire. It was sometimes historically referred to as "yellow copper".
Identification
Chalcopyrite is often confused with pyrite and gold since all three of these minerals have a yellowish color and a metallic luster. Some important mineral characteristics that help distinguish these minerals are hardness and streak. Chalcopyrite is much softer than pyrite and can be scratched with a knife, whereas pyrite cannot be scratched by a knife. However, chalcopyrite is harder than gold, which, if pure, can be scratched by copper. Additionally, gold is malleable, while chalcopyrite is brittle. Chalcopyrite has a distinctive black streak with green flecks in it. Pyrite has a black streak and gold has a yellow streak.
Chemistry
Natural chalcopyrite has no solid solution series with any other sulfide minerals. There is limited substitution of zinc with copper despite chalcopyrite having the same crystal structure as sphalerite.
Minor amounts of elements such as silver, gold, cadmium, cobalt, nickel, lead, tin, and zinc can be measured (at parts per million levels), likely substituting for copper and iron. Selenium, bismuth, tellurium, and arsenic may substitute for sulfur in minor amounts. Chalcopyrite can be oxidized to form malachite, azurite, and cuprite.
Structure
Chalcopyrite is a member of the tetragonal crystal system. Crystallographically, the structure of chalcopyrite is closely related to that of zinc blende, ZnS (sphalerite). The unit cell is twice as large, reflecting an alternation of Cu+ and Fe3+ ions replacing Zn2+ ions in adjacent cells. In contrast to the pyrite structure, chalcopyrite has single S2− sulfide anions rather than disulfide pairs. Another difference is that the iron cation is not diamagnetic low-spin Fe(II), as in pyrite.
In the crystal structure, each metal ion is tetrahedrally coordinated to 4 sulfur anions. Each sulfur anion is bonded to two copper atoms and two iron atoms.
Paragenesis
Chalcopyrite is present in many ore-bearing environments, formed via a variety of ore-forming processes.
Chalcopyrite is present in volcanogenic massive sulfide ore deposits and sedimentary exhalative deposits, formed by deposition of copper during hydrothermal circulation. Chalcopyrite is concentrated in this environment via fluid transport. Porphyry copper ore deposits are formed by concentration of copper within a granitic stock during the ascent and crystallisation of a magma. Chalcopyrite in this environment is produced by concentration within a magmatic system.
Chalcopyrite is an accessory mineral in Kambalda type komatiitic nickel ore deposits, formed from an immiscible sulfide liquid in sulfide-saturated ultramafic lavas. In this environment chalcopyrite is formed by a sulfide liquid stripping copper from an immiscible silicate liquid.
Chalcopyrite has been the most important ore of copper since the Bronze Age.
Occurrence
Even though chalcopyrite does not contain the most copper in its structure relative to other minerals, it is the most important copper ore because it is found in many localities. Chalcopyrite ore occurs in a variety of ore types, from huge masses as at Timmins, Ontario, to irregular veins and disseminations associated with granitic to dioritic intrusives, as in the porphyry copper deposits of Broken Hill, the American Cordillera and the Andes. The largest deposit of nearly pure chalcopyrite ever discovered in Canada was at the southern end of the Temagami Greenstone Belt, where Copperfields Mine extracted the high-grade copper.
Chalcopyrite is present in the supergiant Olympic Dam Cu-Au-U deposit in South Australia.
Chalcopyrite may also be found in coal seams associated with pyrite nodules, and as disseminations in carbonate sedimentary rocks.
Extraction of copper
Copper metal is predominantly extracted from chalcopyrite ore using two methods: pyrometallurgy and hydrometallurgy. The most common and commercially viable method, pyrometallurgy, involves "crushing, grinding, flotation, smelting, refining, and electro-refining" techniques. Crushing, leaching, solvent extraction, and electrowinning are techniques used in hydrometallurgy. Specifically in the case of chalcopyrite, pressure oxidation leaching is practiced.
Pyrometallurgical processes
The most important method for copper extraction from chalcopyrite is pyrometallurgy. Pyrometallurgy is commonly used for large-scale, copper-rich operations with high-grade ores, because Cu-Fe-S ores such as chalcopyrite are difficult to dissolve in aqueous solutions. The extraction process using this method proceeds in four stages:
Isolating desired elements from ore using froth flotation to create a concentrate
Creating a high-Cu sulfide matte by smelting the concentrate
Oxidizing/converting the sulfide matte, resulting in an impure molten copper
Refining by fire and electrorefining techniques to increase the purity of the resultant copper
Chalcopyrite ore is not directly smelted, because the ore is primarily composed of non-economically valuable material (waste rock) with low concentrations of copper. The abundance of waste material means large amounts of hydrocarbon fuel would be required to heat and melt the ore. Instead, copper is first isolated from the ore using a technique called froth flotation: reagents make the copper minerals water-repellent, so they collect on air bubbles and float into a froth in the flotation cell. In contrast to the 0.5-2% copper in chalcopyrite ore, froth flotation yields a concentrate containing about 30% copper.
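The degree of upgrading can be sanity-checked with the generic two-product mass balance used in metallurgical accounting (a standard formula; the grades below are illustrative values from the ranges quoted above):

```python
# Two-product mass balance for froth flotation (generic metallurgical formula;
# grades are illustrative, taken from the ranges quoted in the text).
def flotation_balance(feed_grade, conc_grade, tail_grade):
    """Return (mass pull, Cu recovery) as fractions of the feed."""
    mass_pull = (feed_grade - tail_grade) / (conc_grade - tail_grade)
    recovery = mass_pull * conc_grade / feed_grade
    return mass_pull, recovery

# A 1% Cu feed upgraded to a 30% Cu concentrate, leaving 0.05% Cu tailings:
pull, rec = flotation_balance(0.01, 0.30, 0.0005)
print(f"mass pull {pull:.1%}, Cu recovery {rec:.1%}")  # ~3.2% of mass, ~95% of Cu
```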
The concentrate then undergoes matte smelting, which oxidizes the sulfur and iron by melting the flotation concentrate in a furnace at about 1250 °C to create a new, copper-richer phase (matte) with about 45-75% copper. This process is typically done in flash furnaces. To reduce the amount of copper lost to the slag, SiO2 flux is added to keep the slag molten and promote immiscibility between matte and slag. As a byproduct, matte smelting produces SO2 gas, which is harmful to the environment and is therefore captured in the form of sulfuric acid. Example reactions are as follows:
2CuFeS2(s) + 3.25O2(g) → Cu2S·0.5FeS(l) + 1.5FeO(s) + 2.5SO2(g)
2FeO(s) + SiO2(s) → Fe2SiO4(l)
Converting involves oxidizing the matte once more to further remove sulfur and iron; the product is 99% molten copper. Converting occurs in two stages: the slag-forming stage and the copper-forming stage. In the slag-forming stage, iron and sulfur are reduced to concentrations of less than 1% and 0.02%, respectively. The matte from smelting is poured into a converter, which is then rotated while oxygen is supplied to the molten charge through tuyeres. The reaction is as follows:
2FeS(l) + 3O2(g) + SiO2(s) → Fe2SiO4(l) + 2SO2(g) + heat
In the copper-forming stage, the matte produced in the slag-forming stage undergoes charging (pouring the matte into the converter), blowing (blasting more oxygen), and skimming (retrieving impure molten copper known as blister copper). The reaction is as follows:
Cu2S(l) + O2(g) → 2Cu(l) + SO2(g) + heat
Finally, the blister copper undergoes refinement through fire, electrorefining or both. In this stage, copper is refined to a high-purity cathode.
Hydrometallurgical processes
Chalcopyrite is an exception among copper-bearing minerals. In contrast to the majority of copper minerals, which can be leached at atmospheric conditions, such as through heap leaching, chalcopyrite is a refractory mineral that requires elevated temperatures as well as oxidizing conditions to release its copper into solution. This is because of the extraction challenges that arise from the 1:1 ratio of iron to copper, which results in slow leaching kinetics. Elevated temperatures and pressures create an abundance of oxygen in solution, which speeds the breakdown of chalcopyrite's crystal lattice. A hydrometallurgical process that combines elevated temperature with the oxidizing conditions chalcopyrite requires is known as pressure oxidation leaching. A typical reaction series for chalcopyrite under oxidizing, high-temperature conditions is as follows:
i) 2CuFeS2 + 4Fe2(SO4)3 → 2Cu2+ + 2SO42− + 10FeSO4 + 4S
ii) 4FeSO4 + O2 + 2H2SO4 → 2Fe2(SO4)3 + 2H2O
iii) 2S + 3O2 + 2H2O → 2H2SO4
(overall) 4CuFeS2 + 17O2 + 4H2O → 4Cu2+ + 4SO42− + 2Fe2O3 + 4H2SO4
Pressure oxidation leaching is particularly useful for low-grade chalcopyrite, because it can "process concentrate product from flotation" rather than having to process whole ore. Additionally, it can be used as an alternative to pyrometallurgy for variable ore. Other advantages hydrometallurgical processes have over pyrometallurgical processes (smelting) in copper extraction include:
The highly variable cost of smelting
Limited smelting availability in some locations
High cost of installing smelting infrastructure
Ability to treat high-impurity concentrates
Increase of recovery due to ability of treating lower-grade deposits on site
Lower transport costs (shipping concentrate not necessary)
Overall lower cost of copper production
Although hydrometallurgy has its advantages, it continues to face challenges in the commercial setting, and smelting therefore remains the most commercially viable method of copper extraction.
Labradorite ((Ca, Na)(Al, Si)4O8) is a calcium-enriched feldspar mineral first identified in Labrador, Canada, which can display an iridescent effect (schiller).
Labradorite is an intermediate to calcic member of the plagioclase series. It has an anorthite percentage (%An) of between 50 and 70. The specific gravity ranges from 2.68 to 2.72. The streak is white, like most silicates. The refractive index ranges from 1.559 to 1.573 and twinning is common. As with all plagioclase members, the crystal system is triclinic, and three directions of cleavage are present, two of which are nearly at right angles and are more obvious, being of good to perfect quality (while the third direction is poor). It occurs as clear, white to gray, blocky to lath shaped grains in common mafic igneous rocks such as basalt and gabbro, as well as in anorthosites.
Occurrence
The geological type area for labradorite is Paul's Island near the town of Nain in Labrador, Canada. It has also been reported in Poland, Norway, Finland and various other locations worldwide, with notable distribution in Madagascar, China, Australia, Slovakia and the United States.
Labradorite occurs in mafic igneous rocks and is the feldspar variety most common in basalt and gabbro. The uncommon anorthosite bodies are composed almost entirely of labradorite. It also is found in metamorphic amphibolites and as a detrital component of some sediments. Common mineral associates in igneous rocks include olivine, pyroxenes, amphiboles and magnetite.
Labradorescence
Labradorite can display an iridescent optical effect (or schiller) known as labradorescence. The term was coined by Ove Balthasar Bøggild, who called the phenomenon labradorization.
Contributions to the understanding of the origin and cause of the effect were made by Robert Strutt, 4th Baron Rayleigh (1923), and by Bøggild (1924).
The cause of this optical phenomenon is a lamellar phase-exsolution structure, occurring in the Bøggild miscibility gap. The effect is visible when the lamellar separation lies in a submicrometre range comparable to the wavelengths of visible light; the lamellae are not necessarily parallel, and the lamellar structure is found to lack long-range order.
The lamellar separation only occurs in plagioclases of a certain composition; calcic labradorite (50-70% anorthite) and bytownite (anorthite content of roughly 70 to 90%) particularly exemplify this. Another requirement for the lamellar separation is very slow cooling of the rock containing the plagioclase. Slow cooling is required to allow the Ca, Na, Si, and Al ions to diffuse through the plagioclase and produce the lamellar separation. Therefore, not all labradorites exhibit labradorescence (they might not have the correct composition, might have cooled too quickly, or both), and not all plagioclases that exhibit labradorescence are labradorites (they may be bytownite).
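The wavelength of the sheen can be estimated with a standard interference relation for reflection from periodic lamellae (a textbook approximation, not stated in the source). At near-normal incidence, constructive reflection peaks near

$$ \lambda_{\max} \approx 2\,n\,d, $$

where n is the refractive index and d the lamellar spacing. Taking n ≈ 1.56 (within the range quoted above) and d ≈ 150 nm gives λ ≈ 470 nm, a blue sheen; coarser lamellae shift the colour toward green, yellow, and red.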
Some gemstone varieties of labradorite exhibiting a high degree of labradorescence are called spectrolite.
Gallery | Labradorite | Wikipedia | 250 | 159034 | https://en.wikipedia.org/wiki/Labradorite | Physical sciences | Silicate minerals | Earth science |
In mechanics and physics, shock is a sudden acceleration caused, for example, by impact, drop, kick, earthquake, or explosion. Shock is a transient physical excitation.
Shock describes matter subject to extreme rates of force with respect to time. Shock is a vector quantity with units of acceleration (rate of change of velocity). The unit g is conventionally used; it represents multiples of the standard acceleration due to gravity (about 9.81 m/s²).
A shock pulse can be characterised by its peak acceleration, the duration, and the shape of the shock pulse (half sine, triangular, trapezoidal, etc.). The shock response spectrum is a method for further evaluating a mechanical shock.
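As a numerical illustration (the pulse values here are invented), a half-sine pulse with peak acceleration A and duration τ delivers a velocity change Δv = ∫a dt = 2Aτ/π. A minimal sketch checking this by direct integration:

```python
import math

# Illustrative half-sine shock pulse: 50 g peak, 11 ms duration.
g = 9.80665          # m/s^2
A = 50 * g           # peak acceleration, m/s^2
tau = 0.011          # pulse duration, s

# Numerically integrate a(t) = A*sin(pi*t/tau) over the pulse (midpoint rule).
steps = 10000
dt = tau / steps
dv = sum(A * math.sin(math.pi * (i + 0.5) * dt / tau) * dt for i in range(steps))

print(f"integrated velocity change: {dv:.3f} m/s")
print(f"closed form 2*A*tau/pi:     {2 * A * tau / math.pi:.3f} m/s")
```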
Shock measurement
Shock measurement is of interest in several fields, such as:
Measuring the propagation of heel shock through a runner's body
Measuring the magnitude of shock needed to damage an item (its fragility)
Measuring shock attenuation through athletic flooring
Measuring the effectiveness of a shock absorber
Measuring the shock-absorbing ability of package cushioning
Measuring the ability of an athletic helmet to protect people
Measuring the effectiveness of shock mounts
Determining the ability of structures to resist seismic shock (earthquakes, etc.)
Determining whether personal protective fabric attenuates or amplifies shocks
Verifying that a naval ship and its equipment can survive explosive shocks
Shocks are usually measured by accelerometers, but other transducers and high-speed imaging are also used. A wide variety of laboratory instrumentation is available; stand-alone shock data loggers are also used.
Field shocks are highly variable and often have very uneven shapes. Even laboratory-controlled shocks often have uneven shapes and include short-duration spikes; noise can be reduced by appropriate digital or analog filtering.
Governing test methods and specifications provide detail about the conduct of shock tests. Proper placement of measuring instruments is critical. Fragile items and packaged goods respond with variation to uniform laboratory shocks, so replicate testing is often called for. For example, MIL-STD-810G Method 516.6 indicates: "at least three times in both directions along each of three orthogonal axes".
Shock testing | Shock (mechanics) | Wikipedia | 424 | 159081 | https://en.wikipedia.org/wiki/Shock%20%28mechanics%29 | Physical sciences | Basics_4 | Physics |
Shock testing typically falls into two categories: classical shock testing, and pyroshock or ballistic shock testing. Classical shock testing consists of the following shock impulses: half sine, haversine, sawtooth wave, and trapezoid. Pyroshock and ballistic shock tests are specialized and are not considered classical shocks. Classical shocks can be performed on electrodynamic (ED) shakers, free-fall drop towers, or pneumatic shock machines. A classical shock impulse is created when the shock machine table changes direction abruptly. This abrupt change in direction causes a rapid velocity change, which creates the shock impulse. Testing for the effects of shock is sometimes conducted on end-use applications: for example, automobile crash tests.
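On a free-fall drop tower, for instance, the velocity change follows directly from the drop height and the amount of rebound. A rough sketch (made-up numbers; the helper name is ours, not a standard API):

```python
import math

g = 9.80665  # standard acceleration due to gravity, m/s^2

def drop_shock_delta_v(height_m, restitution):
    """Velocity change of a drop-table shock: impact velocity sqrt(2*g*h)
    plus the rebound velocity. `restitution` is the coefficient of
    restitution (0 = no rebound, 1 = perfectly elastic)."""
    v_impact = math.sqrt(2 * g * height_m)
    return (1 + restitution) * v_impact

print(f"{drop_shock_delta_v(0.75, 0.3):.2f} m/s")  # 0.75 m drop, mild rebound
```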
Use of proper test methods and verification and validation protocols is important for all phases of testing and evaluation.
Effects of shock
Mechanical shock has the potential for damaging an item (e.g., an entire light bulb) or an element of the item (e.g., a filament in an incandescent light bulb):
A brittle or fragile item can fracture. For example, two crystal wine glasses may shatter when impacted against each other. A shear pin in an engine is designed to fracture with a specific magnitude of shock. Note that a soft ductile material may sometimes exhibit brittle failure during shock due to time-temperature superposition.
A malleable item can be bent by a shock. For example, a copper pitcher may bend when dropped on the floor.
Some items may appear undamaged by a single shock but will experience fatigue failure with numerous repeated low-level shocks.
A shock may result in only minor damage which may not be critical for use. However, cumulative minor damage from several shocks will eventually result in the item being unusable.
A shock may not produce immediate apparent damage but might cause the service life of the product to be shortened: the reliability is reduced.
A shock may cause an item to become out of adjustment. For example, when a precision scientific instrument is subjected to a moderate shock, good metrology practice may be to have it recalibrated before further use.
Some materials such as primary high explosives may detonate with mechanical shock or impact.
When glass bottles of liquid are dropped or subjected to shock, the water hammer effect may cause hydrodynamic glass breakage. | Shock (mechanics) | Wikipedia | 470 | 159081 | https://en.wikipedia.org/wiki/Shock%20%28mechanics%29 | Physical sciences | Basics_4 | Physics |
Considerations
When laboratory testing, field experience, or engineering judgement indicates that an item could be damaged by mechanical shock, several courses of action might be considered:
Reduce and control the input shock at the source.
Modify the item to improve its toughness or support it to better handle shocks.
Use shock absorbers, shock mounts, or cushions to control the shock transmitted to the item. Cushioning reduces the peak acceleration by extending the duration of the shock (see the sketch after this list).
Plan for failures: accept certain losses. Have redundant systems available, etc. | Shock (mechanics) | Wikipedia | 104 | 159081 | https://en.wikipedia.org/wiki/Shock%20%28mechanics%29 | Physical sciences | Basics_4 | Physics |
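As a rough illustration of the cushioning trade-off noted above (assuming a half-sine pulse and made-up numbers): for a fixed velocity change Δv, the half-sine relation Δv = 2Aτ/π gives a peak acceleration A = πΔv/(2τ) that falls as the duration grows.

```python
import math

dv = 4.0  # fixed velocity change delivered by the shock, m/s (illustrative)

# For a half-sine pulse, dv = 2*A*tau/pi, so A = pi*dv/(2*tau).
for tau_ms in (2, 5, 10, 20):
    tau = tau_ms / 1000.0                 # duration in seconds
    A = math.pi * dv / (2 * tau)          # peak acceleration, m/s^2
    print(f"duration {tau_ms:2d} ms -> peak {A / 9.80665:6.1f} g")
# Doubling the pulse duration halves the peak acceleration.
```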
A chemical composition specifies the identity, arrangement, and ratio of the chemical elements making up a compound by way of chemical and atomic bonds.
Chemical formulas can be used to describe the relative amounts of elements present in a compound. For example, the chemical formula for water is H2O: this means that each molecule of water consists of 2 atoms of hydrogen (H) and 1 atom of oxygen (O). The chemical composition of water may be interpreted as a 2:1 ratio of hydrogen atoms to oxygen atoms. Different types of chemical formulas are used to convey composition information, such as an empirical or molecular formula.
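As a small illustration (not from the article; the dictionaries below are defined ad hoc), the atom ratio expressed by a formula can be converted into element mass fractions using standard atomic weights:

```python
# Mass fractions of the elements in water (H2O).
atomic_mass = {"H": 1.008, "O": 15.999}   # standard atomic weights, g/mol
composition = {"H": 2, "O": 1}            # atom counts from the formula H2O

molar_mass = sum(atomic_mass[el] * n for el, n in composition.items())
for el, n in composition.items():
    fraction = atomic_mass[el] * n / molar_mass
    print(f"{el}: {fraction:.1%}")
# H: ~11.2%, O: ~88.8% by mass, even though the atoms are in a 2:1 ratio.
```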
Nomenclature can be used to express not only the elements present in a compound but their arrangement within the molecules of the compound. In this way, compounds will have unique names which can describe their elemental composition.
Composite mixture
The chemical composition of a mixture can be defined as the distribution of the individual substances that constitute the mixture, called "components". In other words, it is equivalent to quantifying the concentration of each component. Because there are different ways to define the concentration of a component, there are also different ways to define the composition of a mixture. It may be expressed as mole fraction, volume fraction, mass fraction, molality, molarity, normality, or mixing ratio.
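To make the distinction concrete, here is a sketch (the amounts are invented) showing how one two-component mixture yields different numbers under three of these measures:

```python
# Illustrative mixture: 10 g of ethanol dissolved in 90 g of water.
m_ethanol, m_water = 10.0, 90.0      # masses, g
M_ethanol, M_water = 46.07, 18.015   # molar masses, g/mol

n_ethanol = m_ethanol / M_ethanol    # moles of ethanol
n_water = m_water / M_water          # moles of water

mass_fraction = m_ethanol / (m_ethanol + m_water)
mole_fraction = n_ethanol / (n_ethanol + n_water)
molality = n_ethanol / (m_water / 1000.0)   # mol of solute per kg of solvent

print(f"mass fraction: {mass_fraction:.3f}")    # 0.100
print(f"mole fraction: {mole_fraction:.3f}")    # ~0.042
print(f"molality:      {molality:.2f} mol/kg")  # ~2.41
```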
The chemical composition of a mixture can be represented graphically in plots such as ternary and quaternary plots. | Chemical composition | Wikipedia | 286 | 159151 | https://en.wikipedia.org/wiki/Chemical%20composition | Physical sciences | Substance | Chemistry |
Conduct disorder (CD) is a mental disorder diagnosed in childhood or adolescence that presents itself through a repetitive and persistent pattern of behavior that includes theft, lies, physical violence that may lead to destruction, and reckless breaking of rules, in which the basic rights of others or major age-appropriate norms are violated. These behaviors are often referred to as "antisocial behaviors", and the disorder is often seen as the precursor to antisocial personality disorder; however, the latter, by definition, cannot be diagnosed until the individual is 18 years old. Conduct disorder may result from parental rejection and neglect, and in such cases can be treated with family therapy, as well as behavioral modifications and pharmacotherapy. It may also be caused by environmental lead exposure. Conduct disorder is estimated to affect 51.1 million people globally.
Signs and symptoms
One of the symptoms of conduct disorder is a lower level of fear. Research performed on the impact of toddlers exposed to fear and distress shows that negative emotionality (fear) predicts toddlers' empathy-related response to distress. The findings support that if a caregiver is able to respond to infant cues, the toddler has a better ability to respond to fear and distress. If a child does not learn how to handle fear or distress, the child will be more likely to lash out at other children. If the caregiver is able to provide therapeutic intervention teaching children at risk better empathy skills, the child will have a lower incidence of conduct disorder.
The condition is also linked to a rise in violent and antisocial behaviour; examples may range from pushing, hitting, and biting when the child is young, progressing toward beating and the infliction of cruelty as the child grows older. Additionally, self-harm has been observed in children with conduct disorder (CD). A predisposition towards impulsivity and lowered emotional intelligence have been cited as contributing factors to this phenomenon. However, in order to determine direct causal links, further studies must be conducted.
Conduct disorder can present with limited prosocial emotions, lack of remorse or guilt, lack of empathy, lack of concern for performance, and shallow or deficient affect. Symptoms vary by individual, but the four main groups of symptoms are described below. | Conduct disorder | Wikipedia | 445 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
Aggression to people and animals
Often bullies, threatens or intimidates others
Often initiates physical fights
Has used a weapon that can cause serious physical harm to others (e.g., a bat, brick, broken bottle, knife, gun)
Has been physically cruel to people
Has been physically cruel to animals
Has stolen while confronting a victim (e.g., mugging, purse snatching, extortion, armed robbery)
Feels no remorse or empathy towards the harm, fear, or pain they may have inflicted on others
Destruction of property
Has deliberately engaged in fire setting with the intention of causing serious damage
Has deliberately destroyed others' property (other than by fire setting)
Deceitfulness or theft
Has broken into someone else's house, other building, car, or other vehicle, etc.
Often lies to obtain goods or favors or to avoid obligations (i.e., "cons" others)
Has stolen items of nontrivial value without confronting a victim (e.g., shoplifting, but without breaking and entering; forgery)
Serious violations of rules
Often stays out at night despite parental prohibitions, beginning before age 13
Has run away from home overnight at least twice while living in parental or parental surrogate home (or once without returning for a lengthy period)
Is often truant from school, beginning before age 13
The lack of empathy in these individuals, and the aggression that accompanies this disregard for consequences, is dangerous not only for the individual but for those around them.
Developmental course
Currently, two possible developmental courses are thought to lead to conduct disorder. The first is known as the "childhood-onset type" and occurs when conduct disorder symptoms are present before the age of 10 years. This course is often linked to a more persistent life course and more pervasive behaviors. Specifically, children in this group have greater levels of ADHD symptoms, neuropsychological deficits, more academic problems, increased family dysfunction and higher likelihood of aggression and violence.
There is debate among professionals regarding the validity and appropriateness of diagnosing young children with conduct disorder. The characteristics of the diagnosis are commonly seen in young children who are referred to mental health professionals. A premature diagnosis made in young children, and thus labeling and stigmatizing an individual, may be inappropriate. It is also argued that some children may not in fact have conduct disorder, but are engaging in developmentally appropriate disruptive behavior. | Conduct disorder | Wikipedia | 490 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
The second developmental course is known as the "adolescent-onset type" and occurs when conduct disorder symptoms are present after the age of 10 years. Individuals with adolescent-onset conduct disorder exhibit less impairment than those with the childhood-onset type and are not characterized by similar psychopathology. At times, these individuals will remit in their deviant patterns before adulthood. Research has shown that there is a greater number of children with adolescent-onset conduct disorder than those with childhood-onset, suggesting that adolescent-onset conduct disorder is an exaggeration of developmental behaviors that are typically seen in adolescence, such as rebellion against authority figures and rejection of conventional values. However, this argument is not established and empirical research suggests that these subgroups are not as valid as once thought.
In addition to these two courses that are recognized by the DSM-IV-TR, there appears to be a relationship among oppositional defiant disorder, conduct disorder, and antisocial personality disorder. Specifically, research has demonstrated continuity in the disorders such that conduct disorder is often diagnosed in children who have been previously diagnosed with oppositional defiant disorder, and most adults with antisocial personality disorder were previously diagnosed with conduct disorder. For example, some research has shown that 90% of children diagnosed with conduct disorder had a previous diagnosis of oppositional defiant disorder. Moreover, both disorders share relevant risk factors and disruptive behaviors, suggesting that oppositional defiant disorder is a developmental precursor and milder variant of conduct disorder. However, this is not to say that this trajectory occurs in all individuals. In fact, only about 25% of children with oppositional defiant disorder will receive a later diagnosis of conduct disorder. Correspondingly, there is an established link between conduct disorder and the diagnosis of antisocial personality disorder as an adult. In fact, the current diagnostic criteria for antisocial personality disorder require a conduct disorder diagnosis before the age of 15. However, again, only 25–40% of youths with conduct disorder will develop an antisocial personality disorder. Nonetheless, many of the individuals who do not meet full criteria for antisocial personality disorder still exhibit a pattern of social and personal impairments or antisocial behaviors. These developmental trajectories suggest the existence of antisocial pathways in certain individuals, which have important implications for both research and treatment. | Conduct disorder | Wikipedia | 467 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
Associated conditions
Children with conduct disorder have a high risk of developing other adjustment problems. Specifically, risk factors associated with conduct disorder and the effects of conduct disorder symptomatology on a child's psychosocial context have been linked to overlap with other psychological disorders. In this way, there seem to be reciprocal effects of comorbidity with certain disorders, leading to increased overall risk for these youth.
Attention deficit hyperactivity disorder
ADHD is the condition most commonly associated with conduct disorders, with approximately 25–30% of boys and 50–55% of girls with conduct disorder having a comorbid ADHD diagnosis. While it is unlikely that ADHD alone is a risk factor for developing conduct disorder, the combination of hyperactivity and impulsivity with aggression is associated with the early onset of conduct problems. Moreover, children with comorbid conduct disorder and ADHD show more severe aggression.
Substance use disorders
Conduct disorder is also highly associated with both substance use and abuse. Children with conduct disorder have an earlier onset of substance use, as compared to their peers, and also tend to use multiple substances. However, substance use disorders themselves can directly or indirectly cause conduct disorder like traits in about half of adolescents who have a substance use disorder. As mentioned above, it seems that there is a transactional relationship between substance use and conduct problems, such that aggressive behaviors increase substance use, which leads to increased aggressive behavior.
Substance use in conduct disorder can lead to antisocial behavior in adulthood.
Schizophrenia
Conduct disorder is a precursor to schizophrenia in a minority of cases, with about 40% of men and 31% of women with schizophrenia meeting criteria for childhood conduct disorder.
Cause
While the cause of conduct disorder is complicated by an intricate interplay of biological and environmental factors, identifying underlying mechanisms is crucial for obtaining accurate assessment and implementing effective treatment. These mechanisms serve as the fundamental building blocks on which evidence-based treatments are developed. Despite the complexities, several domains have been implicated in the development of conduct disorder including cognitive variables, neurological factors, intraindividual factors, familial and peer influences, and wider contextual factors. These factors may also vary based on the age of onset, with different variables related to early (e.g., neurodevelopmental basis) and adolescent (e.g., social/peer relationships) onset. | Conduct disorder | Wikipedia | 478 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
Risks
The development of conduct disorder is not immutable or predetermined. A number of interactive risk and protective factors exist that can influence and change outcomes, and in most cases conduct disorder develops due to an interaction and gradual accumulation of risk factors. In addition to the risk factors identified under cause, several other variables place youth at increased risk for developing the disorder, including child physical abuse, in-utero alcohol exposure, and maternal smoking during pregnancy. Protective factors have also been identified, and most notably include high IQ, being female, positive social orientations, good coping skills, and supportive family and community relationships.
However, a correlation between a particular risk factor and a later developmental outcome (such as conduct disorder) cannot be taken as definitive evidence for a causal link. Co-variation between two variables can arise, for instance, if they represent age-specific expressions of similar underlying genetic factors. There have been studies that found that, although smoking during pregnancy does contribute to increased levels of antisocial behaviour, in mother-fetus pairs that were not genetically related (by virtue of in-vitro fertilisation), no link between smoking during pregnancy and later conduct problems was found. Thus, the distinction between causality and correlation is an important consideration.
Learning disabilities
Approximately 20–25% of youth with conduct disorder have some type of learning disability, with language impairments being the most common. Although the relationship between the disorders is complex, it seems as if learning disabilities result from a combination of ADHD, a history of academic difficulty and failure, and long-standing socialization difficulties with family and peers. However, confounding variables, such as language deficits, SES disadvantage, or neurodevelopmental delay, also need to be considered in this relationship, as they could help explain some of the association between conduct disorder and learning problems. | Conduct disorder | Wikipedia | 376 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
Cognitive factors
In terms of cognitive function, intelligence and cognitive deficits are common among youths with conduct disorder, particularly those with early onset, who tend to have intelligence quotients (IQs) one standard deviation below the mean and severe deficits in verbal reasoning and executive function. Executive function difficulties may manifest in terms of one's ability to shift between tasks, to plan and organize, and to inhibit a prepotent response. These findings hold true even after taking into account other variables such as socioeconomic status (SES) and education. However, IQ and executive function deficits are only one piece of the puzzle, and the magnitude of their influence is increased during transactional processes with environmental factors.
Brain differences
Beyond difficulties in executive function, neurological research on youth with conduct disorder also demonstrates differences in brain anatomy and function that reflect the behaviors and mental anomalies associated with conduct disorder. Compared to normal controls, youths with early and adolescent onset of conduct disorder displayed reduced responses in brain regions associated with social behavior (i.e., amygdala, ventromedial prefrontal cortex, insula, and orbitofrontal cortex). In addition, youths with conduct disorder also demonstrated less responsiveness in the orbitofrontal regions of the brain during a stimulus-reinforcement and reward task. This provides a neural explanation for why youths with conduct disorder may be more likely to repeat poor decision-making patterns. Lastly, youths with conduct disorder display a reduction in grey matter volume in the amygdala, which may account for the fear conditioning deficits. This reduction has been linked to difficulty processing social emotional stimuli, regardless of the age of onset. Aside from the differences in neuroanatomy and activation patterns between youth with conduct disorder and controls, neurochemical profiles also vary between groups. Individuals with conduct disorder are characterized as having reduced serotonin and cortisol levels (e.g., reduced hypothalamic-pituitary-adrenal (HPA) axis activity), as well as reduced autonomic nervous system (ANS) functioning. These reductions are associated with the inability to regulate mood and impulsive behaviors, weakened signals of anxiety and fear, and decreased self-esteem. Taken together, these findings may account for some of the variance in the psychological and behavioral patterns of youth with conduct disorder. | Conduct disorder | Wikipedia | 476 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
Intra-individual factors
Aside from findings related to neurological and neurochemical profiles of youth with conduct disorder, intraindividual factors such as genetics may also be relevant. Having a sibling or parent with conduct disorder increases the likelihood of having the disorder, with a heritability rate of .53. There also tends to be a stronger genetic link for individuals with childhood-onset compared to adolescent onset. In addition, youth with conduct disorder also exhibit polymorphism in the monoamine oxidase A gene, low resting heart rates, and increased testosterone.
Family and peer influences
Elements of the family and social environment may also play a role in the development and maintenance of conduct disorder. For instance, antisocial behavior suggestive of conduct disorder is associated with single parent status, parental divorce, large family size, and the young age of mothers. However, these factors are difficult to tease apart from other demographic variables that are known to be linked with conduct disorder, including poverty and low socioeconomic status. Family functioning and parent–child interactions also play a substantial role in childhood aggression and conduct disorder, with low levels of parental involvement, inadequate supervision, and unpredictable discipline practices reinforcing youth's defiant behaviors. Moreover, maternal depression has a significant impact on conduct disordered children and can lead to negative reciprocal feedback between the mother and conduct disordered child. Peer influences have also been related to the development of antisocial behavior in youth, particularly peer rejection in childhood and association with deviant peers. Peer rejection is not only a marker of a number of externalizing disorders, but also a contributing factor for the continuity of the disorders over time. Hinshaw and Lee (2003) also explain that association with deviant peers has been thought to influence the development of conduct disorder in two ways: 1) a "selection" process whereby youth with aggressive characteristics choose deviant friends, and 2) a "facilitation" process whereby deviant peer networks bolster patterns of antisocial behavior. In a separate study by Bonin and colleagues, parenting programs were shown to positively affect child behavior and reduce costs to the public sector. | Conduct disorder | Wikipedia | 428 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
Wider contextual factors
In addition to the individual and social factors associated with conduct disorder, research has highlighted the importance of environment and context in youth with antisocial behavior. However, it is important to note that these are not static factors, but rather transactional in nature (e.g., individuals are influenced by and also influence their environment). For instance, neighborhood safety and exposure to violence have been studied in conjunction with conduct disorder, but it is not simply the case that youth with aggressive tendencies reside in violent neighborhoods. Transactional models propose that youth may resort to violence more often as a result of exposure to community violence, but their predisposition towards violence also contributes to neighborhood climate.
Diagnosis
Conduct disorder is classified in the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). It is diagnosed based on a prolonged pattern of antisocial behaviour, such as serious violation of laws and social norms and rules, in people younger than the age of 18. Similar criteria are used in those over the age of 18 for the diagnosis of antisocial personality disorder. No proposed revisions for the main criteria of conduct disorder exist in the DSM-5; there is a recommendation by the work group to add an additional specifier for callous and unemotional traits. According to DSM-5 criteria for conduct disorder, there are four categories that could be present in the child's behavior: aggression to people and animals, destruction of property, deceitfulness or theft, and serious violation of rules.
Almost all adolescents who have a substance use disorder have conduct disorder-like traits, but after successful treatment of the substance use disorder, about half of these adolescents no longer display conduct disorder-like symptoms. Therefore, it is important to exclude a substance-induced cause and instead address the substance use disorder prior to making a psychiatric diagnosis of conduct disorder.
Treatment
First-line treatment is psychotherapy based on behavior modification and problem-solving skills. This treatment seeks to integrate individual, school, and family settings. Parent-management training can also be helpful. No medications have been FDA approved for conduct disorder, but risperidone (a second-generation antipsychotic) has the most evidence to support its use for aggression in children who have not responded to behavioral and psychosocial interventions. Selective serotonin reuptake inhibitors (SSRIs) are also sometimes used to treat irritability in these patients.
Prognosis
About 25–40% of youths diagnosed with conduct disorder qualify for a diagnosis of antisocial personality disorder when they reach adulthood. For those that do not develop ASPD, most still exhibit social dysfunction in adult life.
Epidemiology
Conduct disorder is estimated to affect 51.1 million people globally as of 2013. The percentage of children affected by conduct disorder is estimated to range from 1–10%. However, among incarcerated youth or youth in juvenile detention facilities, rates of conduct disorder are between 23% and 87%.
Sex differences
The majority of research on conduct disorder suggests that there are a significantly greater number of males than females with the diagnosis, with some reports demonstrating a threefold to fourfold difference in prevalence. However, this difference may be somewhat biased by the diagnostic criteria which focus on more overt behaviors, such as aggression and fighting, which are more often exhibited by males. Females are more likely to be characterized by covert behaviors, such as stealing or running away. Moreover, conduct disorder in females is linked to several negative outcomes, such as antisocial personality disorder and early pregnancy, suggesting that sex differences in disruptive behaviors need to be more fully understood.
Females are more responsive than males to peer pressure, including feelings of guilt.
Racial differences
Research on racial or cultural differences on the prevalence or presentation of conduct disorder is limited. However, according to studies on American youth, it appears that black youths are more often diagnosed with conduct disorder, while Asian youths are about one-third as likely to be diagnosed with conduct disorder when compared to white youths. It has been widely theorized for decades that this disparity is due to unconscious bias in those who give the diagnosis. | Conduct disorder | Wikipedia | 341 | 159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Biology and health sciences | Mental disorders | Health |
Fermi–Dirac statistics is a type of quantum statistics that applies to the physics of a system consisting of many non-interacting, identical particles that obey the Pauli exclusion principle. A result is the Fermi–Dirac distribution of particles over energy states. It is named after Enrico Fermi and Paul Dirac, each of whom derived the distribution independently in 1926. Fermi–Dirac statistics is a part of the field of statistical mechanics and uses the principles of quantum mechanics.
Fermi–Dirac statistics applies to identical and indistinguishable particles with half-integer spin (1/2, 3/2, etc.), called fermions, in thermodynamic equilibrium. For the case of negligible interaction between particles, the system can be described in terms of single-particle energy states. A result is the Fermi–Dirac distribution of particles over these states where no two particles can occupy the same state, which has a considerable effect on the properties of the system. Fermi–Dirac statistics is most commonly applied to electrons, a type of fermion with spin 1/2.
A counterpart to Fermi–Dirac statistics is Bose–Einstein statistics, which applies to identical and indistinguishable particles with integer spin (0, 1, 2, etc.) called bosons. In classical physics, Maxwell–Boltzmann statistics is used to describe particles that are identical and treated as distinguishable. For both Bose–Einstein and Maxwell–Boltzmann statistics, more than one particle can occupy the same state, unlike Fermi–Dirac statistics.
History
Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronic heat capacity of a metal at room temperature seemed to come from 100 times fewer electrons than were in the electric current. It was also difficult to understand why the emission currents generated by applying high electric fields to metals at room temperature were almost independent of temperature.
The difficulty encountered by the Drude model, the electronic theory of metals at that time, was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of the Boltzmann constant kB.
This problem remained unsolved until the development of Fermi–Dirac statistics. | Fermi–Dirac statistics | Wikipedia | 495 | 159225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Physical sciences | Statistical mechanics | Physics |
Fermi–Dirac statistics was first published in 1926 by Enrico Fermi and Paul Dirac. According to Max Born, Pascual Jordan developed in 1925 the same statistics, which he called Pauli statistics, but it was not published in a timely manner. According to Dirac, it was first studied by Fermi, and Dirac called it "Fermi statistics" and the corresponding particles "fermions".
Fermi–Dirac statistics was applied in 1926 by Ralph Fowler to describe the collapse of a star to a white dwarf. In 1927 Arnold Sommerfeld applied it to electrons in metals and developed the free electron model, and in 1928 Fowler and Lothar Nordheim applied it to field electron emission from metals. Fermi–Dirac statistics continue to be an important part of physics.
Fermi–Dirac distribution
For a system of identical fermions in thermodynamic equilibrium, the average number of fermions in a single-particle state $i$ is given by the Fermi–Dirac (F–D) distribution:

$$\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1}$$

where $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $\varepsilon_i$ is the energy of the single-particle state $i$, and $\mu$ is the total chemical potential. The distribution is normalized by the condition

$$\sum_i \bar{n}_i = N$$

which can be used to express $\mu$ as a function $\mu = \mu(T, N)$; note that $\mu$ can assume either a positive or a negative value.
At zero absolute temperature, $\mu$ is equal to the Fermi energy plus the potential energy per fermion, provided it is in a neighbourhood of positive spectral density. In the case of a spectral gap, such as for electrons in a semiconductor, the point of symmetry is typically called the Fermi level or—for electrons—the electrochemical potential, and will be located in the middle of the gap.
The Fermi–Dirac distribution is only valid if the number of fermions in the system is large enough so that adding one more fermion to the system has negligible effect on $\mu$. Since the Fermi–Dirac distribution was derived using the Pauli exclusion principle, which allows at most one fermion to occupy each possible state, a result is that $0 \le \bar{n}_i \le 1$.
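A minimal numerical sketch of the distribution as reconstructed above (the energies and chemical potential are arbitrary illustrative values in electronvolts, not data from the article):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def fermi_dirac(energy_eV, mu_eV, temperature_K):
    """Average occupancy of a single-particle state with the given energy."""
    x = (energy_eV - mu_eV) / (K_B * temperature_K)
    return 1.0 / (math.exp(x) + 1.0)

mu = 5.0  # illustrative chemical potential, eV
for e in (4.8, 4.95, 5.0, 5.05, 5.2):
    n = fermi_dirac(e, mu, 300.0)
    assert 0.0 <= n <= 1.0      # Pauli exclusion: occupancy never exceeds 1
    print(f"eps = {e:.2f} eV -> n = {n:.4f}")  # n = 0.5 exactly at eps = mu
```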
The variance of the number of particles in state $i$ can be calculated from the above expression for $\bar{n}_i$:

$$V(n_i) = k_B T \frac{\partial \bar{n}_i}{\partial \mu} = \bar{n}_i \left(1 - \bar{n}_i\right)$$
Distribution of particles over energy | Fermi–Dirac statistics | Wikipedia | 450 | 159225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Physical sciences | Statistical mechanics | Physics |
From the Fermi–Dirac distribution of particles over states, one can find the distribution of particles over energy. The average number of fermions with energy $\varepsilon_i$ can be found by multiplying the Fermi–Dirac distribution $\bar{n}_i$ by the degeneracy $g_i$ (i.e. the number of states with energy $\varepsilon_i$),

$$\bar{n}(\varepsilon_i) = g_i \bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} + 1}$$

When $g_i \ge 2$, it is possible that $\bar{n}(\varepsilon_i) > 1$, since there is more than one state that can be occupied by fermions with the same energy $\varepsilon_i$.
When a quasi-continuum of energies $\varepsilon$ has an associated density of states $g(\varepsilon)$ (i.e. the number of states per unit energy range per unit volume), the average number of fermions per unit energy range per unit volume is

$$\bar{\mathcal{N}}(\varepsilon) = g(\varepsilon)\, F(\varepsilon)$$

where $F(\varepsilon)$ is called the Fermi function and is the same function that is used for the Fermi–Dirac distribution $\bar{n}_i$:

$$F(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}$$

so that

$$\bar{\mathcal{N}}(\varepsilon) = \frac{g(\varepsilon)}{e^{(\varepsilon - \mu)/k_B T} + 1}$$
Quantum and classical regimes
The Fermi–Dirac distribution approaches the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions:
In the limit of low particle density, $\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1} \ll 1$, therefore $e^{(\varepsilon_i - \mu)/k_B T} + 1 \gg 1$, or equivalently $e^{(\varepsilon_i - \mu)/k_B T} \gg 1$. In that case, $\bar{n}_i \approx e^{-(\varepsilon_i - \mu)/k_B T}$, which is the result from Maxwell–Boltzmann statistics.
In the limit of high temperature, the particles are distributed over a large range of energy values, so the occupancy of each state (especially the high-energy ones with $\varepsilon_i - \mu \gg k_B T$) is again very small, $\bar{n}_i \ll 1$. This again reduces to Maxwell–Boltzmann statistics.
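Both limits can be checked numerically. The following sketch (illustrative values only) compares the Fermi–Dirac occupancy with the Maxwell–Boltzmann form $e^{-(\varepsilon - \mu)/k_B T}$ for states lying well above the chemical potential:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def n_fd(e, mu, T):
    return 1.0 / (math.exp((e - mu) / (K_B * T)) + 1.0)

def n_mb(e, mu, T):
    return math.exp(-(e - mu) / (K_B * T))

T, mu = 300.0, -0.2  # low-density regime: mu far below the states (eV)
for e in (0.0, 0.1, 0.2):
    fd, mb = n_fd(e, mu, T), n_mb(e, mu, T)
    print(f"eps = {e:.1f} eV: FD = {fd:.3e}, MB = {mb:.3e}, MB/FD = {mb/fd:.6f}")
# Occupancies are << 1 and the two distributions agree to high accuracy.
```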
The classical regime, where Maxwell–Boltzmann statistics can be used as an approximation to Fermi–Dirac statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. For example, in semiconductor physics, when the density of states of the conduction band is much higher than the doping concentration, the energy gap between the conduction band and the Fermi level can be calculated using Maxwell–Boltzmann statistics. Otherwise, if the doping concentration is not negligible compared to the density of states of the conduction band, the Fermi–Dirac distribution should be used instead for an accurate calculation. It can then be shown that the classical situation prevails when the concentration of particles corresponds to an average interparticle separation $\bar{R}$ that is much greater than the average de Broglie wavelength $\bar{\lambda}$ of the particles:

$$\bar{R} \gg \bar{\lambda} \approx \frac{h}{\sqrt{3 m k_B T}}$$

where $h$ is the Planck constant, and $m$ is the mass of a particle. | Fermi–Dirac statistics | Wikipedia | 480 | 159225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Physical sciences | Statistical mechanics | Physics |
For the case of conduction electrons in a typical metal at $T$ = 300 K (i.e. approximately room temperature), the system is far from the classical regime because $\bar{R} \approx \bar{\lambda}/25$. This is due to the small mass of the electron and the high concentration (i.e. small $\bar{R}$) of conduction electrons in the metal. Thus Fermi–Dirac statistics is needed for conduction electrons in a typical metal.
Another example of a system that is not in the classical regime is the system that consists of the electrons of a star that has collapsed to a white dwarf. Although the temperature of a white dwarf is high (typically on the order of $10^4$ K at its surface), its high electron concentration and the small mass of each electron preclude using a classical approximation, and again Fermi–Dirac statistics is required.
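The criterion $\bar{R} \gg \bar{\lambda}$ can be checked numerically for the metal case. The sketch below estimates $\bar{\lambda} = h/\sqrt{3 m k_B T}$ for electrons at room temperature and compares it with the separation implied by a conduction-electron density of about $8.5\times 10^{28}\ \mathrm{m^{-3}}$ (roughly that of copper; the number is used here only for illustration):

```python
import math

H   = 6.62607015e-34    # Planck constant, J*s
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837015e-31  # electron mass, kg

T = 300.0      # room temperature, K
n = 8.5e28     # conduction electrons per m^3 (~copper, illustrative)

lam = H / math.sqrt(3 * M_E * K_B * T)  # average de Broglie wavelength
R = n ** (-1.0 / 3.0)                   # average interparticle separation

print(f"lambda = {lam:.2e} m, R = {R:.2e} m, R/lambda = {R/lam:.3f}")
# R/lambda ~ 0.04 << 1, so conduction electrons are far from classical.
```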
Derivations
Grand canonical ensemble
The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from the grand canonical ensemble. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential μ fixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir.
In other words, each single-particle level is a separate, tiny grand canonical ensemble.
By the Pauli exclusion principle, there are only two possible microstates for the single-particle level: no particle (energy E = 0), or one particle (energy E = ε). The resulting partition function for that single-particle level therefore has just two terms:

$$\mathcal{Z} = e^{-\beta \cdot 0} + e^{-\beta(\varepsilon - \mu)} = 1 + e^{-\beta(\varepsilon - \mu)}, \qquad \beta \equiv \frac{1}{k_B T},$$

and the average particle number for that single-particle level substate is given by

$$\bar{n} = \frac{0 \cdot 1 + 1 \cdot e^{-\beta(\varepsilon - \mu)}}{\mathcal{Z}} = \frac{1}{e^{\beta(\varepsilon - \mu)} + 1}$$
This result applies for each single-particle level, and thus gives the Fermi–Dirac distribution for the entire state of the system.
The variance in particle number (due to thermal fluctuations) may also be derived (the particle number has a simple Bernoulli distribution):

$$V(n) = k_B T \frac{\partial \bar{n}}{\partial \mu} = \bar{n}\left(1 - \bar{n}\right)$$

This quantity is important in transport phenomena such as the Mott relations for electrical conductivity and thermoelectric coefficient for an electron gas, where the ability of an energy level to contribute to transport phenomena is proportional to $\bar{n}(1 - \bar{n})$.
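A quick numerical check of this two-term derivation (arbitrary units with $k_B T = 1$; the values are illustrative only):

```python
import math

beta = 1.0          # 1/(kB*T), working in units where kB*T = 1
eps, mu = 0.7, 0.2  # illustrative single-particle energy and chemical potential

# Two microstates: empty (N=0, E=0) and occupied (N=1, E=eps).
Z = 1.0 + math.exp(-beta * (eps - mu))            # grand partition function
n_avg = math.exp(-beta * (eps - mu)) / Z          # average occupancy from Z
n_fd = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)  # Fermi–Dirac form
assert abs(n_avg - n_fd) < 1e-12

var = n_avg * (1.0 - n_avg)  # Bernoulli variance of the occupation number
print(f"n = {n_avg:.4f}, variance = {var:.4f}")
```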
Canonical ensemble | Fermi–Dirac statistics | Wikipedia | 456 | 159225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Physical sciences | Statistical mechanics | Physics |
It is also possible to derive Fermi–Dirac statistics in the canonical ensemble. Consider a many-particle system composed of N identical fermions that have negligible mutual interaction and are in thermal equilibrium. Since there is negligible interaction between the fermions, the energy $E_R$ of a state $R$ of the many-particle system can be expressed as a sum of single-particle energies:

$$E_R = \sum_r n_r \varepsilon_r$$

where $n_r$ is called the occupancy number and is the number of particles in the single-particle state $r$ with energy $\varepsilon_r$. The summation is over all possible single-particle states $r$.
The probability that the many-particle system is in the state $R$ is given by the normalized canonical distribution:

$$P_R = \frac{e^{-\beta E_R}}{\displaystyle\sum_{R'} e^{-\beta E_{R'}}}$$

where $\beta = 1/k_B T$, $e^{-\beta E_R}$ is called the Boltzmann factor, and the summation is over all possible states $R'$ of the many-particle system. The average value for an occupancy number $n_i$ is

$$\bar{n}_i = \sum_R n_i \, P_R$$
Note that the state $R$ of the many-particle system can be specified by the particle occupancy of the single-particle states, i.e. by specifying $n_1, n_2, \ldots$, so that

$$P_R = P_{n_1, n_2, \ldots} = \frac{e^{-\beta(n_1 \varepsilon_1 + n_2 \varepsilon_2 + \cdots)}}{\displaystyle\sum_{R'} e^{-\beta E_{R'}}}$$

and the equation for $\bar{n}_i$ becomes

$$\bar{n}_i = \sum_{n_1, n_2, \ldots} n_i \, P_{n_1, n_2, \ldots}$$

where the summation is over all combinations of values of $n_1, n_2, \ldots$ which obey the Pauli exclusion principle, with $n_r = 0$ or $1$ for each $r$. Furthermore, each combination of values of $n_1, n_2, \ldots$ satisfies the constraint that the total number of particles is $N$:

$$\sum_r n_r = N$$
Rearranging the summations,

$$\bar{n}_i = \frac{\displaystyle\sum_{n_i = 0}^{1} n_i \, e^{-\beta n_i \varepsilon_i} \sum^{(i)} e^{-\beta(n_1 \varepsilon_1 + n_2 \varepsilon_2 + \cdots)}}{\displaystyle\sum_{n_i = 0}^{1} e^{-\beta n_i \varepsilon_i} \sum^{(i)} e^{-\beta(n_1 \varepsilon_1 + n_2 \varepsilon_2 + \cdots)}}$$

where the superscript $(i)$ on the summation sign indicates that the sum is not over $n_i$ and is subject to the constraint that the total number of particles associated with the summation is $N_i = N - n_i$. Note that $\sum^{(i)}$ still depends on $n_i$ through the $N_i$ constraint, since in one case $n_i = 0$ and $\sum^{(i)}$ is evaluated with $N_i = N$, while in the other case $n_i = 1$ and $\sum^{(i)}$ is evaluated with $N_i = N - 1$. To simplify the notation and to clearly indicate that $\sum^{(i)}$ still depends on $n_i$ through $N - n_i$, define

$$Z_i(N - n_i) \equiv \sum^{(i)} e^{-\beta(n_1 \varepsilon_1 + n_2 \varepsilon_2 + \cdots)}$$

so that the previous expression for $\bar{n}_i$ can be rewritten and evaluated in terms of the $Z_i$:

$$\bar{n}_i = \frac{0 + e^{-\beta \varepsilon_i} \, Z_i(N - 1)}{Z_i(N) + e^{-\beta \varepsilon_i} \, Z_i(N - 1)} = \frac{1}{\left[Z_i(N)/Z_i(N - 1)\right] e^{\beta \varepsilon_i} + 1}$$
The following approximation will be used to find an expression to substitute for $Z_i(N)/Z_i(N - 1)$:

$$\ln Z_i(N - 1) \simeq \ln Z_i(N) - \frac{\partial \ln Z_i(N)}{\partial N} = \ln Z_i(N) - \alpha_i$$

where

$$\alpha_i \equiv \frac{\partial \ln Z_i(N)}{\partial N}$$

If the number of particles $N$ is large enough so that the change in the chemical potential $\mu$ is very small when a particle is added to the system, then $\alpha_i \simeq -\mu / k_B T$. Applying the exponential function to both sides, substituting for $\alpha_i$ and rearranging,

$$\frac{Z_i(N)}{Z_i(N - 1)} = e^{-\mu / k_B T}$$
Substituting the above into the equation for $\bar{n}_i$ and using a previous definition of $\beta$ to substitute $1/k_B T$ for $\beta$, results in the Fermi–Dirac distribution:

$$\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1}$$
Like the Maxwell–Boltzmann distribution and the Bose–Einstein distribution, the Fermi–Dirac distribution can also be derived by the Darwin–Fowler method of mean values.
Microcanonical ensemble | Fermi–Dirac statistics | Wikipedia | 489 | 159225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Physical sciences | Statistical mechanics | Physics |
The same result can be achieved by directly analyzing the multiplicities of the system and using Lagrange multipliers.
Suppose we have a number of energy levels, labeled by index i, each level having energy εi and containing a total of ni particles. Suppose each level contains gi distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta (i.e. their momenta may be along different directions), in which case they are distinguishable from each other, yet they can still have the same energy. The value of gi associated with level i is called the "degeneracy" of that energy level. The Pauli exclusion principle states that only one fermion can occupy any such sublevel.
The number of ways of distributing $n_i$ indistinguishable particles among the $g_i$ sublevels of an energy level, with a maximum of one particle per sublevel, is given by the binomial coefficient, using its combinatorial interpretation:

$$w(n_i, g_i) = \binom{g_i}{n_i} = \frac{g_i!}{n_i! \, (g_i - n_i)!}$$
For example, distributing two particles in three sublevels will give population numbers of 110, 101, or 011 for a total of three ways which equals 3!/(2!1!).
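As a check of this counting, the sketch below (an ad hoc helper, not from the article) enumerates every way of placing $n$ indistinguishable fermions into $g$ distinguishable sublevels with at most one particle per sublevel, and compares the count with the binomial coefficient:

```python
from itertools import combinations
from math import comb

def fermion_occupations(n, g):
    """All occupation patterns of n fermions in g sublevels
    (Pauli exclusion: at most one particle per sublevel)."""
    return [
        "".join("1" if i in occupied else "0" for i in range(g))
        for occupied in combinations(range(g), n)
    ]

patterns = fermion_occupations(2, 3)
print(patterns)                        # ['110', '101', '011']
assert len(patterns) == comb(3, 2) == 3
```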
The number of ways that a set of occupation numbers $n_i$ can be realized is the product of the ways that each individual energy level can be populated:

$$W = \prod_i w(n_i, g_i) = \prod_i \frac{g_i!}{n_i! \, (g_i - n_i)!}$$
Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of $n_i$ for which $W$ is maximized, subject to the constraint that there be a fixed number of particles and a fixed energy. We constrain our solution using Lagrange multipliers, forming the function:

$$f(n_i) = \ln W + \alpha \left(N - \sum_i n_i\right) + \beta \left(E - \sum_i n_i \varepsilon_i\right)$$
Using Stirling's approximation for the factorials, taking the derivative with respect to $n_i$, setting the result to zero, and solving for $n_i$ yields the Fermi–Dirac population numbers:

$$n_i = \frac{g_i}{e^{\alpha + \beta \varepsilon_i} + 1}$$
By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be shown thermodynamically that $\beta = \frac{1}{k_B T}$ and $\alpha = -\frac{\mu}{k_B T}$, so that finally, the probability that a state will be occupied is

$$\bar{n}_i = \frac{n_i}{g_i} = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1}$$ | Fermi–Dirac statistics | Wikipedia | 432 | 159225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Physical sciences | Statistical mechanics | Physics |
Gene expression is the process by which the information from a gene is used in the synthesis of a functional gene product, a protein or a non-coding RNA, that ultimately affects a phenotype. These products are often proteins, but in non-protein-coding genes such as transfer RNA (tRNA) and small nuclear RNA (snRNA) genes, the product is a functional non-coding RNA.
The process of gene expression is used by all known life—eukaryotes (including multicellular organisms), prokaryotes (bacteria and archaea), and utilized by viruses—to generate the macromolecular machinery for life.
In genetics, gene expression is the most fundamental level at which the genotype gives rise to the phenotype, i.e. observable trait. The genetic information stored in DNA represents the genotype, whereas the phenotype results from the "interpretation" of that information. Such phenotypes are often displayed by the synthesis of proteins that control the organism's structure and development, or that act as enzymes catalyzing specific metabolic pathways.
All steps in the gene expression process may be modulated (regulated), including the transcription, RNA splicing, translation, and post-translational modification of a protein. Regulation of gene expression gives control over the timing, location, and amount of a given gene product (protein or ncRNA) present in a cell and can have a profound effect on the cellular structure and function. Regulation of gene expression is the basis for cellular differentiation, development, morphogenesis and the versatility and adaptability of any organism. Gene regulation may therefore serve as a substrate for evolutionary change.
Mechanism
Transcription
The production of an RNA copy from a DNA strand is called transcription, and is performed by RNA polymerases, which add one ribonucleotide at a time to a growing RNA strand according to the complementarity of the nucleotide bases. This RNA is complementary to the template 3′ → 5′ DNA strand, with the exception that thymines (T) are replaced with uracils (U) in the RNA (and allowing for possible copying errors). | Gene expression | Wikipedia | 443 | 159266 | https://en.wikipedia.org/wiki/Gene%20expression | Biology and health sciences | Genetics and taxonomy | null |
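A toy sketch of that base-pairing rule (the sequence is invented): transcribing a 3′ → 5′ template strand into its 5′ → 3′ mRNA, with U in place of T:

```python
# Complement rule for transcription: template DNA base -> RNA base.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5):
    """Build the 5'->3' mRNA complementary to a 3'->5' template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

template = "TACGGCATT"       # invented template strand, read 3'->5'
print(transcribe(template))  # AUGCCGUAA
```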
In bacteria, transcription is carried out by a single type of RNA polymerase, which needs to bind a DNA sequence called a Pribnow box with the help of the sigma factor protein (σ factor) to start transcription. In eukaryotes, transcription is performed in the nucleus by three types of RNA polymerases, each of which needs a special DNA sequence called the promoter and a set of DNA-binding proteins—transcription factors—to initiate the process (see regulation of transcription below). RNA polymerase I is responsible for transcription of ribosomal RNA (rRNA) genes. RNA polymerase II (Pol II) transcribes all protein-coding genes but also some non-coding RNAs (e.g., snRNAs, snoRNAs or long non-coding RNAs). RNA polymerase III transcribes 5S rRNA, transfer RNA (tRNA) genes, and some small non-coding RNAs (e.g., 7SK). Transcription ends when the polymerase encounters a sequence called the terminator.
mRNA processing
While transcription of prokaryotic protein-coding genes creates messenger RNA (mRNA) that is ready for translation into protein, transcription of eukaryotic genes leaves a primary transcript of RNA (pre-RNA), which first has to undergo a series of modifications to become a mature RNA. Types and steps involved in the maturation processes vary between coding and non-coding preRNAs; i.e. even though preRNA molecules for both mRNA and tRNA undergo splicing, the steps and machinery involved are different. The processing of non-coding RNA is described below (non-coding RNA maturation).
The processing of pre-mRNA includes 5′ capping, which is a set of enzymatic reactions that add 7-methylguanosine (m7G) to the 5′ end of pre-mRNA and thus protect the RNA from degradation by exonucleases. The m7G cap is then bound by the cap binding complex heterodimer (CBP20/CBP80), which aids in mRNA export to the cytoplasm and also protects the RNA from decapping. | Gene expression | Wikipedia | 449 | 159266 | https://en.wikipedia.org/wiki/Gene%20expression | Biology and health sciences | Genetics and taxonomy | null |
Another modification is 3′ cleavage and polyadenylation. These occur if a polyadenylation signal sequence (5′-AAUAAA-3′) is present in the pre-mRNA, usually between the protein-coding sequence and the terminator. The pre-mRNA is first cleaved, and then a series of ~200 adenines (A) is added to form the poly(A) tail, which protects the RNA from degradation. The poly(A) tail is bound by multiple poly(A)-binding proteins (PABPs) necessary for mRNA export and translation re-initiation. In the inverse process of deadenylation, poly(A) tails are shortened by the CCR4-Not 3′-5′ exonuclease, which often leads to full transcript decay.
A very important modification of eukaryotic pre-mRNA is RNA splicing. The majority of eukaryotic pre-mRNAs consist of alternating segments called exons and introns. During the process of splicing, an RNA-protein catalytic complex known as the spliceosome catalyzes two transesterification reactions, which remove an intron and release it in the form of a lariat structure, and then splice neighbouring exons together. In certain cases, some introns or exons can be either removed or retained in mature mRNA. This so-called alternative splicing creates a series of different transcripts originating from a single gene. Because these transcripts can potentially be translated into different proteins, splicing extends the complexity of eukaryotic gene expression and the size of a species' proteome.
Extensive RNA processing may be an evolutionary advantage made possible by the nucleus of eukaryotes. In prokaryotes, transcription and translation happen together, whilst in eukaryotes, the nuclear membrane separates the two processes, giving time for RNA processing to occur.
Non-coding RNA maturation | Gene expression | Wikipedia | 394 | 159266 | https://en.wikipedia.org/wiki/Gene%20expression | Biology and health sciences | Genetics and taxonomy | null |
In most organisms non-coding genes (ncRNA) are transcribed as precursors that undergo further processing. In the case of ribosomal RNAs (rRNA), they are often transcribed as a pre-rRNA that contains one or more rRNAs. The pre-rRNA is cleaved and modified (2′-O-methylation and pseudouridine formation) at specific sites by approximately 150 different small nucleolus-restricted RNA species, called snoRNAs. SnoRNAs associate with proteins, forming snoRNPs. While the snoRNA part base-pairs with the target RNA and thus positions the modification at a precise site, the protein part performs the catalytic reaction. In eukaryotes, in particular a snoRNP called RNase MRP cleaves the 45S pre-rRNA into the 28S, 5.8S, and 18S rRNAs. The rRNA and RNA processing factors form large aggregates called the nucleolus.
In the case of transfer RNA (tRNA), for example, the 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme and the non-templated 3′ CCA tail is added by a nucleotidyl transferase. In the case of micro RNA (miRNA), miRNAs are first transcribed as primary transcripts or pri-miRNA with a cap and poly-A tail and processed to short, 70-nucleotide stem-loop structures known as pre-miRNA in the cell nucleus by the enzymes Drosha and Pasha. After being exported, it is then processed to mature miRNAs in the cytoplasm by interaction with the endonuclease Dicer, which also initiates the formation of the RNA-induced silencing complex (RISC), composed of the Argonaute protein.
Even snRNAs and snoRNAs themselves undergo a series of modifications before they become part of a functional RNP complex. This is done either in the nucleoplasm or in specialized compartments called Cajal bodies. Their bases are methylated or pseudouridylated by a group of small Cajal body-specific RNAs (scaRNAs), which are structurally similar to snoRNAs.
RNA export | Gene expression | Wikipedia | 474 | 159266 | https://en.wikipedia.org/wiki/Gene%20expression | Biology and health sciences | Genetics and taxonomy | null |
In eukaryotes most mature RNA must be exported to the cytoplasm from the nucleus. While some RNAs function in the nucleus, many RNAs are transported through the nuclear pores and into the cytosol. Export of RNAs requires association with specific proteins known as exportins. Specific exportin molecules are responsible for the export of a given RNA type. mRNA transport also requires the correct association with Exon Junction Complex (EJC), which ensures that correct processing of the mRNA is completed before export. In some cases RNAs are additionally transported to a specific part of the cytoplasm, such as a synapse; they are then towed by motor proteins that bind through linker proteins to specific sequences (called "zipcodes") on the RNA.
Translation
For some non-coding RNA, the mature RNA is the final gene product. In the case of messenger RNA (mRNA) the RNA is an information carrier coding for the synthesis of one or more proteins. mRNA carrying a single protein sequence (common in eukaryotes) is monocistronic whilst mRNA carrying multiple protein sequences (common in prokaryotes) is known as polycistronic.
Every mRNA consists of three parts: a 5′ untranslated region (5′UTR), a protein-coding region or open reading frame (ORF), and a 3′ untranslated region (3′UTR). The coding region carries information for protein synthesis encoded by the genetic code as triplets. Each triplet of nucleotides of the coding region is called a codon and corresponds to a binding site complementary to an anticodon triplet in transfer RNA. Transfer RNAs with the same anticodon sequence always carry an identical type of amino acid. Amino acids are then chained together by the ribosome according to the order of triplets in the coding region. The ribosome helps transfer RNA bind to messenger RNA, takes the amino acid from each transfer RNA, and joins them into a still-unstructured protein chain. Each mRNA molecule is translated into many protein molecules, on average ~2800 in mammals. | Gene expression | Wikipedia | 440 | 159266 | https://en.wikipedia.org/wiki/Gene%20expression | Biology and health sciences | Genetics and taxonomy | null |
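Continuing the toy example from the transcription sketch above (invented sequence; only a handful of the 64 codons are included), the coding region is read codon by codon until a stop codon:

```python
# A small excerpt of the standard genetic code (codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "CCG": "Pro",
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def translate(mrna):
    """Translate a coding region codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("AUGCCGUAA"))  # Met-Pro
```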
In prokaryotes translation generally occurs at the point of transcription (co-transcriptionally), often using a messenger RNA that is still in the process of being created. In eukaryotes translation can occur in a variety of regions of the cell depending on where the protein being synthesized is supposed to be. Major locations are the cytoplasm for soluble cytoplasmic proteins and the membrane of the endoplasmic reticulum for proteins that are for export from the cell or insertion into a cell membrane. Proteins that are supposed to be produced at the endoplasmic reticulum are recognised part-way through the translation process. This is governed by the signal recognition particle—a protein that binds to the ribosome and directs it to the endoplasmic reticulum when it finds a signal peptide on the growing (nascent) amino acid chain.
Folding
Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA into a linear chain of amino acids. This polypeptide lacks any developed three-dimensional structure (the left hand side of the neighboring figure). The polypeptide then folds into its characteristic and functional three-dimensional structure from a random coil. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right hand side of the figure) known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence (Anfinsen's dogma).
The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded. Failure to fold into the intended shape usually produces inactive proteins, but misfolded proteins can also gain altered or toxic properties, as in the case of prions. Several neurodegenerative and other diseases are believed to result from the accumulation of misfolded proteins. Many allergies are caused by the incorrect folding of some proteins, because the immune system does not produce antibodies for certain protein structures.
Proteins called chaperones assist the newly formed protein to attain (fold into) the 3-dimensional structure it needs to function. Similarly, RNA chaperones help RNAs attain their functional shapes. Assisting protein folding is one of the main roles of the endoplasmic reticulum in eukaryotes.
Translocation
Secretory proteins of eukaryotes or prokaryotes must be translocated to enter the secretory pathway. Newly synthesized proteins are directed to the eukaryotic Sec61 or prokaryotic SecYEG translocation channel by signal peptides. The efficiency of protein secretion in eukaryotes strongly depends on the signal peptide that is used.
Protein transport
Many proteins are destined for parts of the cell other than the cytosol, and a wide range of signal sequences (signal peptides) is used to direct proteins to their correct destinations. In prokaryotes this is normally a simple process due to the limited compartmentalisation of the cell. However, in eukaryotes there is a great variety of different targeting processes to ensure the protein arrives at the correct organelle.
Not all proteins remain within the cell and many are exported, for example, digestive enzymes, hormones and extracellular matrix proteins. In eukaryotes the export pathway is well developed and the main mechanism for the export of these proteins is translocation to the endoplasmic reticulum, followed by transport via the Golgi apparatus.
Regulation of gene expression
Regulation of gene expression is the control of the amount and timing of appearance of the functional product of a gene. Control of expression is vital to allow a cell to produce the gene products it needs when it needs them; in turn, this gives cells the flexibility to adapt to a variable environment, external signals, damage to the cell, and other stimuli. More generally, gene regulation gives the cell control over all structure and function, and is the basis for cellular differentiation, morphogenesis and the versatility and adaptability of any organism.
Numerous terms are used to describe types of genes depending on how they are regulated; these include:
A constitutive gene is a gene that is transcribed continually as opposed to a facultative gene, which is only transcribed when needed.
A housekeeping gene is a gene that is required to maintain basic cellular function and so is typically expressed in all cell types of an organism. Examples include actin, GAPDH and ubiquitin. Some housekeeping genes are transcribed at a relatively constant rate and these genes can be used as a reference point in experiments to measure the expression rates of other genes.
A facultative gene is a gene only transcribed when needed as opposed to a constitutive gene.
An inducible gene is a gene whose expression is either responsive to environmental change or dependent on the position in the cell cycle.
Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The stability of the final gene product, whether it is RNA or protein, also contributes to the expression level of the gene—an unstable product results in a low expression level. In general gene expression is regulated through changes in the number and type of interactions between molecules that collectively influence transcription of DNA and translation of RNA.
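The role of product stability can be made concrete with a minimal kinetic sketch. Assuming constant synthesis at rate k_syn and first-order decay with rate constant k_deg (both values invented here), the steady-state abundance is k_syn/k_deg, so a less stable product (larger k_deg) gives a lower expression level:

```python
# Integrate dm/dt = k_syn - k_deg * m and compare with the analytic
# steady state k_syn / k_deg. All rate values are illustrative.
k_syn = 10.0                      # molecules synthesized per minute (assumed)
for k_deg in (0.1, 1.0):          # decay constants per minute (assumed)
    m, dt = 0.0, 0.01
    for _ in range(100_000):      # 1000 simulated minutes of Euler steps
        m += (k_syn - k_deg * m) * dt
    print(f"k_deg={k_deg}: simulated={m:.1f}, analytic={k_syn / k_deg:.1f}")
```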
Some simple examples of where gene expression is important are:
Control of insulin expression so that it provides a signal for blood glucose regulation.
X chromosome inactivation in female mammals to prevent an "overdose" of the genes it contains.
Cyclin expression levels control progression through the eukaryotic cell cycle.
Transcriptional regulation
Regulation of transcription can be broken down into three main routes of influence: genetic (direct interaction of a control factor with the gene), modulation (interaction of a control factor with the transcription machinery) and epigenetic (non-sequence changes in DNA structure that influence transcription).
Direct interaction with DNA is the simplest and the most direct method by which a protein changes transcription levels. Genes often have several protein binding sites around the coding region with the specific function of regulating transcription. There are many classes of regulatory DNA binding sites known as enhancers, insulators and silencers. The mechanisms for regulating transcription are varied, from blocking key binding sites on the DNA for RNA polymerase to acting as an activator and promoting transcription by assisting RNA polymerase binding.
The activity of transcription factors is further modulated by intracellular signals causing protein post-translational modification including phosphorylation, acetylation, or glycosylation. These changes influence a transcription factor's ability to bind, directly or indirectly, to promoter DNA, to recruit RNA polymerase, or to favor elongation of a newly synthesized RNA molecule.
The nuclear membrane in eukaryotes allows further regulation of transcription factors by the duration of their presence in the nucleus, which is regulated by reversible changes in their structure and by binding of other proteins. Environmental stimuli or endocrine signals may cause modification of regulatory proteins eliciting cascades of intracellular signals, which result in regulation of gene expression.
It has become apparent that there is a significant influence of non-DNA-sequence specific effects on transcription. These effects are referred to as epigenetic and involve the higher order structure of DNA, non-sequence specific DNA binding proteins and chemical modification of DNA. In general epigenetic effects alter the accessibility of DNA to proteins and so modulate transcription.
In eukaryotes the structure of chromatin, controlled by the histone code, regulates access to DNA with significant impacts on the expression of genes in euchromatin and heterochromatin areas.
Enhancers, transcription factors, mediator complex and DNA loops in mammalian transcription
Gene expression in mammals is regulated by many cis-regulatory elements, including core promoters and promoter-proximal elements that are located near the transcription start sites of genes, upstream on the DNA (towards the 5' region of the sense strand). Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Enhancers and their associated transcription factors have a leading role in the regulation of gene expression.
Enhancers are genome regions that regulate genes. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come into physical proximity with the promoters of their target genes. Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control gene expression.
The illustration shows an enhancer looping around to come into proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1). One member of the dimer is anchored to its binding motif on the enhancer and the other member is anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function-specific transcription factors (among the about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer. A small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, governs the transcription level of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter.
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene.
DNA methylation and demethylation in transcriptional regulation
DNA methylation is a widespread mechanism for epigenetic influence on gene expression; it is seen in both bacteria and eukaryotes and has roles in heritable transcription silencing and in transcription regulation. Methylation most often occurs on a cytosine (see Figure). Methylation of cytosine primarily occurs in dinucleotide sequences where a cytosine is followed by a guanine, a CpG site. The number of CpG sites in the human genome is about 28 million. Depending on the type of cell, about 70% of the CpG sites have a methylated cytosine.
Methylation of cytosine in DNA has a major role in regulating gene expression. Methylation of CpGs in a promoter region of a gene usually represses gene transcription while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.
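A small computational illustration of CpG-site counting follows. The sequence is invented, and the observed/expected CpG ratio shown is the conventional Gardiner-Garden and Frommer heuristic for flagging CpG-rich regions, not a method described in this article:

```python
# Count CpG dinucleotides and compute the observed/expected CpG ratio,
# n_CpG * L / (n_C * n_G), on a made-up sequence.
def cpg_obs_exp(seq: str) -> float:
    seq = seq.upper()
    n_cpg = seq.count("CG")
    n_c, n_g = seq.count("C"), seq.count("G")
    return n_cpg * len(seq) / (n_c * n_g) if n_c and n_g else 0.0

demo = "TTCGCGCGATCGCGGGCCGC"
print(f"CpG sites: {demo.count('CG')}, obs/exp ratio: {cpg_obs_exp(demo):.2f}")
```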
Transcriptional regulation in learning and memory
In a rat, contextual fear conditioning (CFC) is a painful learning experience. Just one episode of CFC can result in a life-long fearful memory. After an episode of CFC, cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the hippocampus neuron DNA of a rat. The hippocampus is where new memories are initially stored. After CFC about 500 genes have increased transcription (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes have decreased transcription (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain.
Some specific mechanisms guiding new DNA methylations and new DNA demethylations in the hippocampus during memory establishment have been identified. One mechanism includes guiding the short isoform of the TET1 DNA demethylation enzyme, TET1s, to about 600 locations on the genome. The guidance is performed by association of TET1s with the EGR1 protein, a transcription factor important in memory formation. Bringing TET1s to these locations initiates DNA demethylation at those sites, up-regulating the associated genes. A second mechanism involves DNMT3A2, a splice isoform of the DNA methyltransferase DNMT3A, which adds methyl groups to cytosines in DNA. This isoform is induced by synaptic activity, and its location of action appears to be determined by histone post-translational modifications (a histone code). The resulting new messenger RNAs are then transported by messenger RNP particles (neuronal granules) to synapses of the neurons, where they can be translated into proteins affecting the activities of synapses.
In particular, the brain-derived neurotrophic factor gene (BDNF) is known as a "learning gene". After CFC there was upregulation of BDNF gene expression, related to decreased CpG methylation of certain internal promoters of the gene, and this was correlated with learning.
Transcriptional regulation in cancer
The majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-transcribed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
Post-transcriptional regulation
In eukaryotes, where export of RNA is required before translation is possible, nuclear export is thought to provide additional control over gene expression. All transport in and out of the nucleus is via the nuclear pore and transport is controlled by a wide range of importin and exportin proteins.
Expression of a gene coding for a protein is only possible if the messenger RNA carrying the code survives long enough to be translated. In a typical cell, an RNA molecule is only stable if specifically protected from degradation. RNA degradation has particular importance in regulation of expression in eukaryotic cells where mRNA has to travel significant distances before being translated. In eukaryotes, RNA is stabilised by certain post-transcriptional modifications, particularly the 5′ cap and poly-adenylated tail.
Intentional degradation of mRNA is used not just as a defence mechanism from foreign RNA (normally from viruses) but also as a route of mRNA destabilisation. If an mRNA molecule has a complementary sequence to a small interfering RNA then it is targeted for destruction via the RNA interference pathway.
Three prime untranslated regions and microRNAs
Three prime untranslated regions (3′UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3′-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.
The 3′-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3′-UTRs. Among all regulatory motifs within the 3′-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
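A minimal sketch of what MRE matching involves is shown below: canonical miRNA targeting is dominated by pairing between the miRNA "seed" (nucleotides 2–8) and complementary 7-mers in the 3′UTR. The UTR sequence is invented, the miRNA is the let-7 sequence used purely for illustration, and real target predictors add site-type and conservation scoring:

```python
# Scan a 3'UTR for exact reverse-complement matches to a miRNA seed.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_sites(mirna: str, utr: str) -> list[int]:
    seed = mirna[1:8]                        # nucleotides 2-8 of the miRNA
    site = seed.translate(COMPLEMENT)[::-1]  # reverse complement to match
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"             # let-7, for illustration
utr = "AAACUACCUCAAAAAACUACCUCAAA"           # invented 3'UTR with two sites
print(seed_sites(mirna, utr))                # [3, 16]
```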
As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3′UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).
The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes.
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.
Translational regulation
Direct regulation of translation is less prevalent than control of transcription or mRNA stability but is occasionally used. Inhibition of protein translation is a major target for toxins and antibiotics, so they can kill a cell by overriding its normal gene expression control. Protein synthesis inhibitors include the antibiotic neomycin and the toxin ricin.
Post-translational modifications
Post-translational modifications (PTMs) are covalent modifications to proteins. Like RNA splicing, they help to significantly diversify the proteome. These modifications are usually catalyzed by enzymes. Additionally, processes like covalent additions to amino acid side chain residues can often be reversed by other enzymes. However, some, like the proteolytic cleavage of the protein backbone, are irreversible.
PTMs play many important roles in the cell. For example, phosphorylation is primarily involved in activating and deactivating proteins and in signaling pathways. PTMs are involved in transcriptional regulation: an important function of acetylation and methylation is histone tail modification, which alters how accessible DNA is for transcription. They can also be seen in the immune system, where glycosylation plays a key role. One type of PTM can initiate another type of PTM, as can be seen in how ubiquitination tags proteins for degradation through proteolysis. Proteolysis, other than being involved in breaking down proteins, is also important in activating and deactivating them, and in regulating biological processes such as DNA transcription and cell death.
Measurement
Measuring gene expression is an important part of many life sciences, as the ability to quantify the level at which a particular gene is expressed within a cell, tissue or organism can provide a lot of valuable information. For example, measuring gene expression can:
Identify viral infection of a cell (viral protein expression).
Determine an individual's susceptibility to cancer (oncogene expression).
Find if a bacterium is resistant to penicillin (beta-lactamase expression).
Gene expression profiling evaluates a panel of genes to help understand the fundamental mechanism of a cell. This is increasingly used in cancer therapy to target specific chemotherapy. (See RNA-Seq and DNA microarray for details.)
Similarly, the analysis of the location of protein expression is a powerful tool, and this can be done on an organismal or cellular scale. Investigation of localization is particularly important for the study of development in multicellular organisms and as an indicator of protein function in single cells. Ideally, measurement of expression is done by detecting the final gene product (for many genes, this is the protein); however, it is often easier to detect one of the precursors, typically mRNA, and to infer gene-expression levels from these measurements.
mRNA quantification
Levels of mRNA can be quantitatively measured by northern blotting, which provides size and sequence information about the mRNA molecules. A sample of RNA is separated on an agarose gel and hybridized to a radioactively labeled RNA probe that is complementary to the target sequence. The radiolabeled RNA is then detected by an autoradiograph. Because the use of radioactive reagents makes the procedure time-consuming and potentially dangerous, alternative labeling and detection methods, such as digoxigenin and biotin chemistries, have been developed. Perceived disadvantages of Northern blotting are that large quantities of RNA are required and that quantification may not be completely accurate, as it involves measuring band strength in an image of a gel. On the other hand, the additional mRNA size information from the Northern blot allows the discrimination of alternately spliced transcripts.
Another approach for measuring mRNA abundance is RT-qPCR. In this technique, reverse transcription is followed by quantitative PCR. Reverse transcription first generates a DNA template from the mRNA; this single-stranded template is called cDNA. The cDNA template is then amplified in the quantitative step, during which the fluorescence emitted by labeled hybridization probes or intercalating dyes changes as the DNA amplification process progresses. With a carefully constructed standard curve, qPCR can produce an absolute measurement of the number of copies of original mRNA, typically in units of copies per nanolitre of homogenized tissue or copies per cell. qPCR is very sensitive (detection of a single mRNA molecule is theoretically possible), but can be expensive depending on the type of reporter used; fluorescently labeled oligonucleotide probes are more expensive than non-specific intercalating fluorescent dyes.
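The standard-curve step can be sketched in a few lines. The Ct values and copy numbers below are invented; in practice the curve comes from a dilution series of known standards run alongside the samples:

```python
# Fit Ct against log10(copies), report amplification efficiency, and
# invert the fitted line to estimate copies in an unknown sample.
import numpy as np

std_copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # known standards
std_ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])  # measured Ct (assumed)

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1 / slope) - 1                # 1.0 would be 100%

def copies_from_ct(ct: float) -> float:
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
print(f"sample at Ct 24.0 ≈ {copies_from_ct(24.0):,.0f} copies")
```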
For expression profiling, or high-throughput analysis of many genes within a sample, quantitative PCR may be performed for hundreds of genes simultaneously in the case of low-density arrays. A second approach is the hybridization microarray. A single array or "chip" may contain probes to determine transcript levels for every known gene in the genome of one or more organisms. Alternatively, "tag based" technologies like Serial analysis of gene expression (SAGE) and RNA-Seq, which can provide a relative measure of the cellular concentration of different mRNAs, can be used. An advantage of tag-based methods is the "open architecture", allowing for the exact measurement of any transcript, with a known or unknown sequence. Next-generation sequencing (NGS) such as RNA-Seq is another approach, producing vast quantities of sequence data that can be matched to a reference genome. Although NGS is comparatively time-consuming, expensive, and resource-intensive, it can identify single-nucleotide polymorphisms, splice-variants, and novel genes, and can also be used to profile expression in organisms for which little or no sequence information is available.
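Tag- and sequence-based methods report relative abundances, which are usually normalized before comparison. The sketch below shows one widely used normalization, transcripts per million (TPM); the counts and transcript lengths are invented:

```python
# TPM: divide each gene's count by its transcript length (reads per
# kilobase), then rescale so all values in the sample sum to one million.
def tpm(counts: dict[str, int], lengths_kb: dict[str, float]) -> dict[str, float]:
    rpk = {g: counts[g] / lengths_kb[g] for g in counts}
    scale = sum(rpk.values()) / 1e6
    return {g: round(v / scale, 1) for g, v in rpk.items()}

counts = {"geneA": 500, "geneB": 500, "geneC": 100}      # assumed read counts
lengths_kb = {"geneA": 1.0, "geneB": 5.0, "geneC": 0.5}  # assumed lengths
print(tpm(counts, lengths_kb))
# geneA outranks geneB despite equal counts: its transcript is shorter.
```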
RNA profiles in Wikipedia
Profiles like these are found for almost all proteins listed in Wikipedia. They are generated by organizations such as the Genomics Institute of the Novartis Research Foundation and the European Bioinformatics Institute. Additional information can be found by searching their databases (for an example of the GLUT4 transporter pictured here, see citation). These profiles indicate the level of DNA expression (and hence RNA produced) of a certain protein in a certain tissue, and are color-coded accordingly in the images located in the Protein Box on the right side of each Wikipedia page.
Protein quantification
For genes encoding proteins, the expression level can be directly assessed by a number of methods with some clear analogies to the techniques for mRNA quantification.
One of the most commonly used methods is to perform a Western blot against the protein of interest. This gives information on the size of the protein in addition to its identity. A sample (often cellular lysate) is separated on a polyacrylamide gel, transferred to a membrane and then probed with an antibody to the protein of interest. The antibody can either be conjugated to a fluorophore or to horseradish peroxidase for imaging and/or quantification. The gel-based nature of this assay makes quantification less accurate, but it has the advantage of being able to identify later modifications to the protein, for example proteolysis or ubiquitination, from changes in size.
mRNA-protein correlation
While transcription directly reflects gene expression, the copy number of mRNA molecules does not directly correlate with the number of protein molecules translated from mRNA. Quantification of both protein and mRNA permits a correlation of the two levels. Regulation on each step of gene expression can impact the correlation, as shown for regulation of translation or protein stability. Post-translational factors, such as protein transport in highly polar cells, can influence the measured mRNA-protein correlation as well.
Localization
Analysis of expression is not limited to quantification; localization can also be determined. mRNA can be detected with a suitably labelled complementary mRNA strand and protein can be detected via labelled antibodies. The probed sample is then observed by microscopy to identify where the mRNA or protein is.
By replacing the gene with a new version fused to a green fluorescent protein marker or similar, expression may be directly quantified in live cells. This is done by imaging using a fluorescence microscope. It is very difficult to clone a GFP-fused protein into its native location in the genome without affecting expression levels, so this method often cannot be used to measure endogenous gene expression. It is, however, widely used to measure the expression of a gene artificially introduced into the cell, for example via an expression vector. By fusing a target protein to a fluorescent reporter, the protein's behavior, including its cellular localization and expression level, can be significantly changed.
The enzyme-linked immunosorbent assay works by using antibodies immobilised on a microtiter plate to capture proteins of interest from samples added to the well. Using a detection antibody conjugated to an enzyme or fluorophore the quantity of bound protein can be accurately measured by fluorometric or colourimetric detection. The detection process is very similar to that of a Western blot, but by avoiding the gel steps more accurate quantification can be achieved.
Expression system
An expression system is a system specifically designed for the production of a gene product of choice. This is normally a protein, although it may also be RNA, such as tRNA or a ribozyme. An expression system consists of a gene, normally encoded by DNA, and the molecular machinery required to transcribe the DNA into mRNA and translate the mRNA into protein using the reagents provided. In the broadest sense this includes every living cell, but the term is more normally used to refer to expression as a laboratory tool. An expression system is therefore often artificial in some manner, although the underlying process it exploits is entirely natural; viruses, for example, replicate by using the host cell as an expression system for the viral proteins and genome.
Inducible expression
Doxycycline is used in "Tet-on" and "Tet-off" tetracycline-controlled transcriptional activation to regulate transgene expression in organisms and cell cultures.
In nature
In addition to these biological tools, certain naturally observed configurations of DNA (genes, promoters, enhancers, repressors) and the associated machinery itself are referred to as an expression system. This term is normally used in the case where a gene or set of genes is switched on under well defined conditions, for example, the simple repressor switch expression system in Lambda phage and the lac operator system in bacteria. Several natural expression systems are directly used or modified and used for artificial expression systems such as the Tet-on and Tet-off expression system.
Gene networks
Genes have sometimes been regarded as nodes in a network, with inputs being proteins such as transcription factors, and outputs being the level of gene expression. The node itself performs a function, and the operation of these functions has been interpreted as a kind of information processing within cells that determines cellular behavior.
Gene networks can also be constructed without formulating an explicit causal model. This is often the case when assembling networks from large expression data sets. Covariation and correlation of expression is computed across a large sample of cases and measurements (often transcriptome or proteome data). The source of variation can be either experimental or natural (observational). There are several ways to construct gene expression networks, but one common approach is to compute a matrix of all pair-wise correlations of expression across conditions, time points, or individuals and convert the matrix (after thresholding at some cut-off value) into a graphical representation in which nodes represent genes, transcripts, or proteins and edges connecting these nodes represent the strength of association (see GeneNetwork).
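A minimal sketch of this correlation-and-threshold construction, on synthetic data, might look as follows; the expression matrix, the cut-off of 0.8, and the engineered co-variation between the first two genes are all assumptions of the example:

```python
# Build a co-expression network: correlate all gene pairs across samples,
# keep an edge wherever |r| exceeds a threshold, drop self-loops.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(5, 20))                  # 5 genes x 20 samples
expr[1] = expr[0] + 0.1 * rng.normal(size=20)    # make genes 0 and 1 co-vary

r = np.corrcoef(expr)                            # pairwise correlations
adjacency = (np.abs(r) > 0.8) & ~np.eye(5, dtype=bool)

edges = [(i, j) for i in range(5) for j in range(i + 1, 5) if adjacency[i, j]]
print(edges)                                     # expected: [(0, 1)]
```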
Techniques and tools
The following experimental techniques are used to measure gene expression and are listed in roughly chronological order, starting with the older, more established technologies. They are divided into two groups based on their degree of multiplexity.
Low-to-mid-plex techniques:
Reporter gene
Northern blot
Western blot
Fluorescent in situ hybridization
Reverse transcription PCR
Higher-plex techniques:
SAGE
DNA microarray
Tiling array
RNA-Seq
Gene expression databases
Gene expression omnibus (GEO) at NCBI
Expression Atlas at the EBI
Bgee at the SIB Swiss Institute of Bioinformatics
Mouse Gene Expression Database at the Jackson Laboratory
CollecTF: a database of experimentally validated transcription factor-binding sites in Bacteria.
COLOMBOS: collection of bacterial expression compendia.
Many Microbe Microarrays Database: microbial Affymetrix data
Potassium chloride (KCl, or potassium salt) is a metal halide salt composed of potassium and chlorine. It is odorless and has a white or colorless vitreous crystal appearance. The solid dissolves readily in water, and its solutions have a salt-like taste. Potassium chloride can be obtained from ancient dried lake deposits. KCl is used as a fertilizer, in medicine, in scientific applications, domestic water softeners (as a substitute for sodium chloride salt), and in food processing, where it may be known as E number additive E508.
It occurs naturally as the mineral sylvite, named after the salt's historical designations sal digestivum Sylvii and sal febrifugum Sylvii, and in combination with sodium chloride as sylvinite.
Uses
Fertilizer
The majority of the potassium chloride produced is used for making fertilizer, called potash, since the growth of many plants is limited by potassium availability. The term "potash" refers to various mined and manufactured salts that contain potassium in water-soluble form. Potassium chloride sold as fertilizer is known as "muriate of potash"—the common name for potassium chloride used in agriculture. The vast majority of potash fertilizer worldwide is sold as muriate of potash. The dominance of muriate of potash in the fertilizer market is due to its high potassium content (approximately 60% K2O equivalent) and relative affordability compared to other potassium sources such as sulfate of potash (potassium sulfate). Potassium is one of the three primary macronutrients essential for plant growth, alongside nitrogen and phosphorus. Potassium plays a vital role in various plant physiological processes, including enzyme activation, photosynthesis, protein synthesis, and water regulation. For watering plants, a moderate concentration of potassium chloride (KCl) is used to avoid potential toxicity: 6 mM (millimolar) is generally effective and safe for most plants; that is approximately 0.45 g per liter of water.
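The conversion behind that figure is simple molarity arithmetic; the sketch below assumes a molar mass of about 74.55 g/mol for KCl:

```python
# Convert a millimolar KCl concentration to grams per litre of water.
MOLAR_MASS_KCL = 74.55            # g/mol
conc_mM = 6.0                     # target concentration in millimolar
grams_per_litre = conc_mM / 1000 * MOLAR_MASS_KCL
print(f"{conc_mM:g} mM KCl ≈ {grams_per_litre:.2f} g/L")   # ≈ 0.45 g/L
```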
Medical use
Potassium is vital in the human body, and potassium chloride by mouth is the standard means to treat low blood potassium, although it can also be given intravenously. It is on the World Health Organization's List of Essential Medicines. It is also an ingredient in oral rehydration solution (ORS), used in oral rehydration therapy to counter the hypokalemia caused by diarrhoea; this solution is likewise on the WHO's List of Essential Medicines.
Potassium chloride contains 52% of elemental potassium by mass.
Overdose causes hyperkalemia which can disrupt cell signaling to the extent that the heart will stop, reversibly in the case of some open heart surgeries.
Culinary use
Potassium chloride can be used as a salt substitute for food, but due to its weak, bitter, unsalty flavor, it is often mixed with ordinary table salt (sodium chloride) to improve the taste, forming low-sodium salt. The addition of 1 ppm of thaumatin considerably reduces this bitterness. Complaints of bitterness or a chemical or metallic taste are also reported with potassium chloride used in food.
Execution
In the United States, potassium chloride is used as the final drug in the three-injection sequence of lethal injection as a form of capital punishment. It induces cardiac arrest, ultimately killing the inmate.
Industrial
As a chemical feedstock, the salt is used for the manufacture of potassium hydroxide and potassium metal. It is also used in medicine, lethal injections, scientific applications, food processing, soaps, and as a sodium-free substitute for table salt for people concerned about the health effects of sodium.
It is used as a supplement in animal feed to boost the potassium level in the feed. As an added benefit, it is known to increase milk production.
It is sometimes used in solution as a completion fluid in petroleum and natural gas operations, as well as being an alternative to sodium chloride in household water softener units.
Glass manufacturers use granular potash as a flux, lowering the temperature at which a mixture melts. Because potash imparts excellent clarity to glass, it is commonly used in eyeglasses, glassware, televisions, and computer monitors.
Because natural potassium contains a tiny amount of the isotope potassium-40, potassium chloride is used as a beta radiation source to calibrate radiation monitoring equipment. It also emits a relatively low level of 511 keV gamma rays from positron annihilation, which can be used to calibrate medical scanners.
Potassium chloride is used in some de-icing products designed to be safer for pets and plants, though these are inferior in melting quality to calcium chloride. It is also used in various brands of bottled water.
Potassium chloride was once used as a fire extinguishing agent, and in portable and wheeled fire extinguishers. Known as Super-K dry chemical, it was more effective than sodium bicarbonate-based dry chemicals and was compatible with protein foam. This agent fell out of favor with the introduction of potassium bicarbonate (Purple-K) dry chemical in the late 1960s, which was much less corrosive, as well as more effective. It is rated for B and C fires.
Along with sodium chloride and lithium chloride, potassium chloride is used as a flux for the gas welding of aluminium.
Potassium chloride is also an optical crystal with a wide transmission range from 210 nm to 20 μm. While cheap, KCl crystals are hygroscopic. This limits its application to protected environments or short-term uses such as prototyping. Exposed to free air, KCl optics will "rot". Whereas KCl components were formerly used for infrared optics, they have been entirely replaced by much tougher crystals such as zinc selenide.
Potassium chloride is used as a scotophor with designation P10 in dark-trace CRTs, e.g. in the Skiatron.
Toxicity
The typical amounts of potassium chloride found in the diet appear to be generally safe. In larger quantities, however, potassium chloride is toxic. The median lethal dose (LD50) of orally ingested potassium chloride is approximately 2.5 g per kilogram of body weight. In comparison, the oral LD50 of sodium chloride (table salt) is 3.75 g/kg.
Intravenously, the LD50 of potassium chloride is far smaller, at about 57.2 mg/kg to 66.7 mg/kg; this is found by dividing the lethal concentration of positive potassium ions (about 30 to 35 mg/kg) by the proportion by mass of potassium ions in potassium chloride (about 0.52445 mg K+/mg KCl).
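That conversion can be reproduced directly from the atomic masses of potassium and chlorine (about 39.098 and 35.453 g/mol respectively):

```python
# Mass fraction of K+ in KCl, and the intravenous LD50 range it implies.
M_K, M_CL = 39.098, 35.453             # atomic masses, g/mol
k_fraction = M_K / (M_K + M_CL)        # ≈ 0.52445 mg K+ per mg KCl
print(f"K+ mass fraction in KCl: {k_fraction:.5f}")

for lethal_k in (30, 35):              # lethal K+ dose, mg/kg
    print(f"{lethal_k} mg K+/kg -> {lethal_k / k_fraction:.1f} mg KCl/kg")
# prints roughly 57.2 and 66.7 mg KCl per kg, matching the range above
```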
Chemical properties
Solubility
KCl is soluble in a variety of polar solvents.
Solutions of KCl are common standards, for example for calibration of the electrical conductivity of (ionic) solutions, since KCl solutions are stable, allowing for reproducible measurements. In aqueous solution, it is essentially fully ionized into solvated and ions.
Redox and the conversion to potassium metal
Although potassium is more electropositive than sodium, KCl can be reduced to the metal by reaction with metallic sodium at 850 °C because the more volatile potassium can be removed by distillation (see Le Chatelier's principle):
KCl(l) + Na(l) ⇌ NaCl(l) + K(g)
This method is the main method for producing metallic potassium. Electrolysis (used for sodium) fails because of the high solubility of potassium in molten KCl.
Other potassium chloride stoichiometries
Potassium chlorides with formulas other than KCl have been predicted to become stable under pressures of 20 GPa or more. Among these, two phases of KCl3 were synthesized and characterized. At 20-40 GPa, a trigonal structure containing K+ and Cl3− is obtained; above 40 GPa this gives way to a phase isostructural with the intermetallic compound Cr3Si.
Physical properties
Under ambient conditions, the crystal structure of potassium chloride is like that of NaCl. It adopts a face-centered cubic structure known as the B1 phase with a lattice constant of roughly 6.3 Å. Crystals cleave easily in three directions. Other polymorphic and hydrated phases are adopted at high pressures.
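The quoted lattice constant can be cross-checked against the handbook density. For a rock-salt (B1) structure there are Z = 4 formula units per conventional cubic cell, so the density is Z·M/(N_A·a³); taking a ≈ 6.29 Å (consistent with the "roughly 6.3 Å" above) reproduces the known density of KCl, about 1.98 g/cm³:

```python
# Predict the density of KCl from its fcc (B1) lattice constant.
N_A = 6.022e23      # Avogadro constant, 1/mol
M = 74.551          # molar mass of KCl, g/mol
Z = 4               # formula units per conventional cell (rock-salt)
a = 6.29e-8         # lattice constant in cm (6.29 Å, assumed)

rho = Z * M / (N_A * a ** 3)
print(f"predicted density: {rho:.2f} g/cm^3")   # ≈ 1.99, close to observed
```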
Some other properties are:
Transmission range: 210 nm to 20 μm
Transmittivity = 92% at 450 nm and rises linearly to 94% at 16 μm
Refractive index = 1.456 at 10 μm
Reflection loss = 6.8% at 10 μm (two surfaces; see the consistency check after this list)
dn/dT (refractive index gradient) = −33.2×10⁻⁶/°C
dL/dT (expansion coefficient) = 40×10⁻⁶/°C
Thermal conductivity = 0.036 W/(cm·K)
Damage threshold (Newman and Novak): 4 GW/cm2 or 2 J/cm2 (0.5 or 1 ns pulses); 4.2 J/cm2 (1.7 ns pulses; Kovalev and Faizullov)
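The refractive index and reflection loss quoted above are mutually consistent, which the following Fresnel-equation check illustrates (normal incidence in air, interference ignored):

```python
# Single-surface Fresnel reflectance R = ((n-1)/(n+1))^2 and the
# corresponding two-surface reflection loss 1 - (1-R)^2.
n = 1.456                             # refractive index at 10 um (from the list)
R = ((n - 1) / (n + 1)) ** 2
two_surface_loss = 1 - (1 - R) ** 2
print(f"two-surface reflection loss: {two_surface_loss:.1%}")   # ≈ 6.8%
```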
As with other compounds containing potassium, KCl in powdered form gives a lilac flame.
Production
Potassium chloride is extracted from the minerals sylvite, carnallite, and potash. It is also extracted from salt water and can be manufactured by crystallization from solution, flotation, or electrostatic separation from suitable minerals. It is a by-product of the production of nitric acid from potassium nitrate and hydrochloric acid.
Most potassium chloride is produced as agricultural and industrial-grade potash in Saskatchewan, Canada, Russia, and Belarus. Saskatchewan alone accounted for over 25% of the world's potash production in 2017.
Laboratory methods
Potassium chloride is inexpensively available and is rarely prepared intentionally in the laboratory. It can be generated by treating potassium hydroxide (or other potassium bases) with hydrochloric acid:
KOH + HCl → KCl + H2O
This conversion is an acid-base neutralization reaction. The resulting salt can then be purified by recrystallization. Another method would be to allow potassium to burn in the presence of chlorine gas, also a very exothermic reaction:
2 K + Cl2 → 2 KCl
Ultralight aviation (called microlight aviation in some countries) is the flying of lightweight, 1- or 2-seat fixed-wing aircraft. Some countries differentiate between weight-shift control and conventional three-axis control aircraft with ailerons, elevator and rudder, calling the former "microlight" and the latter "ultralight".
During the late 1970s and early 1980s, mostly stimulated by the hang gliding movement, many people sought affordable powered flight. As a result, many aviation authorities set up definitions of lightweight, slow-flying aeroplanes that could be subject to minimum regulations. The resulting aeroplanes are commonly called "ultralight aircraft" or "microlights", although the weight and speed limits differ from country to country. In Europe, the sporting (FAI) definition limits the maximum stalling speed to 65 km/h and the maximum take-off weight to 450 kg, or 472.5 kg if a ballistic parachute is installed. The definition means that the aircraft has a slow landing speed and short landing roll in the event of an engine failure.
In most affluent countries, microlights or ultralight aircraft now account for a significant percentage of civilian-owned aircraft worldwide. For instance, in Canada in February 2018, the ultralight fleet made up 20.4% of all registered civilian aircraft. In countries that do not register ultralight aircraft, such as the United States, it is unknown what proportion of the total fleet they make up. In countries where there is no specific extra regulation, ultralights are considered regular aircraft and are subject to certification requirements for both aircraft and pilot.
Definitions
Australia
In Australia, ultralight aircraft and their pilots can either be registered with the Hang Gliding Federation of Australia (HGFA) or Recreational Aviation Australia (RA Aus). In all cases, except for privately built single seat ultralight aeroplanes, microlight aircraft or trikes are regulated by the Civil Aviation Regulations.
Canada
United Kingdom
Pilots of a powered, fixed wing aircraft or paramotors do not need a licence, provided its weight with a full fuel tank is not more than , but they must obey the rules of the air.
For heavier microlights the current UK regulations are similar to the European ones, but helicopters and gyroplanes are not included. | Ultralight aviation | Wikipedia | 449 | 159298 | https://en.wikipedia.org/wiki/Ultralight%20aviation | Technology | Types of aircraft | null |
Other than the very earliest aircraft, all two-seat UK microlights (and until 2007 all single-seaters) have been required to meet an airworthiness standard: BCAR Section S.
In 2007, Single Seat DeRegulated (SSDR), a sub-category of single-seat aircraft, was introduced, allowing owners more freedom for modification and experiments. By 2017 the airworthiness of all single-seat microlights had become solely the responsibility of the user, but pilots must hold a microlight licence, currently the NPPL(M) (National Private Pilot's Licence, Microlight).
New Zealand
Ultralights in New Zealand are subject to NZCAA General Aviation regulations with microlight specific variations as described in Part 103 and AC103-1.
United States
The United States FAA's definition of an ultralight is significantly different from that in most other countries and can lead to some confusion when discussing the topic. The governing regulation in the United States is FAR 103 Ultralight Vehicles. In 2004, the FAA introduced the "Light-sport aircraft" category, which resembles some other countries' microlight categories. Ultralight aviation is represented by the United States Ultralight Association (USUA), which acts as the US aeroclub representative to the Fédération Aéronautique Internationale.