| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,609,200 | https://en.wikipedia.org/wiki/Signedness | In computing, signedness is a property of data types representing numbers in computer programs. A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers).
Because signed numbers can represent negative numbers, they give up part of the positive range available to an unsigned type of the same size (in bits): roughly half of the possible bit patterns represent non-positive values, whereas the corresponding unsigned type can dedicate all possible bit patterns to the non-negative range.
For example, a two's complement signed 16-bit integer can hold the values −32768 to 32767 inclusive, while an unsigned 16-bit integer can hold the values 0 to 65535. For this sign representation method, the leftmost bit (most significant bit) denotes whether the value is negative (0 for positive or zero, 1 for negative).
In programming languages
For most architectures, there is no signed–unsigned type distinction in the machine language. Nevertheless, arithmetic instructions usually set different CPU flags such as the carry flag for unsigned arithmetic and the overflow flag for signed. Those values can be taken into account by subsequent branch or arithmetic commands.
The C programming language, along with its derivatives, implements signedness for all integer data types, as well as for "character". For integers, the unsigned modifier defines the type to be unsigned. The default integer signedness outside bit-fields is signed, but can be set explicitly with the signed modifier. By contrast, the C standard declares signed char, unsigned char, and char to be three distinct types, but specifies that all three must have the same size and alignment. Further, char must have the same numeric range as either signed char or unsigned char, but the choice of which depends on the platform. Integer literals can be made unsigned with the U suffix.
Compilers often issue a warning when comparisons are made between signed and unsigned numbers or when one is cast to the other. These are potentially dangerous operations as the ranges of the signed and unsigned types are different.
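A minimal C sketch of this pitfall (the variable names are illustrative):

```c
#include <stdio.h>

int main(void) {
    int balance = -1;           /* signed */
    unsigned int limit = 1U;    /* unsigned; note the U suffix */

    /* The usual arithmetic conversions turn -1 into a very large
       unsigned value (UINT_MAX), so this comparison is true. */
    if (balance > limit)
        printf("-1 > 1 when compared as unsigned!\n");

    printf("-1 converted to unsigned: %u\n", (unsigned int)balance);
    return 0;
}
```

Compilers such as GCC and Clang flag the comparison above when the relevant warnings are enabled (e.g. -Wsign-compare, included in -Wextra).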
See also
Sign bit
Signed number representations
Sign (mathematics)
Binary Angular Measurement System, an example of semantics where signedness does not matter
External links
Computer arithmetic
Data types
Sign (mathematics) | Signedness | [
"Mathematics"
] | 454 | [
"Sign (mathematics)",
"Mathematical objects",
"Computer arithmetic",
"Arithmetic",
"Numbers"
] |
1,609,267 | https://en.wikipedia.org/wiki/Eastmain%20River | The Eastmain River, formerly written East Main, is a river in west central Quebec. It rises in central Quebec and flows west to James Bay, draining an area of . The First Nations Cree village of Eastmain is located beside the mouth.
Name
Eastmain is a compounding of the river's former name East Main, which was taken from the former Hudson's Bay Company outpost at its mouth. This post controlled company trading operations in the East Main District on the eastern side of James Bay.
Geography
Since the late 1980s, most of the waters of the Eastmain River have been diverted and flow northwards through the Opinaca Reservoir, with a surface area of about , and into the Robert-Bourassa Reservoir of Hydro-Québec's La Grande Complex. The remainder of the Eastmain River contains only about 10 percent of the volume of its former flow, and is now subject to freeze-up in winter (see photo). These changes have affected the Cree and Inuit peoples who live along the Eastmain River and James Bay coast, making it more difficult for them to travel in winter and reducing their access to fish in the river.
In 2005, a further hydroelectric project on the upper Eastmain River was under construction. The project was part of the original hydroelectric project provided for by the James Bay and Northern Quebec Agreement of 1975. The Eastmain Reservoir will eventually have a surface area of about , and the Eastmain-1 power plant will generate a maximum of 900 MW.
History
The mouth of the Eastmain was a centre of the Hudson's Bay Company fur trade. Charles Bayly reached it from Rupert House in the 1670s. After Rupert House was destroyed in 1686, the area was visited by a ship from York Factory. In 1723–1724, Joseph Myatt of the Hudson's Bay Company built a post.
See also
Centrale Eastmain-1
James Bay Project
Jamésie
List of rivers of Quebec
List of longest rivers of Canada
References
Rivers of Nord-du-Québec
James Bay Project
Tributaries of James Bay | Eastmain River | [
"Engineering"
] | 412 | [
"James Bay Project",
"Macro-engineering"
] |
1,609,504 | https://en.wikipedia.org/wiki/Vi%C3%A8te%27s%20formula | In mathematics, Viète's formula is the following infinite product of nested radicals representing twice the reciprocal of the mathematical constant :
It can also be represented as
The formula is named after François Viète, who published it in 1593. As the first formula of European mathematics to represent an infinite process, it can be given a rigorous meaning as a limit expression and marks the beginning of mathematical analysis. It has linear convergence and can be used for calculations of π, but other methods before and since have led to greater accuracy. It has also been used in calculations of the behavior of systems of springs and masses and as a motivating example for the concept of statistical independence.
The formula can be derived as a telescoping product of either the areas or perimeters of nested polygons converging to a circle. Alternatively, repeated use of the half-angle formula from trigonometry leads to a generalized formula, discovered by Leonhard Euler, that has Viète's formula as a special case. Many similar formulas involving nested roots or infinite products are now known.
Significance
François Viète (1540–1603) was a French lawyer, privy councillor to two French kings, and amateur mathematician. He published this formula in 1593 in his work Variorum de rebus mathematicis responsorum, liber VIII. At this time, methods for approximating π to (in principle) arbitrary accuracy had long been known. Viète's own method can be interpreted as a variation of an idea of Archimedes of approximating the circumference of a circle by the perimeter of a many-sided polygon, used by Archimedes to find the approximation $3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}$.
By publishing his method as a mathematical formula, Viète formulated the first instance of an infinite product known in mathematics, and the first example of an explicit formula for the exact value of π. As the first representation in European mathematics of a number as the result of an infinite process rather than of a finite calculation, Eli Maor highlights Viète's formula as marking the beginning of mathematical analysis and Jonathan Borwein calls its appearance "the dawn of modern mathematics".
Using his formula, Viète calculated π to an accuracy of nine decimal digits. However, this was not the most accurate approximation to π known at the time, as the Persian mathematician Jamshīd al-Kāshī had calculated 2π to an accuracy of nine sexagesimal digits and 16 decimal digits in 1424. Not long after Viète published his formula, Ludolph van Ceulen used a method closely related to Viète's to calculate 35 digits of π, which were published only after van Ceulen's death in 1610.
Beyond its mathematical and historical significance, Viète's formula can be used to explain the different speeds of waves of different frequencies in an infinite chain of springs and masses, and the appearance of π in the limiting behavior of these speeds. Additionally, a derivation of this formula as a product of integrals involving the Rademacher system, equal to the integral of products of the same functions, provides a motivating example for the concept of statistical independence.
Interpretation and convergence
Viète's formula may be rewritten and understood as a limit expression

$$\lim_{n\to\infty} \prod_{i=1}^{n} \frac{a_i}{2} = \frac{2}{\pi}$$

where $a_1 = \sqrt{2}$ and $a_n = \sqrt{2 + a_{n-1}}$ for $n > 1$.

For each choice of $n$, the expression in the limit is a finite product, and as $n$ gets arbitrarily large, these finite products have values that approach the value of Viète's formula arbitrarily closely. Viète did his work long before the concepts of limits and rigorous proofs of convergence were developed in mathematics; the first proof that this limit exists was not given until the work of Ferdinand Rudio in 1891.
The rate of convergence of a limit governs the number of terms of the expression needed to achieve a given number of digits of accuracy. In Viète's formula, the numbers of terms and digits are proportional to each other: the product of the first $n$ terms in the limit gives an expression for π that is accurate to approximately $0.6n$ digits. This convergence rate compares very favorably with the Wallis product, a later infinite product formula for π. Although Viète himself used his formula to calculate π only with nine-digit accuracy, an accelerated version of his formula has been used to calculate π to hundreds of thousands of digits.
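A short C sketch (illustrative) that evaluates the partial products of the limit above and shows the roughly 0.6 digits of accuracy gained per term:

```c
#include <stdio.h>
#include <math.h>

/* Partial products of Viete's formula:
   a_1 = sqrt(2), a_n = sqrt(2 + a_{n-1}),
   and pi ~ 2 / ((a_1/2)(a_2/2)...(a_n/2)). */
int main(void) {
    double a = 0.0, product = 1.0;
    for (int n = 1; n <= 30; n++) {
        a = sqrt(2.0 + a);      /* next nested radical */
        product *= a / 2.0;
        if (n % 10 == 0)
            printf("n = %2d: pi ~ %.15f\n", n, 2.0 / product);
    }
    printf("reference: %.15f\n", 4.0 * atan(1.0));
    return 0;
}
```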
Related formulas
Viète's formula may be obtained as a special case of a formula for the sinc function that has often been attributed to Leonhard Euler, more than a century later:

$$\frac{\sin x}{x} = \prod_{n=1}^{\infty} \cos\frac{x}{2^{n}}$$

Substituting $x = \frac{\pi}{2}$ in this formula yields

$$\frac{2}{\pi} = \prod_{n=1}^{\infty} \cos\frac{\pi}{2^{n+1}}.$$

Then, expressing each term of the product on the right as a function of earlier terms using the half-angle formula

$$\cos\frac{x}{2} = \sqrt{\frac{1 + \cos x}{2}}$$

gives Viète's formula.
It is also possible to derive from Viète's formula a related formula for π that still involves nested square roots of two, but uses only one multiplication:

$$\pi = \lim_{k\to\infty} 2^{k} \sqrt{2 - \underbrace{\sqrt{2 + \sqrt{2 + \cdots + \sqrt{2}}}}_{k-1 \text{ square roots}}}$$

which can be rewritten compactly, in terms of the sequence $a_n$ defined above, as

$$\pi = \lim_{k\to\infty} 2^{k} \sqrt{2 - a_{k-1}}.$$
Many formulae for and other constants such as the golden ratio are now known, similar to Viète's in their use of either nested radicals or infinite products of trigonometric functions.
Derivation
Viète obtained his formula by comparing the areas of regular polygons with $2^n$ and $2^{n+1}$ sides inscribed in a circle. The first term in the product, $\sqrt{2}/2$, is the ratio of areas of a square and an octagon, the second term is the ratio of areas of an octagon and a hexadecagon, etc. Thus, the product telescopes to give the ratio of areas of a square (the initial polygon in the sequence) to a circle (the limiting case of a $2^n$-gon). Alternatively, the terms in the product may be instead interpreted as ratios of perimeters of the same sequence of polygons, starting with the ratio of perimeters of a digon (the diameter of the circle, counted twice) and a square, the ratio of perimeters of a square and an octagon, etc.
Another derivation is possible based on trigonometric identities and Euler's formula.
Repeatedly applying the double-angle formula

$$\sin x = 2 \sin\frac{x}{2} \cos\frac{x}{2}$$

leads to a proof by mathematical induction that, for all positive integers $n$,

$$\sin x = 2^n \sin\frac{x}{2^n} \prod_{i=1}^{n} \cos\frac{x}{2^i}.$$

The term $2^n \sin\frac{x}{2^n}$ goes to $x$ in the limit as $n$ goes to infinity, from which Euler's formula follows. Viète's formula may be obtained from this formula by the substitution $x = \frac{\pi}{2}$.
See also
Morrie's law, a related finite product of cosines
List of trigonometric identities
References
External links
Viète's Variorum de rebus mathematicis responsorum, liber VIII (1593) on Google Books. The formula is on the second half of p. 30.
Articles containing proofs
Infinite products
Pi algorithms | Viète's formula | [
"Mathematics"
] | 1,341 | [
"Mathematical analysis",
"Pi algorithms",
"Infinite products",
"Articles containing proofs",
"Pi"
] |
1,609,767 | https://en.wikipedia.org/wiki/Laser-induced%20breakdown%20spectroscopy | Laser-induced breakdown spectroscopy (LIBS) is a type of atomic emission spectroscopy which uses a highly energetic laser pulse as the excitation source. The laser is focused to form a plasma, which atomizes and excites samples. The formation of the plasma only begins when the focused laser achieves a certain threshold for optical breakdown, which generally depends on the environment and the target material.
2000s developments
From 2000 to 2010, the U.S. Army Research Laboratory (ARL) researched potential extensions to LIBS technology, which focused on hazardous material detection. Applications investigated at ARL included the standoff detection of explosive residues and other hazardous materials, plastic landmine discrimination, and material characterization of various metal alloys and polymers. Results presented by ARL suggest that LIBS may be able to discriminate between energetic and non-energetic materials.
Research
Broadband high-resolution spectrometers were developed in 2000 and commercialized in 2003. Designed for material analysis, the spectrometer allowed the LIBS system to be sensitive to chemical elements in low concentration.
ARL LIBS applications studied from 2000 to 2010 included:
Tested for detection of Halon alternative agents
Tested a field-portable LIBS system for the detection of lead in soil and paint
Studied the spectral emission of aluminum and aluminum oxides from bulk aluminum in different bath gases
Performed kinetic modeling of LIBS plumes
Demonstrated the detection and discrimination of geological materials, plastic landmines, explosives, and chemical and biological warfare agent surrogates
ARL LIBS prototypes studied during this period included:
Laboratory LIBS setup
Commercial LIBS system
Man-portable LIBS device
Standoff LIBS system developed for 100+ m detection and discrimination of explosive residues.
2010s developments
LIBS is one of several analytical techniques that can be deployed in the field, as opposed to pure laboratory techniques such as spark OES. Recent research on LIBS focuses on compact and (man-)portable systems. Some industrial applications of LIBS include the detection of material mix-ups, analysis of inclusions in steel, analysis of slags in secondary metallurgy, analysis of combustion processes, and high-speed identification of scrap pieces for material-specific recycling tasks. Armed with data analysis techniques, this technique is being extended to pharmaceutical samples.
LIBS using short laser pulses
Following multiphoton or tunnel ionization, the electron is accelerated by inverse bremsstrahlung and can collide with nearby molecules, generating new electrons through collisions. If the pulse duration is long, the newly ionized electrons can be accelerated further and eventually avalanche or cascade ionization follows. Once the density of the electrons reaches a critical value, breakdown occurs and a high-density plasma is created which has no memory of the laser pulse. The criterion for the shortness of a pulse in dense media is therefore as follows: a pulse interacting with dense matter is considered to be short if the threshold for avalanche ionization is not reached during the interaction. At first glance this definition may appear to be too limiting. Fortunately, due to the delicately balanced behavior of pulses in dense media, the threshold cannot be reached easily. The phenomenon responsible for this balance is intensity clamping through the onset of the filamentation process during the propagation of strong laser pulses in dense media.
A potentially important development of LIBS involves the use of a short laser pulse as a spectroscopic source. In this method, a plasma column is created as a result of focusing ultrafast laser pulses in a gas. The self-luminous plasma is far superior in terms of its low continuum level and its smaller line broadening. This is attributed to the lower density of the plasma in the case of short laser pulses, due to defocusing effects which limit the intensity of the pulse in the interaction region and thus prevent further multiphoton/tunnel ionization of the gas.
Line intensity
For an optically thin plasma composed of a single, neutral atomic species in local thermal equilibrium (LTE), the density of photons emitted by a transition from level i to level j is

$$\varepsilon_{ij} = \frac{n \, A_{ij} \, g_i \, e^{-E_i/k_B T}}{4\pi \, U(T)} \, P(\lambda)$$

where:

$\varepsilon_{ij}$ is the emission rate density of photons (in m⁻³ sr⁻¹ s⁻¹)
$n$ is the number density of neutral atoms in the plasma (in m⁻³)
$A_{ij}$ is the transition probability between level i and level j (in s⁻¹)
$g_i$ is the degeneracy of the upper level i (2J+1)
$U(T)$ is the partition function (unitless)
$E_i$ is the energy of the upper level i (in eV)
$k_B$ is the Boltzmann constant (in eV/K)
$T$ is the temperature (in K)
$P(\lambda)$ is the line profile, normalized such that $\int P(\lambda)\,d\lambda = 1$
$\lambda$ is the wavelength (in nm)
The partition function $U(T)$ is the sum of statistical weights over every level of the atomic species, so that the Boltzmann factor above gives the statistical occupation fraction of each level:

$$U(T) = \sum_i g_i \, e^{-E_i/k_B T}.$$
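A minimal C sketch of this relation, computing the relative LTE strength of two hypothetical lines of one species (the transition data below are placeholder values, not real atomic constants):

```c
#include <stdio.h>
#include <math.h>

/* Relative LTE line strength, up to the common factor
   n / (4*pi*U(T)) and the line profile P(lambda):
   epsilon ~ A * g * exp(-E / (kB * T)).              */
static double line_strength(double A, double g, double E_eV, double T_K) {
    const double kB = 8.617333262e-5;   /* Boltzmann constant, eV/K */
    return A * g * exp(-E_eV / (kB * T_K));
}

int main(void) {
    double T = 10000.0;   /* assumed plasma temperature, K */
    /* Placeholder data: A (1/s), g = 2J+1, upper-level energy E_i (eV). */
    double ratio = line_strength(1.0e8, 4.0, 3.0, T) /
                   line_strength(5.0e7, 2.0, 4.5, T);
    printf("intensity ratio, line 1 / line 2 = %.3f\n", ratio);
    return 0;
}
```

Ratios of this kind, measured for lines of known atomic data, are what allow the plasma temperature to be inferred in practice (the Boltzmann plot method).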
LIBS for food analysis
Recently, LIBS has been investigated as a fast, micro-destructive food analysis tool. It is considered a potential analytical tool for qualitative and quantitative chemical analysis, making it suitable as a PAT (Process Analytical Technology) or portable tool. Milk, bakery products, tea, vegetable oils, water, cereals, flour, potatoes, palm date and different types of meat have been analyzed using LIBS. Few studies have shown its potential as an adulteration detection tool for certain foods. LIBS has also been evaluated as a promising elemental imaging technique in meat.
In 2019, researchers of the University of York and of the Liverpool John Moores University employed LIBS for studying 12 European oysters (Ostrea edulis, Linnaeus, 1758) from the Late Mesolithic shell midden at Conors Island (Republic of Ireland). The results highlighted the applicability of LIBS to determine prehistoric seasonality practices as well as biological age and growth at an improved rate and reduced cost than was previously achievable.
See also
Atomic spectroscopy
Laser ablation
Laser-induced fluorescence
List of surface analysis methods
Photoacoustic spectroscopy
Raman spectroscopy
Spectroscopy
References
Further reading
External links
NIST LIBS Database
Scientific techniques
Spectroscopy
Emission spectroscopy | Laser-induced breakdown spectroscopy | [
"Physics",
"Chemistry"
] | 1,215 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Emission spectroscopy",
"Spectroscopy"
] |
1,609,861 | https://en.wikipedia.org/wiki/Graph%20labeling | In the mathematical discipline of graph theory, a graph labeling is the assignment of labels, traditionally represented by integers, to edges and/or vertices of a graph.
Formally, given a graph G = (V, E), a vertex labeling is a function of V to a set of labels; a graph with such a function defined is called a vertex-labeled graph. Likewise, an edge labeling is a function of E to a set of labels. In this case, the graph is called an edge-labeled graph.
When the edge labels are members of an ordered set (e.g., the real numbers), it may be called a weighted graph.
When used without qualification, the term labeled graph generally refers to a vertex-labeled graph with all labels distinct. Such a graph may equivalently be labeled by the consecutive integers 1, …, |V|, where |V| is the number of vertices in the graph. For many applications, the edges or vertices are given labels that are meaningful in the associated domain. For example, the edges may be assigned weights representing the "cost" of traversing between the incident vertices.
In the above definition a graph is understood to be a finite undirected simple graph. However, the notion of labeling may be applied to all extensions and generalizations of graphs. For example, in automata theory and formal language theory it is convenient to consider labeled multigraphs, i.e., a pair of vertices may be connected by several labeled edges.
History
Most graph labelings trace their origins to labelings presented by Alexander Rosa in his 1967 paper. Rosa identified three types of labelings, which he called α-, β-, and ρ-labelings. β-labelings were later renamed "graceful" by Solomon Golomb, and the name has been popular since.
Special cases
Graceful labeling
A graph is known as graceful if its vertices are labeled from 0 to |E|, the size of the graph, and if this vertex labeling induces an edge labeling from 1 to |E|. For any edge e, the label of e is the positive difference between the labels of the two vertices incident with e. In other words, if e is incident with vertices labeled i and j, then e is labeled |i − j|. Thus, a graph G = (V, E) is graceful if and only if there exists an injection from V to {0, 1, …, |E|} that induces a bijection from E to {1, 2, …, |E|}.
In his original paper, Rosa proved that all Eulerian graphs with size congruent to 1 or 2 (mod 4) are not graceful. Whether or not certain families of graphs are graceful is an area of graph theory under extensive study. Arguably the largest unproven conjecture in graph labeling is the Ringel–Kotzig conjecture, which hypothesizes that all trees are graceful. This has been proven for all paths, caterpillars, and many other infinite families of trees. Anton Kotzig himself called the effort to prove the conjecture a "disease".
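A small C sketch that checks the graceful condition for a given labeling (the graph and labels here are an illustrative example, a path on four vertices):

```c
#include <stdio.h>
#include <stdlib.h>

/* Given vertex labels (assumed distinct, drawn from 0..m) and m edges,
   check that the induced edge labels |label[u] - label[v]|
   are exactly the values 1..m, each used once.                  */
static int is_graceful(const int *label, const int (*edge)[2], int m) {
    int *seen = calloc(m + 1, sizeof *seen);
    int ok = 1;
    for (int e = 0; e < m && ok; e++) {
        int d = abs(label[edge[e][0]] - label[edge[e][1]]);
        if (d < 1 || d > m || seen[d]++)
            ok = 0;   /* out of range or duplicate edge label */
    }
    free(seen);
    return ok;
}

int main(void) {
    /* Path 0-1-2-3 labeled 0, 3, 1, 2: induced edge labels 3, 2, 1. */
    int label[4] = {0, 3, 1, 2};
    int edge[3][2] = {{0, 1}, {1, 2}, {2, 3}};
    printf("graceful? %s\n", is_graceful(label, edge, 3) ? "yes" : "no");
    return 0;
}
```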
Edge-graceful labeling
An edge-graceful labeling on a simple graph (no loops or multiple edges) on p vertices and q edges is a labeling of the edges by distinct integers in {1, …, q} such that the labeling induced on the vertices, by labeling each vertex with the sum of the labels of its incident edges taken modulo p, assigns all values from 0 to p − 1 to the vertices. A graph is said to be "edge-graceful" if it admits an edge-graceful labeling.
Edge-graceful labelings were first introduced by Sheng-Ping Lo in 1985.
A necessary condition for a graph to be edge-graceful is "Lo's condition":

$$q(q+1) \equiv \frac{p(p-1)}{2} \pmod{p}.$$
Harmonious labeling
A "harmonious labeling" on a graph is an injection from the vertices of to the group of integers modulo , where is the number of edges of , that induces a bijection between the edges of and the numbers modulo by taking the edge label for an edge to be the sum of the labels of the two vertices . A "harmonious graph" is one that has a harmonious labeling. Odd cycles are harmonious, as are Petersen graphs. It is conjectured that trees are all harmonious if one vertex label is allowed to be reused. The seven-page book graph provides an example of a graph that is not harmonious.
Graph coloring
A graph coloring is a subclass of graph labelings. Vertex colorings assign different labels to adjacent vertices, while edge colorings assign different labels to adjacent edges.
Lucky labeling
A lucky labeling of a graph G is an assignment of positive integers to the vertices of G such that if S(v) denotes the sum of the labels on the neighbors of v, then S is a vertex coloring of G. The "lucky number" of G is the least k such that G has a lucky labeling with the integers {1, …, k}.
References
Extensions and generalizations of graphs | Graph labeling | [
"Mathematics"
] | 918 | [
"Mathematical relations",
"Extensions and generalizations of graphs",
"Graph theory"
] |
1,609,912 | https://en.wikipedia.org/wiki/5040%20%28number%29 | 5040 (five thousand [and] forty) is the natural number following 5039 and preceding 5041.
It is a factorial (7!), the 8th superior highly composite number, the 19th highly composite number, an abundant number, the 8th colossally abundant number and the number of permutations of 4 items out of 10 choices (10 × 9 × 8 × 7 = 5040). It is also one less than a square, making (7, 71) a Brown number pair.
Philosophy
Plato mentions in his Laws that 5040 is a convenient number to use for dividing many things (including both the citizens and the land of a city-state or polis) into lesser parts, making it an ideal number for the number of citizens (heads of families) making up a polis. He remarks that this number can be divided by all the (natural) numbers from 1 to 12 with the single exception of 11 (however, it is not the smallest number to have this property; 2520 is). He rectifies this "defect" by suggesting that two families could be subtracted from the citizen body to produce the number 5038, which is divisible by 11. Plato also took notice of the fact that 5040 can be divided by 12 twice over. Indeed, Plato's repeated insistence on the use of 5040 for various state purposes is so evident that Benjamin Jowett, in the introduction to his translation of Laws, wrote, "Plato, writing under Pythagorean influences, seems really to have supposed that the well-being of the city depended almost as much on the number 5040 as on justice and moderation."
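A quick C check of Plato's divisibility observation (illustrative):

```c
#include <stdio.h>

int main(void) {
    /* 5040 is divisible by every number from 1 to 12 except 11. */
    for (int d = 1; d <= 12; d++)
        printf("5040 %% %2d = %4d%s\n", d, 5040 % d,
               5040 % d == 0 ? "  (divides)" : "");
    return 0;
}
```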
Jean-Pierre Kahane has suggested that Plato's use of the number 5040 marks the first appearance of the concept of a highly composite number, a number with more divisors than any smaller number.
Number theoretical
If $\sigma(n)$ is the sum-of-divisors function and $\gamma$ is the Euler–Mascheroni constant, then 5040 is the largest of 27 known numbers for which this inequality holds:

$$\sigma(n) \ge e^{\gamma} n \log \log n.$$

This is somewhat unusual, since in the limit we have:

$$\limsup_{n\to\infty} \frac{\sigma(n)}{n \log \log n} = e^{\gamma}.$$
Guy Robin showed in 1984 that the inequality fails for all larger numbers if and only if the Riemann hypothesis is true.
Interesting notes
5040 has exactly 60 divisors, counting itself and 1.
5040 is the largest factorial (7! = 5040) that is a highly composite number. All factorials smaller than 8! = 40320 are highly composite.
5040 is the sum of 42 consecutive primes (23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 + 59 + 61 + 67 + 71 + 73 + 79 + 83 + 89 + 97 + 101 + 103 + 107 + 109 + 113 + 127 + 131 + 137 + 139 + 149 + 151 + 157 + 163 + 167 + 173 + 179 + 181 + 191 + 193 + 197 + 199 + 211 + 223 + 227 + 229).
5040 is the least common multiple of the first 10 multiples of 2 (2, 4, 6, 8, 10, 12, 14, 16, 18 and 20).
References
External links
Mathworld article on Plato's numbers
Integers
Platonism | 5040 (number) | [
"Mathematics"
] | 659 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
1,609,956 | https://en.wikipedia.org/wiki/Vortex%20shedding | In fluid dynamics, vortex shedding is an oscillating flow that takes place when a fluid such as air or water flows past a bluff (as opposed to streamlined) body at certain velocities, depending on the size and shape of the body. In this flow, vortices are created at the back of the body and detach periodically from either side of the body forming a Kármán vortex street. The fluid flow past the object creates alternating low-pressure vortices on the downstream side of the object. The object will tend to move toward the low-pressure zone.
If the bluff structure is not mounted rigidly and the frequency of vortex shedding matches the resonance frequency of the structure, then the structure can begin to resonate, vibrating with harmonic oscillations driven by the energy of the flow. This vibration is the cause for overhead power line wires humming in the wind, and for the fluttering of automobile whip radio antennas at some speeds. Tall chimneys constructed of thin-walled steel tubes can be sufficiently flexible that, in air flow with a speed in the critical range, vortex shedding can drive the chimney into violent oscillations that can damage or destroy the chimney.
Vortex shedding was one of the causes proposed for the failure of the original Tacoma Narrows Bridge (Galloping Gertie) in 1940, but was rejected because the frequency of the vortex shedding did not match that of the bridge. The bridge actually failed by aeroelastic flutter.
A thrill ride, "VertiGo" at Cedar Point in Sandusky, Ohio suffered vortex shedding during the winter of 2001, causing one of the three towers to collapse. The ride was closed for the winter at the time. In northeastern Iran, the Hashemi-Nejad natural gas refinery's flare stacks suffered vortex shedding seven times from 1975 to 2003. Some simulation and analyses were done, which revealed that the main cause was the interaction of the pilot flame and flare stack. The problem was solved by removing the pilot.
Governing equation
The frequency at which vortex shedding takes place for a cylinder is related to the Strouhal number by the following equation:

$$\mathrm{St} = \frac{f d}{U}$$

where $\mathrm{St}$ is the dimensionless Strouhal number, $f$ is the vortex shedding frequency (Hz), $d$ is the diameter of the cylinder (m), and $U$ is the flow velocity (m/s).
The Strouhal number depends on the Reynolds number Re, but a value of 0.22 is commonly used. As the Strouhal number is dimensionless, any consistent set of units can be used for the variables. Over four orders of magnitude in Reynolds number, from 10² to 10⁵, the Strouhal number varies only between 0.18 and 0.22.
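A short C sketch of the relation rearranged to f = St·U/d, with assumed example values:

```c
#include <stdio.h>

/* Shedding frequency from the Strouhal relation St = f*d/U,
   rearranged to f = St * U / d.                             */
int main(void) {
    double St = 0.22;   /* Strouhal number (dimensionless) */
    double U  = 10.0;   /* assumed flow velocity, m/s      */
    double d  = 0.05;   /* assumed cylinder diameter, m    */
    printf("shedding frequency: %.1f Hz\n", St * U / d);   /* 44.0 Hz */
    return 0;
}
```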
Mitigation of vortex shedding effects
Fairings can be fitted to a structure to streamline the flow past the structure, such as on an aircraft wing.
Tall metal smokestacks or other tubular structures such as antenna masts or tethered cables can be fitted with an external corkscrew fin (a strake) to deliberately introduce turbulence, so the load is less variable and resonant load frequencies have negligible amplitudes. The effectiveness of helical strakes for reducing vortex induced vibration was discovered in 1957 by Christopher Scruton and D. E. J. Walshe at the National Physics Laboratory in Great Britain. They are therefore often described as Scruton strakes. For maximum effectiveness in suppression of vortices caused by air flow, each fin or strake should have a height of about 10 percent of the cylinder diameter. The pitch of each fin should be approximately 5 times the cylinder diameter.
A tuned mass damper can be used to mitigate vortex shedding in stacks and chimneys.
A Stockbridge damper is used to mitigate aeolian vibrations caused by vortex shedding on overhead power lines.
See also
Aeroelastic flutter - vibration-induced vortices - by way of contrast
Vortex
Vortex-induced vibration
Von Kármán vortex street
References
External links
Flow visualisation of the vortex shedding mechanism on circular cylinder using hydrogen bubbles illuminated by a laser sheet in a water channel. Courtesy of G.R.S. Assi.
Vortices
Fluid dynamics | Vortex shedding | [
"Chemistry",
"Mathematics",
"Engineering"
] | 854 | [
"Vortices",
"Chemical engineering",
"Piping",
"Fluid dynamics",
"Dynamical systems"
] |
1,610,231 | https://en.wikipedia.org/wiki/Energy%20density | In physics, energy density is the quotient between the amount of energy stored in a given system or contained in a given region of space and the volume of the system or region considered. Often only the useful or extractable energy is measured. It is sometimes confused with stored energy per unit mass, which is called specific energy or .
There are different types of energy stored, corresponding to a particular type of reaction. In order of the typical magnitude of the energy stored, examples of reactions are: nuclear, chemical (including electrochemical), electrical, pressure, material deformation, and electromagnetic fields. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles from the combustion of gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈ 15 kg of air). Burning local biomass fuels supplies household energy needs (cooking fires, oil lamps, etc.) worldwide. Electrochemical reactions are used by devices such as laptop computers and mobile phones to release energy from batteries.
Energy per unit volume has the same physical units as pressure, and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as $B^2/2\mu_0$ and behaves like a physical pressure. The energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached.
In cosmological and other contexts in general relativity, the energy densities considered relate to the elements of the stress–energy tensor and therefore do include the rest mass energy as well as energy densities associated with pressure.
Chemical energy
When discussing the chemical energy contained, there are different types which can be quantified depending on the intended purpose. One is the theoretical total amount of thermodynamic work that can be derived from a system, at a given temperature and pressure imposed by the surroundings, called exergy. Another is the theoretical amount of electrical energy that can be derived from reactants that are at room temperature and atmospheric pressure. This is given by the change in standard Gibbs free energy. But as a source of heat or for use in a heat engine, the relevant quantity is the change in standard enthalpy or the heat of combustion.
There are two kinds of heat of combustion:
The higher value (HHV), or gross heat of combustion, includes all the heat released as the products cool to room temperature and whatever water vapor is present condenses.
The lower value (LHV), or net heat of combustion, does not include the heat which could be released by condensing water vapor, and may not include the heat released on cooling all the way down to room temperature.
A convenient table of HHV and LHV of some fuels can be found in the references.
In energy storage and fuels
For energy storage, the energy density relates the stored energy to the volume of the storage equipment, e.g. the fuel tank. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. The energy of a fuel per unit mass is called its specific energy.
The adjacent figure shows the gravimetric and volumetric energy density of some fuels and storage technologies (modified from the Gasoline article). Some values may not be precise because of isomers or other irregularities. The heating values of the fuel describe their specific energies more comprehensively.
The density values for chemical fuels do not include the weight of the oxygen required for combustion. The atomic weights of carbon and oxygen are similar, while hydrogen is much lighter. Figures are presented in this way for those fuels where in practice air would only be drawn in locally to the burner. This explains the apparently lower energy density of materials that contain their own oxidizer (such as gunpowder and TNT), where the mass of the oxidizer in effect adds weight, and absorbs some of the energy of combustion to dissociate and liberate oxygen to continue the reaction. This also explains some apparent anomalies, such as the energy density of a sandwich appearing to be higher than that of a stick of dynamite.
Given the high energy density of gasoline, the exploration of alternative media to store the energy of powering a car, such as hydrogen or battery, is strongly limited by the energy density of the alternative medium. The same mass of lithium-ion storage, for example, would result in a car with only 2% the range of its gasoline counterpart. If sacrificing the range is undesirable, much more storage volume is necessary. Alternative options are discussed for energy storage to increase energy density and decrease charging time, such as supercapacitors.
No single energy storage method boasts the best in specific power, specific energy, and energy density. Peukert's law describes how the amount of useful energy that can be obtained (for a lead-acid cell) depends on how quickly it is pulled out.
Efficiency
In general an engine will generate less kinetic energy due to inefficiencies and thermodynamic considerations—hence the specific fuel consumption of an engine will always be greater than its rate of production of the kinetic energy of motion.
Energy density differs from energy conversion efficiency (net output per input) or embodied energy (the energy output costs to provide, as harvesting, refining, distributing, and dealing with pollution all use energy). Large scale, intensive energy use impacts and is impacted by climate, waste storage, and environmental consequences.
Nuclear energy
The greatest energy source by far is matter itself, according to the mass–energy equivalence. This energy is described by $E = mc^2$, where c is the speed of light. In terms of density, $m = \rho V$, where ρ is the volumetric mass density and V is the volume occupied by the mass. This energy can be released by the processes of nuclear fission (~ 0.1%), nuclear fusion (~ 1%), or the annihilation of some or all of the matter in the volume V by matter–antimatter collisions (100%).
The most effective ways of accessing this energy, aside from antimatter, are fusion and fission. Fusion is the process by which the sun produces energy which will be available for billions of years (in the form of sunlight and heat). However as of 2024, sustained fusion power production continues to be elusive. Power from fission in nuclear power plants (using uranium and thorium) will be available for at least many decades or even centuries because of the plentiful supply of the elements on earth, though the full potential of this source can only be realized through breeder reactors, which are, apart from the BN-600 reactor, not yet used commercially.
Fission reactors
Nuclear fuels typically have volumetric energy densities at least tens of thousands of times higher than chemical fuels. A 1 inch tall uranium fuel pellet is equivalent to about 1 ton of coal, 120 gallons of crude oil, or 17,000 cubic feet of natural gas. In light-water reactors, 1 kg of natural uranium – following a corresponding enrichment and used for power generation– is equivalent to the energy content of nearly 10,000 kg of mineral oil or 14,000 kg of coal. Comparatively, coal, gas, and petroleum are the current primary energy sources in the U.S. but have a much lower energy density.
The density of thermal energy contained in the core of a light-water reactor (pressurized water reactor (PWR) or boiling water reactor (BWR)) of typically ( electrical corresponding to ≈ thermal) is in the range of 10 to 100 MW of thermal energy per cubic meter of cooling water depending on the location considered in the system (the core itself (≈ ), the reactor pressure vessel (≈ ), or the whole primary circuit (≈ )). This represents a considerable density of energy that requires a continuous water flow at high velocity at all times in order to remove heat from the core, even after an emergency shutdown of the reactor.
The incapacity to cool the cores of three BWRs at Fukushima after the 2011 tsunami and the resulting loss of external electrical power and cold source caused the meltdown of the three cores in only a few hours, even though the three reactors were correctly shut down just after the Tōhoku earthquake. This extremely high power density distinguishes nuclear power plants (NPP's) from any thermal power plants (burning coal, fuel or gas) or any chemical plants and explains the large redundancy required to permanently control the neutron reactivity and to remove the residual heat from the core of NPP's.
Antimatter–matter annihilation
Because antimatter–matter interactions result in complete conversion of the rest mass to radiant energy, the energy density of this reaction depends on the density of the matter and antimatter used. A neutron star would approximate the most dense system capable of matter-antimatter annihilation. A black hole, although denser than a neutron star, does not have an equivalent anti-particle form, but would offer the same 100% conversion rate of mass to energy in the form of Hawking radiation. Even in the case of relatively small black holes (smaller than astronomical objects) the power output would be tremendous.
Electric and magnetic fields
Electric and magnetic fields can store energy, and its density relates to the strength of the fields within a given volume. This (volumetric) energy density is given by

$$u = \frac{\varepsilon}{2} |\mathbf{E}|^2 + \frac{1}{2\mu} |\mathbf{B}|^2$$

where $\mathbf{E}$ is the electric field, $\mathbf{B}$ is the magnetic field, and $\varepsilon$ and $\mu$ are the permittivity and permeability of the surroundings respectively. The SI unit is the joule per cubic metre.
In ideal (linear and nondispersive) substances, the energy density is

$$u = \frac{1}{2} \left( \mathbf{E} \cdot \mathbf{D} + \mathbf{H} \cdot \mathbf{B} \right)$$

where $\mathbf{D}$ is the electric displacement field and $\mathbf{H}$ is the magnetizing field. In the case of absence of magnetic fields, by exploiting Fröhlich's relationships it is also possible to extend these equations to anisotropic and nonlinear dielectrics, as well as to calculate the correlated Helmholtz free energy and entropy densities.
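As a worked example of the magnetic term, the energy density of a 1 T field in vacuum (where $\mu = \mu_0$) is

$$u = \frac{B^2}{2\mu_0} = \frac{(1\ \mathrm{T})^2}{2 \times 4\pi \times 10^{-7}\ \mathrm{H/m}} \approx 4.0 \times 10^{5}\ \mathrm{J/m^3},$$

equivalent to a pressure of about 4 atm, consistent with the magnetic-pressure analogy noted below.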
In the context of magnetohydrodynamics, the physics of conductive fluids, the magnetic energy density behaves like an additional pressure that adds to the gas pressure of a plasma.
Pulsed sources
When a pulsed laser impacts a surface, the radiant exposure, i.e. the energy deposited per unit of surface, may also be called energy density or fluence.
Table of material energy densities
The following unit conversions may be helpful when considering the data in the tables: 3.6 MJ = 1 kW⋅h ≈ 1.34 hp⋅h. Since 1 J = 10⁻⁶ MJ and 1 m³ = 10³ L, divide joule/m³ by 10⁹ to get MJ/L = GJ/m³. Divide MJ/L by 3.6 to get kW⋅h/L.
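For example, for a fuel with an energy density of roughly 34 MJ/L (approximately that of gasoline; an illustrative round value):

$$34\ \mathrm{MJ/L} \times \frac{1\ \mathrm{kW{\cdot}h}}{3.6\ \mathrm{MJ}} \approx 9.4\ \mathrm{kW{\cdot}h/L}.$$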
Chemical reactions (oxidation)
Unless otherwise stated, the values in the following table are lower heating values for perfect combustion, not counting oxidizer mass or volume. When used to produce electricity in a fuel cell or to do work, it is the Gibbs free energy of reaction (ΔG) that sets the theoretical upper limit. If the water produced is vapor, this is generally greater than the lower heat of combustion, whereas if the water produced is liquid, it is generally less than the higher heat of combustion. But in the most relevant case of hydrogen, ΔG is 113 MJ/kg if water vapor is produced, and 118 MJ/kg if liquid water is produced, both being less than the lower heat of combustion (120 MJ/kg).
Electrochemical reactions (batteries)
Common battery formats
Nuclear reactions
In material deformation
The mechanical energy storage capacity, or resilience, of a Hookean material when it is deformed to the point of failure can be computed by multiplying the tensile strength by the maximum elongation and dividing by two. The maximum elongation of a Hookean material can be computed by dividing its ultimate tensile strength by its stiffness. The following table lists these values computed using the Young's modulus as the measure of stiffness:
Other release mechanisms
See also
Energy content of biofuel
Energy density Extended Reference Table
Figure of merit
Food energy
Heat of combustion
High-energy-density matter
Power density and specifically Power-to-weight ratio
Rechargeable battery
Solid-state battery
Specific energy
Specific impulse
Orders of magnitude (energy)
References
Further reading
The Inflationary Universe: The Quest for a New Theory of Cosmic Origins by Alan H. Guth (1998)
Cosmological Inflation and Large-Scale Structure by Andrew R. Liddle, David H. Lyth (2000)
Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
"Aircraft Fuels". Energy, Technology and the Environment Ed. Attilio Bisio. Vol. 1. New York: John Wiley and Sons, Inc., 1995. 257–259
"Fuels of the Future for Cars and Trucks" – Dr. James J. Eberhardt – Energy Efficiency and Renewable Energy, U.S. Department of Energy – 2002 Diesel Engine Emissions Reduction (DEER) Workshop San Diego, California - August 25–29, 2002
Energy
Density
Volume-specific quantities
Physical cosmological concepts
Physical quantities | Energy density | [
"Physics",
"Mathematics"
] | 2,766 | [
"Physical cosmological concepts",
"Physical phenomena",
"Concepts in astrophysics",
"Physical quantities",
"Quantity",
"Intensive quantities",
"Mass",
"Volume-specific quantities",
"Energy (physics)",
"Energy",
"Density",
"Wikipedia categories named after physical quantities",
"Physical prop... |
1,610,282 | https://en.wikipedia.org/wiki/Amorphea | Amorphea is a taxonomic supergroup that includes the basal Amoebozoa and Obazoa. That latter contains the Opisthokonta, which includes the Fungi, Animals and the Choanomonada, or Choanoflagellates. The taxonomic affinities of the members of this clade were originally described and proposed by Thomas Cavalier-Smith in 2002.
The International Society of Protistologists, the recognised body for taxonomy of protozoa, recommended in 2012 that the term Unikont be changed to Amorphea because the name "Unikont" is based on a hypothesized synapomorphy that the ISOP authors and other scientists later rejected.
It includes amoebozoa, opisthokonts, and Apusomonada.
Taxonomic revisions within this group
Thomas Cavalier-Smith proposed two new phyla: Sulcozoa, which consists of the subphyla Apusozoa (Apusomonadida and Breviatea), and Varisulca, which includes the subphyla Diphyllatea, Discocelida, Mantamonadidae, Planomonadida and Rigifilida.
Further work by Cavalier-Smith showed that Sulcozoa is paraphyletic. Apusozoa also appears to be paraphyletic. Varisulca has been redefined to include planomonads, Mantamonas and Collodictyon. A new taxon has been created - Glissodiscea - for the planomonads and Mantamonas. Again, the validity of this revised taxonomy awaits confirmation.
Amoebozoa seems to be monophyletic with two major branches: Conosa and Lobosa. Conosa is divided into the aerobic infraphylum Semiconosia (Mycetozoa and Variosea) and secondarily anaerobic Archamoebae. Lobosa consists entirely of non-flagellated lobose amoebae and has been divided into two classes: Discosea, which have flattened cells, and Tubulinea, which has predominantly tube-shaped pseudopodia.
Clade
The group includes eukaryotic cells that, for the most part, have a single emergent flagellum, or are amoebae with no flagella. The unikonts include the opisthokonts (animals, fungi, and related forms) and the Amoebozoa. By contrast, other well-known eukaryotic groups, which more often have two emergent flagella (although there are many exceptions), are often referred to as bikonts. Bikonts include the Archaeplastida (plants and relatives), the SAR supergroup, Cryptista, Haptista, Telonemia, and Picozoa.
Characteristics
The unikonts have a triple-gene fusion that is lacking in the bikonts. The three genes that are fused together in the unikonts, but not in bacteria or bikonts, encode enzymes for the synthesis of the pyrimidine nucleotides: carbamoyl phosphate synthase, dihydroorotase, and aspartate carbamoyltransferase. This must have involved a double fusion, a rare pair of events, supporting the shared ancestry of Opisthokonta and Amoebozoa.
Cavalier-Smith originally proposed that unikonts ancestrally had a single flagellum and single basal body. This is unlikely, however, as flagellated opisthokonts, as well as some flagellated Amoebozoa, including Breviata, actually have two basal bodies, as in typical 'bikonts' (even though only one is flagellated in most unikonts). This paired arrangement can also be seen in the organization of centrioles in typical animal cells. In spite of the name of the group, the common ancestor of all 'unikonts' was probably a cell with two basal bodies.
References
External links
Tree of Life.org
Eukaryote unranked clades | Amorphea | [
"Biology"
] | 852 | [
"Eukaryote taxa",
"Eukaryotes",
"Amorphea"
] |
9,564,150 | https://en.wikipedia.org/wiki/Jones%27%20stain | Jones' stain, also Jones stain, is a methenamine silver–periodic acid–Schiff stain used in pathology. It is also referred to as methenamine PAS which is commonly abbreviated MPAS.
It stains for basement membrane and is widely used in the investigation of medical kidney diseases.
The Jones stain demonstrates the spiked glomerular basement membrane (GBM), caused by subepithelial deposits, that is seen in membranous nephropathy.
See also
Staining
References
Staining | Jones' stain | [
"Chemistry",
"Biology"
] | 96 | [
"Staining",
"Microbiology techniques",
"Cell imaging",
"Microscopy"
] |
9,564,157 | https://en.wikipedia.org/wiki/Ioxilan | Ioxilan is a diagnostic contrast agent. It is injected intravenously before taking X-ray images to increase arterial contrast in the final image. It was marketed in the US under the trade name Oxilan by Guerbet, L.L.C., but was discontinued in 2017.
Mechanism of action
Ioxilan is an iodinated contrast agent.
References
Radiocontrast agents
Iodobenzene derivatives
Acetanilides
Polyols | Ioxilan | [
"Chemistry"
] | 95 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
9,564,185 | https://en.wikipedia.org/wiki/Van%20Gieson%27s%20stain | Van Gieson's stain is a mixture of picric acid and acid fuchsin. It is the simplest method of differential staining of collagen and other connective tissue. It was introduced to histology by American neuropsychiatrist and pathologist Ira Van Gieson.
HvG stain generally refers to the combination of hematoxylin and Van Gieson's stain, but can possibly refer to a combination of hibiscus extract-iron solution and Van Gieson's stain.
Other dyes
Other dyes used in connection with Van Gieson staining include:
Alcian blue
Amido black 10B
Verhoeff's stain
References
Histology
Staining | Van Gieson's stain | [
"Chemistry",
"Biology"
] | 144 | [
"Staining",
"Biotechnology stubs",
"Biochemistry stubs",
"Histology",
"Microbiology techniques",
"Microscopy",
"Biochemistry",
"Cell imaging"
] |
9,564,261 | https://en.wikipedia.org/wiki/Titanium%20isopropoxide | Titanium isopropoxide, also commonly referred to as titanium tetraisopropoxide or TTIP, is a chemical compound with the formula . This alkoxide of titanium(IV) is used in organic synthesis and materials science. It is a diamagnetic tetrahedral molecule. Titanium isopropoxide is a component of the Sharpless epoxidation, a method for the synthesis of chiral epoxides.
The structures of the titanium alkoxides are often complex. Crystalline titanium methoxide is tetrameric with the molecular formula Ti4(OCH3)16. Alkoxides derived from bulkier alcohols such as isopropyl alcohol aggregate less. Titanium isopropoxide is mainly a monomer in nonpolar solvents.
Preparation
It is prepared by treating titanium tetrachloride with isopropanol in the presence of ammonia. Hydrogen chloride is formed as a coproduct:
TiCl4 + 4 (CH3)2CHOH → Ti{OCH(CH3)2}4 + 4 HCl
Properties
Titanium isopropoxide reacts with water to deposit titanium dioxide:
Ti{OCH(CH3)2}4 + 2 H2O → TiO2 + 4 (CH3)2CHOH
This reaction is employed in the sol-gel synthesis of TiO2-based materials in the form of powders or thin films. Typically water is added in excess to a solution of the alkoxide in an alcohol. The composition, crystallinity and morphology of the inorganic product are determined by the presence of additives (e.g. acetic acid), the amount of water (hydrolysis ratio), and reaction conditions.
The compound is also used as a catalyst in the preparation of certain cyclopropanes in the Kulinkovich reaction. Prochiral thioethers are oxidized enantioselectively using a catalyst derived from Ti(O-i-Pr)4.
Naming
Titanium(IV) isopropoxide is a widely used item of commerce and has acquired many names in addition to those listed in the table. A sampling of the names include:
titanium(IV) i-propoxide, isopropyl titanate, tetraisopropyl titanate, tetraisopropyl orthotitanate, titanium tetraisopropylate, orthotitanic acid tetraisopropyl ester, Isopropyl titanate(IV), titanic acid tetraisopropyl ester, isopropyltitanate, titanium(IV) isopropoxide, titanium tetraisopropoxide, iso-propyl titanate, titanium tetraisopropanolate, tetraisopropoxytitanium(IV), tetraisopropanolatotitanium, tetrakis(isopropoxy) titanium, tetrakis(isopropanolato) titanium, titanic acid isopropyl ester, titanic acid tetraisopropyl ester, titanium isopropoxide, titanium isopropylate, tetrakis(1-methylethoxy)titanium.
Applications
TTIP can be used as a precursor for ambient conditions vapour phase deposition such as infiltration into polymer thin films.
References
External links
Alkoxides
Titanium(IV) compounds
Isopropyl compounds | Titanium isopropoxide | [
"Chemistry"
] | 695 | [
"Bases (chemistry)",
"Alkoxides",
"Functional groups"
] |
9,564,371 | https://en.wikipedia.org/wiki/Acid%20fuchsin | Acid fuchsin or fuchsine acid, (also called Acid Violet 19 and C.I. 42685) is an acidic magenta dye with the chemical formula C20H17N3Na2O9S3. It is a sodium sulfonate derivative of fuchsine. Acid fuchsin has wide use in histology, and is one of the dyes used in Masson's trichrome stain. This method is commonly used to stain cytoplasm and nuclei of tissue sections in the histology laboratory in order to distinguish muscle from collagen. The muscle stains red with the acid fuchsin, and the collagen is stained green or blue with Light Green SF yellowish or methyl blue. It can also be used to identify growing bacteria.
See also
New fuchsine
Pararosanilin
Verhoeff’s Stain
Pollen grain staining (Alexander's stain)
References
Staining dyes
Triarylmethane dyes
Anilines
Benzenesulfonates | Acid fuchsin | [
"Chemistry"
] | 209 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
9,564,390 | https://en.wikipedia.org/wiki/Swiss%20cheese%20model | The Swiss cheese model of accident causation is a model used in risk analysis and risk management. It likens human systems to multiple slices of Swiss cheese, which has randomly placed and sized holes in each slice, stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses which are "layered" behind each other. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize (e.g. a hole in each slice in the stack aligning with holes in all other slices), since other defenses also exist (e.g. other slices of cheese), to prevent a single point of failure.
The model was originally formally propounded by James T. Reason of the University of Manchester, and has since gained widespread acceptance. It is sometimes called the "cumulative act effect". Applications include aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth.
Although the Swiss cheese model is respected and considered a useful method of relating concepts, it has been subject to criticism that it is used too broadly, and without enough other models or support.
Holes and slices
In the Swiss cheese model, an organization's defenses against failure are modeled as a series of imperfect barriers, represented as slices of cheese, specifically Swiss cheese with holes known as "eyes", such as Emmental cheese. The holes in the slices represent weaknesses in individual parts of the system and are continually varying in size and position across the slices. The system produces failures when a hole in each slice momentarily aligns, permitting (in Reason's words) "a trajectory of accident opportunity", so that a hazard passes through holes in all of the slices, leading to a failure.
Frosch described Reason's model in mathematical terms as a model in percolation theory, which he analyses as a Bethe lattice.
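A simple quantitative illustration (the independence assumption here is for illustration only; Reason's model does not require the layers to be statistically independent): if each of $n$ independent layers has probability $p_i$ of failing to stop the hazard, the probability that the hazard passes through all of them is

$$P(\text{failure}) = \prod_{i=1}^{n} p_i, \qquad \text{e.g. } p_i = 0.1,\ n = 4 \implies P = 10^{-4}.$$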
Active and latent failures
The model includes active and latent failures. Active failures encompass the unsafe acts that can be directly linked to an accident, such as (in the case of aircraft accidents) a navigation error. Latent failures include contributory factors that may lie dormant for days, weeks, or months until they contribute to the accident. Latent failures span the first three domains of failure in Reason's model.
In the early days of the Swiss cheese model, from the late 1980s to about 1992, attempts were made to combine two theories: James Reason's multi-layer defence model and Willem Albert Wagenaar's tripod theory of accident causation. This resulted in a period in which the Swiss cheese diagram was represented with the slices of cheese labelled 'active failures', 'preconditions' and 'latent failures'.
These attempts to combine the theories still cause confusion today. A more correct version of the combined theories shows the active failures (now called immediate causes), preconditions and latent failures (now called underlying causes) as the reason each barrier (slice of cheese) has a hole in it, and the slices of cheese as the barriers.
Examples of applications
The framework has been applied to a range of areas including aviation safety, various engineering domains, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth.
The model was used in some areas of healthcare. For example, a latent failure could be the similar packaging of two drugs that are then stored close to each other in a pharmacy. This failure would be a contributory factor in the administration of the wrong drug to a patient. Such research led to the realization that medical error can be the result of "system flaws, not character flaws", and that greed, ignorance, malice or laziness are not the only causes of error.
The Swiss cheese model is nowadays widely used within process safety. Each slice of cheese is usually associated to a safety-critical system, often with the support of bow-tie diagrams. This use has become particularly common when applied to oil and gas drilling and production, both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation.
Lubnau, Lubnau, and Okray apply the model to the engineering of firefighting systems, aiming to reduce human errors by "inserting additional layers of cheese into the system", namely the techniques of Crew Resource Management.
Olson and Raz apply the model to improve deception in the methodology of experimental studies, with multiple thin layers of cheese representing subtle components of deception which hide the study hypothesis.
See also
Chain of events (accident analysis)
Healthcare error proliferation model
Iteration
Latent human error
Mitigation
Proximate and ultimate causation
Proximate cause
Redundancy (engineering)
Root cause analysis
System accident
Systems engineering
Systems modelling
References
Aviation safety
Error
Failure
Metaphors referring to food and drink
Process safety
Safety engineering
Scientific models | Swiss cheese model | [
"Chemistry",
"Engineering"
] | 1,007 | [
"Chemical process engineering",
"Systems engineering",
"Safety engineering",
"Process safety"
] |
9,565,562 | https://en.wikipedia.org/wiki/Dyneema%20Composite%20Fabric | Dyneema Composite Fabric (DCF), also known as Cuben Fiber (CTF3), is a high-performance non-woven composite material used in high-strength, low-weight applications. It is constructed from a thin sheet of ultra-high-molecular-weight polyethylene (UHMWPE, "Dyneema") laminated between two sheets of polyester.
It is used in various applications that call for a fabric with high tensile strength, but where low weight is desirable, such as sails and ultralight backpacking equipment.
The material was developed by the Cuben Fiber and Cubic Tech Corporations in the 1990s. In 2015, Cubic Tech was acquired by DSM, their supplier for the UHMWPE fiber. The product was subsequently renamed "Dyneema Composite Fabric" ("DCF"), a generic brand name DSM uses for all of their composite products which incorporate UHMWPE.
History
The name Cuben Fiber was coined by the press in reference to America³ (pronounced America Cubed), the winner of the 1992 America's Cup. During the 1992 Cup, that yacht reportedly used sails made from precursors to the currently available commercial product. In late 2007, the Cuben Fiber Corporation was acquired by North Sails. North Sails said they would continue to supply the materials to competitors when available. Cubic Tech Corporation has the exclusive rights to develop and sell "Cuben Fiber" laminates for all non-sailing applications.
On 13 May 2015, a news release from Heerlen, Netherlands announced that Cubic Tech Corporation had been bought out by DSM Dyneema, a subsidiary of DSM (officially known as Koninklijke DSM N.V.). Additional details about the buyout and the future of Cubic Tech Corp were revealed in an online Q&A with DSM.
Production
Dyneema Composite Fabric is a laminated fabric constructed from ultra-high-molecular-weight polyethylene (UHMWPE) fiber monofilaments and films such as polyester and polyvinyl fluoride. Cuben Fiber is sometimes confused with carbon fiber, one of the many fibers used as a reinforcement in some Cuben Fiber laminates. Cubic Tech Corporation's ultra-high performance flexible laminates were re-branded from Cuben Fiber to CTF3 in 2009. Cubic Tech Corp produces CTF3 with a wide variety of fibers such as Vectran, carbon, and Kevlar. In August 2013 it was announced that Cubic Tech Corp. had produced a version of their waterproof breathable fabric which utilizes the GE eVent fabric.
Application
The material is used in yachting, performance sailing, windsurfing, inflatables, airship hulls, medical applications and increasingly in ultralight backpacking equipment, such as tents and backpacks. It is also used to make wallets. Similar to sails made from traditional woven sail cloth, Dyneema Composite Fabric sails are constructed from panels that are bonded and sewn together, as opposed to three-dimensional laminated (3DL) sails that are laminated over a mold. The material is reportedly more durable than laminated sails of comparable strength while being lighter in weight. UHMWPE has excellent resistance to ultraviolet light and is less prone to disintegrate from repeated flexing than either Kevlar or carbon fiber.
See also
Silnylon
References
External links
Dyneema® Fabrics
Polypropylene Synthetic Fiber
Sailing equipment
Synthetic fibers | Dyneema Composite Fabric | [
"Chemistry"
] | 727 | [
"Synthetic materials",
"Synthetic fibers"
] |
9,565,831 | https://en.wikipedia.org/wiki/Kauffman%20polynomial | In knot theory, the Kauffman polynomial is a 2-variable knot polynomial due to Louis Kauffman. It is initially defined on a link diagram as
,
where is the writhe of the link diagram and is a polynomial in a and z defined on link diagrams by the following properties:
(O is the unknot).
L is unchanged under type II and III Reidemeister moves.
Here is a strand and (resp. ) is the same strand with a right-handed (resp. left-handed) curl added (using a type I Reidemeister move).
Additionally L must satisfy Kauffman's skein relation:
The pictures represent the L polynomial of the diagrams which differ inside a disc as shown but are identical outside.
Kauffman showed that L exists and is a regular isotopy invariant of unoriented links. It follows easily that F is an ambient isotopy invariant of oriented links.
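As a small worked example using only the defining properties above, consider the unknot drawn with a single right-handed curl, so that the writhe is +1:

```latex
L(O_R) = a\,L(O) = a,
\qquad
F(O_R) = a^{-w(O_R)}\,L(O_R) = a^{-1}\cdot a = 1 = F(O).
```

L changes under the type I move (it is only a regular isotopy invariant), while the writhe factor in F cancels the curl, illustrating why F is invariant under ambient isotopy.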
The Jones polynomial is a special case of the Kauffman polynomial, as the L polynomial specializes to the bracket polynomial. The Kauffman polynomial is related to Chern–Simons gauge theories for SO(N) in the same way that the HOMFLY polynomial is related to Chern–Simons gauge theories for SU(N).
References
Further reading
External links
"Kauffman polynomial", Encyclopedia of Mathematics
Knot theory
Polynomials | Kauffman polynomial | [
"Mathematics"
] | 289 | [
"Polynomials",
"Algebra"
] |
9,566,223 | https://en.wikipedia.org/wiki/Temperament%20test | Temperament tests assess dogs for certain behaviors or suitability for dog sports or adoption from an animal shelter by observing the animal for unwanted or potentially dangerous behavioral traits, such as aggressiveness towards other dogs or humans, shyness, or extreme fear.
AKC Temperament Test
In 2019, the American Kennel Club launched its AKC Temperament Test (ATT), a pass-fail evaluation by AKC-licensed or member clubs. Evaluators are specially trained AKC Obedience judges, Rally judges and AKC-approved Canine Good Citizen evaluators.
American Temperament Test Society
American Temperament Test Society, Inc. was started by Alfons Ertel in 1977. Ertel created a test for dogs that checks a dog's reaction to strangers, to auditory and visual stimuli (such as the gun shot test), and to unusual situations in an outdoor setting; it does not test indoor or home situation scenarios. It favors a bold, confident dog. To date, the top three dog breeds that have been tested with ATTS are Rottweiler (17% of all tests conducted), German Shepherd Dog (10%), and Doberman (5%). The test itself is copyrighted and prospective testers must apply to become official. The test is conducted as a pass-fail by majority rule of three testers, and each individual dog is graded according to its own breed's native aptitudes, taking into account the individual dog's age, health and training. Though the ATTS is the only organization which posts pass rates "by breed", the breeds cannot be compared against each other because the grades are based on each breed's own characteristics. Despite that, attorneys have been encouraged to use the ATTS published "results by breed" to defend their clients in dangerous dog cases by comparing pass rates of the breed of their client's dog against the pass rates of other well-known non-aggressive pet dog breeds. A total of 34,686 tests have been completed, fewer than 1,000 per year.
BH-VT test by FCI
BH-VT, an abbreviation of a German term which roughly translates to "companion dog test with traffic safety part", is governed by rules from Fédération Cynologique Internationale (FCI). The BH-VT has become the prerequisite examination for entry into almost all dog sports in Europe that require off-leash work, such as Schutzhund/IPO/IGP, agility and flyball. It is not required for conformation shows where dogs are always on a leash.
Dogs must be at least 12 months old (older for some breeds). There are two portions: obedience and traffic. For the obedience portion, each of the following is part of the test: heeling on leash, heeling off leash, sit exercise, down with recall, and down under distraction. The traffic portion includes tests for encountering a group of people, bicyclists, cars, joggers, other dogs, and being tethered for a short period alone without its handler, and walking through a group of people that are moving.
Aggression towards other dogs is at all times forbidden in FCI events, and any bite or attempt to bite or attack a human or other dog will immediately disqualify a dog. Any aggression towards another dog will permanently disqualify a dog from any participation until it has proven itself through passing a repeat BH-VT with behavioral test.
An earlier version of the test was called simply "BH", and it was Schutzhund's preliminary test that all dogs must pass before going further in Schutzhund training. With the increase in (non-protection) dog sports for all breeds, the new BH-VT omits the "gun shy" test, which was instead moved to the next higher level of Schutzhund trials.
Canine Good Citizen by AKC
The Canine Good Citizen by the American Kennel Club tests for good behavior in a companion dog. Over 1 million dogs and their owners have participated in CGC since it was started in 1989 (over 30,000 dogs per year).
Puppy aptitude tests
There are numerous puppy aptitude and temperament tests which are used by buyers when selecting a puppy and by breeders when evaluating a litter of puppies.
Shelter evaluations
Shelters use temperament tests to help identify dogs with problem behaviors, including aggression, and to help increase the rate of successful adoptions. For some, these tests are a way to determine if a dog should even be offered for adoption, or to whom they will restrict adoption of an individual dog (adult-only household or sanctuary only, versus family with children). In a time when shelters are trying to improve outcomes for shelter animals, some consider temperament tests to be controversial and result in too many dogs being labeled negatively, leading to euthanasia. As such, some shelters have discontinued using any form of testing for their dogs.
Such tests seek to assess a dog's manners, and its reaction to strangers, small children and other pets. The tests try to identify if a dog has problems with food aggression, resource guarding, or separation anxiety. Tools used for evaluations might include a leash, bowl of food, a lifelike doll, a fake arm, and dog treats or toys.
Assess-a-Pet and Assess-a-Hand
The Assess-a-Pet Temperament Test involves use of the Assess-a-Hand, a vinyl or latex mock hand and arm mounted on a wooden dowel, used to avoid bites to the tester who uses it to approach, pet, and then pull away a bowl or toy from the dog. The device was invented by Sue Sternberg. The test is typically given after a certain number of days at a shelter, with retesting after a failure, and additionally after resolution of illness.
Match-Up II Shelter Dog Rehoming Program
This test requires two people: a handler and a recorder. It has 11 sub-tests and the answers are placed in a computer program. It was designed to "help shelters learn about the personality and needs of each dog so that behavioral interventions can be implemented and successful matches can be made."
SAFER Test
SAFER (Safety Assessment for Evaluating Rehoming) by the ASPCA is used to "help identify the risk of future aggression and individual behavioral support needed before adoption for each dog in a shelter."
Wolfhound testing
Temperament testing in wolfhounds is an old and proven form of mild dog fighting used in young dogs to test their temperament. For example, an American standard for an Irish Wolfhound is defined as "a large, rough-coated, greyhound-like dog, fast enough to catch a wolf and strong enough to kill it." It states that "the breed's well-being demands strong, gentle hounds, never aggressive or shy, not even "edgy" ones. Edgy hounds are presently under control, but without their handler's constant control would surely at least retreat, or perhaps manifest worse characteristics of the weak temperament."
Typically it is practiced with larger breeds known in Russia as волкодав (literally: dogs meant for the hunting of wolves). These large breeds (such as Caucasian Shepherd) in Russia undergo the testing called тестовые испытания волкодавов (i.e. testing/examination of dogs meant for hunting wolves). The breeders believe that males used for breeding have to have preserved fighting ability and dominant tendencies because it is a typical mark of their breed. They also believe that weak dogs without fighting abilities will cause a decrease in the quality of the breed.
As part of the test, breeders release two young dogs and let them behave naturally even if they begin to fight. If the fight looks dangerous, the breeders pull the dogs off each other to prevent their injury. If one of the participating dogs shows fear of the other dog and displays no dominant tendencies, he is removed from breeding to ensure his weak nature is not passed on to his descendants.
See also
Dog behavior
Dog training
Notes
References
Animal emotions
Ethology | Temperament test | [
"Biology"
] | 1,657 | [
"Behavioural sciences",
"Ethology",
"Behavior",
"Animal emotions"
] |
9,566,962 | https://en.wikipedia.org/wiki/Tornado%20Intercept%20Vehicle | The Tornado Intercept Vehicle 1 (TIV 1) and Tornado Intercept Vehicle 2 (TIV 2) are vehicles used to film with an IMAX camera from very close to or within a tornado. They were designed by film director Sean Casey. Both TIVs have "intercepted" numerous tornadoes, including the June 12, 2005, Jayton, Texas tornado, the June 5, 2009, Goshen County, Wyoming tornado, and the strongest intercept, made by TIV 2, the May 27, 2013, Lebanon, Kansas tornado.
TIV 1
The Tornado Intercept Vehicle 1 (TIV 1) is a heavily modified 1997 Ford F-Series F-Super Duty cab & chassis truck used as a storm chasing platform and built by Sean Casey. This heavily armored vehicle can drive into a weak to relatively strong tornado (EF0 to EF3) to film it and take measurements. Work began on the TIV in 2002 and took around eight months to finish, at a total cost of around US$81,000. TIV's armored shell consists of 1/8–1/4 inch steel plate welded to a two-inch square steel tubing frame. The windows are bullet resistant polycarbonate, measuring thick on the windshield and thick on the sides. The TIV weighs approximately fully loaded and is powered by a 7.3 litre Ford Powerstroke turbocharged diesel engine manufactured by Navistar-International, otherwise known as the Navistar T444E.
Before an intercept the front of the vehicle is angled downwards, and four hydraulic claws are lowered into the ground to stabilize the vehicle. The vehicle's top speed is . The TIV has a fuel capacity of , giving it a range of around . The TIV is featured in a series called Storm Chasers which began airing on the Discovery Channel in October 2007. TIV was succeeded in 2008 by TIV 2, but returned to service to finish out the first few chases of the 2008 storm chasing season after TIV 2 suffered mechanical problems. In a June 2011 interview with NPR's All Things Considered, Casey said that TIV was still in service and is designated as the backup vehicle in the event TIV 2 breaks down during a shoot.
After no longer needing the vehicle, Casey abandoned the vehicle on a central Kansas farm. Casey placed the TIV as a prize for a scavenger hunt, where the first one to find the TIV would be able to keep it. Wichita-based storm chaser Robert Clayton found the vehicle in 2020, after searching for it on Google Earth. Clayton's restoration plan for TIV 1 includes removing the claws and adding hydraulic anchoring spikes similar to TIV 2, repairing the air-ride suspension to drop TIV to the ground to prevent wind from getting underneath the vehicle, repainting the vehicle black, and adding instrumentation to collect data for future research.
TIV 2
Casey and his team developed and built the second Tornado Intercept Vehicle, dubbed TIV 2, to be featured in their next IMAX movie and the Storm Chasers series. Work began in September 2007 by forty welding students at the Great Plains Technology Center in Lawton, Oklahoma and was completed in time for the 2008 tornado chase season. TIV 2 was designed to address some of the problems experienced with the original TIV, namely its low ground clearance, lack of four-wheel drive, and low top speed. The TIV 2 has the ability to withstand wind speeds up to not deployed. Deployed, it can withstand a headwind. It is based on a Dodge Ram 3500 that was strengthened and converted to six-wheel drive by adding a third axle.
After season two, the six-wheel drive system was modified to four-wheel drive. It is powered by a 6.7-liter Cummins turbocharged diesel engine, modified with propane and water injection to produce . This gives TIV 2 an estimated top speed of over . Its fuel capacity is 92 US Gallons (348 L), giving TIV 2 an approximate range of around . The body of TIV 2 is constructed of a 1/8-inch steel skin welded over a square tubing steel frame. The windows in TIV 2 are all bullet-resistant interlayered polycarbonate sheets and tempered glass. TIV 2 also features an IMAX filming turret similar to the one on the original TIV. The original TIV's air ride suspension mechanism was not used on TIV 2 in favor of six hydraulic skirts that drop down to deflect wind over the TIV to stabilize it and protect the underside from debris. It was also not originally equipped with hydraulic claws.
TIV 2 debuted on the second season of Storm Chasers, which began airing on the Discovery Channel in October 2008. Its initial performance did not go well, as it was plagued by mechanical failures, including several broken axles, which forced Casey to abandon TIV 2 and return to chasing in the original TIV until TIV 2's issues could be resolved. Despite Discovery Channel showing that TIV 2 was out of commission for the majority of the season, TIV 2 could be seen chasing through to the end of the season, including the May 29, 2008 Kearney, NE tornado, though it was not shown in the series.
In the fall of 2008, TIV 2 received several modifications, mostly focused on reducing the vehicle's weight. To achieve this, less crucial areas of TIV 2's armor were converted from steel to aluminum while more vital areas were reinforced with supplemental composite armor consisting of thin layers of steel, Kevlar, polycarbonate, and rubber. In all, the weight reduction measures brought TIV 2's weight down to . The safety systems were also improved, with the three front wind flaps being consolidated into one skirt, and new hydraulic stabilizing spikes to further increase stability in high winds. Other modifications included additional doors that provided every seat position with an exit (wind skirts up or down), and a redesigned IMAX turret with 50% more windows. The third axle was disconnected from the drive train, thus changing TIV 2 to a 6×4 from its 6×6 design. The third axle now acts as a brace for the vehicle's weight.
The TIV 2 appeared again before the halfway point of the third season of Storm Chasers. In between seasons three and four of Storm Chasers, TIV 2 also appeared in an episode of another Discovery Channel series, Mythbusters, wherein both the TIV 2 and the SRV Dominator vehicle operated by Reed Timmer of TornadoVideos.Net were tested to determine their endurance to storm-force winds by being parked behind a Boeing 747 with the engines at full throttle. When tested at a wind speed of , the TIV 2 had the driver's door pulled open, though this was due to human error, as Casey forgot to lock the door prior to the test. When tested again at (equivalent to an EF5 tornado), the TIV 2 suffered no ill effects other than the anchoring spikes being slightly bent; the Dominator ended up being blown approximately , although it remained upright. TIV 2 would intercept a tornado near La Grange, WY in 2009 which would be the intercept shot Casey needed for his IMAX film. Future chases in TIV 2 would be for b-roll footage of the TIV 2 and for his new IMAX film.
In 2011, a siren was added to the vehicle to allow the TIV 2 to act as a mobile warning system for civilians in the path of incoming tornadoes, after several incidents earlier that year where the TIV 2 team was unable to effectively warn locals of the imminent danger of the tornadoes they were tracking, especially during the 2011 Super Outbreak. On April 27, 2011, the TIV 2 team intercepted an EF4 tornado that hit near Enterprise, Mississippi. Though not in the tornado's path, but only 200 yards from it, this was the first tornado Casey shot with his new stereoscopic IMAX camera. Casey removed the rear flap in early 2012 and built a new set of two hydraulic spikes that go into the ground during an intercept.
On May 27, 2013, TIV 2 intercepted a large tornado near Smith Center, Kansas. The vehicle was struck by large debris from a nearby farm and suffered damage to the roof-mounted anemometer and at least two breaches of the crew compartment when the roof hatch and one of the doors were blown open. Before the anemometer was disabled, it recorded winds of , placing the tornado in the EF3 to EF4 range.
On October 21, 2019, Casey listed the TIV 2 on Craigslist for US$35,000 and it was later sold to storm chaser Ryan Shepard. The TIV 2 was fully restored and back on the road again in the 2021 storm season, where it made multiple close intercepts on June 10 in western North Dakota. It is under sponsorship of Storm of Passion and Live Storm Chasers.
Subanator
On March 6, 2023, Sean Casey announced on his Instagram the construction of a new storm chasing vehicle, not related to the previous Tornado Intercept Vehicles, using a Subaru Outback 3.6R as the base car. Unlike his previous vehicles, this one is not built from scratch but is instead fitted with polycarbonate body panels in place of the original plastic ones. There are also two Lexan windows up front and a Lexan windshield to protect from debris and hail.
On March 16, 2023, ten days after the announcement of the new vehicle, Sean published another post showing the hydraulic spikes that had been installed. There are four spikes in total, two on either end. They are first put into position and lowered with the help of a third piston, which moves them closer together and closer to the ground. The spikes then shoot into the ground.
Instrumentation
Although primarily designed to shoot film from near or within tornadoes, the TIVs have at times been outfitted with meteorological instrumentation atop masts to complement the Doppler on Wheels (DOW) radar trucks of the Center for Severe Weather Research run by atmospheric scientist and inventor Joshua Wurman.
See also
SRV Dominator
References
External links
Tornado Alley IMAX movie
How the Tornado Intercept Vehicle Works
TIV images
Riders on the storm
Ryan Shepard
Meteorological instrumentation and equipment
Tornado
Armored cars of the United States
Storm chasing | Tornado Intercept Vehicle | [
"Technology",
"Engineering"
] | 2,109 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
9,567,916 | https://en.wikipedia.org/wiki/Mechanical%20screening | Mechanical screening, often just called screening, is the practice of taking granulated or crushed ore material and separating it into multiple grades by particle size.
This practice occurs in a variety of industries such as mining and mineral processing, agriculture, pharmaceutical, food, plastics, and recycling.
A method of separating solid particles according to size alone is called screening.
General categories
Screening falls under two general categories: dry screening, and wet screening. From these categories, screening separates a flow of material into grades, these grades are then either further processed to an intermediary product or a finished product. Additionally, the machines can be categorized into a moving screen and static screen machines, as well as by whether the screens are horizontal or inclined.
Applications
The mining and mineral processing industry uses screening for a variety of processing applications. For example, after mining, the minerals are transported to a primary crusher. Before crushing, large boulders are scalped on a shaker with thick shielding screening. Further downstream, after crushing, the material can pass through screens with openings or slots that continue to become smaller. Finally, screening is used to make a final separation to produce saleable products based on a grade or a size range.
Process
A screening machine consists of a drive that induces vibration, a screen media that causes particle separation, and a deck which holds the screen media and the drive and is the mode of transport for the vibration.
There are physical factors that make screening practical. For example, vibration, g-force, bed density, and material shape all influence the rate or cut. Electrostatic forces can also hinder screening efficiency: attracted moisture can cause sticking or plugging, while very dry material can generate a charge that causes it to cling to the screen itself.
As with any industrial process there is a group of terms that identify and define what screening is. Terms like blinding, contamination, frequency, amplitude, and others describe the basic characteristics of screening, and those characteristics in turn shape the overall method of dry or wet screening.
In addition, the way a deck is vibrated differentiates screens. Different types of motion have their advantages and disadvantages. In addition media types also have their different properties that lead to advantages and disadvantages.
Finally, there are issues and problems associated with screening. Screen tearing, contamination, blinding, and dampening all affect screening efficiency.
Physical principles
Vibration - either sinusoidal vibration or gyratory vibration.
Sinusoidal vibration occurs at an angled plane relative to the horizontal. The vibration follows a wave pattern determined by frequency and amplitude (a short peak-acceleration sketch follows this list).
Gyratory vibration occurs at a near-level plane, at low angles, in a reciprocating side-to-side motion.
Gravity - This physical interaction acts after material is thrown from the screen, causing it to fall to a lower level. Gravity also pulls the particles through the screen media.
Density - The density of the material relates to material stratification.
Electrostatic Force - This force applies to screening when particles are extremely dry or wet.
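As referenced above, the amplitude, frequency and g-force terms defined below interact through simple sinusoidal kinematics. The following is a minimal Python sketch; the function name and the 3 mm / 900 rpm deck are hypothetical, chosen only for illustration:

```python
import math

def screen_g_force(amplitude_mm: float, freq_rpm: float) -> float:
    """Peak acceleration of sinusoidal screen motion, in multiples of g.

    amplitude_mm is the centre-to-peak displacement of the screen cloth.
    """
    omega = 2 * math.pi * freq_rpm / 60.0               # angular frequency, rad/s
    peak_accel = (amplitude_mm / 1000.0) * omega ** 2   # a * w^2, in m/s^2
    return peak_accel / 9.81

# A hypothetical deck: 3 mm amplitude at 900 rpm -> roughly 2.7 g
print(round(screen_g_force(3.0, 900.0), 2))
```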
Screening terminology
Like any mechanical and physical entity there are scientific, industrial, and layman terminology. The following is a partial list of terms that are associated with mechanical screening.
Amplitude - This is a measurement of the screen cloth as it vertically peaks to its tallest height and troughs to its lowest point.
Acceleration - The acceleration applied to the screen mesh in order to overcome the van der Waals forces; measured in multiples of the acceleration constant g (g-force).
Blinding - When material plugs into the open slots of the screen cloth and inhibits overflowing material from falling through.
Brushing - This procedure is performed by an operator who uses a brush to sweep over the screen cloth to dislodge blinded openings.
Contamination - This is unwanted material in a given grade. This occurs when there is oversize or fine size material relative to the cut or grade. Another type of contamination is foreign body contamination.
Oversize contamination occurs when there is a hole in the screen such that the hole is larger than the mesh size of the screen. Other instances where oversize occurs is material overflow falling into the grade from overhead, or there is the wrong mesh size screen in place.
Fines contamination is when large sections of the screen cloth is blinded over, and material flowing over the screen does not fall through. The fines are then retained in the grade.
Foreign body contamination is unwanted material that differs from the virgin material going over and through the screen. It can be anything ranging from tree twigs, grass, metal slag to other mineral types and composition. This contamination occurs when there is a hole in the scalping screen or a foreign material's mineralogy or chemical composition differs from the virgin material.
Deck - a deck is a frame or apparatus that holds the screen cloth in place. It also contains the screening drive. It can contain multiple sections as the material travels from the feed end to the discharge end. Multiple decks are screen decks placed in a configuration where a series of decks are attached vertically, each leaning at the same angle as the decks above and below it. Multiple decks are often referred to as single deck, double deck, triple deck, etc.
Frequency - Measured in hertz (Hz) or revolutions per minute (RPM). Frequency is the number of times the screen cloth sinusoidally peaks and troughs within a second. For a gyratory screening motion, it is the number of revolutions the screens or screen deck makes in a time interval, such as revolutions per minute (RPM).
Gradation, grading - Also called "cut" or "cutting." Given a feed material in an initial state, the material can be defined to have a particle size distribution. Grading is removing the maximum size material and minimum size material by way of mesh selection.
Screen Media (Screen cloth) - the material, defined by mesh size, that performs the separation; it can be made of materials such as steel, stainless steel, rubber compounds, polyurethane, or brass.
Shaker - the whole assembly of any type mechanical screening machine.
Stratification - This phenomenon occurs as vibration is passed through a bed of material. This causes coarse (larger) material to rise and finer (smaller) material to descend within the bed. The material in contact with screen cloth either falls through a slot or blinds the slot or contacts the cloth material and is thrown from the cloth to fall to the next lower level.
Mesh - The number of open slots per linear inch. Mesh is arranged in multiple configurations. Mesh can be a square pattern, long-slotted rectangular pattern, circular pattern, or diamond pattern.
Scalp, scalping - this is the very first cut of the incoming material with the sum of all its grades. Scalping is removing the largest size particles. This includes enormously large particles relative to the other particle's sizes. Scalping also cleans the incoming material from foreign body contamination such as twigs, trash, glass, or other unwanted oversize material.
Types of mechanical screening
There are a number of types of mechanical screening equipment that cause segregation. These types are based on the motion of the machine through its motor drive.
Circle-throw vibrating equipment - This type of equipment has an eccentric shaft that causes the frame of the shaker to lurch at a given angle. This lurching action literally throws the material forward and up. As the machine returns to its base state the material falls by gravity to a physically lower level. This type of screening is also used in mining operations for large material, with sizes that range from six inches to +20 mesh.
High frequency vibrating equipment - This type of equipment drives the screen cloth only. Unlike the above, the frame of the equipment is fixed and only the screen vibrates. However, this equipment is similar in that it still throws material off the screen and allows the particles to cascade down the screen cloth. These screens are for sizes smaller than 1/8 of an inch to +150 mesh.
Gyratory equipment - This type of equipment differs from the above two such that the machine gyrates in a circular motion at a near level plane at low angles. The drive is an eccentric gear box or eccentric weights.
Trommel screens - Does not require vibration. Instead, material is fed into a horizontal rotating drum with screen panels around the diameter of the drum.
Tumbler screening technique
An improvement on conventional vibratory and linear screeners, a tumbler screener uses an elliptical action which aids in screening of even very fine material. As with panning for gold, the fine particles tend to stay towards the center while the larger ones move to the outside. This allows for segregation and unloads the screen surface so that it can effectively do its job. With the addition of multiple decks and ball cleaning decks, even difficult products can be screened at high capacity to very fine separations.
Circle-throw vibrating equipment
Circle-Throw Vibrating Equipment is a shaker, or a series of shakers, in which the drive causes the whole structure to move. The structure extends to a maximum throw or length and then contracts to a base state. A pattern of springs is situated below the structure to provide vibration and shock absorption as the structure returns to the base state.
This type of equipment is used for very large particles, sizes that range from pebble size on up to boulder size material. It is also designed for high volume output. As a scalper, this shaker will allow oversize material to pass over and fall into a crusher such as a cone crusher, jaw crusher, or hammer mill. The material that passes the screen bypasses the crusher and is conveyed and combined with the crushed material.
Also this equipment is used in washing processes, as material passes under spray bars, finer material and foreign material is washed through the screen. This is one example of wet screening.
High frequency vibrating equipment
High-frequency vibrating screening equipment is a shaker whose frame is fixed and whose drive vibrates only the screen cloth. High-frequency vibration equipment is for particles in the size range from 1/8 in (3 mm) down to +150 mesh. Traditional shaker screeners have a difficult time making separations at sizes around 44 microns, while other high-energy sieves, such as Elcan Industries' advanced screening technology, allow much finer separations, down to as fine as 10 µm and even 5 µm.
These shakers usually make a secondary cut for further processing or make a finished product cut.
These shakers are usually set at a steep angle relative to the horizontal level plane. Angles range from 25 to 45 degrees relative to the horizontal level plane.
Gyratory equipment
This type of equipment has an eccentric drive or weights that causes the shaker to travel in an orbital path. The material rolls over the screen and falls with the induction of gravity and directional shifts. Rubber balls and trays provide an additional mechanical means to cause the material to fall through. The balls also provide a throwing action for the material to find an open slot to fall through.
The shaker is set at a shallow angle relative to the horizontal level plane, usually no more than 2 to 5 degrees.
These types of shakers are used for very clean cuts. Generally, a final material cut will not contain any oversize or any fines contamination.
These shakers are designed for the highest attainable quality at the cost of a reduced feed rate.
Trommel screens
Trommel screens have a rotating drum on a shallow angle with screen panels around the diameter of the drum. The feed material always sits at the bottom of the drum and, as the drum rotates, always comes into contact with clean screen. The oversize travels to the end of the drum as it does not pass through the screen, while the undersize passes through the screen into a launder below.
Screen Media Attachment Systems
There are many ways to install screen media into a screen box deck (shaker deck). Also, the type of attachment system has an influence on the dimensions of the media.
Tensioned screen media
Tensioned screen cloth is typically 4 feet by the width or the length of the screening machine, depending on whether the deck is side- or end-tensioned. Screen cloth for tensioned decks can be made with hooks and is attached with clamp rails bolted on both sides of the screen box. When the clamp rail bolts are tightened, the cloth is tensioned, or even stretched in the case of some types of self-cleaning screen media. To ensure that the center of the cloth does not tap repeatedly on the deck due to the vibrating shaker, and that the cloth stays tensioned, support bars are positioned at different heights on the deck to create a crown curve from hook to hook on the cloth. Tensioned screen cloth is available in various materials: stainless steel, high carbon steel and oil tempered steel wires, as well as moulded rubber or polyurethane and hybrid screens (a self-cleaning screen cloth made of rubber or polyurethane and metal wires).
Commonly, vibratory-type screening equipment employs rigid, circular sieve frames to which woven wire mesh is attached. Conventional methods of producing tensioned meshed screens has given way in recent years to bonding, whereby the mesh is no longer tensioned and trapped between a sieve frame body and clamping ring; instead, developments in modern adhesive technologies has allowed the industry to adopt high strength structural adhesives to bond tensioned mesh directly to frames.
Modular screen media
Modular screen media is typically 1 foot wide by 1 or 2 feet long (4 feet long for ISEPREN WS 85) steel-reinforced polyurethane or rubber panels. They are installed on a flat deck (no crown) that normally has a larger surface than a tensioned deck. This larger surface design compensates for the fact that rubber and polyurethane modular screen media offer less open area than wire cloth. Over the years, numerous ways have been developed to attach modular panels to the screen deck stringers (girders). Some of these attachment systems have been or are currently patented. Self-cleaning screen media is also available on this modular system.
Types of Screen Media
There are several types of screen media manufactured with different types of material that use the two common types of screen media attachment systems, tensioned and modular.
Woven Wire Cloth (Mesh)
Woven wire cloth, typically produced from stainless steel, is commonly employed as a filtration medium for sieving in a wide range of industries. Most often woven with a plain weave, or a twill weave for the lightest of meshes, apertures can be produced from a few microns upwards (e.g. 25 microns), employing wires with diameters from as little as 25 microns. A twill weave allows a mesh to be woven when the wire diameter is too thick in proportion to the aperture. Other, less commonplace, weaves, such as Dutch/Hollander, allow the production of meshes that are stronger and/or having smaller apertures.
Today wire cloth is woven to strict international standards, e.g. ISO1944:1999, which dictates acceptable tolerance regarding nominal mesh count and blemishes. The nominal mesh count, to which mesh is generally defined, is a measure of the number of openings per lineal inch, determined by counting the number of openings from the centre of one wire to the centre of another wire one lineal inch away. For example, a 2 mesh woven with a wire of 1.6mm wire diameter has an aperture of 11.1mm (see picture below of a 2 mesh with an intermediate crimp). The formula for calculating the aperture of a mesh, with a known mesh count and wire diameter, is as follows:

a = (25.4 / b) - c (all lengths in millimetres),

where a = aperture, b = mesh count (openings per lineal inch) and c = wire diameter.
Other calculations regarding woven wire cloth/mesh can be made including weight and open area determination. Of note, wire diameters are often referred to by their standard wire gauge (swg); e.g. a 1.6mm wire is a 16 swg.
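These relations can be sketched in a few lines of Python; the open-area calculation assumes a plain square weave, and the helper names are illustrative rather than industry-standard:

```python
INCH_MM = 25.4  # millimetres per lineal inch

def aperture_mm(mesh_count: float, wire_dia_mm: float) -> float:
    """Aperture a = 25.4/b - c (all in mm) for a plain square weave."""
    return INCH_MM / mesh_count - wire_dia_mm

def open_area_pct(mesh_count: float, wire_dia_mm: float) -> float:
    """Open area as (aperture / pitch)^2, where pitch = 1 / mesh count."""
    a = aperture_mm(mesh_count, wire_dia_mm)
    pitch = INCH_MM / mesh_count
    return (a / pitch) ** 2 * 100.0

# The 2 mesh / 1.6 mm (16 swg) example from the text:
print(aperture_mm(2, 1.6))               # -> 11.1 mm
print(round(open_area_pct(2, 1.6), 1))   # -> about 76.4 %
```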
Traditionally, screen cloth was made with metal wires woven on a loom. Today, woven cloth is still widely used, primarily because it is less expensive than other types of screen media. Over the years, different weaving techniques have been developed, either to increase the open area percentage or to add wear life. Slotted-opening woven cloth is used where product shape is not a priority and where users need a higher open area percentage. Flat-top woven cloth is used when the consumer wants to increase wear life. On regular woven wire, the crimps (knuckles on woven wires) wear out faster than the rest of the cloth, resulting in premature breakage. On flat-top woven wire, the cloth wears evenly until half of the wire diameter is worn, resulting in a longer wear life. Unfortunately, flat-top woven wire cloth is not widely used because the lack of crimps causes a pronounced reduction in passing fines, resulting in premature wear of cone crushers.
Perforated & Punch Plate
On a crushing and screening plant, punch plates or perforated plates are mostly used on scalper vibrating screens, after raw products pass on grizzly bars. Most likely installed on a tensioned deck, punch plates offer excellent wear life for high-impact and high material flow applications.
Synthetic screen media (typically rubber or polyurethane)
Synthetic screen media is used where wear life is an issue. Large producers, such as mines or huge quarries, use it to reduce the frequency of having to stop the plant for screen deck maintenance. Rubber is also a very resistant high-impact screen media material used on the top deck of a scalper screen. To compete with rubber screen media, polyurethane manufacturers developed screen media with lower Shore hardness. To compete with self-cleaning screen media, which is still primarily available as tensioned cloth, synthetic screen media manufacturers also developed membrane screen panels, slotted opening panels and diamond opening panels. Due to the 7-degree demoulding angle, users of polyurethane screen media can experience changes in product granulometry during the wear life of the panel.
Self-Cleaning Screen Media
Self-cleaning screen media was initially engineered to resolve screen cloth blinding, clogging and pegging problems. The idea was to place crimped wires side by side on a flat surface, creating openings, and then, in some way, hold them together over the support bars (crown bars or bucker bars). This would allow the wires to vibrate freely between the support bars, preventing blinding, clogging and pegging of the cloth. Initially, crimped longitudinal wires on self-cleaning cloth were held together over support bars with woven wire. In the 1950s, some manufacturers started to cover the woven cross wires with caulking or rubber to prevent premature wear of the crimps (knuckles on woven wires). One of the pioneer products in this category was ONDAP GOMME, made by the French manufacturer Giron. During the mid-1990s, Major Wire Industries Ltd., a Quebec manufacturer, developed a "hybrid" self-cleaning screen cloth called Flex-Mat, without woven cross wires. In this product, the crimped longitudinal wires are held in place by polyurethane strips. Rather than locking (impeding) vibration over the support bars as woven cross wires do, the polyurethane strips reduce the vibration of the longitudinal wires over the support bars, thus allowing vibration from hook to hook. Major Wire quickly began promoting this product as a high-performance screen that helped producers screen more in-specification material for less cost, not simply as a problem solver. They claimed that the independently vibrating wires helped produce more product compared to a woven wire cloth with the same opening (aperture) and wire diameter. This higher throughput would be a direct result of the higher vibration frequency of each independent wire of the screen cloth (calculated in hertz) compared to the shaker vibration (calculated in RPM), accelerating the stratification of the material bed. Another benefit contributing to the throughput increase is that hybrid self-cleaning screen media offers a better open area percentage than woven wire screen media. Due to its flat surface (no knuckles), hybrid self-cleaning screen media can use a smaller wire diameter for the same aperture than woven wire and still last as long, resulting in a greater open area percentage.
References
Mining equipment
Plastics industry
Metallurgical processes
Industrial processes
Solid-solid separation | Mechanical screening | [
"Chemistry",
"Materials_science",
"Engineering"
] | 4,158 | [
"Solid-solid separation",
"Mining equipment",
"Separation processes by phases",
"Metallurgical processes",
"Metallurgy"
] |
9,568,170 | https://en.wikipedia.org/wiki/Pyrimidine%20metabolism | Pyrimidine biosynthesis occurs both in the body and through organic synthesis.
De novo biosynthesis of pyrimidine
De novo biosynthesis of a pyrimidine is catalyzed by three gene products: CAD, DHODH and UMPS. The first three enzymes of the process are all coded by the same gene, CAD, which consists of carbamoyl phosphate synthetase II, aspartate carbamoyltransferase and dihydroorotase. Dihydroorotate dehydrogenase (DHODH), unlike CAD and UMPS, is a mono-functional enzyme and is localized in the mitochondria. UMPS is a bifunctional enzyme consisting of orotate phosphoribosyltransferase (OPRT) and orotidine monophosphate decarboxylase (OMPDC). Both CAD and UMPS are localized in the cytosol, around the mitochondria. In fungi, a similar protein exists but lacks the dihydroorotase function: another protein catalyzes the second step.
In other organisms (Bacteria, Archaea and the other Eukaryota), the first three steps are done by three different enzymes.
Pyrimidine catabolism
Pyrimidines are ultimately catabolized (degraded) to CO2, H2O, and urea. Cytosine can be broken down to uracil, which can be further broken down to N-carbamoyl-β-alanine, and then to beta-alanine, CO2, and ammonia by beta-ureidopropionase. Thymine is broken down into β-aminoisobutyrate which can be further broken down into intermediates eventually leading into the citric acid cycle.
β-aminoisobutyrate acts as a rough indicator of the rate of DNA turnover.
Regulations of pyrimidine nucleotide biosynthesis
Through negative feedback inhibition, the end-products UTP and UDP prevent the enzyme CAD from catalyzing the reaction in animals. Conversely, PRPP and ATP act as positive effectors that enhance the enzyme's activity.
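The qualitative effect of this feedback can be sketched with a toy rate law. The sketch below is purely illustrative: it uses a generic competitive-style inhibition term and hypothetical constants, not measured kinetics for CAD:

```python
def cad_rate(substrate: float, utp: float,
             vmax: float = 1.0, km: float = 0.5, ki: float = 0.2) -> float:
    """Toy Michaelis-Menten rate in which UTP raises the apparent Km.

    All parameters are hypothetical; real CAD regulation also involves
    UDP inhibition and activation by PRPP and ATP.
    """
    return vmax * substrate / (km * (1.0 + utp / ki) + substrate)

for utp in (0.0, 0.2, 1.0):                   # rising end-product level
    print(utp, round(cad_rate(1.0, utp), 3))  # rate falls as UTP rises
```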
Pharmacotherapy
Modulating pyrimidine metabolism pharmacologically has therapeutic uses, and could be applied in cancer treatment.
Pyrimidine synthesis inhibitors are used in active moderate to severe rheumatoid arthritis and psoriatic arthritis, as well as in multiple sclerosis. Examples include Leflunomide and Teriflunomide (the active metabolite of leflunomide).
Prebiotic synthesis of pyrimidine nucleotides
In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. The RNA world hypothesis holds that in the primordial soup there existed free-floating pyrimidine and purine ribonucleotides, the fundamental molecules that combine in series to form RNA. Complex molecules such as RNA must have emerged from relatively small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of pyrimidine and purine nucleotides, both of which are necessary for reliable information transfer, and thus natural selection and Darwinian evolution. Becker et al. showed how pyrimidine nucleosides can be synthesized from small molecules and ribose, driven solely by wet-dry cycles.
References
External links
Overview at Queen Mary, University of London
Pyrimidines
Metabolism | Pyrimidine metabolism | [
"Chemistry",
"Biology"
] | 742 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
9,568,579 | https://en.wikipedia.org/wiki/Fatty%20acid%20synthesis | In biochemistry, fatty acid synthesis is the creation of fatty acids from acetyl-CoA and NADPH through the action of enzymes called fatty acid synthases. This process takes place in the cytoplasm of the cell. Most of the acetyl-CoA which is converted into fatty acids is derived from carbohydrates via the glycolytic pathway. The glycolytic pathway also provides the glycerol with which three fatty acids can combine (by means of ester bonds) to form triglycerides (also known as "triacylglycerols" – to distinguish them from fatty "acids" – or simply as "fat"), the final product of the lipogenic process. When only two fatty acids combine with glycerol and the third alcohol group is phosphorylated with a group such as phosphatidylcholine, a phospholipid is formed. Phospholipids form the bulk of the lipid bilayers that make up cell membranes and surrounds the organelles within the cells (such as the cell nucleus, mitochondria, endoplasmic reticulum, Golgi apparatus, etc.). In addition to cytosolic fatty acid synthesis, there is also mitochondrial fatty acid synthesis (mtFASII), in which malonyl-CoA is formed from malonic acid with the help of malonyl-CoA synthetase (ACSF3), which then becomes the final product octanoyl-ACP (C8) via further intermediate steps.
Straight-chain fatty acids
Straight-chain fatty acids occur in two types: saturated and unsaturated. The latter are produced from the former.
Saturated straight-chain fatty acids
Straight-chain fatty acid synthesis occurs via the six recurring reactions shown below, until the 16-carbon palmitic acid is produced.
The diagrams presented show how fatty acids are synthesized in microorganisms and list the enzymes found in Escherichia coli. These reactions are performed by fatty acid synthase II (FASII), which in general contain multiple enzymes that act as one complex. FASII is present in prokaryotes, plants, fungi, and parasites, as well as in mitochondria.
In animals, as well as some fungi such as yeast, these same reactions occur on fatty acid synthase I (FASI), a large dimeric protein that has all of the enzymatic activities required to create a fatty acid. FASII is less efficient than FASI; however, it allows for the formation of more molecules, including "medium-chain" fatty acids via early chain termination.
Once formed, the 16:0 carbon fatty acid can undergo a number of modifications, resulting in desaturation and/or elongation. Elongation to stearate (18:0) mainly occurs in the ER by several membrane-bound enzymes. The steps involved in the elongation process are principally the same as those carried out by FAS, but the four principal successive steps of the elongation are performed by individual proteins, which may be physically associated.
In fatty acid synthesis, the reducing agent is NADPH, whereas NAD is the oxidizing agent in beta-oxidation (the breakdown of fatty acids to acetyl-CoA). This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. (Thus NADPH is also required for the synthesis of cholesterol from acetyl-CoA, while NADH is generated during glycolysis.) The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by "NADP+-linked malic enzyme", pyruvate, CO2 and NADPH are formed. NADPH is also formed by the pentose phosphate pathway, which converts glucose into ribose, which can be used in the synthesis of nucleotides and nucleic acids, or can be catabolized to pyruvate.
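That NADPH demand can be made explicit by tallying the inputs for palmitate (16:0) using the textbook stoichiometry; the function below is a simple counting sketch with an illustrative name:

```python
def synthesis_inputs(n_carbons: int = 16) -> dict:
    """Inputs for de novo synthesis of an even-numbered, saturated
    fatty acid, using the textbook stoichiometry for cytosolic FAS."""
    cycles = n_carbons // 2 - 1      # elongation cycles after the acetyl primer
    return {
        "acetyl-CoA (primer)": 1,
        "malonyl-CoA": cycles,       # each cycle adds two carbons
        "ATP": cycles,               # one per acetyl-CoA -> malonyl-CoA carboxylation
        "NADPH": 2 * cycles,         # two reduction steps per cycle
    }

print(synthesis_inputs(16))
# -> {'acetyl-CoA (primer)': 1, 'malonyl-CoA': 7, 'ATP': 7, 'NADPH': 14}
```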
Conversion of carbohydrates into fatty acids
In humans, fatty acids are formed from carbohydrates predominantly in the liver and adipose tissue, as well as in the mammary glands during lactation.
The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate can be used for gluconeogenesis (in the liver), or it can be returned into the mitochondrion as malate. The cytosolic acetyl-CoA is carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids.
Animals cannot resynthesize carbohydrates from fatty acids
The main fuel stored in the bodies of animals is fat. A young adult human's fat stores average between about , but vary greatly depending on age, sex, and individual disposition. In contrast, the human body stores only about of glycogen, of which is locked inside the skeletal muscles and is unavailable to the body as a whole. The or so of glycogen stored in the liver is depleted within one day of starvation. Thereafter the glucose that is released into the blood by the liver for general use by the body tissues has to be synthesized from the glucogenic amino acids and a few other gluconeogenic substrates, which do not include fatty acids.
Fatty acids are broken down to acetyl-CoA by means of beta oxidation inside the mitochondria, whereas fatty acids are synthesized from acetyl-CoA outside the mitochondrion, in the cytosol. The two pathways are distinct, not only in where they occur, but also in the reactions that occur, and the substrates that are used. The two pathways are mutually inhibitory, preventing the acetyl-CoA produced by beta-oxidation from entering the synthetic pathway via the acetyl-CoA carboxylase reaction. It can also not be converted to pyruvate, as the pyruvate decarboxylation reaction is irreversible. Instead it condenses with oxaloacetate to enter the citric acid cycle. During each turn of the cycle, two carbon atoms leave the cycle as CO2 in the decarboxylation reactions catalyzed by isocitrate dehydrogenase and alpha-ketoglutarate dehydrogenase. Thus each turn of the citric acid cycle oxidizes an acetyl-CoA unit while regenerating the oxaloacetate molecule with which the acetyl-CoA had originally combined to form citric acid. The decarboxylation reactions occur before malate is formed in the cycle. Malate is the only substance that can be removed from the mitochondrion to enter the gluconeogenic pathway to form glucose or glycogen in the liver or any other tissue. There can therefore be no net conversion of fatty acids into glucose.
Only plants possess the enzymes to convert acetyl-CoA into oxaloacetate from which malate can be formed to ultimately be converted to glucose.
Regulation
Acetyl-CoA is converted into malonyl-CoA by acetyl-CoA carboxylase, at which point malonyl-CoA is destined to feed into the fatty acid synthesis pathway. Acetyl-CoA carboxylase is the point of regulation in saturated straight-chain fatty acid synthesis, and is subject to both phosphorylation and allosteric regulation. Regulation by phosphorylation occurs mostly in mammals, while allosteric regulation occurs in most organisms. Allosteric control occurs as feedback inhibition by palmitoyl-CoA and activation by citrate. When there are high levels of palmitoyl-CoA, the final product of saturated fatty acid synthesis, it allosterically inactivates acetyl-CoA carboxylase to prevent a build-up of fatty acids in cells. Citrate acts to activate acetyl-CoA carboxylase when present at high levels, because abundant citrate indicates that there is enough acetyl-CoA to feed into the Krebs cycle and conserve energy.
High levels of insulin in the blood plasma (e.g. after meals) cause the dephosphorylation of acetyl-CoA carboxylase, thus promoting the formation of malonyl-CoA from acetyl-CoA, and consequently the conversion of carbohydrates into fatty acids, while epinephrine and glucagon (released into the blood during starvation and exercise) cause the phosphorylation of this enzyme, inhibiting lipogenesis in favor of fatty acid oxidation via beta-oxidation.
Unsaturated straight chain fatty acids
Anaerobic desaturation
Many bacteria use the anaerobic pathway for synthesizing unsaturated fatty acids. This pathway does not utilize oxygen and is dependent on enzymes to insert the double bond before elongation utilizing the normal fatty acid synthesis machinery. In Escherichia coli, this pathway is well understood.
FabA is a β-hydroxydecanoyl-ACP dehydrase – it is specific for the 10-carbon saturated fatty acid synthesis intermediate (β-hydroxydecanoyl-ACP).
FabA catalyzes the dehydration of β-hydroxydecanoyl-ACP, causing the release of water and insertion of the double bond between C7 and C8 counting from the methyl end. This creates the trans-2-decenoyl intermediate.
Either the trans-2-decenoyl intermediate can be shunted to the normal saturated fatty acid synthesis pathway by FabB, where the double bond will be hydrogenated and the final product will be a saturated fatty acid, or FabA will catalyze its isomerization into the cis-3-decenoyl intermediate.
FabB is a β-ketoacyl-ACP synthase that elongates and channels intermediates into the mainstream fatty acid synthesis pathway. When FabB reacts with the cis-decenoyl intermediate, the final product after elongation will be an unsaturated fatty acid.
The two main unsaturated fatty acids made are Palmitoleoyl-ACP (16:1ω7) and cis-vaccenoyl-ACP (18:1ω7).
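As a note on the nomenclature just used, the Δ position of a double bond (counted from the carboxyl end) and the ω position (counted from the methyl end) are related by ω = chain length − Δ. A minimal Python illustration, using the two products named above:

def delta_to_omega(chain_length, delta_position):
    # Position of the double bond counted from the methyl (omega) end.
    return chain_length - delta_position

print(delta_to_omega(16, 9))   # palmitoleoyl, 16:1 Delta-9 -> omega-7
print(delta_to_omega(18, 11))  # cis-vaccenoyl, 18:1 Delta-11 -> omega-7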
Most bacteria that undergo anaerobic desaturation contain homologues of FabA and FabB. Clostridia are the main exception; they have a novel enzyme, yet to be identified, that catalyzes the formation of the cis double bond.
Regulation
This pathway undergoes transcriptional regulation by FadR and FabR. FadR is the more extensively studied protein and has been attributed bifunctional characteristics. It acts as an activator of fabA and fabB transcription and as a repressor for the β-oxidation regulon. In contrast, FabR acts as a repressor for the transcription of fabA and fabB.
Aerobic desaturation
Aerobic desaturation is the most widespread pathway for the synthesis of unsaturated fatty acids. It is utilized in all eukaryotes and some prokaryotes. This pathway utilizes desaturases to synthesize unsaturated fatty acids from full-length saturated fatty acid substrates. All desaturases require oxygen and ultimately consume NADH even though desaturation is an oxidative process. Desaturases are specific for the double bond they induce in the substrate. In Bacillus subtilis, the desaturase, Δ5-Des, is specific for inducing a cis-double bond at the Δ5 position. Saccharomyces cerevisiae contains one desaturase, Ole1p, which induces the cis-double bond at Δ9.
In mammals, aerobic desaturation is catalyzed by a complex of three membrane-bound enzymes (NADH-cytochrome b5 reductase, cytochrome b5, and a desaturase). These enzymes allow molecular oxygen, O2, to interact with the saturated fatty acyl-CoA chain, forming a double bond and two molecules of water, H2O. Two electrons come from NADH + H+ and two from the single bond in the fatty acid chain. These mammalian enzymes are, however, incapable of introducing double bonds at carbon atoms beyond C-9 in the fatty acid chain. Hence mammals cannot synthesize linoleate or linolenate (which have double bonds at the C-12 (= Δ12), or the C-12 and C-15 (= Δ12 and Δ15) positions, respectively, as well as at the Δ9 position), nor the polyunsaturated, 20-carbon arachidonic acid that is derived from linoleate. These are all termed essential fatty acids, meaning that they are required by the organism but can only be supplied via the diet. (Arachidonic acid is the precursor of prostaglandins, which fulfill a wide variety of functions as local hormones.)
Regulation
In B. subtilis, the aerobic desaturation pathway is regulated by a two-component system: DesK and DesR. DesK is a membrane-associated kinase and DesR is a transcriptional regulator of the des gene. The regulation responds to temperature; when there is a drop in temperature, the des gene is upregulated. Unsaturated fatty acids increase the fluidity of the membrane and stabilize it at lower temperatures. DesK is the sensor protein that autophosphorylates when the temperature drops. DesK-P then transfers its phosphoryl group to DesR. Two DesR-P proteins dimerize, bind to the DNA promoters of the des gene, and recruit RNA polymerase to begin transcription.
Odd-chain fatty acids
Odd-chain fatty acids (OCFAs) are fatty acids that contain an odd number of carbon atoms. The most common OCFAs are the saturated C15 and C17 derivatives, pentadecanoic acid and heptadecanoic acid respectively. Even-chain fatty acids are synthesized by assembling acetyl-CoA precursors; for long-chain fatty acids with an odd number of carbon atoms, propionyl-CoA instead of acetyl-CoA is used as the primer.
Pseudomonas aeruginosa
In general, anaerobic and aerobic unsaturated fatty acid synthesis do not occur within the same system; however, Pseudomonas aeruginosa and Vibrio ABE-1 are exceptions.
While P. aeruginosa undergoes primarily anaerobic desaturation, it also possesses two aerobic pathways. One pathway utilizes a Δ9-desaturase (DesA) that catalyzes double bond formation in membrane lipids. Another pathway uses two proteins, DesC and DesB, together to act as a Δ9-desaturase, which inserts a double bond into a saturated fatty acid-CoA molecule. This second pathway is regulated by the repressor protein DesT. DesT also represses fabAB expression for anaerobic desaturation in the presence of exogenous unsaturated fatty acids. This functions to coordinate the expression of the two pathways within the organism.
Branched-chain fatty acids
Branched-chain fatty acids are usually saturated and are found in two distinct families: the iso-series and anteiso-series. It has been found that Actinomycetales contain unique branched-chain fatty acid synthesis mechanisms, including that which forms tuberculostearic acid.
Branch-chain fatty acid synthesizing system
The branched-chain fatty acid synthesizing system uses α-keto acids as primers. This system is distinct from the branched-chain fatty acid synthetase that utilizes short-chain acyl-CoA esters as primers. α-Keto acid primers are derived from the transamination and decarboxylation of valine, leucine, and isoleucine to form 2-methylpropanyl-CoA, 3-methylbutyryl-CoA, and 2-methylbutyryl-CoA, respectively. 2-Methylpropanyl-CoA primers derived from valine are elongated to produce even-numbered iso-series fatty acids such as 14-methyl-pentadecanoic (isopalmitic) acid, and 3-methylbutyryl-CoA primers from leucine may be used to form odd-numbered iso-series fatty acids such as 13-methyl-tetradecanoic acid. 2-Methylbutyryl-CoA primers from isoleucine are elongated to form anteiso-series fatty acids containing an odd number of carbon atoms such as 12-Methyl tetradecanoic acid. Decarboxylation of the primer precursors occurs through the branched-chain α-keto acid decarboxylase (BCKA) enzyme. Elongation of the fatty acid follows the same biosynthetic pathway in Escherichia coli used to produce straight-chain fatty acids where malonyl-CoA is used as a chain extender. The major end products are 12–17 carbon branched-chain fatty acids and their composition tends to be uniform and characteristic for many bacterial species.
BCKA decarboxylase and relative activities of α-keto acid substrates
The BCKA decarboxylase enzyme is composed of two subunits in a tetrameric structure (A2B2) and is essential for the synthesis of branched-chain fatty acids. It is responsible for the decarboxylation of α-keto acids formed by the transamination of valine, leucine, and isoleucine and produces the primers used for branched-chain fatty acid synthesis. The activity of this enzyme is much higher with branched-chain α-keto acid substrates than with straight-chain substrates, and in Bacillus species its specificity is highest for the isoleucine-derived α-keto-β-methylvaleric acid, followed by α-ketoisocaproate and α-ketoisovalerate. The enzyme's high affinity toward branched-chain α-keto acids allows it to function as the primer donating system for branched-chain fatty acid synthetase.
Factors affecting chain length and pattern distribution
α-Keto acid primers are used to produce branched-chain fatty acids that, in general, are between 12 and 17 carbons in length. The proportions of these branched-chain fatty acids tend to be uniform and consistent among a particular bacterial species but may be altered due to changes in malonyl-CoA concentration, temperature, or heat-stable factors (HSF) present. All of these factors may affect chain length, and HSFs have been demonstrated to alter the specificity of BCKA decarboxylase for a particular α-keto acid substrate, thus shifting the ratio of branched-chain fatty acids produced. An increase in malonyl-CoA concentration has been shown to result in a larger proportion of C17 fatty acids produced, up until the optimal concentration (≈20 μM) of malonyl-CoA is reached. Decreased temperatures also tend to shift the fatty acid distribution slightly toward C17 fatty acids in Bacillus species.
Branch-chain fatty acid synthase
This system functions similarly to the branched-chain fatty acid synthesizing system; however, it uses short-chain carboxylic acids as primers instead of α-keto acids. In general, this method is used by bacteria that do not have the ability to perform the branched-chain fatty acid system using α-keto primers. Typical short-chain primers include isovalerate, isobutyrate, and 2-methylbutyrate. In general, the acids needed for these primers are taken up from the environment; this is often seen in ruminal bacteria.
The overall reaction is:
Isobutyryl-CoA + 6 malonyl-CoA + 12 NADPH + 12 H+ → isopalmitic acid + 6 CO2 + 12 NADP+ + 5 H2O + 7 CoA
The difference between (straight-chain) fatty acid synthase and branched-chain fatty acid synthase is the substrate specificity of the enzyme that catalyzes the reaction of acyl-CoA to acyl-ACP.
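As a quick sanity check of the stoichiometry above (an added illustration, not source material): each of the six malonyl-CoA units contributes two chain carbons and releases one carbon as CO2, so the carbons must balance.

primer_carbons = 4                                    # isobutyryl-CoA acyl group (C4)
malonyl_units = 6
carbons_in = primer_carbons + 3 * malonyl_units       # 4 + 18 = 22
product_carbons = primer_carbons + 2 * malonyl_units  # isopalmitic acid, C16
co2_released = malonyl_units                          # one CO2 per condensation
assert carbons_in == product_carbons + co2_released   # 22 == 16 + 6
print(product_carbons, co2_released)                  # 16 6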
Omega-alicyclic fatty acids
Omega-alicyclic fatty acids typically contain an omega-terminal propyl or butyryl cyclic group and are some of the major membrane fatty acids found in several species of bacteria. The fatty acid synthetase used to produce omega-alicyclic fatty acids is also used to produce membrane branched-chain fatty acids. In bacteria with membranes composed mainly of omega-alicyclic fatty acids, the supply of cyclic carboxylic acid-CoA esters is much greater than that of branched-chain primers. The synthesis of cyclic primers is not well understood, but it has been suggested that the mechanism involves the conversion of sugars to shikimic acid, which is then converted to cyclohexylcarboxylic acid-CoA esters that serve as primers for omega-alicyclic fatty acid synthesis.
Tuberculostearic acid synthesis
Tuberculostearic acid (D-10-methylstearic acid) is a saturated fatty acid that is known to be produced by Mycobacterium spp. and two species of Streptomyces. It is formed from the precursor oleic acid (a monounsaturated fatty acid). After oleic acid is esterified to a phospholipid, S-adenosyl-methionine donates a methyl group to the double bond of oleic acid. This methylation reaction forms the intermediate 10-methylene-octadecanoyl. Successive reduction of the residue, with NADPH as a cofactor, results in 10-methylstearic acid.
Mitochondrial fatty acid synthesis
In addition to fatty acid synthesis in the cytosol, mitochondria have their own fatty acid synthesis pathway (mtFASII). Mitochondrial fatty acid synthesis is essential for cellular respiration and mitochondrial biogenesis. A role as a mediator in intracellular signal transduction has also been proposed, as the levels of bioactive lipids, such as lysophospholipids and sphingolipids, correlate with mtFASII activity.
In the first step of mtFASII, malonyl-CoA is formed from malonic acid by ACSF3. This occurs in tandem with a mitochondrial isoform of ACC1 (mtACC1), which can also provide malonyl-CoA from acetyl-CoA. Fatty acids such as octanoyl-ACP (C8), which forms the starting substrate of lipoic acid biosynthesis, are formed via further intermediate steps and chain extensions. Through lipoic acid as a cofactor, and thus the degree of lipoylation, mtFASII influences mitochondrial enzyme complexes in energy metabolism, such as the pyruvate dehydrogenase complex, the α-ketoglutarate dehydrogenase complex, the BCKDH complex and the glycine cleavage system (GCS), among others.
Diseases
Disorders in mtFASII lead to the following metabolic diseases:
ACSF3: Combined malonic and methylmalonic aciduria (CMAMMA)
MCAT: Malonyl-CoA-acyl carrier protein transacylase (MCAT) deficiency
MECR: Mitochondrial enoyl-CoA reductase protein-associated neurodegeneration (MEPAN)
See also
Essential fatty acid
Fatty acid metabolism
Fatty acid synthase
ThYme (database) (2010)
Footnote
References
External links
Overview at Rensselaer Polytechnic Institute
Overview at Indiana State University
Biochemical reactions
Biosynthesis
Fatty acids
Lipid metabolism | Fatty acid synthesis | [
"Chemistry",
"Biology"
] | 5,175 | [
"Lipid biochemistry",
"Biochemical reactions",
"Biosynthesis",
"Chemical synthesis",
"Biochemistry",
"Lipid metabolism",
"Metabolism"
] |
9,568,581 | https://en.wikipedia.org/wiki/Fatty%20acid%20degradation | Fatty acid degradation is the process in which fatty acids are broken down into their metabolites, in the end generating acetyl-CoA, the entry molecule for the citric acid cycle, the main energy supply of living organisms, including bacteria and animals. It includes three major steps:
Lipolysis of and release from adipose tissue
Activation and transport into mitochondria
β-oxidation
Lipolysis and release
Initially in the process of degradation, fatty acids are stored as fat in adipocytes. The breakdown of this fat is known as lipolysis. The products of lipolysis, free fatty acids, are released into the bloodstream and circulate throughout the body. During the breakdown of triacylglycerols into fatty acids, more than 75% of the fatty acids are converted back into triacylglycerol, a natural mechanism to conserve energy that operates even in cases of starvation and exercise.
Activation and transport into mitochondria
Fatty acids must be activated before they can be carried into the mitochondria, where fatty acid oxidation occurs. This process occurs in two steps catalyzed by the enzyme fatty acyl-CoA synthetase.
Formation of an activated thioester bond
The enzyme first catalyzes a nucleophilic attack by the fatty acid carboxylate on the α-phosphate of ATP to form pyrophosphate and an acyl chain linked to AMP. The next step is formation of an activated thioester bond between the fatty acyl chain and coenzyme A.
The balanced equation for the above is:
RCOO− + CoASH + ATP → RCO-SCoA + AMP + PPi
This two-step reaction is freely reversible and its equilibrium constant lies near 1. To drive the reaction forward, it is coupled to a strongly exergonic hydrolysis reaction: the enzyme inorganic pyrophosphatase cleaves the pyrophosphate liberated from ATP into two phosphate ions, consuming one water molecule in the process. Thus the net reaction becomes:
RCOO− + CoASH + ATP + H2O → RCO-SCoA + AMP + 2 Pi
Transport into the mitochondrial matrix
The inner mitochondrial membrane is impermeable to fatty acids and a specialized carnitine carrier system operates to transport activated fatty acids from cytosol to mitochondria.
Once activated, the acyl CoA is transported into the mitochondrial matrix. This occurs via a series of similar steps:
Acyl CoA is conjugated to carnitine by carnitine acyltransferase I (carnitine palmitoyltransferase I), located on the outer mitochondrial membrane
Acyl carnitine is shuttled inside by a translocase
Acyl carnitine (such as palmitoylcarnitine) is converted back to acyl CoA by carnitine acyltransferase II (carnitine palmitoyltransferase II), located on the inner mitochondrial membrane. The liberated carnitine returns to the cytosol.
Carnitine acyltransferase I undergoes allosteric inhibition as a result of malonyl-CoA, an intermediate in fatty acid biosynthesis, in order to prevent futile cycling between beta-oxidation and fatty acid synthesis.
The mitochondrial oxidation of fatty acids takes place in three major steps:
β-oxidation occurs to convert fatty acids into 2-carbon acetyl-CoA units.
Acetyl-CoA enters the TCA cycle to generate reduced NADH and FADH2.
The reduced cofactors NADH and FADH2 then participate in the electron transport chain in the mitochondria to yield ATP. There is no direct participation of the fatty acid in this final step.
β-oxidation
After activation by ATP, once inside the mitochondria, the β-oxidation of a fatty acid occurs via four recurring steps:
Oxidation by FAD
Hydration
Oxidation by NAD+
Thiolysis
Production of acyl-CoA and acetyl-CoA
The final product of β-oxidation of an even-numbered fatty acid is acetyl-CoA, the entry molecule for the citric acid cycle. If the fatty acid is an odd-numbered chain, the final product of β-oxidation will be propionyl-CoA. This propionyl-CoA will be converted into intermediate methylmalonyl-CoA and eventually succinyl-CoA, which also enters the TCA cycle.
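The energy arithmetic implied by these steps can be illustrated with a short Python sketch. It assumes the commonly quoted textbook yields of 2.5 ATP per NADH, 1.5 ATP per FADH2, 10 ATP per acetyl-CoA through the TCA cycle, and a flat activation cost of 2 ATP equivalents (ATP to AMP + 2 Pi); actual values depend on the P/O ratios assumed:

def atp_yield_even_saturated(n_carbons):
    cycles = n_carbons // 2 - 1        # beta-oxidation passes
    acetyl_coa = n_carbons // 2
    atp_beta = cycles * (1.5 + 2.5)    # 1 FADH2 + 1 NADH per pass
    atp_tca = acetyl_coa * 10          # 3 NADH + 1 FADH2 + 1 GTP per acetyl-CoA
    return atp_beta + atp_tca - 2      # minus activation cost

print(atp_yield_even_saturated(16))    # palmitate (C16): 106.0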
See also
Reverse cholesterol transport
References
Metabolism
Fatty acids | Fatty acid degradation | [
"Chemistry",
"Biology"
] | 903 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
9,569,014 | https://en.wikipedia.org/wiki/Passano%20Foundation | The Passano Foundation, established in 1945, provides an annual award to a research scientist whose work – done in the United States – is thought to have immediate practical benefits. Many Passano laureates have subsequently won the Nobel Prize.
Selection of award winners
Passano Laureates
2024 K. Christopher Garcia
2023 Se-Jin Lee
2022 Duojia Pan
2021 Alfred Goldberg
2020 David Eisenberg
2019 Robert Fettiplace, James Hudspeth
2018 Carl June, Michel Sadelain
2017 Yuan Chang, Patrick S. Moore
2016 , Helen Hobbs
2015 James P. Allison (2018 Nobel Prize in Physiology or Medicine)
2014 Jeffrey I. Gordon
2013 Rudolf Jaenisch
2012 Eric N. Olson
2011 Elaine Fuchs
2010 David Julius (2021 Nobel Prize in Physiology or Medicine)
2009 Irving Weissman
2008 Thomas Südhof (2013 Nobel Prize in Physiology or Medicine)
2007 Joan Massagué Solé
2006 Napoleone Ferrara
2005 Jeffrey M. Friedman
2003 Andrew Z. Fire (2006 Nobel Prize in Physiology or Medicine)
2002 Alexander Rich
2001 Seymour Benzer
2000 Giuseppe Attardi, Douglas C. Wallace
1999 Elizabeth Blackburn (2009 Nobel Prize in Physiology or Medicine), Carol W. Greider (2009 Nobel Prize in Physiology or Medicine)
1998 H. Robert Horvitz (2002 Nobel Prize in Physiology or Medicine)
1997 James E. Darnell, Jr.
1996 Leland H. Hartwell (2001 Nobel Prize in Physiology or Medicine)
1995 Robert G. Roeder, Robert Tjian
1994 Bert Vogelstein
1993 Jack L. Strominger, Don Craig Wiley
1992 Charles Yanofsky
1991 William S. Sly, Stuart Kornfeld
1990 Alfred Goodman Gilman (1994 Nobel Prize in Physiology or Medicine)
1989 Victor Almon McKusick
1988 Edwin Gerhard Krebs (1992 Nobel Prize in Physiology or Medicine), Edmond Henri Fischer (1992 Nobel Prize in Physiology or Medicine)
1987 Irwin Fridovich
1986 Albert L. Lehninger, Eugene P. Kennedy
1985 Howard Green
1984 Peter Nowell
1983 John Michael Bishop (1989 Nobel Prize in Physiology or Medicine), Harold Elliot Varmus (1989 Nobel Prize in Physiology or Medicine)
1982 Roscoe O. Brady, Elizabeth F. Neufeld
1981 Hugh McDevitt
1980 Seymour S. Kety
1979 Donald F. Steiner
1978 Michael Stuart Brown (1985 Nobel Prize in Physiology or Medicine), Joseph L. Goldstein (1985 Nobel Prize in Physiology or Medicine)
1977 Curt P. Richter
1976 Roger Charles Louis Guillemin (1977 Nobel Prize in Physiology or Medicine)
1975 Henry G. Kunkel
1974 Seymour S. Cohen, Baruch Samuel Blumberg (1976 Nobel Prize in Physiology or Medicine)
1973 Roger Sperry (1981 Nobel Prize in Physiology or Medicine)
1972 Kimishige Ishizaka, Teruko Ishizaka
1971 Stephen W. Kuffler
1970 Paul Zamecnik
1969 George Herbert Hitchings (1988 Nobel Prize in Physiology or Medicine)
1968
1967 Irvine Page
1966 John T. Edsall
1965 Charles Brenton Huggins (1966 Nobel Prize in Physiology or Medicine)
1964 Keith R. Porter, George Emil Palade (1974 Nobel Prize in Physiology or Medicine)
1963 Horace Winchell Magoun
1962 Albert Hewett Coons
1961 Owen Harding Wangensteen
1960 René Dubos
1959 Stanhope Bayne-Jones
1958 George W. Corner
1957
1956 George Nicolas Papanicolaou
1955 Vincent du Vigneaud (1955 Nobel Prize in Chemistry)
1954 Homer Smith
1953 John Franklin Enders (1954 Nobel Prize in Physiology or Medicine)
1952 Herbert M. Evans
1951 Philip Levine, Alexander Solomon Wiener
1950 Edward Calvin Kendall (1950 Nobel Prize in Physiology or Medicine), Philip Showalter Hench (1950 Nobel Prize in Physiology or Medicine)
1949 Oswald Avery
1948 Alfred Blalock, Helen Brooke Taussig
1947 Selman Abraham Waksman (1952 Nobel Prize in Physiology or Medicine)
1946 Ernest W. Goodpasture
1945 Edwin J. Cohn
Young Scientist Award
1992 Tom Curran
1991 Roger Tsien (2008 Nobel Prize in Chemistry)
1990 Matthew P. Scott
1989 Louis M. Kunkel
1988 Peter Walter
1987 Jeremy Nathans
1986 James Rothman (2013 Nobel Prize in Physiology or Medicine)
1985 Mark M. Davis
1984 Thomas R. Cech (1989 Nobel Prize in Chemistry)
1983 Gerald M. Rubin, Allan C. Spradling
1982 Roger D. Kornberg (2006 Nobel Prize in Chemistry)
1981 William A. Catterall, Joel M. Moss
1979 Richard Axel (2004 Nobel Prize in Physiology or Medicine)
1978 Robert Lefkowitz (2012 Nobel Prize in Chemistry)
1977
1976
1975 Joan A. Steitz
References
External links
Passano Foundation Home Page
Biomedical research foundations
Science and technology awards
Organizations established in 1945
Medical and health foundations in the United States | Passano Foundation | [
"Technology",
"Engineering",
"Biology"
] | 956 | [
"Science and technology awards",
"Biotechnology organizations",
"Biomedical research foundations"
] |
9,569,377 | https://en.wikipedia.org/wiki/Birkeland%E2%80%93Eyde%20process | The Birkeland–Eyde process was one of the competing industrial processes in the beginning of nitrogen-based fertilizer production. It is a multi-step nitrogen fixation reaction that uses electrical arcs to react atmospheric nitrogen (N2) with oxygen (O2), ultimately producing nitric acid (HNO3) with water. The resultant nitric acid was then used as a source of nitrate (NO3−) in the reaction HNO3 + H2O -> H3O+ + NO3- which may take place in the presence of water or another proton acceptor.
It was developed by Norwegian industrialist and scientist Kristian Birkeland along with his business partner Sam Eyde in 1903, based on a method used by Henry Cavendish in 1784. A factory based on the process was built in Rjukan and Notodden in Norway, combined with the building of large hydroelectric power facilities.
The Birkeland–Eyde process is relatively inefficient in terms of energy consumption. Therefore, in the 1910s and 1920s, it was gradually replaced in Norway by a combination of the Haber process and the Ostwald process. The Haber process produces ammonia (NH3) from molecular nitrogen (N2) and hydrogen (H2), the latter usually but not necessarily produced by steam reforming methane (CH4) gas in current practice. The ammonia from the Haber process is then converted into nitric acid (HNO3) in the Ostwald process.
The process
An electrical arc was formed between two coaxial water-cooled copper tube electrodes powered by a high voltage alternating current of 5 kV at 50 Hz. A strong static magnetic field generated by a nearby electromagnet spreads the arc into a thin disc by the Lorentz force. This setup is based on an experiment by Julius Plücker who in 1861 showed how to create a disc of sparks by placing the ends of a U-shaped electromagnet around a spark gap so that the gap between them was perpendicular to the gap between the electrodes, and which was later replicated similarly by Walther Nernst and others. The plasma temperature in the disc was in excess of 3000 °C. Air was blown through this arc, causing some of the nitrogen to react with oxygen forming nitric oxide. By carefully controlling the energy of the arc and the velocity of the air stream, yields of up to approximately 4–5% nitric oxide were obtained at 3000 °C and less at lower temperatures. The process is extremely energy intensive. Birkeland used a nearby hydroelectric power station for the electricity as this process demanded about 15 MWh per ton of nitric acid, yielding approximately 60 g per kWh. The same reaction is carried out by lightning, providing a natural source for converting atmospheric nitrogen to soluble nitrates.
N2 + O2 -> 2NO
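The quoted energy figures are mutually consistent, as a one-line check shows (the 15 MWh per ton figure is taken from the text above; the arithmetic below is only a consistency check):

energy_per_tonne_kwh = 15_000                    # 15 MWh per ton of nitric acid
grams_per_tonne = 1_000_000
print(grams_per_tonne / energy_per_tonne_kwh)    # ~66.7 g per kWh, roughly the quoted 60 g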
The hot nitric oxide is cooled and combines with atmospheric oxygen to produce nitrogen dioxide. The time this process takes depends on the concentration of NO in the air. At 1% it takes about 180 seconds and at 6% about 40 seconds to achieve 90% conversion.
2 NO + O2 -> 2 NO2
This nitrogen dioxide is then dissolved in water to give rise to nitric acid, which is then purified and concentrated by fractional distillation.
3 NO2 + H2O -> 2 HNO3 + NO
The design of the absorption process was critical to the efficiency of the whole system. The nitrogen dioxide was absorbed into water in a series of packed column or plate column absorption towers each four stories tall to produce approximately 40–50% nitric acid. The first towers bubbled the nitrogen dioxide through water and non-reactive quartz fragments. Once the first tower reached final concentration, the nitric acid was moved to a granite storage container, and liquid from the next water tower replaced it. That movement process continued to the last water tower which was replenished with fresh water. About 20% of the produced oxides of nitrogen remained unreacted so the final towers contained an alkaline solution of lime to convert the remaining oxides to calcium nitrate (also known as Norwegian saltpeter) except approximately 2% which were released into the air.
References
Name reactions
Chemical processes
Science and technology in Norway
Norwegian inventions | Birkeland–Eyde process | [
"Chemistry"
] | 874 | [
"Chemical process engineering",
"Name reactions",
"Chemical processes",
"nan"
] |
9,569,430 | https://en.wikipedia.org/wiki/Normetanephrine | Normetanephrine, also called normetadrenaline, is a metabolite of norepinephrine created by action of catechol-O-methyl transferase on norepinephrine. It is excreted in the urine and found in certain tissues. It is a marker for catecholamine-secreting tumors such as pheochromocytoma.
References
Phenol ethers
Phenylethanolamines
Tumor markers | Normetanephrine | [
"Chemistry",
"Biology"
] | 99 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
9,569,479 | https://en.wikipedia.org/wiki/Maclaurin%27s%20inequality | In mathematics, Maclaurin's inequality, named after Colin Maclaurin, is a refinement of the inequality of arithmetic and geometric means.
Let $a_1, a_2, \ldots, a_n$ be non-negative real numbers, and for $k = 1, 2, \ldots, n$ define the averages $S_k$ as follows:
$$S_k = \frac{\sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} a_{i_1} a_{i_2} \cdots a_{i_k}}{\binom{n}{k}}.$$
The numerator of this fraction is the elementary symmetric polynomial of degree $k$ in the $n$ variables $a_1, \ldots, a_n$, that is, the sum of all products of $k$ of the numbers $a_1, \ldots, a_n$ with the indices in increasing order. The denominator is the number of terms in the numerator, the binomial coefficient $\binom{n}{k}$.
Maclaurin's inequality is the following chain of inequalities:
$$S_1 \ge \sqrt{S_2} \ge \sqrt[3]{S_3} \ge \cdots \ge \sqrt[n]{S_n},$$
with equality if and only if all the $a_i$ are equal.
For $n = 2$, this gives the usual inequality of arithmetic and geometric means of two non-negative numbers. Maclaurin's inequality is well illustrated by the case $n = 4$:
$$\frac{a_1+a_2+a_3+a_4}{4} \ge \sqrt{\frac{a_1a_2+a_1a_3+a_1a_4+a_2a_3+a_2a_4+a_3a_4}{6}} \ge \sqrt[3]{\frac{a_1a_2a_3+a_1a_2a_4+a_1a_3a_4+a_2a_3a_4}{4}} \ge \sqrt[4]{a_1a_2a_3a_4}.$$
Maclaurin's inequality can be proved using Newton's inequalities or generalised Bernoulli's inequality.
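The chain of inequalities is easy to verify numerically for any particular tuple; the following Python sketch (illustrative only) computes the k-th roots of the averages S_k and checks that they are non-increasing:

from itertools import combinations
from math import comb, prod

def maclaurin_means(a):
    n = len(a)
    # k-th root of S_k, the normalized elementary symmetric mean of degree k
    return [(sum(prod(c) for c in combinations(a, k)) / comb(n, k)) ** (1.0 / k)
            for k in range(1, n + 1)]

means = maclaurin_means([1.0, 2.0, 4.0, 8.0])
print(means)  # e.g. 3.75 >= 3.416... >= 3.107... >= 2.828...
assert all(x >= y - 1e-12 for x, y in zip(means, means[1:]))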
See also
Newton's inequalities
Muirhead's inequality
Generalized mean inequality
Bernoulli's inequality
References
Real analysis
Inequalities
Symmetric functions | Maclaurin's inequality | [
"Physics",
"Mathematics"
] | 225 | [
"Algebra",
"Binary relations",
"Symmetric functions",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems",
"Symmetry"
] |
9,569,619 | https://en.wikipedia.org/wiki/Vapor%20quality | In thermodynamics, vapor quality is the mass fraction in a saturated mixture that is vapor; in other words, saturated vapor has a "quality" of 100%, and saturated liquid has a "quality" of 0%. Vapor quality is an intensive property which can be used in conjunction with other independent intensive properties to specify the thermodynamic state of the working fluid of a thermodynamic system. It has no meaning for substances which are not saturated mixtures (for example, compressed liquids or superheated fluids).
Vapor quality is an important quantity during the adiabatic expansion step in various thermodynamic cycles (like Organic Rankine cycle, Rankine cycle, etc.). Working fluids can be classified by using the appearance of droplets in the vapor during the expansion step.
Quality can be calculated by dividing the mass of the vapor by the mass of the total mixture:
$$x = \frac{m_\text{vapor}}{m_\text{total}},$$
where $m$ indicates mass.
Another definition, used in chemical engineering, defines quality ($q$) of a fluid as the fraction that is saturated liquid. By this definition, a saturated liquid has $q = 1$. A saturated vapor has $q = 0$.
An alternative definition is the 'equilibrium thermodynamic quality'. It can be used only for single-component mixtures (e.g. water with steam), and can take values < 0 (for sub-cooled fluids) and > 1 (for super-heated vapors):
$$\chi_\text{eq} = \frac{h - h_f}{h_{fg}},$$
where $h$ is the mixture specific enthalpy, defined as:
$$h = (1 - \chi)\, h_f + \chi\, h_g.$$
Subscripts $f$ and $g$ refer to saturated liquid and saturated gas respectively, and $fg$ refers to vaporization.
Calculation
The above expression for vapor quality can be written more generally as:
$$x = \frac{y - y_f}{y_g - y_f},$$
where $y$ is equal to either specific enthalpy, specific entropy, specific volume or specific internal energy of the substance in the dome zone (the two-phase region in which both liquid and vapor coexist), $y_f$ is the value of the specific property for the saturated liquid state, and $y_g$ is the value for the saturated vapor state.
Another expression of the same concept is:
$$x = \frac{m_v}{m_v + m_l},$$
where $m_v$ is the vapor mass and $m_l$ is the liquid mass.
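As an illustration, the enthalpy-based formula can be evaluated with rounded steam-table values at atmospheric pressure (h_f ≈ 419 kJ/kg, h_g ≈ 2676 kJ/kg; these are approximate figures assumed here, not values from the text):

h_f, h_g = 419.0, 2676.0        # kJ/kg for water at ~100 C, 1 atm (rounded)
h_fg = h_g - h_f                # enthalpy of vaporization, ~2257 kJ/kg

def quality_from_enthalpy(h):
    return (h - h_f) / h_fg

print(quality_from_enthalpy(419.0))     # 0.0 -> saturated liquid
print(quality_from_enthalpy(1547.5))    # 0.5 -> half the mass is vapor
print(quality_from_enthalpy(2676.0))    # 1.0 -> saturated vapor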
Steam quality and work
The origin of the idea of vapor quality was derived from the origins of thermodynamics, where an important application was the steam engine. Low quality steam would contain a high moisture percentage and therefore damage components more easily. High quality steam would not corrode the steam engine. Steam engines use water vapor (steam) to push pistons or turbines, and that movement creates work. The quantitatively described steam quality (steam dryness) is the proportion of saturated steam in a saturated water/steam mixture. In other words, a steam quality of 0 indicates 100% liquid, while a steam quality of 1 (or 100%) indicates 100% steam.
The quality of steam on which steam whistles are blown is variable and may affect frequency. Steam quality determines the velocity of sound, which declines with decreasing dryness due to the inertia of the liquid phase. Also, the specific volume of steam for a given temperature decreases with decreasing dryness.
Steam quality is very useful in determining enthalpy of saturated water/steam mixtures, since the enthalpy of steam (gaseous state) is many orders of magnitude higher than the enthalpy of water (liquid state).
References
Water
Liquid water
Water in gas
Steam power
Physical quantities | Vapor quality | [
"Physics",
"Mathematics",
"Environmental_science"
] | 660 | [
"Physical phenomena",
"Hydrology",
"Physical quantities",
"Quantity",
"Steam power",
"Power (physics)",
"Water",
"Physical properties"
] |
9,571,502 | https://en.wikipedia.org/wiki/Thomas%20Condon | Thomas Condon (1822–1907) was an Irish Congregational minister, geologist, and paleontologist who gained recognition for his work in the U.S. state of Oregon.
Life and career
Condon arrived in New York City from Ireland in 1833 and graduated from theological seminary in 1852, after which he traveled to Oregon by ship. As a minister at The Dalles, he became interested in the fossils he found in the area. He found fossil seashells on the Crooked River and fossil camels and other animals along the John Day River. Many of his discoveries were in the present-day John Day Fossil Beds National Monument. He corresponded with noted scientists, including Spencer Baird of the Smithsonian, Edward Cope of the Academy of Natural Sciences, Joseph Leidy, O.C. Marsh, and John C. Merriam, and provided specimens to major museums.
Condon was appointed the first State Geologist for Oregon in 1872. He resigned that post to become first professor of geology at the University of Oregon. Previously he was a teacher at Pacific University in Forest Grove.
In The Two Islands and What Came of Them, a geology book published in 1902, Condon wrote about two widely separated regions of Oregon that contain its oldest rocks, the Klamath Mountains in the southwestern part of the state and the Blue Mountains in the northeast. The book attempted to summarize what was then known about the state's geology and to draw conclusions about its geologic past.
Condon was an advocate of theistic evolution. He has been described as a "Christian Darwinist".
Legacy
Condon Hall at the University of Oregon, which originally housed the geology department, was named for Condon, as were the Thomas Condon Paleontology Center at the Sheep Rock unit of the John Day Fossil Beds National Monument, near Kimberly, Oregon, temporary Lake Condon, formed periodically by the Missoula Floods, and the Condon Fossil Collection of the University of Oregon Museum of Natural and Cultural History, which was founded by Condon in 1876. Condon Elementary School (1950-1983) in Eugene still stands as the University of Oregon's Agate Hall. He is the namesake of Condon Butte in Lane County. Condon, Oregon, was named for Harvey C. Condon, a nephew of Thomas Condon.
Anser condoni is a synonym for the fossil swan Cygnus paloregonus.
See also
Thomas Condon: Portrait of Condon (1989)
References
Works cited
Clark, Robert D. The Odyssey of Thomas Condon (1989). Portland, Oregon: The Oregon Historical Society Press. .
External links
Dr. Thomas Condon from the Oregon Historical Society
Thomas Condon profile from the Oregon Cultural Heritage Commission
Thomas Condon biography from the National Park Service
Thomas Condon: Of Faith and Fossils Documentary produced by Oregon Public Broadcasting
Irish emigrants to the United States
Oregon pioneers
19th-century American geologists
1822 births
1907 deaths
Pacific University faculty
People from The Dalles, Oregon
University of Oregon faculty
Deaths from influenza in the United States
Infectious disease deaths in Oregon
People from Fermoy
Theistic evolutionists
Christian clergy from County Cork
Scientists from County Cork | Thomas Condon | [
"Biology"
] | 636 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
9,571,778 | https://en.wikipedia.org/wiki/Plant%20geneticist | A plant geneticist is a scientist involved with the study of genetics in botany. Typical work is done with genes in order to isolate and then develop certain plant traits. Once a certain trait, such as plant height, fruit sweetness, or tolerance to cold, is found, a plant geneticist works to improve breeding methods to ensure that future plant generations possess the desired traits.
Plant genetics played a key role in the modern-day theories of heredity, beginning with Gregor Mendel's study of pea plants in the 19th century. The occupation has since grown to encompass advancements in biotechnology that have led to greater understanding of plant breeding and hybridization. Commercially, plant geneticists are sometimes employed to develop methods of making produce more nutritious, or altering plant pigments to make the food more enticing to consumers.
References
National Science Teachers Association: Plant Geneticist Interview
USDA Agriculture Research Service
Geneticist
"Biology"
] | 186 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Plant genetics"
] |
9,572,150 | https://en.wikipedia.org/wiki/Mode%20control%20panel | In aviation, the mode control panel (MCP) is an instrument panel that controls an advanced autopilot and related systems such as an automated flight-director system (AFDS).
The MCP contains controls that allow the crew of the aircraft to select which parts of the aircraft's flight are to be controlled automatically. In modern MCPs, there are many different modes of automation available. The MCP can be used to instruct the autopilot to hold a specific altitude, to change altitudes at a specific rate, to hold a specific heading, to turn to a new heading, to follow the directions of a flight management computer (FMC), and so on. The MCP is actually independent of the autopilot—it simply sets the mode in which the autopilot operates, but the autopilot itself (e.g., an AFDS) is a separate aircraft system. The MCP often interacts with both the AFDS or autopilot and the FMC(s).
MCPs are usually found in advanced aircraft intended for commercial use, especially jet airliners. They are often mounted on the glare shield, a small panel that overhangs the main instrument panel of the aircraft and also functions as a shield against outside glare.
See also
Electronic flight instrument system
Aircraft controls
Avionics | Mode control panel | [
"Technology"
] | 272 | [
"Avionics",
"Aircraft instruments"
] |
9,572,607 | https://en.wikipedia.org/wiki/Dugald%20Macpherson | H. Dugald Macpherson is a mathematician and logician. He is Professor of Pure Mathematics at the University of Leeds.
He obtained his DPhil from the University of Oxford in 1983 for his thesis entitled "Enumeration of Orbits of Infinite Permutation Groups" under the supervision of Peter Cameron. In 1997, he was awarded the Junior Berwick Prize by the London Mathematical Society. He continues to research into permutation groups and model theory. He is scientist in charge of the MODNET team at the University of Leeds. He co-authored the book Notes on Infinite Permutation Groups.
References
External links
Prof. Macpherson's homepage
Year of birth missing (living people)
20th-century British mathematicians
21st-century British mathematicians
Living people
Alumni of the University of Oxford
Academics of the University of Leeds
Model theorists
Place of birth missing (living people) | Dugald Macpherson | [
"Mathematics"
] | 178 | [
"Model theorists",
"Model theory"
] |
9,572,821 | https://en.wikipedia.org/wiki/Decrepitation | Decrepitation is the noise produced when certain chemical compounds are heated, or it refers to the cracking, or breaking up of lumps of limestone during heating. Such compounds include lead nitrate and calcine.
Mineralogy
Decrepitation is one of the most accurate ways to estimate the scale of a mineral deposit, advancing and improving the analysis of hydrothermal systems. Fluid inclusions are important in regard to decrepitation because they are the microscopic pockets of gas and liquid within crystals that are decrepitated, or broken, by the application of heat.
When a crystal or salt is decrepitated, the fluid pressure is released, which can result in a crack. However, in some cases the fluid inclusions are not fully decrepitated, and other methods must be used. Despite this shortcoming, decrepitation is the preferred procedure for identifying minerals because it allows the largest number of inclusions to be measured in the shortest time.
The pressure necessary to spur decrepitation depends on the size of the fluid inclusions: bigger inclusions decrepitate more easily, at pressures between 700-900 atmospheres, while smaller fluid inclusions may require upwards of 1200 atmospheres. When fluid inclusions become smaller still, the amount of pressure has no effect and decrepitation will not occur.
Decrepitation in metamorphic rock
If the decrepitation begins at a temperature less than the temperature required to form the mineral, it is likely that the rate of decrepitation will speed up once the temperature exceeds that of the initial heating.
For metamorphic rocks, there are certain principles for measuring decrepitations. What is known as D1 decrepitation occurs over a temperature range of about 200-300°C and is caused by the liquid phase occupying intricate inclusions, as in hydrothermal minerals. D2 decrepitation is characterized by a starting temperature of about 300-700°C, after which the rate can increase rapidly for a few hundred degrees, as in solid inclusions. In D3 decrepitation, the rate rises under continuous heating until it reaches its maximum at about 350-450°C; D3 decrepitation can be observed in carbonates and is defined by the effect of an inversion of the mineral. Once decrepitation of a D4 mineral begins, it should reach completion within a few degrees, as seen in the decrepitation of quartz. Decrepitation as a result of decomposition is known as D5 decrepitation; it is characterized by a sharply rising rate, a definite peak, and a sharply falling rate, and can be detected by comparing the peaks of various minerals within a rock.
References
Chemical processes | Decrepitation | [
"Chemistry"
] | 571 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
John D. Harris is a computer programmer, hacker and author of several 1980s Atari computer games. His impact on the early years of the video game industry are chronicled in the book Hackers: Heroes of the Computer Revolution.
His love for the Atari 8-bit computers led him to create several popular games, perhaps most notably Frogger, which by the end of development had been written from scratch twice. The reason for this is that his entire back catalogue of development tools and libraries was stolen at a game developer conference at which he was presenting. The delay in finishing the game also led to complications between Harris and his employer, Ken Williams (director of Sierra On-Line).
During his time at Sierra, Harris became one of the most influential young developers in America; at 24 years of age he was earning a six-figure income from royalties on games that Sierra marketed for him. As time went on, his increasingly strained relationship with Sierra worsened; the cutting of royalties and the lack of recognition for his work soon became the catalyst that led him to leave the company for Synapse (despite many offers of employment from the new startup Electronic Arts).
Works
Atari 8-bit
Jawbreaker, Sierra On-Line, 1981
Frogger, Sierra On-Line, 1982
Mouskattack
Maneuvering
Bankster
MAE
Atari 2600
Jawbreaker, Tiger Vision
AmigaDE
Gobbler
Solitaire
Employment
Pulsar Interactive Corp., 1997–2003
Tachyon Studios, Inc.
Atari
Synapse
Sierra On-Line
References
Interview with John Harris regarding Hackers and his career and views on game development
Interview with John Harris regarding developing for AmigaDE, July 2002
Year of birth missing (living people)
Living people
American video game designers
American video game programmers | John Harris (software developer) | [
"Technology"
] | 369 | [
"Computing stubs",
"Computer specialist stubs"
] |
9,575,078 | https://en.wikipedia.org/wiki/Hollow%20Moon | The Hollow Moon and the closely related Spaceship Moon are pseudoscientific hypotheses that propose that Earth's Moon is either wholly hollow or otherwise contains a substantial interior space. No scientific evidence exists to support the idea; seismic observations and other data collected since spacecraft began to orbit or land on the Moon indicate that it has a solid, differentiated interior, with a thin crust, extensive mantle, and a dense core which is significantly smaller (in relative terms) than Earth's.
While Hollow Moon hypotheses usually propose the hollow space as the result of natural processes, the related Spaceship Moon hypothesis holds that the Moon is an artifact created by an alien civilization; this belief usually coincides with beliefs in UFOs or ancient astronauts. This idea dates from 1970, when two Soviet authors published a short piece in the popular press speculating that the Moon might be "the creation of alien intelligence"; since then, it has occasionally been endorsed by conspiracy theorists like Jim Marrs and David Icke.
An at least partially hollow Moon has made many appearances in science fiction, the earliest being H. G. Wells' 1901 novel The First Men in the Moon, which borrowed from earlier works set in a Hollow Earth, such as Ludvig Holberg's 1741 novel Niels Klim's Underground Travels.
Both the Hollow Moon and Hollow Earth theories are now universally considered to be fringe or conspiracy theories.
Claims and rebuttals
Density
The fact that the Moon is less dense than the Earth is advanced by conspiracy theorists as support for claims of a hollow Moon. The Moon's mean density is 3.3 g/cm3, whereas the Earth's is 5.5 g/cm3. Mainstream science argues this difference arises because the Earth's overall density is raised by its heavy iron core, whereas the Moon's core is proportionally much smaller; most of the Moon's volume has a density similar to that of the Earth's less dense upper mantle and crust.
The Moon rang like a bell
Between 1969 and 1977, seismometers installed on the Moon by the Apollo missions recorded moonquakes. The Moon was described as "ringing like a bell" during some of those quakes, specifically the shallow ones. This phrase was brought to popular attention in March 1970 in an article in Popular Science.
On November 20, 1969, Apollo 12 deliberately crashed the Ascent Stage of its Lunar Module onto the Moon's surface; NASA reported that the Moon rang 'like a bell' for almost an hour, leading to arguments that it must be hollow like a bell. Lunar seismology experiments since then have shown that the lunar body has shallow moonquakes that act differently from quakes on Earth, due to differences in texture, type and density of the planetary strata, but there is no evidence of any large empty space inside the body.
Vasin-Shcherbakov "spaceship" conjecture
In 1970, Michael Vasin and Alexander Shcherbakov, of the Soviet Academy of Sciences, advanced a hypothesis that the Moon is a spaceship created by unknown beings. The article was entitled "Is the Moon the Creation of Alien Intelligence?" and was published in Sputnik, the Soviet equivalent of Reader's Digest. The Vasin-Shcerbakov hypothesis was reported in the West that same year.
The authors reference earlier speculation by astrophysicist Iosif Shklovsky, who suggested that the Martian moon Phobos was an artificial satellite and hollow; this has since been shown not to be the case. Skeptical author Jason Colavito points out that all of their evidence is circumstantial, and that, in the 1960s, the atheistic Soviet Union promoted the ancient astronaut concept in an attempt to undermine the West's faith in religion.
"Perfect" solar eclipses
In 1965, author Isaac Asimov observed: "What makes a total eclipse so remarkable is the sheer astronomical accident that the Moon fits so snugly over the Sun. The Moon is just large enough to cover the Sun completely (at times) so that a temporary night falls and the stars spring out. [...] The Sun's greater distance makes up for its greater size and the result is that the Moon and the Sun appear to be equal in size. [...] There is no astronomical reason why Moon and Sun should fit so well. It is the sheerest of coincidence, and only the Earth among all the planets is blessed in this fashion."
Since the 1970s, conspiracy theorists have cited Asimov's observations on solar eclipses as evidence of the Moon's artificiality. Mainstream astronomers reject this interpretation. They note that the angular diameters of Sun and Moon vary by several percent over time and do not actually "perfectly" match during eclipses. Nor is Earth the only planet with such a satellite: Saturn's moon Prometheus has roughly the same angular diameter as the Sun when viewed from Saturn.
Some scholars have claimed that "the conditions required for perfect solar eclipses are the same conditions generally acknowledged to be necessary for intelligent life to emerge"; If so, the Moon's size and orbit might be best explained by the weak anthropic principle.
Scientific perspective
Multiple lines of evidence demonstrate that the Moon is a solid body which formed from an impact between Earth and a planetoid.
Origin of the Moon
Historically, it was theorized that the Moon originated when a rapidly-spinning Earth expelled a piece of its mass. This was proposed by George Darwin (son of the famous biologist Charles Darwin) in 1879 and retained some popularity until Apollo. The Austrian geologist Otto Ampferer in 1925 also suggested the emergence of the Moon as a cause of continental drift. A second hypothesis argued that the Earth and the Moon formed together as a double system from the primordial accretion disk of the Solar System. Finally, a third hypothesis suggested that the Moon may have been a planetoid captured by Earth's gravity.
The modern explanation for the origin of the Moon is usually the giant-impact hypothesis, which argues a Mars-sized body struck the Earth, making a debris ring that eventually collected into a single natural satellite, the Moon. The giant-impact hypothesis is currently the favored scientific hypothesis for the formation of the Moon.
Internal structure
Multiple lines of evidence disprove that the Moon is hollow. One involves moment of inertia parameters; the other involves seismic observations. The moment of inertia parameters indicate that the core of the Moon is both dense and small, with the rest of the Moon consisting of material with nearly-constant density. Seismic observations have been made, constraining the thickness of the Moon's crust, mantle and core, demonstrating it could not be hollow.
Mainstream scientific opinion on the internal structure of the Moon overwhelmingly supports a solid internal structure with a thin crust, an extensive mantle and a small denser core.
Moment of inertia factor
The moment of inertia factor is a number, ranging from 0 to 0.67, that represents the distribution of mass in a spherical body. A moment of inertia factor of 0 represents a body with all its mass concentrated at its central core, while a factor of 0.67 represents a perfectly hollow sphere. A moment of inertia factor of 0.4 corresponds to a sphere of uniform density, while factors less than 0.4 represent bodies with cores that are more dense than their surfaces. The Earth, with its dense inner core, has a moment of inertia factor of 0.3307.
In 1965, astronomer Wallace John Eckert attempted to calculate the lunar moment of inertia factor using a novel analysis of the Moon's perigee and node. His calculations suggested the Moon might be hollow, a result Eckert rejected as absurd. By 1968, other methods had allowed the Moon's moment of inertia factor to be accurately calculated at its accepted value.
From 1969 to 1973, five retroreflectors were installed on the Moon during the Apollo program (11, 14, and 15) and Lunokhod 1 and 2 missions. These reflectors made it possible to measure the distance between the surfaces of the Earth and the Moon using extremely precise laser ranging. True (physical) libration of the Moon measured via Lunar laser ranging constrains the moment of inertia factor to 0.394 ± 0.002. This is very close to the value for a solid object with radially constant density, which would be 0.4.
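The idealized cases are straightforward to compute; this sketch compares the uniform-sphere and thin-shell factors with the measured lunar value quoted above:

def moi_factor_uniform_sphere():
    return 2.0 / 5.0        # I = (2/5) M R^2 -> factor 0.4

def moi_factor_thin_hollow_shell():
    return 2.0 / 3.0        # I = (2/3) M R^2 -> factor ~0.667

measured_moon = 0.394       # from lunar laser ranging
print(moi_factor_uniform_sphere(), moi_factor_thin_hollow_shell(), measured_moon)
# The measured value sits next to the uniform-density case, far from the hollow-shell case.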
Seismic activity
From 1969 through 1972, Apollo astronauts installed several seismographic measuring systems on the Moon, and their data were made available to scientists (such as those from the Apollo Lunar Surface Experiments Package). The Apollo 11 instrument functioned through August of the landing year. The instruments placed by the Apollo 12, 14, 15, and 16 missions were functional until they were switched off in 1977.
The existence of moonquakes was an unexpected discovery from seismometers. Analysis of lunar seismic data has helped constrain the thickness of the crust (~45 km) and mantle, as well as the core radius (~330 km).
Doppler Gravity Experiment
In 1998, the United States launched the Lunar Prospector, which hosted the Doppler Gravity Experiment (DGE), the first polar, low-altitude mapping of the lunar gravity field. The data obtained by the Prospector DGE constituted the "first truly operational gravity map of the Moon". The purpose of the Lunar Prospector DGE was to learn about the surface and internal mass distribution of the Moon. This was accomplished by measuring the Doppler shift in the S-band tracking signal as it reaches Earth, which can be converted to spacecraft accelerations. The accelerations can be processed to provide estimates of the lunar gravity field. Estimates of the surface and internal mass distribution give information on the crust, lithosphere, and internal structure of the Moon.
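To give a sense of the measurement, the line-of-sight velocity corresponding to a Doppler shift Δf on a carrier of frequency f is v = c·Δf/f. The 2.3 GHz carrier below is a typical S-band value assumed for illustration, not a documented Lunar Prospector parameter:

C = 299_792_458.0                  # speed of light, m/s

def velocity_from_doppler(delta_f_hz, carrier_hz=2.3e9):
    return C * delta_f_hz / carrier_hz

print(velocity_from_doppler(1.0))  # a 1 Hz shift corresponds to ~0.13 m/s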
In popular culture
Fiction
H.G. Wells, The First Men in The Moon (1901). Wells describes fictional insectoids who live inside a hollow Moon.
Edgar Rice Burroughs, The Moon Maid (1926). A fantasy story set in the interior of a postulated hollow Moon which had an atmosphere and was inhabited.
Nikolay Nosov, Dunno on the Moon (1965). A Russian fairytale novel with a hollow Moon.
Isaac Asimov, Foundation and Earth (1986). Science fiction in which robot R. Daneel Olivaw is depicted living inside a partially hollow Moon.
David Weber, Mutineers' Moon (1991). Science fiction in which the Moon is a giant spaceship, which arrived 50,000 years ago.
Moonfall (2022). Science fiction film portraying the Moon as a Dyson sphere enclosing a white dwarf.
Conspiracy theory
Don Wilson, Our Mysterious Spaceship Moon (1975) and Secrets of Our Spaceship Moon (1979), inspired by Vasin-Shcherbakov, Wilson popularized the Spaceship Moon hypothesis.
George H. Leonard, Somebody Else Is On The Moon (1976) Argues the Moon is inhabited by an Alien race, but NASA has covered up this fact.
Fred Steckling, We Discovered Alien Bases on the Moon (1981)
Jim Marrs Alien Agenda (1997) Long-time JFK conspiracy theorist Marrs embraced the Spaceship Moon conspiracy theory
Christopher Knight & Alan Butler, Who Built the Moon? (2005). They suggest humans from the future traveled into the past to build the Moon in order to safeguard human evolution.
David Icke, Human Race Get off Your Knees – The Lion Sleeps No More (2010). Icke suggests that the Moon is in fact a space station from which Reptilians manipulate human thought.
References
Moon myths
Obsolete scientific theories
Pseudoscience
Science fiction themes
Moon
Space and astronomy conspiracy theories | Hollow Moon | [
"Astronomy",
"Technology"
] | 2,365 | [
"Space and astronomy conspiracy theories",
"Astronomical myths",
"Moon myths",
"Science and technology-related conspiracy theories"
] |
9,575,139 | https://en.wikipedia.org/wiki/Sturdee%27s%20pipistrelle | Sturdee's pipistrelle (Pipistrellus sturdeei), also known as the Bonin pipistrelle, is an extinct species of bat that was endemic to Japan.
Description
Pipistrellus sturdeei was thought to have existed solely on Haha-jima Island in the Bonin Islands, Japan, where the only known specimen was discovered. More recent scholarship, though, casts doubt on the single specimen's origin and taxonomy. The species' former population size is unknown because only one specimen was ever collected; it is currently housed in the Natural History Museum, London. No record of Sturdee's pipistrelle has been observed since 1889.
References
Pipistrellus
Mammals described in 1915
Taxa named by Oldfield Thomas
Bats of Asia
Endemic mammals of Japan
Natural history of the Bonin Islands
Extinct animals of Japan
Species known from a single specimen
Mammal extinctions since 1500 | Sturdee's pipistrelle | [
"Biology"
] | 189 | [
"Individual organisms",
"Species known from a single specimen"
] |
12,033,440 | https://en.wikipedia.org/wiki/Pyrolysis%E2%80%93gas%20chromatography%E2%80%93mass%20spectrometry | Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
How it works
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application, even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: isothermal furnace, inductive heating (Curie-point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest bonds, producing smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as a fingerprint to prove material identity, or the GC/MS data can be used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the use of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside programmable temperature vaporizer (PTV) injectors that provide quick heating (up to 60 °C/s) and high maximum temperatures of 600-650 °C. This is sufficient for many pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case quartz GC inlet liners can be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector have been published as well.
Applications
Pyrolysis gas chromatography is useful for the identification of involatile compounds. These materials include polymeric materials, such as acrylics or alkyds. The way in which the polymer fragments, before it is separated in the GC, can help in identification. Pyrolysis gas chromatography is also used for environmental samples, including fossils. Pyrolysis GC is used in forensic laboratories to analyze evidence found in crime scenes such as paints, adhesives, plastics, synthetic fibres and soil extracts.
References
Mass spectrometry | Pyrolysis–gas chromatography–mass spectrometry | [
"Physics",
"Chemistry"
] | 530 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
12,034,118 | https://en.wikipedia.org/wiki/Cambridge%20IT%20Skills%20Diploma | The Cambridge IT Skills Diploma is a certificate that is based on the Microsoft Office software, this certificate assesses a range of the most important IT skills required and is available at two levels: Foundation and Standard.
Exam methodology
These online examinations are offered at two levels from which the candidate can choose. Both Standard and Foundation assessments are computer-based and available on demand throughout the year, providing a high-quality and flexible assessment service for individuals and centers.
Diploma modules
The program's modules cover the following topics:
Introduction to IT.
PC Usage and Managing files.
Word Processing, Spreadsheets, Presentations and Databases using Microsoft Office.
Electronic Communication using Microsoft Internet Explorer.
Diploma types
The name of the certificate awarded to the successful candidate is the “Cambridge International Diploma in IT Skills”.
There are four types of Diploma:
Single-Module Diploma: requires any one of the seven application modules.
Four-Module Diploma: requires Introduction to IT, Windows, Word and Electronic communication.
Five-Module Diploma: requires Windows, Word, Excel, (Access or PowerPoint) and Internet communication.
Seven-Module Diploma: requires Introduction to IT, Windows, Microsoft Office and Internet communication.
Recognition and accreditation
Reflecting the standing of the Cambridge Diploma in IT Skills, many professional bodies and international organizations have given their support, ranging from official approval of the Diploma to requiring it of their employees.
The Cambridge Diploma in IT Skills is recognized by the governments of Jordan, Kuwait, the Kingdom of Bahrain, Lebanon and the United Arab Emirates (UAE), as well as by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the United Nations Relief and Works Agency (UNRWA).
Abu-Ghazaleh Cambridge IT SkillCenter is the exclusive center in the Middle East that provides the Cambridge IT Skills Diploma in Arabic.
References
CIE.org
terabyteit.com
ameinfo.com
External links
Cambridge International Diploma in Information Technology
Abu-Ghazaleh Cambridge Information Technology Skills Center
Information technology qualifications
IT Skills Diploma | Cambridge IT Skills Diploma | [
"Technology"
] | 419 | [
"Computer occupations",
"Information technology qualifications"
] |
12,034,210 | https://en.wikipedia.org/wiki/Tokaimura%20nuclear%20accidents | The Tokaimura nuclear accidents refer to two nuclear related incidents near the village of Tōkai, Ibaraki Prefecture, Japan. The first accident occurred on 11 March 1997, producing an explosion after an experimental batch of solidified nuclear waste caught fire at the Power Reactor and Nuclear Fuel Development Corporation (PNC) radioactive waste bituminisation facility. Over twenty people were exposed to radiation.
The second was a criticality accident at a separate fuel reprocessing facility belonging to Japan Nuclear Fuel Conversion Co. (JCO) on 30 September 1999, due to improper handling of liquid uranium fuel for an experimental reactor. The incident spanned approximately 20 hours and resulted in radiation exposure for 667 people and the deaths of two workers; all three technicians directly involved were hospitalized.
It was determined that the accidents were due to inadequate regulatory oversight, lack of appropriate safety culture and inadequate worker training and qualification. After these two accidents, a series of lawsuits were filed and new safety measures were put into effect.
By March 2000, Japan's atomic and nuclear commissions had begun regular investigations of facilities, expansive education regarding proper procedures, and safety-culture training for handling nuclear chemicals and waste. JCO's credentials were removed, making it the first Japanese plant operator to be punished by law for mishandling nuclear radiation. This was followed by the company president's resignation and six officials being charged with professional negligence.
Background
Nuclear power was an important energy alternative for natural-resource-poor Japan to limit dependence on imported energy, providing about 30% of Japan's electricity up until the Fukushima nuclear disaster of 2011, after which nuclear electricity production fell into sharp decline.
Tōkai's location (about 70 miles from Tokyo) and available land space made it ideal for nuclear power production, so a series of experimental nuclear reactors and then the Tōkai Nuclear Power Plant – the country's first commercial nuclear power station – were built here. Over time, dozens of companies and government institutes were established nearby to provide nuclear research, experimentation, manufacturing, and fuel fabrication, enrichment and disposal facilities. Nearly one-third of Tōkai's population rely upon nuclear industry-related employment.
The JCO fuel fabrication plant, the site of the 1999 accident, was built in 1988 and processed 3 tonnes of uranium per year. The uranium that was processed was enriched up to 20% U-235, a higher enrichment level than normal, using a wet process.
1997 nuclear waste accident
On 11 March 1997, Tōkai's first serious nuclear incident occurred at PNC's bituminization facility. It is sometimes called the , 'Dōnen' being an abbreviation of PNC's full Japanese name Dōryokuro Kakunenryō Kaihatsu Jigyōdan. The site encased and solidified low-level liquid waste in molten asphalt (bitumen) for storage, and that day was trialing a new asphalt-waste mix, using 20% less asphalt than normal. A gradual chemical reaction inside one fresh barrel ignited the already-hot contents at 10:00 a.m. and quickly spread to several others nearby. Workers failed to properly extinguish the fire, and smoke and radiation alarms forced all personnel to evacuate the building. At 8 p.m., just as people were preparing to reenter the building, built-up flammable gases ignited and exploded, breaking windows and doors, which allowed smoke and radiation to escape into the surrounding area.
The incident exposed 37 nearby personnel to trace amounts of radiation in what the government's Science and Technology Agency declared the country's worst-yet nuclear accident, which was rated a 3 on the International Nuclear Event Scale. A week after the event, meteorological officials detected unusually high levels of cesium 40 km (25 miles) southwest of the plant. Aerial views over the nuclear processing plant building showed a damaged roof from the fire and explosion allowing continued external radiation exposure.
PNC management ordered two workers to falsely report the chronological events leading to the facility evacuation in order to cover up the lack of proper supervision. Dōnen leadership failed to immediately report the fire to the Science and Technology Agency (STA); the delay, caused by the company's own internal investigation of the fire, hampered immediate emergency response and prolonged radioactivity exposure. Dōnen facility officials initially reported a 20% increase in radiation levels in the area surrounding the reprocessing plant, but later revealed the true level was ten times higher than initially published. Tōkai residents demanded criminal prosecution of PNC officials, reorganization of company leadership and closure of the plant itself. Following public outcry, the facility closed until November 2000, when it was reinstated as a nuclear fuel reprocessing plant.
Later, Prime Minister Ryutaro Hashimoto criticized the delay that allowed radiation to continue to impact local areas.
1999 accident
The second, more serious Tōkai nuclear accident () occurred about four miles away from the PNC facility on 30 September 1999, at a fuel enrichment plant operated by JCO, a subsidiary of Sumitomo Metal Mining Company. It was the worst civilian nuclear radiation accident in Japan prior to Fukushima (2011). The incident exposed the surrounding population to hazardous radiation after the uranium mixture reached criticality. Two of the three technicians mixing fuel were killed. The incident was caused by lack of regulatory supervision, inadequate safety culture and improper technician training and education.
The first cause that contributed to the accident was the lack of regulatory oversight. The facility had no criticality accident alarm installed and was not included in the National Plan for the Prevention of Nuclear Disasters. Lacking such safety technology, the plant had to rely on administrative controls to keep track of fissile material levels, which left room for human error. In addition, the regulator did not conduct routine inspections that would have caught this lack of safety technology.
The second cause of the accident was the inadequate safety culture in Japan. The company did not submit its modified operation of nuclear facilities to the safety management division because it knew it would not be approved. A company spokesman explained that revenue was falling and management felt it had no choice but to open a new operation; knowing it would not be approved, they proceeded without telling the safety management division.
The JCO facility converted uranium hexafluoride into enriched uranium dioxide fuel. This served as the first step in producing fuel rods for Japan's power plants and research reactors. Enriching nuclear fuel requires precision and has the potential to impose extreme risks on technicians. If done improperly, the process of combining nuclear products can produce a fission reaction which, in turn, produces radiation. In order to enrich the uranium fuel, a specific chemical purification procedure is required. The steps include feeding small batches of uranium oxide powder into a designated dissolving tank in order to produce uranyl nitrate using nitric acid. Next, the mixture is carefully transported to a specially-crafted buffer tank. The buffer tank containing the combined ingredients is specially designed to prevent fission activity from reaching criticality. In a precipitation tank, ammonia is added, forming a solid product. This tank is meant to capture any remaining nuclear waste contaminants. In the final step, uranium oxide is placed in the dissolving tanks until purified, without enriching the isotopes, in a wet-process technology in which Japan specialized.
Pressure placed upon JCO to increase efficiency led the company to employ an illegal procedure where they skipped several key steps in the enrichment procedure. The technicians poured the product by hand in stainless-steel buckets directly into a precipitation tank. This process inadvertently contributed to a critical mass level incident triggering uncontrolled nuclear chain reactions over the next several hours.
Victim report
Two of the workers were working on the tank at the time of the accident; the third was in a nearby room. All three immediately reported seeing blue-white flashes. They evacuated immediately upon hearing the gamma alarms sound. After evacuating, one of the workers that was at the tank began experiencing symptoms of irradiation. The worker passed out, then regained consciousness 70 minutes later. The three workers were then transferred to the hospital, which confirmed that they were exposed to high doses of gamma, neutron, and other radiation.
Besides these three workers, who immediately felt symptoms, 56 other people at the JCO plant were reported to have been exposed to gamma, neutron, and other radiation, as were construction workers at a nearby job site.
Nuclear criticality event chronology
JCO facility technicians Hisashi Ouchi, Masato Shinohara, and Yutaka Yokokawa were speeding up the last few steps of the fuel/conversion process to meet shipping requirements. It was JCO's first batch of fuel for the Jōyō experimental fast breeder reactor in three years; no proper qualification and training requirements were established to prepare for the process. To save processing time, and for convenience, the team mixed the chemicals in stainless-steel buckets. The workers followed JCO operating manual guidance in this process but were unaware it was not approved by the STA. Under correct operating procedure, uranyl nitrate would be stored inside a buffer tank and gradually pumped into the precipitation tank in increments.
At around 10:35, the precipitation tank reached criticality when its contents, about 16 kg of uranium in solution, exceeded critical mass. The hazardous level was reached after the technicians added a seventh bucket containing aqueous uranyl nitrate, enriched to 18.8% U-235, to the tank. The solution added to the tank was almost seven times the legal mass limit specified by the STA.
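A back-of-the-envelope check of the "almost seven times" figure follows; the roughly 16 kg inventory is the figure given above, while the 2.4 kg legal limit for the tank is an assumed value consistent with that ratio, so treat this purely as illustrative arithmetic:

```python
# Illustrative arithmetic only. The ~16 kg inventory is the figure given in
# the text; the 2.4 kg STA mass limit for the tank is an assumption chosen
# to be consistent with the "almost seven times" statement.
uranium_in_tank_kg = 16.0    # approximate uranium mass in the tank at criticality
legal_mass_limit_kg = 2.4    # assumed STA mass limit for the precipitation tank

ratio = uranium_in_tank_kg / legal_mass_limit_kg
print(f"tank held {ratio:.1f} times the legal mass limit")  # ~6.7, "almost seven times"
```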
The nuclear fuel conversion standards specified in the 1996 JCO Operating Manual dictated the proper procedures regarding dissolution of uranium oxide powder in a designated dissolution tank. The buffer tank's tall, narrow geometry was designed to hold the solution safely and to prevent criticality. In contrast, the precipitation tank had not been designed to hold unlimited quantities of this type of solution; its wide cylindrical shape made it favorable to criticality. The workers bypassed the buffer tanks entirely, opting to pour the uranyl nitrate directly into the precipitation tank. Uncontrolled nuclear fission (a self-sustaining chain reaction) began immediately, emitting intense gamma and neutron radiation. At the time of the event, Ouchi had his body draped over the tank while Shinohara stood on a platform to assist in pouring the solution. Yokokawa was sitting at a desk four metres away. All three technicians observed a blue flash (possibly Cherenkov radiation) and gamma radiation alarms sounded. Over the next several hours the fission reaction continued in the tank as a sustained chain reaction.
Ouchi and Shinohara immediately experienced pain, nausea, and difficulty breathing; both workers went to the decontamination room, where Ouchi vomited. Ouchi received the largest radiation exposure, resulting in rapid difficulties with mobility, coherence, and loss of consciousness. At the point of criticality, large amounts of high-level gamma radiation set off alarms in the building, prompting the three technicians to evacuate. None of the three workers appreciated the severity of the accident or the reporting criteria. A worker in the next building became aware of the injured employees and contacted emergency medical assistance; an ambulance escorted them to the nearest hospital. The fission products contaminated the fuel reprocessing building and the area immediately outside the nuclear facility. Emergency service workers arrived and escorted other plant workers outside to the facility's muster zones.
The next morning, workers ended the chain reaction by draining water from the surrounding cooling jacket installed on the precipitation tank. The water served as a neutron reflector. A boric acid solution was added to the precipitation tank to reduce all contents to sub-critical levels; boron was selected for its neutron absorption properties.
Tōkaimura evacuation
By mid-afternoon, the plant workers and surrounding residents were asked to evacuate. Five hours after the start of criticality, evacuation began of some 161 people from 39 households within a 350-metre radius from the conversion building. Twelve hours after the incident, 300,000 surrounding residents of the nuclear facility were told to stay indoors and cease all agricultural production. This restriction was lifted the next afternoon. Almost 15 days later, the facility instituted protection methods with sandbags and other shielding to protect from residual gamma radiation.
Aftermath
Without an emergency plan or public communication from the JCO, confusion and panic followed the event. Authorities warned locals not to harvest crops or drink well water. To ease public concerns, officials began radiation testing of residents living within about 10 km of the facility. Over the next 10 days, about 10,000 medical check-ups were conducted. Dozens of emergency workers and residents who lived nearby were hospitalized and hundreds of thousands of others were forced to remain indoors for 24 hours. Testing confirmed 39 of the workers were exposed to the radiation. At least 667 workers, first-responders, and nearby residents were exposed to excess radiation as a result of the accident. Radioactive gas levels stayed high in the area even after the plant was sealed. Finally, on October 12, it was discovered that a roof ventilation fan had been left on, and it was shut down. Sometime after the incident, people in the area were asked to lend any gold they had to allow calculations of the size and range of the neutron burst, since gold is activated by neutron capture.
Ultimately the incident was classified as an "irradiation" rather than a "contamination" accident and rated Level 4 on the International Nuclear Event Scale, a determination that labeled the situation low-risk outside the facility. Workers in the facility were measured for radiation contamination. The three technicians had received doses far above the maximum allowable dose (50 mSv) for Japanese nuclear workers. Many employees of the company and local residents suffered accidental radiation exposure exceeding safe levels. Over fifty plant workers tested up to 23 mSv and local residents up to 15 mSv. The incident was fatal to the two technicians, Ouchi and Shinohara.
Environmental impact
STA and Ibaraki Prefecture began monitoring gamma radiation levels immediately after they were notified of the accident. They collected samples of tap water, well water and precipitation within 10 kilometres of the site. They also took samples of vegetation, sea water, dairy products and sea products for testing. They found low levels of radioactivity in some of the vegetation, but none in the dairy products, water or sea products.
Impact on technicians
According to the radiation testing by the STA, Ouchi was exposed to 17 Sv of radiation, Shinohara 10 Sv, and Yokokawa received 3 Sv. The two technicians who received the higher doses, Ouchi and Shinohara, died several months later.
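For a sense of scale, the following illustrative arithmetic compares these STA dose estimates with the 50 mSv maximum allowable dose for Japanese nuclear workers cited earlier in the article; no numbers beyond those in the text are assumed:

```python
# Illustrative arithmetic using only figures from the text: the STA dose
# estimates for the three technicians versus the 50 mSv maximum allowable
# dose for Japanese nuclear workers.
max_allowable_sv = 0.050  # 50 mSv expressed in sieverts

doses_sv = {"Ouchi": 17.0, "Shinohara": 10.0, "Yokokawa": 3.0}

for name, dose in doses_sv.items():
    multiple = dose / max_allowable_sv
    print(f"{name}: {dose} Sv, about {multiple:.0f} times the worker limit")
# Ouchi: ~340x, Shinohara: ~200x, Yokokawa: ~60x
```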
Hisashi Ouchi, 35, was transported and treated at the University of Tokyo Hospital for 83 days. Ouchi suffered serious radiation burns to most of his body, had severe damage to his internal organs, and had a near-zero white blood cell count. Without a functioning immune system, Ouchi was vulnerable to hospital-acquired infection and was placed in a special radiation ward to limit the risk of infection. A micrograph of his chromosomes showed that none of them were identifiable. Doctors tried to restore some functionality to Ouchi's immune system by administering peripheral blood stem cell transplantation, which at the time was a new form of treatment.
After receiving the transplant from his sister, Ouchi initially experienced a temporary rise in his white blood cell count, but he began to succumb to his other injuries soon thereafter. Many other interventions were conducted in an attempt to arrest further decline of his badly damaged body, including repeated use of cultured skin grafts and pharmacological interventions with painkillers, broad-spectrum antibiotics and granulocyte colony-stimulating factor, without any measurable success. Although small areas of Ouchi's skin and mucous membranes recovered with treatment, his overall condition continued to deteriorate, and the medical personnel caring for him privately doubted whether treatment should be continued, both because of its lack of effectiveness and out of concern for the pain Ouchi was experiencing.
Two months after the accident, Ouchi suffered cardiac arrest; although he was revived, he became unresponsive. At the wishes of his family, doctors continued to treat him, even though it had become clear that the radiation damage to his body was too extensive to be survivable. On 19 December, the doctors explained to his family the seriousness of his condition and suggested that Ouchi should not be resuscitated again, and the family agreed to a do-not-resuscitate order. His wife had hoped that Ouchi would at least survive until 1 January, the arrival of the 2000s, but his condition deteriorated into multiple organ failure, and he died on 21 December 1999 following another cardiac arrest.
Masato Shinohara, 40, was transported to the same facility, where he died on 27 April 2000 of multiple organ failure. He underwent radical cancer treatments, numerous successful skin grafts, and a transfusion of umbilical cord blood (to boost his stem cell count). Despite surviving for seven months, he was eventually unable to fight off radiation-exacerbated infections and internal bleeding, and succumbed to fatal lung and kidney failure.
Their supervisor, Yutaka Yokokawa, 54, received treatment from the National Institute of Radiological Sciences (NIRS) in Chiba, Japan. He was released three months later with minor radiation sickness. He faced negligence charges in October 2000.
Contributors to both accidents
According to the International Atomic Energy Agency, the causes of the accidents were "human error and serious breaches of safety principles". Several human errors caused the incident, including careless material handling procedures, inexperienced technicians, inadequate supervision and obsolete safety procedures on the operating floor. The company had not had any incidents for over 15 years, making company employees complacent in their daily responsibilities.
The 1999 incident resulted from poor management of operation manuals, failure to qualify technicians and engineers, and improper procedures associated with handling nuclear chemicals. The lack of communication between engineers and workers contributed to the failure to report the incident when it arose. Had the company corrected the errors after the 1997 incident, the 1999 incident would have been considerably less devastating or might not have happened at all.
Comments within the 2012 report by the National Diet of Japan's Fukushima Nuclear Accident Independent Investigation Commission, noting regulatory and nuclear-industry overconfidence and governance failures, may apply equally to the Tokaimura nuclear accidents.
Victim compensation and plant closure
Over 600 plant workers, firefighters, emergency personnel and local residents were exposed to radioactivity following the incident. In October 1999, JCO set up advisory booths to process compensation claims and inquiries from those affected. By July 2000, over 7,000 compensation claims had been filed and settled. In September 2000, JCO agreed to pay $121 million in compensation to settle 6,875 claims from people exposed to radiation and from affected agricultural and service businesses. All residents within 350 metres of the incident and those forced to evacuate received compensation if they agreed not to sue the company in the future.
In late March 2000, the STA cancelled JCO's credentials for operation, making JCO the first Japanese plant operator to be punished by law for mishandling nuclear radiation. This was followed by the company president's resignation. In October, six officials from JCO were charged with professional negligence stemming from failure to properly train technicians and knowingly subverting safety procedures.
Resulting legal suits
In April 2001, six employees, including the chief of the production department at the time, pleaded guilty to a charge of negligence resulting in death. Among those arrested was Yokokawa, for his failure to supervise proper procedures. The JCO president also pleaded guilty on behalf of the company. During the trial, the court heard that a 1995 JCO safety committee had approved the use of steel buckets in the procedure. Furthermore, a widely distributed but unauthorized 1996 manual recommended the use of buckets in making the solution. An STA report indicated JCO management had permitted these hazardous practices from 1993 onward to shortcut the conversion process, even though they were contrary to approved nuclear chemical handling procedures.
As a response to the incidents, special laws were put in place stipulating operational safety procedures and quarterly inspection requirements. These inspections focused on the proper conduct of workers and leadership. This change mandated both safety education and quality assurance of all facilities and activities associated with nuclear power generation. Starting in 2000, Japan's atomic and nuclear commissions began regular investigations of facilities, expansive education regarding proper procedures and safety culture regarding handling nuclear chemicals and waste.
Efforts to comply with emergency preparedness procedures and international guideline requirements continued. New systems, governing legislation, and institutions were put in place for handling similar incidents, in an effort to prevent further situations from occurring.
Japan imports 80% of its energy, so pressure to develop self-sufficient energy sources remains. In 2014, Japan's government established the "Strategic Energy Plan", naming nuclear power as an important source that can safely stabilize the country's energy supply and demand. The Tokaimura accidents also strengthened activist movements against nuclear power in Japan. To this day, tension remains between the country's need to generate power despite scarce natural resources and the safety of its population. Advocacy for victims of acute radiation sickness and for the eradication of nuclear-related incidents has led to several movements across the globe promoting human welfare and environmental conservation.
In popular culture
The 1999 accident is mentioned, along with a flashback scene of a hospital visit to Hisashi Ouchi, in the 2023 Japanese miniseries The Days, a dramatization of the Fukushima nuclear accident.
See also
Nuclear power in Japan
Fukushima Daiichi nuclear disaster
Rokkasho Reprocessing Plant, meant to be the successor to the Tokai Reprocessing Plant
Cecil Kelley criticality accident, a previous fatal accident in 1958 that also involved a solution of fissile material in a tank
References
External links
Nuclear Weapon Accident Response Procedures
Nuclear accidents and incidents
Nuclear reprocessing sites
Nuclear history of Japan
Industrial accidents and incidents in Japan
Heisei era
1997 in Japan
1997 industrial disasters
1999 in Japan
1999 industrial disasters
Tōkai, Ibaraki
1997 disasters in Japan
1999 disasters in Japan
March 1997 events in Japan
September 1999 events in Japan | Tokaimura nuclear accidents | [
"Chemistry"
] | 4,500 | [
"Nuclear accidents and incidents",
"Radioactivity"
] |
12,034,472 | https://en.wikipedia.org/wiki/Kiiti%20Morita | was a Japanese mathematician working in algebra and topology.
Morita was born in 1915 in Hamamatsu, Shizuoka Prefecture and graduated from the Tokyo Higher Normal School in 1936. Three years later he was appointed assistant at the Tokyo University of Science. He received his Ph.D. from Osaka University in 1950, with a thesis in topology. After teaching at the Tokyo Higher Normal School, he became professor in 1951 at the Tokyo University of Education, the predecessor of the University of Tsukuba. He held this position until 1978, after which he taught at Sophia University. Morita died of heart failure in 1995 at the Sakakibara Heart Institute in Tokyo; he was survived by his wife, Tomiko, his son, Yasuhiro, and a grandson.
He introduced the concepts now known as Morita equivalence and Morita duality which were given wide circulation in the 1960s by Hyman Bass in a series of lectures. The Morita conjectures on normal topological spaces are also named after him.
Publications
References
1915 births
1995 deaths
People from Hamamatsu
University of Tsukuba alumni
Osaka University alumni
20th-century Japanese mathematicians
Algebraists
Topologists
Academic staff of the University of Tsukuba
Academic staff of Sophia University | Kiiti Morita | [
"Mathematics"
] | 239 | [
"Topologists",
"Topology",
"Algebra",
"Algebraists"
] |
12,034,549 | https://en.wikipedia.org/wiki/Morita%20conjectures | The Morita conjectures in general topology are certain problems about normal spaces, now solved in the affirmative. The conjectures, formulated by Kiiti Morita in 1976, asked
If X × Y is normal for every normal space Y, is X a discrete space?
If X × Y is normal for every normal P-space Y, is X metrizable?
If X × Y is normal for every normal countably paracompact space Y, is X metrizable and sigma-locally compact?
The answers were believed to be affirmative. Here a normal P-space Y is characterised by the property that the product with every metrizable X is normal; thus the conjecture was that the converse holds.
Keiko Chiba, Teodor C. Przymusiński, and Mary Ellen Rudin proved conjecture (1) and showed that conjectures (2) and (3) cannot be proven false under the standard ZFC axioms for mathematics (specifically, that the conjectures hold under the axiom of constructibility V=L).
Fifteen years later, Zoltán Tibor Balogh succeeded in showing that conjectures (2) and (3) are true.
Notes
References
A.V. Arhangelskii, K.R. Goodearl, B. Huisgen-Zimmerman, Kiiti Morita 1915-1995, Notices of the AMS, June 1997
Topology
Conjectures that have been proved | Morita conjectures | [
"Physics",
"Mathematics"
] | 284 | [
"Mathematical theorems",
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Conjectures that have been proved",
"Spacetime",
"Mathematical problems"
] |
12,035,083 | https://en.wikipedia.org/wiki/Federal%20Signal%20Corporation | Federal Signal Corporation is an American manufacturer headquartered in Downers Grove, Illinois. Federal Signal manufactures street sweeper vehicles, public address systems, emergency vehicle equipment, and emergency vehicle lighting.
The company operates two groups: Federal Signal Environmental Solutions and Federal Signal Safety and Security Systems. Federal Signal Environmental Solutions Group manufactures street sweeper vehicles, sewer cleaner and vacuum loader trucks, hydro excavators, waterblasting equipment, dump truck bodies, and trailers. Federal Signal Safety and Security Systems Group manufactures campus alerting systems, emergency vehicle lighting, emergency sirens, alarm systems, outdoor warning sirens, and public address systems.
Currently, the company has 14 manufacturing facilities in 5 different countries.
History
Federal Signal was founded in Chicago, Illinois, as the Federal Electric Company in 1901 by brothers John and James Gilchrist and partner John Goehst, manufacturing and selling store signs lit by incandescent lamps. By 1915, they began manufacturing and selling electrically operated mechanical sirens (such as the Q Siren and the Model 66 Siren). During this time, Federal Electric came under the ownership of Commonwealth Edison, eventually becoming a part of the utility empire of Samuel Insull.
By the 1950s, the company was manufacturing outdoor warning sirens, most notably the Thunderbolt series, primarily intended for warning of air raid attacks or fallout during the Cold War. Many of these sirens have been removed, but some still are operating in tornado siren systems. Longtime engineer Earl Gosswiller patented the Beacon-Ray and TwinSonic products, which were popular emergency vehicle lightbars.
In 1955, the company became a corporation, renaming itself "Federal Sign and Signal Corporation". By this time, it made outdoor warning sirens, police sirens, fire alarms, and outdoor lighting.
By 1961, Federal Sign and Signal had gone public (its shares later traded on the NASDAQ market). Around this time new products such as the Federal Signal STH-10 entered production. In 1976, the company became Federal Signal Corporation.
On February 22, 2000, Federal Signal Corporation announced the signing of a definitive agreement for the acquisition of P.C.S. Company.
On June 27, 2005, Federal Signal Corporation announced the signing of a joint venture agreement to establish a Chinese company, Federal Signal (Shanghai) Environmental & Sanitary Vehicle Company Limited, based near Shanghai, China.
On February 29, 2016, Federal Signal announced the signing of a definitive agreement for the acquisition of Canada's largest infrastructure-maintenance equipment supplier, Joe Johnson Equipment, including the rights to the name and company.
On May 8, 2017, Federal Signal announced the acquisition of Truck Bodies and Equipment International (TBEI), making it the owner of six dump body and trailer brands, including Crysteel, Duraclass, Rugby Manufacturing, Ox Bodies, Travis and J-Craft.
On July 2, 2019, Federal Signal completed the acquisition of the assets and operations of Mark Rite Lines Equipment Company, Inc., a manufacturer of road-marking equipment, along with HighMark Traffic Services, Inc., which provides road-marking services in Montana. The signing of the purchase agreement had been announced on May 14, 2019.
On November 17, 2022, Federal Signal announced the signing of a definitive agreement to acquire substantially all the assets and operations of Blasters, Inc. (“Blasters”), a leading manufacturer of truck-mounted waterblasting equipment, for an initial purchase price of $14 million, subject to post-closing adjustments. In addition, there is a contingent earn-out payment of up to $8 million.
See also
Rumbler (siren)
Q2B
Federal Signal Model 2
References
Company History - Federal Signal Corp. (Funding Universe)
Companies listed on the New York Stock Exchange
Fire detection and alarm companies
Emergency population warning systems
Sirens
Oak Brook, Illinois
Companies based in DuPage County, Illinois
Emergency services equipment makers
Articles containing video clips | Federal Signal Corporation | [
"Technology"
] | 786 | [
"Warning systems",
"Emergency population warning systems"
] |
12,035,381 | https://en.wikipedia.org/wiki/Pub%20Design%20Awards | The Pub Design Awards (PDA) are an annual awards, established in 1983 and hosted by CAMRA in association with English Heritage and the Victorian Society, that are given to exceptional pubs in the UK that have been newly built/converted or have recently undergone building/conservation work.
Categories
The awards cover four categories:
New Build (For newly constructed pubs)
Conversion of existing buildings to pub use (For use of a building that had not previously been a pub)
Refurbishment of existing public houses (For recently refurbished pubs. The final interior must suit the era of the building and enhance the 'feel' of the pub)
The English Heritage Award for conservation (For pubs that have retained much of their original design and/or decor and have recently had conservation work to improve the overall condition of the interior.)
There is also a fifth award, given independently of the above categories and not open to direct entry: the Joe Goodwin Conservation Award, sometimes known as the Joe Goodwin Award for best 'Street-Corner Local'. It is chosen from all entries, not just winners of other categories.
Any pub can enter only one category; entries may come from anywhere in the British Isles. Anyone can enter a pub for the awards.
Entry
Photographs and a typed description (Max. 2 A4 sheets) of the interior and exterior and what work was completed are required for entry, as are drawings of the interior plan and location map of the pub.
To qualify, the work must have been finished between 1 January and 31 December of the year before, e.g. For 2007, the work on the pub must have been completed sometime in 2006.
Judging
Judging takes place initially by shortlisting, after which the shortlisted pubs are visited. The panel comprises members of the CAMRA Pubs Group along with conservation experts and architects from outside CAMRA.
Not every award is given every year (depending on the entries), but there are often mentions for highly commended pubs, and some years there are joint winners when there are a number of exceptional pubs in particular categories.
The winners receive a plaque which can be displayed permanently in the pub.
Winners
Where an entry reads None Awarded, no award was given in that category for that year, even though there may be 'Highly Commended' pubs in that category (which are not listed here). Where an entry reads None listed, there was no information about that particular award. Sources: CAMRA website - PDA winners list; What's Brewing (newspaper of CAMRA), April 2007 issue, article: Better pubs by design.
See also
National Pub of the Year
Heritage pub
List of public house topics
References
Pub Awards Page - CAMRA
Halifax & Calderdale CAMRA - PDA Info
Awards Entry Form (PDF)
What's Brewing (Newspaper of CAMRA), April 2007, Article: Better pubs by design.
Architecture awards
Design awards
British awards
Pubs in the United Kingdom
Interior design
Hospitality industry awards
Hospitality industry in the United Kingdom
Annual events in the United Kingdom
1983 establishments in the United Kingdom
Awards established in 1983 | Pub Design Awards | [
"Engineering"
] | 611 | [
"Design",
"Design awards"
] |
12,036,119 | https://en.wikipedia.org/wiki/Gromov%27s%20inequality%20for%20complex%20projective%20space | In Riemannian geometry, Gromov's optimal stable 2-systolic inequality is the inequality
$\operatorname{stsys}_2{}^{\,n} \le n!\,\operatorname{vol}_{2n}(\mathbb{CP}^n)$,
valid for an arbitrary Riemannian metric on the complex projective space, where the optimal bound is attained
by the symmetric Fubini–Study metric, providing a natural geometrisation of quantum mechanics. Here $\operatorname{stsys}_2$ is the stable 2-systole, which in this case can be defined as the infimum of the areas of rational 2-cycles representing the class of the complex projective line in 2-dimensional homology.
The inequality first appeared in as Theorem 4.36.
The proof of Gromov's inequality relies on the Wirtinger inequality for exterior 2-forms.
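As a sanity check on the boundary case, the following is a brief sketch of why the Fubini–Study metric attains the bound; it uses standard Kähler-geometry facts not spelled out in the article, so treat it as a hedged reconstruction rather than the article's own argument:

```latex
% Equality for the Fubini--Study metric g_{FS}. Write A for the area of a
% projective line CP^1 in (CP^n, g_{FS}); the line minimizes area in its
% homology class, so stsys_2(g_{FS}) = A. The Kaehler volume identity
% vol_{2n}(CP^n, g_{FS}) = A^n / n! then gives
\[
  \operatorname{stsys}_2(g_{FS})^{\,n}
  = A^{n}
  = n!\,\frac{A^{n}}{n!}
  = n!\,\operatorname{vol}_{2n}\!\bigl(\mathbb{CP}^{n}, g_{FS}\bigr),
\]
% so the Fubini--Study metric attains the bound, i.e. the inequality is sharp.
```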
Projective planes over division algebras
In the special case n = 2, Gromov's inequality becomes $\operatorname{stsys}_2{}^{\,2} \le 2\,\operatorname{vol}_4(\mathbb{CP}^2)$. This inequality can be thought of as an analog of Pu's inequality for the real projective plane $\mathbb{RP}^2$. In both cases, the boundary case of equality is attained by the symmetric metric of the projective plane. Meanwhile, in the quaternionic case, the symmetric metric on $\mathbb{HP}^2$ is not its systolically optimal metric. In other words, the manifold $\mathbb{HP}^2$ admits Riemannian metrics with a higher systolic ratio $\operatorname{stsys}_4{}^{\,2}/\operatorname{vol}_8$ than that of its symmetric metric.
See also
Loewner's torus inequality
Pu's inequality
Gromov's inequality (disambiguation)
Gromov's systolic inequality for essential manifolds
Systolic geometry
References
Geometric inequalities
Differential geometry
Riemannian geometry
Systolic geometry | Gromov's inequality for complex projective space | [
"Mathematics"
] | 314 | [
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry"
] |
12,037,102 | https://en.wikipedia.org/wiki/Dowker%20space | In the mathematical field of general topology, a Dowker space is a topological space that is T4 but not countably paracompact. They are named after Clifford Hugh Dowker.
The non-trivial task of providing an example of a Dowker space (and therefore also proving their existence as mathematical objects) helped mathematicians better understand the nature and variety of topological spaces.
Equivalences
Dowker showed, in 1951, the following:
If X is a normal T1 space (that is, a T4 space), then the following are equivalent:
X is a Dowker space
The product of X with the unit interval is not normal.
X is not countably metacompact.
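For readers unfamiliar with the covering properties named in this list, here is a short summary of the standard textbook definitions together with Dowker's characterization; the wording is supplied for convenience and is not taken from the article:

```latex
% Standard definitions (textbook conventions, not from the article):
% - X is countably paracompact if every countable open cover of X
%   has a locally finite open refinement.
% - X is countably metacompact if every countable open cover of X
%   has a point-finite open refinement.
% For normal spaces the two notions coincide, and Dowker's theorem reads:
\[
  X \times [0,1] \ \text{is normal}
  \iff
  X \ \text{is countably paracompact}
  \qquad (X \ \text{normal}).
\]
```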
Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until Mary Ellen Rudin constructed one in 1971. Rudin's counterexample is a very large space, of cardinality $\aleph_\omega^{\aleph_0}$. Zoltán Balogh gave the first ZFC construction of a small (cardinality continuum) example, which was better behaved than Rudin's. Using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of cardinality $\aleph_{\omega+1}$ that is also Dowker.
References
Properties of topological spaces
Separation axioms | Dowker space | [
"Mathematics"
] | 266 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
12,037,194 | https://en.wikipedia.org/wiki/Adrenosterone | Adrenosterone, also known as Reichstein's substance G , as well as 11-ketoandrostenedione (11-KA4), 11-oxoandrostenedione (11-OXO), and androst-4-ene-3,11,17-trione, is a steroid hormone with an extremely weak androgenic effect, and an intermediate/prohormone of 11-ketotestosterone. It was first isolated in 1936 from the adrenal cortex by Tadeus Reichstein at the Pharmaceutical Institute in the University of Basel. Originally, adrenosterone was called Reichstein's substance G. Adrenosterone occurs in trace amounts in humans as well as most mammals and in larger amounts in fish, where it is a precursor to the primary androgen, 11-ketotestosterone.
Adrenosterone has been sold as a dietary supplement since 2007, marketed for fat loss and muscle gain. It is thought to be a competitive, selective inhibitor of 11β-HSD1, the enzyme responsible for activating cortisol from cortisone; this inhibition is thought to reduce muscle breakdown and to account for most of its effects.
See also
11β-Hydroxyandrostenedione
11-Ketodihydrotestosterone
References
Anabolic–androgenic steroids
Androstanes
Hormones of the suprarenal cortex
Sex hormones
Steroid hormones
Triketones | Adrenosterone | [
"Biology"
] | 311 | [
"Behavior",
"Sexuality",
"Sex hormones"
] |
12,037,783 | https://en.wikipedia.org/wiki/The%20Semantic%20Turn | The semantic turn refers to a paradigm shift in the design of artifacts – industrial, graphic, informational, architectural, and social – from an emphasis on how artifacts ought to function to what they mean to those affected by them – semantics being a concern for meaning. It provides a new foundation for professional design, a detailed design discourse, codifications of proven methods, compelling scientific justifications of its products, and a clear identity for professional designers working within a network of their stakeholders.
The semantic turn suggests a distinction between the technical and user-irrelevant working of artifacts and the human interactions with artifacts, individually, socially, and culturally. Attending to the technical dimension of artifacts, for example, by applied scientists, mechanical or electronic engineers, and experts in economics, production, and marketing, is called technology-centered design. It addresses its subject matter in terms that ordinary users may not understand and applies design criteria users of technology do not care about. Attending to the meanings that users bring to their artifacts, and to how they use them and talk about them among various stakeholders, is the domain of human-centered design. For ordinary users, the makeup and technical functioning of artifacts is mere background of what really matters to them.
A prime example for this distinction is the design of personal computers. For most people, the operations inside a computer are incomprehensible, but far from troubling because computers are designed to be experienced primarily through their interfaces. Human-computer interfaces consist of interactively rearrangeable icons, texts, and controls that users can understand in everyday terms and manipulate towards desirable ends. The design of intelligent artifacts suggests that the old adage of “form follows function” is no longer valid – except for the simplest of tools. The semantic turn suggests that human-centered designers’ unique expertise resides in the design of human interfaces with artifacts that are meaningful, easy to use, even enjoyable to experience, be it simple kitchen implements, public service systems, architectural spaces, or information campaigns. Although an automobile should obviously function as a means of transportation, human-centered designers emphasize the experiences of driving, ease of operation, feeling of safety, including the social meanings of driving a particular automobile. As artifacts have to work within many dimensions, human-centered designers must have a sense of and be able to work with all relevant stakeholders addressing different dimensions of the artifact.
The Semantic Turn: a book and its themes
The Semantic Turn is also the title of a book by Klaus Krippendorff, Professor of Communication at the University of Pennsylvania, cybernetician, degreed designer, and researcher who has published much to advance the science for design. The subtitle of the book, A new Foundation for Design, suggests a redesign of design practices in a human-centered design culture. Krippendorff takes an encompassing view of design, centering it on the meanings that artifacts acquire and what is or should be designers' primary concern.
The Semantic Turn represents an evolution from "Product Semantics" by Krippendorff and Butter, which was defined as "A systematic inquiry into how people attribute meanings to artifacts and interact with them accordingly" and "a vocabulary and methodology for designing artifacts in view of the meanings they could acquire for their users and the communities of their stakeholders". While retaining the emphasis on meaning and on the importance of both theory and practice, The Semantic Turn extends the concerns of designers first to the new challenges of design, including the design of ever more intangible artifacts such as services, identities, interfaces, multi-user systems, projects and discourses; and second, to consider the meaning of artifacts in use, in language, in the whole life cycle of the artifact, and in an ecology of artifacts.
Design
For Krippendorff, design "brings forth what would not come naturally (...); proposes realizable artifacts to others (...) must support the lives of ideally large communities (...) and must make sense to most, ideally to all who have a stake on them". Design thus is intimately involved with the meaning that stakeholders attribute to artifacts. Designers "consider possible futures (...) evaluate their desirability (...) and create and work out realistic paths from the present towards desirable futures, and propose them to those that can bring a design to fruition". Acknowledging that all design serves others, The Semantic Turn does not treat THE user as a statistical fiction, but as knowledgeable stakeholders and necessary partners in human-centered design processes.
Predecessors of human-centeredness
Krippendorff quotes the Greek philosopher Protagoras who is believed to have been the first to express human-centeredness in words by saying that "Man is the measure of all things, of things that are (...) and of things that are not (...)." Krippendorff goes on to cite the color theory of J. W. von Goethe who exposed Isaac Newton’s spectral theory of colors as epistemologically flawed by pointing out that color is the product of the human eye. Color does not exist without it. Krippendorff refers to the Italian philosopher G. Vico for opposing R. Descartes by claiming we humans know what we have constructed, made up, cognitively, materially, or socially, to the biologists J. Uexküll for his species-specific theory of meaning and H. Maturana and F. Varela for developing a biological foundation of cognition, to the psychologist J. J. Gibson for his conception of affordance, which acknowledges that our environment does not account for our perception, it merely affords our sensory-motor coordinations or it does not; and to the anthropological linguist B. L. Whorf for his recognition that our perceptions are correlated with language, its grammar and vocabulary. Most important, Krippendorff allies himself with L. Wittgenstein’s definition of meaning as use, culminating in the axiom that Humans do not see and act on the physical qualities of things, but on what they mean to them.
Meaning
Attributing meaning to something follows from sensing it, and is a prelude to action. "One always acts according to the meaning of whatever one faces" and the consequences of these actions in turn become part of the meanings of what one interacts with. Meanings are always someone's construction and depend on context and culture. The same artifact may invoke different meanings at different times, in different contexts of use, and for different people. To design artifacts for use by others calls on designers to understand the understanding of others, a second-order understanding that is fundamentally unlike the understanding of physical things. Since meanings cannot be observed directly, designers need to carefully observe the actions that imply certain meanings; involve themselves in dialog with their stakeholders; and invite them to participate in the design process.
Meaning of artifacts in use
People acquire the meanings of artifacts by their interfacing with them, where meanings become anticipated usabilities. Krippendorff does not limit the concept of interfaces to human-computer interactions, however. For him, the concept applies to any artifact one faces. To users, artifacts are perceived as affordances, as the kinds of interactions they enable or prohibit. Thus scissors and coffee cups are experienced as interfaces, just as personal computers are. Their physical or computational makeup becomes a background phenomenon of use. The meaning of an artifact in use is then "the range of imaginable senses and actions that users have reasons to expect". Ideal interfaces are self-evident and "intrinsically motivating interactions between users and their artifacts".
Drawing on Heidegger's explorations of the human use of technology, Krippendorff argues that all artifacts must be designed to afford three stages of use: initial recognition, intermediate exploration, and ideally, unproblematic reliance. The latter is achieved when the artifact is so incorporated into the user's world that it becomes hardly noticed, is taken for granted while looking through it to what is to be accomplished. Recognition involves users' categorizations, how close the artifact is to the ideal type of its kind. Exploration is facilitated by informatives such as state indicators, progress reports, confirmations of actions and readiness, alarm signals, close correlations between actions and their expected effects, maps of possibilities, instructions, error messages, and multi-sensory feedback. Users' intrinsic motivation arises from reliance, the seemingly effortless, unproblematic yet skillful engagement with artifacts free of disruptions. A well designed interface enables unambiguous recognition, effective exploration, and leads to enjoyable reliance. To accomplish these transitions, human-centered designers need to involve second-order understanding of users' cognitive models, cultural habits, and competencies.
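As an informal illustration of this three-stage progression, here is a toy state machine; the stage names come from the paragraph above, but the event vocabulary and transition rules are our own simplification for illustration, not Krippendorff's own formalism:

```python
# A toy formalization of the three stages of use as a state machine:
# recognition -> exploration -> reliance, with disruptions knocking the
# user back into exploration. Event names here are invented.

STAGES = ("recognition", "exploration", "reliance")

def next_stage(stage: str, event: str) -> str:
    """Advance or regress through the stages of use on a user event."""
    if event == "disruption":
        return "exploration"            # breakdowns re-problematize the artifact
    if stage == "recognition" and event == "categorized":
        return "exploration"            # the user has identified the kind of artifact
    if stage == "exploration" and event == "mastered":
        return "reliance"               # interaction becomes unproblematic
    return stage                        # otherwise remain in the current stage

stage = STAGES[0]
for event in ["categorized", "mastered", "disruption", "mastered"]:
    stage = next_stage(stage, event)
    print(event, "->", stage)
```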
Typically, users approach their artifacts with very different competencies. The Semantic Turn offers the possibility of accommodating these differences by allowing the design of several semantic layers. For example, contemporary Xerox machines exhibit one layer for making copies, another for clearing paper jams, a third for replacing defective parts by trained service personnel, and a fourth is reserved for the factory repair of replaced components.
Meaning of artifacts in language
"The fate of all artifacts is decided in language" , says Krippendorff. Indeed, designers must pay attention to the narratives in which an artifact appears as soon as it enters the conversations among stakeholders, bystanders, critics, and users, to the names that categorize the artifact as being of one kind or another, and to the adjectives that direct perception to particular qualities (is it a fast car? a clumsy cell phone? a high class dress?). Such characterizations can make or break an artifact and designers cannot ignore how people talk about them. Krippendorff proposes that artifacts should be designed so that their interfaces are [easily] narratable and fit into social or communicational relationships.
The character of an artifact – the set of adjectives deemed appropriate to it – can be assessed by means of semantic differential scales: seven-point scales between polar opposite attributes such as elegant–graceless; by categorizing free associations elicited from users, whether as first impressions or after extended use; by examining the content of stories people tell about the artifacts for implied judgments; or by pair comparisons of similar artifacts. Such methods give human-centered designers ways to quantify meanings, to work towards defined design criteria, including pursuing quantifiable aesthetic objectives, and to justify a design to potential stakeholders.
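To make the measurement idea concrete, here is a minimal sketch of turning semantic differential ratings into an artifact profile; the scale names, respondents, and numbers are invented for illustration and imply nothing about any real study:

```python
# Averaging semantic differential ratings into an artifact "character" profile.
# Each scale runs 1..7 between polar opposites; all data below is hypothetical.
from statistics import mean

# Ratings from five hypothetical respondents on 1..7 scales, where 7 is the
# left-hand attribute (e.g. "elegant") and 1 its polar opposite.
ratings = {
    "elegant-graceless":  [6, 7, 5, 6, 6],
    "simple-complicated": [5, 4, 6, 5, 5],
    "friendly-hostile":   [7, 6, 6, 7, 6],
}

profile = {scale: mean(scores) for scale, scores in ratings.items()}
for scale, avg in profile.items():
    print(f"{scale}: {avg:.1f} / 7")
```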
Language permeates all of human life, including life with artifacts. This applies not only to the users of artifacts but also to their designers. The narratives that evolve within design teams determine the direction a design is taking, and might end up convincing stakeholders to go along with a design project or oppose it, well before it is built, and influence designers in turn. What we know of current artifacts, ancient ones, outdated ones, antiques or museum pieces comes to us in the form of stories. Designers need to analyse them for, as Krippendorff asserts, "The meanings that artifacts acquire in use are largely framed in language".
Meaning in the lives of artifacts
Here, Krippendorff invites designers to consider artifacts in their whole life cycle. In the case of industrial products, the life cycle might start with an initial idea, followed by design, engineering, production, sales, use, storage, maintenance and finally retirement, as recycled material or as waste. Well, not so "finally": designers may learn much about a product's performance, unintended uses, unexpected problems, and resulting social consequences, which can serve to improve the design of the next generation of that product – design never ends. In each phase of the life cycle of an artifact, that artifact will have to support diverse but subjectively meaningful interfaces for different communities of stakeholders. In such stakeholder networks, artifacts need to proceed from one to the next: "no artifact can be realized within a culture without being meaningful to those who can move it through its various definitions".
Meaning in an ecology of artifacts
Dictionaries tend to define ecology as multi-species interaction in a common environment, the species being animals and plants. Humans, however, have created a perhaps greater diversity of species of artifacts than has nature. Krippendorff observes that species of artifacts too are born, grow in size and number, diversify into sub-species, associate with other species, adapt to each other and to their human environment, and either reproduce, evolve, or disappear – just as in nature. Species of artifacts may compete, cooperate or be parasitic on other artifacts. For an example of the latter, consider spam, which thrives in the email ecosystem and could not exist outside it. Whereas species of animals and plants interact with one another in their own terms, species of artifacts are brought into interaction through human agency. People arrange artifacts, like the furniture at home; connect them into networks, like computers on the internet; form large cultural cooperatives, like hospitals full of medical equipment, drugs, and treatments; retire one species in favor of another, as typewriters gave way to personal computers; or change their ecological meanings, as horses, originally used for work and transportation, found an ecological niche in sports.
In an ecology of artifacts, the meaning of one consists of the possible interactions with other artifacts: cooperation, competition (substitution), domination or submission, leading technological development, like computers do right now, or supporting the leaders, like the gadgets found in computer stores. Similarly, roads and gas stations follow the development of automobiles and participate in a very large cultural complex, including the design of cities and the distribution of work, and affect nature through depletion of resources, creating waste and CO2 emissions. Clearly, "designers who can handle the ecological meaning of their proposals have a better chance of keeping their designs alive".
Towards a science for design
In 1969, Nobel laureate Herbert Simon called for a science of the artificial. Natural scientists, he argued, are concerned with what exists, whereas designers are concerned with what should be and how to achieve it. His conception of design was shaped by rational decision theory and early conceptions of computational logic, hence limited largely to technology-centered design. Krippendorff added the following contrasts to Simon’s:
The natural sciences limit themselves to theorizing past regularities from existing data. They do not see scientists as change agents. Any science for design must concern itself with how designers can change existing regularities, overcome contingencies that cause recurring problems, and make a difference in the lives of present stakeholders or future communities. Designers do not produce theories but propose unprecedented artifacts, new practices, and narratives that must be realized in a network of stakeholders, whose members are actors with interests of their own. The science for design cannot be about design or of design, which are pursued from outside the design community. It must provide practical and intellectual support of design by being for or in the service of design activity. In support of change that does not come naturally, it must also provide the conceptualizations needed to hold designers accountable for how their proposals affect future contingencies.
The natural sciences privilege causal explanations, which rule out that their objects can understand how they are conceptualized, theorized, and studied. A science for human-centered design privileges the meanings (conceptions, explanations, and motivations) that knowledgeable users and stakeholders of a design can bring to it. It entails a reflexive kind of understanding unfamiliar in the natural sciences.
As detached observers of their objects, natural scientists can afford to celebrate abstract and general theories. Designers, by contrast, must be concerned with all necessary details of their design. No technology works in the abstract. Even social artifacts need to be understood and enacted by their constituents. A design is always a proposal to other stakeholders who may contribute to a design or oppose its realization. Theories in the natural sciences do not affect what they theorize, but designs must enroll others into what they are proposing and treat them as intelligent agents, or they will not come to fruition.
In the natural sciences, research consists of gathering data or objective facts in support of theories about these data. Predictive theories assume the status quo (the continuation) of the phenomena they theorize. In the science for design, research means searching for previously unrecognized variables and proposing realistic paths into desirable futures. Design, to the extent it is innovative, may well break with past theories, overcome popular convictions, and challenge stubborn beliefs in a history-determined future. Fundamentally, past observations can never prove the validity of truly innovative designs.
A science for design makes three contributions to design:
Design research
Generally, research is any inquiry that generates communicable knowledge. Human-centered design research typically involves
Eliciting and analyzing the narratives of problematic uses of artifacts and desirable futures, which motivate or inspire a community of potential users and stakeholders to consider changes in their lives.
Searching for and evaluating examples of common, understandable, and attractive practices, especially from empirical domains other than the intended design, in view of their ability to serve as metaphors for new but immediately recognizable and meaningful interfaces.
Exploring new technologies and materials that could support or improve current and future uses of the artifacts under consideration.
Testing and evaluating alternative designs – recombinations and transformations of available technologies, possible interfaces, and social and ecological consequences – usually in terms relevant to present stakeholders and in lieu of future users.
Inquiring into how a design survives in the ecology of artifacts and what lessons can be learned for future design activities – so-called post-design research.
Design methods
Human-centered design methods may aim at:
Systematic expansions of a design space – the possibilities in which a design can take place. This space should embrace and go beyond user and stakeholder expectations, especially including the (apparently) unthinkable. Such methods range from the computer generation of alternatives (combinatorics; see the sketch after this list) to the use of language games during which novel ideas are created, for example by brainstorming.
Focused involvement of users and stakeholders in the design process (in reducing the design space to realistic proposals for meaningful artifacts). There are three known ways:
Acquiring an understanding of users’ and stakeholders’ understanding, so-called second-order understanding, for instance by ethnographic research or focus groups, and making design decisions dependent on that understanding
Involving users and stakeholders in design decisions, for example, participatory design
Delegating design by designing artifacts that either adapt themselves to their users’ and stakeholders’ worlds or can be redesigned by them, as in personal computers, for example.
Formalizing successful design practices into reproducible aids that can improve future design practices, both computationally, for example, computer aided design, collaborative software, and rapid prototyping, and prescriptively. Krippendorff describes five prescriptive methods:
(Re)designing the characters of artifacts
Designing interfaces for artifacts in view of their usability and meanings in use
Designing novel artifacts, including services and social practices, from sensible narratives and metaphors
Designing design strategies
Designing dialogical (collaborative) methodologies to involve others in a design process.
Elaborating and refining the design discourse in order to
Improve the reproducibility of knowledge about the design process, by generating retrievable records of past accounts of design processes and publishing pertinent findings
Make the collaboration among designers as well as in interdisciplinary teams more efficient
Inform pertinent design education
Enhance the ability to ask fruitful research questions for which design research may provide conclusive answers
Increase and maintain the reputation of professional design, especially in order for designers to play important roles within the network of its stakeholders. This means applying the science for design to itself.
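As a minimal illustration of the combinatorial expansion of a design space mentioned in the first item of this list, the following Python sketch enumerates every combination of a few design attributes. The attribute names and values are invented for the example and are not drawn from Krippendorff's text; real design-space methods would of course filter and cluster such enumerations rather than present them raw.

```python
from itertools import product

# Hypothetical design attributes for a household kettle; the names and
# values are invented for illustration.
design_space = {
    "material":  ["steel", "glass", "ceramic"],
    "interface": ["dial", "touch", "voice"],
    "heating":   ["base plate", "immersed coil"],
}

# Every combination of attribute values is one point in the design space.
alternatives = [dict(zip(design_space, values))
                for values in product(*design_space.values())]

print(len(alternatives))  # 3 * 3 * 2 = 18 candidate designs
print(alternatives[0])    # {'material': 'steel', 'interface': 'dial', 'heating': 'base plate'}
```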
Validations of semantic claims
In a science for design, validation consists of generating compelling justifications for the claims that designers must make regarding the meaning, virtue, potential reality, costs and benefits of their design for particular communities. Inasmuch as any design can prove itself only in the future, post factum, and with the collaboration of others, human-centered design is justifiable only by means of plausible arguments (see Issue-Based Information System) that motivate its stakeholders to realize or use that design. The science for design, always concerned with not yet observable contingencies, cannot provide the simple truth claims of the kind that natural scientists aspire to for their theories. But it can provide several other human-centered ways to back the claims designers need to make:
Convincing demonstrations and expositions of a design in various contexts of use
Statistical experiments with prototypes, models or visualizations involving users’ ability to make sense of, find appropriate meanings for, and handle a proposed artifact
Appeals to trusted theories or principles of how meanings of artifacts are acquired in use, can be communicated through various channels, or emerge in a diversity of social situations, including in an ecology of other artifacts
Accounts of the systematic application of established design methods that reduced the initial design space to the alternatives being proposed
Affirmed commitments by relevant stakeholders to realize the design.
Reception of "product semantics" and The Semantic Turn
Since its coinage in 1984, the use of “product semantics” has mushroomed. In 2009, a Google search identified over 18,000 documents referring to it. However, it has been critiqued by advocates of a more critical approach to design as overly simplistic.
The semantics of artifacts has become of central importance in courses taught at leading design departments of many universities all over the world, among them at the Arizona State University; the Cranbrook Academy of Arts; The Ohio State University; the Savannah College of Art and Design; the University of the Arts in Philadelphia, USA; the Hochschule für Gestaltung Offenbach in Germany; the Hongik University in Seoul, Korea; the Indian Institute of Technology in Mumbai; the Musashino Art University in Tokyo, Japan; the National Taiwan University of Science and Technology; the University of Art and Design in Helsinki, Finland; and more. It has also permeated other disciplines, notably ergonomics, marketing, and cognitive engineering.
Reviews have been written by authors on design theory, design history, corporate strategy, national design policy, design science studies, participatory design, interaction design, human-computer interaction, and cybernetics.
The Semantic Turn has been translated into Japanese and is currently being translated into German.
Notes
Additional references
Archer, Bruce (1995). The Nature of Research. Co-design 2, pp. 6–13. Accessed 2009.10.18.
Bonsiepe, Gui (1996), Interface; Design neu begreifen. Mannheim, Germany: Bollmann Verlag.
Krippendorff, Klaus (2006). The Semantic Turn; A New Foundation for Design. Boca Raton, London, New York: Taylor & Francis, CRC Press.
Krippendorff, Klaus (Ed.) et al. (1997). Design in the Age of Information; A Report to the National Science Foundation (NSF). Raleigh, NC: Design Research Laboratory, North Carolina State University. Accessed 2009.10.15.
Krippendorff, Klaus & Butter, Reinhart (Eds.) (1989). Product Semantics. Design Issues 5, 2.
Norman, Donald A. (2002). The Design of Everyday Things. New York: Basic Books.
Norman, Donald A. (2005). Emotional Design. New York: Basic Books.
Simon, Herbert A. (1969/2001). The Sciences of the Artificial, 3rd Edition. Cambridge, MA: MIT Press.
Steffen, Dagmar (2000). Design als Produktsprache. Frankfurt/Main: Verlag form.
Tahkokallio, Päivi & Vihma, Susann (Eds.) (1995). Design – Pleasure or Responsibility? Helsinki: University of Art and Design.
Väkevä, Seppo (Ed.) (1990). Product Semantics '89. Helsinki: University of Art and Design.
Vihma, Susann (Ed.) (1990). Semantic Visions in Design. Helsinki: University of Art and Design.
Design | The Semantic Turn | [
"Engineering"
] | 4,918 | [
"Design"
] |
12,037,838 | https://en.wikipedia.org/wiki/Lens%20clock | A lens clock is a mechanical dial indicator that is used to measure the dioptric power of a lens. It is a specialized version of a spherometer. A lens clock measures the curvature of a surface, but gives the result as an optical power in diopters, assuming the lens is made of a material with a particular refractive index.
How it works
The lens clock has three pointed probes that make contact with the surface of the lens. The outer two probes are fixed while the center one moves, retracting as the instrument is pressed down on the lens's surface. As the probe retracts, the hand on the face of the dial turns by an amount proportional to the distance.
The optical power $\phi$ of the surface is given by

$$\phi = \frac{2(n-1)s}{\left(\tfrac{D}{2}\right)^2 + s^2},$$

where $n$ is the index of refraction of the glass, $s$ is the vertical distance (sagitta) between the center and outer probes, and $D$ is the horizontal separation of the outer probes. To calculate $\phi$ in diopters, both $s$ and $D$ must be specified in meters.
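The relationship above is straightforward to evaluate numerically. The following Python sketch is a minimal illustration; the function name and the sample reading are invented for the example, and the probe dimensions are assumed to be given in meters.

```python
def surface_power(s, D, n=1.523):
    """Optical power (diopters) of a surface read by a lens clock.

    s -- sagitta: travel of the center probe, in meters
    D -- separation of the outer probes, in meters
    n -- refractive index the clock is calibrated for (1.523, crown glass)
    """
    return 2 * (n - 1) * s / ((D / 2) ** 2 + s ** 2)

# Illustrative reading: 20 mm probe spacing, 0.5 mm sagitta
print(round(surface_power(0.5e-3, 20e-3), 2))   # about 5.22 diopters
```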
A typical lens clock is calibrated to display the power of a crown glass surface, with a refractive index of 1.523. If the lens is made of some other material, the reading must be adjusted to correct for the difference in refractive index.
Measuring both sides of the lens and adding the surface powers together gives the approximate optical power of the whole lens. (This approximation relies on the assumption that the lens is relatively thin.)
Radius of curvature
The radius of curvature $R$ of the surface can be obtained from the optical power $\phi$ given by the lens clock using the formula

$$R = \frac{n_0 - 1}{\phi},$$

where $n_0$ is the index of refraction for which the lens clock is calibrated, regardless of the actual index of the lens being measured. If the lens is made of glass with some other index $n$, the true optical power of the surface can be obtained using

$$\phi_\text{true} = \frac{n - 1}{R} = \phi\,\frac{n - 1}{n_0 - 1}.$$
Example—correcting for refractive index
A biconcave lens made of flint glass with an index of 1.7 is measured with a lens clock calibrated for crown glass with an index of 1.523. For this particular lens, the lens clock gives surface powers of −3.0 and −7.0 diopters (dpt). Because the clock is calibrated for a different refractive index, the optical power of the lens is not the sum of the surface powers given by the clock. The optical power of the lens is instead obtained as follows:
First, the radii of curvature are obtained:

$$R_1 = \frac{1.523 - 1}{-3.0\ \text{dpt}} \approx -0.174\ \text{m}, \qquad R_2 = \frac{1.523 - 1}{-7.0\ \text{dpt}} \approx -0.0747\ \text{m}.$$

Next, the optical powers of each surface are obtained:

$$\phi_1 = \frac{1.7 - 1}{-0.174\ \text{m}} \approx -4.0\ \text{dpt}, \qquad \phi_2 = \frac{1.7 - 1}{-0.0747\ \text{m}} \approx -9.4\ \text{dpt}.$$
Finally, if the lens is thin the powers of each surface can be added to give the approximate optical power of the whole lens: −13.4 diopters. The actual power, as read by a vertometer or lensometer, might differ by as much as 0.1 diopters.
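The correction lends itself to a short calculation. Here is a minimal Python sketch reproducing the worked example above (the function names are illustrative):

```python
def true_power(reading, n_lens, n_cal=1.523):
    """Correct a lens-clock reading for a lens of a different index."""
    R = (n_cal - 1) / reading      # radius of curvature implied by the reading
    return (n_lens - 1) / R        # power recomputed with the actual index

surfaces = [-3.0, -7.0]            # clock readings in diopters
total = sum(true_power(r, n_lens=1.7) for r in surfaces)
print(round(total, 1))             # -13.4 diopters, as in the example
```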
Estimating thickness
A lens clock can also be used to estimate the thickness of thin objects, such as a hard or gas-permeable contact lens. Ideally, a contact lens dial thickness gauge would be used for this, but a lens clock can be used if a dial thickness gauge is not available. To do this, the contact lens is placed concave side up on a table or other hard surface. The lens clock is then brought down on it such that the center prong contacts the lens as close to its center as possible, and the outer prongs rest on the table. The thickness of the lens is then the sagitta in the formula above, and can be calculated from the optical power reading, if the distance between the outer prongs is known.
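A minimal sketch of this thickness estimate, assuming the clock's calibration index and probe separation are known: the lens-clock formula is inverted for the sagitta by solving the resulting quadratic, taking the smaller (physical) root. The numbers are illustrative.

```python
import math

def sagitta_from_reading(power, D, n_cal=1.523):
    """Recover the sagitta (here, the lens thickness) from a clock reading.

    Inverts power = 2*(n_cal - 1)*s / ((D/2)**2 + s**2) for s, taking the
    smaller (physical) root of the quadratic. Assumes a nonzero reading.
    """
    r = D / 2
    k = n_cal - 1
    return (k - math.sqrt(k * k - (power * r) ** 2)) / power

# Illustrative: a reading of 1.0 dpt with 20 mm probe spacing
print(sagitta_from_reading(1.0, 20e-3))   # about 9.6e-05 m, i.e. ~0.1 mm
```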
See also
Astigmatism
Eyeglass prescription
Corrective lens
Galileo
Lapidary
George Ravenscroft
Optometry
Vertex (optics)
Clock
Gear ratio
References
Ophthalmic equipment
Dimensional instruments | Lens clock | [
"Physics",
"Mathematics"
] | 780 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
12,037,842 | https://en.wikipedia.org/wiki/Semantic%20HTML | Semantic HTML is the use of HTML markup to reinforce the semantics, or meaning, of the information in web pages and web applications rather than merely to define its presentation or look. Semantic HTML is processed by traditional web browsers as well as by many other user agents. CSS is used to suggest how it is presented to human users.
History
HTML has included semantic markup since its inception. In an HTML document, the author may, among other things, "start with a title; add headings and paragraphs; add emphasis to [the] text; add images; add links to other pages; [and] use various kinds of lists".
Various versions of the HTML standard have included presentational markup such as <font> (added in HTML 3.2; removed in HTML 4.0 Strict), <i> (all versions) and <center> (added in HTML 3.2). There are also the semantically neutral span and div elements. Since the late 1990s when Cascading Style Sheets were beginning to work in most browsers, web authors have been encouraged to avoid the use of presentational HTML markup with a view to the separation of content and presentation.
In 2001, Tim Berners-Lee participated in a discussion of the Semantic Web, where it was presented that intelligent software 'agents' might one day automatically crawl the Web and find, filter and correlate previously unrelated, published facts for the benefit of end users. Such agents are not commonplace even now, but some of the ideas of Web 2.0, mashups and price comparison websites may be coming close. The main difference between these web application hybrids and Berners-Lee's semantic agents lies in the fact that the current aggregation and hybridisation of information is usually designed in by web developers, who already know the web locations and the API semantics of the specific data they wish to mash, compare and combine.
An important type of web agent that does crawl and read web pages automatically, without prior knowledge of what it might find, is the web crawler or search-engine spider. These software agents are dependent on the semantic clarity of web pages they find as they use various techniques and algorithms to read and index millions of web pages a day and provide web users with search facilities.
In order for search-engine spiders to be able to rate the significance of pieces of text they find in HTML documents, and also for those creating mashups and other hybrids, as well as for more automated agents as they are developed, the semantic structures that exist in HTML need to be widely and uniformly applied to bring out the meaning of published information.
While the true semantic web may depend on complex RDF ontologies and metadata, every HTML document makes its contribution to the meaningfulness of the Web by the correct use of headings, lists, titles and other semantic markup wherever possible. This "plain" use of HTML has been called "Plain Old Semantic HTML" or POSH. The correct use of Web 2.0 'tagging' creates folksonomies that may be equally or even more meaningful to many. HTML 5 introduced new semantic elements such as <section>, <article>, <footer>, <progress>, <nav>, <aside>, <mark>, and <time>. Overall, the goal of the W3C is to slowly introduce more ways for browsers, developers, and crawlers to better distinguish between different types of data, allowing for benefits such as better display on browsers on different devices.
Presentational elements were not formally deprecated in HTML 4.01 and XHTML recommendations, but were recommended against. In HTML 5, some of those elements, such as <i> and <b>, are still specified as their meaning has been clearly defined "as to be stylistically offset from the normal prose without conveying any extra importance".
Considerations
In cases where a document requires more precise semantics than those expressed in HTML alone, fragments of the document may be enclosed within span or div elements with meaningful class names such as <span class="author"> and <div class="invoice">. Where these class names are also a fragment identifier within a schema or ontology, they may link to a more defined meaning. Microformats formalise this approach to semantics in HTML.
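As a minimal illustration of how a machine agent might exploit such class names, the following Python sketch (standard library only) collects the text of elements carrying a hypothetical author class. Real microformat parsers are considerably more thorough; this only shows the principle.

```python
from html.parser import HTMLParser

class ClassExtractor(HTMLParser):
    """Collect text inside elements carrying a given class attribute.

    Naive sketch: assumes well-balanced tags within the matched fragment.
    """
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.inside = 0       # nesting depth inside a matched element
        self.found = []

    def handle_starttag(self, tag, attrs):
        if self.inside:
            self.inside += 1  # a tag nested inside the matched element
        elif self.target in (dict(attrs).get("class") or "").split():
            self.inside = 1

    def handle_endtag(self, tag):
        if self.inside:
            self.inside -= 1

    def handle_data(self, data):
        if self.inside:
            self.found.append(data)

parser = ClassExtractor("author")
parser.feed('<p>Review by <span class="author">A. Writer</span> today.</p>')
print(parser.found)           # ['A. Writer']
```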
One important restriction of this approach is that such markup based on element inclusion must meet the well-formedness conditions. As these documents are broadly tree-structured, this means that only balanced fragments from a sub-tree can be marked up in this way. A means of marking-up any arbitrary section of HTML would require a mechanism independent of the markup structure itself, such as XPointer.
Good semantic HTML also improves the accessibility of web documents (see also Web Content Accessibility Guidelines). For example, when a screen reader or audio browser can correctly ascertain the structure of a document, it will not waste the visually impaired user's time by reading out repeated or irrelevant information when it has been marked up correctly.
Google "rich snippets"
In 2010, Google specified three forms of structured metadata that their systems will use to find structured semantic content within webpages. Such information, when related to reviews, people profiles, business listings, and events will be used by Google to enhance the "snippet", or short piece of quoted text that is shown when the page appears in search listings. Google specifies that the data may be given using microdata, microformats or RDFa. Microdata is specified inside itemtype and itemprop attributes added to existing HTML elements; microformat keywords are added inside class attributes as discussed above; and RDFa relies on rel, typeof and property attributes added to existing elements.
See also
CP/LD (Content Profile/Linked Document)
HTML elements (complete list)
HTML landmarks
Microdata (HTML)
Microformat
RDFa
Semantic Web
Semantics (computer science)
XML
References
External links
schema.org is an initiative launched on 2 June 2011 by Bing, Google and Yahoo!
Domain-specific knowledge representation languages
Web accessibility
Web design | Semantic HTML | [
"Engineering"
] | 1,262 | [
"Design",
"Web design"
] |
12,038,022 | https://en.wikipedia.org/wiki/List%20of%20Six%20Sigma%20software%20packages | There are generally four classes of software used to support the Six Sigma process improvement protocol:
Analysis tools, which are used to perform statistical or process analysis;
Program management tools, used to manage and track a corporation's entire Six Sigma program;
DMAIC and Lean online project collaboration tools for local and global teams;
Data collection tools that feed information directly into the analysis tools and significantly reduce the time spent gathering data.
Analysis tools
Notes
References
Six Sigma software packages
Quality | List of Six Sigma software packages | [
"Technology"
] | 94 | [
"Computing-related lists",
"Lists of software"
] |
12,039,054 | https://en.wikipedia.org/wiki/WHO%20Model%20List%20of%20Essential%20Medicines | The WHO Model List of Essential Medicines (also known as the Essential Medicines List or EML), published by the World Health Organization (WHO), contains the medications considered to be most effective and safe to meet the most important needs in a health system. The list is frequently used by countries to help develop their own local lists of essential medicines. More than 155 countries, both developed and developing, have created national lists of essential medicines based on the World Health Organization's model list.
The list is divided into core items and complementary items. The core items are deemed to be the most cost-effective options for key health problems and are usable with little additional health care resources. The complementary items either require additional infrastructure such as specially trained health care providers or diagnostic equipment or have a lower cost–benefit ratio. About 25% of items are in the complementary list. Some medications are listed as both core and complementary. While most medications on the list are available as generic products, being under patent does not preclude inclusion.
The first list was published in 1977 and included 208 medications. The WHO updates the list every two years. There were 306 medications in the 14th list in 2005, 410 in the 19th list in 2015, 433 in the 20th list in 2017, 460 in the 21st list in 2019, and 479 in the 22nd list in 2021. Various national lists contain between 334 and 580 medications. The Essential Medicines List (EML) was updated to its 23rd edition in July 2023; this edition contains 1,200 recommendations covering 591 drugs and 103 therapeutic equivalents.
A separate list for children up to 12 years of age, known as the WHO Model List of Essential Medicines for Children (EMLc), was created in 2007 and is in its 9th edition. It was created to make sure that the needs of children, such as the availability of proper formulations, were systematically considered. Everything in the children's list is also included in the main list. The list and notes are based on the 19th to 23rd editions of the main list. Therapeutic alternatives with similar clinical performance are listed for some medicines, and they may be considered for national essential medicines lists. The 9th Essential Medicines List for Children was updated in July 2023.
Note: An α indicates a medicine is on the complementary list.
Anaesthetics, preoperative medicines and medical gases
General anaesthetics and oxygen
Inhalational medicines
Halothane
Isoflurane
Nitrous oxide
Oxygen
Sevoflurane
Injectable medicines
Ketamine
Propofol
Thiopental
Local anaesthetics
Bupivacaine
Lidocaine
Lidocaine/epinephrine (lidocaine + epinephrine)
Complementary:
Ephedrine
Preoperative medication and sedation for short-term procedures
Atropine
Midazolam
Morphine
Medical gases
Oxygen
Medicines for pain and palliative care
Non-opioids and non-steroidal anti-inflammatory medicines (NSAIMs)
Acetylsalicylic acid (aspirin)
Ibuprofen
Paracetamol (acetaminophen)
Opioid analgesics
Codeine
Fentanyl
Morphine
Complementary:
Methadone
Medicines for other common symptoms in palliative care
Amitriptyline
Cyclizine
Dexamethasone
Diazepam
Docusate sodium
Fluoxetine
Haloperidol
Hyoscine butylbromide
Hyoscine hydrobromide
Lactulose
Loperamide
Metoclopramide
Midazolam
Ondansetron
Senna
Antiallergics and medicines used in anaphylaxis
Dexamethasone
Epinephrine (adrenaline)
Hydrocortisone
Loratadine
Prednisolone
Antidotes and other substances used in poisonings
Non-specific
Charcoal, activated
Specific
Acetylcysteine
Atropine
Calcium gluconate
Methylthioninium chloride (methylene blue)
Naloxone
Penicillamine
Prussian blue
Sodium nitrite
Sodium thiosulfate
Complementary:
Deferoxamine
Dimercaprol
Fomepizole
Sodium calcium edetate
Succimer
Medicines for diseases of the nervous system
Antiseizure medicines
Carbamazepine
Diazepam
Lamotrigine
Levetiracetam
Lorazepam
Magnesium sulfate
Midazolam
Phenobarbital
Phenytoin
Valproic acid (sodium valproate)
Complementary:
Ethosuximide
Levetiracetam
Valproic acid (sodium valproate)
Medicines for multiple sclerosis
Complementary:
Cladribine
Glatiramer acetate
Rituximab
Medicines for parkinsonism
Biperiden
Levodopa/carbidopa (levodopa + carbidopa)
Anti-infective medicines
Anthelminthics
Intestinal anthelminthics
Albendazole
Ivermectin
Levamisole
Mebendazole
Niclosamide
Praziquantel
Pyrantel
Antifilarials
Albendazole
Diethylcarbamazine
Ivermectin
Antischistosomals and other antinematode medicines
Praziquantel
Triclabendazole
Complementary:
Oxamniquine
Cysticidal medicines
Complementary:
Albendazole
Mebendazole
Praziquantel
Antibacterials
Access group antibiotics
Amikacin
Amoxicillin
Amoxicillin/clavulanic acid (amoxicillin + clavulanic acid)
Ampicillin
Benzathine benzylpenicillin
Benzylpenicillin
Cefalexin
Cefazolin
Chloramphenicol
Clindamycin
Cloxacillin
Doxycycline
Gentamicin
Metronidazole
Nitrofurantoin
Phenoxymethylpenicillin (penicillin V)
Procaine benzylpenicillin
Spectinomycin
Sulfamethoxazole/trimethoprim (sulfamethoxazole + trimethoprim)
Trimethoprim
Watch group antibiotics
Azithromycin
Cefixime
Cefotaxime
Ceftriaxone
Cefuroxime
Ciprofloxacin
Clarithromycin
Piperacillin/tazobactam (piperacillin + tazobactam)
Vancomycin
Complementary:
Ceftazidime
Meropenem
Vancomycin
Reserve group antibiotics
Reserve antibiotics are last-resort antibiotics. The EML antibiotic book was published in 2022.
Complementary:
Cefiderocol
Ceftazidime/avibactam (ceftazidime + avibactam)
Ceftolozane/tazobactam (ceftolozane + tazobactam)
Colistin
Fosfomycin
Linezolid
Meropenem/vaborbactam (meropenem + vaborbactam)
Plazomicin
Polymyxin B
Antileprosy medicines
Clofazimine
Dapsone
Rifampicin
Antituberculosis medicines
Ethambutol
Ethambutol/isoniazid/pyrazinamide/rifampicin (ethambutol + isoniazid + pyrazinamide + rifampicin)
Ethambutol/isoniazid/rifampicin (ethambutol + isoniazid + rifampicin)
Ethionamide
Isoniazid
Isoniazid/pyrazinamide/rifampicin (isoniazid + pyrazinamide + rifampicin)
Isoniazid/rifampicin (isoniazid + rifampicin)
Isoniazid/rifapentine (isoniazid + rifapentine)
Moxifloxacin
Pyrazinamide
Rifabutin
Rifampicin
Rifapentine
Complementary:
Amikacin
Amoxicillin/clavulanic acid (amoxicillin + clavulanic acid)
Bedaquiline
Clofazimine
Cycloserine
Delamanid
Ethionamide
Levofloxacin
Linezolid
Meropenem
Moxifloxacin
P-aminosalicylic acid (p-aminosalicylate sodium)
Pretomanid
Streptomycin
Antifungal medicines
Amphotericin B
Clotrimazole
Fluconazole
Flucytosine
Griseofulvin
Itraconazole
Nystatin
Voriconazole
Complementary:
Micafungin
Potassium iodide
Antiviral medicines
Antiherpes medicines
Aciclovir
Antiretrovirals
Nucleoside/nucleotide reverse transcriptase inhibitors
Abacavir
Lamivudine
Tenofovir disoproxil fumarate
Zidovudine
Non-nucleoside reverse transcriptase inhibitors
Efavirenz
Nevirapine
Protease inhibitors
Atazanavir/ritonavir (atazanavir + ritonavir)
Darunavir
Lopinavir/ritonavir (lopinavir + ritonavir)
Ritonavir
Integrase inhibitors
Dolutegravir
Raltegravir
Fixed-dose combinations of antiretroviral medicines
Abacavir/lamivudine (abacavir + lamivudine)
Dolutegravir/lamivudine/tenofovir (dolutegravir + lamivudine + tenofovir)
Efavirenz/emtricitabine/tenofovir
Efavirenz/lamivudine/tenofovir (efavirenz + lamivudine + tenofovir)
Emtricitabine/tenofovir (emtricitabine + tenofovir)
Lamivudine/zidovudine (lamivudine + zidovudine)
Medicines for prevention of HIV-related opportunistic infections
Isoniazid/pyridoxine/sulfamethoxazole/trimethoprim (isoniazid + pyridoxine + sulfamethoxazole + trimethoprim)
Other antivirals
Ribavirin
Valganciclovir
Complementary:
Oseltamivir
Valganciclovir
Antihepatitis medicines
Medicines for hepatitis B
Nucleoside/Nucleotide reverse transcriptase inhibitors
Entecavir
Tenofovir disoproxil fumarate
Medicines for hepatitis C
Pangenotypic direct-acting antiviral combinations
Daclatasvir
Daclatasvir/sofosbuvir (daclatasvir + sofosbuvir)
Glecaprevir/pibrentasvir (glecaprevir + pibrentasvir)
Ravidasvir
Sofosbuvir
Sofosbuvir/velpatasvir (sofosbuvir + velpatasvir)
Non-pangenotypic direct-acting antiviral combinations
Ledipasvir/sofosbuvir (ledipasvir + sofosbuvir)
Other antivirals for hepatitis C
Ribavirin
Antiprotozoal medicines
Antiamoebic and antigiardiasis medicines
Diloxanide
Metronidazole
Antileishmaniasis medicines
Amphotericin B
Meglumine antimoniate
Miltefosine
Paromomycin
Sodium stibogluconate
Antimalarial medicines
For curative treatment
Amodiaquine
Artemether
Artemether/lumefantrine (artemether + lumefantrine)
Artesunate
Artesunate/amodiaquine (artesunate + amodiaquine)
Artesunate/mefloquine (artesunate + mefloquine)
Artesunate/pyronaridine tetraphosphate (artesunate + pyronaridine tetraphosphate)
Chloroquine
Dihydroartemisinin/piperaquine phosphate (dihydroartemisinin + piperaquine phosphate)
Doxycycline
Mefloquine
Primaquine
Quinine
Sulfadoxine/pyrimethamine (sulfadoxine + pyrimethamine)
For chemoprevention
Amodiaquine + sulfadoxine/pyrimethamine (Co-packaged)
Chloroquine
Doxycycline
Mefloquine
Proguanil
Sulfadoxine/pyrimethamine (sulfadoxine + pyrimethamine)
Antipneumocystosis and antitoxoplasmosis medicines
Pyrimethamine
Sulfadiazine
Sulfamethoxazole/trimethoprim (sulfamethoxazole + trimethoprim)
Complementary:
Pentamidine
Antitrypanosomal medicines
African trypanosomiasis
Fexinidazole
Medicines for the treatment of 1st stage African trypanosomiasis
Pentamidine
Suramin sodium
Medicines for the treatment of 2nd stage African trypanosomiasis
Eflornithine
Melarsoprol
Nifurtimox
Complementary:
Melarsoprol
American trypanosomiasis
Benznidazole
Nifurtimox
Medicines for ectoparasitic infections
Ivermectin
Medicines for Ebola virus disease
Ansuvimab
Atoltivimab/maftivimab/odesivimab (atoltivimab + maftivimab + odesivimab)
Medicines for COVID-19
No listings in this section.
Antimigraine medicines
For treatment of acute attack
Acetylsalicylic acid (aspirin)
Ibuprofen
Paracetamol (acetaminophen)
Sumatriptan
For prophylaxis
Propranolol
Immunomodulators and antineoplastics
Immunomodulators for non-malignant disease
Complementary:
Adalimumab
Azathioprine
Ciclosporin
Tacrolimus
Antineoplastics and supportive medicines
Cytotoxic medicines
Complementary:
Arsenic trioxide
Asparaginase
Bendamustine
Bleomycin
Calcium folinate (leucovorin calcium)
Capecitabine
Carboplatin
Chlorambucil
Cisplatin
Cyclophosphamide
Cytarabine
Dacarbazine
Dactinomycin
Daunorubicin
Docetaxel
Doxorubicin
Doxorubicin (as pegylated liposomal)
Etoposide
Fludarabine
Fluorouracil
Gemcitabine
Hydroxycarbamide (hydroxyurea)
Ifosfamide
Irinotecan
Melphalan
Mercaptopurine
Methotrexate
Oxaliplatin
Paclitaxel
Pegaspargase
Procarbazine
Realgar Indigo naturalis formulation
Tioguanine
Vinblastine
Vincristine
Vinorelbine
Targeted therapies
Complementary:
All-trans retinoic acid (tretinoin) (ATRA)
Bortezomib
Dasatinib
Erlotinib
Everolimus
Ibrutinib
Imatinib
Nilotinib
Rituximab
Trastuzumab
Immunomodulators
Complementary:
Filgrastim
Lenalidomide
Nivolumab
Pegfilgrastim
Thalidomide
Hormones and antihormones
Complementary:
Abiraterone
Anastrozole
Bicalutamide
Dexamethasone
Hydrocortisone
Leuprorelin
Methylprednisolone
Prednisolone
Tamoxifen
Supportive medicines
Complementary:
Allopurinol
Mesna
Rasburicase
Zoledronic acid
Therapeutic foods
Ready-to-use therapeutic food
Medicines affecting the blood
Antianaemia medicines
Ferrous salt
Ferrous salt/folic acid (ferrous salt + folic acid)
Folic acid
Hydroxocobalamin
Complementary:
Erythropoiesis-stimulating agents
Medicines affecting coagulation
Dabigatran
Enoxaparin
Heparin sodium
Phytomenadione
Protamine sulfate
Tranexamic acid
Warfarin
Complementary:
Desmopressin
Heparin sodium
Protamine sulfate
Warfarin
Other medicines for haemoglobinopathies
Deferasirox
Complementary:
Deferoxamine
Hydroxycarbamide (hydroxyurea)
Blood products of human origin and plasma substitutes
Blood and blood components
Cryoprecipitate, pathogen-reduced
Fresh frozen plasma
Platelets
Red blood cells
Whole blood
Plasma-derived medicines
Human immunoglobulins
Rho(D) immune globulin (anti-D immunoglobulin)
Anti-rabies immunoglobulin
Anti-tetanus immunoglobulin
Complementary:
Normal immunoglobulin
Blood coagulation factors
Complementary:
Coagulation factor VIII
Coagulation factor IX
Plasma substitutes
Dextran 70
Cardiovascular medicines
Antianginal medicines
Bisoprolol
Glyceryl trinitrate
Isosorbide dinitrate
Verapamil
Antiarrhythmic medicines
Bisoprolol
Digoxin
Epinephrine (adrenaline)
Lidocaine
Verapamil
Complementary:
Amiodarone
Antihypertensive medicines
Amlodipine
Bisoprolol
Enalapril
Hydralazine
Hydrochlorothiazide
Lisinopril/amlodipine (lisinopril + amlodipine)
Lisinopril/hydrochlorothiazide (lisinopril + hydrochlorothiazide)
Losartan
Methyldopa
Telmisartan/amlodipine (telmisartan + amlodipine)
Telmisartan/hydrochlorothiazide (telmisartan + hydrochlorothiazide)
Complementary:
Sodium nitroprusside
Medicines used in heart failure
Bisoprolol
Digoxin
Enalapril
Furosemide
Hydrochlorothiazide
Losartan
Spironolactone
Complementary:
Digoxin
Dopamine
Antithrombotic medicines
Anti-platelet medicines
Acetylsalicylic acid (aspirin)
Clopidogrel
Thrombolytic medicines
Complementary:
Alteplase
Streptokinase
Lipid-lowering agents
Simvastatin
Fixed-dose combinations for prevention of atherosclerotic cardiovascular disease
Acetylsalicylic acid/atorvastatin/ramipril (acetylsalicylic acid + atorvastatin + ramipril)
Acetylsalicylic acid/simvastatin/ramipril/atenolol/hydrochlorothiazide (acetylsalicylic acid + simvastatin + ramipril + atenolol + hydrochlorothiazide)
Atorvastatin/perindopril/amlodipine (atorvastatin + perindopril + amlodipine)
Dermatological medicines (topical)
Antifungal medicines
Miconazole
Selenium sulfide
Sodium thiosulfate
Terbinafine
Anti-infective medicines
Mupirocin
Potassium permanganate
Silver sulfadiazine
Anti-inflammatory and antipruritic medicines
Betamethasone
Calamine
Hydrocortisone
Medicines affecting skin differentiation and proliferation
Benzoyl peroxide
Calcipotriol
Coal tar
Fluorouracil
Podophyllum resin
Salicylic acid
Urea
Complementary:
Methotrexate
Scabicides and pediculicides
Benzyl benzoate
Permethrin
Diagnostic agents
Ophthalmic medicines
Fluorescein
Tropicamide
Radiocontrast media
Amidotrizoate
Barium sulfate
Iohexol
Complementary:
Barium sulfate
Meglumine iotroxate
Antiseptics and disinfectants
Antiseptics
Chlorhexidine
Ethanol
Povidone iodine
Disinfectants
Alcohol based hand rub
Chlorine base compound
Chloroxylenol
Glutaral
Diuretics
Amiloride
Furosemide
Hydrochlorothiazide
Mannitol
Spironolactone
Complementary:
Hydrochlorothiazide
Mannitol
Spironolactone
Gastrointestinal medicines
Complementary:
Pancreatic enzymes
Antiulcer medicines
Omeprazole
Ranitidine
Antiemetic medicines
Dexamethasone
Metoclopramide
Ondansetron
Complementary:
Aprepitant
Anti-inflammatory medicines
Sulfasalazine
Complementary:
Hydrocortisone
Prednisolone
Laxatives
Senna
Medicines used in diarrhoea
Oral rehydration salts + zinc sulfate (Co-packaged)
Oral rehydration
Oral rehydration salts
Medicines for diarrhoea
Zinc sulfate
Medicines for endocrine disorders
Adrenal hormones and synthetic substitutes
Fludrocortisone
Hydrocortisone
Androgens
Complementary:
Testosterone
Estrogens
No listings in this section.
Progestogens
Medroxyprogesterone acetate
Medicines for diabetes
Insulins
Insulin injection (soluble)
Intermediate-acting insulin
Long-acting insulin analogues
Oral hypoglycaemic agents
Empagliflozin
Gliclazide
Metformin
Complementary:
Metformin
Medicines for hypoglycaemia
Glucagon
Complementary:
Diazoxide
Thyroid hormones and antithyroid medicines
Levothyroxine
Potassium iodide
Methimazole
Propylthiouracil
Complementary:
Lugol's solution
Methimazole
Potassium iodide
Propylthiouracil
Medicines for disorders of the pituitary hormone system
Cabergoline
Complementary:
Octreotide
Immunologicals
Diagnostic agents
Tuberculin, purified protein derivative (PPD)
Sera, immunoglobulins and monoclonal antibodies
Anti-rabies virus monoclonal antibodies
Antivenom immunoglobulin
Diphtheria antitoxin
Equine rabies immunoglobulin
Vaccines
Recommendations for all
BCG vaccine
Diphtheria vaccine
Haemophilus influenzae type b vaccine
Hepatitis B vaccine
Human papilloma virus (HPV) vaccine
Measles vaccine
Pertussis vaccine
Pneumococcal vaccine
Poliomyelitis vaccine
Rotavirus vaccine
Rubella vaccine
Tetanus vaccine
Recommendations for certain regions
Japanese encephalitis vaccine
Tick-borne encephalitis vaccine
Yellow fever vaccine
Recommendations for some high-risk populations
Cholera vaccine
Dengue vaccine
Hepatitis A vaccine
Meningococcal meningitis vaccine
Rabies vaccine
Typhoid vaccine
Recommendations for immunization programmes with certain characteristics
Influenza vaccine (seasonal)
Mumps vaccine
Varicella vaccine
Muscle relaxants (peripherally-acting) and cholinesterase inhibitors
Atracurium
Neostigmine
Suxamethonium
Vecuronium
Complementary:
Pyridostigmine
Vecuronium
Ophthalmological preparations
Anti-infective agents
Aciclovir
Azithromycin
Erythromycin
Gentamicin
Natamycin
Ofloxacin
Tetracycline
Anti-inflammatory agents
Prednisolone
Local anesthetics
Tetracaine
Miotics and antiglaucoma medicines
Acetazolamide
Latanoprost
Pilocarpine
Timolol
Mydriatics
Atropine
Complementary:
Epinephrine (adrenaline)
Anti-vascular endothelial growth factor (VEGF) preparations
Complementary:
Bevacizumab
Medicines for reproductive health and perinatal care
Contraceptives
Oral hormonal contraceptives
Ethinylestradiol/levonorgestrel (ethinylestradiol + levonorgestrel)
Ethinylestradiol/norethisterone (ethinylestradiol + norethisterone)
Levonorgestrel
Ulipristal
Injectable hormonal contraceptives
Estradiol cypionate/medroxyprogesterone acetate (estradiol cypionate + medroxyprogesterone acetate)
Medroxyprogesterone acetate
Norethisterone enantate
Intrauterine devices
Copper-containing device
Levonorgestrel-releasing intrauterine system
Barrier methods
Condoms
Diaphragms
Implantable contraceptives
Etonogestrel-releasing implant
Levonorgestrel-releasing implant
Intravaginal contraceptives
Ethinylestradiol/etonogestrel (ethinylestradiol + etonogestrel)
Progesterone vaginal ring
Ovulation inducers
Complementary:
Clomifene
Letrozole
Uterotonics
Carbetocin
Ergometrine
Mifepristone + misoprostol (Co-packaged)
Misoprostol
Oxytocin
Antioxytocics (tocolytics)
Nifedipine
Other medicines administered to the mother
Dexamethasone
Multiple micronutrient supplement
Tranexamic acid
Medicines administered to the neonate
Caffeine citrate
Chlorhexidine
Complementary:
Ibuprofen
Prostaglandin E1
Surfactant
Peritoneal dialysis solution
Complementary:
Intraperitoneal dialysis solution (of appropriate composition)
Medicines for mental and behavioural disorders
Medicines used in psychotic disorders
Fluphenazine
Haloperidol
Olanzapine
Paliperidone
Risperidone
Complementary:
Clozapine
Medicines used in mood disorders
Medicines used in depressive disorders
Amitriptyline
Fluoxetine
Medicines used in bipolar disorders
Carbamazepine
Lithium carbonate
Quetiapine
Valproic acid (sodium valproate)
Medicines for anxiety disorders
Diazepam
Fluoxetine
Medicines used for obsessive compulsive disorders
Clomipramine
Fluoxetine
Medicines for disorders due to psychoactive substance use
Medicines for alcohol use disorders
Acamprosate calcium
Naltrexone
Medicines for nicotine use disorders
Bupropion
Nicotine replacement therapy (NRT)
Varenicline
Complementary:
Methadone
Medicines acting on the respiratory tract
Antiasthmatic medicines and medicines for chronic obstructive pulmonary disease
Budesonide
Budesonide/formoterol (budesonide + formoterol)
Epinephrine (adrenaline)
Ipratropium bromide
Salbutamol
Tiotropium
Solutions correcting water, electrolyte and acid-base disturbances
Oral
Oral rehydration salts
Potassium chloride
Parenteral
Glucose
Glucose with sodium chloride
Potassium chloride
Sodium chloride
Sodium hydrogen carbonate
Sodium lactate, compound solution (Ringer's lactate solution)
Miscellaneous
Water for injection
Vitamins and minerals
Ascorbic acid
Calcium
Colecalciferol
Ergocalciferol
Iodine
Multiple micronutrient powder
Nicotinamide
Pyridoxine
Retinol
Riboflavin
Thiamine
Complementary:
Calcium gluconate
Ear, nose and throat medicines
Acetic acid
Budesonide
Ciprofloxacin
Xylometazoline
Medicines for diseases of joints
Medicines used to treat gout
Allopurinol
Disease-modifying anti-rheumatic drugs (DMARDs)
Chloroquine
Complementary:
Azathioprine
Hydroxychloroquine
Methotrexate
Penicillamine
Sulfasalazine
Medicines for juvenile joint diseases
Complementary:
Acetylsalicylic acid (aspirin)
Adalimumab
Methotrexate
Triamcinolone hexacetonide
Dental medicines and preparations
Fluoride
Glass ionomer cement
Resin-based composite (low-viscosity)
Resin-based composite (high-viscosity)
Silver diamine fluoride
Notes
An α indicates the medicine is on the complementary list for which specialized diagnostic or monitoring or training is needed. An item may also be listed as complementary on the basis of higher costs or a less attractive cost-benefit ratio.
References
Further reading
External links
Drug-related lists
Publications established in 1977
Wikipedia medicine articles ready to translate | WHO Model List of Essential Medicines | [
"Chemistry"
] | 5,840 | [
"Drug-related lists"
] |
12,040,220 | https://en.wikipedia.org/wiki/Tinfoil%20Hat%20Linux | Tinfoil Hat Linux (THL) is a compact security-focused Linux distribution designed for high security developed by The Shmoo Group. The first version (1.000) was released in February 2002. By 2013, it had become a low-priority project. Its image files and source are available in gzip format. THL can be used on modern PCs using an Intel 80386 or better, with at least 8 MB of RAM. The distribution fits on a single HD floppy disk. The small footprint provides additional benefits beyond making the system easy to understand and verify. A hard drive is not required to use THL, making it easier to "sanitize" the computer after use.
The logo of Tinfoil Hat is Tux, the Linux mascot, wearing a tinfoil hat.
The Shmoo Group website says "It started as a secure, single floppy, bootable Linux distribution for storing PGP keys and then encrypting, signing, and wiping files. At some point, it became an exercise in over-engineering."
Security features
Tinfoil Hat uses a number of measures to defeat hardware and software surveillance methods like keystroke logging, video camera, and TEMPEST:
Encryption — GNU Privacy Guard (GPG) public key cryptography software is included in THL.
Data retrieval — All temporary files are created on an encrypted RAM disk that is destroyed on shutdown. Even the GPG key file information can be stored encrypted on the floppy.
Keystroke monitoring — THL has GPG Grid, a wrapper for GPG that lets you use a video game-style character entry system instead of typing in your passphrase. Keystroke loggers get a set of grid points, instead of a passphrase.
Power usage and other side-channel attacks — Under the Paranoid options, a copy of GPG runs in the background generating keys and encrypting random documents. This makes it harder to determine when real encryption is taking place.
Reading the screen over the user's shoulder is made difficult when Tinfoil Hat is switched to paranoid mode, which sets the screen to a very low contrast.
Applications
THL can be used on most modern PCs using the x86 processor architecture. For example, one might install it on a computer that is kept in a locked room, not connected to any network, and used only for cryptographically signing keys. It is fairly easy to create the Tinfoil Hat booting floppy with Microsoft Windows. Verifying the checksum can pose a greater challenge. The text of the documentation is salted with a few jokes, the humor working in stark contrast to the serious and paranoiac tone of the surrounding text. The very name of the distribution pokes fun at itself, as Tinfoil Hats are commonly ascribed to paranoiacs as a method of protecting oneself from mind-control waves.
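Verifying a downloaded image against a published checksum can be scripted in a few lines. The following Python sketch is illustrative only: the file name tinfoilhat.img and the expected digest are placeholders, and the actual release may publish sums using a different hash algorithm.

```python
import hashlib

def sha256sum(path, bufsize=1 << 16):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# Placeholders: substitute the image file and the checksum actually
# published for the release.
expected = "put-the-published-digest-here"
if sha256sum("tinfoilhat.img") == expected:
    print("checksum OK - image matches the published digest")
else:
    print("checksum MISMATCH - do not trust this image")
```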
Tinfoil Hat Linux requires working in a text-only environment, starting users with a Bourne shell, the text editor vi, and no graphical user interface. It uses BusyBox instead of the normal Util-Linux, the GNU Core Utilities (formerly known as FileUtils, ShellUtils, and TextUtils), and other common Unix tools. Tinfoil Hat also offers the GNU nano text editor.
See also
List of LiveDistros
Damn Small Linux
Security-focused operating system
Tin Hat Linux
OpenBSD
References
External links
Official website
Evilmutant.com article about Tinfoil Hat Linux, with screenshots
Another evilmutant.com article giving links to other media which picked up the previous article
Cryptographic software
Floppy-based Linux distributions
Floppy disk-based operating systems
RPM-based Linux distributions
Linux distributions | Tinfoil Hat Linux | [
"Mathematics"
] | 760 | [
"Cryptographic software",
"Mathematical software"
] |
12,040,404 | https://en.wikipedia.org/wiki/Diurnal%20air%20temperature%20variation | In meteorology, diurnal temperature variation is the variation between a high air temperature and a low temperature that occurs during the same day.
Temperature lag
Temperature lag, also known as thermal inertia, is an important factor in diurnal temperature variation. Peak daily temperature generally occurs after noon, as air keeps absorbing net heat for a period of time from morning through noon and some time thereafter. Similarly, minimum daily temperature generally occurs substantially after midnight, indeed occurring during early morning in the hour around dawn, since heat is lost all night long. The analogous annual phenomenon is seasonal lag.
As solar energy strikes the Earth's surface each morning, a shallow layer of air directly above the ground is heated by conduction. Heat exchange between this shallow layer of warm air and the cooler air above is very inefficient. On a warm summer's day, for example, air temperatures may vary considerably between just above the ground and chest height. Incoming solar radiation exceeds outgoing heat energy for many hours after noon, and equilibrium is usually reached between 3 and 5 p.m., though this may be affected by a variety of factors such as large bodies of water, soil type and cover, wind, cloud cover/water vapor, and moisture on the ground.
Differences in variation
Diurnal temperature variations are greatest very near Earth's surface. The Tibetan and Andean Plateaus present some of the largest differences in daily temperature on the planet, as do the Western US and the western portion of southern Africa.
High desert regions typically have the greatest diurnal temperature variations, while low-lying humid areas near the shores (tropical, oceanic, and arctic) typically have the least. Large cities (urban heat islands) also tend to have a lower diurnal temperature variation than surrounding areas. This explains why an area like Pinnacles National Park can be very hot during a summer day and then markedly cool at night, while the much more humid Washington, D.C. and urban Hong Kong show only small day-to-night temperature ranges.
While the National Park Service claimed that the world single-day record is a variation of 100 °F (56 °C) (from 44 °F (7 °C) to −56 °F (−49 °C)) in Browning, Montana in 1916, the Montana Department of Environmental Quality claimed that Loma, Montana had an even larger variation of 103 °F (57 °C) (from −54 °F (−48 °C) to 49 °F (9 °C)) in 1972. Both these extreme daily temperature changes were the result of sharp air-mass changes within a single day. The 1916 event was an extreme temperature drop, resulting from frigid Arctic air from Canada invading northern Montana, displacing a much warmer air mass. The 1972 event was a chinook event, where air from the Pacific Ocean overtopped mountain ranges to the west, and dramatically warmed in its descent into Montana, displacing frigid Arctic air and causing a drastic temperature rise.
In the absence of such extreme air-mass changes, diurnal temperature variations are smallest in humid, tropical areas and largest in higher-elevation, arid to semi-arid areas, such as parts of the U.S. Western states' Intermountain Plateau region, for example Elko, Nevada; Ashton, Idaho; and Burns, Oregon. The higher the humidity, the lower the diurnal temperature variation.
In Europe, due to its more northern latitude and close proximity to large warm water bodies (such as the Mediterranean), differences in daily temperature are not as pronounced as on other continents. However, places in Southern Europe significantly far from the Mediterranean, such as Southwestern Iberia (e.g. Alvega or Badajoz) and the high-altitude plateaus of Turkey (if considered part of Europe, e.g. Kayseri), tend to have large differences in daily temperature.
In Australia, significant diurnal temperature variations generally occur in the Red Centre around Alice Springs and Uluru.
Viticulture
Diurnal temperature variation is of particular importance in viticulture. Wine regions situated in areas of high altitude experience the most dramatic swing in temperature variation during the course of a day. In grapes, this variation has the effect of producing high acid and high sugar content as the grapes' exposure to sunlight increases the ripening qualities while the sudden drop in temperature at night preserves the balance of natural acids in the grape.
See also
Diurnal cycle
References
Meteorological phenomena
Daily events
Atmospheric temperature | Diurnal air temperature variation | [
"Physics"
] | 912 | [
"Meteorological phenomena",
"Physical phenomena",
"Earth phenomena"
] |
12,040,981 | https://en.wikipedia.org/wiki/Gibbs%20and%20Canning | Gibbs and Canning Limited was an English manufacturer of terracotta and, in particular, architectural terracotta, located in Glascote, Tamworth, and founded in 1847.
The company manufactured a wide range of terracotta and faience: statues of lions and pelicans to adorn the Natural History Museum in London; architectural terracotta for banks and schools; and garden urns and planters. By the 1950s, when the factory finally closed, it was best known for more practical items, such as drainage pipes, sinks, vases and jars.
Today, there is little evidence of the factory in Glascote, but the legacy lives on in the decoration and plumbing of many buildings in Britain’s major towns and cities.
Buildings featuring Gibbs and Canning terracotta
Natural History Museum, South Kensington, London. Designed by Alfred Waterhouse. Both the interior and exterior statues, and the block-work, are Gibbs and Canning (G&C).
Royal Albert Hall, South Kensington, London. The buff, ornamental terracotta on the exterior.
142 Holborn Bars, Prudential Assurance Building, Holborn, London. Designed by Alfred Waterhouse with all the red terracotta by G&C.
Methodist Central Hall, Birmingham. Ornate, red terracotta.
Imperial Buildings, Victoria Street/Whitechapel corner, Liverpool, 1879. Cream terracotta.
Church of the Holy Name of Jesus, Manchester. Roof vaulting of hollow terracotta blocks, 1869–71.
Manchester Town Hall Designed, again by Alfred Waterhouse.
Victoria Law Courts, Birmingham. Interior buff-coloured terracotta.
References
Further reading
Streluk, A. (2006) "Gibbs & Canning of Glascote, Tamworth", Glazed Expressions, No.55 Spring
External links
Research page including details of many buildings that used Gibbs and Canning terracotta
Chemlinski Gallery - English Terracotta
Tamworth Castle - has a small display Gibbs and Canning wares and manufacturing techniques
Building materials companies of the United Kingdom
Ceramics manufacturers of England
Staffordshire pottery
Terracotta
Design companies established in 1847
Manufacturer of architectural terracotta
Manufacturing companies established in 1847
1847 establishments in England | Gibbs and Canning | [
"Engineering"
] | 446 | [
"Manufacturer of architectural terracotta",
"Architecture"
] |
12,041,060 | https://en.wikipedia.org/wiki/Di-%CF%80-methane%20rearrangement | In organic chemistry, the di-π-methane rearrangement is the photochemical rearrangement of a molecule that contains two π-systems separated by a saturated carbon atom. In the aliphatic case, this molecule is a 1,4-diene; in the aromatic case, an allyl-substituted arene. The reaction forms (respectively) an ene- or aryl-substituted cyclopropane. Formally, it amounts to a 1,2-shift of one ene group (in the diene) or the aryl group (in the allyl-aromatic analog), followed by bond formation between the lateral carbons of the non-migrating moiety:
Discovery
This rearrangement was originally encountered in the photolysis of barrelene to give semibullvalene. Once the mechanism was recognized as general by Howard Zimmerman in 1967, it was clear that the structural requirement was two π groups attached to an sp3-hybridized carbon, and then a variety of further examples was obtained.
Notable examples
One example was the photolysis of Mariano's compound, 3,3-dimethyl-1,1,5,5-tetraphenyl-1,4-pentadiene. In this symmetric diene, the active π bonds are conjugated to arenes, which does not inhibit the reaction.
Another was the asymmetric Pratt diene. Pratt's diene demonstrates that the reaction preferentially cyclopropanates aryl substituents, because the reaction pathway preserves the resonant stabilization of a benzhydrylic radical intermediate.
The barrelene rearrangement is more complex than the Mariano and Pratt examples since there are two sp3-hybridized carbons. Each bridgehead carbon has three (ethylenic) π bonds, and any two can undergo the di-π-methane rearrangement. Moreover, unlike the acyclic Mariano and Pratt dienes, the barrelene reaction requires a triplet excited state. Thus acetone is used in the barrelene reaction; acetone captures the light and then delivers triplet excitation to the barrelene reactant. In the final step of the rearrangement there is a spin flip, to provide paired electrons and a new σ bond.
As excited-state probe
The dependence of the di-π-methane rearrangement on the multiplicity of the excited state arises from the free-rotor effect. Triplet 1,4-dienes freely undergo cis-trans interconversion of diene double bonds (i.e. free rotation). In acyclic dienes, this free rotation leads to diradical reconnection, short-circuiting the di-π-methane process. Singlet excited states do not rotate and may thus undergo the di-π-methane mechanism. For cyclic dienes, as in the barrelene example, the ring structure can prevent free-rotatory dissipation, and may in fact require bond rotation to complete the rearrangement.
References
Rearrangement reactions | Di-π-methane rearrangement | [
"Chemistry"
] | 626 | [
"Rearrangement reactions",
"Organic reactions"
] |
12,041,382 | https://en.wikipedia.org/wiki/Decimal%20computer | A decimal computer is a computer that represents and operates on numbers and addresses in decimal format instead of binary as is common in most modern computers. Some decimal computers had a variable word length, which enabled operations on relatively large numbers.
Decimal computers were common from the early machines through the 1960s and into the 1970s. Using decimal directly removed the need to convert between decimal and binary for input and output and offered a significant speed improvement over binary machines that performed these conversions using subroutines. This allowed otherwise low-end machines to offer practical performance for roles like accounting and bookkeeping, and many low- and mid-range systems of the era were decimal based.
The IBM System/360 line of binary computers, announced in 1964, included instructions that perform decimal arithmetic; other lines of binary computers with decimal arithmetic instructions followed. During the 1970s, microprocessors with instructions supporting decimal arithmetic became common in electronic calculators, cash registers and similar roles, especially in the 8-bit era.
The rapid improvements in general performance of binary machines eroded the value of decimal operations. One of the last major new designs to support it was the Motorola 68000, which shipped in 1980. More recently, IBM added decimal support to their POWER6 designs to allow them to directly support programs written for 1960s platforms like the System/360. With that exception, most modern designs have little or no decimal support.
Early computers
Early computers that were exclusively decimal include the ENIAC, IBM NORC, IBM 650, IBM 1620, IBM 7070, UNIVAC Solid State 80. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal (BCD), bi-quinary and two-out-of-five code. Except for the IBM 1620 and 1710, these machines used word addressing. When non-numeric characters were used in these machines, they were encoded as two decimal digits.
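To make the digit encodings concrete, here is a minimal Python sketch of binary-coded decimal and a two-out-of-five code. The 0-1-2-4-7 weight set used below is one common assignment, chosen purely for illustration rather than to describe any particular machine listed above (different machines used different weight sets).

```python
from itertools import combinations

def to_bcd(n: int) -> str:
    """Encode a non-negative integer as binary-coded decimal, 4 bits per digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

# Two-out-of-five code: each digit is 5 bits, exactly two of which are set.
# Weights 0-1-2-4-7 are one common assignment (0 is represented by 4+7).
WEIGHTS = (0, 1, 2, 4, 7)

def two_out_of_five(digit: int) -> str:
    for bits in combinations(range(5), 2):
        total = WEIGHTS[bits[0]] + WEIGHTS[bits[1]]
        if total == digit or (digit == 0 and total == 11):  # 4+7 encodes 0
            return "".join("1" if i in bits else "0" for i in range(5))
    raise ValueError("digit out of range")

print(to_bcd(1964))        # 0001 1001 0110 0100
print(two_out_of_five(7))  # exactly two bits set, weights summing to 7
```

Note how every valid two-out-of-five codeword has exactly two set bits, so any single bit flip is detectable; this self-checking property is why such codes were attractive in early hardware.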
Other early computers were character oriented, providing instructions for performing arithmetic on character strings of decimal numerals, using BCD or excess-3 (XS-3) for decimal digits. On these machines, the basic data element was an alphanumeric character, typically encoded in six bits. UNIVAC I and UNIVAC II used word addressing, with 12-character words. IBM examples include IBM 702, IBM 705, the IBM 1400 series, IBM 7010, and the IBM 7080.
Some early binary computers, such as the Honeywell 800 and the RCA 601, also had decimal arithmetic instructions. Some others had special instructions, such as CVR and CAQ on the IBM 7090, that could be used to speed up decimal addition and the conversion of decimal to binary.
Later computers
The IBM System/360 family of computers, introduced in 1964 to unify IBM's product lines, uses binary addressing, binary integer arithmetic, and binary floating-point; it also includes instructions for packed decimal integer arithmetic.
Some other lines of binary computers added decimal arithmetic instructions. For example, the Honeywell 6000 series, based on the binary GE-600 series, offered, in some models, an Extended Instruction Set that supported packed decimal integer arithmetic and decimal floating-point arithmetic.
IBM's lines of midrange computers, starting with the System/3 in 1969, are binary computers with decimal integer instructions.
The VAX line of 32-bit binary computers from Digital Equipment Corporation, introduced in 1977, also includes packed decimal integer arithmetic instructions.
The Burroughs Medium Systems, beginning with the Burroughs B2500 and B3500 in 1966, provides only decimal arithmetic, including decimal addressing, making it a decimal architecture.
More modern computers
Support for BCD was common in early microprocessors, which were often used in roles like electronic calculators and cash registers where the math was all decimal. Examples of such support can be found in the Intel 8080, MOS 6502, Zilog Z80, Motorola 6800/6809 and most other designs of the era. In these designs, BCD was directly supported in the ALU, allowing it to perform operations on decimal data directly.
Intel BCD opcodes have remained in the x86 family to this day, although they are not supported in long mode. These instructions convert one-byte BCD numbers (packed and unpacked) to binary format before or after arithmetic operations. These operations were not extended to wider formats and hence are now slower than using 32-bit or wider BCD "tricks" to compute in BCD. The x87 FPU has instructions to convert 10-byte (18 decimal digits) packed decimal data, although it then operates on them as floating-point numbers.
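The decimal-adjust step mentioned above can be illustrated in Python. The sketch below mimics the effect of a binary byte add followed by an x86 DAA-style correction on packed BCD operands; it is an illustration of the adjustment logic, not a cycle-accurate model of the x86 instruction (for instance, the carry out of the byte is simply dropped).

```python
def packed_bcd_add(a: int, b: int) -> int:
    """Add two packed-BCD bytes (two decimal digits each), mimicking a
    binary add followed by a DAA-style decimal-adjust correction."""
    result = a + b
    # If the low nibble overflowed past 9 (or produced a half-carry),
    # add 6 so the excess propagates into the tens digit.
    if (result & 0x0F) > 9 or ((a & 0x0F) + (b & 0x0F)) > 0x0F:
        result += 0x06
    # Same correction for the high nibble; the byte's carry is dropped here.
    if (result & 0xF0) > 0x90 or result > 0xFF:
        result += 0x60
    return result & 0xFF

print(hex(packed_bcd_add(0x38, 0x45)))  # 0x83 (38 + 45 = 83)
print(hex(packed_bcd_add(0x99, 0x01)))  # 0x0  (99 + 1 = 100; carry dropped)
```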
The Motorola 68000 series offered conversion utilities as well as the ability to directly add and subtract in BCD. These instructions were removed when the ColdFire instruction set was defined.
The 2008 revision of the IEEE 754 floating-point standard adds three decimal types with two binary encodings, with 7-, 16-, and 34-digit decimal significands.
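Python's decimal module follows the General Decimal Arithmetic specification, with which the IEEE 754-2008 decimal formats are aligned, so it can illustrate the three significand precisions. The sketch below mirrors only the precisions of decimal32/64/128, not their interchange encodings.

```python
from decimal import Decimal, Context

# Contexts with the significand precisions of decimal32, decimal64, decimal128.
for digits in (7, 16, 34):
    ctx = Context(prec=digits)
    print(digits, ctx.divide(Decimal(1), Decimal(3)))

# The classic motivation: 0.1 is exact in decimal but not in binary.
print(Decimal("0.10") + Decimal("0.20"))  # 0.30, exactly
print(0.1 + 0.2)                          # 0.30000000000000004 in binary float
```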
One of the few RISC instruction sets to directly support decimal is IBM's Power ISA, which added support for IEEE 754-2008 decimal floating-point starting with Power ISA 2.05. Decimal integer support had been part of their mainframe line, and, as part of the broader effort to merge the iSeries and zSeries, decimal arithmetic was added to the POWER line so that a single processor could support workloads from these older machines with full performance. The IBM POWER6 processor is the first Power ISA processor that implemented these types, using the densely packed decimal binary encoding rather than BCD. Starting with Power ISA 3.0, decimal integer arithmetic instructions were added.
z/Architecture, the 64-bit version of IBM's mainframe instruction set, added support for the same encodings of IEEE 754 decimal floating-point, starting with the IBM System z9. Starting with the z15 processor, vector instructions to perform decimal integer arithmetic were added.
See also
References
Further reading
Classes of computers
Early computers
Decimal computers | Decimal computer | [
"Technology"
] | 1,293 | [
"Classes of computers",
"Computers",
"Computer systems"
] |
12,041,842 | https://en.wikipedia.org/wiki/The%20Oil%20Gush%20in%20Balakhany | The Oil Gush in Balakhany () is a film written and directed by the pioneer of cinema in Azerbaijan, Alexandre Michon. It was filmed on August 4, 1898, in Balakhany, Baku and presented at the International Paris Exhibition. The film was shot using a 35 mm film on a Lumière cinematograph and is considered the first film in Azerbaijani cinematography. It depicts a blowout from an oil well in the Balakhany village of Baku.
See also
List of Azerbaijani films before 1920
References
External links
1898 films
1898 short films
Azerbaijani silent short films
Azerbaijani short documentary films
Petroleum industry in Azerbaijan
Oil wells
1890s short documentary films
Films of the Russian Empire | The Oil Gush in Balakhany | [
"Chemistry"
] | 137 | [
"Petroleum technology",
"Oil wells"
] |
12,042,738 | https://en.wikipedia.org/wiki/Terephthalic%20acid%20%28data%20page%29 | This page provides supplementary chemical data on Terephthalic acid, the organic compound and one of three isomeric phthalic acids, all with formula C6H4(CO2H)2.
Material Safety Data Sheet
The handling of this chemical may require notable safety precautions, which are set forth on the Material Safety Datasheet (MSDS) for it.
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Terephthalic acid (data page) | [
"Chemistry"
] | 98 | [
"Chemical data pages",
"nan"
] |
12,043,978 | https://en.wikipedia.org/wiki/Electrophoresis%20%28journal%29 | Electrophoresis is a peer-reviewed scientific journal covering all aspects of electrophoresis, including new or improved analytical and preparative methods, development of theory, and innovative applications of electrophoretic methods in the study of proteins, nucleic acids, and other compounds.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.535, ranking it 27th out of 87 journals in the category "Chemistry, Analytical" and 29th out of 78 in the category "Biochemical Research Methods".
References
External links
Electrophoresis
Biochemistry journals
Electrochemistry journals
English-language journals
Academic journals established in 1980
Wiley-VCH academic journals | Electrophoresis (journal) | [
"Chemistry",
"Biology"
] | 152 | [
"Biochemistry journals",
"Physical chemistry stubs",
"Biochemistry journal stubs",
"Instrumental analysis",
"Biochemical separation processes",
"Electrochemistry journals",
"Biochemistry stubs",
"Electrochemistry",
"Molecular biology techniques",
"Biochemistry literature",
"Physical chemistry jo... |
13,587,617 | https://en.wikipedia.org/wiki/Cameron%E2%80%93Erd%C5%91s%20conjecture | In combinatorics, the Cameron–Erdős conjecture (now a theorem) is the statement that the number of sum-free sets contained in {1, ..., N} is O(2^(N/2)).
The sum of two odd numbers is even, so a set of odd numbers is always sum-free. There are ⌈N/2⌉ odd numbers in [N], and so 2^⌈N/2⌉ subsets of odd numbers in [N]. The Cameron–Erdős conjecture says that this counts a constant proportion of the sum-free sets.
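For small N, the count can be checked directly by brute force. The Python sketch below enumerates all subsets, so it is exponential in N and only feasible for tiny inputs; it compares the count against 2^(N/2).

```python
from itertools import combinations

def is_sum_free(s: frozenset) -> bool:
    """A set is sum-free if no a, b in s (a = b allowed) have a + b in s."""
    return all(a + b not in s for a in s for b in s)

def count_sum_free(n: int) -> int:
    universe = range(1, n + 1)
    return sum(
        1
        for r in range(n + 1)
        for c in combinations(universe, r)
        if is_sum_free(frozenset(c))
    )

# Exponential in N -- only feasible for very small N.
for n in range(1, 13):
    c = count_sum_free(n)
    print(n, c, round(c / 2 ** (n / 2), 2))
```

The third column (the count divided by 2^(N/2)) illustrates the bounded ratio that the theorem makes precise.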
The conjecture was stated by Peter Cameron and Paul Erdős in 1988. It was proved by Ben Green and independently by Alexander Sapozhenko in 2003.
See also
Erdős conjecture
Notes
Additive number theory
Combinatorics
Theorems in discrete mathematics
Paul Erdős
Conjectures that have been proved | Cameron–Erdős conjecture | [
"Mathematics"
] | 158 | [
"Discrete mathematics",
"Combinatorics",
"Theorems in discrete mathematics",
"Conjectures that have been proved",
"Combinatorics stubs",
"Mathematical problems",
"Mathematical theorems"
] |
13,588,039 | https://en.wikipedia.org/wiki/Dermal%20adhesive | A dermal adhesive (or skin glue) is a glue used to close wounds in the skin as an alternative to sutures, staples, or clips.
Glued closure results in less scarring and is less prone to infection than sutured or stapled closure. There is also no residual closure to remove, so follow-up visits for removal are not required.
Research is ongoing into biodegradable glue for use inside the body, which can be broken down safely by the body.
Products
See also
Liquid bandage
Bone cement
References
Surgical suture material | Dermal adhesive | [
"Physics"
] | 116 | [
"Materials stubs",
"Materials",
"Matter"
] |
13,588,444 | https://en.wikipedia.org/wiki/Network%20virtualization | In computing, network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization.
Network virtualization is categorized as either external virtualization, combining many networks or parts of networks into a virtual unit, or internal virtualization, providing network-like functionality to software containers on a single network server.
In software testing, software developers use network virtualization to test software under development in a simulation of the network environments in which the software is intended to operate. As a component of application performance engineering, network virtualization enables developers to emulate connections between applications, services, dependencies, and end users in a test environment without having to physically test the software on all possible hardware or system software. The validity of the test depends on the accuracy of the network virtualization in emulating real hardware and operating systems.
Components
Various equipment and software vendors offer network virtualization by combining any of the following:
Network hardware, such as switches and network adapters, also known as network interface cards (NICs)
Network elements, such as firewalls and load balancers
Networks, such as virtual LANs (VLANs) and containers such as virtual machines (VMs)
Network storage devices
Network machine-to-machine elements, such as telecommunications devices
Network mobile elements, such as laptop computers, tablet computers, and smartphones
Network media, such as Ethernet and Fibre Channel
External virtualization
External network virtualization combines or subdivides one or more local area networks (LANs) into virtual networks to improve a large network's or data center's efficiency. A virtual local area network (VLAN) and network switch comprise the key components. Using this technology, a system administrator can configure systems physically attached to the same local network into separate virtual networks. Conversely, an administrator can combine systems on separate local area networks (LANs) into a single VLAN spanning segments of a large network.
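At the frame level, a VLAN works by inserting a 4-byte IEEE 802.1Q tag between the source MAC address and the EtherType. Below is a minimal Python sketch of that tag insertion; it is illustrative only, since real tagging happens in the switch or NIC.

```python
import struct

def vlan_tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag into an untagged Ethernet frame.

    The 4-byte tag (TPID 0x8100 + 16-bit TCI) goes between the source MAC
    and the original EtherType, which is how switches keep traffic from
    different virtual LANs separate on the same physical wire.
    """
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID is a 12-bit field")
    tci = (priority << 13) | vlan_id  # 3-bit PCP, 1-bit DEI (0), 12-bit VID
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

# A toy untagged frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), payload.
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
print(vlan_tag_frame(frame, vlan_id=100).hex())
```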
External network virtualization is envisioned to be placed in the middle of the network stack and help integrating different architectures proposed for next generation networks.
Internal virtualization
Internal network virtualization configures a single system with software containers, such as Xen hypervisor control programs, or pseudo-interfaces, such as a VNIC, to emulate a physical network with software. This can improve a single system's efficiency by isolating applications to separate containers or pseudo-interfaces.
Examples
Citrix and Vyatta have built a virtual network protocol stack combining Vyatta's routing, firewall, and VPN functions with Citrix's Netscaler load balancer, branch repeater wide area network (WAN) optimization, and secure sockets layer VPN.
OpenSolaris network virtualization provides a so-called "network in a box" (see OpenSolaris Network Virtualization and Resource Control).
Microsoft Virtual Server uses virtual machines to make a "network in a box" for x86 systems. These containers can run different operating systems, such as Microsoft Windows or Linux, either associated with or independent of a specific network interface controller (NIC).
Use in testing
Network virtualization may be used in application development and testing to mimic real-world hardware and system software.
In application performance engineering, network virtualization enables emulation of connections between applications, services, dependencies, and end users for software testing.
Wireless network virtualization
Wireless network virtualization can have a very broad scope, ranging from spectrum sharing and infrastructure virtualization to air interface virtualization. Similar to wired network virtualization, in which physical infrastructure owned by one or more providers can be shared among multiple service providers, wireless network virtualization needs the physical wireless infrastructure and radio resources to be abstracted and isolated into a number of virtual resources, which can then be offered to different service providers. In other words, virtualization, regardless of wired or wireless networks, can be considered as a process of splitting the entire network system. However, the distinctive properties of the wireless environment, in terms of time-varying channels, attenuation, mobility, broadcast, etc., make the problem more complicated. Furthermore, wireless network virtualization depends on specific access technologies, and wireless networks involve many more access technologies than wired networks, each with its particular characteristics, which makes convergence, sharing, and abstraction difficult to achieve. Therefore, it may be inaccurate to consider wireless network virtualization as a subset of network virtualization.
Performance
Up to 1 Gbit/s, network virtualization did not suffer from the overhead of the software or hypervisor layers providing the interconnects. With the rise of high-bandwidth networks of 10 Gbit/s and beyond, packet rates exceed the processing capabilities of the networking stacks. In order to keep offering high-throughput processing, combinations of software and hardware helpers are deployed in the so-called "network in a box", associated with either a hardware-dependent network interface controller (NIC) using the SR-IOV extensions of the hypervisor, or a fast path technology between the NIC and the payloads (virtual machines or containers).
For example, in the case of OpenStack, networking is provided by Neutron, which leverages many features from the Linux kernel for networking: iptables, iproute2, L2 bridging, L3 routing, and OVS. Since the Linux kernel cannot sustain the 10G packet rate, some bypass technologies for a fast path are used. The main bypass technologies are either based on a limited set of features, such as Open vSwitch (OVS) with its DPDK user-space implementation, or based on a full feature set and offload of Linux processing, such as the 6WIND virtual accelerator.
See also
Application performance engineering
Hardware virtualization
I/O virtualization
Network function virtualization
Network Virtualization using Generic Routing Encapsulation
Overlay network
OVN
Virtual circuit
Virtual Extensible LAN
Virtual firewall
Virtual private network
Software-defined networking
References
Further reading
External links
NetworkVirtualization.com | News retrieved 3 June 2008
RAD VPLS Tutorial
Types of VPNs
VMware Virtual Networking Concepts retrieved 26 October 2008
Network functions Virtualization(NFV) Benefits
Virtualization
Internet Protocol based network software | Network virtualization | [
"Engineering"
] | 1,280 | [
"Computer networks engineering",
"Virtualization"
] |
13,589,517 | https://en.wikipedia.org/wiki/6344%20P-L | 6344 P-L is an unnumbered, sub-kilometer asteroid and suspected dormant comet, classified as near-Earth object and potentially hazardous asteroid of the Apollo group that was first observed on 24 September 1960, by astronomers and asteroid searchers Tom Gehrels, Ingrid van Houten-Groeneveld, and Cornelis Johannes van Houten during the Palomar–Leiden survey at Palomar Observatory.
Description
Since 6344 P-L is still unnumbered, the discoverers have not yet been officially determined. Last seen in 1960, it was lost, but rediscovered in 2007 as 2007 RR9. In other words, it was a lost asteroid from 1960 until it was recovered and recognized as the same object by Peter Jenniskens in 2007. It was again observed from 19 July 2021 to 4 August 2021 by the Astronomical Research Observatory, Westfield, and Calar Alto-Schmidt (see Minor Planet Center MPS 1525704).
It is either an asteroid or dormant comet nucleus, and it has a 4.7-year orbit around the Sun. The orbit goes out as far as Jupiter's but then back in, passing as close as 0.07 AU to the Earth, making it a collision risk.
Close approaches
The minor planet is classified as a potentially hazardous object, with an Earth minimum orbit intersection distance of about 0.0285 AU (4.27 million km), equivalent to 11.1 lunar distances. Although it was not outgassing at the time of its recovery, its orbit indicates that it is probably a dormant comet.
Physical characteristics
Based on a generic magnitude-to-diameter conversion, 6344 P-L measures between 250 and 460 meters in diameter for an assumed albedo between 0.20 and 0.06. As of 2018, no rotational lightcurve has been obtained. The body's rotation period, shape, and pole remain unknown.
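The generic conversion referred to above is commonly written D(km) = 1329/√p × 10^(−H/5), where p is the geometric albedo and H the absolute magnitude. The Python sketch below assumes H ≈ 20.4 purely for illustration (the actual value should be taken from MPC/JPL data); under that assumption, the quoted 250–460 m range falls out of the two albedo endpoints.

```python
from math import sqrt

def diameter_km(abs_magnitude: float, albedo: float) -> float:
    """Generic asteroid magnitude-to-diameter conversion:
    D(km) = 1329 / sqrt(p) * 10 ** (-H / 5)."""
    return 1329 / sqrt(albedo) * 10 ** (-abs_magnitude / 5)

H = 20.4  # assumed absolute magnitude, for illustration only
for p in (0.20, 0.06):
    print(f"albedo {p}: {diameter_km(H, p) * 1000:.0f} m")
# albedo 0.20 gives roughly 250 m; albedo 0.06 roughly 450-460 m
```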
Palomar–Leiden survey
The survey designation "P-L" stands for Palomar–Leiden, named after Palomar Observatory and Leiden Observatory, which collaborated on the fruitful Palomar–Leiden survey in the 1960s. Tom Gehrels used Palomar's 48-inch Samuel Oschin telescope and shipped the photographic plates to the van Houten's at Leiden Observatory, where astrometry was carried out. The trio are credited with more than 4600 minor planet discoveries.
Numbering and naming
As of 2021, this minor planet has neither been numbered nor named and remains provisionally designated 6344 P-L (see list of unnumbered minor planets).
References
External links
Discoveries by the Palomar–Leiden survey
19600924
Recovered astronomical objects | 6344 P-L | [
"Astronomy"
] | 507 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
13,590,437 | https://en.wikipedia.org/wiki/Tenidap | Tenidap was a COX/5-LOX inhibitor and cytokine-modulating anti-inflammatory drug candidate under development by Pfizer as a potential treatment for rheumatoid arthritis. Pfizer halted development after marketing approval was rejected by the FDA in 1996 due to liver and kidney toxicity, which was attributed to metabolites of the drug with a thiophene moiety that caused oxidative damage.
References
Indoles
Thiophenes
Ureas
Nonsteroidal anti-inflammatory drugs
Chloroarenes
Aromatic ketones
Hydroxyarenes
Carboxamides
Abandoned drugs
Drugs developed by Pfizer | Tenidap | [
"Chemistry"
] | 137 | [
"Organic compounds",
"Ureas",
"Drug safety",
"Abandoned drugs"
] |
13,590,499 | https://en.wikipedia.org/wiki/Proquazone | Proquazone is a nonsteroidal anti-inflammatory drug (NSAID).
Uses
Proquazone is used to treat rheumatoid arthritis, osteoarthritis and ankylosing spondylitis. It has been trialed for use as pain relief for tension headaches.
The recommended adult dose is around 450 mg.
Side effects
Use of proquazone is often associated with diarrhea (in up to 30% of patients).
References
Quinazolinones
Ureas | Proquazone | [
"Chemistry"
] | 111 | [
"Organic compounds",
"Ureas"
] |
13,590,511 | https://en.wikipedia.org/wiki/Mycoplasma%20laboratorium | Mycoplasma laboratorium or Synthia refers to a synthetic strain of bacterium. The project to build the new bacterium has evolved since its inception. Initially the goal was to identify a minimal set of genes that are required to sustain life from the genome of Mycoplasma genitalium, and rebuild these genes synthetically to create a "new" organism. Mycoplasma genitalium was originally chosen as the basis for this project because at the time it had the smallest number of genes of all organisms analyzed. Later, the focus switched to Mycoplasma mycoides and took a more trial-and-error approach.
To identify the minimal genes required for life, each of the 482 genes of M. genitalium was individually deleted and the viability of the resulting mutants was tested. This resulted in the identification of a minimal set of 382 genes that theoretically should represent a minimal genome. In 2008 the full set of M. genitalium genes was constructed in the laboratory with watermarks added to identify the genes as synthetic. However M. genitalium grows extremely slowly and M. mycoides was chosen as the new focus to accelerate experiments aimed at determining the set of genes actually needed for growth.
In 2010, the complete genome of M. mycoides was successfully synthesized from a computer record and transplanted into an existing cell of Mycoplasma capricolum that had had its DNA removed. It is estimated that the synthetic genome used for this project cost US$40 million and 200 man-years to produce. The new bacterium was able to grow and was named JCVI-syn1.0, or Synthia. After additional experimentation to identify a smaller set of genes that could produce a functional organism, JCVI-syn3.0 was produced, containing 473 genes. 149 of these genes are of unknown function. Since the genome of JCVI-syn3.0 is novel, it is considered the first truly synthetic organism.
Minimal genome project
The production of Synthia is an effort in synthetic biology at the J. Craig Venter Institute by a team of approximately 20 scientists headed by Nobel laureate Hamilton Smith and including DNA researcher Craig Venter and microbiologist Clyde A. Hutchison III. The overall goal is to reduce a living organism to its essentials and thus understand what is required to build a new organism from scratch. The initial focus was the bacterium M. genitalium, an obligate intracellular parasite whose genome consists of 482 genes comprising 582,970 base pairs, arranged on one circular chromosome (at the time the project began, this was the smallest genome of any known natural organism that can be grown in free culture). They used transposon mutagenesis to identify genes that were not essential for the growth of the organism, resulting in a minimal set of 382 genes. This effort was known as the Minimal Genome Project.
Choice of organism
Mycoplasma
Mycoplasma is a genus of bacteria of the class Mollicutes in the division Mycoplasmatota (formerly Tenericutes), characterised by the lack of a cell wall (making it Gram negative) due to its parasitic or commensal lifestyle.
In molecular biology, the genus has received much attention, both for being a notoriously difficult-to-eradicate contaminant in mammalian cell cultures (it is immune to beta-lactams and other antibiotics), and for its potential uses as a model organism due to its small genome size. The choice of genus for the Synthia project dates to 2000, when Karl Reich coined the phrase Mycoplasma laboratorium.
Other organisms with small genomes
As of 2005, Pelagibacter ubique (an α-proteobacterium of the order Rickettsiales) has the smallest known genome (1,308,759 base pairs) of any free-living organism and is one of the smallest self-replicating cells known. It is possibly the most numerous bacterium in the world (perhaps 10^28 individual cells) and, along with other members of the SAR11 clade, is estimated to make up between a quarter and a half of all bacterial or archaeal cells in the ocean. It was identified in 2002 by rRNA sequences and was fully sequenced in 2005. The species is extremely hard to cultivate and does not reach a high growth density in lab culture.
Several newly discovered species have fewer genes than M. genitalium, but are not free-living: many essential genes that are missing in Hodgkinia cicadicola, Sulcia muelleri, Baumannia cicadellinicola (symbionts of cicadas) and Carsonella ruddii (symbiont of the hackberry petiole gall psyllid, Pachypsylla venusta) may be encoded in the host nucleus. The organism with the smallest known set of genes as of 2013 is Nasuia deltocephalinicola, an obligate symbiont. It has only 137 genes and a genome size of 112 kb.
Techniques
Several laboratory techniques had to be developed or adapted for the project, since it required synthesis and manipulation of very large pieces of DNA.
Bacterial genome transplantation
In 2007, Venter's team reported that they had managed to transfer the chromosome of the species Mycoplasma mycoides to Mycoplasma capricolum by:
isolating the genome of M. mycoides: gentle lysis of cells trapped in agar—molten agar mixed with cells and left to form a gel—followed by pulse field gel electrophoresis and the band of the correct size (circular 1.25Mbp) being isolated;
making the recipient cells of M. capricolum competent: growth in rich media followed by starvation in poor media, where the nucleotide starvation results in inhibition of DNA replication and a change of morphology; and
polyethylene glycol-mediated transformation of the circular chromosome to the DNA-free cells followed by selection.
The term transformation is used to refer to insertion of a vector into a bacterial cell (by electroporation or heatshock). Here, transplantation is used akin to nuclear transplantation.
Bacterial chromosome synthesis
In 2008 Venter's group described the production of a synthetic genome, a copy of the M. genitalium G37 sequence L43967, by means of a hierarchical strategy (a toy sketch of the overlap-joining idea follows the list):
Synthesis → 1 kbp: The genome sequence was synthesized by Blue Heron in 1,078 cassettes of 1,080 bp with 80 bp overlaps and NotI restriction sites (an inefficient but infrequent cutter).
Ligation → 10 kbp: 109 groups of 10 consecutive cassettes were ligated and cloned in E. coli on a plasmid, and the correct permutation was checked by sequencing.
Multiplex PCR → 100 kbp: 11 groups of 10 consecutive 10 kbp assemblies (grown in yeast) were joined by multiplex PCR, using a primer pair for each 10 kbp assembly.
Isolation and recombination → the secondary assemblies were isolated, joined, and transformed into yeast spheroplasts without a vector sequence (present in assembly 811-900).
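The following toy Python sketch illustrates only the overlap-joining principle behind the hierarchy above; the real assembly relied on ligation, PCR, and yeast recombination, and the fragments and 4-base overlap here are invented for readability (the project used 80 bp overlaps).

```python
def join_by_overlap(fragments: list[str], overlap: int) -> str:
    """Join consecutive DNA fragments that share an exact terminal overlap
    of `overlap` bases, mimicking (in spirit only) hierarchical assembly."""
    assembly = fragments[0]
    for frag in fragments[1:]:
        if assembly[-overlap:] != frag[:overlap]:
            raise ValueError("fragments do not overlap as expected")
        assembly += frag[overlap:]  # append only the non-overlapping part
    return assembly

# Toy cassettes with a 4-base overlap.
cassettes = ["ATGCGTAC", "GTACCTTA", "CTTAGGCA"]
print(join_by_overlap(cassettes, overlap=4))  # ATGCGTACCTTAGGCA
```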
The genome of this 2008 result, M. genitalium JCVI-1.0, is published on GenBank as CP001621.1. It is not to be confused with the later synthetic organisms, labelled JCVI-syn, based on M. mycoides.
Synthetic genome
In 2010 Venter and colleagues created Mycoplasma mycoides strain JCVI-syn1.0 with a synthetic genome. Initially the synthetic construct did not work, so to pinpoint the error—which caused a delay of 3 months in the whole project—a series of semi-synthetic constructs were created. The cause of the failure was a single frameshift mutation in DnaA, a replication initiation factor.
The purpose of constructing a cell with a synthetic genome was to test the methodology, as a step to creating modified genomes in the future. Using a natural genome as a template minimized the potential sources of failure. Several differences are present in Mycoplasma mycoides JCVI-syn1.0 relative to the reference genome, notably an E.coli transposon IS1 (an infection from the 10kb stage) and an 85bp duplication, as well as elements required for propagation in yeast and residues from restriction sites.
There has been controversy over whether JCVI-syn1.0 is a true synthetic organism. While the genome was synthesized chemically in many pieces, it was constructed to match the parent genome closely and transplanted into the cytoplasm of a natural cell. DNA alone cannot create a viable cell: proteins and RNAs are needed to read the DNA, and lipid membranes are required to compartmentalize the DNA and cytoplasm. In JCVI-syn1.0 the two species used as donor and recipient are of the same genus, reducing potential problems of mismatches between the proteins in the host cytoplasm and the new genome. Paul Keim (a molecular geneticist at Northern Arizona University in Flagstaff) noted that "there are great challenges ahead before genetic engineers can mix, match, and fully design an organism's genome from scratch".
Watermarks
A much publicized feature of JCVI-syn1.0 is the presence of watermark sequences. The 4 watermarks (shown in Figure S1 in the supplementary material of the paper) are coded messages written into the DNA, of length 1246, 1081, 1109 and 1222 base pairs respectively. These messages did not use the standard genetic code, in which sequences of 3 DNA bases encode amino acids, but a new code invented for this purpose, which readers were challenged to solve; a toy sketch of this kind of text-to-DNA encoding follows the list below. The content of the watermarks is as follows:
Watermark 1: an HTML document which reads in a Web browser as text congratulating the decoder, and instructions on how to email the authors to prove the decoding.
Watermark 2: a list of authors and a quote from James Joyce: "To live, to err, to fall, to triumph, to recreate life out of life".
Watermark 3: more authors and a quote from Robert Oppenheimer (uncredited): "See things not as they are, but as they might be".
Watermark 4: more authors and a quote from Richard Feynman: "What I cannot build, I cannot understand".
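The actual watermark cipher is not reproduced here (decoding it was left as a challenge to readers). The Python sketch below shows only the general idea of writing text into DNA, using an invented two-bits-per-base mapping rather than JCVI's triplet code.

```python
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def text_to_dna(text: str) -> str:
    """Encode ASCII text as DNA, 4 bases per character (2 bits per base).
    This mapping is invented for illustration; the real JCVI watermark
    code was a custom cipher left for readers to crack."""
    bits = "".join(format(ord(ch), "08b") for ch in text)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna: str) -> str:
    bits = "".join(BITS_FOR_BASE[b] for b in dna)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

msg = "SYN1.0"
encoded = text_to_dna(msg)
print(encoded)
assert dna_to_text(encoded) == msg  # round-trips back to the original text
```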
JCVI-syn3.0
In 2016, the Venter Institute used genes from JCVI-syn1.0 to synthesize a smaller genome they call JCVI-syn3.0, that contains 531,560 base pairs and 473 genes. In 1996, after comparing M. genitalium with another small bacterium Haemophilus influenzae, Arcady Mushegian and Eugene Koonin had proposed that there might be a common set of 256 genes which could be a minimal set of genes needed for viability. In this new organism, the number of genes can only be pared down to 473, 149 of which have functions that are completely unknown. As of 2022 the unknown set has been narrowed to about 100. In 2019 a complete computational model of all pathways in Syn3.0 cell was published, representing the first complete in silico model for a living minimal organism.
Concerns and controversy
Reception
On Oct 6, 2007, Craig Venter announced in an interview with UK's The Guardian newspaper that the same team had synthesized a modified version of the single chromosome of Mycoplasma genitalium chemically. The synthesized genome had not yet been transplanted into a working cell. The next day the Canadian bioethics group, ETC Group issued a statement through their representative, Pat Mooney, saying Venter's "creation" was "a chassis on which you could build almost anything. It could be a contribution to humanity such as new drugs or a huge threat to humanity such as bio-weapons". Venter commented "We are dealing in big ideas. We are trying to create a new value system for life. When dealing at this scale, you can't expect everybody to be happy."
On May 21, 2010, Science reported that the Venter group had successfully synthesized the genome of the bacterium Mycoplasma mycoides from a computer record and transplanted the synthesized genome into the existing cell of a Mycoplasma capricolum bacterium that had had its DNA removed. The "synthetic" bacterium was viable, i.e. capable of replicating. Venter described it as "the first species.... to have its parents be a computer".
The creation of a new synthetic bacterium, JCVI-3.0 was announced in Science on March 25, 2016. It has only 473 genes. Venter called it “the first designer organism in history” and argued that the fact that 149 of the genes required have unknown functions means that "the entire field of biology has been missing a third of what is essential to life".
Press coverage
The project received a large amount of coverage from the press due to Venter's showmanship, to the degree that Jay Keasling, a pioneering synthetic biologist and founder of Amyris commented that "The only regulation we need is of my colleague's mouth".
Utility
Venter has argued that synthetic bacteria are a step towards creating organisms to manufacture hydrogen and biofuels, and also to absorb carbon dioxide and other greenhouse gases. George M. Church, another pioneer in synthetic biology, has expressed the contrasting view that creating a fully synthetic genome is not necessary since E. coli grows more efficiently than M. genitalium even with all its extra DNA; he commented that synthetic genes have been incorporated into E.coli to perform some of the above tasks.
Intellectual property
The J. Craig Venter Institute filed patents for the Mycoplasma laboratorium genome (the "minimal bacterial genome") in the U.S. and internationally in 2006. The ETC group, a Canadian bioethics group, protested on the grounds that the patent was too broad in scope.
Similar projects
From 2002 to 2010, a team at the Hungarian Academy of Sciences created a strain of Escherichia coli called MDS42, which is now sold by Scarab Genomics of Madison, WI under the name of "Clean Genome E. coli", in which 15% of the genome of the parental strain (E. coli K-12 MG1655) was removed to aid molecular biology efficiency, removing IS elements, pseudogenes and phages, resulting in better maintenance of plasmid-encoded toxic genes, which are often inactivated by transposons. Biochemistry and replication machinery were not altered.
References
Primary sources
Popular press
External links
J. Craig Venter Institute: Research Groups
Artificial life
Synthetic biology
Mycoplasma | Mycoplasma laboratorium | [
"Engineering",
"Biology"
] | 3,103 | [
"Synthetic biology",
"Molecular genetics",
"Biological engineering",
"Bioinformatics"
] |
13,590,567 | https://en.wikipedia.org/wiki/Dexketoprofen | Dexketoprofen is a nonsteroidal anti-inflammatory drug (NSAID). It is manufactured by Menarini, under the tradename Keral. It is available in the UK, as dexketoprofen trometamol, as a prescription-only drug and in Latin America as Enantyum, produced by Menarini. Also, in Italy and Spain it is available as an over-the-counter drug (OTC) under the trade name Enandol or Enantyum. In Hungary it is available from a pharmacy as "Ketodex". In Turkey, it is an over the counter medicine under the name "Arveles". In Latvia, Lithuania and Estonia it is available as an OTC under the tradename Dolmen. In Mexico it is available in tablet form as "Stadium" made by Menarini. It is the dextrorotatory stereoisomer of ketoprofen.
Chemistry
Dexketoprofen is the (S)-enantiomer of ketoprofen. Technically it is a chiral switch of (±)-ketoprofen. The switch was made for a faster onset of action and a better therapeutic value. Dexketoprofen consists of a propionic acid moiety connected through its second carbon to a benzophenone group.
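For readers who want to explore the stereochemistry computationally, the sketch below uses the RDKit cheminformatics library (assumed to be installed, e.g. via pip install rdkit) to assign CIP labels to the two ketoprofen enantiomers. The two SMILES strings encode the @ and @@ forms of the chiral centre; which one is (S) is left to RDKit's perception rather than asserted here.

```python
from rdkit import Chem

# Ketoprofen skeleton with the two possible chiralities at the alpha carbon.
for smiles in ("C[C@H](C(=O)O)c1cccc(C(=O)c2ccccc2)c1",
               "C[C@@H](C(=O)O)c1cccc(C(=O)c2ccccc2)c1"):
    mol = Chem.MolFromSmiles(smiles)
    Chem.AssignStereochemistry(mol, cleanIt=True, force=True)
    # Prints the atom index and the R/S label computed by RDKit.
    print(smiles, Chem.FindMolChiralCenters(mol))
```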
Medical uses
Short-term treatment of mild to moderate pain, including dysmenorrhoea. It is also used for migraines and knee pain.
Side effects
It may cause dizziness, and patients should not, therefore, drive or operate heavy machinery or vehicles until they are familiar with how dexketoprofen affects them. Concomitant use of alcohol and other sedatives may potentiate this effect. In a small subset of individuals the dizziness may be intolerable and require transition to an alternative treatment.
Pharmacology
Dexketoprofen belongs to a class of medicines called NSAIDs. It works by blocking the action of a substance in the body called cyclo-oxygenase, which is involved in the production of chemicals in the body called prostaglandins. Prostaglandins are produced in response to injury or certain diseases and would otherwise go on to cause swelling, inflammation and pain. By blocking cyclo-oxygenase, dexketoprofen prevents the production of prostaglandins and therefore reduces inflammation and pain. Along with peripheral analgesic action, it possesses central analgesic action.
See also
Chiral switch
Enantiopure drug
Chirality
References
Propionic acids
Benzophenones
Enantiopure drugs
Nonsteroidal anti-inflammatory drugs | Dexketoprofen | [
"Chemistry"
] | 560 | [
"Stereochemistry",
"Enantiopure drugs"
] |
13,590,647 | https://en.wikipedia.org/wiki/Flunoxaprofen | Flunoxaprofen, also known as Priaxim, is a chiral nonsteroidal anti-inflammatory drug (NSAID). It is closely related to naproxen, which is also an NSAID. Flunoxaprofen has been shown to significantly improve the symptoms of osteoarthritis and rheumatoid arthritis. The clinical use of flunoxaprofen has ceased due to concerns of potential hepatotoxicity.
Structure
Flunoxaprofen is a two-ring heterocyclic compound derived from benzoxazole. It also contains a fluorine atom and a propanoyl group.
Synthesis
The overall synthesis is similar to that for benoxaprofen; in this case, para-fluorobenzoyl chloride is used when forming the benzoxazole ring.
A Sandmeyer reaction by diazotisation of 2-(4-aminophenyl)propanenitrile (1) followed by acid hydrolysis leads to the phenol (2), which is nitrated and reduced using stannous chloride or catalytic hydrogenation to give the aminophenol (4). Hydrolysis of the nitrile produces the carboxylic acid (5), which is converted to racemic flunoxaprofen by acylation with p-fluorobenzoyl chloride, followed by cyclisation.
Preparations
Because flunoxaprofen has limited water-solubility, additional steps must be taken in order to prepare syrups, creams, suppositories, etc. In order to make flunoxaprofen water-soluble, yet still active and efficient, it must be mixed with lysine and then suspended in an organic solvent that is soluble in water. A salt will crystallize upon cooling. The salt must then be filtered out and dried. Pharmacological testing of this now water-soluble compound has shown that it has anti-inflammatory properties equal to flunoxaprofen by itself.
Pharmacokinetics
The efficacy and safety of flunoxaprofen has been compared with those of naproxen in rheumatoid arthritis patients to show that the two drugs have equivalent therapeutical effects. Both drugs significantly relieve spontaneous pain which occurs both during the day and at night. Both drugs also significantly relieve the pain associated with active and passive motion and aid in relieving morning stiffness. The study also showed both drugs to be equally effective at improving grip strength.
Flunoxaprofen is administered as a racemate. The absorption and disposition of both enantiomers were studied in 1988. No significant differences between the stereoisomers were detected with respect to their absorption and elimination half-lives. However, further studies have shown that the S-enantiomer is the pharmacologically active form of the drug and does not undergo stereoinversion, while R-flunoxaprofen is pharmacologically activated through biotransformation to the S-enantiomer. This stereospecific chiral inversion is mediated by the FLX-S-acyl-CoA thioester. Pharmacokinetic studies with stereoselective bioassays have been carried out in different species after racemate dosage (and flunoxaprofen enantiomer derivatives have also been used as chiral fluorescent derivatizing agents to determine the enantiomers of other drugs in plasma).
It has been shown that the dextrorotatory form is particularly active and has a much higher therapeutic index than some other anti-inflammatories, including indomethacin and diclofenac. It has also been shown that flunoxaprofen inhibits leukotriene rather than prostaglandin synthesis. This is similar to benoxaprofen. Flunoxaprofen and benoxaprofen have been shown to have similar absorption characteristics. However, the distribution and elimination of flunoxaprofen has been shown to be much faster than benoxaprofen.
Adverse effects
A structural analog of flunoxaprofen is benoxaprofen. The two drugs are carboxylic acid analogs that form reactive acyl glucuronides. Benoxaprofen has been shown to be involved in rare hepatotoxicity. Because of this, benoxaprofen was removed from the market. In response, the clinical use of flunoxaprofen was also stopped, even though studies have shown that flunoxaprofen is less toxic than benoxaprofen.
The toxicity of these nonsteroidal anti-inflammatory drugs may be related to the covalent modification of proteins in response to the drugs' reactive acyl glucuronides. The reactivity of the acyl glucuronides appears to co-determine the extent of protein binding, as initially proposed by the research group of Benet et al. in 1993.
References
Carboxylic acids
Benzoxazoles
4-Fluorophenyl compounds | Flunoxaprofen | [
"Chemistry"
] | 1,061 | [
"Carboxylic acids",
"Functional groups"
] |
13,590,686 | https://en.wikipedia.org/wiki/Dexibuprofen | Dexibuprofen is a nonsteroidal anti-inflammatory drug (NSAID). It is the active dextrorotatory enantiomer of ibuprofen. Most ibuprofen formulations contain a racemic mixture of both isomers.
Dexibuprofen is a chiral switch of racemic ibuprofen. The chiral carbon in dexibuprofen is assigned an absolute configuration of (S) per the Cahn–Ingold–Prelog rules. Dexibuprofen is also called (S)-(+)-ibuprofen.
Ibuprofen is an α-arylpropionic acid used largely in the treatment of rheumatoid arthritis, and a widely used over-the-counter drug for headache and minor pains. This drug has a chiral center and exists as a pair of enantiomers. (S)-Ibuprofen, the eutomer, is responsible for the desired therapeutic effect. The inactive (R)-enantiomer, the distomer, undergoes a unidirectional chiral inversion to give the active (S)-enantiomer, the former acting as a prodrug for the latter. That is, when ibuprofen is administered as a racemate, the distomer is converted in vivo into the eutomer while the latter is unaffected.
See also
Chiral switch
Enantiopure drug
Chirality
Eudysmic ratio
Ibuprofen
References
Enantiopure drugs
Nonsteroidal anti-inflammatory drugs
Propionic acids
Benzene derivatives | Dexibuprofen | [
"Chemistry"
] | 337 | [
"Stereochemistry",
"Enantiopure drugs"
] |
13,590,706 | https://en.wikipedia.org/wiki/Ibuproxam | Ibuproxam is a nonsteroidal anti-inflammatory drug (NSAID). It is the hydroxamic acid of ibuprofen, to which it is hydrolyzed in the blood. Ibuproxam was found to be considerably less damaging to the gastrointestinal tract than ibuprofen, while its analgesic and antipyretic activities are comparable.
References
Hydroxamic acids
Nonsteroidal anti-inflammatory drugs
Phenylene compounds
Isobutyl compounds
Propionic acids | Ibuproxam | [
"Chemistry"
] | 114 | [
"Organic compounds",
"Functional groups",
"Hydroxamic acids"
] |
13,590,706 | https://en.wikipedia.org/wiki/Indoprofen | Indoprofen is a nonsteroidal anti-inflammatory drug (NSAID). It was withdrawn worldwide in the 1980s after postmarketing reports of severe gastrointestinal bleeding.
A 2004 study using high-throughput screening found indoprofen to increase production of the survival of motor neuron protein, suggesting it may provide insight into treatments for spinal muscular atrophies.
Synthesis
The isoindolone ring system forms the nucleus for this profen NSAID.
The nitro group in 2-(4-nitrophenyl)propionic acid (1) is reduced using iron and hydrochloric acid to give 2-(4-aminophenyl)propionic acid (2). Reaction with phthalic anhydride then gives the phthalimide (4). Treatment with zinc in acetic acid yields indoprofen after reduction of one of the amide groups.
See also
Indobufen
References
Nonsteroidal anti-inflammatory drugs
Withdrawn drugs
Isoindolines
Lactams
Propionic acids | Indoprofen | [
"Chemistry"
] | 219 | [
"Drug safety",
"Withdrawn drugs"
] |
13,590,720 | https://en.wikipedia.org/wiki/Benoxaprofen | Benoxaprofen, also known as benoxaphen, is a chemical compound with the formula C16H12ClNO3. It is a non-steroidal anti-inflammatory drug (NSAID) of the arylpropionic acid class, and was marketed under the brand name Opren in the United Kingdom and Europe by Eli Lilly and Company (commonly referred to as Lilly), and as Oraflex in the United States of America (USA). Lilly suspended sales of Oraflex in 1982 after reports from the British government and the United States Food and Drug Administration (US FDA) of adverse effects and deaths linked to the drug.
History
Benoxaprofen was discovered by a team of research chemists at the British Lilly Research Centre of Eli Lilly and Company. This laboratory was assigned to explore new anti-arthritic compounds in 1966. Seven years later, Lilly applied for patents on its new drug, then named 'benoxaprofen'. It also filed for permission from the U.S. Food and Drug Administration to start testing benoxaprofen on humans. It had to undergo the three-step clinical testing procedure required by the United States Federal Government.
Lilly began Phase I of the benoxaprofen clinical trials by testing a selection of healthy human volunteers. These tests had to prove that their new drug posed no clear and immediate safety hazards. In Phase II, a larger number of human subjects, including some with minor illnesses, was tested; the drug's effectiveness and safety was the major target of these tests. Phase III was the largest test, and began in 1976. More than 2,000 arthritis patients were administered benoxaprofen by more than 100 physicians. The physicians then reported the results to the Lilly Company.
When Lilly formally requested to begin marketing benoxaprofen in January 1980 with the US FDA, the document consisted of more than 100,000 pages of test results and patients records. However, benoxaprofen was first marketed abroad: in 1980, it was released for marketing in the United Kingdom. It subsequently came on the market in May 1982 in the USA.
When benoxaprofen was on the market as Oraflex in the USA, the first sign of trouble came for the Lilly Company. The British Medical Journal reported in May 1982 that physicians in the United Kingdom believed that the drug was responsible for at least twelve deaths, mainly caused by kidney and liver failure. A petition was filed to have Oraflex removed from the market.
On 4 August 1982, the British government temporarily suspended sales of the drug in the UK 'on grounds of safety'. The British Committee on the Safety of Medicines declared, in a telegram to the FDA, that it had received reports of more than 3,500 adverse side effects among patients who had used Oraflex. There were also 61 deaths, most of which were of elderly people. Almost simultaneously, the FDA said it had reports of 11 deaths in the USA among Oraflex users, most of which were caused by kidney and liver damage. The Eli Lilly Company suspended sales of benoxaprofen that afternoon.
Structure and reactivity
The molecular formula of benoxaprofen is C16H12ClNO3 and the systematic (IUPAC) name is 2-[2-(4-chlorophenyl)-1,3-benzoxazol-5-yl]propionic acid. The molecule has a molecular mass of 301.050568 g/mol.
Benoxaprofen is essentially a planar molecule. This is due to the co-planarity of the benzoxazole and phenyl rings, but the molecule also has a non-planar side chain consisting of the propanoic acid moiety which acts as a carrier group. These findings were obtained from X-ray crystallographic measurements made at the Lilly Research Centre.
Benoxaprofen is highly phototoxic. The free radical decarboxylated derivative of the drug is the toxic agent which, in the presence of oxygen, yields singlet oxygen and superoxide anion. Irradiation of benoxaprofen in an aqueous solution causes photochemical decarboxylation via a radical mechanism and in single-strand breaks of DNA. This also happens to ketoprofen and naproxen, other NSAIDs, which are even more active in this respect than benoxaprofen.
Available forms
Benoxaprofen is a racemic mixture, (R/S)-2-(p-chlorophenyl)-α-methyl-5-benzoxazoleacetic acid. The two enantiomers are (R)-(−) and (S)-(+).
The inversion of the (R)-(−)-enantiomer and glucuronide conjugation are the results of the metabolism of benoxaprofen. However, benoxaprofen does not readily undergo oxidative metabolism.
It is, however, possible that, when cytochrome P450I is the catalyst, oxygenation of the 4-chlorophenyl ring occurs. With the (S)-(+)-enantiomer, it is more likely that oxygenation of the aromatic ring of the 2-phenylpropionic acid moiety occurs, also with cytochrome P450I as the catalyst.
Toxicokinetics
Benoxaprofen is absorbed well after oral intake of doses ranging from 1 up to 10 mg/kg. Only the unchanged drug is detected in the plasma, mostly bound to plasma proteins. The plasma levels of benoxaprofen in eleven subjects have been accurately predicted, based on the two-compartment open model. The mean half-life of absorption was 0.4 hours. This means that within 25 minutes, half of the dose is absorbed in the system. The mean half-life of distribution was 4.8 hours. This means that within 5 hours, half of the dose is distributed throughout the entire system. The mean half-life of elimination was 37.8 hours. This means that within 40 hours, half of the dose is excreted out of the system.
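These half-lives describe first-order (exponential) kinetics, in which the fraction of drug remaining after time t is 0.5^(t/t½). A small Python sketch using the mean elimination half-life quoted above:

```python
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """First-order (exponential) elimination: fraction of drug left
    after time t, given an elimination half-life."""
    return 0.5 ** (t_hours / half_life_hours)

# Mean elimination half-life reported above: 37.8 h.
for t in (12, 24, 37.8, 76):
    print(f"{t:5.1f} h: {fraction_remaining(t, 37.8):.0%} remaining")
```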
In female rats, after oral dose of 20 mg/kg, the tissue concentration of benoxaprofen was the highest in liver, kidney, lungs, adrenals, and ovaries. The distribution in pregnant females is the same, while it can also be found, in lower concentrations, in the foetus. There is a big difference between species in the route of excretion. In man, rhesus monkey, and rabbit, it is mostly excreted via the urine, while in rat and dog it was excreted via biliary-faecal excretion. In man and dog, the compound was excreted as the ester glucuronide, and in the other species as the unchanged compound. This means no major metabolic transformation of benoxaprofen takes place.
Toxicodynamics
Unlike other non-steroidal anti-inflammatory drugs, benoxaprofen acts directly on mononuclear cells. It inhibits their chemotactic response by inhibiting the lipoxygenase enzyme.
Efficacy and side effects
Efficacy
Benoxaprofen is an analgesic, antipyretic, and anti-inflammatory drug. Benoxaprofen was given to patients with rheumatoid arthritis and osteoarthritis because of its anti-inflammatory effect. Patients with the Paget's disease, psoriatic arthritis, ankylosing spondylitis, a painful shoulder, the mixed connective-tissue disease, polymyalgia rheumatica, back pain, and the Behçet's disease also received benoxaprofen. A daily dose of 300–600 mg is effective for many patients.
Adverse effects
There are different types of side effects. Most of them were cutaneous or gastrointestinal. Side effects rarely appear in the central nervous system, and miscellaneous side effects were not often observed. A study shows that most side effects appear in patients with rheumatoid arthritis.
Cutaneous side effects
Cutaneous side effects of benoxaprofen are photosensitivity, onycholysis, rash, milia, increased nail growth, pruritus (itch), and hypertrichosis. Photosensitivity leads to burning, itching, or redness when patients are exposed to sunlight. A study shows that benoxaprofen, or other lipoxygenase-inhibiting agents, might be helpful in the treatment of psoriasis because they inhibit the migration of inflammatory cells (leukocytes) into the skin.
Gastrointestinal side effects
Gastrointestinal side effects of benoxaprofen are bleeding, diarrhoea, abdominal pain, anorexia, mouth ulcers, and taste change. According to a study, the most appearing gastric side effects are vomiting, heartburn, and epigastric pain.
Side effects in the central nervous system
For a small number of people, taking benoxaprofen might result in depression, lethargy, and feeling ill.
Miscellaneous side effects
Faintness, dizziness, headache, palpitations, epistaxis, blurred vision, urinary urgency, and gynaecomastia rarely appear in patients who take benoxaprofen.
Benoxaprofen can also cause hepatotoxicity, which led to the death of some elderly patients. That was the main reason why benoxaprofen was withdrawn from the market.
Toxicity
After the suspension of sales in 1982, the toxic effects which benoxaprofen could have on humans were looked into more deeply. The fairly planar benoxaprofen molecule appears to be hepato- and phototoxic in the human body.
Benoxaprofen has a rather long half-life in man (t1/2 = 20-30 hours), undergoes biliary excretion and enterohepatic circulation, and is also known to have a slow plasma clearance (CLp = 4.5 millilitres per minute). The half-life may be further increased in elderly patients (>80 years of age) and in patients who already have renal impairment, rising to figures as high as 148 hours.
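Under the standard assumptions of first-order elimination and regular dosing, the steady-state accumulation ratio is R = 1/(1 − 2^(−τ/t½)), where τ is the dosing interval. The sketch below assumes once-daily dosing (an illustrative choice, not a statement of the actual regimen) to show how the prolonged half-life translates into much greater accumulation:

```python
def accumulation_ratio(dose_interval_h: float, half_life_h: float) -> float:
    """Steady-state accumulation ratio for repeated dosing with
    first-order elimination: R = 1 / (1 - 2 ** (-tau / t_half))."""
    return 1 / (1 - 2 ** (-dose_interval_h / half_life_h))

# Once-daily dosing (tau = 24 h) at the half-lives quoted in the text.
for t_half in (30, 148):  # typical vs. impaired-elimination patients
    print(t_half, "h half-life ->", round(accumulation_ratio(24, t_half), 1))
```

With a 30-hour half-life the drug accumulates roughly 2-fold at steady state, but at 148 hours the ratio approaches 10-fold, consistent with the accumulation-driven toxicity discussed below.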
The fatal hepatotoxicity of benoxaprofen can be attributed to the accumulation of the drug after repeated dosage, associated with the slow plasma clearance. The hepatic accumulation of the drug is presumably the cause of an increase in the activity of the hepatic cytochrome P450I, which will oxygenate benoxaprofen and produce reactive intermediates. Benoxaprofen is very likely a substrate, and weak inducer, of cytochrome P450I and its enzyme family. Normally, it is not metabolised by oxidative reactions, but with the S(+) enantiomer of benoxaprofen and cytochrome P450I as a catalyst, the oxygenation of the 4-chlorophenyl ring and of the aromatic ring of 2-phenylpropionic acid seems to be possible. Therefore, the induction of a minor metabolic pathway leads to the formation of toxic metabolites in considerable amounts. The toxic metabolites may bind to vital intracellular macromolecules, and may generate reactive oxygen species by redox cycling if a quinone is formed. This could also lead to a depletion of protective glutathione, which is responsible for the detoxification of reactive oxygen species.
The observed skin phototoxicity of patients treated with benoxaprofen can be explained with a look at the structure of the compound. There are significant structural similarities between the benzoxazole ring of benoxaprofen and the benzofuran ring of psoralen, a compound known to be phototoxic. The free decarboxylated derivative of the drug can produce singlet oxygen and superoxide anions in the presence of oxygen. Furthermore, possible explanations for the photochemical decarboxylation and oxygen radical formation may be the accumulation after repeated dosage, the induction of cytochrome P450I, and the emergence of reactive intermediates with covalent binding. The photochemical character of the compound can cause inflammation and severe tissue damage.
In animals, peroxisomal proliferation is also observed, but does not seem to be significant in man.
Effects on animals
The effects of benoxaprofen on animals were tested in a series of experiments. Benoxaprofen had considerable anti-inflammatory, analgesic, and anti-pyretic activity in those tests. In all six species tested, which included rats, dogs, rhesus monkeys, rabbits, guinea pigs, and mice, the drug was well absorbed orally. In three of the six species, benoxaprofen was effectively taken up from the gastrointestinal tract (after oral doses of 1–10 mg/kg). The plasma half-life was found to differ between species: less than 13 hours in the dog, rabbit, and monkey, but notably longer in mice. Furthermore, there were species differences in the rate and route of excretion of the compound. Whereas benoxaprofen was excreted into the urine by the rabbit and guinea pig, biliary excretion was the route of clearance found in rats and dogs. In all species, only unchanged benoxaprofen was found in the plasma, mostly extensively bound to proteins.
The excretion of the unchanged compound into the bile occurred more slowly in rats. This is interpreted by the authors as evidence that no enterohepatic circulation takes place. Another study in rats showed that the plasma membrane of hepatocytes began to form blebs after administration of benoxaprofen. This is suggested to be due to disturbances in the calcium concentration, possibly a result of an altered cellular redox state, which can affect mitochondrial function and thereby disturb the calcium concentration. In none of the species were significant levels of metabolism of benoxaprofen found. Only in dogs could glucuronide be found in the bile, which is a sure sign of metabolism in that species. Also, no differences in the distribution of the compound between normal and pregnant rats were found. It was shown in rats that benoxaprofen was distributed into the foetus, but at a notably lower concentration than in the maternal tissue.
Synthesis
A Sandmeyer reaction by diazotisation of 2-(4-aminophenyl)propanenitrile (1) followed by acid hydrolysis leads to the phenol (2), which is nitrated and reduced by catalytic hydrogenation to give the aminophenol (3). Hydrolysis of the nitrile and esterification produces ester (4), which is converted to benoxaprofen by acylation with p-chlorobenzoyl chloride, followed by cyclisation and then saponification of the ethyl ester.
References
Propionic acids
Hepatotoxins
Benzoxazoles
4-Chlorophenyl compounds
Drugs developed by Eli Lilly and Company
Nonsteroidal anti-inflammatory drugs
Withdrawn drugs | Benoxaprofen | [
"Chemistry"
] | 3,212 | [
"Drug safety",
"Withdrawn drugs"
] |
13,590,793 | https://en.wikipedia.org/wiki/Proglumetacin | Proglumetacin (usually as the maleate salt, trade names Afloxan, Protaxon and Proxil) is a nonsteroidal anti-inflammatory drug (NSAID). It is metabolized in the body to indometacin and proglumide, a drug with antisecretory effects that helps prevent injury to the stomach lining.
References
Nonsteroidal anti-inflammatory drugs
Prodrugs
Indole ethers at the benzene ring
Piperazines
Carboxylate esters
Benzamides | Proglumetacin | [
"Chemistry"
] | 114 | [
"Chemicals in medicine",
"Prodrugs"
] |
13,590,813 | https://en.wikipedia.org/wiki/Oxametacin | Oxametacin (or oxamethacin) is a non-steroidal anti-inflammatory drug.
Hydrolysis of the amide group is one of the synthetic pathways to Deboxamet (ChemDrug).
References
Tryptamines
Hydroxamic acids
Phenol ethers
4-Chlorophenyl compounds
Benzamides
Nonsteroidal anti-inflammatory drugs | Oxametacin | [
"Chemistry"
] | 82 | [
"Organic compounds",
"Functional groups",
"Hydroxamic acids"
] |
13,590,911 | https://en.wikipedia.org/wiki/Acemetacin | Acemetacin is a non-steroidal anti-inflammatory drug (NSAID) used for the treatment of osteoarthritis, rheumatoid arthritis, lower back pain, and relieving post-operative pain. It is manufactured by Merck KGaA under the tradename Emflex. It is no longer available in the UK (since 2018), however is available in other countries as a prescription-only drug.
Medical uses
Acemetacin has proven effective in the treatment of osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, and other kinds of rheumatoid inflammation, as well as in post-operative and post-traumatic pain and attacks of gout. Use of a single dose of acemetacin for post-operative pain is not well supported by studies.
Contraindications
Contraindications are basically the same as with other NSAIDs: hypersensitivity reactions to NSAIDs in the past (typically asthma or skin reactions), gastrointestinal or cerebral bleeding, peptic ulcer, haematopoietic disorders (anaemia, leukopenia), and during the third trimester of pregnancy.
Adverse effects
Common side effects (in about 1–10% of patients) include gastrointestinal problems typical of NSAIDs, such as nausea, diarrhoea, stomach pain, and peptic ulcer; central nervous effects like headache and dizziness; and skin reactions. Gastrointestinal tolerability is better than that of the related drug indometacin. Severe allergic reactions and haematopoietic disorders occur in fewer than 0.01% of patients.
Interactions
The following interactions, typical of NSAIDs, have been described:
other NSAIDs, corticosteroids: increased frequency of side effects, especially peptic ulcers and gastrointestinal bleeding
diuretics, ACE inhibitors and other antihypertensive drugs: reduced effectiveness of these drugs
with ACE inhibitors or ciclosporin, increased risk of kidney function disorders
anticoagulants such as warfarin: increased risk of bleeding
increased blood plasma concentrations of digoxin and methotrexate
decreased plasma concentrations of lithium
Pharmacology
Acemetacin acts as an inhibitor of cyclooxygenase (COX), producing the anti-inflammatory and analgesic (pain-relieving) effects. In the body, it is partly metabolized to indometacin, which also acts as a COX inhibitor. The same mechanism is responsible for the antipyretic and antiplatelet effects, which are, however, not used clinically, as well as for the typical NSAID adverse effects.
An advantage of acemetacin is that it reduces gastric damage as compared to indometacin, possibly because acemetacin has less effect on the increase of leukotriene B4 synthesis and tumor necrosis factor (TNF) expression, leading to less induction of leukocyte-endothelial adherence.
Pharmacokinetics
The substance is quickly and almost completely absorbed from the gut. Peak blood plasma concentrations are reached after two hours. It is 80–90% bound to plasma proteins. Concentrations in the synovial fluid and membranes, muscle, and bone are higher than in the blood.
Apart from the active metabolite indometacin, a number of inactive metabolites are found after application of acemetacin: the O-desmethyl-, des-4-chlorobenzoyl-, and O-desmethyl-des-4-chlorobenzoyl derivatives of both indometacin and acemetacin, as well as all of these substances' glucuronides (mediated at least partly by the enzyme UGT2B7). Elimination half-life is 4.5±2.8 hours (in some individuals up to 16 hours) under steady state conditions. 40% are eliminated via the kidney, and 50% via the faeces.
Chemistry
Acemetacin is the glycolic acid ester of indometacin. It is a fine, slightly yellowish, crystalline powder that melts at . It is polymorphic, with four known anhydrous (water-free) and two monohydrate crystalline forms.
Society and culture
Brand names
Other brand names include Zadex (Hungary), Rheutrop (Austria), Acemetadoc, Acephlogont, Azeat, Rantudil (Germany, Hungary, Mexico, Poland, Portugal, Turkey), Gamespir (Greece), Oldan, Reudol (Spain), Tilur (Switzerland), ACEO (Taiwan), Ost-map (Egypt).
References
Indoles
Acetic acids
Carboxamides
Drugs developed by Merck
Indole ethers at the benzene ring
4-Chlorophenyl compounds
Prodrugs
Nonsteroidal anti-inflammatory drugs | Acemetacin | [
"Chemistry"
] | 1,041 | [
"Chemicals in medicine",
"Prodrugs"
] |
13,590,950 | https://en.wikipedia.org/wiki/Alclofenac | Alclofenac is a nonsteroidal anti-inflammatory drug (NSAID).
Synthesis
References
Carboxylic acids
Chloroarenes
Phenol ethers
Nonsteroidal anti-inflammatory drugs
Allyl compounds | Alclofenac | [
"Chemistry"
] | 50 | [
"Carboxylic acids",
"Functional groups"
] |
13,590,992 | https://en.wikipedia.org/wiki/Kebuzone | Kebuzone (or ketophenylbutazone) is a nonsteroidal anti-inflammatory drug (NSAID) that is used for the treatment of inflammatory conditions such as thrombophlebitis and rheumatoid arthritis (RA).
References
Ketones
Pyrazolidindiones
Nonsteroidal anti-inflammatory drugs | Kebuzone | [
"Chemistry"
] | 76 | [
"Ketones",
"Functional groups"
] |
13,591,037 | https://en.wikipedia.org/wiki/Bucillamine | Bucillamine is an antirheumatic agent developed from tiopronin. Activity is mediated by the two thiol groups that the molecule contains. Research done in USA showed positive transplant preservation properties. Bucillamine is currently being investigated for COVID-19 drug repurposing.
Bucillamine has a well-known safety profile and has been prescribed for the treatment of rheumatoid arthritis in Japan and South Korea for over 30 years. It is a cysteine derivative with 2 thiol groups that is 16-fold more potent than acetylcysteine (NAC) as a thiol donor in vivo, giving it vastly superior function in restoring glutathione and therefore greater potential to prevent acute lung injury during influenza infection. Bucillamine has also been shown to prevent oxidative and reperfusion injury in heart and liver tissues.
Bucillamine has both proven safety and a proven mechanism of action similar to that of NAC, but with much higher potency, mitigating the previous obstacles to using thiols therapeutically. It is hypothesized that similar processes related to reactive oxygen species (ROS) are involved in acute lung injury during COVID-19 infection, possibly justifying the investigation of bucillamine as an intervention for COVID-19.
On July 31, 2020, the U.S. Food & Drug Administration (FDA) approved Revive Therapeutics Ltd. to proceed with a randomized, double-blind, placebo-controlled confirmatory Phase 3 clinical trial protocol to evaluate the safety and efficacy of bucillamine in patients with mild-moderate COVID-19.
References
Antirheumatic products
Carboxylic acids
Propionamides
Thiols | Bucillamine | [
"Chemistry"
] | 363 | [
"Organic compounds",
"Carboxylic acids",
"Thiols",
"Functional groups"
] |
13,591,771 | https://en.wikipedia.org/wiki/Monochrome%20monitor | A monochrome monitor is a type of computer monitor in which computer text and images are displayed in varying tones of only one color, as opposed to a color monitor that can display text and images in multiple colors. They were very common in the early days of computing, from the 1960s through the 1980s, before color monitors became widely commercially available. They are still widely used in applications such as computerized cash register systems, owing to the age of many registers. Green screen was the common name for a monochrome monitor using a green "P1" phosphor screen; the term is often misused to refer to any block mode display terminal, regardless of color, e.g., IBM 3279, 3290.
Abundant in the early-to-mid-1980s, they succeeded Teletype terminals and preceded color CRTs and later LCDs as the predominant visual output device for computers.
CRT design
The most common technology for monochrome monitors was the CRT, although other display technologies, such as plasma displays, were also used.
Unlike color monitors, which display text and graphics in multiple colors through the use of alternating-intensity red, green, and blue phosphors, monochrome monitors have only one color of phosphor (mono means "one", and chrome means "color"). All text and graphics are displayed in that color. Some monitors have the ability to vary the brightness of individual pixels, thereby creating the illusion of depth and color, exactly like a black-and-white television.
Typically, only a limited set of brightness levels was provided to save display memory, which was very expensive in the 1970s and 1980s: either normal/bright or normal/dim (1 bit per character), as in the VT100, or black, dark gray, light gray, and white (2 bits per pixel), as in the NeXT MegaPixel Display.
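A quick sizing sketch (Python) makes the memory trade-off concrete. The 720×348 figure is the Hercules mode mentioned later in this article; the 1120×832 NeXT MegaPixel resolution is my assumption, not stated here:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Raw framebuffer size in bytes for a bitmapped display."""
    return width * height * bits_per_pixel // 8

# 1-bit monochrome at Hercules-class resolution (720x348)
print(framebuffer_bytes(720, 348, 1))   # 31320 bytes (~31 KB)

# 2-bit grayscale at the NeXT MegaPixel's 1120x832 (assumed resolution)
print(framebuffer_bytes(1120, 832, 2))  # 232960 bytes (~228 KB)

# Full 8-bit grayscale at the same resolution would quadruple that again:
print(framebuffer_bytes(1120, 832, 8))  # 931840 bytes (~910 KB)
```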
Monochrome monitors are commonly available in three colors: if the P1 phosphor is used, the screen is green monochrome. If the P3 phosphor is used, the screen is amber monochrome. If the P4 phosphor is used, the screen is white monochrome (known as "page white"); this is the same phosphor as used in early television sets.
An amber screen was claimed to give improved ergonomics, specifically by reducing eye strain; this claim appears to have little scientific basis.
Usage
Well-known examples of early monochrome monitors are the VT100 from Digital Equipment Corporation, released in 1978, the Apple Monitor III in 1980, and the IBM 5151, which accompanied the IBM PC model 5150 upon its 1981 release.
The 5151 was designed to work with the PC's Monochrome Display Adapter (MDA) text-only graphics card, but the third-party Hercules Graphics Card became a popular companion to the 5151 screen because of the Hercules' comparatively high-resolution bitmapped 720×348 pixel monochrome graphics capability, much used for business presentation graphics generated from spreadsheets like Lotus 1-2-3. This was much higher resolution than the alternative IBM Color Graphics Adapter 320×200 pixel, or 640×200 pixel graphic standard. It could also run most programs written for the CGA card's standard graphics modes. Monochrome monitors continued to be used, even after the introduction of higher resolution color IBM Enhanced Graphics Adapter and Video Graphics Array standards in the late 1980s, for dual-monitor applications.
Clarity
Pixel for pixel, monochrome CRT monitors produce sharper text and images than color CRT monitors. This is because a monochrome monitor is made up of a continuous coating of phosphor and the sharpness can be controlled by focusing the electron beam; whereas on a color monitor, screen space is divided into triads of three phosphor dots (one red, one blue, one green) separated by a mask. The effective resolution of a color monitor is limited by the density of these triads. Furthermore, pixels in the source image will not align precisely to these triads, so moiré effects will occur as the image resolution approaches the limit imposed by the size of the phosphor triads. Monochrome monitors were used in almost all dumb terminals and were widely used in text-based applications such as computerized cash registers and point-of-sale systems because of their superior sharpness and enhanced readability.
Some green screen displays were furnished with a particularly full/intense phosphor coating, making the characters very clear and sharply defined (thus easy to read) but generating an afterglow-effect (sometimes called a "ghost image") when the text scrolled down the screen or when a screenful of information was quickly replaced with another as in word processing page up/down operations. Other green screens avoided the heavy afterglow-effects, but at the cost of much more pixelated character images. The 5151, amongst others, had brightness and contrast controls to allow the user to set their own compromise.
Phosphor limitations
Monochrome monitors are particularly susceptible to screen burn (hence the advent, and name, of the screensaver), because the phosphors used are of very high intensity.
Another effect of the high-intensity phosphors is an effect known as "ghosting", wherein a dim afterglow of the screen's contents is briefly visible after the screen has been blanked.
This ghosting effect is deliberate on some monitors, known as "long persistence" monitors. These use the relatively long decay period of the phosphor glow to reduce flickering and eye strain.
In popular culture
The colour scheme, grid layout of characters, and ghosting effects of the now-obsolete monochrome CRT screens have become an eye-catching visual shorthand for computer-generated text, frequently in "futuristic" settings. The opening titles of the first Ghost in the Shell film and the digital rain effect of the Matrix trilogy science fiction films prominently feature computer displays with ghosting green text.
A similar grid of amber text is used in the science fiction TV show Travelers.
A free application for Linux terminal software called "Cool Retro Term" is available to accurately emulate old CRT Monochrome terminals for nostalgia or retrocomputing reasons. There is also an Xscreensaver hack called phosphor which emulates a long-persistence green screen and can be used as a terminal.
See also
IBM 3270
IBM 5250
IBM 5151
Apple Monitor III
References
Electronic display devices
Legacy hardware
Obsolete technologies
User interfaces | Monochrome monitor | [
"Technology"
] | 1,336 | [
"User interfaces",
"Interfaces"
] |
13,592,334 | https://en.wikipedia.org/wiki/Baze%20v.%20Rees | Baze v. Rees, 553 U.S. 35 (2008), is a decision by the United States Supreme Court, which upheld the constitutionality of a particular method of lethal injection used for capital punishment.
Background of the case
Ralph Baze and Thomas Bowling were sentenced to death in Kentucky, each for a double-murder. They argued that executing them by lethal injection would violate the Eighth Amendment prohibition of cruel and unusual punishment. The governing legal standard required that lethal injection must not inflict "unnecessary pain", and Baze and Bowling argued that the lethal chemicals Kentucky used carried an unnecessary risk of inflicting pain during the execution. Kentucky at the time used the then-common combination of sodium thiopental, pancuronium bromide, and potassium chloride. The Supreme Court of Kentucky rejected their claim, but the U.S. Supreme Court granted certiorari.
The case had nationwide implications because the specific "cocktail" used in Kentucky was the same one that virtually all states then used for lethal injection. The U.S. Supreme Court stayed all executions in the country between September 2007 and April 2008, when it delivered its ruling and affirmed the decision of Kentucky's top court. This remains the longest period without an execution in the United States since 1982.
Supreme Court's decision
The Supreme Court upheld Kentucky's method of lethal injection as constitutional by a vote of 7–2. No single opinion carried a majority. Chief Justice Roberts wrote a plurality opinion joined by Justice Kennedy and Justice Alito, that was later ruled to be the controlling opinion in Glossip v. Gross (2015).
Justice Alito wrote an opinion concurring with the plurality reasoning, while Justices Stevens, Scalia, Thomas and Breyer wrote opinions concurring in the judgment only.
Justice Ginsburg, joined by Justice Souter, wrote the lone dissent.
Plurality opinion
The plurality opinion, written by Chief Justice John Roberts and joined by Justices Anthony Kennedy and Samuel Alito, held that Kentucky's execution method was humane and constitutional. In response to the petitioners' argument that the risk of mistakes in the execution protocol was so great as to render it unconstitutional, the plurality wrote that "an isolated mishap alone does not violate the Eighth Amendment". It also stated that the first drug in a multi-drug cocktail must render the inmate unconscious; otherwise, there is a "substantial, constitutionally unacceptable risk" that the inmate will suffer a painful suffocation.
Stevens' concurrence
Justice John Paul Stevens concurred in the opinion of the Court, writing separately to explain his concerns with the death penalty in general. He wrote that the case questioned the "justification for the death penalty itself". He characterized the motivation behind the death penalty as an antithesis to modern values:
We are left, then, with retribution as the primary rationale for imposing the death penalty. And indeed, it is the retribution rationale that animates much of the remaining enthusiasm for the death penalty. As Lord Justice Denning argued in 1950, some crimes are so outrageous that society insists on adequate punishment, because the wrong-doer deserves it, irrespective of whether it is a deterrent or not. See Gregg, 428 U. S., at 184, n. 30. Our Eighth Amendment jurisprudence has narrowed the class of offenders eligible for the death penalty to include only those who have committed outrageous crimes defined by specific aggravating factors. It is the cruel treatment of victims that provides the most persuasive arguments for prosecutors seeking the death penalty. A natural response to such heinous crimes is a thirst for vengeance.
He further stressed concern over the process of death penalty cases where emotion plays a major role and where the safeguards for defendants may have been lowered. He cited statistics that indicated that many people sentenced to die were later found to be wrongly convicted. He concluded by stating that a penalty "with such negligible returns to the State [is] patently excessive and cruel and unusual punishment violative of the Eighth Amendment".
Scalia's concurrence
Justice Scalia, joined by Justice Thomas, wrote separately "to provide what I think is needed response to Justice Stevens' separate opinion":
In the fact of Justice Stevens' experience, the experience of all others is, it appears, of little consequence. The experience of the state legislatures and the Congress—who retain the death penalty as a form of punishment—is dismissed as "the product of habit and inattention rather than an acceptable deliberative process". The experience of social scientists whose studies indicate that the death penalty deters crime is relegated to a footnote. The experience of fellow citizens who support the death penalty is described, with only the most thinly veiled condemnation, as stemming from a "thirst for vengeance". It is Justice Stevens' experience that reigns over all.
Justice Stevens' final refuge in his cost-benefit analysis is a familiar one: There is a risk that an innocent person might be convicted and sentenced to death—though not a risk that Justice Stevens can quantify, because he lacks a single example of a person executed for a crime he did not commit in the current American system.
But of all Justice Stevens' criticisms of the death penalty, the hardest to take is his bemoaning of "the enormous costs that death penalty litigation imposes on society," including the "burden on the courts and the lack of finality for victim's families." Those costs, those burdens, and that lack of finality are in large measure the creation of Justice Stevens and other Justices opposed to the death penalty, who have "encumber[ed] [it] … with unwarranted restrictions neither contained in the text of the Constitution nor reflected in two centuries of practice under it"—the product of their policy views "not shared by the vast majority of the American people."
Dissent
In a dissenting opinion joined by Justice Souter, Justice Ginsburg challenged the constitutionality of Kentucky's three-drug lethal injection protocol.
Justice Ginsburg highlighted the excruciating pain caused by the second and third drugs, pancuronium bromide and potassium chloride, arguing that their use on a conscious inmate would have been "constitutionally unacceptable." While the plurality argued that Kentucky's protocol was constitutional because it lacked substantial evidence of an inadequate dose of the first drug, sodium thiopental, Justice Ginsburg disagreed. She asserted that Kentucky's protocol lacked basic safeguards used by other states to confirm an inmate's unconsciousness before administering subsequent drugs.
Examining previous Supreme Court cases on execution methods, Justice Ginsburg found limited guidance on the standard for evaluating Kentucky's lethal injection protocol. She emphasized the evolving standards of decency and the need to consider the severity of pain, likelihood of occurrence, and feasibility of alternatives. While the plurality set a fixed threshold for the risk factor, Justice Ginsburg argued that the three factors were interconnected, and a strong showing in one area reduced the importance of others.
See also
Lethal injection
Wilkerson v. Utah (1878)
Glossip v. Gross (2015)
Bucklew v. Precythe (2019)
Bibliography
Linda Greenhouse. "Justices to Enter the Debate Over Lethal Injection". The New York Times, September 26, 2007.
"Supreme Court clears way for executions to resume" Reuters, April 16, 2008.
References
External links
Baze v. Rees on ScotusWiki
Audio: complete recording of oral arguments before the court from Oyez.org
United States Supreme Court cases
United States Supreme Court cases in 2008
United States Supreme Court cases of the Roberts Court
Cruel and Unusual Punishment Clause and death penalty case law
Legal history of Kentucky
Lethal injection
Capital punishment in Kentucky
2008 in Kentucky | Baze v. Rees | [
"Environmental_science"
] | 1,582 | [
"Toxicology",
"Lethal injection"
] |
13,592,714 | https://en.wikipedia.org/wiki/Norboletone | Norboletone () (former proposed brand name Genabol), or norbolethone, is a synthetic and orally active anabolic–androgenic steroid (AAS) which was never marketed. It was first developed in 1966 by Wyeth Laboratories and was investigated for use as an agent to encourage weight gain and for the treatment of short stature, but was never marketed commercially because of fears that it might be toxic. It subsequently showed up in urine tests on athletes in competition in the early 2000s.
Norboletone was found to have been brought to the market by the chemist Patrick Arnold, of the Bay Area Laboratory Co-operative (BALCO), an American nutritional supplement company. It is reputed to have been the active ingredient in the original formulation of the "undetectable" steroid formulation known as "The Clear" before being replaced by the more potent drug tetrahydrogestrinone.
In 2002, Don Catlin, the founder and then-director of the UCLA Olympic Analytical Lab, identified norboletone for the first time in an athlete's urine sample. In the same year, U.S. bicycle racer Tammy Thomas was caught using it and was banned from her sport. The following year, Catlin identified and developed a test for tetrahydrogestrinone (THG), the second reported designer anabolic steroid, a key development in the BALCO Affair.
Norboletone is on the World Anti-Doping Agency's list of prohibited substances, and is therefore banned from use in most major sports.
References
Abandoned drugs
Anabolic–androgenic steroids
Estranes
Designer drugs
Hepatotoxins | Norboletone | [
"Chemistry"
] | 348 | [
"Drug safety",
"Abandoned drugs"
] |
13,592,900 | https://en.wikipedia.org/wiki/Desoxymethyltestosterone | Desoxymethyltestosterone (DMT), known by the nicknames Madol and Pheraplex, is a synthetic and orally active anabolic–androgenic steroid (AAS) and a 17α-methylated derivative of dihydrotestosterone (DHT) which was never marketed for medical use. It was one of the first designer steroids to be marketed as a performance-enhancing drug to athletes and bodybuilders.
Desoxymethyltestosterone is sometimes abbreviated as DMT, though it should not be confused with the hallucinogen dimethyltryptamine, which is also known by the same acronym.
Side effects
Pharmacology
Pharmacodynamics
In animal studies, desoxymethyltestosterone has been found to bind to the androgen receptor (AR) about half as strongly as DHT, and to cause side effects that are typical of 17α-alkylated AAS, such as liver damage and left ventricular hypertrophy when taken in higher doses.
Desoxymethyltestosterone is unusual in that it is structurally a 2-ene compound, lacking the 3-keto group present in almost all commercial AAS (with ethylestrenol being a rare and notable exception). This does not mean it is a weak compound: clinical research has determined that it is a fairly potent oral agent. Rat studies indicate that desoxymethyltestosterone has an anabolic effect 160% that of testosterone while being only 60% as androgenic, giving it a Q ratio of 6.5:1. Consistent with this favorable ratio, experiments in orchiectomized rats demonstrated that treatment with desoxymethyltestosterone stimulated only the weight of the levator ani muscle; the prostate and seminal vesicle weights remained unaffected, leading the authors of one study to characterize desoxymethyltestosterone as a powerful AAS with attributes of a selective androgen receptor modulator (SARM) and some indication of toxicity.
Chemistry
Desoxymethyltestosterone, also known as 3-desoxy-17α-methyl-δ2-5α-dihydrotestosterone (3-desoxy-17α-methyl-δ2-DHT) or as 17α-methyl-5α-androst-2-en-17β-ol, is a synthetic androstane steroid and a 17α-alkylated derivative of dihydrotestosterone (DHT).
History
Desoxymethyltestosterone was invented in 1961 by Max Huffman who obtained a patent on the compound the same year. It was described in the scientific literature in 1963. However, it was never brought to market as a commercial drug. Desoxymethyltestosterone was rediscovered by chemist, AAS enthusiast, and amateur bodybuilder Patrick Arnold in 2005. Arnold produced desoxymethyltestosterone and supplied it to Victor Conte of Bay Area Laboratory Co-operative (BALCO), an American nutritional supplement company and steroid supplier.
DMT became a controlled substance in the US on January 4, 2010, and is classified as a Schedule III anabolic steroid under the United States Controlled Substances Act along with boldione and dienedione. The substance had come under scrutiny after it was found to be present in several over-the-counter bodybuilding supplements.
See also
5α-Androst-2-en-17-one
References
Abandoned drugs
1-Methylcyclopentanols
Anabolic–androgenic steroids
Androstanes
Designer drugs
Hepatotoxins | Desoxymethyltestosterone | [
"Chemistry"
] | 791 | [
"Drug safety",
"Abandoned drugs"
] |
13,593,602 | https://en.wikipedia.org/wiki/Phloxine | Phloxine B (commonly known simply as phloxine) is a water-soluble red dye used for coloring drugs and cosmetics in the United States and coloring food in Japan. It is derived from fluorescein, but differs by the presence of four bromine atoms at positions 2, 4, 5 and 7 of the xanthene ring and four chlorine atoms in the carboxyphenyl ring. It has an absorption maximum around 540 nm and an emission maximum around 564 nm. Apart from industrial use, phloxine B has functions as an antimicrobial substance, viability dye and biological stain. For example, it is used in hematoxylin-phloxine-saffron (HPS) staining to color the cytoplasm and connective tissue in shades of red.
Antimicrobial properties
Lethal dosage levels
In the presence of light, phloxine B has a bactericidal effect on gram-positive strains, such as Bacillus subtilis, Bacillus cereus, and several methicillin-resistant Staphylococcus aureus (MRSA) strains. At a minimum inhibitory concentration of 25 μM, growth is reduced by 10-fold within 2.5 hours. At concentrations of 50 μM and 100 μM, growth is stopped completely and cell counts decrease by a factor of 104 to 105. For humans, the Food and Drug Administration deems phloxine B to be safe up to a daily dosage of 1.25 mg/kg.
Mechanism of action
Bacteria exposed to phloxine B die from oxidative damage. Phloxine B ionizes in water to become a negatively charged ion that binds to positively charged cellular components. When phloxine B is exposed to light, debromination occurs and free radicals and singlet oxygen are formed. These compounds cause irreversible damage to the bacteria, leading to growth arrest and cell death. Gram-negative bacteria are phloxine B-resistant due to the outer cell membrane that surrounds them. This polysaccharide-coated lipid bilayer creates a permeability barrier that prevents efficient uptake of the compound. Addition of EDTA, which is known to strip the lipopolysaccharides and increase membrane permeability, removes the phloxine B resistance and allows gram-negative bacteria to be killed as well.
Measure of viability
Phloxine B can be used to stain dead cells of several yeasts, including Saccharomyces cerevisiae and Schizosaccharomyces pombe. When diluted in yeast growth media, the dye is unable to enter cells because of their intact membranes. Dead yeast cells lose membrane integrity, so phloxine B can enter and stain intracellular cytosolic components. Staining is therefore a measure of cell death.
In cell counting assays, the number of fluorescent (i.e. dead) cells observed through a haemocytometer can be compared to the total number of cells to give a measure of mortality. The same principle can be applied at higher throughput by fluorescence-activated flow cytometry (FACS), where all phloxine B-stained cells in a sample are counted.
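As a minimal sketch of the counting arithmetic (hypothetical haemocytometer counts; assumes the simple model above in which only dead cells stain):

```python
def mortality_fraction(stained_cells, total_cells):
    """Fraction of dead cells, assuming phloxine B stains only dead cells."""
    if total_cells == 0:
        raise ValueError("no cells counted")
    return stained_cells / total_cells

# Hypothetical counts from one haemocytometer grid
stained = 37   # fluorescent (phloxine B-positive) cells
total = 412    # all cells counted in the same fields
print(f"mortality: {mortality_fraction(stained, total):.1%}")  # mortality: 9.0%
```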
[Note: some reports suggest that phloxine B is instead pumped out of live yeast cells but retained in dead/dying yeast cells. However, definitive evidence for either model is still needed.]
References
Organobromides
Chloroarenes
Lactones
Fluorone dyes
Spiro compounds | Phloxine | [
"Chemistry"
] | 754 | [
"Organic compounds",
"Spiro compounds"
] |
13,594,421 | https://en.wikipedia.org/wiki/Fenproporex | Fenproporex (Perphoxene) (N-2-Cyanoethylamphetamine) (3-(1-phenylpropan-2-ylamino)propanenitrile) (3-[(1-Methyl-2-Phenylethyl)amino]propiononitrile) is a stimulant drug of the phenethylamine and amphetamine chemical classes that was developed in the 1960s. It is used as an appetite suppressant for the treatment of obesity.
Fenproporex produces amphetamine as a metabolite and was withdrawn in many countries following problems with abuse, but it is still prescribed in some countries. It is sometimes combined with benzodiazepines, antidepressants, and other compounds to create a version of the "rainbow diet pill".
Fenproporex has never been approved by the US Food and Drug Administration (FDA) for sale in the US due to lack of efficacy and safety data. However, in March 2009, the FDA warned consumers that it has been detected as an unlabeled component of diet pills available over the Internet. Fenproporex is designated a Schedule IV controlled substance in the US pursuant to the Controlled Substances Act.
Fenproporex is on the list of substances banned by the World Anti-Doping Agency, and any sportsperson testing positive for the substance faces a ban from competition.
References
Substituted amphetamines
Nitriles
Norepinephrine-dopamine releasing agents | Fenproporex | [
"Chemistry"
] | 317 | [
"Nitriles",
"Functional groups"
] |
13,594,613 | https://en.wikipedia.org/wiki/Mefenorex | Mefenorex (Rondimen, Pondinil, Anexate) is a stimulant drug which was used as an appetite suppressant. It is an amphetamine derivative which was developed in the 1970s and used for the treatment of obesity. Mefenorex produces amphetamine as a metabolite, and has been withdrawn in many countries despite having only mild stimulant effects and relatively little abuse potential.
References
Substituted amphetamines
Organochlorides
Norepinephrine-dopamine releasing agents
Prodrugs | Mefenorex | [
"Chemistry"
] | 117 | [
"Chemicals in medicine",
"Prodrugs"
] |
13,594,916 | https://en.wikipedia.org/wiki/Amfetaminil | Amfetaminil (also known as amphetaminil, N-cyanobenzylamphetamine, and AN-1; brand name Aponeuron) is a stimulant drug derived from amphetamine, which was developed in the 1970s and used for the treatment of obesity, ADHD, and narcolepsy. It has largely been withdrawn from clinical use following problems with abuse. The drug is a prodrug to amphetamine.
Stereochemistry
Amfetaminil is a molecule with two stereogenic centers. Thus, four different stereoisomers exist:
(R)-2-[(R)-1-Phenylpropan-2-ylamino]-2-phenylacetonitrile (CAS number 478392-08-4)
(S)-2-[(S)-1-Phenylpropan-2-ylamino]-2-phenylacetonitrile (CAS number 478392-12-0)
(R)-2-[(S)-1-Phenylpropan-2-ylamino]-2-phenylacetonitrile (CAS number 478392-10-8)
(S)-2-[(R)-1-Phenylpropan-2-ylamino]-2-phenylacetonitrile (CAS number 478392-14-2)
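The count of four follows from the two independent stereogenic centers, each of which can be (R) or (S), giving 2² combinations; a trivial enumeration (Python, purely illustrative):

```python
from itertools import product

# Two stereogenic centers, each independently (R) or (S): 2**2 = 4 stereoisomers
stereoisomers = list(product("RS", repeat=2))
print(stereoisomers)       # [('R', 'R'), ('R', 'S'), ('S', 'R'), ('S', 'S')]
print(len(stereoisomers))  # 4
```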
Synthesis
Schiff base formation between amphetamine (1) and benzaldehyde (2) gives benzalamphetamine [2980-02-1] (3). Nucleophilic attack of cyanide anion on the imine (cf. the Strecker reaction) gives amfetaminil (4). Finally, reaction with nitrous acid gives (5), which rearranges to a sydnone to give CID:88166659 (6). Feprosidnine is the analogue lacking the phenyl group.
References
Nitriles
Norepinephrine-dopamine releasing agents
Prodrugs
Substituted amphetamines
Wakefulness-promoting agents
World Anti-Doping Agency prohibited substances | Amfetaminil | [
"Chemistry"
] | 478 | [
"Chemicals in medicine",
"Nitriles",
"Functional groups",
"Prodrugs"
] |
13,595,249 | https://en.wikipedia.org/wiki/Kilokaiser | The Kaiser (K) is a unit of energy. A common form is kiloKaiser (kK). 1 kK = 1000 cm−1. ( cm−1, wavenumber or inverse wavelength.) This unit is most commonly used with respect to energy transitions between electronic states in inorganic complexes.
See also
Wavenumber
Kilokaiser is a common but incorrect spelling of the unit kilokayser, which equals 1000 wavenumbers (cm−1). The unit is named after Heinrich Gustav Johannes Kayser (16 March 1853 – 14 October 1940), a German physicist.
References
Scarlata, Suzanne; Rakesh Gupta; et al. Biochemistry, Vol. 35, No. 47, 1996
Fuguet, Elisabet; Carla Ráfols; et al. Langmuir, Vol. 19, No. 1, 2003
Douglas, Bodie; Darl McDaniel; and John Alexander. Concepts and Models of Inorganic Chemistry. 3rd ed. John Wiley & Sons, Inc. New York. 1994.
Units of energy | Kilokaiser | [
"Mathematics"
] | 214 | [
"Quantity",
"Units of energy",
"Units of measurement"
] |
13,595,409 | https://en.wikipedia.org/wiki/Loop%20performance | Loop performance in control engineering indicates the performance of control loops, such as a regulatory PID loop. Performance refers to the accuracy of a control system's ability to track (output) the desired signals to regulate the plant process variables in the most beneficial and optimised way, without delay or overshoot.
Importance
Regulatory control loops are critical in automated manufacturing and utility industries such as refining, paper and chemicals manufacturing, and power generation. Each loop controls a particular parameter within a process, such as temperature, pressure, flow, or level. For example, temperature controllers are used in boilers employed in the production of gasoline.
Software
There are many software applications that help in measuring and analysing the performance of control loops in industrial plants. Benchmarking the loop performance and identifying opportunities for improvement are key drivers for improving plant reliability, production throughput and safe operation.
References
Control theory | Loop performance | [
"Mathematics"
] | 190 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
13,595,435 | https://en.wikipedia.org/wiki/Chaetocladus | Chaetocladus is an extinct non-calcifying genus of unicellular green algae known from the Upper Silurian.
Morphology
Chaetocladus thalli range from 2 to 6 cm in height and average 1 cm in diameter. They comprise a parallel-sided, unbranched axis surrounded by leaf-like ramifications.
Fossil record
Chaetocladus is known from upper Silurian Konservat-Lagerstätten, where it is found in association with other algae, arthropods, and annelid worms.
Similar dasycladalean algae are reported from late Ordovician Lagerstätten.
Classification
Due to its morphological similarity to the extant order Dasycladales, Chaetocladus is considered an early cousin of this order. Unlike the majority of Dasycladales, Chaetocladus did not deposit calcite, and it therefore required much rarer taphonomic conditions to be preserved.
Some taxa now recognised as Chaetocladus were originally described as graptolites.
Species
C. capitatus
C. dubius
C. gracilis
C. hefteri
C. plumula
C. ruedemanni
References
Ordovician plants
Silurian plants
Devonian plants
Fossil algae
Ulvophyceae genera
Middle Ordovician first appearances
Middle Devonian genus extinctions
Fossil taxa described in 1997 | Chaetocladus | [
"Biology"
] | 294 | [
"Fossil algae",
"Algae"
] |
13,595,525 | https://en.wikipedia.org/wiki/ICTP%20Ramanujan%20Prize | The DST-ICTP-IMU Ramanujan Prize for Young Mathematicians from Developing Countries is a mathematics prize awarded annually by the International Centre for Theoretical Physics in Italy. The prize is named after the Indian mathematician Srinivasa Ramanujan. It was founded in 2004, and was first awarded in 2005.
The prize is awarded to a researcher under 45 years of age from a developing country who has conducted outstanding research in a developing country. The prize is supported by the Ministry of Science and Technology (India) and the Norwegian Academy of Science and Letters through the Abel Fund, with the cooperation of the International Mathematical Union.
List of winners
See also
SASTRA Ramanujan Prize
List of mathematics awards
References
External links
International Mathematical Union
Mathematics awards
Awards established in 2004
Srinivasa Ramanujan
International awards | ICTP Ramanujan Prize | [
"Technology"
] | 164 | [
"Science and technology awards",
"International science and technology awards",
"Mathematics awards"
] |
13,596,786 | https://en.wikipedia.org/wiki/PhoneME | The phoneME project is Sun Microsystems reference implementation of Java virtual machine and associated libraries of Java ME with source, licensed under the GNU General Public License.
The phoneME library includes implementations of Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP) as well as complete or partial implementations for some optional package JSRs.
Optional Java ME package implementations
phoneME provides complete or partial implementations of the following JSRs:
PDA Optional Packages for the J2ME Platform (JSR 75)
Java APIs for Bluetooth (JSR 82)
Wireless Messaging API and Wireless Messaging API 2.0 (JSR 120 and JSR 205)
Java Mobile Media API (JSR 135)
Web Services Specification for Java ME (JSR 172)
Security and Trust Services API for J2ME (JSR 177)
Location API for Java ME (JSR 179)
Session Initiation Protocol (Java) (JSR 180)
Content Handler API (JSR 211)
Scalable 2D Vector Graphics API (JSR 226)
Payment API (JSR 229)
Mobile Internationalization API (JSR 238)
Java Binding for OpenGL ES (JSR 239)
Supported platforms
Supported platforms are Linux/ARM, Linux/x86 and Windows/i386.
See also
Java Platform, Micro Edition
External links
PhoneME project page (original website, currently shut down)
JSR 68 — J2ME Platform Specification
phoneme-svn.dump on Archive.org — A dump of the Apache Subversion repository before the website was shut down
git version of the source code dump — A more accessible version of the SVN dump from Archive.org, converted to git
Computing platforms
Platform, Micro Edition
Software using the GNU General Public License | PhoneME | [
"Technology"
] | 357 | [
"Computing platforms",
"Java platform"
] |
13,598,464 | https://en.wikipedia.org/wiki/Hypermobility%20%28travel%29 | Hypermobile travelers are "highly mobile individuals" who take "frequent trips, often over great distances." They "account for a large share of the overall kilometres travelled, especially by air." These people contribute significantly to the overall amount of air miles flown within a given society. Although concerns over hypermobility apply to several modes of transport, the environmental impact of aviation and especially its greenhouse gas emissions have brought particular focus on flying. Among the reasons for this focus is that these emissions, because they are made at high altitude, have a climate impact that is commonly estimated to be 2.7 higher, than the same emissions if made at ground-level.
Although the amount of time people have spent in motion has remained constant since 1950, the shift from feet and bicycles to cars and planes has increased the speed of travel fivefold. This results in the twin effects of wider and shallower regions of social activity around each person (further exacerbated by electronic communication, which can be seen as a form of virtual mobility), and a degradation of the social and physical environment brought about by high-speed traffic (as theorised by urban designer Donald Appleyard).
The changes are brought about locally due to the use of cars and motorways, and internationally by aeroplanes. Some of the social threats of hypermobility include:
more polarisation between rich and poor
reduced health and fitness
Compulsive travel has been proposed as a model of addiction in one paper.
Widespread Internet use is seen as a contributory factor towards hypermobility due to the increased ease which it enables travel to be desired and organized. On the other hand, the proliferation of online communication tools as an alternative to in-person meetings has been linked to a 25% decrease in business travel by UK residents from 2000 to 2010.
The term hypermobility arose around 1980 concerning the flow of capital, and since the early 1990s has also referred to excessive travel. [See: Hepworth and Ducatel (1992); Whitelegg (1993); Lowe (1994); van der Stoep (1995); Shields (1996); Cox (1997); Adams (1999); Khisty and Zeitler (2001); Gössling et al. (2009); Mander & Randles (2009); and (Higham 2014).] The term is widely credited as having been coined by Adams (1999), but apart from the title of the work it says nothing explicit about it except that "[t]he term hypermobility is used in this essay to suggest that it may be possible to have too much of a good thing."
See also
References
Air pollution
Demographic economics
Environmental impact by source
Human geography
Human migration
Sustainable transport
Transportation planning
Culture | Hypermobility (travel) | [
"Physics",
"Environmental_science"
] | 562 | [
"Travel",
"Physical systems",
"Transport",
"Sustainable transport",
"Environmental social science",
"Human geography"
] |
13,598,940 | https://en.wikipedia.org/wiki/SACI-2 | The SACI-2 was a Brazilian experimental satellite, designed and built by the Brazilian Institute for Space Research (INPE). It was launched on 11 December 1999 from the INPE base in Alcântara, Maranhão, by the Brazilian VLS-1 V02 rocket. Due to failure of its second stage, the rocket veered off course and had to be destroyed 3 minutes and 20 seconds after launch.
The name was officially an acronym of Satélite de Aplicações CIentíficas ("Scientific Applications Satellite"), but was obviously taken from the Saci character of Brazilian folklore.
Specifications
The satellite weighed approximately 80 kg. It was a box approximately 60 cm long and 40 cm square, with a circular base plate and surrounded by a metal ring, both about 80 cm in diameter. Besides being a technology testbed, it carried four scientific payloads (PLASMEX, MAGNEX, OCRAS and PHOTO), with a total weight of 10 kg, to investigate plasma bubbles in the geomagnetic field, airglow, and anomalous cosmic radiation fluxes. It was meant to circle the Earth in a circular orbit at 750 km altitude, inclined 17.5° from the Equator.
Energy supply
Solar cells: gallium arsenide (GaAs)
Dimensions: 3 panels of 57 cm x 44 cm
Efficiency: 19%
Power output: 150 W
Nickel Cadmium (NiCd) Battery Cells
Voltage: 1.4 V
Capacity: 4.5 Ah
Remote control rate: 19.2 kbit/s
Transmission rate: 500 kbit/s
Onboard antennas: 2 for transmission and 2 for reception, microstrip type
Operating frequencies (telemetry / telecommand): 2.250 GHz / 2.028 GHz
Ground station receiving antenna: 3.4 m in diameter
The spin-stabilized spacecraft carried two S-band communication links (a 2 W, 256 kbit/s downlink and a 19.2 kbit/s uplink) and a 48 MB solid-state data recorder. It is variously reported to have cost between US$800,000 and US$1.7 million.
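A rough cross-check of the solar-array figures listed above (my own arithmetic; the ~1361 W/m² solar constant and normal incidence are assumptions, and real output also depends on spin attitude and cell packing losses):

```python
SOLAR_CONSTANT = 1361.0  # W/m^2 at Earth's distance from the Sun (assumed)

panel_area = 3 * 0.57 * 0.44                       # three 57 cm x 44 cm panels, m^2
ideal_power = panel_area * SOLAR_CONSTANT * 0.19   # 19% cell efficiency

print(f"{panel_area:.3f} m^2 -> {ideal_power:.0f} W ideal")  # 0.752 m^2 -> 195 W ideal
# The quoted 150 W is plausible: a spin-stabilised craft never has all panels
# at normal incidence, and packing/wiring losses reduce output further.
```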
See also
1999 in spaceflight
References
External links
SACI-2 in Gunter's Space Page.
Spacecraft launched in 1999
Satellite launch failures | SACI-2 | [
"Astronomy"
] | 466 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
13,599,122 | https://en.wikipedia.org/wiki/Canyon%20Lake%20Gorge | Canyon Lake Gorge is a limestone gorge in Texas, which is around long, hundreds of yards (metres) wide, and up to or more deep, which was exposed in 2002 when extensive flooding of the Guadalupe River led to a huge amount of water going over the spillway from Canyon Lake reservoir and removing the sediment from the gorge. The gorge provides a valuable exposure of rock strata as old as 111 million years showing fossils and a set of dinosaur tracks, and forms a new ecosystem for wildlife with carp and other creatures in a series of pools fed by springs and waterfalls.
The Gorge Preservation Society formed as a local citizens' group to develop long-term plans for the gorge in partnership with the Guadalupe-Blanco River Authority and the U.S. Army Corps of Engineers. Public access to the gorge is restricted to guided tours by the Society along a designated route for a hike lasting about three hours. Availability of tours is limited, no pets are permitted, and no rock or fossil collecting is allowed. Research permits can be obtained by university or scientific research groups.
The flood of 2002
In July 2002 up to of water per second flowed over the spillway of Canyon Lake, Texas for approximately six weeks, the first time the spillway had been in use since the reservoir dam was constructed in 1964. Normally, the flow out of the reservoir is around of water per second. The Guadalupe River basin forms a part of "Flash Flood Alley" which is one of the river basins most prone to flash flooding in the world. Nine people were killed by the flood over a stretch of the river, which damaged or destroyed 48,000 homes and cost around $1 billion in damages, but the Canyon Lake manager has stated that even though the floodwaters went over the spillway, the dam still prevented an estimated $38.6 million in damages downstream during the event.
Educational and natural resource
On November 29, 2005, a ceremony was held in which representatives of the Guadalupe-Blanco River Authority and the U.S. Army Corps of Engineers signed an agreement to develop the gorge as an educational and natural resource.
Significance for geologists
The 2002 flood at Canyon Lake and subsequent rapid formation of Canyon Lake Gorge presented a unique opportunity to study the geomorphological power of rapidly moving water and to better understand the process of canyon formation.
In their 2010 study, Michael Lamb of the California Institute of Technology and Mark Fonstad of Texas State University documented the dramatic transformation of a section of the Guadalupe River Valley landscape into a steep-walled bedrock canyon in just three days. The scientists documented the excavation of bedrock limestone to an average depth of over 20 feet and average width of 130–200 feet for a distance of over one mile. The “plucking” and transport of massive boulders from the site resulted in the formation of several waterfalls, inner channels, and bedrock terraces. The abrasion of rock by sediment-loaded water sculpted walls and created plunge pools and teardrop-shaped “streamlined islands”. Although some of the geological formations present in the gorge are known to be associated with rapidly flowing flood water (such as the streamlined islands), other formations (such as the inner channels, knickpoints and terraces) have traditionally been interpreted through the “long ago and very slow” paradigm of geologic time in response to shifting climate or tectonic forcing.
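Taking the reported dimensions at face value, a back-of-the-envelope estimate of the excavated volume (my own arithmetic, using the midpoint of the reported width range):

```python
FT3_PER_M3 = 35.3147

length = 5280.0          # one mile, in feet
width = (130 + 200) / 2  # midpoint of the reported 130-200 ft width range
depth = 20.0             # reported average excavation depth, feet

volume_ft3 = length * width * depth
print(f"~{volume_ft3/1e6:.1f} million ft^3 (~{volume_ft3/FT3_PER_M3/1e6:.2f} million m^3)")
# ~17.4 million ft^3 (~0.49 million m^3) of rock removed in roughly three days
```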
Typically, a steep-walled narrow gorge is inferred to represent slow persistent erosion, but because many of the geological formations of Canyon Lake Gorge are virtually indistinguishable from other formations which have been attributed to long term (slower) processes, the data collected from Canyon Lake Gorge lends further credence to the hypothesis that some of the most spectacular canyons on Earth may have been carved rapidly during ancient megaflood events. Additionally, because the flood conditions under which the gorge was formed are known, Canyon Lake Gorge provides a means of developing improved computer model reconstructions of pre-historic floods to determine water volume, flood duration and erosion rates.
References
External links
Canyons and gorges of Texas
Protected areas of Comal County, Texas
Nature reserves in Texas
Landforms of Comal County, Texas
United States Army Corps of Engineers
Guadalupe-Blanco River Authority | Canyon Lake Gorge | [
"Engineering"
] | 846 | [
"Engineering units and formations",
"United States Army Corps of Engineers"
] |
13,603,155 | https://en.wikipedia.org/wiki/Industrial%20process%20imaging | Industrial process imaging, or industrial process tomography or process tomography are methods used to form an image of a cross-section of vessel or pipe in a chemical engineering or mineral processing, or petroleum extraction or refining plant.
Process imaging is used for the development of process equipment such as filters, separators, and conveyors, as well as for monitoring production plant, including flow rate measurement. In addition to conventional tomographic methods widely used in medicine, such as X-ray computed tomography, magnetic resonance imaging, gamma-ray tomography, and ultrasound tomography, new and emerging methods such as electrical capacitance tomography, magnetic induction tomography, and electrical resistivity tomography (similar to medical electrical impedance tomography) are also used.
Although such techniques are not in widespread deployment in industrial plants, there is an active research community, including a Virtual Center for Industrial Process Tomography and a regular World Congress on Industrial Process Tomography, now organized by a learned society for this area, the International Society for Industrial Process Tomography.
A number of applications of tomography to process equipment were described in the 1970s, using ionising radiation from X-ray or isotope sources, but routine use was limited by the high cost involved and by safety constraints. Radiation-based methods used long exposure times, which meant that dynamic measurements of the real-time behaviour of process systems were not feasible. The use of electrical methods to image industrial processes was pioneered by Maurice Beck at UMIST in the mid-1980s.
See also
Industrial Tomography Systems
Process tomography
Imaging
References
Chemical process engineering | Industrial process imaging | [
"Chemistry",
"Engineering"
] | 314 | [
"Chemical process engineering",
"Chemical engineering"
] |
13,603,363 | https://en.wikipedia.org/wiki/Baby%20Modula-3 | Baby Modula-3 is a functional programming sublanguage of Modula-3 (safe subset) programming language based on ideals invented by Martín Abadi. It is an object-oriented programming language for studying programming language design; one part of it is implicitly prototype-oriented, and the other is explicitly statically typed designed for studying computer science type theory. It has been checked as a formal language of metaprogramming systems. It comes from the Scandinavian School of object-oriented languages.
Abadi tried to give an example of pure object-oriented language which would allow studying the formal semantics of objects. "Baby Modula-3 is defined with a structured operational semantics and with a set of static type rules. A denotational semantics guarantees the soundness of this definition."
This object model has been shown to have decidable well-definedness (although a mechanical proof of this is not known).
Abadi worked at Digital Equipment Corporation (DEC) Systems Research Center (SRC) in Palo Alto, California. As DEC was bought by Compaq and then Compaq was bought by Hewlett-Packard (HP), the SRC-report 95 was made available to the public by HP.
Influences
Luca Cardelli and Martín Abadi wrote the book A Theory of Objects in 1996, laying out formal calculi for the semantics of object-oriented programming languages. According to Cardelli, Baby Modula-3 influenced this work and guided a calculus of the type of self in work on types for objects and the type of 'self'.
It opened the way for work on formal semantic checking systems for Modula-3, and for object-oriented type systems in programming languages that have been used to model the formal semantics of languages such as Ada and C.
References
Academic programming languages
Modula programming language family
Prototype-based programming languages
Programming language design | Baby Modula-3 | [
"Engineering"
] | 379 | [
"Design",
"Programming language design"
] |
13,603,949 | https://en.wikipedia.org/wiki/Charles%20E.%20Schaefer | Charles E. Schaefer (November 15, 1933 – September 19, 2020) was an American psychologist considered by many to be the "Father of Play Therapy" who has appeared on The Oprah Winfrey Show, The Today Show and Good Morning America. He was Professor of Psychology and was Director of both the Center for Psychological Services and the Crying Baby Clinic at Fairleigh Dickinson University in Teaneck, New Jersey.
Schaefer was the co-founder and director emeritus of the Association for Play Therapy in Fresno, California and the founder and co-director of the Play Therapy Training Institute in Hightstown, New Jersey. He authored more than 50 books; Child Magazine named his Raising Baby Right (Crown Publisher, 1992) its 1992 Book of the Year.
The Association for Play Therapy honored Schaefer with the Play Therapy Lifetime Achievement Award in 2006 and the Distinguished Service Award in 1996. Fairleigh Dickinson University honored him with the Distinguished Faculty Award for Research & Scholarship in 1994, and Fairfield University honored him with the Alumni Professional Achievement Award in 1969.
Education
Schaefer earned his Bachelor of Arts degree from Fairfield University in 1955 and his Doctorate degree in Clinical Psychology from Fordham University.
External links
FDU Magazine Profile
Association of Play Therapy
The Play Therapy Institute
References
1933 births
2020 deaths
21st-century American psychologists
Fairfield University alumni
Fairleigh Dickinson University faculty
Fordham University alumni
Play (activity) | Charles E. Schaefer | [
"Biology"
] | 293 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
7,388,191 | https://en.wikipedia.org/wiki/2%2C2%E2%80%B2-Bipyridine | 2,2′-Bipyridine (bipy or bpy, pronounced ) is an organic compound with the formula . This colorless solid is an important isomer of the bipyridine family. It is a bidentate chelating ligand, forming complexes with many transition metals. Ruthenium and platinum complexes of bipy exhibit intense luminescence.
Preparation, structure, and general properties
2,2'-Bipyridine was first prepared by decarboxylation of the divalent metal derivatives of pyridine-2-carboxylate. It is prepared by the dehydrogenative coupling of pyridine over Raney nickel:
2 C5H5N → (C5H4N)2 + H2
A detailed procedure was published by W. H. F. Sasse in Organic Syntheses.
Substituted 2,2'-bipyridines
Unsymmetrically substituted 2,2'-bipyridines can be prepared by cross-coupling reactions of 2-pyridyl and substituted pyridyl reagents.
Structure
Although 2,2'-bipyridine is often drawn with its nitrogen atoms in a cis conformation, the lowest-energy conformation, both in the solid state and in solution, is in fact coplanar with the nitrogen atoms in a trans position. Monoprotonated bipyridine adopts a cis conformation.
Reactions
2,2'-Bipyridine forms numerous coordination complexes. It binds metals as a bidentate chelating ligand, forming a five-membered chelate ring.
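As a representative example (a well-known complex, chosen for illustration rather than taken from the text above), three bipy ligands fill the six coordination sites of an octahedral divalent metal ion such as iron(II); in LaTeX mhchem notation:

    % Formation of the classic tris-chelate (requires \usepackage{mhchem}):
    \ce{Fe^2+ + 3 bipy -> [Fe(bipy)3]^2+}
    % Each bipy binds through both nitrogen atoms, closing a
    % five-membered chelate ring at the metal.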
See also
2,2'-Biquinoline
1,10-Phenanthroline
Dimethyl-2,2'-bipyridine
References
Chelating agents
Bipyridines
2-Pyridyl compounds | 2,2′-Bipyridine | [
"Chemistry"
] | 343 | [
"Chelating agents",
"Process chemicals"
] |
7,388,204 | https://en.wikipedia.org/wiki/K-line%20%28x-ray%29 | The K-line is a spectral peak in astronomical spectrometry used, along with the L-line, to observe and describe the light spectrum of stars.
The K-line is associated with iron (Fe) and arises from emission at about 6.4 keV (thousands of electronvolts).
On 5 October 2006 NASA announced the results of research using the Japanese JAXA Suzaku satellite, after earlier work with the XMM-Newton satellite. "The observations include clocking the speed of a black hole's spin rate and measuring the angle at which matter pours into the void, as well as evidence for a wall of X-ray light pulled back and flattened by gravity."
The study teams observed X-ray emission from the "broad iron K line" near the event horizons of supermassive black holes in the galaxies MCG-6-30-15 and MCG-5-23-16. The normally narrow K-line is broadened by the Doppler shift (redshift or blueshift) of the X-ray light emitted by matter moving under the gravity of the black hole. The results coincide with predictions of Albert Einstein's theory of general relativity. The teams were led by Andrew Fabian of Cambridge University, England, and James Reeves of NASA's Goddard Space Flight Center, Greenbelt, Maryland, United States.
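For orientation, here is a minimal sketch of the gravitational part of the line shift, assuming for simplicity a non-rotating (Schwarzschild) black hole; the black holes actually studied are spinning:

    % Redshift of a line emitted at radius r outside a Schwarzschild hole:
    E_{\mathrm{obs}} = E_{\mathrm{emit}}\sqrt{1 - \frac{r_s}{r}},
    \qquad r_s = \frac{2GM}{c^2}
    % Example: a 6.4 keV Fe K photon emitted at r = 2 r_s is observed near
    % 6.4\sqrt{1/2} \approx 4.5 keV; the orbital motion of the disk adds
    % Doppler blue- and redshifts, smearing the line into a broad profile.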
See also
Electron shell (K and L shells)
Siegbahn notation
References
External links
NASA report on Suzaku research dated 5 October 2006 – retrieved 11 October 2006
Astronomical spectroscopy | K-line (x-ray) | [
"Physics",
"Chemistry",
"Astronomy"
] | 318 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Astronomical spectroscopy",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
7,388,311 | https://en.wikipedia.org/wiki/Non-denominated%20postage | Non-denominated postage is a postage stamp intended to meet a certain postage rate but printed without a denomination, i.e. without the price for that rate. Such stamps may retain full validity for the intended rate, regardless of later rate changes, or they may retain validity only for the original purchase price. In many English-speaking countries they are called non-value indicator or non-value indicated (NVI) postage. They are used in many countries and reduce the cost of printing large issues of low-value make-up stamps.
UPU approval
The Universal Postal Union approved the use of non-denominated stamps on international mail in 1995.
Canada
Canada's first non-denominated stamp was the 1981 "A" definitive, featuring a stylized maple leaf. It was issued during the transition of the first-class domestic rate from 17¢ to 30¢ and was valued at 30 cents. In 2006, Canada introduced its next NVI, the "Permanent" stamp, a trademarked term. It was originally marked by a white capital "P" overlaid on a red maple leaf, itself within a white circle. Later releases, such as the 2009 Silver Dart commemorative, varied the colours: in that example, the maple leaf around the "P" is white, the "P" is dropped out, and the circle does not appear. In announcing its decision to adopt non-denominated postage in 2006, Canada Post noted that it had had to print more than 60 million one-cent stamps following the previous price increase in 2005. The Canadian NVI program was essentially equivalent to the American one, as both covered regular domestic first-class mail. One Canada Post NVI stamp covers the cost of mailing a standard letter of up to 30 g within Canada.
On 11 December 2013, Canada Post unveiled its Five-point Action Plan, which temporarily removed "Permanent" stamps from sale, although they remained valid for postage. On 31 March 2014, the regular domestic rate increased from CA$0.63 to CA$0.85 for stamps bought in rolls or bundles, and to CA$1.00 for single-stamp purchases. Sale of "Permanent" stamps resumed that day at the new rate.
Czech Republic
Czech stamps for domestic mail are marked "A", stamps for international mail to European countries are marked "E", and stamps for international mail to non-European countries are marked "Z".
India
In 1940, the government of the United Provinces of British India issued a non-denominated stamp marking Literacy Day.
Republic of Ireland
An Post issues "N" stamps at the current domestic rate, which allow posting throughout both the Republic of Ireland and Northern Ireland, and "W" stamps at the current international letter-mail rate.
There were formerly "E" stamps for postage within the European Union, but this rate has been discontinued.
All three values were introduced in 2000, prior to the euro changeover; however, for many years afterwards only "N" stamps were available, and only by specific request at post offices, generally as special-occasion stamps (such as wedding or birthday stamps) that may be purchased well in advance of use. "N" and "W" stamps are now widely sold, and are the only commonly available pre-printed stamps.
The Netherlands
PostNL now issues all first-class stamps as NVIs, which simply bear a large numeral "1" that varies to match the typography used for each particular issue. Stamps meeting the first-class rate to Europe additionally bore the marking "Europa", and those to foreign destinations outside of Europe the marking "Wereld" ("World"); presently, all stamps for destinations outside the Netherlands are marked "Internationaal" ("international"), with no distinctions for destinations within or outside Europe.
New Zealand
New Zealand Post started issuing the Kiwistamp in 2009. One Kiwistamp is always worth the postage required for a Standard Post medium domestic letter. Customers may use multiple Kiwistamps, or mix them with denominated stamps, to make up the required postage for larger domestic or international mail.
Singapore
Singapore has two NVIs today: 1st Local and 2nd Local. The first Singapore NVIs were issued in 1995; almost every issue had a "For Local Addresses Only" stamp. Later, in 2004, a new NVI denomination was released: "2nd Local". Since then almost all issues have "1st Local" stamps, and some have "2nd Local" stamps, rather than the previous "For Local Addresses Only". 1st Local stamps are valid for standard letters within Singapore up to 20 g, and 2nd Local stamps are valid for standard letters within Singapore up to 40 g.
Russia
Russian Post sells envelopes and postcards with pre-printed non-denominated stamps for domestic mail: "A" for regular domestic mail, "B" for postcards, and "D" for registered mail.
Scandinavia
Åland
Åland uses the following NVI denominations, with these values as of February 2024: Lokalpost (domestic, within Åland only; €2.80), Inrikes (Finland; €2.90), Europa (Europe; €3.20), Världen (the world; €3.40), 1 klass (1st class; €2.80), 2 klass (2nd class; €2.40), and Julpost (Christmas mail; €1.50).
Finland
Finland's first NVI stamp was issued on 2 March 1992. There are two denominations: one valid for a 1st-class (overnight) domestic letter of up to 50 g, and the other for a similar 2nd-class letter. The stamps may be combined for more expensive tariffs.
Norway
Posten Norge launched NVI stamps on 1 September 2005. They were at first used only for domestic mail, and were later expanded to include Europe and World denominations.
Their Norwegian name translates as "value-free stamps".
Sweden
Sweden issues two forms of NVI valid for letters within Sweden of up to 50 g. These stamps may be combined when the weight of a letter exceeds 50 g: two stamps for up to 100 g, four for up to 250 g, six for up to 500 g, eight for up to 1 kg, and twelve for up to 2 kg (see the sketch at the end of this subsection). There are no longer surcharges for bulkier letters.
Brev: first-class delivery within Sweden. Brev ('letter') or Brev Inrikes ('domestic letter') is printed on the stamps. Price as of January 2020: 11 SEK.
Julpost: first-class delivery within Sweden. Julpost ('Christmas mail') is printed on the stamp. Price is 0.50 SEK lower than Brev. Intended for use in a fixed period before Christmas.
NVIs that are no longer issued, but still valid for franking:
Ekonomibrev: used to be second class (up to three days for delivery) within Sweden. Price as of January 2009: 5.50 SEK. The service no longer exists.
Föreningsbrev: used to be a rate for non-profit organizations. Price as of January 2009: 5.00 SEK. The service no longer exists.
Regular first-class stamps can also be used to mail letters abroad, provided that their combined value corresponds to the appropriate rate set by the Swedish post. For instance, mailing a letter of up to 50 g abroad requires two Brev stamps.
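The sketch referenced above: a small illustrative Python function (hypothetical name, not an official tool) encoding the domestic weight tiers listed earlier:

    # Number of Swedish "Brev" NVI stamps needed for a domestic letter,
    # following the weight tiers quoted above (illustrative only).
    TIERS = [(50, 1), (100, 2), (250, 4), (500, 6), (1000, 8), (2000, 12)]

    def brev_stamps_needed(weight_grams):
        for max_weight, stamps in TIERS:
            if weight_grams <= max_weight:
                return stamps
        raise ValueError("heavier than 2 kg: not mailable as a letter")

    assert brev_stamps_needed(20) == 1     # one stamp up to 50 g
    assert brev_stamps_needed(180) == 4    # four stamps up to 250 g
    assert brev_stamps_needed(2000) == 12  # twelve stamps up to 2 kg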
United Kingdom
Non-denominated postage was first introduced in the United Kingdom in 1989 for domestic mail, in part as a workaround for fast-changing rates; the Royal Mail issued "non-value indicated" Machins with the textual inscriptions "1ST" and "2ND" indicating the class of service rather than a monetary value. It later introduced further stamps, including for worldwide and European use, for different weights, and for postcards.
United States
Letter-denominated stamps
In past years, non-denominated postage issued by the United States differed from the issues of other countries in that the stamps retained their original monetary value. Some stamps, such as those intended for local or bulk-mail rates, were issued without denomination.
This practice began in 1975, when there was uncertainty as to the timing and extent of a rate increase from ten cents for the first ounce of first-class postage as the end of the year approached. Christmas stamps were released without denomination, giving the United States Postal Service (USPS) flexibility to refrain from reprinting hundreds of millions of stamps in a new denomination. The rate increase, to thirteen cents (US$0.13), occurred just after Christmas.
The United States also issued stamps with letter denominations ("A", "B", and so on) during postal rate changes. After reaching the letter "H", this practice was discarded in favor of simply indicating the class of postage (e.g., first class) for which the stamp was intended.
Forever stamps
In 2006, the USPS applied for permission to issue a first-class postage stamp similar to non-denominated stamps, termed the "Forever stamp". The first such stamp was unveiled on March 26, 2007, and went on sale April 12, 2007, for 41 cents (US$0.41). Termed the "Liberty Bell" stamp, it was marked "USA first-class forever". On October 21, 2010, the second Forever stamp, featuring pinecones on evergreen trees, was issued for the holiday season. Coils of Forever stamps were first issued on December 1, 2010, in the se-tenant format with Lady Liberty and the Flag design. A re-design, announced June 16, 2011, featured four American scientists: Melvin Calvin, Asa Gray, Maria Goeppert Mayer, and Severo Ochoa. In 2011, all first-class stamps were changed to Forever stamps.
Forever stamps are sold at the prevailing first-class postage rate and remain valid for full first-class postage, regardless of later rate increases. For example, the original Forever stamps purchased in April 2007 for 41 cents per stamp are still valid, even though there have been multiple rate increases since then.
While domestic Forever stamps can be used for international mail if additional postage is attached, the Global Forever stamp was introduced in early 2013 specifically for first-class international mail. In October of the same year, another Global Forever stamp, with a Christmas motif, was issued. Two new Global Forever stamps were issued the following year. All four were also printed in limited quantities without die cuts (imperforate) for collectors. Another Global Forever stamp, showing the Moon, followed in 2016, by which time only die-cut stamps were printed. New Global Forever designs have been issued every year since 2017.
In 2015, Forever stamps were expanded to postcard, non-machinable-surcharge, and additional-ounce stamps. These stamps have their intended purpose printed on them instead of a number; this is similar to some fundraising (semi-postal) stamps, such as the breast cancer research stamp issued in 1998.
Forever stamps are increasingly targeted by scammers, who sell counterfeits online at substantial discounts to the price of legitimate Forever stamps.
References
Further reading
Washington Post article on the forever stamp
Official announcement of the US forever stamp proposal
External links
Discussion of UK version.
United States Postal Service guide to non-denominated postage stamps
Non-denominated US stamps: Pictures and rates
ForeverStamps.com Blog covering the Forever Stamp
Slate.com, Nathaniel Rich: "Should I invest in 'Forever' Stamps?" Slate, May 17, 2007: Criticism of Forever Stamps as an investment
Philatelic terminology
Postal systems | Non-denominated postage | [
"Technology"
] | 2,466 | [
"Transport systems",
"Postal systems"
] |