| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
11,989,433 | https://en.wikipedia.org/wiki/Scott%20information%20system | In domain theory, a branch of mathematics and computer science, a Scott information system is a primitive kind of logical deductive system often used as an alternative way of presenting Scott domains.
Definition
A Scott information system, A, is an ordered triple (T, Con, ⊢), where T is a set of tokens, Con is a collection of finite subsets of T (the consistent sets), and ⊢ ⊆ (Con ∖ {∅}) × T is an entailment relation,
satisfying
1. If a ∈ X ∈ Con then X ⊢ a.
2. If X ⊢ Y and Y ⊢ c, then X ⊢ c.
3. If X ⊢ a then X ∪ {a} ∈ Con.
4. For every token a ∈ T, {a} ∈ Con.
5. If X ∈ Con and X′ ⊆ X then X′ ∈ Con.
Here X ⊢ Y means X ⊢ b for every b ∈ Y.
Examples
Natural numbers
The return value of a partial recursive function, which either returns a natural number or goes into an infinite recursion, can be expressed as a simple Scott information system as follows: take the tokens to be the natural numbers, let the consistent sets be the empty set together with the singletons {n}, and let X ⊢ a hold exactly when a ∈ X.
That is, the result can either be a natural number, represented by the singleton set {n}, or "infinite recursion," represented by the empty set ∅.
Of course, the same construction can be carried out with any other set instead of the natural numbers.
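The short sketch below encodes this example in Python and brute-force checks some of the axioms on a finite slice of the naturals. It illustrates one standard formulation (consistent sets as the empty set plus singletons, entailment as membership); it is not the article's own notation or code.

```python
# Minimal sketch of the natural-numbers information system described above,
# restricted to a finite slice of the naturals so the axioms can be checked
# by brute force. The formulation details are assumptions for illustration.
from itertools import combinations

TOKENS = range(3)                                        # stand-in for the natural numbers
CON = {frozenset()} | {frozenset({n}) for n in TOKENS}   # consistent finite sets

def entails(X, a):
    # X |- a holds exactly when a is already in X
    return a in X

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Spot-check some axioms on this finite slice:
assert all(entails(X, a) for X in CON for a in X)                           # a in X  =>  X |- a
assert all((X | {a}) in CON for X in CON for a in TOKENS if entails(X, a))  # X |- a  =>  X u {a} consistent
assert all(frozenset({a}) in CON for a in TOKENS)                           # singletons are consistent
assert all(Y in CON for X in CON for Y in subsets(X))                       # subsets of consistent sets are consistent

# Points of this system: sets all of whose finite subsets are consistent
# (deductive closure is automatic here, since entailment is just membership).
# The empty set models "infinite recursion"; each singleton {n} models the result n.
points = [x for x in subsets(TOKENS) if all(Y in CON for Y in subsets(x))]
print(points)  # [frozenset(), frozenset({0}), frozenset({1}), frozenset({2})]
```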
Propositional calculus
The propositional calculus gives us a very simple Scott information system as follows: take the tokens to be the satisfiable propositional formulas, let a finite set of formulas be consistent when it is satisfiable, and let X ⊢ a hold when a is a logical consequence of X.
Scott domains
Let D be a Scott domain. Then we may define an information system as follows:
take the tokens to be the set of compact elements of D, let a finite set X of tokens be consistent when it has an upper bound in D, and let X ⊢ d hold when d is below the least upper bound of X.
Let I be the mapping that takes us from a Scott domain, D, to the information system defined above.
Information systems and Scott domains
Given an information system, A = (T, Con, ⊢), we can build a Scott domain as follows.
Definition: x ⊆ T is a point if and only if x is consistent (every finite subset of x belongs to Con) and deductively closed (whenever X is a non-empty finite subset of x and X ⊢ a, then a ∈ x).
Let D(A) denote the set of points of A with the subset ordering. D(A) will be a countably based Scott domain when T is countable. In general, for any Scott domain D and information system A, D(I(D)) ≅ D and I(D(A)) ≅ A,
where the second congruence is given by approximable mappings.
See also
Scott domain
Domain theory
References
Glynn Winskel: "The Formal Semantics of Programming Languages: An Introduction", MIT Press, 1993 (chapter 12)
Models of computation
Domain theory | Scott information system | [
"Mathematics"
] | 323 | [
"Order theory",
"Domain theory"
] |
11,989,478 | https://en.wikipedia.org/wiki/CBX1 | Chromobox protein homolog 1 is a protein that in humans is encoded by the CBX1 gene.
Function
The protein is localized at heterochromatin sites, where it mediates gene silencing.
Interactions
CBX1 has been shown to interact with:
C11orf30,
CBX3,
CBX5, and
SUV39H1.
See also
Heterochromatin protein 1
References
Further reading
External links
Transcription factors
Genes mutated in mice | CBX1 | [
"Chemistry",
"Biology"
] | 98 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,989,499 | https://en.wikipedia.org/wiki/CBX3 | Chromobox protein homolog 3 is a protein that is encoded by the CBX3 gene in humans.
At the nuclear envelope, the nuclear lamina and heterochromatin are adjacent to the inner nuclear membrane. The protein encoded by this gene binds DNA and is a component of heterochromatin. This protein also can bind lamin B receptor, an integral membrane protein found in the inner nuclear membrane. The dual binding functions of the encoded protein may explain the association of heterochromatin with the inner nuclear membrane. Two transcript variants encoding the same protein but differing in the 5' UTR have been found for this gene.
Interactions
CBX3 has been shown to interact with PIM1, Ki-67, Lamin B receptor, CBX5 and CBX1.
See also
Heterochromatin protein 1
References
Further reading
External links
Transcription factors | CBX3 | [
"Chemistry",
"Biology"
] | 182 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,990,241 | https://en.wikipedia.org/wiki/Oil%20immersion | In light microscopy, oil immersion is a technique used to increase the resolving power of a microscope. This is achieved by immersing both the objective lens and the specimen in a transparent oil of high refractive index, thereby increasing the numerical aperture of the objective lens.
Without oil, light waves from the specimen pass through the glass cover slip, through the air, and into the microscope lens. Unless a wave arrives at a 90-degree angle, it bends when it enters a new medium, the amount of bending depending on the angle. This distorts the image. Air has a very different index of refraction from glass, producing a larger bend than oil, whose index is much closer to that of glass. Specially manufactured oil can have nearly exactly the same refractive index as glass, making an oil-immersed lens nearly as effective as having glass entirely around the sample (which would be impractical).
Immersion oils are transparent oils that have specific optical and viscosity characteristics necessary for use in microscopy. Typical oils used have an index of refraction of around 1.515. An oil immersion objective is an objective lens specially designed to be used in this way. Many condensers also give optimal resolution when the condenser lens is immersed in oil.
Theoretical background
Lenses reconstruct the light scattered by an object. To successfully achieve this end, ideally, all the diffraction orders have to be collected. This is related to the opening angle of the lens and its refractive index. The resolution of a microscope is defined as the minimum separation needed between two objects under examination in order for the microscope to discern them as separate objects. This minimum distance is labelled δ. If two objects are separated by a distance shorter than δ, they will appear as a single object in the microscope.
A measure of the resolving power, R.P., of a lens is given by its numerical aperture, NA: δ = λ / (2·NA),
where λ is the wavelength of light. From this it is clear that a good resolution (small δ) is connected with a high numerical aperture.
The numerical aperture of a lens is defined as NA = n·sin α0,
where α0 is half the angle spanned by the objective lens seen from the sample, and n is the refractive index of the medium between the lens and specimen (≈1 for air).
State of the art objectives can have a numerical aperture of up to 0.95. Because sin α0 is always less than or equal to unity (the number "1"), the numerical aperture can never be greater than unity for an objective lens in air. If the space between the objective lens and the specimen is filled with oil however, the numerical aperture can obtain values greater than unity. This is because oil has a refractive index greater than 1.
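As a rough worked example of these formulas, the sketch below compares air and oil immersion; the wavelength (550 nm) and lens half-angle (72 degrees) are assumed illustrative values, not figures from the article.

```python
# Worked example of delta = lambda / (2*NA) with NA = n*sin(alpha_0).
# The wavelength and half-angle below are assumptions for illustration.
import math

wavelength_nm = 550.0
alpha_half = math.radians(72)

for medium, n in [("air", 1.0), ("immersion oil", 1.515)]:
    NA = n * math.sin(alpha_half)         # numerical aperture
    delta_nm = wavelength_nm / (2 * NA)   # minimum resolvable separation
    print(f"{medium}: NA = {NA:.2f}, delta = {delta_nm:.0f} nm")

# air: NA = 0.95, delta = 289 nm
# immersion oil: NA = 1.44, delta = 191 nm
```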
Oil immersion objectives
From the above it is understood that oil between the specimen and the objective lens improves the resolving power: the minimum resolvable distance δ shrinks by a factor of 1/n. Objectives specifically designed for this purpose are known as oil immersion objectives.
Oil immersion objectives are used only at very large magnifications that require high resolving power. Objectives with high power magnification have short focal lengths, facilitating the use of oil. The oil is applied to the specimen (conventional microscope), and the stage is raised, immersing the objective in oil. (In inverted microscopes the oil is applied to the objective).
The refractive indices of the oil and of the glass in the first lens element are nearly the same, which means that the refraction of light will be small upon entering the lens (the oil and glass are optically very similar). The correct immersion oil for an objective lens has to be used to ensure that the refractive indices match closely. Use of an oil immersion lens with the incorrect immersion oil, or without immersion oil altogether, will suffer from spherical aberration. The strength of this effect depends on the size of the refractive index mismatch.
Oil immersion can generally only be used on rigidly mounted specimens; otherwise the surface tension of the oil can move the coverslip and so move the sample underneath. This can also happen on inverted microscopes, because the coverslip is below the slide.
Immersion oil
Before the development of synthetic immersion oils in the 1940s, cedar tree oil was widely used. Cedar oil has an index of refraction of approximately 1.516. The numerical aperture of cedar tree oil objectives is generally around 1.3. Cedar oil has a number of disadvantages however: it absorbs blue and ultraviolet light, yellows with age, has sufficient acidity to potentially damage objectives with repeated use (by attacking the cement used to join lenses), and diluting it with solvent changes its viscosity (and refraction index and dispersion). Cedar oil must be removed from the objective immediately after use before it can harden, since removing hardened cedar oil can damage the lens.
In modern microscopy synthetic immersion oils are more commonly used, as they eliminate most of these problems. NA values of 1.6 can be achieved with different oils. Unlike natural oils, synthetic ones do not harden on the lens and can typically be left on the objective for months at a time, although to best maintain a microscope it is best to remove the oil daily. Over time, oil can creep past the front lens of the objective or into its barrel and damage the objective. There are different types of immersion oils with different properties based on the type of microscopy. Type A and Type B are both general purpose immersion oils with different viscosities. Type F immersion oil is best used for fluorescent imaging at room temperature (23 °C), while type N oil is made to be used at body temperature (37 °C) for live cell imaging applications. All have a refractive index (nD) of 1.515, quite similar to the original cedar oil.
See also
Immersion lithography
Index-matching material
Solid immersion lens
Water immersion objective
References
Practical Microscopy by L.C. Martin and B.K. Johnson, Glasgow (1966).
Light Microscopy by J.K. Solberg, Tapir Trykk (2000).
External links
"Microscope Objectives: Immersion Media" by Mortimer Abramowitz and Michael W. Davidson, Olympus Microscopy Resource Center (website), 2002.
"Immersion Oil Microscopy" by David B. Fankhauser, Biology at University of Cincinnati, Clermont College (website), December 30, 2004.
"History of Oil Immersion Lenses" by Jim Solliday, Southwest Museum of Engineering, Communications, and Computation (website), 2007.
"Immersion Oil and the Microscope" by John J. Cargille, New York Microscopical Society Yearbook, 1964 (revised, 1985). (Archived at Cargille Labs (website).)
Microscopy
Microscope components
Lenses | Oil immersion | [
"Chemistry"
] | 1,374 | [
"Microscopy"
] |
11,991,342 | https://en.wikipedia.org/wiki/Magnetobiology | Magnetobiology is the study of biological effects of mainly weak static and low-frequency magnetic fields, which do not cause heating of tissues. Magnetobiological effects have unique features that obviously distinguish them from thermal effects; often they are observed for alternating magnetic fields just in separate frequency and amplitude intervals. Also, they are dependent of simultaneously present static magnetic or electric fields and their polarization.
Magnetobiology is a subset of bioelectromagnetics. Bioelectromagnetism and biomagnetism are the study of the production of electromagnetic and magnetic fields by biological organisms. The sensing of magnetic fields by organisms is known as magnetoreception.
Biological effects of weak low-frequency magnetic fields, less than about 0.1 millitesla (or 1 gauss) and 100 Hz respectively, constitute a physics problem. The effects look paradoxical, for the energy quantum of these electromagnetic fields is many orders of magnitude smaller than the energy scale of an elementary chemical act. On the other hand, the field intensity is not enough to cause any appreciable heating of biological tissues or to irritate nerves by the induced electric currents.
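A back-of-the-envelope comparison makes this scale mismatch concrete. The figures below (a 100 Hz field, body-temperature thermal energy, a chemical act of roughly 1 eV) are illustrative assumptions rather than values taken from the text.

```python
# Order-of-magnitude comparison of the energies involved (illustrative values).
h = 6.626e-34        # Planck constant, J*s
k_B = 1.381e-23      # Boltzmann constant, J/K
eV = 1.602e-19       # joules per electronvolt

E_quantum = h * 100.0 / eV       # energy quantum of a 100 Hz field, in eV
E_thermal = k_B * 310.0 / eV     # thermal energy at ~310 K, in eV
E_chemical = 1.0                 # rough scale of an elementary chemical act, in eV

print(f"field quantum: {E_quantum:.1e} eV")   # ~4e-13 eV
print(f"thermal (kT):  {E_thermal:.1e} eV")   # ~2.7e-2 eV
print(f"chemical act:  {E_chemical:.1e} eV")  # ~1 eV
```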
Effects
An example of a magnetobiological effect is the magnetic navigation by migrant animals by means of magnetoreception.
Many animal orders, such as certain birds, marine turtles, reptiles, amphibians and salmonid fishes are able to detect small variations of the geomagnetic field and its magnetic inclination to find their seasonal habitats. They are said to use an "inclination compass". Certain crustaceans, spiny lobsters, bony fish, insects and mammals have been found to use a "polarity compass", whereas in snails and cartilaginous fish the type of compass is as yet unknown. Little is known about other vertebrates and arthropods. Their perception can be on the order of tens of nanoteslas.
Magnetic intensity as a component of the navigational ‘map’ of pigeons has been discussed since the late nineteenth century. One of the earliest publications to show that birds use magnetic information was a 1972 study on the compass of European robins by Wolfgang Wiltschko. A 2014 double-blinded study showed that European robins exposed to low-level electromagnetic noise between about 20 kHz and 20 MHz could not orient themselves with their magnetic compass. When they entered aluminium-screened huts, which attenuated electromagnetic noise in the frequency range from 50 kHz to 5 MHz by approximately two orders of magnitude, their orientation reappeared.
For human health effects see electromagnetic radiation and health.
Magnetoreception
Several neurobiological models on the primary process which mediates the magnetic input have been proposed:
radical pair mechanism: direction-specific interactions of radical pairs with the ambient magnetic field.
processes involving permanently magnetic (iron-bearing) material like magnetite in tissues
Magnetically induced changes in physical/chemical properties of liquid water.
In the radical pair mechanism, photopigments absorb a photon, which elevates them to the singlet state. They form singlet radical pairs with antiparallel spin, which, by singlet–triplet interconversion, may turn into triplet pairs with parallel spin. Because the magnetic field alters this transition between spin states, the number of triplets depends on how the photopigment is aligned within the magnetic field. Cryptochromes, a class of photopigments known from plants and animals, appear to be the receptor molecules.
The induction model would only apply to marine animals because as a surrounding medium with high conductivity only salt water is feasible. Evidence for this model has been lacking.
The magnetite model arose with the discovery of chains of single-domain magnetite in certain bacteria in the 1970s. Histological evidence of such material has since been found in a large number of species belonging to all major phyla. Honey bees have magnetic material in the front part of the abdomen, while in vertebrates it occurs mostly in the ethmoid region of the head. Experiments show that the input from magnetite-based receptors in birds and fish is sent over the ophthalmic branch of the trigeminal nerve to the central nervous system.
Safety standards
The practical significance of magnetobiology arises from the growing background electromagnetic exposure of people. Some electromagnetic fields, at chronic exposures, may pose a threat to human health. The World Health Organization considers an enhanced level of electromagnetic exposure at working places a stress factor. Present electromagnetic safety standards, worked out by many national and international institutions, differ by factors of tens or hundreds for certain EMF ranges; this situation reflects the lack of research in the area of magnetobiology and electromagnetobiology. Today, most of the standards take into account only biological effects from heating by electromagnetic fields and peripheral nerve stimulation from induced currents.
Medical approach
Practitioners of magnet therapy attempt to treat pain or other medical conditions with relatively weak electromagnetic fields. These methods have not yet been supported by clinical evidence meeting accepted standards of evidence-based medicine. Most institutions regard the practice as pseudoscientific.
See also
Bioelectrochemistry
Magnetoelectrochemistry
Electromagnetic radiation and health
Transcranial magnetic stimulation
References
Further reading
Presman A.S. Electromagnetic Fields and Life, Plenum, New York, 1970.
Kirschvink J.L., Jones D.S., MacFadden B.J. (Eds.) Magnetite Biomineralization and Magnetoreception in Organisms. A New Biomagnetism, Plenum, New York, 1985.
Binhi V.N. Magnetobiology: Underlying Physical Problems. — Academic Press, San Diego, 2002. — 473 p. —
Binhi V.N., Savin A.V. Effects of weak magnetic fields on biological systems: Physical aspects. Physics – Uspekhi, V.46(3), Pp. 259–291, 2003.
Scientific journals
Bioelectromagnetics
Electromagnetic Biology and Medicine
Biomedical Radioelectronics
Biophysics
Radiobiology | Magnetobiology | [
"Chemistry",
"Biology"
] | 1,216 | [
"Radiobiology",
"Radioactivity"
] |
11,991,996 | https://en.wikipedia.org/wiki/Hyalophane | Hyalophane or jaloallofane is a crystalline mineral, part of the feldspar group of tectosilicates. It is considered a barium-rich potassium feldspar. Its chemical formula is , and it has a hardness of 6 to . The name hyalophane comes from the Greek , meaning "glass", and meaning "to appear".
An occurrence of hyalophane was discovered in 1855 in Lengenbach Quarry, Imfield, Binn valley, municipality of Binn, Canton of Valais, Switzerland. The mineral is found predominantly in Europe, with occurrences in Switzerland, Australia, Bosnia, Germany, Japan, New Jersey, and the west coast of North America. Hyalophane may be found in manganese deposits in compact metamorphic zones.
Hyalophane has a monoclinic crystallography, with cell properties a = 8.52 Å, b = 12.95 Å, c = 7.14 Å, and β = 116°. Optically, the material exhibits biaxial birefringence, with refractive index values of nα = 1.542, nβ = 1.545, and nγ = 1.547 and a maximum birefringence of δ = 0.005. It has weak dispersion and low surface relief.
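For illustration, the cell parameters quoted above determine the monoclinic unit-cell volume through V = a·b·c·sin β, and the maximum birefringence is simply the spread of the quoted refractive indices; the quick check below is a sketch, not a calculation from the source.

```python
# Monoclinic unit-cell volume V = a*b*c*sin(beta) from the parameters above,
# plus the spread of the quoted refractive indices (the birefringence).
import math

a, b, c = 8.52, 12.95, 7.14          # angstroms
beta = math.radians(116)

volume = a * b * c * math.sin(beta)
print(f"unit-cell volume ~ {volume:.0f} cubic angstroms")   # ~708

print(f"max birefringence = {1.547 - 1.542:.3f}")           # 0.005
```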
Hyalophane has sometimes been used as a gemstone.
References
Tectosilicates
Barium minerals
Feldspar
Gemstones
Monoclinic minerals
Minerals in space group 12 | Hyalophane | [
"Physics"
] | 319 | [
"Materials",
"Gemstones",
"Matter"
] |
11,992,117 | https://en.wikipedia.org/wiki/Cytotechnology | Cytotechnology is the microscopic interpretation of cells to detect cancer and other abnormalities. This includes the examination of samples collected from the uterine cervix (Pap test), lung, gastrointestinal tract, or body cavities.
A cytotechnologist is an allied health professional trained to evaluate specimens on glass slides using microscopes. Two evaluations are typically performed: an initial evaluation, which can be performed by a computer and flags areas that may be of particular interest for later examination, and a secondary evaluation in which the cytotechnologist determines whether a specimen is normal or abnormal. Abnormal specimens are referred to a pathologist for final interpretation or medical diagnosis.
Much like in other medical fields, a cytotechnologist's work must be completed with high fidelity and must be interpreted properly. The working relationship between the cytotechnologist and the pathologist provides quality control for their work. Discussions between the two and the severity of their disagreements are used as a measurement for quality assurance. For example, when a cytotechnologist is working to construct a tissue microarray (TMA), they must check in with the supervising pathologist at multiple points in the process. The pathologist must first work with the cytotechnologist to make sure they know what to look for in the selection process and then review any selection that the cytotechnologist has concerns about.
Different countries have different certification requirements and standards for cytotechnologists. In the United States, there are currently two routes for certification: a person can first earn a bachelor's degree and then attend an accredited program in cytotechnology for one year, or they can attend a cytotechnology program that also awards a bachelor's degree. After successful completion of either route, the individual becomes eligible to take a certification exam offered by the American Society for Clinical Pathology (ASCP). People who complete the requirements and pass the examination are entitled to designate themselves as "CT (ASCP)". People who reside outside the United States have the option to take the certification exam for the ASCPi instead. They must complete all requirements to be eligible, and upon successful completion, can practice in the USA. The American Society for Cytotechnology (ASCT) sets U.S. professional standards, monitors legislative and regulatory issues, and provides education. Individual states regulate the licensure of cytotechnologists, usually following American Society of Cytopathology (ASC) guidelines.
Other countries have their own versions of the ASCT, including the British Association of Cytopathology (BAC) in the UK and the European Federation of Cytology Societies (EFCS) in the EU.
The ASC is for cytopathologists but certain qualified cytotechnologists can join it as well.
History
ASCP certified the first cytotechnologist in 1957. In the 1970s, the number of schools peaked at 130, before dropping to 32 in 2011.
See also
Gynaecologic cytology
Cytopathology
References
External links
American Society for Cytotechnology website
Pathology
Laboratory healthcare occupations | Cytotechnology | [
"Biology"
] | 651 | [
"Pathology"
] |
11,992,203 | https://en.wikipedia.org/wiki/Private%20Disk | Private Disk is a disk encryption application for the Microsoft Windows operating system, developed by Dekart SRL. It works by creating a virtual drive, the contents of which is encrypted on-the-fly; other software can use the drive as if it were a usual one.
One of Private Disk's key selling points is in its ease of use, which is achieved by hiding complexity from the end user (e.g. data wiping is applied transparently when an encrypted image is deleted.) This simplicity does however reduce its flexibility in some respects (e.g. it only allows the use of AES-256 encryption.)
Although Private Disk uses a NIST-certified implementation of the AES and SHA-256/384/512 algorithms, this certification is restricted to a single component of Private Disk (the encryption/hash library it uses), not to Private Disk as a complete system.
Feature highlights
NIST-certified implementation of AES-256-bit, and SHA-2. Private Disk complies with FIPS 197 and FIPS 180-2
CBC mode with secret IVs is used to encrypt the sectors of the storage volume (a generic sketch of this scheme appears after this list)
Disk Firewall, an application-level filter, which allows only trusted programs to access the virtual drive
Ability to run directly from a removable drive, requiring no local installation
Offers access to encrypted data on any system, even if administrative privileges are not available
Encrypted images can be accessed on Windows Mobile and Windows CE handhelds; this is achieved by making the encrypted container format compatible with containers used by SecuBox (disk encryption software by Aiko Solutions)
File wiping is applied when deleting an encrypted image
PD File Move, a file migration tool, which will locate the specified files on the system and securely move them to an encrypted disk
Compatibility with Windows 9x and Windows NT operating systems
Autorun and Autofinish automatically start a program or a script when a virtual disk is mounted or dismounted
Encrypted backup of an encrypted image
Password quality meter
Automatic backup of a disk's encryption key
Built-in password recovery tool
Compatibility with 64-bit platforms
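The CBC item above can be illustrated with a generic per-sector encryption sketch. This is not Private Disk's actual implementation; the sector size, the key split, and the IV derivation below are assumptions made for illustration, using the third-party Python "cryptography" package.

```python
# Generic sketch of sector-level AES-256-CBC encryption with secret,
# per-sector IVs. Illustrates the concept only; the sector size, key split
# and IV derivation are assumptions, not Private Disk's real design.
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512  # bytes; already a multiple of the AES block size, so no padding

def sector_iv(iv_key: bytes, sector_index: int) -> bytes:
    # Derive a secret IV for each sector from a secret key and the sector number.
    mac = hmac.new(iv_key, sector_index.to_bytes(8, "little"), hashlib.sha256)
    return mac.digest()[:16]

def encrypt_sector(data_key: bytes, iv_key: bytes, index: int, plaintext: bytes) -> bytes:
    assert len(plaintext) == SECTOR_SIZE
    encryptor = Cipher(algorithms.AES(data_key), modes.CBC(sector_iv(iv_key, index))).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

# Example: encrypt sector 7 of a virtual disk image with a 256-bit key.
data_key, iv_key = os.urandom(32), os.urandom(32)
ciphertext = encrypt_sector(data_key, iv_key, 7, b"\x00" * SECTOR_SIZE)
print(len(ciphertext))  # 512
```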
Existing versions
There are multiple versions of Private Disk, which provide a different feature set:
Private Disk - hard disk encryption software that uses 256-bit AES encryption, is highly configurable, offers application-level protection, USB disk portability, etc.
Private Disk Multifactor is a superset of Private Disk, providing the same functionality, adding support for biometric authentication, as well as smart-card or token-based authentication.
Private Disk Light is a free version, it uses AES-128 and comes with a restricted set of features.
Private Disk SDK is a software development kit that can be used to build a custom application which provides data encryption facilities.
See also
Disk encryption software
Comparison of disk encryption software
External links
Dekart company web-page
Information about certified implementations of the cryptographic algorithms
Private Disk discussion forum
Release notes for the latest version
References
Cryptographic software
Windows security software
Disk encryption | Private Disk | [
"Mathematics"
] | 636 | [
"Cryptographic software",
"Mathematical software"
] |
11,992,699 | https://en.wikipedia.org/wiki/Ostrich%20effect | The ostrich effect, also known as the ostrich problem, was originally coined by Galai & Sade (2003). The name comes from the common (but false) legend that ostriches bury their heads in the sand to avoid danger. This effect is a cognitive bias where people tend to “bury their head in the sand” and avoid potentially negative but useful information, such as feedback on progress, to avoid psychological discomfort.
Neuroscientific evidence
There is neuroscientific evidence for the ostrich effect. Sharot et al. (2012) investigated the differences between positive and negative information when people update existing beliefs. Consistent with the ostrich effect, participants presented with negative information were more likely to avoid updating their beliefs. Moreover, the researchers found that the part of the brain responsible for this cognitive bias was the left inferior frontal gyrus (IFG): by disrupting this region with TMS, participants were more likely to accept the negative information provided.
Researched contexts & applications
Finance
An everyday example of the ostrich effect in a financial context is people avoiding checking their bank account balance after spending a lot of money. The studies below explore the ostrich effect through investors in financial markets.
Galai & Sade (2003) studied investors' decision-making in Israel’s capital market. They found that investors prefer financial investments where the risk is unreported over those with a similar risk-return profile but with frequently reported risks, saying that investors are willing to pay a premium for "the bliss of ignorance".
Later, Karlsson et al. (2005) studied investors’ decision-making in Swedish and US markets. They determined that investors from both countries looked up their portfolios more when the market index was increasing (positive information) and less when the index was decreasing (negative information).
Healthcare
There are known negative implications of the ostrich effect in healthcare. For example, people with diabetes avoid monitoring their blood sugar levels.
Banerjee & Zanella highlighted the ostrich effect in the avoidance of preventive screening, studying women working at a company to understand how a woman’s propensity to get annual mammograms changes after a co-worker is diagnosed with breast cancer. The company had on-site mammograms and removed all barriers to getting them, such as cost and long queues. 70% of eligible women took up the company’s offer of an annual mammogram. However, surprisingly, in the presence of a co-worker diagnosed with breast cancer, women “spatially closer to her in the workplace” were 8% less likely to get a screening. This highlights that, faced with potentially negative information, people tend to avoid the chance to receive it.
Climate and energy
Research has found that when people feel uninformed about a pressing matter, they may exhibit the ostrich effect. The ostrich effect may explain why people sometimes avoid tackling climate change or energy depletion.
Shepherd & Kay (2012) presented participants with a passage. One group read that the US would have oil for 240 more years (positive information), while the other read that supplies would diminish in 40 years (negative information). Afterwards, participants completed a questionnaire to gauge their interest in learning about energy depletion. Those who read that energy depletion was an urgent problem and that oil would run out in 40 years were more likely to avoid learning about the issue.
Theories on causes
Cognitive dissonance
Cognitive dissonance is a state of psychological discomfort that arises when an individual holds two or more conflicting beliefs. Chang et al. (2017) found that when participants ranked reasons why they did not monitor progress, the main reason was that “information on goal progress would demand a change in beliefs”. This shows that when confronted with information that contradicts their beliefs, individuals may experience cognitive dissonance and avoid seeking the information to reduce discomfort. This avoidance is the ostrich effect. The opposite, seeking information consistent with your beliefs, is a cognitive bias termed confirmation bias.
Trustability
Chang et al. (2017) also found that some participants exhibited the ostrich effect because they did not trust the information provided. This lack of trust applies especially to negative information; Ilgen et al. (1979) found that people are more likely to trust positive feedback than negative feedback. Additionally, DeBono & Harnish (1988) found that the information's trustability depends on the perceived expertise of the information provider: the higher the perceived expertise, the more likely people are to trust it.
Loss aversion
Loss aversion is the tendency for people to feel the pain of losses more strongly than the pleasure of equivalent gains. Panidi (2015) looked at the link between loss aversion and the ostrich effect: loss aversion was measured through lottery choices, and the ostrich effect was measured through preventive medical testing. The study found that higher loss aversion decreases the likelihood of deciding to take a preventive medical test. This demonstrates that the more loss-averse an individual is, the more likely they are to display the ostrich effect by avoiding information about a diagnosis.
Criticism
Meerkat-Effect
Initial findings
Gherzi et al. (2014) studied 617 investors from Barclays Wealth & Management UK. They found no perceivable attempt by investors to ignore or avoid negative information. Instead, they saw that "investors increase their portfolio monitoring following both positive and daily negative market returns, behaving more like hyper-vigilant meerkats than head-in-the-sand ostriches". They dubbed this phenomenon the "meerkat effect".
Follow-up research
Sicherman et al. (2016) showed that the sample and demographic moderate the extent that investors exhibited the ostrich effect. In a sample of 100,000, Sicherman et al. (2016) found that 79% of investors showed the ostrich effect while 21% had “anti-ostrich behaviour”, such as the meerkat effect.
The researchers argued that Gherzi et al. (2014) sample size of 617 investors was too small, one potential reason that most investors exhibited the meerkat effect rather than the ostrich effect. Sicherman et al. (2016) also showed that the ostrich effect appeared more in “men, older investors and wealthier investors”.
Future research: Culture
Another moderator for the ostrich effect that has yet to be specifically studied but has been theorised is cultural differences. Culture may impact the ostrich effect as the underlying causes of the ostrich effect are all influenced by culture. Hoshino-Browne et al. (2005) showed that cognitive dissonance is resolved in different manners in collectivistic cultures compared to individualistic cultures. Furthermore, Wang et al. (2016) shows that loss aversion is higher in individualistic cultures, and Rose & Kitayama (1991) found that collectivistic cultures tend to trust negative feedback and reject positive ones. Individualism appears more in western culture, hinting at the ostrich effect being higher in western cultures. The studies on the ostrich effect are predominantly conducted on western cultures; therefore, future studies must test for potential cultural differences in the ostrich effect.
See also
Ostrich policy - another article about the same thing
Confirmation bias
Denial
Elephant in the room
Loss aversion
Selective exposure
Voldemort effect
References
Adages
Behavioral finance
Cognitive biases
Metaphors referring to birds | Ostrich effect | [
"Biology"
] | 1,555 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
11,992,990 | https://en.wikipedia.org/wiki/Vanadium%28III%29%20sulfate | Vanadium(III) sulfate is the inorganic compound with the formula V2(SO4)3. It is a pale yellow solid that is stable to air, in contrast to most vanadium(III) compounds. It slowly dissolves in water to give the green aquo complex [V(H2O)6]3+.
The compound is prepared by treating V2O5 in sulfuric acid with elemental sulfur:
V2O5 + S + 3 H2SO4 → V2(SO4)3 + SO2 + 3 H2O
This transformation is a rare example of a reduction by elemental sulfur.
When heated in vacuum at or slightly below 410 °C, it decomposes into vanadyl sulfate (VOSO4) and SO2. Vanadium(III) sulfate is stable in dry air but upon exposure to moist air for several weeks forms a green hydrate form.
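The decomposition described above corresponds to the following stoichiometry (balanced here for illustration; the equation is not given explicitly in the text):
V2(SO4)3 → 2 VOSO4 + SO2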
Vanadium(III) sulfate is a reducing agent.
References
Vanadium(III) compounds
Sulfates | Vanadium(III) sulfate | [
"Chemistry"
] | 184 | [
"Salts",
"Inorganic compounds",
"Sulfates",
"Inorganic compound stubs"
] |
11,993,641 | https://en.wikipedia.org/wiki/Hydrocollator | The hydrocollator, first introduced in 1947 by the Chattanooga Pharmaceutical Company, consists of a thermostatically controlled water bath for placing bentonite-filled cloth heating pads. When the pads are removed from the bath, they are placed in covers and placed on the patient. The device is primarily used by athletic trainers and physical therapists.
Research
The evidence behind the use of the hydrocollator is primarily concerned with achieving rapid heating of the tissue due to the more efficient transfer of energy through water as compared to air. There is some concern that hydrocollator treatment may be less effective with overweight or obese patients.
Heating methods are used commonly in patients with acute pain. It is recommended that heating pads be used at home on acute injuries for short term pain relief.
References
Medical equipment | Hydrocollator | [
"Biology"
] | 160 | [
"Medical equipment",
"Medical technology"
] |
11,993,841 | https://en.wikipedia.org/wiki/Pavo%E2%80%93Indus%20Supercluster | The Pavo–Indus Supercluster is a neighboring supercluster located about away in the constellations of Pavo, Indus, and Telescopium. The supercluster contains three main clusters, Abell 3656, Abell 3698, and Abell 3742.
Other groups and clusters in the supercluster include the NGC 6769 Group and Abell S805 (IC 4765 Group, Pavo II, DRCG 1842-63) and the massive Norma Cluster.
In 2014, it was announced that the Pavo–Indus Supercluster is a lobe in a greater supercluster, Laniakea, that is centered on the Great Attractor. The Virgo Supercluster would also be part of this greater supercluster, thus becoming the local supercluster.
Structure
The Pavo–Indus Supercluster exhibits a wall-like or filamentary structure. The supercluster, along with the Telescopium−Grus Cloud, forms part of a wall bounding the Local Void and the Sculptor Void.
Nearby superclusters
Centaurus Supercluster
In 1983, a paper by Winkler et al. suggested, on the basis of redshift maps of the distribution of galaxies, that the Pavo–Indus Supercluster may be connected to the Centaurus Supercluster. Later, in 1984, in a collaboration with astronomer Tony Fairall, and in a separate paper by Fairall published in 1988 titled "A redshift map of the Triangulum Australe–Ara region: further indication that Centaurus and Pavo are one and the same supercluster", it was concluded from the distribution of galaxies in redshift space that the Pavo–Indus supercluster was indeed connected to the Centaurus and Hydra superclusters and that the Virgo Supercluster was an appendage of a larger structure involving these superclusters. Later studies concluded that Pavo–Indus formed part of a wall of galaxies similar in size to the CfA2 Great Wall, dubbed the Norma Wall, with the Norma Cluster at its center, similar to the Coma Cluster. This wall extends from the Pavo–Indus supercluster through the Norma Cluster, passing the Zone of Avoidance in the Great Attractor region, to meet the Centaurus–Crux Cluster at a redshift of about 5,700–6,200 km/s and the CIZA J1324.7−5736 cluster at a redshift of 5,700 km/s, while also splitting off to form the Centaurus Wall, which passes the galactic plane to meet the Centaurus Cluster, where the supercluster originates.
Perseus–Pisces Supercluster
Di Nella et al. found no evidence of a connection between Pavo–Indus and the Perseus–Pisces Supercluster. However, Tully et al. revealed the existence of a filamentary extension of the Pavo–Indus Supercluster known as the Arch that caps the Local Void in the supergalactic north and provides a connection to the Perseus–Pisces Supercluster before terminating close to the NGC 7242 Cluster.
Ophiuchus Supercluster
The Pavo–Indus supercluster lies physically close to the Ophiuchus Supercluster and may be connected in an unknown filament between the two superclusters.
See also
Abell catalogue
Large-scale structure of the universe
List of Abell clusters
Supercluster
References
External links
The Pavo-Indus Supercluster from An Atlas of the Universe
Laniakea Supercluster
Galaxy superclusters | Pavo–Indus Supercluster | [
"Astronomy"
] | 781 | [
"Astronomical objects",
"Galaxy superclusters"
] |
11,993,854 | https://en.wikipedia.org/wiki/Common%20Data%20Link | Common Data Link (CDL) is a secure U.S. military communication protocol. It was established by the U.S. Department of Defense in 1991 as the military's primary protocol for imagery and signals intelligence. CDL operates within the at data rates up to 274 Mbit/s. CDL allows for full duplex data exchange. CDL signals are transmitted, received, synchronized, routed, and simulated by Common data link (CDL) Interface Boxes (CIBs).
The FY06 Authorization Act requires use of CDL for all imagery unless a waiver is granted. The primary reason waivers are granted is the inability to carry the 300-pound radios on small (30-pound) aircraft. Emerging technology was expected to field a 2-pound version by the end of the decade (2010).
The Tactical Common Data Link (TCDL) is a secure data link being developed by the U.S. military to send secure data and streaming video links from airborne platforms to ground stations. The TCDL can accept data from many different sources, then encrypt, multiplex, encode, transmit, demultiplex, and route this data at high speeds. It uses a Ku narrowband uplink that is used for both payload and vehicle control, and a wideband downlink for data transfer.
The TCDL uses both directional and omnidirectional antennas to transmit and receive the Ku-band signal. The TCDL was designed for UAVs, specifically the MQ-8B Fire Scout, as well as crewed non-fighter environments. The TCDL transmits radar, imagery, video, and other sensor information at rates from 1.544 Mbit/s to 10.7 Mbit/s over ranges of 200 km. It has a bit error rate of 10^-6 with COMSEC and 10^-8 without COMSEC. It is also intended that the TCDL will in time support the required higher CDL rates of 45, 137, and 274 Mbit/s.
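To put those rates and error figures in perspective, a small back-of-the-envelope calculation follows; the 100 MB image size is an assumed illustrative value, not a figure from the article.

```python
# Back-of-the-envelope numbers for the link rates quoted above.
# The 100 MB image size is an assumption for illustration.
image_bits = 100 * 10**6 * 8          # 100 MB expressed in bits

for rate_mbps in (1.544, 10.7, 45, 137, 274):
    seconds = image_bits / (rate_mbps * 10**6)
    print(f"{rate_mbps} Mbit/s -> {seconds:.1f} s per image")

expected_errors = image_bits * 1e-6   # at a bit error rate of 10^-6
print(f"expected bit errors per image at BER 1e-6: {expected_errors:.0f}")  # ~800
```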
References
L-3 business segments
Avionics Systems Standardisation Committee
Secure communication
Military communications | Common Data Link | [
"Engineering"
] | 442 | [
"Military communications",
"Telecommunications engineering"
] |
11,994,387 | https://en.wikipedia.org/wiki/Niederdorla | Niederdorla is a village and a former municipality in the Unstrut-Hainich-Kreis district of Thuringia, Germany. One of the possible geographical centres of Germany is within its area. The nearest city is Erfurt, which also is the capital city of Thuringia. Since 31 December 2012, it has been part of the municipality of Vogtei.
Geographical centre of Germany
Niederdorla claims to be the most central municipality in Germany. A plaque was erected and a lime tree planted at the site after the 1990 German reunification. The point was confirmed as the centroid of the country's extreme coordinates by the Dresden University of Technology. Niederdorla also contains the centre of gravity (equilibrium point), which lies a short distance to the southwest.
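The "centroid of the extreme coordinates" is simply the midpoint of the country's extreme latitudes and longitudes; the sketch below uses rough, assumed boundary values for illustration rather than the figures used by the Dresden University of Technology.

```python
# Midpoint of Germany's extreme latitudes and longitudes ("centroid of the
# extreme coordinates"). The boundary values are rough assumptions.
lat_north, lat_south = 55.06, 47.27
lon_east, lon_west = 15.04, 5.87

centre_lat = (lat_north + lat_south) / 2
centre_lon = (lon_east + lon_west) / 2
print(f"{centre_lat:.2f} N, {centre_lon:.2f} E")  # about 51.2 N, 10.5 E, near Niederdorla
```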
People from Niederdorla
Matthias Weckmann, born c. 1616 in Niederdorla, died 1674 in Hamburg, Baroque organist and composer
See also
Central Germany (geography)
References
Former municipalities in Thuringia
Geographical centres | Niederdorla | [
"Physics",
"Mathematics"
] | 207 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
11,995,186 | https://en.wikipedia.org/wiki/Telapristone | Telapristone (), as telapristone acetate (proposed brand names Proellex, Progenta; former code name CDB-4124), is a synthetic, steroidal selective progesterone receptor modulator (SPRM) related to mifepristone which is under development by Repros Therapeutics for the treatment of breast cancer, endometriosis, and uterine fibroids. It was originally developed by the National Institutes of Health (NIH), and, as of 2017, is in phase II clinical trials for the aforementioned indications. In addition to its activity as an SPRM, the drug also has some antiglucocorticoid activity.
See also
List of investigational sex-hormonal agents § Progestogenics
Aglepristone
Lilopristone
Onapristone
Toripristone
References
External links
Telapristone - AdisInsight
Acetate esters
Dimethylamino compounds
Antiglucocorticoids
Estranes
Ketones
Selective progesterone receptor modulators | Telapristone | [
"Chemistry"
] | 224 | [
"Ketones",
"Functional groups"
] |
11,996,133 | https://en.wikipedia.org/wiki/Intercontinental%20and%20transoceanic%20fixed%20links | A fixed link or fixed crossing is a permanent, unbroken road or rail connection across water that uses some combination of bridges, tunnels, and causeways and does not involve intermittent connections such as drawbridges or ferries. A bridge–tunnel combination is commonly used for major fixed links.
This is a list of proposed and actual transport links between continents and to offshore islands. See also list of bridge–tunnels for another list of fixed links including links across rivers, bays and lakes.
History
Cosmopolitan Railway
In 1890 William Gilpin first proposed to connect all the continents by land via the Cosmopolitan Railway. Significant elements of that proposal, such as the English Channel Tunnel, have been constructed since that era. However, the improvement of the global shipping industry and advent of international air travel has reduced the demand for many intercontinental land connections.
Europe
English Channel
There is no public highway connection between Great Britain and the European mainland; only a rail connection, the Channel Tunnel.
A cross-Channel tunnel was first proposed in 1802, and construction actually started in 1881 before being abandoned. Roll-on/roll-off ferry services provided links across the channel for vehicles.
A road tunnel was proposed in 1979, but not considered viable. Construction of the Channel Tunnel started in 1988 and the tunnel opened in 1994. Automobiles and lorries/transport trucks are loaded onto the Eurotunnel Shuttle's enclosed railway cars (similar to auto rack/motorail railway cars) for the trip through the tunnel. A service road tunnel runs the entire length of the crossing, but is closed to general use and used only during emergencies and for maintenance. Cyclists – both amateur and professional – have crossed the channel via the tunnel on special occasions.
There have been proposals at various times for a second channel tunnel of some kind.
Irish Sea
Various ferry services link Ireland to Britain and France. A number of options for an Irish Sea fixed crossing have been proposed over the years, but none are currently under serious consideration. Additionally, there was a short-lived proposal for an underground roundabout beneath the Isle of Man, with connecting tunnels to Ireland, Scotland, and two to England. Another proposal was for an additional route from Scotland to the Isle of Man.
Channel Islands to France
In July 2009, the States of Jersey were considering the feasibility of building a 14-mile (23 km) tunnel between Jersey (a British Crown dependency) and Lower Normandy in France. There was a revival of the idea in 2018 and 2021. It was reported in the local media that a link between Jersey and the neighbouring island of Guernsey would cost £2.6 billion.
Kerch Strait
The Crimean Bridge is a pair of parallel bridges constructed by the Russian Federation following the annexation of Crimea, to span the Kerch Strait between the Taman Peninsula of Krasnodar Krai and the Kerch Peninsula of Crimea. The bridge complex provides for both vehicular traffic and for rail. During the Russian invasion of Ukraine, the bridge has been attacked or damaged on three occasions: 8 October 2022, 17 July 2023 and 12 August 2023. As of August 2023, the bridge reopened at a limited capacity.
Germany to Denmark to Sweden
The Øresund Bridge links southern Sweden to the Danish island of Zealand. Zealand is linked to the Danish mainland and the rest of Europe by the Great Belt Fixed Link. Most travellers between Sweden and Germany, both by road and train, use the shorter route with a ferry over the Fehmarn Belt southwestwards towards Hamburg or southwards to Rostock. The Fehmarn Belt Fixed Link is planned to be opened in 2029. A Gedser-Rostock Bridge is also under consideration but has been put back as the Fehmarn Belt crossing is now under construction. Proposals also exist for a fixed link from Rügen to southern Sweden, linking Berlin and the Øresund region.
Sweden to Finland
Ferry services link Sweden to Finland via Åland. There are proposals for fixed links between Sweden and Finland. A tunnel could be built between Sweden and Åland, with its deepest point around Märket, requiring a small detour. The area between Åland and Finland is shallow with many islands, able to be connected with bridges, some of which already exist. Between Umeå and Vaasa further north, there is a proposal to build the Kvarken Bridge, a series of bridges. None of these proposals have been seriously investigated.
Finland to Estonia
Ferry services link Finland to Estonia as well as overground rail and road routes via Saint Petersburg in Russia. Rail Baltica is a proposal for a rail link from Finland to Estonia, Latvia, Lithuania and Poland, bypassing Russia via a Helsinki to Tallinn Tunnel. The gulf has heavy ferry traffic, and the port of Helsinki has the largest number of international passengers of any port in Europe, and most travel to Tallinn or back. Finland and Estonia share close linguistic, cultural, economic and historical ties and proponents of what they call "Talsinki" (a portmanteau of the names of the two capitals) point to the Øresund region as an example of a cross-national metropolitan area linked by an underwater bridge-tunnel. A combination of a Finland to Estonia and a Finland to Sweden fixed link would reduce the need for ferries on the route the MS Estonia was on when it sank in 1994 causing a loss of 852 lives, the biggest peacetime maritime disaster in the Baltic.
Italian mainland to Sicily
The Strait of Messina has a busy ferry traffic. The Strait of Messina Bridge is planned, but the construction date has been postponed several times.
Italian mainland to Elba
There is a project to link Elba with mainland Italy (through Piombino in Tuscany) by crossing the Piombino Channel in a 16 km road tunnel. The feasibility project was launched by Cacelli Partners Ltd of Riccardo Cacelli, in collaboration with the Adu London studio of the architect David Ulivagnoli.
Sardegna–Maddalena
There is a project for an underwater tunnel that would link Palau (in Sardinia) with the island of La Maddalena, crossing a 3 km stretch of sea.
Corsica–Sardegna
There has been interest from Hyperloop One in using Hyperloop to connect the islands.
Italy–Croatia Bridge
There have been proposals to build a railway and highway bridge over the Adriatic Sea to connect Italy and Croatia, from Ancona to Zadar, following a 120 km route. The idea was presented by the Roman architect Giorgio De Romanis and also called for the creation of a special company, "Il ponte sull'Adriatico Srl". The bridge would be suspended above the sea at a height of between 30 and 70 meters and would also allow the laying of pipes for water, oil and gas, as well as the accommodation of telecommunication cables. The idea received the support of the ex-governor of the Marche, Gian Mario Spacca. Some sources considered the project more realistic than a bridge between Calabria and Sicily.
Norway
The Boknafjord tunnel (the main part of the Rogfast project) is under construction and is planned to become, in 2033, the world's longest and deepest undersea road tunnel. It will connect the island of Bokn with the mainland at Stavanger under the open Bokna Fjord.
Faroe Islands
Tunnels and bridges are an important part of the Faroe Islands transportation network. The longest proposed one is the 25 km Suðuroyartunnilin (the Suðuroy Tunnel).
Europe to Africa
Gibraltar Tunnel
The Gibraltar Tunnel is proposed to be a rail tunnel linking Africa and Europe. A tunnel would likely be an electrified rail tunnel with car shuttles, because the depth of the Strait of Gibraltar and the length of the tunnel would make it a great challenge to remove vehicle exhaust. Similar considerations led to the Channel Tunnel linking the UK and France not being a highway tunnel. There have also been proposals for a bridge over the strait, although ship traffic would complicate this solution. Car ferries currently operate across the strait.
Strait of Sicily
The proposed Strait of Sicily Tunnel would link Sicily to Tunisia. Together with the proposed Strait of Messina Bridge from Sicily to Italy this would provide a fixed link between Italy and Tunisia.
Europe to Asia
The Turkish Straits are the channel between European Turkey and Asian Turkey and consist of the (from south to north) the Dardanelles, the Sea of Marmara and the Bosphorus.
The Bosphorus
Three suspension bridges cross the Bosphorus. The first of these, the Bosphorus Bridge, was completed in 1973. The second, the Fatih Sultan Mehmet (Bosporus II) Bridge, was completed in 1988, north of the first bridge. The Bosphorus Bridge forms part of the O1 Motorway, while the Fatih Sultan Mehmet Bridge forms part of the Trans-European Motorway.
Construction of a third suspension bridge, the Yavuz Sultan Selim Bridge, began on May 29, 2013; it was opened to traffic on August 26, 2016. The bridge was built near the northern end of the Bosporus, between the villages of Garipçe on the European side and Poyrazköy on the Asian side. It is part of the "Northern Marmara Motorway", which will be further integrated with the existing Black Sea Coastal Highway, and will allow transit traffic to bypass city traffic.
The Marmaray project, featuring a long undersea railway tunnel, opened on 29 October 2013. Part of the tunnel runs under the strait.
An undersea water supply tunnel, named the Bosporus Water Tunnel, was constructed in 2012 to transfer water from the Melen Creek in Düzce Province (to the east of the Bosporus strait, in northwestern Anatolia) to the European side of Istanbul.
The Eurasia Tunnel is a road tunnel between Kazlicesme and Goztepe, which began construction in February 2011 and opened to traffic on 21 December 2016. The Great Istanbul Tunnel, a proposed undersea road and railway tunnel, will connect Şişli and Beykoz districts.
The Dardanelles
The Çanakkale 1915 Bridge opened in 2022, crossing the strait between the cities of Gelibolu and Lapseki.
Africa to Asia
Suez Canal Bridge
The Mubarak Peace Bridge, also known as the Egyptian-Japanese Friendship Bridge, Al Salam Bridge, or Al Salam Peace Bridge, is a road bridge crossing the Suez Canal at El-Qantara, whose name means "the bridge" in Egyptian Arabic. The bridge links the continents of Africa and Asia.
Saudi–Egypt Causeway
The Saudi–Egypt Causeway is a proposal for a causeway and bridge between the Sinai Peninsula in Egypt and the northern part of Saudi Arabia. This would provide a direct road route between Egypt and Saudi Arabia without going through Israel or Jordan. A causeway faces considerable political hurdles as the disruption of Israeli shipping access to the Red Sea was seen as a casus belli by Israel ahead of the Six-Day War. There is a car ferry between Safaga, Egypt and Duba, Saudi Arabia. The two uninhabited islands in the strait (Tiran island and Sanafir island), which might be used for a bridge, tunnel or causeway, were disputed between Egypt and Saudi Arabia until President Abdel Fatah al-Sisi of Egypt officially ceded them to Saudi Arabia in 2016/2017. The potential construction of a fixed link was cited in some media reports as contributing to the cession.
Bridge of the Horns
The Bridge of the Horns is a proposed construction project to build a bridge between the coasts of Djibouti and Yemen across the Bab-el-Mandeb, the strait between the Red Sea and Gulf of Aden. There are no ferry services on this route as of 2018.
Jazan–Massawa Tunnel
Saudi Arabia is considering the development of a 440 km tunnel across the Red Sea to link its industrial city of Jazan with Massawa's port in Eritrea. The proposal has had the support of Pakistan and of the Chinese Belt and Road Initiative.
Asia
Sri Lanka
A bridge across the Palk Strait between India and Sri Lanka has been proposed. The Boat Mail train and ferry service linked India and Sri Lanka until the First World War. An India–Sri Lanka HVDC Interconnection is under consideration to link the electricity networks of the two countries.
South East Asian islands
Mainland Peninsular Malaysia is linked to Penang Island by two road bridges: the Penang Bridge and the Sultan Abdul Halim Muadzam Shah Bridge (Penang Second Bridge). To the south, it is linked to Singapore Island across the Straits of Johor by the Johor–Singapore Causeway and the Malaysia–Singapore Second Link; the former also carries Malaysia's West Coast Line to the island.
There are proposals to link Johor (in Malaysia) and Riau (in Indonesia) with a Malacca Strait Bridge or underwater tunnel crossing the Strait of Malacca and some islands. The longest single connection would be 17.5 kilometres, and the total length would be between 39 and 40 kilometres. There is also a proposal for a Singapore Strait crossing linking Singapore with the Riau archipelago of Indonesia, most likely with the island of Batam. Both projects would link Indonesia (specifically the islands of Sumatra and Java) to mainland Asia.
Passenger and vehicle ferries link the various islands of Indonesia, the Philippines, Singapore, Malaysia, and Papua New Guinea.
There are proposals to link Java, the most populated island of Indonesia, to Sumatra via a proposed Sunda Strait Bridge, and from Sumatra to Singapore and/or Malaysia via the Malacca Strait Bridge. Construction has not yet started, although the completion date of the bridge had been set for August 2025.
Hainan Island
The Guangdong–Hainan Ferry, or the Yuehai Ferry (part of the Guangdong–Hainan Railway) is a vehicle and train ferry connecting Hainan Island to Guangdong in mainland China. The ferries run across the Qiongzhou Strait, between Zhanjiang, Guangdong and Haikou, Hainan. A road-rail bridge has been proposed.
Bohai Strait
Bohai Strait tunnel project is a proposed connection that would connect the Chinese cities of Yantai and Dalian across the Bohai Strait.
Taiwan
The Taiwan Strait Tunnel Project is a proposed undersea tunnel to connect Pingtan in the People's Republic of China to Hsinchu in northern Taiwan as part of the G3 Beijing–Taipei Expressway. First proposed in 1996, the project has since been subject to a number of academic discussions and feasibility studies, including by the China Railway Engineering Corporation. There exist cross strait ferries, both within outlying islands of Taiwan and between the PRC and Taiwan. The political status of Taiwan complicates any such proposal.
Korea to China
The Korean government has considered building underwater tunnels to China. The proposed route, about 341 kilometres long, would run between Incheon and Weihai, with consideration given to building an intermediate artificial island along the way. Other Korean cities, such as Ongjin, Hwaseong and Pyeongtaek, have been considered as starting points for routes to China. The idea also forms part of the Chinese comprehensive development plan for the Bohai area.
Mokpo–Jeju
The Jeju Undersea Tunnel is a project to connect the South Korean provinces of South Jeolla and Jeju across the Jeju Strait, with intermediate stops at the islands of Bogildo and Chujado. The total length of the proposed railway is 167 km, including a 66 km surface interval from Mokpo to Haenam, a 28 km bridge section from Haenam to Bogil Island, and a 73 km stretch from Bogil to Chuja and Jeju Islands.
Busan–Geoje Bridge
The Busan–Geoje Fixed Link (or Geoga Bridge) is an 8.2-km bridge-tunnel fixed link that connects the South Korean city of Busan to Geoje Island.
Korea to Japan
The "Korea Japan Friendship Tunnel System" is a proposal for a fixed link from the city of Fukuoka on Kyūshū, Japan, to the port city of Busan in Korea via four islands. The maximum ocean depth in this area is . Similar proposals have been discussed for decades by Korean and Japanese politicians. A road bridge links Kyūshū to the main Japanese island of Honshu.
Japan to Russia
The Seikan Tunnel has provided a rail link from the main Japanese Island of Honshu to the northernmost Japanese island of Hokkaido since 1988. The proposed Sakhalin-Hokkaido Tunnel would link Hokkaido to the Russian island of Sakhalin. When combined with the proposed Sakhalin Tunnel between Sakhalin and the Russian Mainland and an extension of the Baikal Amur Mainline this would give a rail link from Japan to Russia and the mainland of Asia.
Hong Kong–Zhuhai–Macau Bridge
The Hong Kong–Zhuhai–Macau Bridge links Hong Kong and Macau with Zhuhai in mainland China. Opened on October 24, 2018, it is the longest open-sea fixed crossing in the world.
Hōyo Strait
Shikoku and Kyushu are the only adjacent major Japanese islands not directly connected by a fixed link. Road travel between the two is possible only via Honshu, a detour of up to 600 km.
Since 1995, the Ōita and Ehime prefectures have jointly conducted research into the technical feasibility of bridges over the Hōyo Strait, as well as basic research into natural and social conditions, and the 1998 "Hoyo Kaikyo Bridge Survey Report" concluded that a bridge would be technically feasible. The report proposes a four-span suspension bridge as the main bridge, with a central tower height of 376 m, a central span length of 3,000 m and a bridge length of about 8,400 m; the strait would be crossed with two bridges, for a total extension of about 12.7 km. The total project cost is currently estimated at about 1.3 trillion yen (US$12.1 billion).
The Hoyo Kaikyo Route Promotion Council conducted a survey comparing various crossing technologies (bridges, tunnels) and modes of transportation (automobiles, railways) in 1997, and "Transportation method comparison study report" was published. According to the report, in the case of bridges, road bridges are technically possible, but due to the long span, it is difficult to use them as railway bridges and combined bridges.
Qatar–Bahrain Causeway
The Qatar Bahrain Causeway was a planned causeway between the two Arab states of Qatar and Bahrain. It was expected that a ferry service would be established between the two countries in 2017.
Due to the Qatar diplomatic crisis and Bahrain's siding with Saudi Arabia, the bridge is very unlikely ever to be built.
King Fahd Causeway
The King Fahd Causeway is a series of bridges and causeways connecting Saudi Arabia and Bahrain. At , the western terminus of the causeway is the al-Khour neighbourhood of Khobar, Saudi Arabia and the eastern terminus is Al Jasra, Bahrain.
Iran–Qatar–Saudi Arabia Tunnel
Iran and Qatar (which would provide most of the project's financing) have plans for an underwater tunnel connecting the two countries, planned to be the longest tunnel in the world at 190 km. It would link the Iranian port of Bandar-e Deyr to an unspecified location in Qatar across the Persian Gulf, with both road and railway sections; however, a road tunnel is not considered very feasible because of the long distance. It has had the support of Iran's Ports and Maritime Organisation managing director, Ali Akbar Safaei, and Iranian President Ebrahim Raisi, who anticipated the creation of a joint Qatar–Iran committee with the Emir of Qatar, Tamim bin Hamad Al Thani. It would also create a straight and direct route between Saudi Arabia and Iran.
Iran–Oman Causeway
Iran has proposed to build an overpass bridge over the Strait of Hormuz that would link Iran economically to GCC countries and Yemen through a 39 km road link between Oman's Musandam exclave and southern Iran. The idea had the support of the Iranian Ambassador to Oman, Ali Akbar Sibeveih.
Persian Gulf Bridge
The Persian Gulf Bridge in Qeshm–Bandar Abbas (Iran) would be a 2.4 km (1.5 mi) long road-rail bridge connecting Qeshm Island to mainland Iran, from the historic port of Laft to Pahal port in Bandar Abbas (Hormozgan Province). Building an undersea tunnel instead of a bridge was proposed, but the idea was rejected due to high costs.
Masirah island Bridge
In 2010, Oman's ruler, Sultan Qaboos bin Said, asked the government to plan for the construction of a bridge, carrying a railroad, connecting Masirah Island on Oman's eastern coast to the mainland.
Saudi Arabia–Pakistan
The Saudis were exploring two options for building a 400 km tunnel or bridge to link Gwadar (in Pakistan) with Muscat (in Oman), at the mouth of the Strait of Hormuz at one end. The objective was to bypass trade routes (including for oil supplies) involving Iran and Qatar, because of the Iran–Saudi Arabia proxy conflict, and also to extend the Chinese Belt and Road Initiative.
United Arab Emirates–India Tunnel
A 1,200-mile (2,000 km) tunnel carrying maglev trains has been planned to connect India's biggest city, Mumbai, with the emirate of Fujairah (United Arab Emirates) through the Arabian Sea, a high-speed railway line intended to transport passengers, tourists and workers in just two hours. It had the support of Abdullah Alshehhi, chief consultant of the Abu Dhabi National Advisory Office (a Masdar City-based consultancy), and the Gulf Cooperation Council. It is considered the longest submarine tunnel project in the world and would reach a depth of 15,000 feet below the surface of the Indian Ocean.
Karachi (Pakistan) and Muscat (Oman) are also included in future plans for train stations, with provision for a road to be constructed within the tunnel for cars and trucks, as well as a floating hotel, shopping centres and fuel stations, and pipelines for oil and water. Expansion of the project might include the One Belt One Road Initiative, linking the China–Pakistan Economic Corridor at Gwadar Port with the UAE, through the Fujairah port, to complement the Chinese silk road.
Seikan tunnel
One of the longest tunnels in the world and - depending on definitions (total length versus length actually under water) - either the longest or the second-longest underwater tunnel, ahead of or behind the Channel Tunnel, the Seikan Tunnel links Japan's northernmost main island, Hokkaido, to Honshu. Initially built only to Cape gauge, the rail line running through the tunnel has since been converted to dual gauge to allow standard-gauge services, particularly the Shinkansen. The Tōya Maru accident of 1954, in which a train ferry sank in a typhoon, killing over a thousand people, was a major factor in tilting the decision towards construction of the tunnel. The tunnel opened in 1988, and the Hokkaido Shinkansen started running through it in 2016.
Bataan–Cavite Interlink Bridge
The Philippines is planning to build a bridge that will span Manila Bay and connect the provinces of Bataan and Cavite. The Bataan–Cavite Interlink Bridge, once completed, will be long and will consist of two cable-stayed bridges, with a span of and each. The National Economic and Development Authority (NEDA) approved the bridge project in early 2020 with a budget of . The implementation of the bridge project is projected to last six years.
In October 2020, the Department of Public Works and Highways (DPWH) signed a $59 million engineering design contract, awarded to the joint venture of T. Y. Lin International from the US and Korea's Pyunghwa Engineering Consultants Ltd., who are working in tandem with Geneva-based Renardet S.A. and local firm DCCD Engineering Corporation.
As of March 2023, the project's detailed engineering design is already 70% complete, according to DPWH. The construction of the bridge is targeted to start in late 2023.
Asia to America
Bering Strait bridge or tunnel
There is a proposal to span the Bering Strait with a bridge or tunnel called the Intercontinental Peace Bridge, the TKM-World Link or the AmerAsian Peace Tunnel. This would link the American Cape Prince of Wales with the Russian Cape Dezhnev. The link would consist of three tunnels connecting Alaska and Russia via two islands: Little Diomede (USA) and Big Diomede (Russia). The longest single tunnel would be . Since the Bering Strait at the site of the proposed crossing has a maximum known depth of , the tunnels might be dug with conventional tunnel boring machines of the type used to build the Channel Tunnel. The three-tunnel proposal is considered preferable to a bridge due to severe environmental conditions, especially the inescapable winter ice damage.
Each proposed tunnel would be shorter than some current tunnels. The Channel Tunnel linking England with mainland Europe is about long; the Seikan Tunnel, an ocean tunnel linking Hokkaido with Honshu in Japan is long; and the Swiss Gotthard Base Tunnel through the Alps, opened in 2016, is long.
For a bridge or tunnel to be useful, a road or railway must be built to connect it, despite the very difficult climate and very sparse local population. In Alaska, a link would be needed, and in Russia, a link more than long must be constructed. Until around 2010, such road connections were suggested by enthusiasts only, but at that time both the Russian government and the Alaskan state government started to consider such roads. The Alaska Railroad is currently the only railroad in Alaska, and is not connected to the wider North American rail network, but plans for an A2A Railway linking it to Alberta, Canada and from there to the rest of the North American rail network are under consideration.
Asia to Oceania
There are no viable options for a fixed link between Asia and Oceania. Excluding the Indonesia–Papua New Guinea border that divides the island of New Guinea, the shortest distances between Southeast Asia and Oceania still cover a significant distance across the sea between Indonesia and Australia. The closest would be Badu Island in Queensland's Torres Strait Islands, which is away from Merauke Regency's coastal border with Papua New Guinea. Otherwise, in the Arafura Sea, Rimbija Island in the Northern Territory's uninhabited Wessel Islands is south of Yos Sudarso Island, also in Merauke Regency; and across the Timor Sea, the northern tip of Melville Island is still south of Selaru in the Tanimbar Islands of Maluku.
Oceania
Australia–Papua New Guinea Tunnel
A tunnel/bridge between the Australian mainland and the island of New Guinea, bridging the Torres Strait, is not considered economically feasible owing to the great distance. Cape York in northern Queensland is 140 km away from New Guinea. This is a very long distance compared to existing tunnels or bridges, and the demand for car travel is not so high; as of 2009 there are no car ferries between Australia and Papua New Guinea. Passenger travel is by air or private boat only.
Cook Strait
The Cook Strait between the North Island and South Island of New Zealand has been suggested for a fixed link. The length would be at least 22 km, and the water depth is around 200 metres. The project is generally considered too complicated and costly to be realised.
The Americas
Vancouver Island
Ferry services link Vancouver Island to British Columbia on the Canadian Mainland and to the State of Washington in the US.
Proposals have been made for a fixed link to Vancouver Island for over a century. Because of the extreme depth and soft seabed of the Georgia Strait, and the potential for seismic activity, a bridge or tunnel would face monumental engineering, safety, and environmental challenges at a prohibitive cost.
Prince Edward Island
Prince Edward Island is linked to New Brunswick on the Canadian mainland by the Confederation Bridge which opened in 1997.
Newfoundland
Various proposals have been considered for a fixed link consisting of bridges, tunnels, and/or causeways across the Strait of Belle Isle, connecting the Province of Newfoundland and Labrador's mainland Labrador region with the island of Newfoundland. This strait has a minimum width of .
Long Island
Nine bridges and 13 tunnels (including railroad tunnels) connect the New York City boroughs of Brooklyn and Queens, on Long Island, to Manhattan and Staten Island and, via these, to Newark in New Jersey and The Bronx on the mainland of New York state. However, no fixed crossing of the Long Island Sound exists east of New York City; most traffic from the mainland United States must pass through the city to access Long Island. Passenger and auto ferries connect Suffolk County on Long Island northward across the Sound to the mainland of New York state and eastward to the state of Connecticut. There have been various proposals, none successful, to replace these ferries with a fixed link across Long Island Sound to provide an alternate route around New York City for Long Island-bound traffic.
Delmarva
The Chesapeake Bay Bridge–Tunnel (CBBT) is a 23-mile-long (37 km) fixed link crossing the mouth of the United States' Chesapeake Bay, connecting the Delmarva Peninsula with Virginia Beach, Virginia. It opened in 1964.
Overseas Highway
The Overseas Highway is a 113-mile (181.9 km) highway carrying U.S. Route 1 (US 1) through the Florida Keys to Key West.
Florida to Bahamas, Cuba, Hispaniola, and Puerto Rico
Ferry services between the US and Cuba and between Cuba and Haiti were common before 1960, but were suspended due to the United States embargo against Cuba. After the normalization of U.S.-Cuba diplomatic relations by U.S. President Barack Obama and Cuban President Raúl Castro, some American companies began plans to provide regular ferry services between Florida and Cuba. However, President Donald Trump reinstated many travel restrictions towards Cuba during his term, including prohibition of direct ferry services.
There is only one regular ferry to Havana from a foreign port: Cancún, Mexico.
A ferry travels between Mayagüez in Puerto Rico and Santo Domingo in the Dominican Republic.
There have been proposals for a direct link between Key West (US) to Havana (Cuba) by tunnel or bridge and also for a direct tunnel between Florida and The Bahamas.
Ceiba–Vieques Bridge
In 2009, there was a proposal to build a bridge between Ceiba (on the main island of Puerto Rico) and Vieques island, with an estimated cost of $600 million. The main goal was to cut travel time to and from the small island town, which is currently served by daily ferry runs.
Gulf of Paria crossing
After the independence of Trinidad and Tobago, members of the government have spoken of constructing a physical link between the islands of Trinidad and Tobago, wanting to physically unify the country. As public discussion and commentary ensued over feasibility and cost, an alternative proposal of a Gulf of Paria crossing was made, a shorter connection between Trinidad and Venezuela.
In 2017, China showed interest in the construction of a mega bridge in the Caribbean Sea to connect Tobago and Trinidad.
More recently, the Department of Civil & Environmental Engineering at the University of the West Indies, St. Augustine, has developed studies for a possible bridge linking Venezuela and Tobago, but only as a case study, without official support.
Maracaibo Crossing
In Venezuela, the General Rafael Urdaneta Bridge connects Maracaibo with much of the rest of the country, crossing the narrowest part of Lake Maracaibo in Zulia (northwestern Venezuela). There is also a plan for a second bridge over Lake Maracaibo, a mixed road-rail bridge that would link the Zulian cities of Santa Cruz de Mara and Punta de Palmas, located on either side of Lake Maracaibo, in the Miranda Municipality.
Darién Gap
A notable break in the Pan-American Highway is a section of land located in the Darién Province in Panama and along the Colombian border, called the Darién Gap. It is a stretch of rainforest. The gap has been crossed by adventurers on bicycle, motorcycle, all-terrain vehicle, and foot, dealing with jungle, swamp, insects, kidnapping, and other hazards.
Some people, groups, indigenous populations, and governments are opposed to completing the Darién portion of the highway. Reasons for opposition include protecting the rain forest, containing the spread of tropical diseases, protecting the livelihood of indigenous peoples in the area, and reducing the spread of drug trafficking and its associated violence from Colombia.
Santa Catarina Island
The Hercilio Luz Bridge is the first bridge constructed to link the Island of Santa Catarina to mainland Brazil. Two additional crossings connecting the island to the mainland exist: the Colombo Salles Bridge and the Pedro Ivo Campos Bridge.
Chacao Channel bridge
The Chacao Channel bridge is a construction project for a bridge crossing the Chacao Channel, intended to unite the Isla Grande de Chiloé with the Chilean continental territory in the Los Lagos Region. The opening of the bridge is planned for 2025. It will be the longest suspension bridge in Latin America. Earlier suggestions of a connection by tunnel were rejected due to financial problems.
Terrestrial connection project through the Río de la Plata
At the end of the 19th century, Argentine President Domingo Sarmiento presented the "Argirópolis" project, which included building railway bridges uniting both countries via Martín García Island.
Several land connection projects across the Río de la Plata have been evaluated by the governments of Argentina and Uruguay (and by Mercosur), with the objective of building a crossing for road traffic, rail traffic or both. Although most of the proposals involve the construction of bridges, others also mention sub-fluvial tunnels as a possible alternative. The project would link Colonia del Sacramento in Uruguay to Punta Lara in Argentina.
Transatlantic tunnel
A transatlantic tunnel is a theoretical tunnel that would span the Atlantic Ocean between North America and Europe, perhaps enabling mass transit. Some proposals envision technologically advanced trains reaching speeds of . Most conceptions of the tunnel envision it between the United States and the United Kingdom ‒ or more specifically between New York City and London.
Advantages compared to air travel would be increased speed and use of electricity instead of oil-based fuel.
The main barriers to constructing such a tunnel are cost, with estimates of between $88 billion and $175 billion, as well as the limits of current materials science.
The proposed routes are a direct link from the US to Europe, or from the US, crossing Canada, Greenland, Iceland and the Faroe Islands, to the United Kingdom, using an underwater vacuum tube train.
The Maritime Research Institute Netherlands (Marin), in 2019, tested a model trans-Atlantic underwater tunnel between the United States and Europe capable of supporting hyperloop.
See also
Atlantropa
Orkney tunnel
Trans-Asian Railway
List of transport megaprojects
List of straits
References
"The Three Americas Railway: An International and Intercontinental Enterprise" book written in 1881 by Hinton Rowan Helper discusses the need for an Intercontinental Highway, using railroads, starting on page 418.
"The Rotarian", January 1936. Article "Seeking Peace in a Concrete Way" starting on page 42.
"Looking far north: the Harriman Expedition to Alaska, 1899" written in 1982 by William H. Goetzmann, Kay Sloan, writes that Harriman in 1899 proposed a "Round the World Railroad" (page 128). The authors go on to write that Harriman traveled to Japan a few years later to continue this proposal.
"The Bering Strait Crossing: A 21st Century Frontier Between East and West" by James Oliver published in 2006 (256 pages) mentions extensively the Intercontinental Highway. He goes on to mention that the notion of a global highway has been around for hundreds of years including William Gilpen, who suggests it in 1846 was a proponent of a global rail highway to link to the then being proposed European and Asiatic Railway.
"Planning and Design of Bridges" by M. S. Troitsky, 1994 describes many of the bridges and tunnels proposed in the Trans Global Highway article including on page 39 this book mentions that in 1958, T.Y. Lin mentions the possible construction of a Bering Strait bridge (and obviously a needed highway network).
Alaska History: A Publication of the Alaska Historical Society, Volumes 4-6 (1989) mentions on page 6 that in 1892, a man named Strauss proposed a global highway and a man made bridge over the Bering Strait. The article goes on to mention the Lin proposal of 1958.
"Maritime Information Review" a publication of the Netherlands Maritime Information Centre, in 1991 had an extensive article, on "strait crossings" covering the then proposed Bering Strait bridge, the Gibraltar Tunnel and so on, and mentions the proposed global highway network.
Popular Mechanics Apr 1994 has an article "Alaska Siberia Bridge" and the article goes on to mention the construction of a global highway.
External links
The Schiller Institute article on the Trans Global Highway
Wikivoyage, Wikimedia's global travel guide with information on driving in each country
Exploratory engineering
Transportation planning
Proposed undersea tunnels
Proposed bridges
Transcontinental crossings | Intercontinental and transoceanic fixed links | [
"Technology"
] | 7,600 | [
"Exploratory engineering"
] |
11,996,151 | https://en.wikipedia.org/wiki/Sympathetic%20cooling | Sympathetic cooling is a process in which particles of one type cool particles of another type.
Typically, atomic ions that can be directly laser cooled are used to cool nearby ions or atoms, by way of their mutual Coulomb interaction. This technique is used to cool ions and atoms that cannot be cooled directly by laser cooling, which includes most molecular ion species, especially large organic molecules. However, sympathetic cooling is most efficient when the mass/charge ratios of the sympathetic- and laser-cooled ions are similar.
The cooling of neutral atoms in this manner was first demonstrated by Christopher Myatt et al. in 1997. Here, a technique using electric and magnetic fields was employed, in which atoms with spin in one direction were more weakly confined than those with spin in the opposite direction. The weakly confined atoms with high kinetic energy were allowed to escape more easily, lowering the total kinetic energy and resulting in a cooling of the strongly confined atoms.
Myatt et al. also showed the utility of their version of sympathetic cooling for the creation of Bose–Einstein condensates.
References
Atomic, molecular, and optical physics
Cooling technology
Thermodynamics | Sympathetic cooling | [
"Physics",
"Chemistry",
"Mathematics"
] | 229 | [
"Dynamical systems",
"Nuclear and atomic physics stubs",
" molecular",
"Thermodynamics",
"Nuclear physics",
"Atomic",
" and optical physics"
] |
11,996,218 | https://en.wikipedia.org/wiki/Data%20plane | In routing, the data plane, sometimes called the forwarding plane or user plane, defines the part of the router architecture that decides what to do with packets arriving on an inbound interface. Most commonly, it refers to a table in which the router looks up the destination address of the incoming packet and retrieves the information necessary to determine the path from the receiving element, through the internal forwarding fabric of the router, and to the proper outgoing interface(s).
In certain cases the table may specify that a packet is to be discarded. In such cases, the router may return an ICMP "destination unreachable" or other appropriate code. Some security policies, however, dictate that the router should drop the packet silently, in order that a potential attacker does not become aware that a target is being protected.
The incoming forwarding element will also decrement the time-to-live (TTL) field of the packet, and, if the new value is zero, discard the packet. While the Internet Protocol (IP) specification indicates that an Internet Control Message Protocol (ICMP) time exceeded message should be sent to the originator of the packet (i.e. the node indicated by the source address), the router may be configured to drop the packet silently (again according to security policies).
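As a rough illustration of these first fast-path checks, the following Python sketch shows the TTL decrement and the discard decision. All field names, the exact-match "FIB" dictionary and the returned action labels are hypothetical and chosen only for this example; they are not taken from any real router implementation.

def forward(packet, fib, silent_drop=False):
    """Illustrative first steps of fast-path forwarding (all names hypothetical)."""
    packet["ttl"] -= 1                       # decrement time-to-live
    if packet["ttl"] <= 0:
        # The IP/ICMP specs call for a "time exceeded" reply; security policy
        # may instead require a silent drop.
        return None if silent_drop else ("icmp-time-exceeded", packet["source"])
    entry = fib.get(packet["destination"])   # stand-in for the real FIB lookup
    if entry is None:
        return None if silent_drop else ("icmp-destination-unreachable", packet["source"])
    return ("forward", entry)                # hand the packet to the forwarding fabric

fib = {"192.0.2.7": "eth1"}                  # trivial exact-match table, for illustration only
print(forward({"ttl": 5, "source": "198.51.100.9", "destination": "192.0.2.7"}, fib))
# ('forward', 'eth1')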
Depending on the specific router implementation, the table in which the destination address is looked up could be the routing table (also known as the routing information base, RIB), or a separate forwarding information base (FIB) that is populated (i.e., loaded) by the routing control plane, but used by the forwarding plane for look-ups at much higher speeds. Before or after examining the destination, other tables may be consulted to make decisions to drop the packet based on other characteristics, such as the source address, the IP protocol identifier field, or Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) port number.
Forwarding plane functions run in the forwarding element. High-performance routers often have multiple distributed forwarding elements, so that the router increases performance with parallel processing.
The outgoing interface will encapsulate the packet in the appropriate data link protocol. Depending on the router software and its configuration, functions, usually implemented at the outgoing interface, may set various packet fields, such as the DSCP field used by differentiated services.
In general, the passage from the input interface directly to an output interface, through the fabric with minimum modification at the output interface, is called the fast path of the router. If the packet needs significant processing, such as segmentation or encryption, it may go onto a slower path, which is sometimes called the services plane of the router. Service planes can make forwarding or processing decisions based on higher-layer information, such as a Web URL contained in the packet payload.
Contrast to control plane
The data plane is the part of the software that processes the data requests. By contrast, the control plane is the part of the software that configures and shuts down the data plane.
The conceptual separation of the data plane from the control plane has been done for years. An early example is Unix, where the basic file operations are open, close for the control plane and read, write for the data plane.
The conceptual separation of the data plane from the control plane in software programming has proven useful in the packet switching field where it originated. In networking, the data plane is sometimes referred to as the forwarding plane, as it separates the concerns: the data plane is optimized for speed of processing, and for simplicity and regularity. The control plane is optimized so as to allow configuration, handling policies, handling exceptional situations, and in general facilitating and simplifying the data plane processing.
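A minimal sketch of this separation of concerns follows; the class and method names are hypothetical and the "FIB" here is a plain exact-match dictionary rather than a real longest-prefix structure. The point is only that the control plane configures state while the data plane merely reads it per packet.

class DataPlane:
    """Optimized for speed, simplicity and regularity: look up, then forward."""
    def __init__(self):
        self.fib = {}                                 # prefix -> next hop

    def lookup(self, prefix):
        return self.fib.get(prefix)                   # exact-match toy lookup

class ControlPlane:
    """Handles configuration, policy and exceptional cases; programs the data plane."""
    def __init__(self, data_plane):
        self.data_plane = data_plane

    def install_route(self, prefix, next_hop):        # configure the data plane
        self.data_plane.fib[prefix] = next_hop

    def withdraw_route(self, prefix):                 # remove data-plane state
        self.data_plane.fib.pop(prefix, None)

dp = DataPlane()
cp = ControlPlane(dp)
cp.install_route("203.0.113.0/24", "eth2")
print(dp.lookup("203.0.113.0/24"))                    # 'eth2'

In this toy model the data plane never modifies its own table; all changes flow through the control plane, mirroring the open/close versus read/write split mentioned above.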
Issues in router forwarding performance
Vendors design router products for specific markets. Design of routers intended for home use, perhaps supporting several PCs and VoIP telephony, is driven by keeping the cost as low as possible. In such a router, there is no separate forwarding fabric, and there is only one active forwarding path: into the main processor and out of the main processor.
Routers for more demanding applications accept greater cost and complexity to get higher throughput in their forwarding planes.
Several design factors affect router forwarding performance:
Data link layer processing and extracting the packet
Decoding the packet header
Looking up the destination address in the packet header
Analyzing other fields in the packet
Sending the packet through the "fabric" interconnecting the ingress and egress interfaces
Processing and data link encapsulation at the egress interface
Routers may have one or more processors. In a uniprocessor design, these performance parameters are affected not just by the processor speed, but by competition for the processor. Higher-performance routers invariably have multiple processing elements, which may be general-purpose processor chips or specialized application-specific integrated circuits (ASIC).
Very high performance products have multiple processing elements on each interface card. In such designs, the main processor does not participate in forwarding, but only in control plane and management processing.
Benchmarking performance
In the Internet Engineering Task Force, two working groups in the Operations & Maintenance Area deal with aspects of performance. The Interprovider Performance Measurement (IPPM) group focuses, as its name would suggest, on operational measurement of services. Performance measurements on single routers, or narrowly defined systems of routers, are the province of the Benchmarking Working Group (BMWG).
RFC 2544 is the key BMWG document. A classic RFC 2544 benchmark uses half the router's (i.e., the device under test (DUT)) ports for input of a defined load, and measures the time at which the outputs appear at the output ports.
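The throughput figure in such a benchmark is commonly found by searching for the highest offered load at which no frames are lost. The sketch below is a simplified illustration of that search; run_trial is a hypothetical callback that offers traffic at the given frame rate through the device under test and returns True only if every offered frame was received.

def rfc2544_throughput(run_trial, max_rate, resolution=1000):
    """Binary-search the highest frame rate (frames/s) with zero frame loss."""
    low, high = 0, max_rate
    while high - low > resolution:
        rate = (low + high) // 2
        if run_trial(rate):        # no loss at this rate: try a higher one
            low = rate
        else:                      # loss observed: back off
            high = rate
    return low                     # highest rate known to pass without loss

# Example with a simulated device that starts dropping frames above 450,000 frames/s:
print(rfc2544_throughput(lambda rate: rate <= 450_000, max_rate=1_000_000))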
Forwarding information base design
Originally, all destinations were looked up in the RIB. Perhaps the first step in speeding routers was to have a separate RIB and FIB in main memory, with the FIB, typically with fewer entries than the RIB, being organized for fast destination lookup. In contrast, the RIB was optimized for efficient updating by routing protocols.
Early uniprocessing routers usually organized the FIB as a hash table, while the RIB might be a linked list. Depending on the implementation, the FIB might have fewer entries than the RIB, or the same number.
When routers started to have separate forwarding processors, these processors usually had far less memory than the main processor, such that the forwarding processor could hold only the most frequently used routes. On the early Cisco AGS+ and 7000, for example, the forwarding processor cache could hold approximately 1000 route entries. In an enterprise, this would often work quite well, because there were fewer than 1000 server or other popular destination subnets. Such a cache, however, was far too small for general Internet routing. Different router designs behaved in different ways when a destination was not in the cache.
Cache miss issues
A cache miss condition might result in the packet being sent back to the main processor, to be looked up in a slow path that had access to the full routing table. Depending on the router design, a cache miss might cause an update to the fast hardware cache or the fast cache in main memory. In some designs, it was most efficient to invalidate the fast cache for a cache miss, send the packet that caused the cache miss through the main processor, and then repopulate the cache with a new table that included the destination that caused the miss. This approach is similar to an operating system with virtual memory, which keeps the most recently used information in physical memory.
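A toy model of such a route cache is sketched below. The slow_path_lookup callback stands in for the full routing-table lookup on the main processor, and the eviction policy is deliberately crude; none of the names correspond to any real router's code.

class RouteCache:
    def __init__(self, capacity, slow_path_lookup):
        self.capacity = capacity                   # e.g. ~1000 entries on early hardware
        self.slow_path_lookup = slow_path_lookup   # full-table lookup on the main CPU
        self.cache = {}                            # destination -> forwarding entry

    def lookup(self, destination):
        entry = self.cache.get(destination)
        if entry is not None:
            return entry                           # fast path: cache hit
        # Cache miss: punt to the slow path, then repopulate the cache,
        # evicting an arbitrary old entry if the cache is full.
        entry = self.slow_path_lookup(destination)
        if entry is not None:
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))
            self.cache[destination] = entry
        return entry

cache = RouteCache(capacity=1000, slow_path_lookup=lambda dest: "eth0")
print(cache.lookup("203.0.113.5"))   # miss: slow path consulted, result cached
print(cache.lookup("203.0.113.5"))   # hit: served from the cache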
As memory costs went down and performance needs went up, FIBs emerged that had the same number of route entries as in the RIB, but arranged for fast lookup rather than fast update. Whenever a RIB entry changed, the router changed the corresponding FIB entry.
FIB design alternatives
High-performance FIBs achieve their speed with implementation-specific combinations of specialized algorithms and hardware.
Software
Various search algorithms have been used for FIB lookup. While well-known general-purpose data structures such as hash tables were used at first, specialized algorithms optimized for IP addresses emerged. They include (see the sketch after this list):
Binary tree
Radix tree
Four-way trie
Patricia tree
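As an illustration of how such structures support IP lookups, the following sketch implements longest-prefix matching with a plain binary trie over IPv4 prefixes. It is a teaching example only and does not reflect the memory layout or node structure of any real FIB.

import ipaddress

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # one child per bit value
        self.next_hop = None           # set if a prefix ends at this node

class BinaryTrieFIB:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.next_hop = next_hop

    def lookup(self, address):
        bits = format(int(ipaddress.ip_address(address)), "032b")
        node, best = self.root, None
        for b in bits:                 # walk down, remembering the last match seen
            if node.next_hop is not None:
                best = node.next_hop
            node = node.children[int(b)]
            if node is None:
                break
        else:
            if node.next_hop is not None:
                best = node.next_hop
        return best

fib = BinaryTrieFIB()
fib.insert("10.0.0.0/8", "if1")
fib.insert("10.1.0.0/16", "if2")
print(fib.lookup("10.1.2.3"))          # "if2": the longer prefix wins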
A multicore CPU architecture is commonly used to implement high-performance networking systems. These platforms facilitate the use of a software architecture in which the high-performance packet processing is performed within a fast path environment on dedicated cores, in order to maximize system throughput. A run-to-completion model minimizes OS overhead and latency.
Hardware
Various forms of fast RAM and, eventually, basic content-addressable memory (CAM) were used to speed lookup. CAM, while useful in layer 2 switches that needed to look up a relatively small number of fixed-length MAC addresses, had limited utility with IP addresses having variable-length routing prefixes (see Classless Inter-Domain Routing). Ternary CAM (TCAM), while expensive, lends itself to variable-length prefix lookups.
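The essence of a ternary match is that each entry stores a value together with a mask, so that "don't care" bit positions are excluded from the comparison. The sketch below is a purely software analogue: real TCAM hardware compares every entry in parallel, whereas this loop simply checks entries in priority order, and the helper names are invented for this example.

import ipaddress

def tcam_entry(prefix):
    """Build a (value, mask) pair from an IPv4 prefix such as '10.1.0.0/16'."""
    net = ipaddress.ip_network(prefix)
    return int(net.network_address), int(net.netmask)

def tcam_match(address, entries):
    """Return the action of the first (highest-priority) matching entry."""
    addr = int(ipaddress.ip_address(address))
    for value, mask, action in entries:
        if addr & mask == value & mask:    # only unmasked bits are compared
            return action
    return None

entries = [tcam_entry("10.1.0.0/16") + ("if2",),
           tcam_entry("10.0.0.0/8") + ("if1",)]   # longer prefixes listed first
print(tcam_match("10.1.2.3", entries))            # "if2"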
One of the challenges of forwarder lookup design is to minimize the amount of specialized memory needed, and, increasingly, to minimize the power consumed by memory.
Distributed forwarding
A next step in speeding routers was to have a specialized forwarding processor separate from the main processor. There was still a single path, but forwarding no longer had to compete with control in a single processor. The fast routing processor typically had a small FIB, with hardware memory (e.g., static random-access memory (SRAM)) faster and more expensive than the FIB in main memory. Main memory was generally dynamic random-access memory (DRAM).
Early distributed forwarding
Next, routers began to have multiple forwarding elements, that communicated through a high-speed shared bus or through a shared memory. Cisco used shared busses until they saturated, while Juniper preferred shared memory.
Each forwarding element had its own FIB. See, for example, the Versatile Interface Processor on the Cisco 7500
Eventually, the shared resource became a bottleneck, with the limit of shared bus speed being roughly 2 million packets per second (Mpps). Crossbar fabrics broke through this bottleneck.
Shared paths become bottlenecks
As forwarding bandwidth increased, even with the elimination of cache miss overhead, the shared paths limited throughput. While a router might have 16 forwarding engines, if there was a single bus, only one packet transfer at a time was possible. There were some special cases where a forwarding engine might find that the output interface was one of the logical or physical interfaces present on the forwarder card, such that the packet flow was totally inside the forwarder. It was often easier, however, even in this special case, to send the packet out the bus and receive it from the bus.
While some designs experimented with multiple shared buses, the eventual approach was to adapt the crossbar switch model from telephone switches, in which every forwarding engine had a hardware path to every other forwarding engine. With a small number of forwarding engines, crossbar forwarding fabrics are practical and efficient for high-performance routing. There are multistage designs for crossbar systems, such as Clos networks.
See also
Control plane
Management plane
Network processor
Network search engine
References
Internet architecture | Data plane | [
"Technology"
] | 2,358 | [
"Internet architecture",
"IT infrastructure"
] |
11,996,219 | https://en.wikipedia.org/wiki/Gooseneck%20%28piping%29 | A gooseneck (or goose neck) is a 180° pipe fitting at the top of a vertical pipe that prevents entry of water. Common implementations of goosenecks are ventilator piping or ducting for bathroom and kitchen exhaust fans, ship holds, landfill methane vent pipes, or any other piping implementation exposed to the weather where water ingress would be undesired. It is so named because the word comes from the similarity of pipe fitting to the bend in a goose's neck.
Gooseneck may also refer to a style of kitchen or bathroom faucet with a long vertical pipe terminating in a 180° bend.
To avoid hydrocarbon accumulation, a thermosiphon should be installed at the low point of the gooseneck.
Gooseneck, Lead (pigtail)
Leaded goosenecks are short sections of lead pipe (1’ to 2’ long) used during the early 1900s up to World War Two in supplying water to a customer. These lead tubes could be easily bent, and allowed for a flexible connection between rigid service piping. The bent segments of pipe often took the shape of a goose's neck, and are referred to as “lead goosenecks.” Lead is no longer permitted in new water systems or new building construction.
Goosenecks (also referred to as pigtails) are in-line components of a water service (i.e. piping, valves, fittings, tubing, and accessories) running from the distribution system water main to a meter or building inlet. The valve used to connect a small-diameter service line to a water main is called a corporation stop (also called a tap, or corp stop). One gooseneck joins the corporation stop to the water service pipe work. A second gooseneck links the supply pipeline to a water meter located outside the building.
See also
Swan neck duct
Swan neck flask
Trap (plumbing)
References
Piping | Gooseneck (piping) | [
"Chemistry",
"Engineering"
] | 390 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
11,996,370 | https://en.wikipedia.org/wiki/Electronic%20shelf%20label | An electronic shelf label (ESL) system is used by retailers for displaying, typically on the front edge of retail shelving, product pricing on shelves that can automatically be updated or changed under the control of a central computer server.
ESL tag modules use electronic paper (e-paper) or liquid-crystal displays (LCDs) to show the current product price to the customer. E-paper is widely used on ESLs as it provides a crisp display and supports full graphic images (typically only black and white) while only needing power during updates, and no power to retain an image. A communication network from the central computer server allows the price display to be automatically updated whenever a product price is changed, in contrast to static paper placards. Wireless communication is needed and must support appropriate range, speed, and reliability. The means of wireless communication can be based on radio, infrared or even visible light communication. Currently, the ESL market leans heavily towards radio frequency communication.
History
Early product
An early system first offered for sale by National Cash Register (NCR) in 1997 used modulated backscatter of radio waves to provide two way wireless communications between the labels and the store’s radio network. By using modulated backscatter, the labels confirmed receipt of price changes (along with battery and display status) without the need for an active radio transmitter, thus saving cost and increasing battery life to over 5 years.
First generation: LCD and infrared communication
7-segment LCD ESL tags use a display similar to the way a calculator displays numerical values. The numerical value shown on a tag is formed by activating different combinations of the seven bars and segments. A disadvantage of using liquid-crystal display tags is the difficulty of displaying certain letters. The communication technology used for the transmitter to connect to the label is diffused infrared communication: the values on the LCD tags are set by infrared signals bouncing off surfaces. However, the speed of transmission is heavily limited by the data compression applied to each data packet from the transmitter. Another disadvantage is that LCDs need power to retain an image.
Second generation: e-paper and infrared or radio communication
Electronic paper (e-paper, electronic ink, or e-ink) describes a technology that mimics the appearance of ordinary ink on paper. An e-paper display is made up of multiple capsules in a thin film with electrodes placed above and below the capsule film; when an electric charge is applied to an individual electrode, the colored particles move to either the top or the bottom of the capsule, allowing the ESL to display certain intensities of color within the capsule. E-paper ESLs generally use an infrared or radio communication network to communicate from the central transmitter to the tags. Typically, low-frequency radio transmission is used for the tags, but radio transmission has the disadvantage of low bandwidth, which makes it difficult to show complex segmented images.
Third generation: geo-location and product finder
The current generation of ESLs uses e-paper display technology and radio communications, integrated with existing retail technologies such as electronic article surveillance, digital signage, and people counters. Once retailers upload a floor plan of the sales area into the label-management software, consumers can be automatically tracked (in real time) through the network of people-counting devices, or via their personal Bluetooth devices. This allows the software to determine the consumer's position within the store and subject the consumer to targeted, customized marketing initiatives, such as discounts, or individual pricing.
General principles
A typical ESL utilizes ultra-low-power CPU and wireless communication solutions to meet the low-power and low-cost constraints required to displace the high number of static shelf labels in an average retail store.
ESL consists of three components:
Label management software: Responsible for configuring the system, configuring the properties of the labels themselves, and storing the database of prices. The software mainly covers network management, file systems, and the transmission of data. It also processes and packs product information and the configured prices into data packets, which are then sent to the communication station via a wireless network. It is typically a centralized software system responsible for building and maintaining the network for data communication between the label management software and the terminal displays.
Communication station: Responsible for stable and reliable transmission over a long distance from the label management software to the label. The wireless communication must support the required range, speed, battery life, and reliability. The means of wireless communication can be based on radio, infrared or even visible light communication. Currently, the ESL market leans heavily towards radio-frequency-based ESL solutions.
Terminal display: Functions as a receiver from the communication station and displays the price configured in the label management software. The terminal display label then acts on the instructions given in the data packets.
Hardware design
ESL hardware design generally includes the circuit design of the communication station and of the terminal display label. A typical chipset used to provide the basic functional requirements of an ESL is the TI MSP432. Communication between the communication station and the terminal display label is usually controlled by an RF module; a common choice is the CC2500, with a communication distance of up to 30 meters. The terminal display can use electronic ink, electronic paper or a liquid-crystal display.
Software design
An ESL software API included in the Bluetooth 5.4 specification permits a 7-bit group identifier combined with an 8-bit unique ESL ID, allowing a total of 32,640 ESLs to be allocated in one Bluetooth ESL network. Multiple Bluetooth ESL networks would be necessary in the same location to cover typical grocery store ESL applications.
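For illustration, the arithmetic behind that address space can be reproduced as follows. The packing helper below is hypothetical (it is not part of any Bluetooth SDK), and the sketch assumes that one ESL ID value per group is reserved for broadcast, which is what yields the 32,640 figure.

GROUP_ID_BITS = 7        # 2**7 = 128 groups per ESL network
ESL_ID_BITS = 8          # 2**8 = 256 IDs per group, one assumed reserved for broadcast

def esl_address(group_id, esl_id):
    """Pack a group ID and a per-group ESL ID into one address (illustrative only)."""
    assert 0 <= group_id < 2 ** GROUP_ID_BITS
    assert 0 <= esl_id < 2 ** ESL_ID_BITS
    return (group_id << ESL_ID_BITS) | esl_id

usable_per_group = 2 ** ESL_ID_BITS - 1            # 255 tags per group
total_addressable = 2 ** GROUP_ID_BITS * usable_per_group
print(total_addressable)                            # 32640, matching the figure above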
Usage
Electronic shelf labels are primarily used by retailers who sell their products in stores and are usually attached to the front edge of the retail shelves and display the price of the product. Additional information such as stock levels, expiration dates, or product information may also be displayed as well, depending on the type of ESL.
Benefits
Automated ESL systems reduce pricing-management labor costs, improve pricing accuracy and allow dynamic pricing. Dynamic pricing is the concept in which retailers can vary pricing to match demand, online competition, inventory levels and the shelf life of items, and to create promotions. Some advantages of using electronic shelf labels are:
Accurate pricing: Prices on shelves are updated on time and on demand to match the price files in the label management software, via a link between the in-store point-of-sale processor and the label management software. This increases pricing accuracy and avoids branding issues revolving around price integrity. As a result, it decreases the revenue lost from undervalued items, since consumers generally alert staff to overpriced items but not the reverse.
Cost savings: As opposed to traditional pricing labels, whenever prices are changed and updated, employees no longer need to print out labels and manually replace them in the shelf tags. With ESLs, there is no need to visit each shelf and make changes, as all changes are made in the label management software and pushed to the labels digitally. This saves retailers the materials and labor of producing and replacing printed tags, and offers the ability to update prices dynamically on demand.
Product finder: Retailers are able to integrate each ESL tag with an external application to offer wayfinding capability for their products. A customer can look up a product either through a mobile application developed by the retailer or through external digital signage, and be directed to the product's location.
In store heat map: Some ESL providers integrate with Bluetooth Low Energy-enabled devices to track the movement of consumers and how long they remain at a particular location. This is done by displaying an image of the floor plan of the store on the label management software with a heat map showing locations of hot spots based on the detection of responses from high traffic areas through Bluetooth.
Regulate stock levels: Inventory management is crucial for retailers. Inventory information may be displayed on an ESL through a connection with the point-of-sale processor. The ESL can also display an expected date for when the stock will be refilled on the shelf, as well as a quick response code that allows consumers to easily find the item online, or other relevant product information.
Disadvantages
While there are benefits to ESL, it is not without its flaws. Some disadvantages of using electronic shelf labels are:
Error propagation: As ESL are controlled by a label management software that regulates all ESL within a store or throughout an entire chain of company, any erroneous or undervalued price entered into the label management software will be reflected through the entire retail chain.
Inability to quantify return on investment: Because of the large number of ESLs a retailer needs for its stores, the initial investment cost for a store can be high. This, along with the inability to quantify whether the consumer shopping experience was improved by the introduction of ESLs, makes it difficult to quantify the return on investment of ESLs.
Market trends
The global ESL market is forecast to grow at more than 16% CAGR through 2027. Users range from grocery stores and hardware stores to sellers of sports equipment, furniture, consumer appliances, and electronics and gadgets. The forecast growth is primarily due to reductions in pricing over time.
With the rapid increase in the adoption of Internet of things technology in the retail industry, over 79% of retailers in North America alone expect to invest in ESLs and people counters. 72% of these North American retailers have plans to reinvent supply chain management through adoption of ESLs in their stores, thereby accelerating the market growth of ESLs. Further studies show that Europe currently dominates the ESL market in terms of size, with over one-third of the total market share in 2017, due to the strong presence of domestic and multinational retailers in the region and of suppliers such as Diebold Nixdorf AG and Displaydata. However, the market in APAC is expected to grow at the highest CAGR within the forecast period. The ESL market in the APAC region is segmented into territories with significant market potential, including China, Japan, Australia, Singapore, South Korea, and the rest of the region; growth there is driven largely by the expansion of large-scale retailers. A study led by ABI Research stated the global ESL market could reach US$2 billion by 2019 (the market actually reached about $631 million in 2019), while a further study by Fortune Business Insights expects the ESL market to reach $2.85 billion by 2027.
References
Retail store elements
Electronic paper technology
Display technology | Electronic shelf label | [
"Technology",
"Engineering"
] | 2,191 | [
"Display technology",
"Electronic engineering",
"Components",
"Retail store elements"
] |
11,996,462 | https://en.wikipedia.org/wiki/Kashiwazaki-Kariwa%20Nuclear%20Power%20Plant | The Kashiwazaki-Kariwa Nuclear Power Plant is a large, modern nuclear power plant housing the world's first advanced boiling water reactor (ABWR). The campus spans the towns of Kashiwazaki and Kariwa in Niigata Prefecture, Japan, on the coast of the Sea of Japan, from which it draws cooling water. The plant is owned and operated by Tokyo Electric Power Company (TEPCO), and it is the largest nuclear generating station in the world by net electrical power rating.
On 16 July 2007, the Chūetsu offshore earthquake took place, with its epicenter located only 19 km from the plant. The earthquake registered Mw 6.6, ranking it among the strongest earthquakes to occur in the immediate vicinity of a nuclear power plant. This shook the plant beyond its design basis and initiated an extended shutdown for inspection, which indicated that greater earthquake-proofing was needed before operation could be resumed. The plant was completely shut down for 21 months following the earthquake. Unit 7 was restarted after seismic upgrades on 19 May 2009, followed later by units 1, 5, and 6. (Units 2, 3, and 4 were not restarted by the time of the March 2011 earthquake.)
The four restarted and operating units at the plant were not affected by the 11 March 2011 earthquake, but thereupon all units were shut down to carry out safety improvements. TEPCO regained permission to restart units 6 and 7 from the Nuclear Regulation Authority (NRA) in 2017, but throughout 2023, all units remained idle. In December 2023, the NRA finally approved the reloading of fuel at the plant, citing improvements in the safety management system. As of 2024, TEPCO is seeking permission from local authorities to restart the plant again.
Reactors
There are seven reactor units spread along the campus coastline. Numbering starts with Unit 1, the southernmost unit, and runs through Unit 4; there is then a large green space between Units 4 and 7, after which the numbering continues with Units 7, 6 and 5.
The power installation costs for units at this site well reflect the general trend in costs of nuclear plants. Capital costs increased through the 1980s but have become cheaper in modern times. The last two units were the first Advanced Boiling Water Reactors (ABWRs) ever built.
Performance
Operating a single large plant comprising this many reactors has several economic advantages. One such benefit is the limited impact of single-reactor refueling outages during the replacement cycle; one dormant reactor makes minimal impact on the plant's net power production. A smooth transition was seen in the power production history of the plant up through the time the last two units were built. Currently, however, there are no active reactors at the Kashiwazaki-Kariwa plant. TEPCO has outlined plans to restart Reactor 6 and Reactor 7 and is awaiting approval from the government and citizens before the reactors are permitted to restart.
Partial shutdowns
In February 1991, Unit 2 was automatically shut down following a sudden drop in oil pressure inside the steam turbine.
On 18 July 1997, radioactive steam leaked from a gauge within Unit 7 of the Kashiwazaki-Kariwa plant. In May, a burst tube had delayed trial runs at the plant, and earlier in July smoke had been found coming from plant machinery.
In January 1998, Unit 1 was shut down after increasing radiation levels in the steam driving the turbine triggered alarms. The levels were reportedly 270 times the expected operating level.
Reactors at the plant were shut down one by one following the 2002 discovery that TEPCO had deliberately falsified data surrounding safety inspections. The first reactor was taken offline 9 September 2002, and the final reactor was taken offline 27 January 2003. The newest units, the more inherently safe ABWRs, were taken back online the quickest and suffered the smallest effect. Units 1, 2, and 3, on the other hand, generated no electricity during the fiscal year of 2003.
Complete shutdowns
Units 1-4 were completely shut down in 2008. Only Unit 1 was temporarily restarted in 2010–2011. Unit 5 was temporarily restarted between 2010 and 2012 after a shut down in 2007.
Following the Fukushima disaster in 2011, Unit 1 was shut down again in 2012 along with units 5–7. As of May 2022, the plant remains idle.
Fuel
All reactors continue to use low-enriched uranium as the nuclear fuel; however, TEPCO has drafted plans to use MOX fuel in some of the reactors with the permission of the Japanese Atomic Energy Commission (JAEC). A public referendum in Kariwa village in 2001 voted 53% against use of the new fuel. After the 2002 TEPCO data fabrication scandals, the company's president at the time announced that plans to use MOX fuel at the KK plant would be suspended indefinitely.
Earthquakes
Earthquake resistant design features
Sand at the sites was removed and the reactor was built on firm ground. Adjacent soil was backfilled. Basements of the reactor buildings extend several levels down (maximum of 42 m below grade). These underground elements stabilize the reactor buildings, making them less likely to suffer sway due to resonance vibrations during an earthquake. As with other Japanese power plants, reactors at the plant were built according to earthquake-resistance standards, which are regulated by law and the JAEC.
In 2006 safety standards for earthquake resistance in Japan's nuclear plants were modified and tightened. After the 2007 earthquake suspicions arose that another fault line may be closer to the plant than originally thought, possibly running straight through the site.
2007 Chūetsu offshore earthquake
The KK plant was 19 kilometers away from the epicenter of the magnitude 6.6 2007 Chūetsu offshore earthquake, which took place 10:13 a.m., 16 July 2007. Peak ground acceleration of 6.8 m/s2 (0.69 g) was recorded in Unit 1 in the east–west direction, above the design specification for safe shutdown of 4.5 m/s2, and well above the rapid restart specification for key equipment in the plant of 2.73 m/s2. Units 5 and 6 also recorded shaking over this limit. Shaking of 20.58 m/s2 was recorded in the turbine building of Unit 3.
Those nearby saw black smoke which was later confirmed to be an electric transformer that had caught fire at Unit 3. The fire was put out by noon on the day of the quake, about 2 hours after it started. The 3-story transformer building was extensively charred.
Reactor units 3, 4, and 7 all automatically powered down safely in response to the quake. Unit 2 was in startup mode and not online. Units 1, 5, and 6 were already shut down for inspection at the time. TEPCO was ready to restart some of the units as of the next day, but the trade ministry ordered the plant to remain idle until additional safety checks could be completed. On Wednesday, 18 July, the mayor of Kashiwazaki ordered operations at the plant to be halted until its safety could be confirmed. The Nikkei reported that government safety checks could delay the restart for over a year, without stating the source of the information. For comparison, in 2005, a reactor at the Onagawa Nuclear Power Plant was closed for five months following an earthquake.
IAEA inspections
The International Atomic Energy Agency (IAEA) offered to inspect the plant, which was initially declined. The governor of Niigata prefecture then sent a petition to Shinzo Abe. On Sunday, 22 July 2007, the Nuclear and Industrial Safety Agency (NISA) announced that it would allow inspectors from the United Nations to review the damage.
A team from the IAEA carried out a four-day inspection, as investigations by Japan's Nuclear and Industrial Safety Agency (NISA), Nuclear Safety Commission (NSC) and the Tokyo Electric Power Company (TEPCO) continued. The team of the IAEA confirmed that the plant had "shut down safely" and that "damage appears less than expected." On 19 August, the IAEA reported that, for safety-related and nuclear components, "no visible significant damage has been found" although "nonsafety related structures, systems and components were affected by significant damage".
The official report issued by the IAEA stated that the plant "behaved in a safe manner" after a 4-day inspection. Other observations were:
"Safety related structures, systems and components of the plant seem to be in a general condition, much better than might be expected for such a strong earthquake, and there is no visible significant damage"
Conservatisms introduced in the construction of the plant compensated for the magnitude of the earthquake being so much greater than planned for.
Recommendations included:
A re-evaluation of the seismic safety.
Detailed geophysical investigations
External inspections of the plant were planned to be completed by the end of July 2008. The schedule was confirmed on 10 July 2008 by the site superintendent, Akio Takahashi. On 15 July, Akira Amari said his ministry was also continuing its own tests. An IAEA workshop in June 2008 recognized that the earthquake exceeded the "seismic input" used in the design of the plant, and that regulations played a critical role in keeping the plant safe. However, TEPCO determined that significant upgrades were required to cope with the improved understanding of the seismic environment and possible shaking effects at the plant site.
The IAEA sent a team for a follow-up visit in January 2008. They concluded that much high-quality inspection work had been undertaken and noted the likely improvements to nuclear seismic design worldwide that may result from this process. An additional visit from an IAEA team of 10 experts occurred in December 2008, noting that the "unexpectedly large ground motions" were now well understood and could be protected against, and further confirming the safe performance of the plant during the quake.
Radioactivity releases
Initially, it was thought that some water (estimated to be about 1.5 L) from the spent fuel pool leaked into the Sea of Japan as a result of the quake. Later, more detailed reports confirmed a number of releases, though most of them were far less active than common natural radiation sources. According to the NISA, this was the first time a release of radioactive material happened as a result of an earthquake.
0.6 litres of slightly radioactive water leaked from the third floor of the Unit 6 reactor building, which contained 280 becquerels of radioactivity. (For reference, a household smoke detector typically contains of radioactivity, and a living adult human typically has around 8000 Bq of naturally occurring radioactivity inside their body).
0.9 litres of slightly radioactive water leaked from the inner third floor of the Unit 6 reactor building, containing 16,000 Bq of radioactivity.
From unit 6, 1.3 cubic meters of water from the spent fuel pool leaked through a drainage pipe and ultimately into the Sea of Japan. The water contained 80 Bq/L, totaling 90,000 Bq in the release. For comparison, an Onsen located in Misasa, Tottori, Japan uses water with a large concentration of radon, which gives it a radioactivity of 9300 Bq/L. The leaked water from the plant did not pose a health risk even before being diluted. Towels were used to mop up the water.
On Wednesday, 18 July 2007, a government inspector found radioactive iodine leaking from an exhaust pipe at Unit 7; the leak began between Tuesday and Wednesday and was confirmed to have stopped by Thursday night. The amount of iodine released was estimated at 12 million Bq, and the total amount of particulate radioactivity released into the air was about 402,000,000 Bq. This was said to have been one ten-millionth of the legal limit. It is estimated that this caused an unintentional dose of 0.0002 nanosieverts (nSv) per person, distributed among around 10 million people. The limit for dose to the public from the operations of a nuclear plant in Japan in one year is 1100 nSv, and, for comparison, natural background radiation worldwide for humans is on average around 2,400,000 nSv/year (2.4 mSv/year). Regarding the cause, Yasuhisa Shiozaki said "This is an error of not implementing the manual," because the vent should have been closed.
Other problems
About 400 drums containing low-level nuclear waste stored at the plant were knocked over by the aftershocks, 40 losing their lids. Company officials reported on 17 July that traces of the radioactive materials cobalt-60, iodine, and chromium-51 had been released into the atmosphere, presumably from the containers losing their lids.
Criticisms of the company's response to the event included the time it took the company to report events and the certainty with which it was able to locate the source of various problems. TEPCO's president commented that the site was a "mess" after visiting it following the quake. While the reported amount of leaked radioactivity remained far below what poses a danger to the public, details changed multiple times in the few days after the quake and attracted significant media attention. After the quake, TEPCO was reportedly investigating 50 separate cases of "malfunctioning and trouble," a number that was later revised to 63. Even the radioactivity sensors around the site encountered trouble: readings from these devices are normally available online, giving the public a direct measure of ambient radioactivity around the site, but because of damage sustained during the earthquake they stopped reporting to the website. The company published an apology on that page, and data from the devices covering the offline period was released later, showing no artificial abnormalities (the readings naturally fluctuate depending on whether it is raining or snowing and a host of other factors).
TEPCO's president maintained that fears of a leak of radioactive material were unfounded (since the amount leaked into the ocean was a billionth of the legal limit), but many international reporters expressed distrust of the company that has a history of cover-up controversies. The IAEA's Mohamed ElBaradei encouraged full transparency throughout the investigation of the accident so that lessons learned could be applied to nuclear plants elsewhere.
Impact
News of the earthquake, combined with the fact that prices of replacement power sources (such as oil and gas) were at record highs, caused TEPCO's stock to plummet 7.5%, the largest drop in seven years, which amounted to around US$4.4 billion lost in stock capitalization. This made the event even more costly to the company than the 2002 data falsification scandal. Additionally, TEPCO warned that the plant closure could cause a power shortage during the summer months. Trade minister Akira Amari requested that business users cut electricity use, and in August TEPCO was forced to reduce electricity supplies for industrial users, the first time it had resorted to such measures in 17 years.
Reports of the leak caused thousands of cancellations at resorts and hotels along the Sea of Japan coast, even as far as Murakami, Niigata (140 km northeast) and Sado Island. Inn owners have said that rumors have been more damaging than direct effects of the earthquake.
The shutdown forced TEPCO to run natural gas plants in place of this plant, not only increasing Japan's demand for the fuel and driving up its price internationally, but also increasing carbon dioxide output to the extent that Japan would have difficulty meeting its Kyoto Protocol commitments.
Restart
After 16 months of comprehensive component-based assessment and upgrades on all seven reactors, this phase of the post-earthquake response was almost complete, with Reactor 7 fully upgraded to cope with the seismic environment. On 8 November 2008, fuel loading in Unit 7 started, preparatory to a period of system safety tests on that reactor. On 19 February 2009 TEPCO applied to the local government to restart Unit 7, having obtained approval from the national government and regulators. Local government agreement for the restart was granted in May, and electrical grid power was supplied from Unit 7 at 20% power on 19 May. The reactor was raised to 100% power on 5 June 2009 as part of a series of restart tests.
Unit 6 restarted on 26 August 2009 and reconnected to the grid on 31 August.
Unit 1 restarted on 31 May 2010 after loading with fuel (along with Unit 5) earlier in the year, and was generating grid power by 6 June 2010.
Unit 5 recommenced grid generation on 26 November 2010, in the same week that fuel loading for Unit 3 started.
Units 2, 3, and 4 were not restarted.
2011 Tōhoku earthquake
The reactors were shut down indefinitely following the 2011 Tōhoku earthquake and tsunami. Plans to restart units 6 and 7 were delayed after problems developed with the intruder detection system.
Facility improvements after Fukushima I nuclear accidents
On 21 April 2011, after the Fukushima Daiichi nuclear disaster, TEPCO announced a plan to build up the seawall to a height of 15 m (49.2 ft) above sea level and spanning more than 800 m (2,624 ft) in length for units 1–4, and more than 500 m (1,640 ft) for units 5–7 by June 2013. The height of a potential tsunami was assumed to be 3.3 m. Also, plans were made to rebuild the radioactive overflow storage pool to be completed by September 2012.
2011–2012: Survey on tsunamis in the past
On 10 November 2011, TEPCO announced a survey for signs of past tsunamis in the area. Soil samples were to be drilled from sediment layers dating from the year 1600 back to 7,000 years ago at nine locations around the plant, on the coast of central Japan. The survey, the first TEPCO had ever conducted on the subject, began on 15 November 2011 and was planned to be completed in April 2012; it was carried out to examine the possibility of tsunamis higher than had been expected when the plant was designed and built.
On 26 April 2012, TEPCO said that it would recalculate the risks of earthquakes and tsunamis. This was done after reports, as published by four prefectures around the nuclear Plant, re-estimated the risks of potential earthquakes in the region:
Tottori Prefecture: a 220 kilometer long fault might trigger an 8.15 magnitude earthquake
Shimane Prefecture: 8.01 magnitude
Ishikawa Prefecture: 7.99 magnitude
The newly calculated earthquakes are almost three times stronger than any assumed in TEPCO's safety assessments for the plant, which were based on a magnitude 7.85 quake caused by a 131-kilometer-long fault near Sado Island in Niigata and a 3.3-meter-high tsunami. To withstand the latter, an embankment was under construction to resist tsunami waves up to 15 meters high. The recalculation could have consequences for the stress tests and safety assessments for the plant.
Under the revision of the safety standards planned for July 2013, some faults under the reactors could be considered geologically active. This was reported by the Japanese news agency Kyodo News on 23 January 2013, based on papers and other material published by TEPCO. Under the new regulations, geologic faults are considered active if they have moved within the last 400,000 years, instead of the formerly accepted, less stringent standard of 120,000 years.
Two faults, named "Alpha" and "Beta," are present under Reactors 1 and 2. Other faults are situated under Reactor 3 and Reactor 5, as well as underneath the building of Reactor 4. Under the new regulations, the Beta fault could be classified as active because it displaced a ground layer containing volcanic ash around 240,000 years ago. The outcome of the study might trigger a second survey by the newly established Japanese regulator, the NRA. In January 2013, studies were conducted or planned on geological faults around six Japanese reactor sites; the Kashiwazaki-Kariwa plant would be number seven.
Current status
In 2017, TEPCO contemplated a restart of the plant between 2019 and 2021.
Kashiwazaki-Kariwa is one of the 44 nuclear power plants in Japan that have been rendered inactive in the years following the Fukushima Daiichi accident. The Japanese government inspected the plant in October 2020, and by January 2021 TEPCO had completed its improvements on Unit 7. The company outlined plans to restart the reactor as early as the end of the Japanese 2022 fiscal year (31 March 2022). However, the Nuclear Regulation Authority released a report in April 2021 indicating that there were serious security infractions and enacted an order that postponed the restart indefinitely.
Following the April 2021 NRA report, TEPCO admitted that its intruder detection system had been left broken in order to reduce costs, and confirmed that an employee had used a colleague's ID card to gain unauthorized access to the plant's central control room in September 2020. In response, TEPCO plans to implement anti-terrorism measures, install an intrusion detection system, and hire an additional 30 guards to protect nuclear material at the facility. The power company intends to invest ¥20 billion (US$165.4 million) in these security measures from 31 March 2023 to 31 March 2028.
According to a report from TEPCO, the NRA began an Additional Inspection (Phase II) to monitor the new security measures at the plant. In April 2022, it was confirmed that the security flaws revealed in the NRA's report were limited to Kashiwazaki-Kariwa and not indicative of a widespread issue in the company's culture. TEPCO is planning to move nearly 40% of its nuclear division employees to Niigata Prefecture in preparation for restarting Reactor 7 and to begin rebuilding trust with local residents, but the future of Kashiwazaki-Kariwa remains uncertain. As of 26 May 2022, the local government had yet to approve TEPCO's restart plans. According to a 2021 survey by the Niigata Nippo, just over half of Niigata Prefecture residents oppose a nuclear restart.
In October 2022, Japanese Prime Minister Kishida Fumio unveiled a new strategy for Japan's nuclear power plants regarding new construction projects and license extensions. Included in this strategy is a plan to restart units at the Kashiwazaki-Kariwa Nuclear Power Plant by the summer months of 2023. However, the feasibility of this timeline has been questioned by journalists, given the number of safety issues that have come to light at the plant in the last few years. Most of these issues relate to security lapses, such as a worker who, having forgotten his ID, borrowed a colleague's card to enter critical areas. A government inspection of Unit 7 in October 2020 concluded that the majority of construction had been finished by January the following year. TEPCO maintained that it was doing everything in its power to meet NRA guidelines.
In late 2023, the national regulator lifted the operational ban on the plant, allowing it to begin applying for permits from local governments to reopen.
On Monday, 8 April 2024, Japan's Nuclear Regulation Authority approved plans submitted by TEPCO to fuel reactor No. 7. TEPCO announced it would begin fueling reactor 7 around 4 p.m. on 14 April, a process which typically takes about two weeks. Operation of reactor 7 would still require completion of additional inspections and the approval of the Niigata Prefecture Governor. It has been reported that Reactor 7 is scheduled to restart operation in October 2024 "under a base-case scenario".
See also
Katsuhiko Ishibashi
Pacific Ring of Fire
List of nuclear power plants in Japan
References
External links
Niigata Chuetsu Offshore earthquake
Niigata Chuetsu Offshore Earthquake impacts Japan Atomic Industrial Forum
Earthquake impacts Japan Nuclear Technology Institute
View on earthquake events Japan's Nuclear Safety Commission
Chairman's statement
Kashiwazaki-Kariwa Earthquake Japan's Citizens' Nuclear Information Center Report
Kashiwazaki nuclear plant report from the scene Greenpeace
Insight: Where not to build nuclear power stations New Scientist
Japan’s Quake-Prone Atomic Plant Prompts Wider Worry The New York Times
Entire plant
Tokyo Electric Company Official Site for Kashiwazaki-Kariwa 東京電力・柏崎刈羽原子力発電所 (in Japanese)
This shows output power, click on icons at top left to see three different radiation monitors.
Nuclear TEPCO-Power Plants (in English)
List of events at the plant (in English)
1980s establishments in Japan
Nuclear power stations using advanced boiling water reactors
Earthquake engineering
Buildings and structures in Niigata Prefecture
Nuclear power stations in Japan
Tokyo Electric Power Company
Kashiwazaki, Niigata
Kariwa, Niigata | Kashiwazaki-Kariwa Nuclear Power Plant | [
"Engineering"
] | 5,066 | [
"Structural engineering",
"Earthquake engineering",
"Civil engineering"
] |
5,623,359 | https://en.wikipedia.org/wiki/Modified%20Modular%20Jack | The Modified Modular Jack (MMJ) is a small form-factor serial port connector developed by Digital Equipment Corporation (DEC). It uses a modified version of the 6P6C modular connector with the latch displaced off-center so standard modular connectors found on Ethernet cables or phone jacks cannot accidentally be plugged in. MMJ connections are used on Digital minicomputers, such as the PDP-11, VAX and Alpha systems, and to connect terminals, printers, and serial console servers.
The MMJ connector has six conductors. As defined by DEC, the six pins are Tx and Rx for the data transmission, their return paths, and DSR and DTR for handshaking. The transmit and receive signals are differential: each signal is the voltage difference between the line and its associated return, as opposed to a voltage on a single conductor relative to a common reference. The electrical signaling is defined by the EIA RS-422 standard. The system can interoperate with RS-232 signaling by shorting the lower voltage sides of each signal to the RS-232 signal ground line. In this case, RS-232 limits apply to data rate and cable length.
When connecting two DTE devices such as a computer and a printer, the Digital BC16E crossover cable is used.
The DEC VT320 and TI/HP-928 were terminals offering an MMJ option. Using RS-422 signaling over 6-conductor RJ12, cables could be up to 1000 meters.
Thrustmaster uses the MMJ connector to connect its line of rudder pedals and racing pedals either to USB, via a proprietary Thrustmaster adapter, or directly to a joystick or racing wheel.
The Lego Mindstorms NXT and EV3 sets used connectors similar to the MMJ for motors and sensors, in order to prevent the connection of standard telephone cords. The connector used by LEGO is also taller than those used with data terminals.
CEA-909 uses the MMJ connector for control signals to a smart antenna, which can either physically or electrically rotate for maximum signal strength.
See also
Serial console
References
External links
Wiring schemes for Modified Modular Jack cables and converters
Linux Documentation Project: Connectors and Adapter Types
Networking hardware | Modified Modular Jack | [
"Engineering"
] | 460 | [
"Computer networks engineering",
"Networking hardware"
] |
5,623,373 | https://en.wikipedia.org/wiki/Optimal%20radix%20choice | In mathematics and computer science, optimal radix choice is the problem of choosing the base, or radix, that is best suited for representing numbers. Various proposals have been made to quantify the relative costs of using different radices in representing numbers, especially in computer systems. One formula is the number of digits needed to express it in that base, multiplied by the base (the number of possible values each digit could have). This expression also arises in questions regarding organizational structure, networking, and other fields.
Definition
The cost of representing a number N in a given base b can be defined as

E(b, N) = b × (⌊log_b N⌋ + 1),

where ⌊ ⌋ denotes the floor function and log_b the base-b logarithm.
If both b and N are positive integers, then the quantity E(b, N) is equal to the number of digits needed to express the number N in base b, multiplied by base b. This quantity thus measures the cost of storing or processing the number N in base b if the cost of each "digit" is proportional to b. A base with a lower average E(b, N) is therefore, in some senses, more efficient than a base with a higher average value.
For example, 100 in decimal has three digits, so its cost of representation is 10×3 = 30, while its binary representation has seven digits (1100100), so the analogous calculation gives 2×7 = 14. Likewise, in base 3 its representation has five digits (10201), for a value of 3×5 = 15, and in base 36 (2S) one finds 36×2 = 72.
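This cost function is straightforward to compute directly. The following Python sketch (an illustration added here, not part of the original article) counts base-b digits by repeated integer division and reproduces the values above:

```python
def radix_cost(base: int, n: int) -> int:
    """E(base, n): the number of base-b digits of n, multiplied by the base."""
    if base < 2 or n < 1:
        raise ValueError("requires base >= 2 and n >= 1")
    digits = 1
    while n >= base:
        n //= base   # strip one base-b digit per iteration
        digits += 1
    return base * digits

# Reproduces the worked example for N = 100:
for b in (10, 2, 3, 36):
    print(b, radix_cost(b, 100))   # -> 30, 14, 15, 72
```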
If the number is imagined to be represented by a combination lock or a tally counter, in which each wheel has b digit faces, numbered from 0 to b − 1, and the device has ⌊log_b N⌋ + 1 wheels, then E(b, N) is the total number of digit faces needed to inclusively represent any integer from 0 to N.
Asymptotic behavior
The quantity E(b, N) for large N can be approximated as follows:

E(b, N) ≈ b log_b N = (b / ln b) × ln N.

The asymptotically best value is obtained for base 3, since b / ln b attains a minimum for b = 3 in the positive integers:

3 / ln 3 ≈ 2.731, compared with 2 / ln 2 ≈ 2.885 and 4 / ln 4 ≈ 2.885.

For base 10, we have:

10 / ln 10 ≈ 4.343, roughly 59% higher than the value for base 3.
Comparing different bases
The values of E(b, N) for bases b1 and b2 may be compared for a large value of N:

E(b1, N) / E(b2, N) ≈ (b1 log_{b1} N) / (b2 log_{b2} N) = (b1 ln b2) / (b2 ln b1).

Choosing e for b2 gives

E(b, N) / E(e, N) ≈ (b ln e) / (e ln b) = b / (e ln b).
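The asymptotic ratio b / (e ln b) is easy to evaluate numerically. The short sketch below (an illustration added here, with the bases chosen arbitrarily) shows that base 3 comes closest to base e, with bases 2 and 4 tied for second place:

```python
from math import e, log

for b in (2, 3, 4, 5, 10):
    print(b, round(b / (e * log(b)), 4))
# -> 2: 1.0615, 3: 1.0046, 4: 1.0615, 5: 1.1429, 10: 1.5977
```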
The average E(b, N) for various bases, computed up to several arbitrary limits (chosen to avoid proximity to powers of 2 through 12 and of e), is given in the table below. Also shown are the values relative to that of base e. E(1, N) of any number N is just N, making unary the most economical for the first few integers, but this no longer holds as N climbs to infinity.
{| class="wikitable sortable"
|-
! Base b
! Avg. E(b,N)
N = 1 to 6
! Avg. E(b,N)
N = 1 to 43
! Avg. E(b,N)
N = 1 to 182
! Avg. E(b,N)
N = 1 to 5329
!
! Relative size of E(b)/E(e)
|- align=right
| 1
| 3.5
| 22.0
| 91.5
| 2,665.0
| || align="left"|—
|- align=right
| 2
| 4.7
| 9.3
| 13.3
| 22.9
|
|- align=right
| e
| 4.5
| 9.0
| 12.9
| 22.1
|
|- align=right
| 3
| 5.0
| 9.5
| 13.1
| 22.2
|
|- align=right
| 4
| 6.0
| 10.3
| 14.2
| 23.9
|
|- align=right
| 5
| 6.7
| 11.7
| 15.8
| 26.3
|
|- align=right
| 6
| 7.0
| 12.4
| 16.7
| 28.3
|
|- align=right
| 7
| 7.0
| 13.0
| 18.9
| 31.3
|
|- align=right
| 8
| 8.0
| 14.7
| 20.9
| 33.0
|
|- align=right
| 9
| 9.0
| 16.3
| 22.6
| 34.6
|
|- align=right
| 10
| 10.0
| 17.9
| 24.1
| 37.9
|
|- align=right
| 12
| 12.0
| 20.9
| 25.8
| 43.8
|
|- align=right
| 15
| 15.0
| 25.1
| 28.8
| 49.8
|
|- align=right
| 16
| 16.0
| 26.4
| 30.7
| 50.9
|
|- align=right
| 20
| 20.0
| 31.2
| 37.9
| 58.4
|
|- align=right
| 30
| 30.0
| 39.8
| 55.2
| 84.8
|
|- align=right
| 40
| 40.0
| 43.7
| 71.4
| 107.7
|
|- align=right
| 60
| 60.0
| 60.0
| 100.5
| 138.8
|
|}
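The average-cost columns in the table can be reproduced with a few lines of code. The sketch below (an illustration added here, not from the source) averages E(b, N) over N = 1 to 6 for a few integer bases, treating unary as the special case E(1, N) = N:

```python
def radix_cost(base, n):
    # Same definition as in the earlier sketch; unary is the special case E(1, N) = N.
    if base == 1:
        return n
    digits = 1
    while n >= base:
        n //= base
        digits += 1
    return base * digits

def average_cost(base, n_max):
    return sum(radix_cost(base, n) for n in range(1, n_max + 1)) / n_max

for b in (1, 2, 3, 10):
    print(b, round(average_cost(b, 6), 1))   # -> 3.5, 4.7, 5.0, 10.0
```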
Ternary tree efficiency
One result of the relative economy of base 3 is that ternary search trees offer an efficient strategy for retrieving elements of a database. A similar analysis suggests that the optimum design of a large telephone menu system to minimise the number of menu choices that the average customer must listen to (i.e. the product of the number of choices per menu and the number of menu levels) is to have three choices per menu.
In a d-ary heap, a priority queue data structure based on d-ary trees, the worst-case number of comparisons per operation in a heap containing n elements is d log_d n (up to lower-order terms), the same formula used above. It has been suggested that choosing d = 3 or d = 4 may offer optimal performance in practice.
Brian Hayes suggests that E(b, N) may be the appropriate measure for the complexity of an interactive voice response menu: in a tree-structured phone menu with N outcomes and b choices per step, the time to traverse the menu is proportional to the product of b (the time to present the choices at each step) with log_b N (the number of choices that need to be made to determine the outcome). From this analysis, the optimal number of choices per step in such a menu is three.
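This conclusion can be checked numerically. The sketch below (an illustration added here; the menu size of 1,000 outcomes is an arbitrary assumption) evaluates b × log_b N for small integer b and shows the minimum at b = 3:

```python
from math import log

N = 1000  # hypothetical number of menu outcomes
for b in range(2, 7):
    print(b, round(b * log(N, b), 2))
# -> 2: 19.93, 3: 18.86, 4: 19.93, 5: 21.46, 6: 23.13
```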
Computer hardware efficiencies
The 1950 reference High-Speed Computing Devices describes a particular situation using contemporary technology. Each digit of a number would be stored as the state of a ring counter composed of several triodes. Whether vacuum tubes or thyratrons, the triodes were the most expensive part of a counter. For small radices r less than about 7, a single digit required r triodes. (Larger radices required 2r triodes arranged as r flip-flops, as in ENIAC's decimal counters.)
So the number of triodes in a numerical register with n digits was rn. In order to represent numbers up to 10^6, the following numbers of tubes were needed:
{| class="wikitable"
|-
! Radix r
! Tubes N = rn
|-
| 2
| 39.20
|-
| 3
| 38.24
|-
| 4
| 39.20
|-
| 5
| 42.90
|-
| 10
| 60.00
|}
The authors conclude,
See also
Ternary computer
List of numeral systems
References
Further reading
S.L. Hurst, "Multiple-Valued Logic-Its Status and its Future", IEEE trans. computers, Vol. C-33, No 12, pp. 1160–1179, DEC 1984.
J. T. Butler, "Multiple-Valued Logic in VLSI Design, ” IEEE Computer Society Press Technology Series, 1991.
C.M. Allen, D.D. Givone “The Allen-Givone Implementation Oriented Algebra", in Computer Science and Multiple-Valued Logic: Theory and Applications, D.C. Rine, second edition, D.C. Rine, ed., The Elsevier North-Holland, New York, N.Y., 1984. pp. 268–288.
G. Abraham, "Multiple-Valued Negative Resistance Integrated Circuits", in Computer Science and Multiple-Valued Logic: Theory and Applications, D.C. Rine, second edition, D.C. Rine, ed., The Elsevier North-Holland, New York, N.Y., 1984. pp. 394–446.
Positional numeral systems
Computer arithmetic
Ternary computers | Optimal radix choice | [
"Mathematics",
"Technology",
"Engineering"
] | 1,753 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer arithmetic",
"Numeral systems",
"Information theory",
"Arithmetic",
"Computer science",
"Positional numeral systems"
] |
5,624,034 | https://en.wikipedia.org/wiki/Chemical%20compound%20microarray | A chemical compound microarray is a collection of organic chemical compounds spotted on a solid surface, such as glass and plastic. This microarray format is very similar to DNA microarray, protein microarray and antibody microarray. In chemical genetics research, they are routinely used for searching proteins that bind with specific chemical compounds, and in general drug discovery research, they provide a multiplex way to search potential drugs for therapeutic targets.
There are three different forms of chemical compound microarrays, based on the fabrication method. The first form covalently immobilizes the organic compounds on the solid surface with diverse linking techniques; this platform is usually called the Small Molecule Microarray and was invented and advanced by Dr. Stuart Schreiber and colleagues. The second form spots and dries organic compounds on the solid surface without immobilization; this platform is commercially known as Micro Arrayed Compound Screening (μARCS) and was developed by scientists at Abbott Laboratories. The last form spots organic compounds in a homogeneous solution without immobilization or drying effects; this platform was developed by Dr. Dhaval Gosalia and Dr. Scott Diamond and later commercialized as DiscoveryDot technology by Reaction Biology Corporation.
Polymer Microarrays
Polymer microarrays have been developed to allow screening for new polymeric materials to direct different tissue lineages. Research has also been directed towards studying the surface chemistry of these arrays to determine which surface chemistries control cell adhesion, although concerns have been raised as to the influence of the substrate on measurements and the questionable statistical interpretation of results.
The lack of control in the production of many of these polymer arrays suggests that any practical application of these technologies will be limited. This is particularly true for the in situ polymerisation of acrylate monomers in minute volumes.
References
Uttamchandani, M. et al. (2005) "Small molecule microarrays, recent advances and applications". Curr Opin Chem Biol. 9, 4–13 .
Walsh, D.P. and Chang, Y.T. (2004) "Recent Advances in Small Molecule Microarrays, Applications and Technology". Comb Chem High Throughput Screen. 7, 557–564 .
Hoever, M. and Zbinden, P. (2004) "The evolution of microarrayed compound screening. Drug Discov". Today 9, 358–365.
Gosalia, DN and Diamond, SL. (2003) "Printing Chemical libraries on microarrays for fluid phase nanoliter reactions". Proc. Natl. Acad. Sci. USA, 100, 8721–8726 .
Ma, H. et al. (2005) "Nanoliter Homogenous Ultra High Throughput Screening Microarray for Lead Discoveries and IC50 Profiling". Assay Drug Dev. Technol. 3, 177–187 .
Horiuchi, K.Y. et al. (2005) "Microarrays for the functional analysis of the chemical-kinase interactome", accepted, J Biomol Screen. 11, 48–56 .
Ma, H. and Horiuchi, K.Y. (2006) "Chemical Microarray: a new tool for drug screening and discovery", Drug Discovery Today, 11, 661–668 .
Nanotechnology
Microarrays | Chemical compound microarray | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 710 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Materials science",
"Bioinformatics",
"Molecular biology techniques",
"Nanotechnology"
] |
5,624,327 | https://en.wikipedia.org/wiki/Electric%20eye | An electric eye is a photodetector used for detecting obstruction of a light beam. An example is the door safety system used on garage door openers that use a light transmitter and receiver at the bottom of the door to prevent closing if there is any obstruction in the way that breaks the light beam. The device does not provide an image; only the presence of light is detectable. Visible light may be used, but infrared radiation conceals the operation of the device and typically is used in modern systems. Originally, systems used lamps powered by direct current or the power line alternating current frequency, but modern photodetector systems use an infrared light-emitting diode modulated at a few kilohertz, which allows the detector to reject stray light and improves the range, sensitivity, and security of the device.
Examples
Highway vehicle counter
In the 1930s, an electric eye vehicle counter was introduced in the US using two IR lamps set apart so that only cars and pedestrians would be counted.
First compact commercial unit
A compact type of electric eye was offered in 1931 that was enclosed in a small steel case and much easier to install compared to older models.
Automatic wrapping machines
In the 1930s, an electric eye apparatus was developed to help a wrapping machine wrap 72 boxes a minute.
Automatic door opener
In 1931, General Electric tested the first automatic door openers now popular in hospitals. They called their electric eye the Magic Eye.
Business alarm system
In 1931, an electric eye that used invisible UV wavelength was offered to businesses in need of a 24-hour alarm system. A system of this type is demonstrated in the first scene of the 1932 film Jewel Robbery.
Automatic cameras
In 1936, Dr. Albert Einstein and Dr. Gustav Bucky received a patent for a design which applied the electric eye to a camera. The camera was capable of automatically determining the proper aperture and exposure.
See also
Photoelectric sensor
Machine vision
References
External links
"Latest Way Beams Of Light Are Put To Work" Popular Mechanics, April 1931
Optical devices
Garage door openers | Electric eye | [
"Materials_science",
"Technology",
"Engineering"
] | 409 | [
"Home automation",
"Glass engineering and science",
"Garage door openers",
"Optical devices"
] |
5,624,367 | https://en.wikipedia.org/wiki/Crop%20Trust | The Crop Trust, officially known as the Global Crop Diversity Trust, is an international nonprofit organization with a secretariat in Bonn, Germany. Its mission is to conserve and make available the world's crop diversity for food security.
Established in 2004, the Crop Trust is the only organization whose sole mission is to safeguard the world’s crop diversity for future food security. Through an endowment fund for crop diversity, the Crop Trust provides financial support for key international and national genebanks that hold collections of diversity for food crops available under the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA). The organization also provides tools and support for the efficient management of genebanks, facilitates coordination between conserving institutions, and organizes final backup of crop seeds in the Svalbard Global Seed Vault.
Since its establishment, the Crop Trust has raised more than USD 300 million for the Crop Diversity Endowment Fund and supports conservation work in over 80 countries.
Mission
Crop diversity is the biological foundation of agriculture, and is the raw material plant breeders and farmers use to adapt crop varieties to pests and diseases. In the future, this crop diversity will play a central role in helping agriculture adjust to climate change and adapt to water and energy constraints.
History
In 1996, the UN Food and Agriculture Organization (FAO) recognized the need for global coordination for the conservation of the world’s crop diversity. At a conference organized by the FAO, 150 countries launched a Global Plan of Action to coordinate efforts at halting the loss of the world’s agrobiodiversity. The Global Plan of Action formed a major pillar of what would become the International Treaty on Plant Genetic Resources for Food and Agriculture, known as the Plant Treaty. The Plant Treaty brings the diversity of 64 food and forage crops into a multilateral system where the genetic material is protected and accessible to all who needed it. To protect the collections that housed that genetic material, however, a stable system of funding was needed.
To partly address this need, the Crop Trust was established in October 2004. Its mission was to help build a global system of ex situ crop diversity conservation, funded through an endowment for crop diversity. The Plant Treaty recognizes the endowment fund as an essential element of its funding strategy and confirms the autonomy of the Crop Trust as a scientific organization in raising and disbursing funds. Geoff Hawtin was appointed the Interim Executive Director of the new organization, housed at the FAO headquarters in Rome, Italy.
The Crop Trust began its work gathering contributions for the endowment fund from various foundations, corporations, and governments that had ratified the Plant Treaty. In 2007, the Crop Trust signed its first long-term grant agreement with the International Rice Research Institute (IRRI) in Los Baños, Philippines.
In 2005, Cary Fowler was appointed the first permanent Crop Trust Executive Director. Under Fowler’s leadership, the Crop Trust initiated the Global System Project and joined the three-party management agreement for the Svalbard Global Seed Vault, which opened in 2008 as a partnership between the Crop Trust, the Ministry of Agriculture and Food of Norway, and the Nordic Genetic Resource Center (NordGen). In 2011, the Crop Trust launched the Crop Wild Relatives Project, a 10-year project to collect and conserve crop wild relatives, a project funded by the Government of Norway.
In 2012, the Crop Trust appointed Marie Haga as the new Crop Trust Executive Director. A new target for the Crop Trust endowment fund was set at USD850 million to finance a global system for the conservation of crop diversity, centered around key international, regional and national collections, as well as the Svalbard Global Seed Vault.
In 2013, the Crop Trust opened its new headquarters in Bonn, Germany, through a hosting agreement with the Government of Germany. Shortly thereafter, the Crop Trust launched the five-year CGIAR Genebank CRP, taking on financial responsibility and oversight for the 11 CGIAR genebanks. In 2017, the CGIAR Genebank Platform replaced the Genebank CRP program and the Food Forever Initiative was launched to raise awareness of efforts to achieve Target 2.5 of the United Nations Sustainable Development Goals.
In 2018, the Crop Trust signed the first long-term funding agreement with the IRRI genebank, pledging to fully fund essential operations in perpetuity. The Crop Trust celebrated its 15th anniversary in 2019 and crossed the USD 300 million threshold in the endowment fund.
In 2020, Stefan Schmitz was appointed Executive Director. With support from the German government, the Crop Trust launched the Seeds4Resilience project in 2020. The project will upgrade five national genebanks in sub-Saharan Africa. In February 2020, the Svalbard Global Seed Vault reached one million seed sample accessions, representing more than 6,000 species.
Management
The Crop Trust is headquartered in Bonn, Germany, after relocating there from Rome, Italy.
The executive board is chaired by Catherine Bertini. The Crop Trust's Donors' Council is chaired by Dr. Taek-Ryoun Kwon of South Korea.
Main donors include: Australia, Canada, Germany, Ireland, Norway, Sweden, Switzerland, United Kingdom, United States, the Bill and Melinda Gates Foundation, and the Grains Research and Development Corporation (Australia). A number of developing countries have also provided support, including Egypt, Ethiopia and India. Further contributions have been received from private corporations, foundations, industry associations, and from private individuals.
Leadership
Executive Director
2004-2005 – Geoffrey Hawtin (Interim)
2005-2012 – Cary Fowler
2013-2019 – Marie Haga
2020–present – Stefan Schmitz
Executive Board Chair
2007–2012 – Margaret Catley-Carlson
2013–2017 – Walter Fust
2018–2019 – Amb. Timothy Fischer
2019–2021 – Sir Peter Crane
2022–present – Catherine Bertini
Grants
Since its establishment, the Crop Trust has funded work in over 80 countries, and made its first grant for long-term conservation of a collection in late 2006. By 2011, the Crop Trust had established in-perpetuity support (i.e. grants funded through the Crop Trust's endowment) for collections of 15 crops: rice, cassava, wheat, barley, faba bean, pearl millet, maize, forages, banana, aroids, grass pea, sorghum, yam and lentil.
In 2007, the Crop Trust began a global initiative to rescue threatened, high-priority collections of crop diversity in developing countries and to support information systems to improve their conservation and availability. These efforts included providing support to developing countries and international agricultural research centers to deposit shipments of seed samples in the Svalbard Global Seed Vault for safety duplication purposes.
In 2010, the Crop Trust launched a global 10-year program to find, gather, catalog and save the wild relatives of 22 major food crops. These wild species contain untapped diversity to help address future challenges to agriculture.
Svalbard Global Seed Vault
The Crop Trust joined the Government of Norway and the Nordic Gene Bank in the 2008 establishment of the Svalbard Global Seed Vault, a "fail-safe" facility located at Svalbard, Norway. The Seed Vault provides long-term storage of duplicates of seeds conserved in genebanks around the world. This provides security of the world’s food supply against the loss of seeds in genebanks due to mismanagement, accident, equipment failures, funding cuts, and natural disasters. It is designed to hold the seeds of some 4.5 million samples of different varieties of agricultural crops. Primarily through the endowment fund, the Crop Trust provides most of the annual operating costs for the facility. With support from donors, the Crop Trust also assists selected genebanks in packaging and shipping seeds to the Seed Vault.
References
External links
Crop Trust website
Vimeo.com: "Securing Our Food Forever" — Crop Trust video.
Agricultural organisations based in Germany
Biodiversity
Seed associations
Sustainable agriculture
International charities
Non-profit organisations based in North Rhine-Westphalia | Crop Trust | [
"Biology"
] | 1,620 | [
"Biodiversity"
] |
5,624,421 | https://en.wikipedia.org/wiki/Phosphoric%20monoester%20hydrolases | Phosphoric monoester hydrolases (or phosphomonoesterases) are enzymes that catalyse the hydrolysis of O-P bonds by nucleophilic attack of phosphorus by cysteine residues or coordinated metal ions.
They are categorized with the EC number 3.1.3.
Examples include:
acid phosphatase
alkaline phosphatase
fructose-bisphosphatase
glucose-6-phosphatase
phosphofructokinase-2
phosphoprotein phosphatase
calcineurin
6-phytase
See also
phosphodiesterase
phosphatase
External links
Metabolism | Phosphoric monoester hydrolases | [
"Chemistry",
"Biology"
] | 145 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
5,624,725 | https://en.wikipedia.org/wiki/Wine%20cellar | A wine cellar is a storage room for wine in bottles or barrels, or more rarely in carboys, amphorae, or plastic containers. In an active wine cellar, important factors such as temperature and humidity are maintained by a climate control system. In contrast, passive wine cellars are not climate-controlled, and are usually built underground to reduce temperature swings. An aboveground wine cellar is often called a wine room, while a small wine cellar (fewer than 500 bottles) is sometimes termed a wine closet. The household department responsible for the storage, care and service of wine in a great mediaeval house was termed the buttery. Large wine cellars date back over 3,700 years.
Purpose
Wine cellars protect alcoholic beverages from potentially harmful external influences, providing darkness, constant temperature, and constant humidity. Wine is a natural, perishable food product issued from fermentation of fruit. Left exposed to heat, light, vibration or fluctuations in temperature and humidity, all types of wine can spoil. When properly stored, wines not only maintain their quality but many actually improve in aroma, flavor, and complexity as they mature.
Depending on their level of sugar and alcohol, wines are more or less sensitive to temperature variances; wine with higher alcohol and/or sugar content will be less sensitive to temperature variance.
Conditions
Wine can be stored satisfactorily between as long as any variations are gradual. A temperature of , much like that found in the caves used to store wine in France, is ideal for both short-term storage and long-term aging of wine. Wine generally matures differently and more slowly at a lower temperature than it does at a higher temperature. When the temperature swings are significant, 14 degrees or more, it will cause the wine to breathe through the cork which significantly speeds up the aging process. Between , wines will age normally.
Active versus passive
Wine cellars can be either active or passively cooled. Active wine cellars are highly insulated and need to be properly constructed. They require specialized wine cellar conditioning and cooling systems to maintain the desired temperature and humidity. In a very dry climate, it may be necessary to actively humidify the air, but in most areas this is not necessary. Passive wine cellars must be located in naturally cool and damp areas with minor seasonal and diurnal temperature variations, for example, a basement in a temperate climate. Passive cellars may be less predictable, but cost nothing to operate and are not affected by power outages.
Humidity
Some wine experts debate the importance of humidity for proper wine storage. In the Wine Spectator, writer Matt Kramer noted a French study which claimed that the relative humidity within a bottle is maintained 100% regardless of the closure used or the orientation of the bottle. However, Alexis Lichine says that low humidity can be a problem because it may cause organic corks to dry prematurely. of gravel covering the floor periodically sprinkled with a little water was recommended to retain the desired humidity.
See also
Storage of wine
Aging of wine
CellarTracker (database)
References
Cellar
Rooms
pt:Adega | Wine cellar | [
"Engineering"
] | 625 | [
"Rooms",
"Architecture"
] |
5,624,982 | https://en.wikipedia.org/wiki/%CE%92-Glucosidase | β-Glucosidase (; systematic name β-D-glucoside glucohydrolase) is an enzyme that catalyses the following reaction:
Hydrolysis of terminal, non-reducing β-D-glucosyl residues with release of β-D-glucose
Structure
β-Glucosidase is composed of two polypeptide chains. Each chain is made up of 438 amino acids and constitute a subunit of the enzyme. Each of these subunits contains an active site. The active site has three potential components: the pocket, the cleft, and the tunnel. The pocket structure is beneficial for recognition of monosaccharide like glucose. The cleft allows for binding of sugars to form polysaccharides. The tunnel allows for the enzyme to attach to polysaccharide and then release product while still attached to the sugar.
Function
The function of the enzyme is to perform hydrolysis of various glycosides and oligosaccharides. The most significant oligosaccharide that β-glucosidase reacts with is cellulose, a polymer composed of β-1,4-linked glucosyl residues. β-Glucosidases, cellulases (endoglucanases) and cellobiosidases (exoglucanases) are required by a number of organisms to consume it. These enzymes are powerful tools for degradation of plant cell walls by pathogens and other organisms consuming plant biomass, and β-glucosidases are essential for many organisms to digest a variety of nutrients. The enzyme carries out a double-displacement reaction, meaning that it is changed to an intermediate form when the first substrate enters the active site, releases the product before another substrate binds, and reverts to its original form by the end of the reaction. In the case of β-glucosidase, two carboxylate residues at the active site are involved in the hydrolysis of glucosides, cellobiose, cellotriose and cellotetraose. The purpose of the reaction is to remove residues from the disaccharide cellobiose to produce glucose during the hydrolysis of biomass. Depending on the substrate, the end product will be one or two glucose molecules.
Humans
In humans, tissues within the liver, small intestine, spleen and kidney contain a cytosolic β-glucosidase (CBG) that hydrolyses various β-d-glycosides. This human enzyme shows significant activity towards many xenobiotics commonly found in the human diet including glycosides of phytoestrogens, flavonoids, simple phenolics and cyanogens and human CBG hydrolyses a broad range of dietary glucosides, possibly playing a critical role in xenobiotic metabolism.
Lysosomal β-glucosidase (glucocerebrosidase), found in human lysosomes, plays an important role in the degradation of glycosphingolipids, breaking down glucosylceramide into ceramide and glucose. Gaucher's disease is characterised by an accumulation of glucosylceramide in bodily tissues due to a lack of, or impaired activity of, lysosomal β-glucosidase, leading to weakened bones, liver damage, and enlargement of the spleen with impairment of its normal function.
Beyond β-glucosidases expressed in human tissues, bacterial β-glucosidases are also found in human saliva and inside the intestine produced by the bacterial microbiota of the mouth and gastro-intestinal tract, with various implications to normal human health, drug and hormone metabolism, and involvement in certain diseases.
Bonnethead Shark
Bonnethead sharks are found in tropical and subtropical water, living in estuaries with muddy or sandy bottoms rich with seagrass. They were once thought to be solely carnivores. It was known that bonnetheads consumed seagrass, but this was viewed as incidental and dismissed as not benefiting the shark. However, recent studies of the shark's hindgut have found a high activity level of β-glucosidase. During digestion, the bonnethead's acidic stomach weakens the cell walls of the seagrass, allowing β-glucosidase to enter the cells and digest the cellulose. The activity level is on par with that of the monkeyface eel, a herbivore, meaning that the bonnethead is able to perform the same digestive activity as a herbivore. Therefore, the bonnethead shark is now classified as an omnivore.
Christmas Island Red Crab
The Christmas Island red crab is a species of crab found only on Christmas Island in the Indian Ocean. Land crabs such as these possess multiple varieties of β-glucosidase, as they are terrestrial herbivores. In the case of the Christmas Island red crab, β-glucosidase not only produces glucose but also removes cellobiose. This is important because cellobiose is an inhibitor of a number of enzymes, including endo-β-1,4-glucanase and cellobiohydrolase. β-Glucosidase is also capable of hydrolysing small oligomers produced by other enzymes without the assistance of an intermediate enzyme. This in turn makes β-glucosidase a very efficient enzyme, not only in the digestive tract of the Christmas Island red crab but in those of other crustaceans as well.
Synonyms
Synonyms, derivatives, and related enzymes include gentiobiase, cellobiase, emulsin, elaterase, aryl-β-glucosidase, β-D-glucosidase, β-glucoside glucohydrolase, arbutinase, amygdalinase, p-nitrophenyl β-glucosidase, primeverosidase, amygdalase, linamarase, salicilinase, and β-1,6-glucosidase.
See also
Amygdalin β-glucosidase
Cellulase, a suite of enzymes produced chiefly by fungi, bacteria, and protozoans that catalyze cellulolysis (i.e. the hydrolysis of cellulose)
Glucosylceramidase, a related enzyme
Prunasin β-glucosidase
Vicianin β-glucosidase
References
External links
GO-database listing 'GO:0016162 cellulose 1,4-beta-cellobiosidase activity'
Risk Assessment Summary, CEPA 1999. Trichoderma reesei P59G
Carbohydrate metabolism
EC 3.2.1
Enzymes of known structure | Β-Glucosidase | [
"Chemistry"
] | 1,445 | [
"Carbohydrate metabolism",
"Carbohydrate chemistry",
"Metabolism"
] |
5,624,984 | https://en.wikipedia.org/wiki/Ibn%20al-Banna%27%20al-Marrakushi | Ibn al‐Bannāʾ al‐Marrākushī (), full name: Abu'l-Abbas Ahmad ibn Muhammad ibn Uthman al-Azdi al-Marrakushi () (29 December 1256 – 31 July 1321), was an Arab Muslim polymath who was active as a mathematician, astronomer, Islamic scholar, Sufi and astrologer.
Biography
Ahmad ibn Muhammad ibn Uthman was born in the Qa'at Ibn Nahid Quarter of Marrakesh on 29 or 30 December 1256. His nisba al-Marrakushi refers to his birth and death in his hometown of Marrakesh, and al-Azdi indicates that he came from the large Arab tribe of Azd. His father was a mason, hence the kunya Ibn al-Banna' (lit. "the son of the mason").
Ibn al-Banna' studied a variety of subjects under at least 17 masters: Quran under the qaris Muhammad ibn al-Bashir and Shaykh al-Ahdab; ʻilm al-ḥadīth under the qadi al-Jama'a (chief judge) of Fez, Abu al-Hajjaj Yusuf ibn Ahmad ibn Hakam al-Tujibi, as well as Abu Yusuf Ya'qub ibn Abd al-Rahman al-Jazuli and Abu abd allah ibn. He studied fiqh and usul al-fiqh under Abu Imran Musa ibn Abi Ali az-Zanati al-Marrakushi and Abu al-Hasan Muhammad ibn Abd al-Rahman al-Maghili, who taught him al-Juwayni's Kitab al-Irshad. He also studied Arabic grammar under Abu Ishaq Ibrahim ibn Abd as-Salam as-Sanhaji and Muhammad ibn Ali ibn Yahya as-Sharif al-Marrakushi, who also taught him Euclid's Elements; ʿarūḍ and ʿilm al-farāʾiḍ under Abu Bakr Muhammad ibn Idris ibn Malik al-Quda'i al-Qallusi; arithmetic under Muhammad ibn Ali, known as Ibn Ḥajala; and astronomy under Abu 'Abdallah Muhammad ibn Makhluf as-Sijilmassi. He also studied medicine under al-Mirrīkh.
He is known to have attached himself to the founder of the Hazmiriyya zawiya and sufi saint of Aghmat, Abu Zayd Abd al-Rahman al-Hazmiri, who guided his arithmetic skills toward divinational predictions.
Ibn al-Banna' taught classes in Marrakesh and some of his students were: Abd al-Aziz ibn Ali al-Hawari al-Misrati (d.1344), Abd al-Rahman ibn Sulayman al-Laja'i (d. 1369) and Muhammad ibn Ali ibn Ibrahim al-Abli (d. 1356).
He died at Marrakesh on 31 July 1321.
Works
Ibn al-Banna' wrote over 100 works encompassing such varied topics as Astronomy, Astrology, the division of inheritances, Linguistics, Logic, Mathematics, Meteorology, Rhetoric, Tafsir, Usūl al-Dīn and Usul al-Fiqh. One of his works, called Talkhīṣ ʿamal al-ḥisāb (Summary of arithmetical operations), includes topics such as fractions and sums of squares and cubes. Another, called Tanbīh al-Albāb, covers topics related to:
calculations regarding the drop in irrigation canal levels,
arithmetical explanation of the Muslim laws of inheritance
determination of the hour of the Asr prayer,
explanation of frauds linked to instruments of measurement,
enumeration of delayed prayers which have to be said in a precise order, and
calculation of legal tax in the case of a delayed payment
He also wrote an introduction to Euclid's Elements.
He also wrote Rafʿ al-Ḥijāb 'an Wujuh A'mal al-Hisab (Lifting the Veil from Faces of the Workings of Calculations) which covered topics such as computing square roots of a number and the theory of simple continued fractions.
See also
Ibn Ghazi al-Miknasi
References
Sources
1256 births
1321 deaths
13th-century astronomers
13th-century mathematicians
13th-century Moroccan people
13th-century Moroccan writers
14th-century astronomers
14th-century mathematicians
14th-century Moroccan writers
Medieval Moroccan astronomers
Medieval Moroccan mathematicians
Algebraists
Medieval geometers
People from Marrakesh
Mathematicians who worked on Islamic inheritance
Scientists who worked on qibla determination | Ibn al-Banna' al-Marrakushi | [
"Mathematics"
] | 953 | [
"Algebra",
"Algebraists"
] |
5,625,092 | https://en.wikipedia.org/wiki/Arsenical%20bronze | Arsenical bronze is an alloy in which arsenic, as opposed to or in addition to tin or other constituent metals, is combined with copper to make bronze. The use of arsenic with copper, either as the secondary constituent or with another component such as tin, results in a stronger final product and better casting behavior.
Copper ore is often naturally contaminated with arsenic; hence, the term "arsenical bronze" when used in archaeology is typically only applied to alloys with an arsenic content higher than 1% by weight, in order to distinguish it from potentially accidental additions of arsenic.
Origins in pre-history
Although arsenical bronze occurs in the archaeological record across the globe, the earliest artifacts so far known, dating from the 5th millennium BC, have been found on the Iranian plateau. Arsenic is present in a number of copper-containing ores (Lechtman & Klein, 1999), and therefore some contamination of the copper with arsenic would be unavoidable. However, it is still not entirely clear to what extent arsenic was deliberately added to copper and to what extent its use arose simply from its presence in copper ores that were then treated by smelting to produce the metal.
Reconstructing a possible sequence of events in prehistory involves considering the structure of copper ore deposits, which are mostly sulfides. The surface minerals would contain some native copper and oxidized minerals, but much of the copper and other minerals would have been washed further into the ore body, forming a secondary enrichment zone. This includes many minerals such as tennantite, with their arsenic, copper and iron. Thus, the surface deposits would have been used first; with some work, deeper sulfidic ores would have been uncovered and worked, and it would have been discovered that the material from this level had better properties.
Using these various ores, there are four possible methods that may have been used to produce arsenical bronze alloys. These are:
The direct addition of arsenic-bearing metals or ores such as realgar to molten copper. This method, although possible, lacks evidence.
The reduction of antimony-bearing copper arsenates or fahlore to produce an alloy high in arsenic and antimony. This is entirely practicable.
The reduction of roasted copper sulfarsenides such as tennantite and enargite. This method would result in the production of toxic fumes of arsenous oxide and the loss of much of the arsenic present in the ores.
The co-smelting of oxidic and sulfidic ores such as malachite and arsenopyrite together. This method has been demonstrated to work well, with little in the way of dangerous fumes given off during it, because of the reactions between the minerals.
Greater sophistication of metal workers is suggested by Thornton et al. They suggest that iron arsenide was deliberately produced as part of the copper-smelting process, to be traded and used to make arsenical bronze elsewhere by addition to molten copper.
Artifacts made of arsenical bronze cover the complete spectrum of metal objects, from axes to ornaments. The method of manufacture involved heating the metal in crucibles, and casting it into moulds made of stone or clay. After solidifying, it would be polished or, in the case of axes and other tools, work-hardened by beating the working edge with a hammer, thinning out the metal and increasing its strength. Finished objects could also be engraved or decorated as appropriate.
Advantages of arsenical bronze
While arsenic was most likely originally mixed with copper as a result of the ores already containing it, its use probably continued for a number of reasons. First, it acts as a deoxidizer, reacting with oxygen in the hot metal to form arsenous oxides which vaporize from the liquid metal. If a great deal of oxygen is dissolved in liquid copper, when the metal cools the copper oxide separates out at grain boundaries, and greatly reduces the ductility of the resulting object. However, its use can lead to a greater risk of porous castings, owing to the solution of hydrogen in the molten metal and its subsequent loss as a bubble (although any bubbles could be forge-welded and still leave the mass of the metal ready to be work-hardened).
Second, the alloy is capable of greater work-hardening than is the case with pure copper, so that it performs better when used for cutting or chopping. An increase in work-hardening capability arises with an increasing percentage of arsenic, and the bronze can be work-hardened over a wide range of temperatures without fear of embrittlement. Its improved properties over pure copper can be seen with as little as 0.5 to 2 wt% As, giving a 10-to-30% improvement in hardness and tensile strength.
Third, in the correct percentages, it can contribute a silvery sheen to the article being manufactured. There is evidence of arsenical bronze daggers from the Caucasus and other artifacts from different locations having an arsenic-rich surface layer which may well have been produced deliberately by ancient craftsmen, and Mexican bells were made of copper with sufficient arsenic to color them silver.
Arsenical bronze, sites and civilisations
Arsenical bronze was used by many societies and cultures across the globe. The Iranian plateau, followed by the adjacent Mesopotamian area, together covering modern Iran, Iraq and Syria, has the earliest arsenical bronze metallurgy in the world, as previously mentioned. It was in use there from the 4th millennium BC through to the mid 2nd millennium BC, a period of nearly 2,000 years. There was a great deal of variation in the arsenic content of artifacts throughout this period, making it impossible to say exactly how much arsenic was added deliberately and how much came about by accident.
These matters were clarified considerably by 2016. The two relevant ancient sites in eastern Turkey (Malatya Province) are Norşuntepe and Değirmentepe, where arsenical bronze production was taking place before 4000 BC. Hearths or natural draft furnaces, slag, ore, and pigment had been recovered throughout these sites. This was in the context of architectural complexes typical of southern Mesopotamian architecture.
According to Boscher (2016), at Değirmentepe, arsenical copper objects were clearly manufactured around 4200 BC, yet the technological aspects of this production remain unclear. This is because the primary smelting of ore seems to have been undertaken elsewhere, perhaps already at the sites of mining.
In contrast, the related Norşuntepe site provides a better context of production, and demonstrates that some form of arsenic alloying was indeed taking place by the 4th millennium BC. Since the slag identified at Norşuntepe contains no arsenic, this means that arsenic in some form was added separately.
Societies using arsenical bronze include the Akkadians, those of Ur, and the Amorites, all based around the Tigris and Euphrates rivers and centres of the trade networks which spread arsenical bronze across the Middle East during the Bronze Age.
The Chalcolithic-period Nahal Mishmar hoard in the Judean Desert west of the Dead Sea contains a number of arsenical bronze (4–12% arsenic) and perhaps arsenical copper artifacts made using the lost-wax process, the earliest known use of this complex technique. "Carbon-14 dating of the reed mat in which the objects were wrapped suggests that it dates to at least 3500 B.C. It was in this period that the use of copper became widespread throughout the Levant, attesting to considerable technological developments that parallel major social advances in the region."
In ancient Egypt, the use of arsenical bronze/copper is confirmed from the second phase of the Naqada culture onward, and it was then used widely until the beginning of the New Kingdom, i.e. in the Egyptian Chalcolithic and the Early and Middle Bronze Age, and within the same eras also in ancient Nubia. In the Old Kingdom, the era of the builders of the largest pyramids, arsenical copper was used for the production of tools at Giza. Arsenical copper was also processed in the workshop uncovered at Giza's Heit el-Ghurab, the "lost city of the pyramid builders" from the reign of Menkaure. Egyptian and Nubian objects made of arsenical copper have been identified in collections in Brussels and in Leipzig. In the Middle Kingdom, the use of tin bronze increased in ancient Egypt and Nubia. One of the largest studies of such material was the research on the Egyptian and Nubian axe blades in the British Museum, which provided comparable results. A similar situation can be observed in Middle Bronze Age Kerma.
Sulfide deposits frequently are a mix of different metal sulfides, such as copper, zinc, silver, arsenic, mercury, iron and other metals. (Sphalerite (ZnS with more or less iron), for example, is not uncommon in copper sulfide deposits, and the metal smelted would be brass, which is both harder and more durable than copper.) The metals could theoretically be separated out, but the alloys resulting were typically much stronger than the metals individually.
The use of arsenical bronze spread along trade routes into northwestern China, to the Gansu–Qinghai region, with the Siba, Qijia and Tianshanbeilu cultures. However it is still unclear as to whether arsenical bronze artifacts were imported or made locally, although the latter is suspected as being more likely due to possible local exploitation of mineral resources. On the other hand, the artifacts show typological connections to the Eurasian steppe.
The Eneolithic period in Northern Italy, with the Remedello and Rinaldone cultures in 2800 to 2200 BC, saw the use of arsenical bronze. Indeed, it seems that arsenical bronze was the most common alloy in use in the Mediterranean basin at this time.
In South America, arsenical bronze was the predominant alloy in Ecuador and north and central Peru, because of the rich arsenic bearing ores present there. By contrast, the south and central Andes, southern Peru, Bolivia and parts of Argentina, were rich in the tin ore cassiterite and thus did not use arsenical bronze.
The Sican Culture of northwestern coastal Peru is famous for its use of arsenical bronze during the period 900 to 1350 AD. Arsenical bronze co-existed with tin bronze in the Andes, probably due to its greater ductility which meant it could be easily hammered into thin sheets which were valued in local society.
Arsenical bronze after the Bronze Age
The archaeological record in Egypt, Peru and the Caucasus suggests that arsenical bronze was produced for a time alongside tin bronze. At Tepe Yahya its use continued into the Iron Age for the manufacture of trinkets and decorative objects, thus demonstrating that there was not a simple succession of alloys over time, with superior new alloys replacing older ones. Metallurgically, tin bronze has few real advantages over arsenical bronze, and early authors suggested that arsenical bronze was phased out due to its health effects. It is more likely that it was phased out of general use because alloying with tin gave castings which had similar strength to arsenical bronze but did not require further work-hardening to achieve useful strength. It is also probable that more certain results could be achieved with the use of tin, because it could be added directly to the copper in specific amounts, whereas the precise amount of arsenic being added was much harder to gauge due to the manufacturing process.
Health effects of arsenical bronze use
Arsenic vaporizes (sublimes) at 615 °C, so that arsenous oxide is lost from the melt before or during casting, and fumes from fire setting for mining and ore processing have long been known to attack the nervous system, eyes, lungs, and skin.
Chronic arsenic poisoning leads to peripheral neuropathy, which can cause weakness in the legs and feet. It has been speculated that this lay behind the legend of lame smiths in many cultures and myths, such as the Greek god Hephaestus. As Hephaestus was an iron-age smith, not a bronze-age smith, the connection would be one from ancient folk memory.
A well-preserved mummy of a man who lived around 3,200 BC found in the Ötztal Alps, popularly known as Ötzi, showed high levels of both copper particles and arsenic in his hair. This, along with Ötzi's copper axe blade, which is 99.7% pure copper, has led scientists to speculate that he was involved in copper smelting.
Modern uses of arsenical bronze
Arsenical bronze has seen little use in the modern period. It appears that the closest equivalent goes by the name of arsenical copper, defined as copper with under 0.5% arsenic by mass, below the accepted percentage in archaeological artifacts. The presence of 0.5% arsenic in copper lowers the electrical conductivity to 34% of that of pure copper, and even as little as 0.05% decreases it by 15%.
See also
Arsenical copper
Arsenical brass
References
External links
Bronze Age
Copper alloys
History of metallurgy
Coinage metals and alloys
Arsenic | Arsenical bronze | [
"Chemistry",
"Materials_science"
] | 2,694 | [
"Copper alloys",
"Coinage metals and alloys",
"Metallurgy",
"History of metallurgy",
"Alloys"
] |
5,625,220 | https://en.wikipedia.org/wiki/Metakaolin | Metakaolin is the anhydrous calcined form of the clay mineral kaolinite. Rocks that are rich in kaolinite are known as china clay or kaolin, traditionally used in the manufacture of porcelain. The particle size of metakaolin is smaller than cement particles, but not as fine as silica fume.
Kaolinite sources
The quality and reactivity of metakaolin are strongly dependent on the characteristics of the raw material used. Metakaolin can be produced from a variety of primary and secondary sources containing kaolinite:
High purity kaolin deposits
Kaolinite deposits or tropical soils of lower purity
Paper sludge waste (if containing kaolinite)
Oil sand tailings (if containing kaolinite)
Forming metakaolin
The T-O clay mineral kaolinite does not contain interlayer cations or interlayer water. The temperature of dehydroxylation depends on the structural layer stacking order. Disordered kaolinite dehydroxylates between 530 and 570 °C, ordered kaolinite between 570 and 630 °C. Dehydroxylated disordered kaolinite shows higher pozzolanic activity than ordered. The dehydroxylation of kaolin to metakaolin is an endothermic process due to the large amount of energy required to remove the chemically bonded hydroxyl ions. Above the temperature range of dehydroxylation, kaolinite transforms into metakaolin, a complex amorphous structure which retains some long-range order due to layer stacking. Much of the aluminum of the octahedral layer becomes tetrahedrally and pentahedrally coordinated.
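In idealized form, taking kaolinite and metakaolin as the pure end-member compositions Al2Si2O5(OH)4 and Al2Si2O7 (real clays deviate from these formulas), the dehydroxylation can be summarized as:
Al2Si2O5(OH)4 → Al2Si2O7 + 2 H2O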
In order to produce a pozzolan (supplementary cementitious material) nearly complete dehydroxylation must be reached without overheating, i.e., thoroughly roasted but not burnt. This produces an amorphous, highly pozzolanic state, whereas overheating can cause sintering, to form a dead burnt, nonreactive refractory, containing mullite and a defect Al-Si spinel. Reported optimum activation temperatures vary between 550 and 850 °C for varying durations; however, the range 650-750 °C is most commonly quoted.
In comparison with other clay minerals kaolinite shows a broad temperature interval between dehydroxylation and recrystallization, much favoring the formation of metakaolin and the use of thermally activated kaolin clays as pozzolans. Also, because the octahedral layer is directly exposed to the interlayer (in comparison to for instance T-O-T clay minerals such as smectites), structural disorder is attained more easily upon heating.
High-reactivity metakaolin
High-reactivity metakaolin (HRM) is a highly processed reactive aluminosilicate pozzolan, a finely-divided material that reacts with slaked lime at ordinary temperature and in the presence of moisture to form a strong slow-hardening cement. It is formed by calcining purified kaolinite, generally between 650 and 700 °C in an externally fired rotary kiln. It is also reported that HRM is responsible for acceleration in the hydration of ordinary portland cement (OPC), and its major impact is seen within 24 hours. It also reduces the deterioration of concrete by Alkali Silica Reaction (ASR), particularly useful when using recycled crushed glass or glass fines as aggregate. The amount of slaked lime that can be bound by metakaolin is measured by the modified Chapelle test.
Adsorption properties
The adsorption surface properties of the metakaolins can be characterized by inverse gas chromatography analysis.
Concrete admixture
Considered to have twice the reactivity of most other pozzolans, metakaolin is a valuable admixture for concrete/cement applications. Replacing portland cement with 8–20 wt.% (% by weight) metakaolin produces a concrete mix that exhibits favorable engineering properties, including: the filler effect, the acceleration of OPC hydration, and the pozzolanic reaction. The filler effect is immediate, while the effect of pozzolanic reaction occurs between 3 and 14 days.
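As a minimal sketch of what such a replacement level means for a mix design (all figures below are illustrative assumptions, not values taken from this article), the split between portland cement and metakaolin can be computed as follows:

cement_total = 350.0    # assumed total binder content, kg per m^3 of concrete
replacement = 0.15      # assumed 15% metakaolin replacement, within the 8-20 wt.% range quoted above
metakaolin = cement_total * replacement        # 52.5 kg/m^3 of metakaolin
portland = cement_total * (1 - replacement)    # 297.5 kg/m^3 of portland cement
print(metakaolin, portland)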
In the mid-2010s, Limestone Calcined Clay Cement mixture incorporating even more than 20% metakaolin was developed as a lower-carbon cement substitute. The technology is on the commercialization stage in the 2020s.
Advantages
Increased compressive and flexural strengths
Reduced permeability (including chloride permeability)
Reduced potential for efflorescence, which occurs when calcium is transported by water to the surface where it combines with carbon dioxide from the atmosphere to make calcium carbonate, which precipitates on the surface as a white residue.
Increased resistance to chemical attack
Increased durability
Reduced effects of alkali-silica reactivity (ASR)
Enhanced workability and finishing of concrete
Reduced shrinkage, due to "particle packing" making concrete denser
Improved color by lightening the color of concrete making it possible to tint lighter integral color.
Higher thermal resistance due to increased temperature levels
Uses
High performance, high strength, and lightweight concrete
Precast and poured-mold concrete
Fibercement and ferrocement products
Glass fiber reinforced concrete
Countertops, art sculptures (see for example the free-standing sculptures of Albert Vrana)
Mortar and stucco
See also
Concrete
Engineered cementitious composite
Fly ash
Kaolinite
Portland cement
Pozzolan
Rice husk ash (also very rich in SiO2)
Silica fume
References
Concrete
Cement
Silicate minerals | Metakaolin | [
"Engineering"
] | 1,224 | [
"Structural engineering",
"Concrete"
] |
5,625,309 | https://en.wikipedia.org/wiki/Energy%20gap | In solid-state physics, an energy gap or band gap is an energy range in a solid where no electron states exist, i.e. an energy range where the density of states vanishes.
Especially in condensed matter physics, an energy gap is often known more abstractly as a spectral gap, a term which need not be specific to electrons or solids.
Band gap
If an energy gap exists in the band structure of a material, it is called a band gap. The physical properties of semiconductors are to a large extent determined by their band gaps, but the band structure, and thus any possible band gaps, also governs the electronic properties of insulators and metals.
Superconductors
For superconductors the energy gap is a region of suppressed density of states around the Fermi energy, with the size of the energy gap much smaller than the energy scale of the band structure. The superconducting energy gap is a key aspect in the theoretical description of superconductivity and thus features prominently in BCS theory. Here, the size of the energy gap indicates the energy gain for two electrons upon formation of a Cooper pair. If a conventional superconducting material is cooled from its metallic state (at higher temperatures) into the superconducting state, then the superconducting energy gap is absent above the critical temperature Tc; it starts to open upon entering the superconducting state at Tc, and it grows upon further cooling.
BCS theory predicts that the size of the superconducting energy gap 2Δ for conventional superconductors at zero temperature scales with their critical temperature Tc: 2Δ(T=0) ≈ 3.52 kB Tc (with Boltzmann constant kB).
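As a rough worked example of this relation (niobium is chosen purely as an illustration; its critical temperature is approximate, and real materials deviate from the ideal weak-coupling ratio), the gap can be estimated as follows:

k_B = 8.617e-5          # Boltzmann constant in eV/K
T_c = 9.2               # approximate critical temperature of niobium, in K
gap = 3.52 * k_B * T_c  # full gap 2*Delta(T=0) in eV, using the weak-coupling BCS ratio
print(gap * 1e3)        # about 2.8 meV, i.e. Delta(0) of roughly 1.4 meV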
Pseudogap
If the density of states is suppressed near the Fermi energy but does not fully vanish, then this suppression is called a pseudogap. Pseudogaps are experimentally observed in a variety of material classes; prominent examples are the cuprate high-temperature superconductors.
Hard gap vs. soft gap
If the density of states vanishes over an extended energy range, then this is called a hard gap. If instead the density of states exactly vanishes only for a single energy value (while being suppressed, but not vanishing for nearby energy values), then this is called a soft gap. A prototypical example of a soft gap is the Coulomb gap that exists in localized electron states with Coulomb interaction.
References
Electronic band structures
Superconductivity | Energy gap | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 492 | [
"Electron",
"Physical quantities",
"Superconductivity",
"Materials science",
"Electronic band structures",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
5,625,345 | https://en.wikipedia.org/wiki/Thread%20seal%20tape | Thread seal tape (also known as PTFE tape, Teflon tape, or plumber's tape) is a polytetrafluoroethylene (PTFE) film tape commonly used in plumbing for sealing pipe threads. The tape is sold cut to specific widths and wound on a spool, making it easy to wind around pipe threads. Thread seal tape lubricates, allowing for a deeper seating of the threads, and it helps prevent the threads from seizing when being unscrewed. The tape also works as a deformable filler and thread lubricant, helping to seal the joint without hardening or making it more difficult to tighten, and instead making it easier to tighten. It also protects the threads of both pieces from direct contact with each other and physical wear. Morever, it helps seal and prevent leaks from the connection.
Typically the tape is wrapped in the same direction the male threads go for tightening and is commonly used commercially in applications including pressurized water systems, central heating systems, and air compression equipment.
Types
There are two US standards for determining the quality of any thread seal tape. MIL-T-27730A (an obsolete military specification still commonly used in industry in the US) requires a minimum thickness of 3.5 mils and a minimum PTFE purity of 99%. The second standard, A-A-58092, is a commercial grade which maintains the thickness requirement of MIL-T-27730A and adds a minimum density of 1.2 g/cm3. Relevant standards may vary between industries; tape for gas fittings (to UK gas regulations) is required to be thicker than that for water. Although PTFE itself is suitable for use with high-pressure oxygen, the grade of tape must also be known to be free from grease.
Thread seal tape used in plumbing applications is most commonly white, but it is also available in various colors. It is often used to correspond to color-coded pipelines (US, Canada, Australia, and New Zealand: yellow for natural gas, green for oxygen, etc.). These color codes for thread sealing tape were introduced by Bill Bentley of Unasco Pty Ltd in the 1970s. In the UK, the tape is supplied on coloured reels, e.g. yellow reels for gas and green for oxygen.
White: used on NPT threads up to inch
Yellow: used on NPT threads inch to 2 inch, often labeled "gas tape"
Pink: used on NPT threads inch to 2 inch, safe for potable water
Green: oil-free PTFE used on oxygen lines and some specific medical gasses
Gray: contains nickel, anti-seizing, anti-galling, and anti-corrosion, used for stainless pipes
Copper: contains copper granules and is certified as a thread lubricant but not a sealer
In Europe, the BSI standard BS-7786:2006 specifies various grades and quality standards of PTFE thread sealing tape.
Uses
Thread seal tape is appropriate for use on tapered threads, where the sealing force is a wedge action. Parallel threads may not seal effectively with or without tape, as they are intended to be sealed by a gasket.
Thread seal tape is almost always applied by hand, although at least one machine is available for the production wrapping of fittings.
Thread seal tape is also commonly used in the stretching of body piercings, through a process known as taping, because it is inert and safe for this use. The wearer wraps a layer of tape around a plug and uses the jewelry, adding another layer every few days, thus gradually stretching the piercing.
Hazards
Overuse or misapplication of thread seal tape may be a hazard. Excess application of tape can prevent mating threads from fully engaging, reducing the shear point of the threads. Combining thread seal tape with a pipe dope compound can also overload threads. Also, internal overhangs of loose material may constrict a joint or slough off and form a foreign body that could jam a valve seat. Therefore, using tape as a thread sealant is generally not considered appropriate in fluid power (hydraulic) systems. Overheating (550° Fahrenheit, or about 288° Celsius) and subsequent decomposition of Teflon can produce perfluoroisobutene which is 10 times as toxic as phosgene. Inhalation of even a minute amount can be fatal.
Use of names
Familiarity with the Teflon brand of fluoropolymers has led to the name becoming a generic trademark, and the practice of any PTFE-based thread seal tape is referred to as "Teflon tape". Chemours, owner of the Teflon trademark, no longer manufactures any thread seal tape, and objects to this practice.
Most references to "plumber's tape" nowadays refer to thread seal tape; however, the original use in the plumbing trade describes a strap of material with holes in it used for supporting pipes and fixtures.
References
External links
Dry lubricants
Plumbing | Thread seal tape | [
"Engineering"
] | 1,039 | [
"Construction",
"Plumbing"
] |
5,625,361 | https://en.wikipedia.org/wiki/Phosphoenolpyruvate%20carboxylase | Phosphoenolpyruvate carboxylase (also known as PEP carboxylase, PEPCase, or PEPC; , PDB ID: 3ZGE) is an enzyme in the family of carboxy-lyases found in plants and some bacteria that catalyzes the addition of bicarbonate (HCO3−) to phosphoenolpyruvate (PEP) to form the four-carbon compound oxaloacetate and inorganic phosphate:
PEP + HCO3− → oxaloacetate + Pi
This reaction is used for carbon fixation in CAM (crassulacean acid metabolism) and C4 organisms, as well as to regulate flux through the citric acid cycle (also known as Krebs or TCA cycle) in bacteria and plants. The enzyme structure and its two-step catalytic, irreversible mechanism have been well studied. PEP carboxylase is highly regulated, both by phosphorylation and allostery.
Enzyme structure
The PEP carboxylase enzyme is present in plants and some types of bacteria, but not in fungi or animals (including humans). The genes vary between organisms, but are strictly conserved around the active and allosteric sites discussed in the mechanism and regulation sections. Tertiary structure of the enzyme is also conserved.
The crystal structure of PEP carboxylase in multiple organisms, including Zea mays (maize), and Escherichia coli has been determined. The overall enzyme exists as a dimer-of-dimers: two identical subunits closely interact to form a dimer through salt bridges between arginine (R438 - exact positions may vary depending on the origin of the gene) and glutamic acid (E433) residues. This dimer assembles (more loosely) with another of its kind to form the four subunit complex. The monomer subunits are mainly composed of alpha helices (65%), and have a mass of 106kDa each. The sequence length is about 966 amino acids.
The enzyme active site is not completely characterized. It includes a conserved aspartic acid (D564) and a glutamic acid (E566) residue that non-covalently bind a divalent metal cofactor ion through the carboxylic acid functional groups. This metal ion can be magnesium, manganese or cobalt depending on the organism, and its role is to coordinate the phosphoenolpyruvate molecule as well as the reaction intermediates. A histidine (H138) residue at the active site is believed to facilitate proton transfer during the catalytic mechanism.
Enzyme mechanism
The mechanism of PEP carboxylase has been well studied. The enzymatic mechanism of forming oxaloacetate is very exothermic and thereby irreversible; the biological Gibbs free energy change (ΔG°′) is −30 kJ mol−1. The substrates and cofactor bind in the following order: metal cofactor (either Co2+, Mg2+, or Mn2+), PEP, bicarbonate (HCO3−). The mechanism proceeds in two major steps, as described below and shown in figure 2:
The bicarbonate acts as a nucleophile to attack the phosphate group in PEP. This results in the splitting of PEP into a carboxyphosphate and the (very reactive) enolate form of pyruvate.
Proton transfer takes place at the carboxyphosphate. This is most likely modulated by a histidine (H138) residue that first deprotonates the carboxy side, and then, as an acid, protonates the phosphate part. The carboxyphosphate then exothermically decomposes into carbon dioxide and inorganic phosphate, at this point making this an irreversible reaction. Finally, after the decomposition, the carbon dioxide is attacked by the enolate to form oxaloacetate.
The metal cofactor is necessary to coordinate the enolate and carbon dioxide intermediates; the CO2 molecule is only lost 3% of the time. The active site is hydrophobic to exclude water, since the carboxyphosphate intermediate is susceptible to hydrolysis.
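A quick order-of-magnitude check of why the reaction is effectively irreversible can be made from the ΔG°′ value quoted above, assuming standard conditions of about 25 °C (the temperature is an assumption, not a value from this article):

import math
R = 8.314     # gas constant in J/(mol*K)
T = 298.15    # assumed temperature in K
dG = -30e3    # biological standard Gibbs free energy change in J/mol (value quoted above)
K_eq = math.exp(-dG / (R * T))   # equilibrium constant
print(K_eq)   # roughly 2e5, so the equilibrium lies far on the oxaloacetate side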
Function
The three most important roles that PEP carboxylase plays in plant and bacterial metabolism are in the C4 cycle, the CAM cycle, and biosynthetic flux through the citric acid cycle.
The primary mechanism of carbon dioxide assimilation in plants is through the enzyme ribulose-1,5-bisphosphate carboxylase/oxygenase (also known as RuBisCO), which adds CO2 to ribulose-1,5-bisphosphate (a 5 carbon sugar) to form two molecules of 3-phosphoglycerate (2x3 carbon sugars). However, at higher temperatures and lower CO2 concentrations, RuBisCO adds oxygen instead of carbon dioxide, to form the unusable product glycolate in a process called photorespiration. To prevent this wasteful process, plants increase the local CO2 concentration in a process called the C4 cycle. PEP carboxylase plays the key role of binding CO2 in the form of bicarbonate with PEP to create oxaloacetate in the mesophyll tissue. This is then converted back to pyruvate (through a malate intermediate), to release the CO2 in the deeper layer of bundle sheath cells for carbon fixation by RuBisCO and the Calvin cycle. Pyruvate is converted back to PEP in the mesophyll cells, and the cycle begins again, thus actively pumping CO2.
The second important and very similar biological significance of PEP carboxylase is in the CAM cycle. This cycle is common in organisms living in arid habitats. Plants cannot afford to open stomata during the day to take in CO2, as they would lose too much water by transpiration. Instead, stomata open at night, when water evaporation is minimal, and take in CO2 by fixing with PEP to form oxaloacetate though PEP carboxylase. Oxaloacetate is converted to malate by malate dehydrogenase, and stored for use during the day when the light dependent reaction generates energy (mainly in the form of ATP) and reducing equivalents such as NADPH to run the Calvin cycle.
Third, PEP carboxylase is significant in non-photosynthetic metabolic pathways. Figure 3 shows this metabolic flow (and its regulation). Similar to pyruvate carboxylase, PEP carboxylase replenishes oxaloacetate in the citric acid cycle. At the end of glycolysis, PEP is converted to pyruvate, which is converted to acetyl-coenzyme-A (acetyl-CoA), which enters the citric acid cycle by reacting with oxaloacetate to form citrate. To increase flux through the cycle, some of the PEP is converted to oxaloacetate by PEP carboxylase. Since the citric acid cycle intermediates provide a hub for metabolism, increasing flux is important for the biosynthesis of many molecules, such as for example amino acids.
Regulation
PEP carboxylase is mainly subject to two levels of regulation: phosphorylation and allostery. Figure 3 shows a schematic of the regulatory mechanism.
Phosphorylation by phosphoenolpyruvate carboxylase kinase turns the enzyme on, whereas phosphoenolpyruvate carboxylase phosphatase turns it back off. Both kinase and phosphatase are regulated by transcription. It is further believed that malate acts as a feedback inhibitor of kinase expression levels, and as an activator for phosphatase expression (transcription). Since oxaloacetate is converted to malate in CAM and C4 organisms, high concentrations of malate activate phosphatase expression; the phosphatase subsequently de-phosphorylates and thus de-activates PEP carboxylase, leading to no further accumulation of oxaloacetate and thus no further conversion of oxaloacetate to malate. Hence malate production is down-regulated.
The main allosteric inhibitors of PEP carboxylase are the carboxylic acids malate (weak) and aspartate (strong). Since malate is formed in the next step of the CAM and C4 cycles after PEP carboxylase catalyses the condensation of CO2 and PEP to oxaloacetate, this works as a feedback inhibition pathway. Oxaloacetate and aspartate are easily inter-convertible through a transaminase mechanism; thus high concentrations of aspartate also act as a feedback inhibition pathway for PEP carboxylase.
The main allosteric activators of PEP carboxylase are acetyl-CoA and fructose-1,6-bisphosphate (F-1,6-BP). Both molecules are indicators of increased glycolysis levels, and thus positive feed-forward effectors of PEP carboxylase. They signal the need to produce oxaloacetate to allow more flux through the citric acid cycle. Additionally, increased glycolysis means a higher supply of PEP is available, and thus more storage capacity for binding CO2 in transport to the Calvin cycle. It is also noteworthy that the negative effector aspartate competes with the positive effector acetyl-CoA, suggesting that they share an allosteric binding site.
Studies have shown that energy equivalents such as AMP, ADP and ATP have no significant effect on PEP carboxylase.
The magnitudes of the allosteric effects of these different molecules on PEP carboxylase activity depend on individual organisms.
References
EC 4.1.1
Photosynthesis | Phosphoenolpyruvate carboxylase | [
"Chemistry",
"Biology"
] | 2,066 | [
"Biochemistry",
"Photosynthesis"
] |
5,625,377 | https://en.wikipedia.org/wiki/Chat%20%28mining%29 | Chat is fragments of siliceous rock, limestone, and dolomite waste rejected in the lead-zinc milling operations that accompanied lead-zinc mining in the first half of the 20th century. Historic lead and zinc mining in the Midwestern United States was centered in two major areas: the tri-state area covering more than in southwestern Missouri, southeastern Kansas, and northeastern Oklahoma and the Old Lead Belt covering about in southeastern Missouri. The first recorded mining occurred in the Old Lead Belt in about 1742. The production increased significantly in both the tri-state area and the Old Lead Belt during the mid-19th century and lasted up to 1970.
Cleanup
Currently production still occurs in a third area, the Viburnum Trend, in southeastern Missouri. Mining and milling of ore produced more than 500 million tons of wastes in the tri-state area and about 250 million tons of wastes in the Old Lead Belt. More than 75 percent of this waste has been removed, with some portion of it used over the years. Today, approximately 100 million tons of chat remain in the tri-state area. The EPA, the states of Oklahoma, Kansas, and Missouri, local communities, and private companies continue to work together in implementing and monitoring response actions that reduce or remove potential adverse impacts posed by remaining mine wastes contaminated with lead, zinc, cadmium, and other metals.
Ore processing
Ore production consisted of crushing and grinding the rock to standard sizes and separating the ore. Ore processing was accomplished in either a dry gravity separation or through a wet washing or flotation separation. Dry processes produced a fine gravel waste commonly called "chat." The wet processes resulted in the creation of tailing ponds used to dispose of waste material after ore separation. The wastes from wet separation are typically sand and silt size and are called "tailings." Milling produces large chat waste piles and flat areas with tailings deposited in impoundments. Tailings generally contain higher concentrations of heavy metals and therefore present a higher risk to human health and the environment through direct contact. Chat typically ranges in diameter from 1/4 to 5/8 inch. Intermingled material such as sands measure 0.033-0.008 inches in diameter and fine tailings are less than in diameter.
Uses
Although poisonous, chat can be used to improve traction on snow-covered roads; as gravel; and as construction aggregate, principally for railroad ballast, highway construction, and concrete production.
References
External links
The Creek Runs Red site for Independent Lens on PBS
Oklahoma Department of Mines website
EPA Tri-state Mining District chat regulations
Waste
Mining terminology | Chat (mining) | [
"Chemistry",
"Materials_science"
] | 525 | [
"Metallurgy",
"Metallurgical by-products"
] |
5,625,381 | https://en.wikipedia.org/wiki/Polypropylene%20carbonate | Polypropylene carbonate (PPC), a copolymer of carbon dioxide and propylene oxide, is a thermoplastic material. Catalysts like zinc glutarate are used in polymerization.
Properties
Polypropylene carbonate is soluble in polar solvents like lower ketones, ethyl acetate, dichloromethane and chlorinated hydrocarbons and insoluble in solvents like alcohols, water, and aliphatic hydrocarbons. It also forms stable emulsions in water.
PPC allows the diffusion of gases like oxygen through it. Having a glass transition temperature (Tg) between 25 and 45 °C, PPC binders are amorphous. The glass transition temperature of PPC is slightly higher than that of polyethylene carbonate (PEC).
Its refractive index is 1.46 while its dielectric constant is 3.
Applications
Polypropylene carbonate is used to increase the toughness of some epoxy resins.
It is used as a sacrificial binder in the ceramic industry, which decomposes and evaporates during sintering. It has a low sodium content which makes it suitable for the preparation of electroceramics like dielectric materials and piezoelectric ceramics.
Composites of polypropylene carbonate with starch (PPC/starch) are used as biodegradable plastics.
One of the largest manufacturers of polypropylene carbonate is Empower Materials, located in New Castle, DE, USA.
References
Further reading
External links
Material properties of Polypropylene carbonate
Plastics
Polycarbonates
Biodegradable plastics | Polypropylene carbonate | [
"Physics"
] | 339 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
5,625,508 | https://en.wikipedia.org/wiki/Boltzmann%20Medal | The Boltzmann Medal (or Boltzmann Award) is a prize awarded to physicists that obtain new results concerning statistical mechanics; it is named after the celebrated physicist Ludwig Boltzmann. The Boltzmann Medal is awarded once every three years by the Commission on Statistical Physics of the International Union of Pure and Applied Physics, during the STATPHYS conference.
The award consists of a gilded medal; its front carries the inscription Ludwig Boltzmann, 1844–1906.
Recipients
All the winners are influential physicists or mathematicians whose contributions to statistical physics have been significant over the past decades. Institutions with multiple recipients are Sapienza University of Rome (3), and École Normale Supérieure, Cornell University, the University of Cambridge and Princeton University (2 each).
The Medal cannot be awarded to a scientist who has already been a laureate of a Nobel Prize. Three recipients of the Boltzmann Medal have gone on to win the Nobel Prize in Physics: Kenneth G. Wilson (1982), Giorgio Parisi (2021) and John Hopfield (2024).
See also
List of physics awards
References
External links
IUPAP Commission on Statistical Physics (C3) the official website of C3, the Boltzmann Award recipients list during 1975–2010 (archived 10 August 2011)
Physics awards
Statistical mechanics
Triennial events
IUPAP | Boltzmann Medal | [
"Physics",
"Technology"
] | 267 | [
"Science and technology awards",
"Statistical mechanics",
"Physics awards"
] |
5,626,232 | https://en.wikipedia.org/wiki/Intersection%20theorem | In projective geometry, an intersection theorem or incidence theorem is a statement concerning an incidence structure – consisting of points, lines, and possibly higher-dimensional objects and their incidences – together with a pair of objects and (for instance, a point and a line). The "theorem" states that, whenever a set of objects satisfies the incidences (i.e. can be identified with the objects of the incidence structure in such a way that incidence is preserved), then the objects and must also be incident. An intersection theorem is not necessarily true in all projective geometries; it is a property that some geometries satisfy but others don't.
For example, Desargues' theorem can be stated using the following incidence structure:
Points:
Lines:
Incidences (in addition to obvious ones such as ):
The implication is then —that point is incident with line .
Famous examples
Desargues' theorem holds in a projective plane if and only if the plane is the projective plane over some division ring (skewfield). The projective plane is then called desarguesian.
A theorem of Amitsur and Bergman states that, in the context of desarguesian projective planes, for every intersection theorem there is a rational identity such that the plane satisfies the intersection theorem if and only if the division ring satisfies the rational identity.
Pappus's hexagon theorem holds in a desarguesian projective plane if and only if the underlying division ring is a field; it corresponds to the identity xy = yx (commutativity of multiplication).
Fano's axiom (which states that a certain intersection does not happen) holds if and only if the underlying division ring has characteristic different from 2; it corresponds to the identity 1 + 1 ≠ 0.
References
Incidence geometry
Theorems in projective geometry | Intersection theorem | [
"Mathematics"
] | 350 | [
"Theorems in geometry",
"Theorems in projective geometry",
"Incidence geometry",
"Combinatorics"
] |
5,626,568 | https://en.wikipedia.org/wiki/Salsola%20kali | Salsola kali is the restored botanical name for a species of flowering plants in the amaranth family. It is native to the Northern African and European Atlantic coasts to the Mediterranean (although it has been introduced elsewhere). It is an annual plant which grows primarily in the temperate biome.
Kali turgidum is a synonym for Salsola kali: an annual plant that grows in salty, sandy coastal soils, commonly known as prickly saltwort or prickly glasswort.
Its distributional range is in Europe along the shores of Baltic Sea, North Sea and the Atlantic Ocean. In the Mediterranean and at dry inland places it is replaced by Kali tragus (syn. Salsola tragus or Salsola kali subsp. tragus), which is less tolerant to salty soils, and has spread from Eurasia to other continents. Kali turgidum does not seem to occur as an introduced species in America.
Systematics
The species was first described in 1753 as Salsola kali by Carl Linnaeus in Species Plantarum. Until 2007, it belonged to genus Salsola (sensu lato), but after molecular genetical research, this genus was split, and the species was placed into genus Kali Mill. (Syn.: Salsola sect. Kali Dum.). In genus Kali, the valid name is Kali turgidum (Dumort.) Guterm. (incorrectly as "turgida", Basionym: Salsola turgida Dumort., Fl. Belgica 23, 1827). The name Kali soda Moench used by Akhani et al. (2007) is invalid because of the older name Kali soda Scop. (a synonym of Salsola soda).
Kali turgidum belongs to tribe Salsoleae s. str. Kali turgidum, Kali tragus, and other closely related species form a species complex (Kali tragus-aggregate or formerly Salsola kali-aggregate). Some authors treat these species only on subspecies level. Then Kali tragus would be the valid name for the whole species complex, and Kali turgidum would be a subspecies of it.
It was previously thought that two subspecies should be reclassified as two separate species in the now defunct genus Kali:
Kali tragus: as Salsola tragus or Salsola kali subsp. tragus: a common weed of disturbed habitats, commonly known as prickly Russian thistle, windwitch, common saltwort, or tumbleweed.
Kali turgidum: as Salsola kali subsp. kali: a salt-resistant plant restricted to the shores of the Baltic Sea, North Sea and the Atlantic Ocean, commonly known as prickly saltwort.
In 2014, Mosyakin et al. proposed to conserve Salsola kali (= Kali turgidum) as nomenclatoral type for the genus Salsola. This is now accepted, with many species of genus Kali restored to Salsola, with some Palaearctic species placed in the genus Soda.
Alkali and soda ash
The plant is a halophyte, i.e. it grows where the water is salty, and the plant is a succulent, i.e. it holds much salty water. When the plant is burned, the sodium in the salt ends up in the chemical sodium carbonate. Sodium carbonate has a number of practical uses, including especially as an ingredient in making glass and in making soap. In the medieval and early modern centuries the Kali plant and others like it were collected at tidal marshes and seashores. The collected plants were burned. The resulting ashes were mixed with water. Sodium carbonate is soluble in water. Non-soluble components of the ashes sank to the bottom of the water container. The water with the sodium carbonate dissolved in it was then transferred to another container, and then the water was evaporated off, leaving behind the sodium carbonate. Another major component of the ashes that is soluble in water is potassium carbonate. The resulting product consisted mainly of a mixture of sodium carbonate and potassium carbonate. This product was called "soda ash" (also called "alkali"). Soda ash extracted from the ashes of Kali turgidum/Kali tragus contains as much as 30% sodium carbonate. The soda ash was used primarily to make glass (secondarily used as a cleaning agent). Another notable halophilic plant that was collected for the purpose was Salsola soda. Another was Halogeton sativus. Historically in the late medieval and early post-medieval centuries the word "Kali" could refer to any such plants. (The words "alkali" and "kali" come from the Arabic word for soda ash, al-qalī, where al- is the definite article.) Today such plants are also called saltworts, referring to their relatively high salt content. Because of their use historically in making glass, they are also called glassworts. In Spain the saltwort plants were called barilla and were the basis of a large industry in Spain in the 18th century; see barilla. In the early 19th century, plant sources were supplanted by synthetic sodium carbonate produced using the Leblanc process.
See also
Prickly Russian thistle
Russian globe thistle
Salsola
Tumbleweed
References
Further reading
Walter Gutermann: Notulae nomenclaturales 41–45. Neue Namen bei Cruciata und Kali sowie einige kleinere Korrekturen (New names in Cruciata, Kali, and some small corrections). In: Phyton (Horn). 51 (1), 2011, p. 98.
Amaranthaceae
Halophytes
Plants described in 1753
Taxa named by Carl Linnaeus
Barilla plants
Flora of Malta
Taxa named by Barthélemy Charles Joseph Dumortier | Salsola kali | [
"Chemistry"
] | 1,215 | [
"Halophytes",
"Salts"
] |
5,626,777 | https://en.wikipedia.org/wiki/International%20Teledemocracy%20Centre | The International Teledemocracy Centre (ITC) was established at Edinburgh Napier University in 1999. The centre is dedicated to researching innovative E-democracy systems that will strengthen public understanding and participation in democratic decision making.
ITC have worked in a number of roles on E-participation and E-democracy initiatives and research projects with a wide range of partners including parliaments, government departments and local authorities, NGOs, charities, youth groups, media and technical and research organisations.
One of its first projects, undertaken in partnership with BT Scotland, was the design of the E-Petitioner internet petitioning system.
References
External links
ITC Home Page
Edinburgh Napier University
1999 establishments in Scotland
Educational institutions established in 1999
E-democracy | International Teledemocracy Centre | [
"Technology"
] | 146 | [
"E-democracy",
"Computing and society"
] |
5,627,127 | https://en.wikipedia.org/wiki/Bevameter | A bevameter is a device used in terramechanics to measure the mechanical properties of soil. Bevameter technique was developed to measure terrain mechanical properties for the study of vehicle mobility. The bevameter test consists of penetration test to measure normal loads and shear test to determine shear loads exerted by a vehicle. Bevameter area size need to be the size of the wheel or track. DEM analysis can take data from one size and simulate bevameter performance for a different size.
External links
Terrain Trafficability Characterization with a Mobile Robot, Ojeda, L., Borenstein, J., Witus, G.
Soil science
Measuring instruments
Earth observation in-situ sensors | Bevameter | [
"Technology",
"Engineering",
"Environmental_science"
] | 141 | [
"Environmental instrumentation",
"Earth observation in-situ sensors",
"Measuring instruments"
] |
5,627,634 | https://en.wikipedia.org/wiki/Continua%20Health%20Alliance | Continua Health Alliance is an international non-profit, open industry group of nearly 240 healthcare providers, communications, medical, and fitness device companies.
Continua was a founding member of Personal Connected Health Alliance which was launched in February 2014 with other founding members mHealth SUMMIT and HIMSS.
Overview
Continua Health Alliance is an international not-for-profit industry organization enabling end-to-end, plug-and-play connectivity of devices and services for personal health management and healthcare delivery. Its mission is to empower information-driven health management and facilitate the incorporation of health and wellness into the day-to-day lives of consumers. Its activities include a certification and brand support program, events and collaborations to support technology and clinical innovation, as well as outreach to employers, payers, governments and care providers. With nearly 220 member companies reaching across the globe, Continua comprises technology, medical device and healthcare industry leaders and service providers dedicated to making personal connected health a reality.
Continua Health Alliance is working toward establishing systems of interoperable telehealth devices and services in three major categories: chronic disease management, aging independently, and health and physical fitness.
Devices and services
Continua Health Alliance version 1 design guidelines are based on proven connectivity technical standards and include Bluetooth for wireless and USB for wired device connection. The group released the guidelines to the public in June 2009.
The group is establishing a product certification program using its recognizable logo, the Continua Certified Logo program, signifying that the product is interoperable with other Continua-certified products. Products made under Continua Health Alliance guidelines will provide consumers with increased assurance of interoperability between devices, enabling them to more easily share information with caregivers and service providers.
Through collaborations with government agencies and other regulatory bodies, Continua works to provide guidelines for the effective management of diverse products and services from a global network of vendors. Continua Health Alliance products make use of the ISO/IEEE 11073 Personal Health Data (PHD) Standards.
Continua design guidelines are not available to the public without signing a Non-disclosure agreement. Continua's guidelines help technology developers build end-to-end, plug-and-play systems more efficiently and cost effectively.
Milestones
Continua Health Alliance was founded on June 6, 2006
Continua Health Alliance performed its first public demonstration of interoperability on October 27, 2008 at the Partners Center for Connected Health 5th Annual Connected Health Symposium in Boston.
Continua Health Alliance certified its first product, the Nonin 2500 PalmSAT handheld pulse oximeter with USB, on January 26, 2009.
By the end of December 2014 there are more than 100 certified products.
Continua selected Bluetooth Low Energy and Zigbee wireless protocols as the wireless standards for its Version 2 Design Guidelines which have been released. Bluetooth Low Energy is to be used for low-power mobile devices. Zigbee will be used for networked low-power sensors such as those enabling independent living.
Beginning in 2012, Continua invites non-members to request a copy of its Design Guidelines after signing a non-disclosure agreement.
Continua has working groups and operations in the U.S., EU, Japan, India and China.
Members
Continua Health Alliance currently has nearly 220 member companies.
Continua's Board of Directors is currently composed of the following companies:
Fujitsu
Intel Corporation
Oracle Corporation
Orange
Philips
Qualcomm
Roche Diagnostics
Sharp
UnitedHealth Group
Organisational structure
The organisation is primarily staffed by volunteers from the member organisations that are organised into working groups that address the goals of the alliance. Below the board of directors sit the following main working groups:
Emerging Markets Working Group
EU Working Group
Global Development and Outreach Working Group
Marketing Council
Market Adoption Working Group
Regulatory Working Group
Technical Working Group
Test & Certification Work Group
Use Case Working Group
U.S. Policy Working Group
Relevant standards
ISO/IEEE 11073
ISO/IEEE 11073 Personal Health Data (PHD) Standards
Bluetooth
USB
HL7
Integrating the Healthcare Enterprise
Zigbee
Website
The Continua Alliance website contains a full listing of member organisations, a directory of qualified products, and a clear statement of their mission.
See also
Connected Health
eHealth
Telehealth
Telemedicine
Health 2.0
H.810
References
External links
Continua Health Alliance website
New Website: Personal Connected Health Alliance
Health informatics organizations
Interoperability
Telehealth | Continua Health Alliance | [
"Engineering"
] | 913 | [
"Telecommunications engineering",
"Interoperability"
] |
5,627,714 | https://en.wikipedia.org/wiki/Carbon%20monofluoride | Carbon monofluoride (CF, CFx, or (CF)n), also called polycarbon monofluoride (PMF), polycarbon fluoride, poly(carbon monofluoride), and graphite fluoride, is a material formed by high-temperature reaction of fluorine gas with graphite, charcoal, or pyrolytic carbon powder. It is a highly hydrophobic microcrystalline powder. Its CAS number is . In contrast to graphite intercalation compounds it is a covalent graphite compound.
Carbon is stable in a fluorine atmosphere up to about 400 °C, but between 420 and 600 °C a reaction takes place to give substoichiometric carbon monofluoride, CF0.68, which appears dark grey. With increasing temperature and fluorine pressure, stoichiometries up to CF1.12 are formed. With increasing fluorine content the colour changes from dark grey to cream white, indicating the loss of the aromatic character. The fluorine atoms are located in an alternating fashion above and under the former graphene plane, which is now buckled due to formation of covalent carbon-fluorine bonds. Reaction of carbon with fluorine at even higher temperature successively destroys the graphite compound to yield a mixture of gaseous fluorocarbons such as tetrafluoromethane, CF4, and tetrafluoroethylene, C2F4.
In a similar fashion in 2001 it was found that the carbon allotrope fullerene, C60 reacts with fluorine gas to give fullerene fluorides with stoichiometries up to C60F48.
A precursor of carbon monofluoride is the fluorine-graphite intercalation compound, also called fluorine-GIC.
Other intercalation fluorides of carbon are:
poly(dicarbon fluoride) ((C2F)n);
tetracarbon monofluoride (TCMF, C4F).
Graphite fluoride is a precursor for preparation of graphene fluoride by a liquid phase exfoliation.
Application
Carbon monofluoride is used as a high-energy-density cathode material in lithium batteries of the "BR" type. Other uses are a wear reduction additive for lubricants, and weather-resistant additive for paints. Graphite fluoride is also used as both oxidizing agent and combustion modifier in rocket propellants and pyrolants.
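In such lithium/carbon-monofluoride ("Li/CFx") primary cells, the overall discharge reaction is commonly written as:
x Li + CFx → x LiF + C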
Carbon monofluoride is commercially available as Carbofluor-brand materials.
References
Fluorides
Inorganic carbon compounds
Nonmetal halides | Carbon monofluoride | [
"Chemistry"
] | 555 | [
"Fluorides",
"Inorganic carbon compounds",
"Inorganic compounds",
"Salts"
] |
5,628,201 | https://en.wikipedia.org/wiki/Automatic%20layout | Automatic layout is an option in graph drawing toolkits that allow to lay out the Graph according to specific rules, such as:
reducing the length of the arcs between the Graph vertices
reduce the number of edges crossing (to improve the graph readability)
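As a minimal illustration (this assumes the Python NetworkX library, which the article itself does not mention), a force-directed automatic layout that tends to keep edge lengths short can be computed as follows:

import networkx as nx

G = nx.cycle_graph(5)                # small example graph
pos = nx.spring_layout(G, seed=42)   # force-directed (Fruchterman-Reingold) automatic layout
print(pos)                           # maps each vertex to (x, y) coordinates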
See also
Methods in graph drawing
References
Graph drawing | Automatic layout | [
"Technology"
] | 61 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
5,628,752 | https://en.wikipedia.org/wiki/Porcelain%20tile | Porcelain tiles or ceramic tiles are either tiles made of porcelain, or relatively tough ceramic tiles made with a variety of materials and methods, that are suitable for use as floor tiles, or for walls. They have a low water absorption rate, generally less than 0.5 percent. The clay used to build porcelain tiles is generally denser than ceramic tiles. They can either be glazed or unglazed. Porcelain tiles are one type of vitrified tiles and are sometimes referred to as porcelain vitrified tiles.
Historically, porcelain was not the usual material for tiles, which were much more often made of earthenware (terracotta) or stoneware. The first porcelain tiles were made in China, and were largely used for decorating walls, such as in the 15th-century Porcelain Tower of Nanjing (now largely destroyed); the use of porcelain tile as wall decoration long remained typical. In Europe, palaces also occasionally featured a few rooms with walls decorated in porcelain plaques, often with forms in high relief. These were manufactured by Capodimonte porcelain and Real Fábrica del Buen Retiro, among others. Historically, porcelain was too expensive for most tiling purposes, but it is now much cheaper (especially in the form of bone china), and is now widely used.
Production
Large-scale production of porcelain tile is undertaken in many countries, with the major producers being China, Italy, Morbi (India), Spain and Turkey. Many countries also undertake small-scale production, such as Australia and Brazil.
The wear rating of the tile can be graded from zero to five according to the ISO 10545-7 (also ASTM C1027) test for surface abrasion resistance of glazed tile, and this can be used to determine suitability for various end-use conditions.
Polished porcelain tiles
The dense, hard surface of porcelain has made polishing a viable alternative to a glazed surface. This means that a tile can be fired, then a polish cut into the surface, creating a shine without a glaze.
Use
Porcelain is much harder than ordinary ceramic tiles and is often selected, despite its higher price, for its hard-wearing nature. Porcelain can be used in both wet and dry areas such as bathrooms, showers, and kitchens.
Disadvantages
Porcelain is denser and therefore heavier to handle than other ceramic tiles. For this reason, it is generally more expensive. Being harder, it is more difficult to cut and drill and requires specialist tools, which can hamper fitting and increase costs. Polished porcelain may need sealing, where ordinary glazed tiles do not. The glazed surface is coated with less than two microns' thickness of glaze.
Installation
The installation of ceramic or porcelain tiles generally involves the following steps:
Planning and preparation
Surface preparation, including tile cutting
Applying adhesive
Laying tiles
Grouting
Finishing touches
Cleaning, sealing and maintenance
Cutting
There are several ways to cut a porcelain tile. Power tools like an angle grinder, tile cutter, tile nipper, and drill bit can be used to do this. However, the most effective way is to use a wet tile saw because of its versatility and cutting capacity.
Adhesives
Specialized cement is necessary for the installation of porcelain tiles; in the US, specifications are set by the Tile Council of America and supported by the Tile Contractors Association.
Porcelain, being denser and heavier than ordinary ceramic tiles, needs a stronger adhesive to hold the weight on walls. Therefore, typical ready-mix adhesives are not recommended for porcelain.
Tile profiles and trims
Ceramic tile trims and profiles are specialized edging or transitional pieces that are used in conjunction with ceramic tiles. They serve several purposes:
Edge protection: Profiles protect the edges of tiles from chipping and wear.
Transition: They provide a smooth transition between different surface materials or tile heights.
Aesthetic enhancement: Profiles add a finished look, contributing to the overall design of the tiled area.
Transition profiles are used when there are two different types of flooring or tiles that meet in the middle. A transition profile can help create a smooth and seamless transition between the two. Tile trims are used to cover the edges of tiles, creating a finished look and protecting them from damage.
Profiles and trims are generally installed at the same time that the tiles are laid down.
Sealing
When porcelain is first made, it is not absorbent, but the polishing process for making the unglazed surface shiny cuts into the surface, leaving it more porous and prone to absorbing stains, in the same way as natural stone tiles. Unless they have a suitable, long-lasting treatment applied by the manufacturer (for example, nanotech treatment), polished porcelain tiles may need sealing to make the maintenance of paving easier. Porcelain sealants are either solvent-based or water-based; water-based sealants are cheaper but less durable.
Vitrification
Porcelain tiles can be vitrified to reduce their porosity and increase their strength. Vitrified porcelain tiles are created by combining clay with other materials such as quartz, silica, or feldspar under very high temperatures. The vitrification process creates porcelain tiles that contain a glass substrate. The glass substrate gives the tiles a sleek appearance, provides added strength, and makes the tiles water- and scratch-resistant. Vitrified porcelain tiles do not need to be re-sealed or glazed.
See also
Ceramic tile cutter
Thinset mortar
Thick bed mortar
References
Building materials
Visual arts materials
Porcelain
Ceramic materials
Tiling | Porcelain tile | [
"Physics",
"Engineering"
] | 1,101 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Ceramic materials",
"Ceramic engineering",
"Matter",
"Architecture"
] |
5,628,881 | https://en.wikipedia.org/wiki/Bechgaard%20salt | In organic chemistry, a Bechgaard salt is any one of a number of organic charge-transfer complexes that exhibit superconductivity at low temperatures. They are named for chemist Klaus Bechgaard, who was one of the first scientists to synthesize them and demonstrate their superconductivity with the help of physicist Denis Jérome. Most Bechgaard salt superconductors are extremely low temperature, and lose superconductivity above the 1–2 K range, although the most successful compound in this class superconducts up to almost 12 K.
All Bechgaard salts are formed using a small, planar organic molecule as an electron donor, with any of a number of electron acceptors (such as perchlorate or tetracyanoethylene, TCNE). All the organic electron donors contain multiply conjugated heterocycles with a number of properties, including planarity, low ionization potential and good orbital overlap between heteroatoms in neighboring donor molecules. These properties help the final salt conduct electrons by shuttling them through the orbital vacancies left in the donor molecules.
All Bechgaard salts have a variation on a single tetrathiafulvalene motif—different superconductors have been made with appendages to the motif, or using a tetraselenafulvalene center instead (which is a related compound), but all bear this general structural similarity.
There are a wide range of other organic superconductors including many other charge-transfer complexes.
See also
Superconductivity
Tetrathiafulvalene
References
Superconductivity
Organic compounds | Bechgaard salt | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 343 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Organic compounds",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
5,628,883 | https://en.wikipedia.org/wiki/Gautama%20Siddha | Gautama Siddha, (fl. 8th century) astronomer, astrologer and compiler of Indian descent, known for leading the compilation of the Treatise on Astrology of the Kaiyuan Era during the Tang dynasty. He was born in Chang'an, and his family was originally from India, according to a tomb stele uncovered in 1977 in Xi'an. The Gautama family had probably settled in China over many generations, and might have been present in China prior even to the foundation of the Tang dynasty. He was most notable for his translation of Navagraha calendar into Chinese. He also introduced Indian numerals with zero (〇) in 718 in China as a replacement of counting rods.
References
Footnotes
8th-century births
8th-century deaths
Chinese astrologers
Chinese people of Indian descent
8th-century Chinese astronomers
Indian astrologers
Writers from Xi'an
Tang dynasty writers
Scientists from Shaanxi
8th-century astrologers
Medieval Indian astrologers
8th-century Indian astronomers
8th-century Indian writers
8th-century Chinese writers
8th-century translators
Chinese translators | Gautama Siddha | [
"Astronomy"
] | 221 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
5,628,894 | https://en.wikipedia.org/wiki/Chiral%20derivatizing%20agent | In analytical chemistry, a chiral derivatizing agent (CDA), also known as a chiral resolving reagent, is a derivatization reagent that is a chiral auxiliary used to convert a mixture of enantiomers into diastereomers in order to analyze the quantities of each enantiomer present and determine the optical purity of a sample. Analysis can be conducted by spectroscopy or by chromatography. Some analytical techniques such as HPLC and NMR, in their most commons forms, cannot distinguish enantiomers within a sample, but can distinguish diastereomers. Therefore, converting a mixture of enantiomers to a corresponding mixture of diastereomers can allow analysis. The use of chiral derivatizing agents has declined with the popularization of chiral HPLC. Besides analysis, chiral derivatization is also used for chiral resolution, the actual physical separation of the enantiomers.
History
Since NMR spectroscopy has been available to chemists, there have been numerous studies on the applications of this technique. One of these noted the difference in the chemical shift (i.e. the distance between the peaks) of two diastereomers. Conversely, two compounds that are enantiomers have the same NMR spectral properties. It was reasoned that if a mix of enantiomers could be converted into a mix of diastereomers by bonding them to another chemical that was itself chiral, it would be possible to distinguish this new mixture using NMR, and therefore learn about the original enantiomeric mixture. The first popular example of this technique was published in 1969 by Harry S. Mosher. The chiral agent used was a single enantiomer of MTPA (α-methoxy-α-(trifluoromethyl)phenylacetic acid), also known as Mosher's acid. The corresponding acid chloride is also known as Mosher's acid chloride, and the resultant diastereomeric esters are known as Mosher's esters. Another system is Pirkle's Alcohol developed in 1977.
Requirements
The general use and design of CDAs obey the following rules so that the CDA can effectively determine the stereochemistry of an analyte:
The CDA must be enantiomerically pure, or (less satisfactorily) its enantiomeric purity must be accurately known.
The reaction of the CDA with both enantiomers should go to completion under reaction conditions. This acts to avoid enrichment or depletion of one enantiomer of the analyte by kinetic resolution.
CDA must not racemize under derivatization or analysis conditions. Its attachment should be mild enough so that the substrate does not racemize either. If analysis is completed by HPLC, the CDA must contain a chromophore to enhance detectability.
If analysis is completed by NMR, the CDA should have a functional group that gives a singlet in the resultant NMR spectrum, where the singlet must be remote from other peaks.
Mosher's method
Mosher's acid, via its acid chloride derivative, reacts readily with alcohols and amines to give esters and amides, respectively. The lack of an alpha-proton on the acid prevents loss of stereochemical fidelity under the reaction conditions. Thus, using an enantiomerically pure Mosher's acid allows for determination of the configuration of simple chiral amines and alcohols. For example, the (R)- and (S)-enantiomers of 1-phenylethanol react with (S)-Mosher acid chloride to yield (R,S)- and (S,S)-diastereomers, respectively, that are distinguishable in NMR.
CFNA (alternative to Mosher's acid)
A newer chiral derivatizing agent (CDA), α-cyano-α-fluoro (2-naphthyl)-acetic acid (2-CFNA), was prepared in optically pure form by the chiral HPLC separation of a racemic 2-CFNA methyl ester. This ester was obtained by fluorination of methyl α-cyano (2-naphthyl) acetate with FClO3. 2-CFNA has been shown to be superior to Mosher's agent as a CDA for determining the enantiomeric excess of a primary alcohol.
Chromatography using CDAs
Upon reaction of a CDA with the target analyte, chromatography can be used to separate the resulting products. In general, chromatography can be used to separate chiral compounds to bypass difficult crystallizations and/or to collect all diastereomer pairs in solution. Chromatography also has many variations (e.g. HPLC, Gas Chromatography, flash chromatography) with a wide array of applicability to diverse categories of molecules. The ability for CDAs to separate chiral molecules is dependent on two major mechanisms of chromatography:
Differential solvation in the mobile phase
Differential adsorption to the stationary phase
Helmchen's postulates
Helmchen's Postulates are the theoretical models used to predict the elution order and extent of separation of diastereomers (including those formed from CDAs) that are adsorbed onto a surface. Although Helmchen's postulates are specific for amides on silica gel using liquid chromatography, the postulates provide fundamental guidelines for other molecules. Helmchen's Postulates are:
Conformations are the same in solution and when adsorbed.
Diastereomers bind to surfaces (silica gel in normal phase chromatography) mainly with hydrogen bonding.
Significant resolution of diastereomers is only expected when molecules can adsorb to silica through two contact points (two hydrogen bonds). This interaction can be perturbed by substituents.
Diastereomers with bulky substituents on the alpha carbon (R2) and on the nitrogen (R1) can shield the hydrogen bonding with the surface, thus the molecule will be eluted before similar molecules with smaller substituents.
Helmchen's postulates have been proven to be applicable to other functional groups such as: carbamates, esters, and epoxides.
Chiral stationary phases
Stationary phases can react with CDAs to form chiral stationary phases which can resolve chiral molecules. By reacting with alcohols on a silicate stationary phase, CDAs add a chiral center to the stationary phase, which allows for the separation of chiral molecules.
CDAs in NMR spectroscopy
CDAs are used with NMR spectroscopic analysis to determine enantiomeric excess and the absolute configuration of a substrate. Chiral discriminating agents are sometimes difficult to distinguish from chiral solvating agents (CSAs), and some agents can be used as both. The speed of the exchange between the substrate and the metal center is the most important determining factor to differentiate between the use of a compound as a CDA or CSA. Generally, a CDA has a slow exchange whereas a CSA has a fast exchange. CDAs are more widely used than CSAs to determine absolute configurations because the covalent bonding between the substrate and the auxiliary reagent produces species with greater conformational rigidity, which creates greater differences in the NMR spectra. CDAs and CSAs can be used together to improve chiral recognition, although this is not common.
NMR shift reagents such as EuFOD, Pirkle's alcohol, and TRISPHAT take advantage of the formation of diastereomeric complexes between the shift reagent and the analytical sample.
Primary concerns when using CDAs
The primary concerns to take into consideration when using a CDA in NMR spectroscopy are kinetic resolution, racemization during the derivatization reaction and that the reagent should have 100% optical purity. Kinetic resolution is especially significant when determining optical purity, but it is somewhat negligible when the CDA is being used to assign the absolute configuration of an optically pure substrate. Kinetic resolution can be overcome using excess of the CDA. Racemization can occur to either the CDA or the substrate and in both cases it has the potential to significantly affect the results.
Strategies for NMR analysis
The two basic methods of NMR analysis are single- and double-derivatization. Double-derivatization is generally considered more accurate, but single-derivatization usually requires less reagents and, thus, is more cost effective.
Single-derivatization methods
The NMR spectrum of the product formed from the reaction of the substrate with a CDA at room temperature is compared with one of the following:
Double-derivatization methods
Either the enantiomer of the substrate is derivatized with two enantiomers of the CDA or both enantiomers of the substrate are derivatized with one enantiomer of the CDA. Two diastereomers form in both cases and the chemical shifts of their nuclei are evaluated to assign the configuration of the substrate.
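In either strategy, once a mixture of enantiomers has been converted into diastereomers by an enantiopure CDA, the enantiomeric excess follows directly from the relative integrals of the two diastereomer signals (assuming complete derivatization and no kinetic resolution, as required above). The short sketch below is a generic worked calculation, not a procedure taken from any specific reference; the function name and example integrals are illustrative assumptions.

```python
def enantiomeric_excess(integral_major, integral_minor):
    """Enantiomeric excess (%) from NMR integrals of the two diastereomers.

    Assumes an enantiomerically pure CDA, complete derivatization and no
    kinetic resolution, so the diastereomer ratio equals the original
    enantiomer ratio of the substrate.
    """
    total = integral_major + integral_minor
    if total <= 0:
        raise ValueError("peak integrals must be positive")
    return 100.0 * abs(integral_major - integral_minor) / total

# Example: two 19F signals integrating 92.5 : 7.5 correspond to 85% ee.
print(enantiomeric_excess(92.5, 7.5))  # -> 85.0
```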
NMR techniques
The most common NMR techniques used when discriminating chiral compounds are 1H-NMR, 19F-NMR and 13C-NMR. 1H-NMR is the primary technique used to assign absolute configuration. 19F-NMR is almost exclusively applied to optical purity studies, and 13C-NMR is primarily used to characterize substrates that do not have protons that are directly bonded to an asymmetrical carbon atom.
References
Stereochemistry
Analytical reagents
Reagents for organic chemistry | Chiral derivatizing agent | [
"Physics",
"Chemistry"
] | 2,026 | [
"Stereochemistry",
"Space",
"nan",
"Reagents for organic chemistry",
"Spacetime",
"Analytical reagents"
] |
5,629,240 | https://en.wikipedia.org/wiki/Cyclin%20B | Cyclin B is a member of the cyclin family. Cyclin B is a mitotic cyclin. The amount of cyclin B (which binds to Cdk1) and the activity of the cyclin B-Cdk complex rise through the cell cycle until mitosis, where they fall abruptly due to degradation of cyclin B (Cdk1 is constitutively present). The complex of Cdk and cyclin B is called maturation promoting factor or mitosis promoting factor (MPF).
Function
Cyclin B is necessary for the progression of the cells into and out of M phase of the cell cycle.
At the end of S phase the phosphatase cdc25c dephosphorylates tyrosine 15, and this activates the cyclin B/CDK1 complex. Upon activation the complex is shuttled to the nucleus, where it serves as the trigger for entry into mitosis. However, if DNA damage is detected, alternative proteins are activated, which results in the inhibitory phosphorylation of cdc25c; cyclin B/CDK1 is therefore not activated. In order for the cell to progress out of mitosis, the degradation of cyclin B is necessary.
The cyclin B/CDK1 complex also interacts with a variety of other key proteins and pathways which regulate cell growth and progression of mitosis. Cross-talk between many of these pathways links cyclin B levels indirectly to induction of apoptosis. The cyclin B/CDK1 complex plays a critical role in the expression of the survival signal survivin. Survivin is necessary for proper creation of the mitotic spindle which strongly affects cell viability, therefore when cyclin B levels are disrupted cells experience difficulty polarizing. A decrease in survivin levels and the associated mitotic disarray triggers apoptosis via caspase 3 mediated pathway.
Role in Cancer
Cyclin B plays an integral role in many types of cancer. Hyperplasia (uncontrolled cell growth) is one of the hallmarks of cancer. Because cyclin B is necessary for cells to enter mitosis and therefore necessary for cell division, cyclin B levels are often de-regulated in tumors. When cyclin B levels are elevated, cells can enter M phase prematurely and strict control over cell division is lost, which is a favorable condition for cancer development. On the other hand, if cyclin B levels are depleted the cyclin B/CDK1 complex cannot form, cells cannot enter M phase and cell division slows down. Some anti-cancer therapies have been designed to prevent cyclin B/CDK1 complex formation in cancer cells to slow or prevent cell division. Most of these methods have targeted the CDK1 subunit, but there is an emerging interest in the oncology field to target cyclin B as well.
As a Biomarker
Cyclin levels can easily be determined through immunohistological analysis of tumor biopsies. The fact that cyclin B is often dysregulated in cancer cells makes cyclin B an attractive biomarker. Many studies have been performed to examine cyclin levels in tumors, and it has been shown that the level of cyclin B is a strong indicator of prognosis in many types of cancer. Generally, elevated levels of cyclin B are indicative of more aggressive cancers and a poor prognosis. Immunohistologically assessed levels of cyclin B could determine if women with stage 1, node-negative, hormone receptor-positive breast cancer were likely to benefit from adjuvant therapy. In general, women with this cancer have a very good prognosis, with 10-year mortality of only 5%. Therefore, it is rare for oncologists to recommend adjuvant chemotherapy in these cases. However, in a small subset of patients this type of cancer is unexpectedly aggressive. These rare patients can be identified through their elevated cyclin B levels. In addition, high levels of cyclin B also indicate poor prognosis and lymph node metastasis in gastric cancers. However, not all cancers which overexpress cyclin B are more aggressive. A study in 2009 found that cyclin B overexpression in ovarian cancer indicates that the cancer is unlikely to be malignant, while more aggressive ovarian cancers of epithelial cell origin do not show elevated cyclin B.
Cyclin B and p53
There is strong cross-talk between the pathways regulating cyclin B and the tumor suppressor gene p53. In general levels of p53 and cyclin B are negatively correlated. When p53 build-up triggers cell cycle arrest the levels of downstream proteins p21 and WAF1 are increased which prevents cyclinB/CDK1 complex activation and therefore progression through the cell cycle. It has also been observed that decreasing cyclin B levels in cells increases the levels of functional p53. Therefore, siRNAs for cyclin B may be an effective treatment against cancers where p53 function is inhibited but the gene has not been deleted. In such cases lowering cyclin B levels restores the tumor suppressing function of p53 and also prevents cancer cells from dividing as a consequence of low cyclin B.
See also
Cyclin B1
Cyclin B2
References
External links
Drosophila Cyclin B - The Interactive Fly
Biomarkers
Tumor markers
Cell cycle regulators
Proteins
Meiosis | Cyclin B | [
"Chemistry",
"Biology"
] | 1,146 | [
"Biomolecules by chemical classification",
"Biomarkers",
"Meiosis",
"Tumor markers",
"Signal transduction",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Proteins",
"Chemical pathology",
"Cell cycle regulators"
] |
5,629,249 | https://en.wikipedia.org/wiki/Cyclin%20A | Cyclin A is a member of the cyclin family, a group of proteins that function in regulating progression through the cell cycle. The stages that a cell passes through that culminate in its division and replication are collectively known as the cell cycle Since the successful division and replication of a cell is essential for its survival, the cell cycle is tightly regulated by several components to ensure the efficient and error-free progression through the cell cycle. One such regulatory component is cyclin A which plays a role in the regulation of two different cell cycle stages.
Types
Cyclin A was first identified in 1983 in sea urchin embryos. Since its initial discovery, homologues of cyclin A have been identified in numerous eukaryotes including Drosophila, Xenopus, mice, and in humans but has not been found in lower eukaryotes like yeast. The protein exists in both an embryonic form and somatic form. A single cyclin A gene has been identified in Drosophila while Xenopus, mice and humans contain two distinct types of cyclin A: A1, the embryonic-specific form, and A2, the somatic form. Cyclin A1 is prevalently expressed during meiosis and early on in embryogenesis. Cyclin A2 is expressed in dividing somatic cells.
Role in cell cycle progression
Cyclin A, along with the other members of the cyclin family, regulates cell cycle progression through physically interacting with cyclin-dependent kinases (CDKs), which thereby activates the enzymatic activity of its CDK partner.
CDK partner association
The interaction between the cyclin box, a region conserved across cyclins, and a region of the CDK, called the PSTAIRE, confers the foundation of the cyclin-CDK complex. Cyclin A is the only cyclin that regulates multiple steps of the cell cycle. Cyclin A can regulate multiple cell cycle steps because it associates with, and thereby activates, two distinct CDKs – CDK2 and CDK1. Depending on which CDK partner cyclin A binds, the cell will continue through the S phase or it will transition from G2 to the M phase. Association of cyclin A with CDK2 is required for passage into S phase while association with CDK1 is required for entry into M phase.
S phase
Cyclin A resides in the nucleus during S phase where it is involved in the initiation and completion of DNA replication. As the cell passes from G1 into S phase, cyclin A associates with CDK2, replacing cyclin E. Cyclin E is responsible for initiating the assembly of the pre-replication complex. This complex makes chromatin capable of replication. When the amount of cyclin A/CDK2 complex reaches a threshold level, it terminates the assembly of the pre-replication complex made by cyclin E/CDK2. As the amount of Cyclin A/CDK2 complex increases, the complex initiates DNA replication.
Cyclin A has a second function in S phase. In addition to initiating DNA synthesis, Cyclin A ensures that DNA is replicated once per cell cycle by preventing the assembly of additional replication complexes. This is thought to occur through the phosphorylation of particular DNA replication machinery components, such as CDC6, by the cyclin A/CDK2 complex. Since the action of cyclin A/CDK2 inhibits that of cyclin E/CDK2, the sequential activation of cyclin E followed by the activation of cyclin A is important and tightly regulated in S phase.
G2 / M phase
In late S phase, cyclin A can also associate with CDK1. Cyclin A remains associated with CDK1 from late S into late G2 phase when it is replaced by cyclin B. Cyclin A/CDK1 is thought to be involved in the activation and stabilization of cyclin B/CDK1 complex. Once cyclin B is activated, cyclin A is no longer needed and is subsequently degraded through the ubiquitin pathway. Degradation of cyclin A/CDK1 induces mitotic exit.
Cyclin A/CDK2 complex was thought to be restricted to the nucleus and thus exclusively involved in S phase progression. New research has since debunked this assumption, shedding light on cyclin A/CDK2 migration to the centrosomes in late G2. Cyclin A binds to the mitotic spindle poles in the centrosome however, the mechanism by which the complex is shuttled to the centrosome is not well understood. It is suspected that the presence of cyclin A/CDK2 at the centrosomes may confer a means of regulating the movement of cyclin B/CDK1 to the centrosome and thus the timing of mitotic events.
A study in 2008 provided further evidence of cyclin A/CDK2 complex's role in mitosis. Cells were modified so their CDK2 was inhibited and their cyclin A2 gene was knocked out. These mutants entered mitosis late due to a delayed activation of the cyclin B/CDK1 complex. Coupling of microtubule nucleation in the centrosome with mitotic events in the nucleus was lost in the cyclin A knockout/CDK2 inhibited mutant cells.
Cyclin A has been shown to play a crucial role in the G2/M transition in Drosophila and Xenopus embryos.
Regulation
Transcription of cyclin A is tightly regulated and synchronized with cell cycle progression. Initiation of transcription of cyclin A is coordinated with passage of the R point, a critical transition point that is required for progression from G1 into S phase. Transcription peaks and plateaus mid-S phase and abruptly declines in late G2.
E2F and pRb
Transcription of cyclin A is predominantly regulated by the transcription factor E2F in a negative feedback loop. E2F is responsible for initiating the transcription of many critical S phase genes. Cyclin A transcription is off during most of G1 and then begins shortly after the R point.
The retinoblastoma protein (pRb) is involved in the regulation of cyclin A through its interaction with E2F. It exists in two states: hypophosphorylated pRb and hyperphosphorylated pRb. Hypophosphorylated pRb binds E2F, which prevents transcription of cyclin A. The absence of cyclin A prior to the R point is due to the inhibition of E2F by hypophosphorylated pRb. After the cell passes through the R point, cyclin D/E- complexes phosphorylate pRb. Hyperphosphorylated pRb can no longer bind E2F, E2F is released and cyclin A genes, and other crucial genes for S phase, are transcribed.
E2F initiates transcription of cyclin A by de-repressing the promoter. The promoter is bound by a repressor molecule called the cell-cycle-responsive element (CCRE). E2F binds to an E2F binding site on the CCRE, releasing the repressor from the promoter and allowing the transcription of cyclin A. Cyclin A/CDK2 will eventually phosphorylate E2F when cyclin A reaches a certain level, completing the negative feedback loop. Phosphorylation of E2F turns the transcription factor off, providing another level of controlling the transcription of cyclin A.
p53 and p21
Transcription of cyclin A is indirectly regulated by the tumor suppressor protein p53. P53 is activated by DNA damage and turns on several downstream pathways, including cell cycle arrest. Cell cycle arrest is carried out by the p53-pRb pathway. Activated p53 turns on genes for p21. P21 is a CDK inhibitor that binds to several cyclin/CDK complexes, including cyclin A-CDK2/1 and cyclin D/CDK4, and blocks the kinase activity of CDKs. Activated p21 can bind cyclin D/CDK4 and render it incapable of phosphorylating pRb. PRb remains hypophosphorylated and binds E2F. E2F is unable to activate the transcription of cyclins involved in cell cycle progression, such as cyclin A and the cell cycle is arrested at G1. Cell cycle arrest allows the cell to repair DNA damage before the cell divides and passes damaged DNA to daughter cells.
References
External links
Drosophila Cyclin A - The Interactive Fly
Cell cycle
Proteins
Cell cycle regulators | Cyclin A | [
"Chemistry",
"Biology"
] | 1,850 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Cellular processes",
"Molecular biology",
"Proteins",
"Cell cycle",
"Cell cycle regulators"
] |
5,629,262 | https://en.wikipedia.org/wiki/Cyclin%20E | Cyclin E is a member of the cyclin family.
Cyclin E binds to G1 phase Cdk2, which is required for the transition from G1 to S phase of the cell cycle that determines initiation of DNA duplication. The Cyclin E/CDK2 complex phosphorylates p27Kip1 (an inhibitor of Cyclin D), tagging it for degradation, thus promoting expression of Cyclin A, allowing progression to S phase.
Functions of Cyclin E
Like all cyclin family members, cyclin E forms complexes with cyclin-dependent kinases. In particular, Cyclin E binds with CDK2. Cyclin E/CDK2 regulates multiple cellular processes by phosphorylating numerous downstream proteins.
Cyclin E/CDK2 plays a critical role in the G1 phase and in the G1-S phase transition. Cyclin E/CDK2 phosphorylates retinoblastoma protein (Rb) to promote G1 progression. Hyper-phosphorylated Rb no longer interacts with the E2F transcription factor, thus releasing it to promote expression of genes that drive cells through G1 phase into S phase. Cyclin E/CDK2 also phosphorylates p27 and p21 during G1 and S phases, respectively. Smad3, a key mediator of the TGF-β pathway which inhibits cell cycle progression, can be phosphorylated by cyclin E/CDK2. The phosphorylation of Smad3 by cyclin E/CDK2 inhibits its transcriptional activity and ultimately facilitates cell cycle progression. CBP/p300 and E2F-5 are also substrates of cyclin E/CDK2. Phosphorylation of these two proteins stimulates the transcriptional events during cell cycle progression. Cyclin E/CDK2 can phosphorylate p220(NPAT) to promote histone gene transcription during cell cycle progression.
Apart from its function in cell cycle progression, cyclin E/CDK2 plays a role in the centrosome cycle. This function is performed by phosphorylating nucleophosmin (NPM). NPM is then released from binding to an unduplicated centrosome, thereby triggering duplication. CP110 is another cyclin E/CDK2 substrate which is involved in centriole duplication and centrosome separation. Cyclin E/CDK2 has also been shown to regulate the apoptotic response to DNA damage via phosphorylation of FOXO1.
Cyclin E and Cancer
Over-expression of cyclin E correlates with tumorigenesis. It is involved in various types of cancers, including breast, colon, bladder, skin and lung cancer. DNA copy-number amplification of cyclin E1 is involved in brain cancer. Besides that, dysregulated cyclin E activity causes cell lineage-specific abnormalities, such as impaired maturation due to increased cell proliferation and apoptosis or senescence.
Several mechanisms lead to the deregulated expression of cyclin E. In most cases, gene amplification causes the overexpression. Defective proteasome-mediated degradation is another mechanism. Loss-of-function mutations of FBXW7 were found in several cancer cells. FBXW7 encodes F-box proteins which target cyclin E for ubiquitination. Cyclin E overexpression can lead to G1 shortening, a decrease in cell size or loss of the serum requirement for proliferation.
Dysregulation of cyclin E occurs in 18-22% of breast cancers. Cyclin E is a prognostic marker in breast cancer; its altered expression increases with the increasing stage and grade of the tumor. Low molecular weight cyclin E isoforms have been shown to be of great pathogenetic and prognostic importance for breast cancer. These isoforms are resistant to CKIs, bind with CDK2 more efficiently and can stimulate cell cycle progression more efficiently. They have proved to be a remarkable marker of prognosis in early-stage, node-negative breast cancer. Importantly, recent research indicates that cyclin E overexpression is a mechanism of trastuzumab resistance in HER2+ breast cancer patients. Thus, co-treatment of trastuzumab with CDK2 inhibitors may be a valid strategy.
Cyclin E overexpression is implicated in carcinomas at various sites along the gastrointestinal tract. Among these carcinomas, cyclin E appears to be more important in stomach and colon cancer. Cyclin E overexpression was found in 50-60% of gastric adenomas and adenocarcinomas. In ~10% of colorectal carcinomas, cyclin E gene amplification is found, sometimes together with CDK2 gene amplification.
Cyclin E is also a useful prognostic marker for lung cancer. There is a significant association between cyclin E over-expression and the prognosis of lung cancer; increased expression of cyclin E is believed to correlate with a poorer prognosis.
References
External links
Cell cycle regulators
Proteins | Cyclin E | [
"Chemistry"
] | 1,126 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Molecular biology",
"Proteins",
"Cell cycle regulators"
] |
5,629,357 | https://en.wikipedia.org/wiki/DNA%20field-effect%20transistor | A DNA field-effect transistor (DNAFET) is a field-effect transistor which uses the field-effect due to the partial charges of DNA molecules to function as a biosensor. The structure of DNAFETs is similar to that of MOSFETs, with the exception of the gate structure which, in DNAFETs, is replaced by a layer of immobilized ssDNA (single-stranded DNA) molecules which act as surface receptors. When complementary DNA strands hybridize to the receptors, the charge distribution near the surface changes, which in turn modulates current transport through the semiconductor transducer.
Arrays of DNAFETs can be used for detecting single nucleotide polymorphisms (causing many hereditary diseases) and for DNA sequencing. Their main advantage compared to optical detection methods in common use today is that they do not require labeling of molecules. Furthermore, they work continuously and (near) real-time. DNAFETs are highly selective since only specific binding modulates charge transport.
References
Biosensors
Biotechnology
Field-effect transistors
MOSFETs | DNA field-effect transistor | [
"Chemistry",
"Biology"
] | 224 | [
"Bioinformatics stubs",
"Biotechnology stubs",
"Biotechnology",
"Biosensors",
"Biochemistry stubs",
"Bioinformatics",
"nan"
] |
5,629,366 | https://en.wikipedia.org/wiki/Yrast | Yrast ( , ) is a technical term in nuclear physics that refers to a state of a nucleus with a minimum of energy (when it is least excited) for a given angular momentum. Yr is a Swedish adjective sharing the same root as the English whirl. Yrast is the superlative of yr and can be translated whirlingest, although it literally means "dizziest" or "most bewildered". The yrast levels are vital to understanding reactions, such as off-center heavy ion collisions, that result in high-spin states.
Yrare is the comparative of yr and is used to refer to the second-least energetic state of a given angular momentum.
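The definition can be made concrete with a small sketch: given a table of excited states as (angular momentum, excitation energy) pairs, the yrast state for each angular momentum is simply the lowest-energy entry and the yrare state the second lowest. The Python below is illustrative only, and the level scheme in the example is hypothetical rather than data for any real nuclide.

```python
from collections import defaultdict

def yrast_and_yrare(levels):
    """For each angular momentum J, return the yrast (lowest-energy) level
    and, where one exists, the yrare (second-lowest) level."""
    by_spin = defaultdict(list)
    for spin, energy in levels:
        by_spin[spin].append(energy)
    table = {}
    for spin, energies in sorted(by_spin.items()):
        energies.sort()
        table[spin] = {"yrast": energies[0],
                       "yrare": energies[1] if len(energies) > 1 else None}
    return table

# Hypothetical level scheme: (J, excitation energy in keV).
levels = [(0, 0.0), (2, 1500.0), (2, 2300.0), (4, 3100.0), (4, 3900.0), (6, 4800.0)]
print(yrast_and_yrare(levels))
```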
Background
An unstable nucleus may decay in several different ways: it can eject a neutron, proton, alpha particle, or other fragment; it can emit a gamma ray; it can undergo beta decay. Because of the relative strengths of the fundamental interactions associated with those processes (the strong interaction, electromagnetism, and the weak interaction respectively), they usually occur with frequencies in that order. Theoretically, a nucleus has a very small probability of emitting a gamma ray even if it could eject a neutron, and beta decay rarely occurs unless both of the other two pathways are highly unlikely.
In some instances, however, predictions based on this model underestimate the total amount of energy released in the form of gamma rays; that is, nuclei appear to have more than enough energy to eject neutrons, but decay by gamma emission instead. This discrepancy is accounted for by the energy stored in nuclear angular momentum, and documentation and calculation of the yrast levels of a given system may be used to analyze such a situation.
The energy stored in the angular momentum of an atomic nucleus can also be responsible for the emission of larger-than-expected particles, such as alpha particles over single nucleons, because they can carry away angular momentum more effectively. This is not the only reason alpha particles are preferentially emitted, though; another reason is simply that alpha particles (He-4 nuclei) are energetically very stable in and of themselves.
Yrast isomers
Sometimes there is a large gap between two yrast states. For example, the nucleus 95Pd has a 21/2 state that lies below the lowest 19/2, 17/2, and 15/2 states. This state does not have enough energy to undergo strong particle decay, and because of the large spin difference, gamma decay from the 21/2 state to the 13/2 state below is very unlikely. The more likely decay option is beta decay, which forms an isomer with an unusually long half-life of 14 seconds.
An exceptional example is the J=9 state of tantalum-180, which is a very low-lying yrast state only 77 keV above the ground state. The ground state has J=1, which is too large a gap for gamma decay to occur. Alpha and beta decay are also suppressed, so strongly that the resulting isomer, tantalum-180m, is effectively stable for all practical purposes, and has never been observed to decay. Tantalum-180m is the only currently known yrast isomer to be observationally stable.
Some superheavy isotopes (such as copernicium-285) have longer-lived isomers with half-lives on the order of minutes. These may be yrasts, but the exact angular momentum and energy is often hard to determine for these nuclides.
References
Swedish words and phrases
Nuclear physics
Angular momentum | Yrast | [
"Physics",
"Mathematics"
] | 739 | [
"Physical quantities",
"Quantity",
"Nuclear physics",
"Angular momentum",
"Momentum",
"Moment (physics)"
] |
5,629,779 | https://en.wikipedia.org/wiki/Heterodont | In anatomy, a heterodont (from Greek, meaning 'different teeth') is an animal which possesses more than a single tooth morphology.
Human dentition, for example, is heterodont and diphyodont.
In vertebrates, heterodont pertains to animals where teeth are differentiated into different forms. For example, members of the Synapsida generally possess incisors, canines ("dogteeth"), premolars, and molars. The presence of heterodont dentition is evidence of some degree of feeding and or hunting specialization in a species. In contrast, homodont or isodont dentition refers to a set of teeth that possess the same tooth morphology.
In invertebrates, the term heterodont refers to a condition where teeth of differing sizes occur in the hinge plate, a part of the Bivalvia.
References
Zoology
Dentition types | Heterodont | [
"Biology"
] | 192 | [
"Zoology"
] |
5,630,017 | https://en.wikipedia.org/wiki/Cadiot%E2%80%93Chodkiewicz%20coupling | The Cadiot–Chodkiewicz coupling in organic chemistry is a coupling reaction between a terminal alkyne and a haloalkyne catalyzed by a copper(I) salt such as copper(I) bromide and an amine base. The reaction product is a 1,3-diyne or di-alkyne.
The reaction mechanism involves deprotonation by base of the terminal alkyne proton followed by formation of a copper(I) acetylide. A cycle of oxidative addition and reductive elimination on the copper centre then creates a new carbon-carbon bond.
Scope
Unlike the related Glaser coupling the Cadiot–Chodkiewicz coupling proceeds selectively and will only couple the alkyne to the haloalkyne, giving a single product. By comparison the Glaser coupling would simply produce a distribution of all possible couplings.
In one study the Cadiot–Chodkiewicz coupling has been applied in the synthesis of acetylene macrocycles starting from cis-1,4-diethynyl-1,4-dimethoxycyclohexa-2,5-diene. This compound is also the starting material for the corresponding dibromide, prepared using N-bromosuccinimide (NBS) and silver nitrate.
The coupling reaction itself takes place in methanol with piperidine, the hydrochloric acid salt of hydroxylamine and copper(I) bromide.
See also
Glaser coupling – Another alkyne coupling reaction catalysed by a copper(I) salt.
Sonogashira coupling – Pd/Cu catalysed coupling of an alkyne with an aryl or vinyl halide
Castro–Stephens coupling – A cross-coupling reaction between a copper(I) acetylide and an aryl halide
References
Substitution reactions
Carbon-carbon bond forming reactions
Name reactions | Cadiot–Chodkiewicz coupling | [
"Chemistry"
] | 390 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
5,630,475 | https://en.wikipedia.org/wiki/Rubrocurcumin | Rubrocurcumin is a red-colored dye that is formed by the reaction of curcumin and boric acid.
Synthesis
The reaction of curcumin with borates in the presence of oxalic acid produces rubrocurcumin.
Characteristics
Rubrocurcumin produces a red-colored solution.
Rubrocurcumin is a neutral molecule, while rosocyanine is ionic. Compared to rosocyanine, one molecule of curcumin in rubrocurcumin is replaced with oxalate.
Complexes with boron such as rubrocurcumin are called 1,3,2-dioxaborines.
References
Curcuminoid dyes
Tetrahydroxyborate esters
Complexometric indicators
Oxalate esters | Rubrocurcumin | [
"Chemistry",
"Materials_science"
] | 169 | [
"Complexometric indicators",
"Chromism"
] |
5,630,722 | https://en.wikipedia.org/wiki/Lake%20Bastrop | Lake Bastrop is a reservoir on Spicer Creek in the Colorado River basin northeast of the town of Bastrop in central Bastrop County, Texas, United States.
Description
The reservoir was formed in 1964 by the construction of a dam by the Lower Colorado River Authority. The lake serves primarily as a power plant cooling pond for the Sim Gideon Power Plant operated by the LCRA and the Lost Pines Power Project 1, owned by GenTex Power Corporation, a wholly owned affiliate of the LCRA. Lake Bastrop also serves as a venue for outdoor recreation, including fishing, boating, swimming, camping and picnicking, and is maintained at a constant level year round.
Approximately one quarter of the shoreline of the Lake is privately owned by the Capitol Area Council, Boy Scouts of America. This property is used for the Lost Pines Scout Reservation, consisting of Cub World at Camp Tom Wooten, for Cub Scouts and Lost Pines Boy Scout Camp, for Boy Scouts. The Scouts leased the property from the LCRA starting in 1965, buying the land in the late 1990s.
Fish populations
Lake Bastrop has been stocked with species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in Lake Bastrop include catfish, crappie, perch, sunfish, carp, and largemouth bass.
See also
List of dams and reservoirs in Texas
References
Bastrop
Protected areas of Bastrop County, Texas
Bodies of water of Bastrop County, Texas
Cooling ponds | Lake Bastrop | [
"Chemistry",
"Environmental_science"
] | 302 | [
"Cooling ponds",
"Water pollution"
] |
5,630,800 | https://en.wikipedia.org/wiki/Distributed%20amplifier | Distributed amplifiers are circuit designs that incorporate transmission line theory into traditional amplifier design to obtain a larger gain-bandwidth product than is realizable by conventional circuits.
History
The design of the distributed amplifiers was first formulated by William S. Percival in 1936. In that year Percival proposed a design by which the transconductances of individual vacuum tubes could be added linearly without lumping their element capacitances at the input and output, thus arriving at a circuit that achieved a gain-bandwidth product greater than that of an individual tube. Percival's design did not gain widespread awareness however, until a publication on the subject was authored by Ginzton, Hewlett, Jasberg, and Noe in 1948. It is to this later paper that the term distributed amplifier can actually be traced. Traditionally, DA design architectures were realized using vacuum tube technology.
Current technology
More recently, III-V semiconductor technologies, such as GaAs and InP have been used. These have superior performance resulting from higher bandgaps (higher electron mobility), higher saturated electron velocity, higher breakdown voltages and higher-resistivity substrates. The latter contributes much to the availability of higher quality-factor (Q-factor or simply Q) integrated passive devices in the III-V semiconductor technologies.
To meet the marketplace demands on cost, size, and power consumption of monolithic microwave integrated circuits (MMICs), research continues in the development of mainstream digital bulk-CMOS processes for such purposes. The continuous scaling of feature sizes in current IC technologies has enabled microwave and mm-wave CMOS circuits to directly benefit from the resulting increased unity-gain frequencies of the scaled technology. This device scaling, along with the advanced process control available in today's technologies, has recently made it possible to reach a transition frequency (ft) of 170 GHz and a maximum oscillation frequency (fmax) of 240 GHz in a 90 nm CMOS process.
Theory of operation
The operation of the DA can perhaps be most easily understood when explained in terms of the traveling-wave tube amplifier (TWTA). The DA consists of a pair of transmission lines with characteristic impedances of Z0 independently connecting the inputs and outputs of several active devices. An RF signal is thus supplied to the section of transmission line connected to the input of the first device. As the input signal propagates down the input line, the individual devices respond to the forward traveling input step by inducing an amplified complementary forward traveling wave on the output line. This assumes the delays of the input and output lines are made equal through selection of propagation constants and lengths of the two lines and as such the output signals from each individual device sum in phase. Terminating resistors Zg and Zd are placed to minimize destructive reflections.
The transconductive gain of each device is gm and the output impedance seen by each transistor is half the characteristic impedance of the transmission line, so the overall voltage gain of the DA is:
Av = ½ n·gm·Z0, where n is the number of stages.
Neglecting losses, the gain demonstrates a linear dependence on the number of devices (stages). Unlike the multiplicative nature of a cascade of conventional amplifiers, the DA demonstrates an additive quality. It is this synergistic property of the DA architecture that makes it possible for it to provide gain at frequencies beyond that of the unity-gain frequency of the individual stages. In practice, the number of stages is limited by the diminishing input signal resulting from attenuation on the input line. Means of determining the optimal number of stages are discussed below. Bandwidth is typically limited by impedance mismatches brought about by frequency dependent device parasitics.
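The lossless gain expression above is easy to evaluate numerically; the short sketch below does so and converts the result to decibels. The component values in the example (stage count, transconductance, line impedance) are purely illustrative assumptions, not figures from any particular design.

```python
import math

def da_voltage_gain(n, gm, z0):
    """Ideal (lossless) distributed-amplifier voltage gain: Av = n * gm * Z0 / 2."""
    return 0.5 * n * gm * z0

def voltage_gain_db(av):
    """Convert a voltage gain ratio to decibels."""
    return 20.0 * math.log10(av)

# Illustrative values: 4 stages, gm = 50 mS per device, Z0 = 50 ohm lines.
av = da_voltage_gain(n=4, gm=0.05, z0=50.0)
print(av, voltage_gain_db(av))  # gain of 5, i.e. about 14 dB
```

Because the gain grows only linearly with the number of stages while the input signal is attenuated along the input line, adding stages eventually stops paying off, which is the trade-off behind the optimal stage count mentioned above.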
The DA architecture introduces delay in order to achieve its broadband gain characteristics. This delay is a desired feature in the design of another distributive system called the distributed oscillator.
Lumped elements
Delay lines are made of lumped elements of L and C. The parasitic L and C from the transistors are used for this, and usually some L is added to raise the line impedance. Because of the Miller effect in the common-source amplifier, the input and the output transmission lines are coupled. For example, for voltage inverting and current amplifying, the input and the output form a shielded balanced line. The current increases in the output transmission line with every subsequent transistor, and therefore less and less L is added to keep the voltage constant and more and more extra C is added to keep the velocity constant. This C can come from the parasitics of a second stage. These delay lines do not have flat dispersion near their cut-off, so it is important to use the same L-C periodicity in the input and the output. If transmission lines are inserted instead, the input and output will disperse away from each other.
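As a rough numerical illustration of these lumped-line relations, the standard constant-k approximations below give the nominal line impedance, cut-off frequency and per-section delay from the per-section L and C. The component values in the example are assumptions chosen only to give round numbers, not values from any published design.

```python
import math

def lumped_line_properties(L, C):
    """Nominal properties of one constant-k L-C delay-line section.

    Z0 = sqrt(L/C)        nominal characteristic impedance
    fc = 1/(pi*sqrt(LC))  cut-off frequency
    td = sqrt(LC)         approximate delay per section (well below cut-off)
    """
    z0 = math.sqrt(L / C)
    fc = 1.0 / (math.pi * math.sqrt(L * C))
    td = math.sqrt(L * C)
    return z0, fc, td

# Illustrative values: 0.5 nH per section and a 0.2 pF device capacitance.
z0, fc, td = lumped_line_properties(L=0.5e-9, C=0.2e-12)
print(f"Z0 = {z0:.0f} ohm, fc = {fc/1e9:.1f} GHz, delay = {td*1e12:.1f} ps")
```

This makes the text's point concrete: adding series L raises Z0 = sqrt(L/C), while larger device capacitance lowers both the impedance and the cut-off frequency.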
For a distributed amplifier the input is fed in series into the amplifiers and parallel out of them. To avoid losses in the input, no input signal is allowed to leak through. This is avoided by using a balanced input and output also known as push–pull amplifier. Then all signals which leak through the parasitic capacitances cancel. The output is combined in a delay line with decreasing impedance. For narrow band operation other methods of phase-matching are possible, which avoid feeding the signal through multiple coils and capacitors. This may be useful for power-amplifiers.
The single amplifiers can be of any class. There may be some synergy between distributed class E/F amplifiers and some phase-matching methods. Only the fundamental frequency is used in the end, so this is the only frequency, which travels through the delay line version.
Because of the Miller effect, a common-source transistor acts as a capacitor (non-inverting) at high frequencies and has an inverting transconductance at low frequencies. The channel of the transistor has three dimensions. One dimension, the width, is chosen depending on the current needed. The trouble is that, for a single transistor, parasitic capacitance and gain both scale linearly with the width. For the distributed amplifier, the capacitance – that is, the width – of each transistor is chosen based on the highest frequency, and the width needed for the current is split across all transistors.
Applications
Note that those termination resistors are usually not used in CMOS, but the losses due to these are small in typical applications. In solid state power amplifiers often multiple discrete transistors are used for power reasons anyway. If all transistors are driven in a synchronized fashion a very high gate drive power is needed. For frequencies at which small and efficient coils are available distributed amplifiers are more efficient.
Voltage can be amplified by a common gate transistor, which shows no miller effect and no unit gain frequency cut off. Adding this yields the cascode configuration. The common gate configuration is incompatible with CMOS; it adds a resistor, that means loss, and is more suited for broadband than for high efficiency applications.
Radio
Acousto-optic modulator
Time-to-digital converter
See also
Gunn diode is a device without any parasitic C or L, making it very suitable for broadband applications
Regenerative circuit is a circuit using the parasitics of a single transistor for a high-frequency narrow-band amplifier
Armstrong oscillator is a circuit using the parasitics of a single transistor for a high-frequency narrow-band oscillator
References
External links
Microwaves101.com – Distributed amplifiers
Electronic amplifiers
Distributed element circuits | Distributed amplifier | [
"Technology",
"Engineering"
] | 1,539 | [
"Electronic engineering",
"Electronic amplifiers",
"Amplifiers",
"Distributed element circuits"
] |
5,631,045 | https://en.wikipedia.org/wiki/Nicergoline | Nicergoline, sold under the brand name Sermion among others, is an ergot derivative used to treat senile dementia and other disorders with vascular origins. Internationally it has been used for frontotemporal dementia as well as early onset in Lewy body dementia and Parkinson's dementia. It decreases vascular resistance and increases arterial blood flow in the brain, improving the utilization of oxygen and glucose by brain cells. It has similar vasoactive properties in other areas of the body, particularly the lungs. Unlike many other ergolines, such as ergotamine, nicergoline is not associated with cardiac fibrosis.
It is used for vascular disorders such as cerebral thrombosis and atherosclerosis, arterial blockages in the limbs, Raynaud's disease, vascular migraines, and retinopathy.
Nicergoline has been registered in over fifty countries and has been used for more than three decades for the treatment of cognitive, affective, and behavioral disorders of older people.
Medical uses
Nicergoline is used in the following cases:
Acute and chronic cerebral metabolic-vascular disorders (cerebral arteriosclerosis, thrombosis and cerebral embolism, transitory cerebral ischaemia). Acute and chronic peripheral metabolic-vascular disorders (organic and functional arteriopathies of the limbs), Raynaud's disease and other syndromes caused by altered peripheral irrigation.
Migraines of vascular origin
Coadjutant therapy in clinical situations accompanied by platelet hyper-aggregability, arterial tension.
Corio-retinal vascular disorders: diabetic retinopathy, macular degeneration and retinal angiosclerosis
Oto-vestibular problems of a vascular nature: dizziness, auditory hallucinations, hypoacusis.
Dosages for known conditions are usually 5–10 mg three times a day; for anti-aging or preventative purposes, 5 mg once or twice a day may be considered adequate.
Contraindications
Persons suffering from acute bleeding, myocardial infarction (heart conditions), hypertension, bradycardia or using alpha or beta receptor agonists should consult with their physician before use.
Although toxicology studies have not shown nicergoline to have any teratogenic effect, the use of this medicine during pregnancy should be limited to those cases where it is absolutely necessary.
On 28 June 2013, the European Medicines Agency recommended restricting the use of medicines containing ergot derivatives, including nicergoline. They stated that "these medicines should no longer be used to treat several conditions involving blood circulation problems or problems with memory and sensation, or to prevent migraine headaches, since the risks are greater than the benefits in these indications. This is based on a review of data showing an increased risk of fibrosis (formation of excess connective tissue that can damage organs and body structures) and ergotism (symptoms of ergot poisoning, such as spasms and obstructed blood circulation) with these medicines." However, only a subset of ergolines are associated with fibrosis and evidence suggests that nicergoline does not carry the same fibrotic risk like other ergoline derivatives such as ergotamine.
Nicergoline is considered unsafe in porphyria.
Side effects
The side effects of nicergoline are usually limited to nausea, hot flushes, mild gastric upset, hypotension and dizziness. At high drug dosages, bradycardia, increased appetite, agitation, diarrhea and perspiration were reported. Most of the available literature suggests that the side effects of nicergoline are mild and transient.
Interactions
Nicergoline is known to enhance the cardiac depressive effects of propranolol. At high dosages, it is advisable to seek one's physician's guidance if combining with potent vasodilators such as bromocriptine, Ginkgo biloba, picamilon, vinpocetine or xantinol nicotinate.
Pharmacology
Pharmacodynamics
Nicergoline is an ergot alkaloid derivative that acts as a potent and selective α1A-adrenergic receptor antagonist. The IC50 of nicergoline in vitro has been reported to be 0.2 nM. The primary action of nicergoline is to increase arterial blood flow by vasodilation. Furthermore, it is known that nicergoline inhibits platelet aggregation. Studies have shown that nicergoline also increases nerve growth factor in the aged brain. In addition to the α1A-adrenergic receptor, nicergoline is an antagonist of the serotonin 5-HT1A receptor (IC50 = 6 nM) and shows moderate affinity for serotonin 5-HT2 and α2-adrenergic receptors and low affinity for the dopamine D1 and D2 and muscarinic acetylcholine M1 and M2 receptors. The major metabolites of nicergoline, MMDL and MDL, show low or no affinity for adrenergic, serotonin, dopamine, or acetylcholine receptors.
Society and culture
Generic names
Nicergoline is the generic name of the drug and its , , , and .
Brand names
In some countries, Sermion is marketed by Viatris after Upjohn was spun off from Pfizer.
References
5-HT1A antagonists
Alpha-1 blockers
Antidementia agents
Bromoarenes
Cerebral vasodilators
Ethers
Ergolines
Nicotinate esters | Nicergoline | [
"Chemistry"
] | 1,169 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
5,631,061 | https://en.wikipedia.org/wiki/Operation%20Masher | Operation Masher, also known as Operation White Wing, (24 January—6 March 1966) was the largest search and destroy mission that had been carried out in the Vietnam War up until that time. It was a combined mission of the United States Army, Army of the Republic of Vietnam (ARVN), and Republic of Korea Army (ROK) in Bình Định Province on the central coast of South Vietnam. The People's Army of Vietnam (PAVN) 3rd Division, made up of two regiments of North Vietnamese regulars and one regiment of main force Viet Cong (VC) guerrillas, controlled much of the land and many of the people of Bình Định Province, which had a total population of about 800,000. A CIA report in 1965 said that Binh Dinh was "just about lost" to the communists.
The name "Operation Masher" was changed to "Operation White Wing", because President Lyndon Johnson wanted the name changed to one that sounded more benign. Adjacent to the operational area of Masher/White Wing in Quang Ngai province the U.S. and South Vietnamese Marine Corps carried out a complementary mission called Operation Double Eagle.
The 1st Cavalry Division (Airmobile) was the principal U.S. ground force involved in Operation Masher, and its commanders judged the operation a success. They claimed that the PAVN 3rd Division had been dealt a hard blow, but intelligence reports indicated that, a week after the withdrawal of the 1st Cavalry, PAVN soldiers were returning to take control of the area where Operation Masher had taken place. Most of the PAVN/VC had slipped away before or during the operation, and the discrepancy between weapons recovered and the body count led to criticism of the operation.
Allegations aired during the Fulbright Hearings that there were six civilian casualties for every reported PAVN/VC casualty prompted growing criticism of US conduct of the war and contributed to greater public dissension at home. During Operation Masher, the ROK Capital Division was alleged to have committed the Bình An/Tây Vinh massacre between 12 February and 17 March 1966, in which over 1,000 civilians were allegedly killed. The operation left almost 125,000 people in this province homeless, and the PAVN/VC forces reappeared just months after the US had conducted the operation.
Background
Bình Định Province was a traditional communist and VC stronghold. Binh Dinh consisted of a narrow, heavily cultivated coastal plain with river valleys separated by ridges and low mountains reaching into the interior. The main effort of the campaign in Binh Dinh would come on the Bồng Sơn Plain and in the mountains and valleys that bordered it. The plain, a narrow strip of land starting just north of the town of Bồng Sơn, ran northward along the coast into I Corps. Rarely more than 25 km wide, it consisted of a series of small deltas, which often backed into gently rolling terraces some 30-90m in height, and, at irregular intervals, of a number of mountainous spurs from the highlands. These spurs created narrow river valleys with steep ridges that frequently provided hideouts for PAVN/VC units or housed PAVN/VC command, control and logistical centers. The plain itself was bisected by the east-west Lai Giang River, which was in turn fed by two others, the An Lao, flowing from the northwest and the Kim Son, flowing from the southwest. These two rivers formed isolated but fertile valleys west of the coastal plain. The climate in the region was governed by the northeast monsoon. The heaviest rains had usually ended by December, but a light steady drizzle, which the French had called crachin weather and occasional torrential downpours could be expected to occur through March. These weather systems would at times limit the availability of air support.
The vital artery of Highway 1 ran north–south through Binh Dinh. The area of Operation Masher was about north to south and reached a maximum of inland from the South China Sea. The U.S. Marines' Operation Double Eagle extended northward from Masher and the ROK's Operation Flying Tiger extended southward. South Vietnamese forces participated in all three operations.
The First Cavalry Division (Airmobile) was selected by U.S. Commander William Westmoreland to carry out the operation. The 1st Cavalry had borne the brunt of the combat during the Siege of Plei Me and the Battle of Ia Drang in October and November 1965, and some battalions of the 1st Cavalry had sustained heavy casualties. More than 5,000 soldiers in the division were recent arrivals in Vietnam with little combat experience. The South Vietnamese 22nd Division stationed in Binh Dinh had also suffered heavy casualties in recent fighting and was on the defensive.
The opposition to the American and South Vietnamese units participating in Operation Masher/White Wing was the PAVN 3rd Division, consisting of approximately 6,000 soldiers in two regiments of PAVN regulars who had recently infiltrated into South Vietnam via the Ho Chi Minh Trail and one regiment of VC guerrillas who had been fighting the South Vietnamese government since 1962. The majority of the population of Binh Dinh was believed to be supportive of the VC.
The plan of Operation Masher was for the U.S., South Vietnamese and ROK soldiers to sweep north and for the U.S. and South Vietnamese marines to sweep south catching and killing the PAVN/VC forces between the allied forces. Orders for the U.S. forces in Operation Masher were to "locate and destroy VC/NVA units; enhance the security of GVN [Government of South Vietnam] installations in [provincial capital] Bồng Sơn, and to lay the groundwork for restoration of GVN control of the population and rich coastal plain area." The primary metric for judging the success of the operation would be the body count of PAVN/VC soldiers killed.
Preparations
The 1st Cavalry Division broke the campaign into two parts. During the first, primarily a preparation and deception operation, a brigade-size task force would establish a temporary command and forward supply base at Phu Cat on Highway 1 south of the area of operations, secure the highway somewhat northward, and start patrolling around Phu Cat to convey the impression that the true target area was well away from the plain. During the second, division elements would move to Bồng Sơn itself and launch a series of airmobile hammer-and-anvil operations around the plain and the adjacent valleys to flush the PAVN/VC toward strong blocking positions. General Harry Kinnard assigned the mission to Colonel Hal Moore's 3rd Brigade, but if need be, he was ready to add a second brigade to the operation to intensify the pressure and pursuit.
On the morning of 25 January the men of the 3rd Brigade at Camp Radcliff began their move to staging areas in eastern Binh Dinh. Two battalions, Lieutenant colonel Raymond L. Kampe's 1st Battalion, 7th Cavalry Regiment and Lt. Col. Rutland D. Beard's 1st Battalion, 12th Cavalry Regiment went by road and air to Phu Cat, joined South Koreans in securing the airfield and support base, and carried out wide-ranging search and destroy actions nearby that met only light resistance. Meanwhile, Lt. Col. Robert McDade's 2nd Battalion, 7th Cavalry, with about 80 percent of its authorized strength and thus still not fully reconstituted after the fight at LZ Albany, boarded a dozen C-123s at the airstrip for the short ride into Bồng Sơn. One of the C-123s crashed into mountains near An Khe, killing all four crewmen and 42 passengers on board. The rest of the battalion deployed without incident and then helicoptered north to Landing Zone Dog, where engineers started building an airstrip and digging in artillery.
On paper, the hammer-and-anvil attack plan was not complicated. After 3rd Brigade elements secured mountain positions west of the Bồng Sơn and set up Firebases Brass and Steel, covering the northern and southern parts of the search area, 2/7th Cavalry would push north from LZ Dog and 2/12th Cavalry, also staging from LZ Dog, would work its way south from the opposite end of the target zone. Meanwhile, with the South Vietnamese Airborne Brigade acting as an eastern blocking force along Highway 1, 1/7th Cavalry would air-assault onto the high ground to the west and push east towards 2/7th Cavalry and 2/12th Cavalry. If PAVN/VC units were in the area, the 3rd Brigade would bring them to battle or destroy them as they fled.
The operation
Phase One: Bồng Sơn
Operation Masher began officially on the morning of 28 January 1966. Low clouds, wind and heavy rain prevented the movement of artillery to Firebase Brass. Lacking supporting fire, Moore cancelled the 2/12th Cavalry's mission. In the meantime, PAVN/VC fire downed a CH-47 helicopter at Landing Zone Papa north of Bồng Sơn and Kampe responded by sending a 1/7th Cavalry company to secure
the crash site. When it too came under fire, he set aside his original mission, the attack east from the mountains, and moved his two other companies to LZ Papa. By the time they arrived, however, the PAVN/VC had withdrawn. Kampe's units spent the night at the landing zone. McDade went ahead with the mission, directing his men to begin scouring the hamlets that started about 2 km north of LZ Dog and extended 4 km further up the plain. Company A, 2/7th Cavalry, understrength at two rifle platoons because of the crash three days earlier, entered the area at Landing Zone 2 and pushed north through rice paddies. Company B flew to Firebase Steel to secure it for an artillery battery.
Company C deployed by helicopter to the northern edge of the target in order to sweep to the southwest. The sandy plain where it set down, Landing Zone 4, seemed safe, a relatively open tract in the hamlet of Phung Du 2 with a graveyard in its midst and tall palm trees on three sides. Company C omitted the artillery preparation that normally preceded a landing due to the proximity of the village. The first helicopter lift landed at LZ 4 at 08:25, with no PAVN/VC reaction. When the second lift came ten minutes later, however, the PAVN 7th Battalion, 22nd Regiment, entrenched in earthworks, palm groves and bamboo thickets throughout the hamlet, poured mortar and machine gun fire into the landing zone. The Company C commander, Captain Fesmire, waved the second flight away, expecting the troops to be dropped at an alternative landing zone a few hundred meters to the southwest. Instead, they ended up at four nearby but scattered locations. Returning ten minutes later with a third lift, the helicopters unloaded the men at a fifth site. By 08:45 Company C was on the ground, but the unit was so fragmented and enemy fire so intense that the various parts found maneuver difficult and effective communication with one another impossible. Meanwhile, heavy rain impeded the provision of adequate air support, and the men were so dispersed that artillery was of little use. American casualties soon littered the hamlet ground.
McDade ordered Company A to reinforce Company C but when they reached the southern edge of the landing zone, they also came under fire. Although the men formed a perimeter near a paddy dike, they were soon pinned down and never reached Company C. Early in the afternoon McDade joined Company A, but to no effect. Finally, six helicopters carrying reinforcements from Company B reached LZ 4. But the effort generated so much PAVN fire that all six were hit and two were driven off. Only the command group and part of one platoon were able to land and they quickly found themselves in a cross fire. Under heavy rain McDade managed to locate the fragmented Company C and succeeded in bringing in artillery support. Meanwhile, the darkness and poor weather gave Fesmire the cover he needed to pull Company C together. As he prepared to settle in for the night, he received orders from McDade to move south, closer to the rest of the battalion. Under heavy fire, he completed the linkup at 04:30. Along with 20 wounded, his men carried with them the bodies of eight killed.
After dawn on 29 January the low overcast lifted, and fighter-bombers pounded the area to McDade's north, detonating PAVN ammunition and causing large fires. Soon after, McDade's companies, reinforced by 2/12th Cavalry, swept north to eliminate the last PAVN from the hamlet. But the clearing operation took another day, and was completed only when elements of 1/7th Cavalry joined the sweep out of the landing zone.
From then on combat tapered off and Kinnard ordered an end to that phase of the operation, effective at 12:00 on 4 February. The 3rd Brigade had cleared elements of the 22nd Regiment from the coastal plain claiming 566 PAVN/VC killed. US losses were 123 dead (including the 42 troops and four crew killed in the C-123 crash) and two helicopters were shot down and 29 damaged.
Phase Two: An Lao valley
On 28 January three Project DELTA U.S. Special Forces teams consisting of 17 personnel were inserted in the An Lao Valley for reconnaissance. The teams ran into immediate trouble and when rescued a day later seven had been killed and three wounded. Project DELTA Commander Major Charles Beckwith was seriously wounded while extracting the teams. The 1st Cavalry was unable to provide support due to the fight at LZ 4. Beckwith was criticized for going into the An Lao valley, under VC control for 15 years, without South Vietnamese counterparts and ground intelligence and in poor weather.
The An Lao Valley and the surrounding highlands were the next target of the 1st Cavalry. Kinnard believed that the headquarters of the PAVN 3rd Division were located there. Bad weather delayed the beginning of the operation to 6 February. The U.S. Marines blocked the northern entrance of the valley, the ARVN blocked the southern entrance, and three battalions were landed in the valley, however the PAVN/VC forces had withdrawn. The 1st Cavalry discovered large caches of rice and defensive works, but reported killing only 11 PAVN/VC soldiers at a loss to American forces of 49 wounded.
The U.S. offered to assist the inhabitants in the An Lao valley to leave the valley and escape from PAVN/VC rule and 4,500 of 8,000 occupants did so. The U.S. reported that 3,000 people were moved by U.S. helicopter, the others leaving the valley on foot.
Phase Three: Kim Son Valley
The Kim Son Valley consisted of seven small river valleys about southwest of Bồng Sơn. Three American battalions were deployed to the valley. On 11 February the 1st Cavalry established ambush positions in the highlands at the exits to each of the valleys and on 12 February began a sweep up the valley and outward, hoping to catch the PAVN/VC as they retreated. Initially unsuccessful, over the next few days the number of enemy dead slowly mounted as the result of over a dozen clashes with the Americans. On the morning of 15 February a platoon from Company B, 2/7th Cavalry, came under small-arms and mortar fire while patrolling about 4 km southeast of Firebase Bird, near the valley center. Captain Diduryk, the company commander, initially estimated that the opposing force was no larger than a reinforced platoon, but it soon became apparent that he had bumped into at least two companies occupying a 300m long position running along a jungled streambank and up a hillside. Intelligence later identified the force as part of the VC 93rd Battalion, 2nd Regiment. Fire from Company B's mortar platoon, from helicopter gunships and Skyraiders and from artillery at Firebase Bird pounded the PAVN, then Diduryk's men attacked. One platoon fixed bayonets and charged the dug-in defenders across the stream. A second pushed north to block an escape route, and a third stayed in reserve. Unnerved by the frontal assault, the VC retreated in disorder. Many stumbled into the open and were quickly killed. Those who survived fled to the north, where they came within range of the waiting platoon. A smaller group attempted to escape southward but came under fire from the reserve platoon, which took many prisoners, including 93rd Battalion commander Lt. Col. Dong Doan who inadvertently provided his interrogators with enough information to identify the locations of both his regiment and its headquarters. During the fight Company B killed 59 VC and possibly another 90 for the loss of two killed.
On 16 February Kinnard decided to replace Colonel Moore's brigade with Col. Elvy B. Roberts' 1st Brigade. The next day, the 1st and 2nd Battalions, 7th Cavalry, returned to Camp Radcliff, while 1/12th Cavalry remained behind to join 1st Battalion, 8th Cavalry Regiment and 2/8th Cavalry. Together, the three battalions combed the area around Firebase Bird, but the PAVN/VC remained in hiding. Frustrated, on 22 February Roberts changed the direction of the hunt, dispatching 1/12th Cavalry to search Go Chai Mountain, 14 km east of Bird and 7 km west of Highway 1. During the afternoon of 23 February 1/12th Cavalry met an estimated PAVN company, probably from the 7th Battalion, 12th Regiment. They maintained contact until dark, but then the PAVN escaped. Operations in the area continued until the 27th, but when nothing more of substance occurred, Kinnard decided to abandon the Kim Son Valley. That evening he attached two battalions from 1st Brigade to 2nd Brigade and returned the 1st's command group and 1/12th Cavalry to Camp Radcliff. In all, the 1st Brigade had accounted for up to 160 PAVN/VC killed while losing 29 of its own men.
While the 1st and 3rd Brigades were patrolling the Kim Son Valley between 11 and 27 February, Colonel William R. Lynch's 2nd Brigade closed down operations north of the Lai Giang and transferred his command post to Landing Zone Pony just east of the valley. The move was triggered by Colonel Doan's revelation that the 2nd Regiment was operating in the mountains southeast of Pony, information that seemed to be confirmed when radio intercepts indicated the presence of a major PAVN/VC headquarters there. On 16 February Lynch began a block and sweep of the suspected terrain. Lt Col. Meyer's 2nd Battalion, 5th Cavalry Regiment, set up three blocking positions: Recoil, roughly 6 km east of the Kim Son Valley; Joe, 4 km southwest of Recoil; and Mike, just over 2 km north of Recoil. The sweep force, 1/5th Cavalry, plus a battery of the 1st Battalion, 77th Artillery Regiment, helicoptered to Landing Zone Coil approximately 6 km northeast of Recoil. 2/12th Cavalry remained near Pony as a reserve. At 06:30 on 17 February, the battery at Coil began pounding the area between Coil and Recoil. As the barrage lifted, two companies of 1/5th Cavalry moved off towards the three blocking positions. One of the companies moved out to establish a fourth blocking position east of Recoil, but before the men had gone more than a kilometer they were engulfed by fire from upslope. After calling in air strikes and artillery, Meyer directed one of his rifle companies to reinforce, but on its way it became so heavily engaged that it could not advance. Meyer then committed his third rifle company, and Colonel Lynch ordered 2/12th Cavalry to send a company as well. In the end, the cumulative weight of the American ground attack and the artillery and air strikes drove the VC from the heights, killing at least 127 VC and capturing or destroying three mortars, five recoilless rifles and a quantity of ammunition, leading Lynch to conclude that he had crushed the 2nd Regiment's heavy weapons battalion.
During the early afternoon of 18 February two platoons from Lt. Col. Ackerson's 1/5th Cavalry came under heavy fire while patrolling. With the platoons pinned down, Ackerson reinforced with two rifle companies, but fire from earthworks cut them apart, and casualties were left where they fell. At the end of the day the Americans broke contact to retrieve their dead and wounded. The troops labeled the sector where the roughest fighting had taken place the "Iron Triangle", because of its shape (not to be confused with the better-known Iron Triangle near Saigon). The fighting continued on the 19th. Company B, 2/12th Cavalry joined Company C, 2/5th Cavalry on a sweep southwest of the Iron Triangle. When one of the companies drew fire in the morning, the other attempted to turn the enemy's flank but ran into more VC. After breaking contact and calling in artillery and air strikes, the two companies attacked, killing 36 VC and forcing the remainder to withdraw. 1/5th Cavalry, meanwhile, renewed its assault into the triangle, with two companies moving west while the third blocked. But the VC stood their ground, stalling the advance. At dark, the 1/5th Cavalry broke contact to remove their wounded. The next day, 20 February, Lynch ordered Ackerson to continue his attack. Following a morning artillery strike, one of the companies came under fire from a strongpoint no more than 100m from the scene of the previous day's fighting. The Americans pulled back and called in artillery. In the afternoon a 2/12th Cavalry unit fought a running battle that left 23 VC dead before the VC withdrew.
On 21 February, attacks and counterthrusts were carried out by both sides. 2/4th Cavalry and 2/12th Cavalry patrolled around their landing zones, while a platoon from 1/5th Cavalry probed the site of the previous day's combat. Once again, intense VC fire forced the Americans to withdraw. Then, having arranged for air support, Lynch pulled all of his units out of the Iron Triangle. B-52s struck the site at midmorning and again in the afternoon. A tactical air mission then dropped 300 Tear gas grenades into the area. As evening approached, two companies of 1/5th Cavalry advanced toward the triangle but stopped before entering it when darkness fell. Artillery fired over 700 rounds into the redoubt and an AC-47 gunship dropped illumination flares throughout the night. During the action a psychological operations team circled overhead in a loudspeaker plane, broadcasting the message that further resistance would be futile and dropping safe conduct passes. On 22 February, 1/5th Cavalry moved in to find bunkers, foxholes, and trenches, but no live enemy. Although 41 bodies remained at the site, blood trails, bloody bandages and discarded weapons indicated that many more had been killed or wounded. Colonel Lynch insisted that the operation would have been even more successful if the two B-52 strikes had been timed more closely together. Instead, the delay between the first and the second bombing runs had prevented mopping up operations that might have kept more of the VC from escaping.
During the fight in the Iron Triangle American ground and air forces had killed at least 313 VC and possibly 400 more. The Americans also estimated that the VC had suffered some 900 wounded. Following the operation, one report observed, the entire valley floor reeked with the smell of VC dead. In addition to decimating the heavy weapons battalion of the 2nd Regiment, Colonel Lynch believed that his units had inflicted heavy losses on the Regiment's headquarters and its 93rd and 95th Battalions. The cost to the 2nd Brigade was 23 killed and 106 wounded. Colonel Lynch's brigade rested for a few days before resuming operations on 25 February. Over the next three days his men exchanged fire with small groups of PAVN/VC but failed to generate significant contacts.
Early in the morning of 28 February a patrol from Company B, 1/5th Cavalry came under sniper fire less than 2 km south of Pony. Unable to locate the sniper position, the patrol members continued their advance. Entering the hamlet of Tan Thanh 2, they met a hail of fire and suffered 4 wounded. As they pushed deeper into the settlement, automatic weapons opened up on them. They responded with grenades and small arms but soon came under attack on the right flank by 15-20 VC, who killed eight of them within minutes and wounded a number more. As the Americans scrambled for cover, the VC emerged from hiding to strip the U.S. dead of their weapons. A relief force arrived a short while later but by then the VC were gone.
Phase Four: Cay Giep Mountains
Based on prisoner interrogations, American intelligence believed that the PAVN 6th Battalion, 12th Regiment was operating in the Cay Giep Mountains east of Bồng Sơn. General Kinnard wanted to encircle and annihilate it. The ARVN 22nd Division surrounded the target area, deploying along the Lai Giang to the north, Highway 1 to the west, and the Tra O Marsh in the south, while the division's junk fleet patrolled the coast to prevent escape by sea. Colonel Lynch's 2nd Brigade would conduct the attack. At 07:30 on 1 March an intense hour-long air, land and sea bombardment of intended landing zones began. When the firing stopped, the designated sweep force (2/5th Cavalry, 1/8th Cavalry and 2/8th Cavalry) came in over the mountains. However the assault forces found that the bombardment had hardly dented the thick foliage, and the helicopters were unable to land. Eventually, additional air strikes opened holes in the jungle canopy wide enough to allow the men to reach the ground by scrambling down rope ladders suspended from the hovering helicopters. Once deployed, the three battalions, soon joined by 1/5th Cavalry, searched the area and found little, although an ARVN unit near the Tra O Marsh killed about 50 PAVN who were attempting to flee the dragnet. On 4 March, following word from South Vietnamese civilians that most of the PAVN had left the area around the end of February, Kinnard decided that the operation had run its course and over the next two days returned the 2nd Brigade to Camp Radcliff.
Operation Double Eagle
Operation Double Eagle, carried out by U.S. and South Vietnamese marines, was a complementary mission to Operation Masher in neighboring Quảng Ngãi Province, adjoining Binh Dinh Province to the north. Operation Double Eagle was carried out over an area of about north to south and extending as much as inland from the South China Sea. 6,000 regular troops and 600 guerrillas were believed to be operating within this area. U.S. Marines dedicated to the operation would number more than 5,000 plus several thousand South Vietnamese soldiers of the ARVN 2nd Division.
Operation Double Eagle began on 28 January with the largest amphibious assault of the Vietnam War and the largest since the Korean War. Bad weather hampered the early days of the operation, but the Marines pushed slowly inland. The plan was for the Marines to push southward into Binh Dinh province where they would meet the 1st Cavalry advancing northward in Operation Masher, trapping PAVN/VC forces between them. In reality, the Marines found few PAVN/VC soldiers in their operating area, the main force PAVN regiments having withdrawn from the area a few days prior to the amphibious landing. The Marines claimed to have killed 312 PAVN/VC soldiers and captured 19 at a loss of 24 Marines killed.
Marine Corps Commandant General Victor Krulak later said that Operation Double Eagle had failed because the PAVN and VC had been forewarned. He also said that Operation Double Eagle was a failure because it showed the people of the region that the Marines "would come in, comb the area and disappear; whereupon the VC would resurface and resume control."
Refugees
Operation Masher was carried out in heavily populated rural areas. The fighting resulted in the displacement, voluntary or involuntary, of a large number of people. The 1st Cavalry listed as a success of the operation that "140,000 Vietnamese civilians volunteered to leave their hamlets in the An Lao and Son Long valleys to return to GVN control." The "voluntary" nature of the departure or flight of many of the civilians from their land is questionable.
Operation Masher demonstrated that a consequence of large unit military operations and heavy utilization of artillery and aerial bombardment was the generation of refugees from the fighting and, inevitably, civilian casualties. The U.S. evacuated thousands of civilians by helicopter from combat areas and more thousands walked out to safety in the larger towns near the coast. The 1st Cavalry counted more than 27,000 people displaced by the operation. While many people fled the fighting, others remained for fear that if they abandoned their homes, the VC would confiscate their land and redistribute it to more dedicated supporters.
Although the U.S. Army maintained that the refugees were fleeing communism, an Army study in mid-1966 concluded that U.S. and South Vietnamese bombing and artillery fire, in conjunction with ground operations, were the immediate and prime causes of refugee movement into South Vietnamese government controlled cities and coastal areas. The U.S. considered that meeting the humanitarian needs of refugees was the responsibility of South Vietnam, but the response of the South Vietnamese government was often deficient.
An American journalist visited a camp housing 6,000 refugees from Operation Masher a week after their displacement. He found them packed 30 to a room, receiving inadequate food and medical treatment for diseases and wounds, and in a sullen and depressed mood.
Assessment
Operation Masher-White Wing was considered a success by the Americans, demonstrating the capability of the helicopter-borne 1st Cavalry to conduct a sustained campaign against PAVN and VC forces and "to find, fix, and finish" the enemy. The U.S., as it had in the earlier Battle of Ia Drang, relied on the massive use of firepower. 171 B-52 strikes hit suspected PAVN/VC positions and 132,000 artillery rounds were expended—100 for each PAVN/VC soldier killed. In addition, tactical air support was provided by 600 sorties by fixed-wing aircraft. 228 1st Cavalry soldiers were killed and another 46 died in an airplane crash; 834 were wounded. 24 U.S. Marines were killed and 156 wounded in Operation Double Eagle and several additional Americans from other units were killed. 11 ROK soldiers were reported killed; South Vietnamese casualties are not known. The U.S. claimed to have killed 1,342 PAVN/VC. The ARVN and ROK forces reported they had killed an additional 808 PAVN/VC. A further 300–600 PAVN/VC were claimed to have been taken prisoner, 500 to have defected, and an additional 1,746 were estimated killed. 52 crew-served weapons and 202 individual weapons were captured or recovered.
The PAVN claimed victory, stating that the 3rd Division had eliminated more than 2,000 enemy troops (killed, wounded or captured).:chapter 4
An unknown number of those killed were civilians, and under the standard operating rules at the time those who did not 'voluntarily' leave a free-fire zone were generally regarded as VC. The total number of civilians killed is largely unknown, but one estimate was that there were six civilian casualties for every VC. The US called these allegations exaggerated and blamed the VC for many deaths because of tactics which endangered civilians, such as recruiting civilians and firing from populated areas. These issues were raised in the Fulbright Hearings. ROK troops of the Capital Division were alleged to have killed over 1,000 civilians in the Bình An/Tây Vinh massacre.
Despite this operation being the biggest search-and-destroy operation in the war up to that point, most of the PAVN/VC forces had slipped away and re-appeared in the region a few months later. An estimated 125,000 people within the Binh Dinh province had lost their homes as a result of Operation Masher/White Wing.
The positive results cited by the Americans appear to have been only transitory. The 1st Cavalry cited among the favorable consequences of Operation Masher that it had given the local population "a chance to be freed from VC domination by moving to areas which are under government control" and stated that the South Vietnamese government "intends to reestablish civil government in the area." PAVN/VC influence, however, continued to be extensive in Binh Dinh province. Two months later, in Operation Crazy Horse, the 1st Cavalry was back sweeping part of the same area covered by Operation Masher and in October 1966 Operation Thayer began an extended effort by the 1st Cavalry once again to "fully pacify" Binh Dinh province.
A Joint Chiefs of Staff memo reported by The Wall Street Journal in 1966 urged President Johnson to "expand" the use of non-lethal chemicals in South Vietnam. The use of 3-Quinuclidinyl benzilate, or Agent BZ, during Operation White Wing was alleged by journalist Pierre Darcourt in L'Express news magazine; the allegation concerned an offensive involving the 1st Cavalry Division in March 1966.
References
External links
Battles of the Vietnam War involving South Korea
Battles of the Vietnam War involving the United States
History of Bình Định province
Chemical warfare
Incapacitating agents
January 1966 events in Asia
February 1966 events in Asia
March 1966 events in Asia | Operation Masher | [
"Chemistry"
] | 6,798 | [
"Incapacitating agents",
"nan",
"Chemical weapons"
] |
5,631,083 | https://en.wikipedia.org/wiki/International%20Typographic%20Style | The International Typographic Style is a systemic approach to graphic design that emerged during the 1930s–1950s but continued to develop internationally. It is considered the basis of the Swiss style. It expanded on and formalized the modernist typographic innovations of the 1920s that emerged in part out of art movements such as Constructivism (Russia), De Stijl (The Netherlands) and at the Bauhaus (Germany). The International Typographic Style has had profound influence on graphic design as a part of the modernist movement, impacting many design-related fields including architecture and art. It emphasizes simplicity, clarity, readability, and objectivity. Hallmarks of the style are asymmetric layouts, use of a grid, sans-serif typefaces like Akzidenz Grotesk and Helvetica, and flush left, ragged right text. The style is also associated with a preference for photography in place of illustrations or drawings. Many of the early International Typographic Style works featured typography as a primary design element in addition to its use in text, and it is for this that the style is named. The influences of this graphic movement can still be seen in design strategy and theory to this day.
Specifics of the term
Swiss School
There are difficulties in defining the boundaries of the International Typographic Style. Sometimes, the term is considered as a synonym for the concept of "Swiss style" – a phenomenon that became widespread in international typography and Swiss design in the 1950s and 1960s. However, the International Typographic Style and the Swiss Style are different phenomena. Traditionally, the term "International Typographic Style" is used as a name that defines the state of international graphics in the 1920s and 1930s. The concept of "Swiss style" is usually limited to the 1950s and 1960s and associated with universal graphic systems of this period. Despite controversy and discrepancies, these terms are sometimes used interchangeably.
International Style
"International Typographic Style" is meaningfully related to the concept of International Style in architecture. This phenomenon, in turn, is attributed to the 1930s–1960s and is associated with the exhibition "Modern Architecture: An International Exhibition", which was held in 1932 at the Museum of Modern Art in New York. The visual system of the International Style formed the basis of the general artistic doctrine in various fields of design and formed the basis of the International Typographic Style.
History
The style emerged from a desire to represent information objectively, free from the influence of associated meaning. The International Typographic Style evolved as a modernist graphic movement that sought to convey messages clearly and in a universally straightforward manner. Two major Swiss design schools are responsible for the early years of International Typographic Style. A graphic design technique based on grid-work that began in the 19th century became the inspiration for modifying the foundational course at the Basel School of Design in 1908. Shortly thereafter, in 1918 Ernst Keller became a professor at the Kunstgewerbeschule Zürich and began developing a graphic design and typography course. He did not teach a specific style to his students; rather, he taught a philosophy of style that dictated "the solution to the design problem should emerge from its content." This idea of the solution to the design emerging from the problem itself was a reaction to previous artistic processes focused on "beauty for the sake of beauty" or "the creation of beauty as a purpose in and of itself". Keller's work uses simple geometric forms, vibrant colors and evocative imagery to further elucidate the meaning behind each design. Other early pioneers include Théo Ballmer and Max Bill.
The 1950s saw the distillation of International Typographic Style elements into sans-serif font families such as Univers. Univers paved the way for Max Miedinger and collaborator Edouard Hoffman to design the typeface Neue Haas Grotesk, which would later be renamed Helvetica. The goal with Helvetica was to create a pure typeface that could be applied to longer texts and that was highly readable. The movement began to coalesce after a periodical publication began in 1959 titled New Graphic Design, which was edited by several influential designers who played major roles in the development of International Typographic Style. The format of the journal represented many of the important elements of the style—visually demonstrating the content—and was published internationally, thus spreading the movement beyond Switzerland's borders. One of the editors, Josef Müller-Brockmann, "sought an absolute and universal form of graphic expression through objective and impersonal presentation, communicating to the audience without the interference of the designer's subjective feelings or propagandist techniques of persuasion." Many of Müller-Brockmann's works feature large photographs as objective symbols meant to convey his ideas in particularly clear and powerful ways.
After World War II international trade began to increase and relations between countries grew steadily stronger. Typography and design were crucial to helping these relationships progress—clarity, objectivity, region-less glyphs, and symbols are essential to communication between international partners. International Typographic Style found its niche in this communicative climate and expanded further beyond Switzerland, to America.
One of the first American designers to integrate Swiss design with his own was Rudolph de Harak. The influence of International Typographic Style on de Harak's own works can be seen in his many book jacket designs for McGraw-Hill publishers in the 1960s. Each jacket shows the book title and author, often aligned with a grid—flush left, ragged-right. One striking image covers most of the jacket, elucidating the theme of the particular book. International Typographic Style was embraced by corporations and institutions in America from the 1960s on, for almost two decades. One institution particularly devoted to the style was MIT.
Associated movements
During the 1900s, other design-based movements were forming, influencing and being influenced by the International Typographic movement. These movements emerged within the relationships between artistic fields including architecture, literature, graphic design, painting and sculpture.
De Stijl was a Dutch artistic movement that saw prominence in the period between 1917 and 1931. Referred to as neoplasticism, this artistic strategy sought to reflect a new Utopian ideal of spiritual harmony and order. It was a form of pure abstraction through reduction to the essentials of form and colour, employing vertical and horizontal layouts using only black and white and primary colors. Proponents of this movement included painters like Piet Mondrian, Vilmos Huszar and Bart van der Leck as well as architects like Gerrit Rietveld, Robert van 't Hoff and J. J. P. Oud.
Bauhaus was a German-based movement that emphasized purity of geometry, absence of ornamentation and the motto 'form follows function'. This was a school of thought that combined craftsmaking with the fine arts and was founded by Walter Gropius. The goal was to work towards the essence of the form follows function relationship to facilitate a style that could be applied to all design problems; the International Style.
Constructivism was an art/architectural philosophy that emerged from Russia in the 1920s. The style was built from assorted mechanical objects combined into abstract, mobile structural forms. Hallmarks of the movement include geometric reduction, photo-montage and simplified palettes.
Suprematism, which arose in 1913, is another Russian art movement similarly focused on the simplification and purity of geometric forms to speak to values of spirituality.
All of these movements including the International Typographic styles are defined by reductionist purity as a visually compelling strategy of conveying messages through geometric and color based hierarchies.
Theory
The Bauhaus mantra of 'form follows function' applies to design in the spirit of the International Typographic movement. The movement was structured by focusing on detail, precision, craft skill, systems of education and approach, technical training, high standards of print, and the innovative application of lettering. The theory revolves around critically approaching the development of a system specific to the design problem presented.
For example, a father of the style, Ernst Keller, argued that a design solution should always be respectful of its content.
A good comparison is the structure that defines a math problem. One only uses specific equations for specific types of problems, and one can only work through these equations in specific ways. With the International Typographic and other related philosophies, a design context is similarly critical to deriving a response.
Characteristics of style
Each design done with International Typographic Style in mind begins with a mathematical grid, because a grid is the "most legible and harmonious means for structuring information." Text is then applied, most often aligned flush left, ragged right. Fonts chosen for the text are sans serif, a type style believed to "[express] the spirit of a more progressive age" by early designers in the movement and focus on delivering content over embellishment. Helvetica, a font that is named after the Latin name for Switzerland, has been described as synonymous with Swiss design; other characteristic fonts associated with the style include Univers and Akzidenz-Grotesk.
Objective photography is another design element meant to present information clearly, and without any of the persuading influences of propaganda or commercial advertising. Such a strong focus on order and clarity is drawn from early pioneers of the movement believing that design is a "socially useful and important activity ... the designers define their roles not as artists but as objective conduits for spreading important information between components of society."
See also
Adrian Frutiger
Armin Hofmann
Dorothea Hofmann
Emil Ruder
Jan Tschichold
Josef Müller-Brockmann
Max Bill
Paul Rand
Richard Paul Lohse
References
Further reading
Brändle C.; Gimmi K.; Junod B.; Reble C.; Richter B. 100 Years of Swiss Graphic Design. Museum für Gestaltung Zürich. Zürich: Lars Müller Publishers, 2014. 325 p.
Fiedl, Frederich, Nicholas Ott and Bernard Stein. Typography: An Encyclopedic Survey of Type Design and Techniques Through History. Black Dog & Leventhal: 1998. .
Hollis, Richard. Swiss Graphic Design: The Origins and Growth of an International Style, 1920–1965. Yale University Press: 2006. .
Meggs P. A History of Graphic Design. N. Y.: John Wiley & Sons, Inc., 1998. 592 p.
Müller-Brockmann, Josef. Grid Systems in Graphic Design. Niggli: 1996. .
Ruder, Emil. Typography. Hastings House: 1981. .
Vasileva, Ekaterina. The Swiss Style: It’s Prototypes, Origins and the Regulation Problem // Terra Artis. Arts and Design, 2021, 3, 84–101. DOI: 10.53273/27128768_2021_3_84
External links
Swiss Graphic Design and Typography Revisited
Art movements
Swiss art
Graphic design
Communication design
Modern art
1920s in art
1950s in art | International Typographic Style | [
"Engineering"
] | 2,272 | [
"Design",
"Communication design"
] |
5,631,171 | https://en.wikipedia.org/wiki/N-vector | The n-vector representation (also called geodetic normal or ellipsoid normal vector) is a three-parameter non-singular representation well-suited for replacing geodetic coordinates (latitude and longitude) for horizontal position representation in mathematical calculations and computer algorithms.
Geometrically, the n-vector for a given position on an ellipsoid is the outward-pointing unit vector that is normal in that position to the ellipsoid. For representing horizontal positions on Earth, the ellipsoid is a reference ellipsoid and the vector is decomposed in an Earth-centered Earth-fixed coordinate system. It behaves smoothly at all Earth positions, and it holds the mathematical one-to-one property.
More in general, the concept can be applied to representing positions on the boundary of a strictly convex bounded subset of k-dimensional Euclidean space, provided that that boundary is a differentiable manifold. In this general case, the n-vector consists of k parameters.
General properties
A normal vector to a strictly convex surface can be used to uniquely define a surface position. n-vector is an outward-pointing normal vector with unit length used as a position representation.
For most applications the surface is the reference ellipsoid of the Earth, and thus n-vector is used to represent a horizontal position. Hence, the angle between n-vector and the equatorial plane corresponds to geodetic latitude, as shown in the figure.
A surface position has two degrees of freedom, and thus two parameters are sufficient to represent any position on the surface. On the reference ellipsoid, latitude and longitude are common parameters for this purpose, but like all two-parameter representations, they have singularities. This is similar to orientation, which has three degrees of freedom, but all three-parameter representations have singularities. In both cases the singularities are avoided by adding an extra parameter, i.e. to use n-vector (three parameters) to represent horizontal position and a unit quaternion (four parameters) to represent orientation.
n-vector is a one-to-one representation, meaning that any surface position corresponds to one unique n-vector, and any n-vector corresponds to one unique surface position.
As a Euclidean 3D vector, standard 3D vector algebra can be used for the position calculations, and this makes n-vector well-suited for most horizontal position calculations.
Converting latitude/longitude to n-vector
Based on the definition of the ECEF coordinate system, called e, it is clear that going from latitude/longitude to n-vector is achieved by:
The superscript e means that n-vector is decomposed in the coordinate system e (i.e. the first component is the scalar projection of n-vector onto the x-axis of e, the second onto the y-axis of e, etc.). Note that the equation is exact for both spherical and ellipsoidal Earth models.
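As the decomposition formula itself is not reproduced above, the following Python sketch illustrates the conversion under one common convention, which is an assumption here rather than the article's own axis ordering: an ECEF frame whose z-axis points to the North Pole and whose x-axis points to 0° N, 0° E.

```python
from math import radians, sin, cos

def lat_lon_to_n_vector(latitude_deg, longitude_deg):
    """Geodetic latitude/longitude (degrees) to an n-vector.

    Assumed ECEF convention: x-axis toward 0 deg N, 0 deg E;
    y-axis toward 0 deg N, 90 deg E; z-axis toward the North Pole.
    The result is the outward unit normal of the reference surface,
    so the same formula holds for spherical and ellipsoidal models.
    """
    lat, lon = radians(latitude_deg), radians(longitude_deg)
    return (cos(lat) * cos(lon),
            cos(lat) * sin(lon),
            sin(lat))
```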
Converting n-vector to latitude/longitude
From the three components of n-vector, n_x, n_y, and n_z, latitude can be found by using:
The rightmost expression is best suited for computer program implementation.
Longitude is found using:
In these expressions the arctangent should be implemented using a call to atan2(y,x). The Pole singularity of longitude is evident as atan2(0,0) is undefined. Note that the equations are exact for both spherical and ellipsoidal Earth models.
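A minimal sketch of the reverse conversion, under the same assumed ECEF convention as the sketch above; the component ordering is therefore an assumption, but the use of atan2 for both angles follows the advice in the text.

```python
from math import atan2, degrees, hypot

def n_vector_to_lat_lon(n_vector):
    """n-vector (unit normal in the assumed ECEF frame) to latitude/longitude in degrees."""
    nx, ny, nz = n_vector
    # The atan2 form is better conditioned near the poles than asin(nz).
    latitude = degrees(atan2(nz, hypot(nx, ny)))
    # Longitude is undefined at the poles; Python's atan2(0, 0) returns 0.0 there.
    longitude = degrees(atan2(ny, nx))
    return latitude, longitude
```

Round-tripping a position through the two sketches returns the original latitude and longitude away from the poles, where longitude is undefined.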
Example: Great circle distance
Finding the great circle distance between two horizontal positions (assuming spherical Earth) is usually done by means of latitude and longitude. Three different expressions for this distance are common; the first is based on arccos, the second is based on arcsin, and the final is based on arctan. The expressions, which are successively more complex to avoid numerical instabilities, are not easy to find, and since they are based on latitude and longitude, the Pole singularities may become a problem. They also contain deltas of latitude and longitude, which in general should be used with care near the ±180° meridian and the Poles.
Solving the same problem using n-vector is simpler due to the possibility of using vector algebra. The arccos expression is achieved from the dot product, while the magnitude of the cross product gives the arcsin expression. Combining the two gives the arctan expression:

Δσ = atan2(|n_a × n_b|, n_a · n_b)

where n_a and n_b are the n-vectors representing the two positions a and b. Δσ is the angular difference, and thus the great-circle distance is achieved by multiplying with the Earth radius. This expression also works at the poles and at the ±180° meridian.
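A short sketch of the vector-algebra approach just described, assuming the n-vectors come from a conversion such as the one above; the mean Earth radius of 6,371 km is an illustrative value, not taken from the article.

```python
import numpy as np

def great_circle_distance(n_a, n_b, radius=6_371_000.0):
    """Great-circle distance (metres) between two positions given as n-vectors.

    The angular difference is atan2(|n_a x n_b|, n_a . n_b), the arctan form
    described in the text, which stays well conditioned for nearly identical
    and nearly antipodal positions alike.
    """
    n_a, n_b = np.asarray(n_a, dtype=float), np.asarray(n_b, dtype=float)
    angular_difference = np.arctan2(np.linalg.norm(np.cross(n_a, n_b)),
                                    np.dot(n_a, n_b))
    return radius * angular_difference
```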
There are several other examples where the use of vector algebra simplifies standard problems. For a general comparison of the various representations, see the horizontal position representations page.
See also
Earth section paths
Horizontal position representation
Latitude
Longitude
Universal Transverse Mercator coordinate system
Quaternion
References
External links
Solving 10 problems by means of the n-vector
Navigation
Geodesy
Geographic position
Geographic coordinate systems | N-vector | [
"Mathematics"
] | 1,018 | [
"Point (geometry)",
"Geographic position",
"Applied mathematics",
"Position",
"Geographic coordinate systems",
"Coordinate systems",
"Geodesy"
] |
4,221,033 | https://en.wikipedia.org/wiki/K-theory%20%28physics%29 | In string theory, K-theory classification refers to a conjectured application of K-theory (in abstract algebra and algebraic topology) to superstrings, to classify the allowed Ramond–Ramond field strengths as well as the charges of stable D-branes.
In condensed matter physics K-theory has also found important applications, especially in the topological classification of topological insulators, superconductors and stable Fermi surfaces.
History
This conjecture, applied to D-brane charges, was first proposed by . It was popularized by who demonstrated that in type IIB string theory the K-theory classification arises naturally from Ashoke Sen's realization of arbitrary D-brane configurations as stacks of D9 and anti-D9-branes after tachyon condensation.
Such stacks of branes are inconsistent in a non-torsion Neveu–Schwarz (NS) 3-form background, which, as was highlighted by , complicates the extension of the K-theory classification to such cases. suggested a solution to this problem: D-branes are in general classified by twisted K-theory, which had earlier been defined by .
Applications
The K-theory classification of D-branes has had numerous applications. For example, used it to argue that there are eight species of orientifold one-plane. applied the K-theory classification to derive new consistency conditions for flux compactifications. K-theory has also been used to conjecture a formula for the topologies of T-dual manifolds by . Recently K-theory has been conjectured to classify the spinors in compactifications on generalized complex manifolds.
Open problems
Despite these successes, RR fluxes are not quite classified by K-theory. argued that the K-theory classification is incompatible with S-duality in IIB string theory.
In addition, if one attempts to classify fluxes on a compact ten-dimensional spacetime, then a complication arises due to the self-duality of the RR fluxes. The duality uses the Hodge star, which depends on the metric and so is continuously valued and in particular is generically irrational. Thus not all of the RR fluxes, which are interpreted as the Chern characters in K-theory, can be rational. However Chern characters are always rational, and so the K-theory classification must be replaced. One needs to choose a half of the fluxes to quantize, or a polarization in the geometric quantization-inspired language of Diaconescu, Moore, and Witten and later of . Alternately one may use the K-theory of a 9-dimensional time slice as has been done by .
K-theory classification of RR fluxes
In the classical limit of type II string theory, which is type II supergravity, the Ramond–Ramond field strengths are differential forms. In the quantum theory the well-definedness of the partition functions of D-branes implies that the RR field strengths obey Dirac quantization conditions when spacetime is compact, or when a spatial slice is compact and one considers only the (magnetic) components of the field strength which lie along the spatial directions. This led twentieth century physicists to classify RR field strengths using cohomology with integral coefficients.
However some authors have argued that the cohomology of spacetime with integral coefficients is too big. For example, in the presence of Neveu–Schwarz H-flux or non-spin cycles some RR fluxes dictate the presence of D-branes. In the former case this is a consequence of the supergravity equation of motion which states that the product of a RR flux with the NS 3-form is a D-brane charge density. Thus the set of topologically distinct RR field strengths that can exist in brane-free configurations is only a subset of the cohomology with integral coefficients.
This subset is still too big, because some of these classes are related by large gauge transformations. In QED there are large gauge transformations which add integral multiples of two pi to Wilson loops. The p-form potentials in type II supergravity theories also enjoy these large gauge transformations, but due to the presence of Chern-Simons terms in the supergravity actions these large gauge transformations transform not only the p-form potentials but also simultaneously the (p+3)-form field strengths. Thus to obtain the space of inequivalent field strengths from the forementioned subset of integral cohomology we must quotient by these large gauge transformations.
The Atiyah–Hirzebruch spectral sequence constructs twisted K-theory, with a twist given by the NS 3-form field strength, as a quotient of a subset of the cohomology with integral coefficients. In the classical limit, which corresponds to working with rational coefficients, this is precisely the quotient of a subset described above in supergravity. The quantum corrections come from torsion classes and contain mod 2 torsion corrections due to the Freed-Witten anomaly.
Thus twisted K-theory classifies the subset of RR field strengths that can exist in the absence of D-branes quotiented by large gauge transformations. Daniel Freed has attempted to extend this classification to include also the RR potentials using differential K-theory.
K-theory classification of D-branes
K-theory classifies D-branes in noncompact spacetimes, intuitively in spacetimes in which we are not concerned about the flux sourced by the brane having nowhere to go. While the K-theory of a 10d spacetime classifies D-branes as subsets of that spacetime, if the spacetime is the product of time and a fixed 9-manifold then K-theory also classifies the conserved D-brane charges on each 9-dimensional spatial slice. While we were required to forget about RR potentials to obtain the K-theory classification of RR field strengths, we are required to forget about RR field strengths to obtain the K-theory classification of D-branes.
K-theory charge versus BPS charge
As has been stressed by Petr Hořava, the K-theory classification of D-branes is independent of, and in some ways stronger than, the classification of BPS states. K-theory appears to classify stable D-branes missed by supersymmetry based classifications.
For example, D-branes with torsion charges, that is, with charges in the order-N cyclic group, attract each other and so can never be BPS. In fact, N such branes can decay, whereas no superposition of branes that satisfy a Bogomolny bound may ever decay. However the charge of such branes is conserved modulo N, and this is captured by the K-theory classification but not by a BPS classification. Such torsion branes have been applied, for example, to model Douglas-Shenker strings in supersymmetric U(N) gauge theories.
K-theory from tachyon condensation
Ashoke Sen has conjectured that, in the absence of a topologically nontrivial NS 3-form flux, all IIB brane configurations can be obtained from stacks of spacefilling D9 and anti D9 branes via tachyon condensation. The topology of the resulting branes is encoded in the topology of the gauge bundle on the stack of the spacefilling branes. The topology of the gauge bundle of a stack of D9s and anti D9s can be decomposed into a gauge bundle on the D9's and another bundle on the anti D9's. Tachyon condensation transforms such a pair of bundles to another pair in which the same bundle is direct summed with each component in the pair. Thus the tachyon condensation invariant quantity, that is, the charge which is conserved by the tachyon condensation process, is not a pair of bundles but rather the equivalence class of a pair of bundles under direct sums of the same bundle on both sides of the pair. This is precisely the usual construction of topological K-theory. Thus the gauge bundles on stacks of D9's and anti-D9's are classified by topological K-theory. If Sen's conjecture is right, all D-brane configurations in type IIB are then classified by K-theory. Petr Horava has extended this conjecture to type IIA using D8-branes.
Twisted K-theory from MMS instantons
While the tachyon condensation picture of the K-theory classification classifies D-branes as subsets of a 10-dimensional spacetime with no NS 3-form flux, the Maldacena-Moore-Seiberg (MMS) picture classifies stable D-branes with finite mass as subsets of a 9-dimensional spatial slice of spacetime.
The central observation is that D-branes are not classified by integral homology because Dp-branes wrapping certain cycles suffer from a Freed-Witten anomaly, which is cancelled by the insertion of D(p-2)-branes and sometimes D(p-4)-branes that end on the afflicted Dp-brane. These inserted branes may either continue to infinity, in which case the composite object has an infinite mass, or else they may end on an anti-Dp-brane, in which case the total Dp-brane charge is zero. In either case, one may wish to remove the anomalous Dp-branes from the spectrum, leaving only a subset of the original integral cohomology.
The inserted branes are unstable. To see this, imagine that they extend in time away (into the past) from the anomalous brane. This corresponds to a process in which the inserted branes decay via a Dp-brane that forms, wraps the aforementioned cycle and then disappears. MMS refer to this process as an instanton, although really it need not be instantonic.
The conserved charges are thus the non-anomalous subset quotiented by the unstable insertions. This is precisely the Atiyah–Hirzebruch spectral sequence construction of twisted K-theory as a set.
Reconciling twisted K-theory and S-duality
Diaconescu, Moore, and Witten have pointed out that the twisted K-theory classification is not compatible with the S-duality covariance of type IIB string theory. For example, consider the constraint on the Ramond–Ramond 3-form field strength G3 in the Atiyah–Hirzebruch spectral sequence (AHSS), which requires that

d3(G3) = Sq3 G3 + H ∪ G3 = G3 ∪ G3 + H ∪ G3

vanish, where d3 = Sq3 + H is the first nontrivial differential in the AHSS, Sq3 is the third Steenrod square, and the last equality follows from the fact that the nth Steenrod square acting on any n-form x is x ∪ x.
The above constraint is not invariant under S-duality, which exchanges G3 and H. Instead, Diaconescu, Moore, and Witten have proposed an S-duality covariant extension involving an additional term P, where P is an unknown characteristic class that depends only on the topology, and in particular not on the fluxes. A constraint on P has since been found using the E8 gauge theory approach to M-theory pioneered by Diaconescu, Moore, and Witten.
Thus D-branes in IIB are not classified by twisted K-theory after all, but some unknown S-duality-covariant object that inevitably also classifies both fundamental strings and NS5-branes.
However the MMS prescription for calculating twisted K-theory is easily S-covariantized, as the Freed-Witten anomalies respect S-duality. Thus the S-covariantized form of the MMS construction may be applied to construct an S-covariantized twisted K-theory, as a set, without having any geometric description of just what this strange covariant object is. This program has been carried out in a number of papers and has also been applied to the classification of fluxes. This approach has been used to prove Diaconescu, Moore, and Witten's conjectured constraint on the 3-fluxes, and to show that there is an additional term equal to the D3-brane charge. It has also been shown that the Klebanov-Strassler cascade of Seiberg dualities consists of a series of S-dual MMS instantons, one for each Seiberg duality. The group of universality classes of the supersymmetric gauge theory is then shown to agree with the S-dual twisted K-theory and not with the original twisted K-theory.
Some authors have proposed radically different solutions to this puzzle. For example, it has been proposed that, instead of twisted K-theory, type II string theory configurations should be classified by elliptic cohomology.
Researchers
Prominent researchers in this area include Edward Witten, Peter Bouwknegt, Angel Uranga, Emanuel Diaconescu, Gregory Moore, Anton Kapustin, Jonathan Rosenberg, Ruben Minasian, Amihay Hanany, Hisham Sati, Nathan Seiberg, Juan Maldacena, Alexei Kitaev, Daniel Freed, and Igor Kriz.
See also
Kalb–Ramond field
Notes
References
Further reading
An excellent introduction to the K-theory classification of D-branes in 10 dimensions via Ashoke Sen's conjecture is the original paper "D-branes and K-theory" by Edward Witten; there is also an extensive review of the subject.
A very comprehensible introduction to the twisted K-theory classification of conserved D-brane charges on a 9-dimensional timeslice in the presence of Neveu–Schwarz flux is also available.
External links
K-theory on arxiv.org
String theory
K-theory | K-theory (physics) | [
"Astronomy"
] | 2,903 | [
"String theory",
"Astronomical hypotheses"
] |
4,221,170 | https://en.wikipedia.org/wiki/Christoph%20Arnold | Christoph Arnold (17 December 1650 – 15 April 1695) was a German farmer and amateur astronomer.
Life
Born in Sommerfeld near Leipzig, Arnold was a farmer by profession. Interested in astronomy, he spotted the great comet of 1683 eight days before Hevelius did. In 1686, Gottfried Kirch came to Leipzig, and the two observed the great comet of 1686 together. In Leipzig, Kirch also met his second wife, Maria Margarethe Winckelmann (1670–1720), who had learned astronomy from Arnold.
Arnold observed the transit of Mercury across the Sun on 13 October 1690. For this work, he received some money and a tax exemption from the town of Leipzig. He was the author of Göttliche Gnadenzeichen, in einem Sonnenwunder vor Augen gestellt (Leipzig, 1692), which contains an account of the transit of Mercury in 1690. He died at Leipzig.
Honors
Lunar crater Arnold and asteroid 121016 Christopharnold were named in his honor. The asteroid's official naming citation was published by the Minor Planet Center on 29 October 2012.
References
External links
Chris Plicht, Biographies
Messier Catalog: Online Biography of Gottfried Kirch
1650 births
1695 deaths
Amateur astronomers
17th-century German astronomers
Scientists from Leipzig
People from the Electorate of Saxony
German farmers
17th-century farmers | Christoph Arnold | [
"Astronomy"
] | 293 | [
"Astronomers",
"Amateur astronomers"
] |
4,222,539 | https://en.wikipedia.org/wiki/Nuclear%20safety%20and%20security | Nuclear safety is defined by the International Atomic Energy Agency (IAEA) as "The achievement of proper operating conditions, prevention of accidents or mitigation of accident consequences, resulting in protection of workers, the public and the environment from undue radiation hazards". The IAEA defines nuclear security as "The prevention and detection of and response to, theft, sabotage, unauthorized access, illegal transfer or other malicious acts involving nuclear materials, other radioactive substances or their associated facilities".
This covers nuclear power plants and all other nuclear facilities, the transportation of nuclear materials, and the use and storage of nuclear materials for medical, power, industry, and military uses.
The nuclear power industry has improved the safety and performance of reactors, and has proposed new and safer reactor designs. However, perfect safety cannot be guaranteed. Potential sources of problems include human errors and external events that have a greater impact than anticipated: the designers of reactors at Fukushima in Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems which were supposed to stabilize the reactor after the earthquake. Catastrophic scenarios involving terrorist attacks, war, insider sabotage, and cyberattacks are also conceivable.
Nuclear weapon safety, as well as the safety of military research involving nuclear materials, is generally handled by agencies different from those that oversee civilian safety, for various reasons, including secrecy. There are ongoing concerns about terrorist groups acquiring nuclear bomb-making material.
Overview of nuclear processes and safety issues
Nuclear safety considerations arise in a number of situations, including:
Nuclear fission power used in nuclear power stations, and nuclear submarines and ships.
Nuclear weapons
Fissionable fuels such as uranium-235 and plutonium-239 and their extraction, storage and use
Radioactive materials used for medical, diagnostic and research purposes, and for batteries in some space projects
Nuclear waste, the radioactive waste residue of nuclear materials
Nuclear fusion power, a technology under long-term development
Unplanned entry of nuclear materials into the biosphere and the food chain (living plants, animals and humans), where they may be breathed in or ingested
Continuity of uranium supplies
With the exception of thermonuclear weapons and experimental fusion research, all safety issues specific to nuclear power stem from the need to limit the biological uptake of committed dose (ingestion or inhalation of radioactive materials) and the external radiation dose due to radioactive contamination.
Nuclear safety therefore covers at minimum:
Extraction, transportation, storage, processing, and disposal of fissionable materials
Safety of nuclear power generators
Control and safe management of nuclear weapons, nuclear material capable of use as a weapon, and other radioactive materials
Safe handling, accountability and use in industrial, medical and research contexts
Disposal of nuclear waste
Limitations on exposure to radiation
Responsible agencies
International
Internationally the International Atomic Energy Agency "works with its Member States and multiple partners worldwide to promote safe, secure and peaceful nuclear technologies." Some scientists say that the 2011 Japanese nuclear accidents have revealed that the nuclear industry lacks sufficient oversight, leading to renewed calls to redefine the mandate of the IAEA so that it can better police nuclear power plants worldwide.
The IAEA Convention on Nuclear Safety was adopted in Vienna on 17 June 1994 and entered into force on 24 October 1996. The objectives of the convention are to achieve and maintain a high level of nuclear safety worldwide, to establish and maintain effective defences in nuclear installations against potential radiological hazards, and to prevent accidents having radiological consequences.
The convention was drawn up in the aftermath of the Three Mile Island and Chernobyl accidents at a series of expert level meetings from 1992 to 1994, and was the result of considerable work by States, including their national regulatory and nuclear safety authorities, and the International Atomic Energy Agency, which serves as the Secretariat for the convention.
The obligations of the Contracting Parties are based to a large extent on the application of the safety principles for nuclear installations contained in the IAEA document Safety Fundamentals ‘The Safety of Nuclear Installations’ (IAEA Safety Series No. 110 published 1993). These obligations cover the legislative and regulatory framework, the regulatory body, and technical safety obligations related to, for instance, siting, design, construction, operation, the availability of adequate financial and human resources, the assessment and verification of safety, quality assurance and emergency preparedness.
The convention was amended in 2014 by the Vienna Declaration on Nuclear Safety. This resulted in the following principles:
1. New nuclear power plants are to be designed, sited, and constructed, consistent with the objective of preventing accidents in the commissioning and operation and, should an accident occur, mitigating possible releases of radionuclides causing long-term off site contamination and avoiding early radioactive releases or radioactive releases large enough to require long-term protective measures and actions.
2. Comprehensive and systematic safety assessments are to be carried out periodically and regularly for existing installations throughout their lifetime in order to identify safety improvements that are oriented to meet the above objective. Reasonably practicable or achievable safety improvements are to be implemented in a timely manner.
3. National requirements and regulations for addressing this objective throughout the lifetime of nuclear power plants are to take into account the relevant IAEA Safety Standards and, as appropriate, other good practices as identified inter alia in the Review Meetings of the CNS.
There are several problems with the IAEA, says Najmedin Meshkati of University of Southern California, writing in 2011:
"It recommends safety standards, but member states are not required to comply; it promotes nuclear energy, but it also monitors nuclear use; it is the sole global organization overseeing the nuclear energy industry, yet it is also weighed down by checking compliance with the Nuclear Non-Proliferation Treaty (NPT)".
National
Many nations utilizing nuclear power have specialist institutions overseeing and regulating nuclear safety. Civilian nuclear safety in the U.S. is regulated by the Nuclear Regulatory Commission (NRC). However, critics of the nuclear industry complain that the regulatory bodies are too intertwined with the industries themselves to be effective. The book The Doomsday Machine, for example, offers a series of examples of national regulators, as the authors put it, 'not regulating, just waving' (a pun on waiving), to argue that in Japan "regulators and the regulated have long been friends, working together to offset the doubts of a public brought up on the horror of the nuclear bombs". Other examples offered include:
in China, where Kang Rixin, former general manager of the state-owned China National Nuclear Corporation, was sentenced to life in jail in 2010 for accepting bribes (and other abuses), a verdict raising questions about the quality of his work on the safety and trustworthiness of China's nuclear reactors.
in India, where the nuclear regulator reports to the national Atomic Energy Commission, which champions the building of nuclear power plants there and the chairman of the Atomic Energy Regulatory Board, S. S. Bajaj, was previously a senior executive at the Nuclear Power Corporation of India, the company he is now helping to regulate.
in Japan, where the regulator reports to the Ministry of Economy, Trade and Industry, which overtly seeks to promote the nuclear industry and ministry posts and top jobs in the nuclear business are passed among the same small circle of experts.
The book argues that nuclear safety is compromised by the suspicion that, as Eisaku Sato, formerly a governor of Fukushima province (with its infamous nuclear reactor complex), has put it of the regulators: “They're all birds of a feather”.
The safety of nuclear plants and materials controlled by the U.S. government for research, weapons production, and those powering naval vessels is not governed by the NRC. In the UK nuclear safety is regulated by the Office for Nuclear Regulation (ONR) and the Defence Nuclear Safety Regulator (DNSR). The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) is the Federal Government body that monitors and identifies solar radiation and nuclear radiation risks in Australia. It is the main body dealing with ionizing and non-ionizing radiation and publishes material regarding radiation protection.
Other agencies include:
Autorité de sûreté nucléaire
Canadian Nuclear Safety Commission
Radiological Protection Institute of Ireland
Federal Atomic Energy Agency in Russia
Kernfysische Dienst (NL)
Pakistan Nuclear Regulatory Authority
Bundesamt für Strahlenschutz (DE)
Atomic Energy Regulatory Board (India)
Nuclear power plant safety and security
Complexity
Nuclear power plants are some of the most sophisticated and complex energy systems ever designed. Any complex system, no matter how well it is designed and engineered, cannot be deemed failure-proof. Veteran journalist and author Stephanie Cooke has argued:
The reactors themselves were enormously complex machines with an incalculable number of things that could go wrong. When that happened at Three Mile Island in 1979, another fault line in the nuclear world was exposed. One malfunction led to another, and then to a series of others, until the core of the reactor itself began to melt, and even the world's most highly trained nuclear engineers did not know how to respond. The accident revealed serious deficiencies in a system that was meant to protect public health and safety.
The 1979 Three Mile Island accident inspired Charles Perrow's book Normal Accidents, which describes a "normal accident": one resulting from an unanticipated interaction of multiple failures in a complex system. TMI was an example of a normal accident because it was "unexpected, incomprehensible, uncontrollable and unavoidable".
Perrow concluded that the failure at Three Mile Island was a consequence of the system's immense complexity. Such modern high-risk systems, he realized, were prone to failures however well they were managed. It was inevitable that they would eventually suffer what he termed a 'normal accident'. Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely.
A fundamental issue contributing to a nuclear power system's complexity is its extremely long lifetime. The timeframe from the start of construction of a commercial nuclear power station through the safe disposal of its last radioactive waste, may be 100 to 150 years.
Failure modes of nuclear power plants
There are concerns that a combination of human and mechanical error at a nuclear facility could result in significant harm to people and the environment:
Operating nuclear reactors contain large amounts of radioactive fission products which, if dispersed, can pose a direct radiation hazard, contaminate soil and vegetation, and be ingested by humans and animals. Human exposure at high enough levels can cause both short-term illness and death and longer-term death by cancer and other diseases.
It is impossible for a commercial nuclear reactor to explode like a nuclear bomb since the fuel is never sufficiently enriched for this to occur.
Nuclear reactors can fail in a variety of ways. Should the instability of the nuclear material generate unexpected behavior, it may result in an uncontrolled power excursion. Normally, the cooling system in a reactor is designed to be able to handle the excess heat this causes; however, should the reactor also experience a loss-of-coolant accident, then the fuel may melt or cause the vessel in which it is contained to overheat and melt. This event is called a nuclear meltdown.
After shutting down, for some time the reactor still needs external energy to power its cooling systems. Normally this energy is provided by the power grid to which that plant is connected, or by emergency diesel generators. Failure to provide power for the cooling systems, as happened in Fukushima I, can cause serious accidents.
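To give a rough sense of scale for why post-shutdown cooling matters, the sketch below evaluates the Way-Wigner empirical approximation for decay heat as a fraction of full thermal power; the formula is a standard engineering estimate, and the assumed operating period and 3000 MW example reactor are illustrative assumptions rather than figures taken from the sources cited in this article.

```python
def decay_heat_fraction(t_after_shutdown_s: float,
                        t_operating_s: float = 1.5 * 365 * 24 * 3600) -> float:
    """Way-Wigner empirical approximation: fraction of full thermal power
    still produced as decay heat at time t after shutdown.  Times are in
    seconds; the default operating period (about 18 months at power) is an
    illustrative assumption."""
    t = t_after_shutdown_s
    return 0.0622 * (t ** -0.2 - (t + t_operating_s) ** -0.2)


# Illustrative reactor with a 3000 MW thermal rating:
thermal_mw = 3000.0
for label, seconds in [("1 minute", 60), ("1 hour", 3_600),
                       ("1 day", 86_400), ("1 week", 604_800)]:
    frac = decay_heat_fraction(seconds)
    print(f"{label:>8} after shutdown: {frac:6.2%} of full power "
          f"= {frac * thermal_mw:6.1f} MW of decay heat")
```

Even days after shutdown the core still produces megawatts of heat under these assumptions, which is why a sustained loss of cooling power can lead to fuel damage.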
Nuclear safety rules in the United States "do not adequately weigh the risk of a single event that would knock out electricity from the grid and from emergency generators, as a quake and tsunami recently did in Japan", Nuclear Regulatory Commission officials said in June 2011.
Vulnerability of nuclear plants to attack
Nuclear reactors become preferred targets during military conflict and, over the past three decades, have been repeatedly attacked during military air strikes, occupations, invasions and campaigns:
In September 1980, Iran bombed the Al Tuwaitha nuclear complex in Iraq in Operation Scorch Sword.
In June 1981, an Israeli air strike completely destroyed Iraq's Osirak nuclear research facility in Operation Opera.
Between 1984 and 1987, Iraq bombed Iran's Bushehr nuclear plant six times.
On 8 January 1982, Umkhonto we Sizwe, the armed wing of the ANC, attacked South Africa's Koeberg nuclear power plant while it was still under construction.
In 1991, the U.S. bombed three nuclear reactors and an enrichment pilot facility in Iraq.
In 1991, Iraq launched Scud missiles at Israel's Dimona nuclear complex
In September 2007, Israel bombed a Syrian reactor under construction.
On 4 March 2022, Russian forces carried out artillery strikes at the Zaporizhzhia Nuclear Power Plant during the 2022 Russian invasion of Ukraine.
In the U.S., plants are surrounded by a double row of tall fences which are electronically monitored. The plant grounds are patrolled by a sizeable force of armed guards. In Canada, all reactors have an "on-site armed response force" that includes light-armored vehicles that patrol the plants daily. The NRC's "Design Basis Threat" criterion for plants is a secret, and so what size of attacking force the plants are able to protect against is unknown. However, scramming (making an emergency shutdown of) a plant takes fewer than 5 seconds, while unimpeded restart takes hours, severely hampering any terrorist attempt to release radioactivity.
Attack from the air is an issue that has been highlighted since the September 11 attacks in the U.S. However, it was in 1972 when three hijackers took control of a domestic passenger flight along the east coast of the U.S. and threatened to crash the plane into a U.S. nuclear weapons plant in Oak Ridge, Tennessee. The plane got as close as 8,000 feet above the site before the hijackers’ demands were met.
The most important barrier against the release of radioactivity in the event of an aircraft strike on a nuclear power plant is the containment building and its missile shield. Former NRC Chairman Dale Klein has said "Nuclear power plants are inherently robust structures that our studies show provide adequate protection in a hypothetical attack by an airplane. The NRC has also taken actions that require nuclear power plant operators to be able to manage large fires or explosions—no matter what has caused them."
In addition, supporters point to large studies carried out by the U.S. Electric Power Research Institute that tested the robustness of both reactor and waste fuel storage and found that they should be able to sustain a terrorist attack comparable to the September 11 terrorist attacks in the U.S. Spent fuel is usually housed inside the plant's "protected zone" or a spent nuclear fuel shipping cask; stealing it for use in a "dirty bomb" would be extremely difficult. Exposure to the intense radiation would almost certainly quickly incapacitate or kill anyone who attempts to do so.
Threat of terrorist attacks
Nuclear power plants are considered to be targets for terrorist attacks. Even during the construction of the first nuclear power plants, this issue was raised by security bodies. Concrete threats of attack against nuclear power plants by terrorists or criminals have been documented in several countries. While older nuclear power plants in Germany were built without special protection against aircraft crashes, later plants were built with massive concrete containment buildings that are partially protected against aircraft impact. They are designed to withstand the impact of a combat aircraft at a speed of about 800 km/h; the design basis assumed the impact of a Phantom II aircraft with a mass of 20 tonnes at a speed of 215 m/s.
The danger arising from a large aircraft crash deliberately caused by terrorists at a nuclear power plant is currently being discussed. Such a terrorist attack could have catastrophic consequences. For example, the German government has confirmed that the nuclear power plant Biblis A would not be completely protected from an attack by a military aircraft. Following the terrorist attacks in Brussels in 2016, several nuclear power plants were partially evacuated. At the same time, it became known that the terrorists had spied on the nuclear power plants, and several employees had their access privileges withdrawn.
Moreover, "nuclear terrorism", for instance with a so-called "Dirty bomb," poses a considerable potential hazard.
Plant location
In many countries, plants are often located on the coast, in order to provide a ready source of cooling water for the essential service water system. As a consequence the design needs to take the risk of flooding and tsunamis into account. The World Energy Council (WEC) argues disaster risks are changing and increasing the likelihood of disasters such as earthquakes, cyclones, hurricanes, typhoons and flooding. High temperatures, low precipitation levels and severe droughts may lead to fresh water shortages. Failure to calculate the risk of flooding correctly led to a Level 2 event on the International Nuclear Event Scale during the 1999 Blayais Nuclear Power Plant flood, while flooding caused by the 2011 Tōhoku earthquake and tsunami led to the Fukushima I nuclear accidents.
The design of plants located in seismically active zones also requires the risk of earthquakes and tsunamis to be taken into account. Japan, India, China and the USA are among the countries to have plants in earthquake-prone regions. Damage caused to Japan's Kashiwazaki-Kariwa Nuclear Power Plant during the 2007 Chūetsu offshore earthquake underlined concerns expressed by experts in Japan prior to the Fukushima accidents, who have warned of a genpatsu-shinsai (domino-effect nuclear power plant earthquake disaster).
Safeguarding critical infrastructure such as operating nuclear reactors is necessary, as it is for chemical facilities and many other utility facilities. In 2003, the United States Nuclear Regulatory Commission (NRC) developed mandates regarding enhanced security at nuclear power plants. Primary among them were changes to the security perimeter and the screening of employees, vendors, and visitors as they accessed the site. Many facilities recognize their vulnerabilities, and licensed security-contracting firms have arisen.
Multiple reactors
The Fukushima nuclear disaster illustrated the dangers of building multiple nuclear reactor units close to one another. Because of the closeness of the reactors, Plant Director Masao Yoshida "was put in the position of trying to cope simultaneously with core meltdowns at three reactors and exposed fuel pools at three units".
Nuclear safety systems
The three primary objectives of nuclear safety systems as defined by the Nuclear Regulatory Commission are to shut down the reactor, maintain it in a shutdown condition, and prevent the release of radioactive material during events and accidents. These objectives are accomplished using a variety of equipment, which is part of different systems, of which each performs specific functions.
Routine emissions of radioactive materials
During routine operation, small amounts of radioactive material are released from nuclear plants to the environment.
The daily emissions go into the air, water and soil.
NRC says, "nuclear power plants sometimes release radioactive gases and liquids into the environment under controlled, monitored conditions to ensure that they pose no danger to the public or the environment", and "routine emissions during normal operation of a nuclear power plant are never lethal".
According to the United Nations (UNSCEAR), regular nuclear power plant operation, including the nuclear fuel cycle, amounts to 0.0002 millisieverts (mSv) annually in average public radiation exposure; the legacy of the Chernobyl disaster is 0.002 mSv/a as a global average, as of a 2008 report; and natural radiation exposure averages 2.4 mSv annually, although it frequently varies from 1 to 13 mSv depending on an individual's location.
Japanese public perception of nuclear power safety
In March 2012, Prime Minister Yoshihiko Noda said that the Japanese government shared the blame for the Fukushima disaster, saying that officials had been blinded by an image of the country's technological infallibility and were "all too steeped in a safety myth."
Japan has been accused by authors such as journalist Yoichi Funabashi of having an "aversion to facing the potential threat of nuclear emergencies." According to him, a national program to develop robots for use in nuclear emergencies was terminated in midstream because it "smacked too much of underlying danger." Though Japan is a major power in robotics, it had none to send in to Fukushima during the disaster. He mentions that Japan's Nuclear Safety Commission stipulated in its safety guidelines for light-water nuclear facilities that "the potential for extended loss of power need not be considered." However, this kind of extended loss of power to the cooling pumps caused the Fukushima meltdown.
In other countries such as the UK, nuclear plants have not been claimed to be absolutely safe. It is instead claimed that a major accident has a likelihood of occurrence lower than (for example) 0.0001/year.
Incidents such as the Fukushima Daiichi nuclear disaster might have been avoided with stricter regulation of nuclear power. In 2002, TEPCO, the company that operated the Fukushima plant, admitted to having falsified reports on over 200 occasions between 1997 and 2002. TEPCO faced no fines for this; instead, it fired four of its top executives. Three of these four later went on to take jobs at companies that do business with TEPCO.
Uranium supplies
Nuclear fuel is a strategic resource whose continuous supply needs to be secured to prevent plant outages. IAEA recommends at least two suppliers to prevent supply disruptions as result of political events or monopolistic pressure. Worldwide uranium supplies are well diversified, with dozens of suppliers in various countries, and the small amounts of fuel required make the diversification much easier than in the case of the large-volume fossil fuel supplies required by the energy sector. For example, Ukraine faced such a challenge as a result of the conflict with Russia, which continued to supply the fuel but used it to leverage political pressure. In 2016 Ukraine obtained 50% of its supplies from Russia, and the other half from Sweden, with a number of framework contracts with other countries.
Title 10 CFR Part 73 (U.S. NRC)
Title 10 of the Code of Federal Regulations (CFR), Part 73, "Physical Protection of Plants and Materials", administered by the Nuclear Regulatory Commission (NRC), contains Subparts A (General Provisions) through I (Enforcement) and Subpart T (Security Notifications, Reports, and Recordkeeping). The full text is available online as U.S. NRC 10 CFR Part 73, as reflected in the e-CFR as of December 20, 2023.
Other
Refer to Vehicle Barriers for regulation details affiliated with 10 CFR 73.55(e)(10)(i)(A) and Vehicle Barrier Systems and protection from land vehicles.
Refer to Security Lighting for regulation details affiliated with 10 CFR 73.55(i)(6)(ii), identifying minimum illumination requirements.
Refer to Cybersecurity for regulation details affiliated with 10 CFR 73.54, identifying cybersecurity requirements for nuclear facilities. For guidelines on the satisfaction of 10 CFR 73.54 requirements, refer to NEI 08-09.
Hazards of nuclear material
There is currently a total of 47,000 tonnes of high-level nuclear waste stored in the USA. Nuclear waste is approximately 94% uranium, 1.3% plutonium, 0.14% other actinides, and 5.2% fission products. About 1.0% of this waste consists of the long-lived isotopes 79Se, 93Zr, 99Tc, 107Pd, 126Sn, 129I and 135Cs. Shorter-lived isotopes, including 89Sr, 90Sr, 106Ru, 125Sn, 134Cs, 137Cs, and 147Pm, constitute 0.9% at one year, decreasing to 0.1% at 100 years. The remaining 3.3–4.1% consists of non-radioactive isotopes. There are technical challenges, as it is preferable to lock away the long-lived fission products, but the challenge should not be exaggerated. One tonne of waste, as described above, has a measurable radioactivity of approximately 600 TBq, equal to the natural radioactivity in one km3 of the Earth's crust, which if buried would add only 25 parts per trillion to the total radioactivity.
The difference between short-lived high-level nuclear waste and long-lived low-level waste can be illustrated by the following example. One mole of either 131I or 129I releases about 3×10²³ decays in a period equal to one half-life. 131I decays with the release of 970 keV per decay, whilst 129I decays with the release of 194 keV. 131 g of 131I would therefore release about 45 gigajoules over its first eight days, beginning at an initial activity of roughly 600 PBq and a heat output of about 90 kilowatts, with the last radioactive decay occurring within two years. In contrast, 129 g of 129I would release about 9 gigajoules spread over 15.7 million years, beginning at an initial activity of roughly 850 MBq and a heat output of about 25 microwatts, with the radioactivity decreasing by less than 1% in 100,000 years.
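The figures in the preceding paragraph can be checked with a few lines of arithmetic. The sketch below uses only the half-lives and decay energies quoted in the text together with standard physical constants, so the printed values are a consistency check rather than new data.

```python
import math

AVOGADRO = 6.022e23   # atoms per mole
EV_TO_J = 1.602e-19   # joules per electronvolt


def mole_activity_and_power(half_life_s: float, kev_per_decay: float):
    """Initial activity (Bq) and heat output (W) of one mole of a
    radionuclide, from its half-life and the energy released per decay."""
    decay_constant = math.log(2) / half_life_s    # decays per atom per second
    activity_bq = decay_constant * AVOGADRO       # decays per second
    power_w = activity_bq * kev_per_decay * 1e3 * EV_TO_J
    return activity_bq, power_w


# Iodine-131: half-life ~8 days, ~970 keV released per decay
a131, p131 = mole_activity_and_power(8 * 86_400, 970)
# Iodine-129: half-life ~15.7 million years, ~194 keV released per decay
a129, p129 = mole_activity_and_power(15.7e6 * 365.25 * 86_400, 194)

print(f"I-131: {a131:.2e} Bq (~600 PBq), {p131 / 1e3:.0f} kW")
print(f"I-129: {a129:.2e} Bq (~850 MBq), {p129 * 1e6:.0f} microwatts")
```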
The electricity generated from the fuel that produced one tonne of nuclear waste also avoids roughly 25 million tonnes of CO2 emissions that would otherwise come from fossil fuels.
Radionuclides such as 129I or 131I may be highly radioactive or very long-lived, but they cannot be both. One mole of 129I (129 grams) undergoes the same number of decays (about 3×10²³) in 15.7 million years as one mole of 131I (131 grams) does in 8 days. 131I is therefore highly radioactive but disappears very quickly, whilst 129I releases a very low level of radiation for a very long time. Two long-lived fission products, technetium-99 (half-life 220,000 years) and iodine-129 (half-life 15.7 million years), are of somewhat greater concern because of a greater chance of entering the biosphere. The transuranic elements in spent fuel include neptunium-237 (half-life two million years) and plutonium-239 (half-life 24,000 years), which will also remain in the environment for long periods of time. A more complete solution to both the actinide problem and the need for low-carbon energy may be the integral fast reactor. One tonne of nuclear waste after a complete burn in an IFR reactor will have prevented 500 million tonnes of CO2 from entering the atmosphere. Otherwise, waste storage usually necessitates treatment, followed by a long-term management strategy involving permanent storage, disposal or transformation of the waste into a non-toxic form.
Governments around the world are considering a range of waste management and disposal options, usually involving deep-geologic placement, although there has been limited progress toward implementing long-term waste management solutions. This is partly because the timeframes in question when dealing with radioactive waste range from 10,000 to millions of years, according to studies based on the effect of estimated radiation doses.
Since the fraction of a radioisotope's atoms decaying per unit of time is inversely proportional to its half-life, the relative radioactivity of a quantity of buried human radioactive waste would diminish over time compared to natural radioisotopes (such as the decay chains of the 120 trillion tons of thorium and 40 trillion tons of uranium which are present at relatively trace concentrations of parts per million each over the crust's 3×10¹⁹ ton mass). For instance, over a timeframe of thousands of years, after the most active short-half-life radioisotopes have decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2000 feet of rock and soil in the United States (10 million km2) by about 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, although the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average.
Safety culture and human errors
One relatively prevalent notion in discussions of nuclear safety is that of safety culture. The International Nuclear Safety Advisory Group defines the term as "the personal dedication and accountability of all individuals engaged in any activity which has a bearing on the safety of nuclear power plants". The goal is "to design systems that use human capabilities in appropriate ways, that protect systems from human frailties, and that protect humans from hazards associated with the system".
At the same time, there is some evidence that operational practices are not easy to change. Operators almost never follow instructions and written procedures exactly, and “the violation of rules appears to be quite rational, given the actual workload and timing constraints under which the operators must do their job”. Many attempts to improve nuclear safety culture “were compensated by people adapting to the change in an unpredicted way”.
According to Areva's Southeast Asia and Oceania director, Selena Ng, Japan's Fukushima nuclear disaster is "a huge wake-up call for a nuclear industry that hasn't always been sufficiently transparent about safety issues". She said "There was a sort of complacency before Fukushima and I don't think we can afford to have that complacency now".
An assessment conducted by the Commissariat à l’Énergie Atomique (CEA) in France concluded that no amount of technical innovation can eliminate the risk of human-induced errors associated with the operation of nuclear power plants. Two types of mistakes were deemed most serious: errors committed during field operations, such as maintenance and testing, that can cause an accident; and human errors made during small accidents that cascade to complete failure.
According to Mycle Schneider, reactor safety depends above all on a 'culture of security', including the quality of maintenance and training, the competence of the operator and the workforce, and the rigour of regulatory oversight. So a better-designed, newer reactor is not always a safer one, and older reactors are not necessarily more dangerous than newer ones. The 1979 Three Mile Island accident in the United States occurred in a reactor that had started operation only three months earlier, and the Chernobyl disaster occurred after only two years of operation. A serious loss of coolant occurred at the French Civaux-1 reactor in 1998, less than five months after start-up.
However safe a plant is designed to be, it is operated by humans who are prone to errors. Laurent Stricker, a nuclear engineer and chairman of the World Association of Nuclear Operators says that operators must guard against complacency and avoid overconfidence. Experts say that the "largest single internal factor determining the safety of a plant is the culture of security among regulators, operators and the workforce — and creating such a culture is not easy".
Investigative journalist Eric Schlosser, author of Command and Control, discovered that at least 700 "significant" accidents and incidents involving 1,250 nuclear weapons were recorded in the United States between 1950 and 1968. Experts believe that up to 50 nuclear weapons were lost during the Cold War.
Risks
The routine health risks and greenhouse gas emissions from nuclear fission power are small relative to those associated with coal, but there are several "catastrophic risks":
The extreme danger of the radioactive material in power plants and of nuclear technology in and of itself is so well known that the US government was prompted (at the industry's urging) to enact provisions that protect the nuclear industry from bearing the full burden of such inherently risky nuclear operations. The Price-Anderson Act limits industry's liability in the case of accidents, and the 1982 Nuclear Waste Policy Act charges the federal government with responsibility for permanently storing nuclear waste.
Population density is one critical lens through which other risks have to be assessed, says Laurent Stricker, a nuclear engineer and chairman of the World Association of Nuclear Operators:
The KANUPP plant in Karachi, Pakistan, has the most people — 8.2 million — living within 30 kilometres of a nuclear plant, although it has just one relatively small reactor with an output of 125 megawatts. Next in the league, however, are much larger plants — Taiwan's 1,933-megawatt Kuosheng plant with 5.5 million people within a 30-kilometre radius and the 1,208-megawatt Chin Shan plant with 4.7 million; both zones include the capital city of Taipei.
172,000 people living within a 30-kilometre radius of the Fukushima Daiichi nuclear power plant have been forced or advised to evacuate the area. More generally, a 2011 analysis by Nature and Columbia University, New York, shows that some 21 nuclear plants have populations larger than 1 million within a 30-km radius, and six plants have populations larger than 3 million within that radius.
Black Swan events are highly unlikely occurrences that have big repercussions. Despite planning, nuclear power will always be vulnerable to black swan events:
A rare event – especially one that has never occurred – is difficult to foresee, expensive to plan for and easy to discount with statistics. Just because something is only supposed to happen every 10,000 years does not mean that it will not happen tomorrow. Over the typical 40-year life of a plant, assumptions can also change, as they did on September 11, 2001, in August 2005 when Hurricane Katrina struck, and in March, 2011, after Fukushima.
The list of potential black swan events is "damningly diverse":
Nuclear reactors and their spent-fuel pools could be targets for terrorists piloting hijacked planes. Reactors may be situated downstream from dams that, should they ever burst, could unleash massive floods. Some reactors are located close to faults or shorelines, a dangerous scenario like that which emerged at Three Mile Island and Fukushima – a catastrophic coolant failure, the overheating and melting of the radioactive fuel rods, and a release of radioactive material.
The AP1000 has an estimated core damage frequency of 5.09 × 10⁻⁷ per plant per year. The Evolutionary Power Reactor (EPR) has an estimated core damage frequency of 4 × 10⁻⁷ per plant per year. In 2006 General Electric published recalculated estimated core damage frequencies per year per plant for its nuclear power plant designs; a rough conversion of such per-year frequencies into the probability of an event over a plant's lifetime is sketched after the list below:
BWR/4 – 1 × 10⁻⁵
BWR/6 – 1 × 10⁻⁶
ABWR – 2 × 10⁻⁷
ESBWR – 3 × 10⁻⁸
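As a rough illustration of what such per-reactor-year frequencies mean, the sketch below converts a core damage frequency into the probability of at least one core-damage event over a plant lifetime or across a fleet, treating each reactor-year as an independent Poisson trial; the fleet size and time spans used are illustrative assumptions, not figures from the sources above.

```python
import math


def prob_of_core_damage(freq_per_reactor_year: float,
                        reactors: int, years: float) -> float:
    """Probability of at least one core-damage event, treating reactor-years
    as independent and Poisson-distributed with the given frequency."""
    expected_events = freq_per_reactor_year * reactors * years
    return 1.0 - math.exp(-expected_events)


# One AP1000-class unit (CDF ~5.09e-7 per reactor-year) over a 60-year life:
print(f"{prob_of_core_damage(5.09e-7, reactors=1, years=60):.2e}")   # ~3e-5

# An illustrative fleet of 400 older units at a CDF of 1e-5, over 40 years:
print(f"{prob_of_core_damage(1e-5, reactors=400, years=40):.2f}")    # ~0.15
```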
Beyond design basis events
The Fukushima I nuclear accident was caused by a "beyond design basis event": the earthquake and the tsunami it generated were more powerful than the plant was designed to accommodate, and the accident was directly due to the tsunami overtopping the plant's too-low seawall. Since then, the possibility of unforeseen beyond-design-basis events has been a major concern for plant operators.
Transparency and ethics
According to journalist Stephanie Cooke, it is difficult to know what really goes on inside nuclear power plants because the industry is shrouded in secrecy. Corporations and governments control what information is made available to the public. Cooke says "when information is made available, it is often couched in jargon and incomprehensible prose".
Kennette Benedict has said that nuclear technology and plant operations continue to lack transparency and to be relatively closed to public view:
Despite victories like the creation of the Atomic Energy Commission, and later the Nuclear Regulatory Commission, the secrecy that began with the Manhattan Project has tended to permeate the civilian nuclear program, as well as the military and defense programs.
In 1986, Soviet officials held off reporting the Chernobyl disaster for several days. The operators of the Fukushima plant, Tokyo Electric Power Co, were also criticised for not quickly disclosing information on releases of radioactivity from the plant. Russian President Dmitry Medvedev said there must be greater transparency in nuclear emergencies.
Historically many scientists and engineers have made decisions on behalf of potentially affected populations about whether a particular level of risk and uncertainty is acceptable for them. Many nuclear engineers and scientists that have made such decisions, even for good reasons relating to long term energy availability, now consider that doing so without informed consent is wrong, and that nuclear power safety and nuclear technologies should be based fundamentally on morality, rather than purely on technical, economic and business considerations.
Non-Nuclear Futures: The Case for an Ethical Energy Strategy is a 1975 book by Amory B. Lovins and John H. Price. The main theme of the book is that the most important parts of the nuclear power debate are not technical disputes but relate to personal values, and are the legitimate province of every citizen, whether technically trained or not.
Nuclear and radiation accidents
The nuclear industry has a strong overall safety record, and deaths per unit of electricity generated are among the lowest of all the major energy sources. At the same time, according to Zia Mian and Alexander Glaser, the "past six decades have shown that nuclear technology does not tolerate error". Nuclear power is perhaps the primary example of what are called 'high-risk technologies' with 'catastrophic potential', because "no matter how effective conventional safety devices are, there is a form of accident that is inevitable, and such accidents are a 'normal' consequence of the system." In short, there is no escape from system failures.
Whatever position one takes in the nuclear power debate, the possibility of catastrophic accidents and consequent economic costs must be considered when nuclear policy and regulations are being framed.
Accident liability protection
Kristin Shrader-Frechette has said "if reactors were safe, nuclear industries would not demand government-guaranteed, accident-liability protection, as a condition for their generating electricity". No private insurance company or even consortium of insurance companies "would shoulder the fearsome liabilities arising from severe nuclear accidents".
Hanford Site
The Hanford Site is a mostly decommissioned nuclear production complex on the Columbia River in the U.S. state of Washington, operated by the United States federal government. Plutonium manufactured at the site was used in the first nuclear bomb, tested at the Trinity site, and in Fat Man, the bomb detonated over Nagasaki, Japan. During the Cold War, the project was expanded to include nine nuclear reactors and five large plutonium processing complexes, which produced plutonium for most of the 60,000 weapons in the U.S. nuclear arsenal. Many of the early safety procedures and waste disposal practices were inadequate, and government documents have since confirmed that Hanford's operations released significant amounts of radioactive materials into the air and the Columbia River, which still threatens the health of residents and ecosystems. The weapons production reactors were decommissioned at the end of the Cold War, but the decades of manufacturing left behind large volumes of high-level radioactive waste, additional solid radioactive waste, contaminated groundwater beneath the site, and occasional discoveries of undocumented contamination that slow the pace and raise the cost of cleanup. The Hanford site represents two-thirds of the nation's high-level radioactive waste by volume. Today, Hanford is the most contaminated nuclear site in the United States and is the focus of the nation's largest environmental cleanup.
1986 Chernobyl disaster
The Chernobyl disaster was a nuclear accident that occurred on 26 April 1986 at the Chernobyl Nuclear Power Plant in Ukraine. An explosion and fire released large quantities of radioactive contamination into the atmosphere, which spread over much of Western USSR and Europe. It is considered the worst nuclear power plant accident in history, and is one of only two classified as a level 7 event on the International Nuclear Event Scale (the other being the Fukushima Daiichi nuclear disaster). The battle to contain the contamination and avert a greater catastrophe ultimately involved over 500,000 workers and cost an estimated 18 billion rubles, crippling the Soviet economy.
The accident raised concerns about the safety of the nuclear power industry, slowing its expansion for a number of years.
UNSCEAR has conducted 20 years of detailed scientific and epidemiological research on the effects of the Chernobyl accident. Apart from the 57 direct deaths in the accident itself, UNSCEAR predicted in 2005 that up to 4,000 additional cancer deaths related to the accident would appear "among the 600 000 persons receiving more significant exposures (liquidators working in 1986–87, evacuees, and residents of the most contaminated areas)". Russia, Ukraine, and Belarus have been burdened with the continuing and substantial decontamination and health care costs of the Chernobyl disaster.
Eleven of Russia's reactors are of the RBMK 1000 type, similar to the one at Chernobyl Nuclear Power Plant. Some of these RBMK reactors were originally to be shut down but have instead been given life extensions and uprated in output by about 5%. Critics say that these reactors are of an "inherently unsafe design", which cannot be improved through upgrades and modernization, and some reactor parts are impossible to replace. Russian environmental groups say that the lifetime extensions "violate Russian law, because the projects have not undergone environmental assessments".
2011 Fukushima I accidents
Despite all assurances, a major nuclear accident on the scale of the 1986 Chernobyl disaster happened again in 2011 in Japan, one of the world's most industrially advanced countries. Nuclear Safety Commission Chairman Haruki Madarame told a parliamentary inquiry in February 2012 that "Japan's atomic safety rules are inferior to global standards and left the country unprepared for the Fukushima nuclear disaster last March". There were flaws in, and lax enforcement of, the safety rules governing Japanese nuclear power companies, and this included insufficient protection against tsunamis.
A 2012 report in The Economist said: "The reactors at Fukushima were of an old design. The risks they faced had not been well analysed. The operating company was poorly regulated and did not know what was going on. The operators made mistakes. The representatives of the safety inspectorate fled. Some of the equipment failed. The establishment repeatedly played down the risks and suppressed information about the movement of the radioactive plume, so some people were evacuated from more lightly to more heavily contaminated places".
The designers of the Fukushima I Nuclear Power Plant reactors did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake. Nuclear reactors are such "inherently complex, tightly coupled systems that, in rare, emergency situations, cascading interactions will unfold very rapidly in such a way that human operators will be unable to predict and master them".
Lacking electricity to pump water needed to cool the atomic core, engineers vented radioactive steam into the atmosphere to release pressure, leading to a series of explosions that blew out concrete walls around the reactors. Radiation readings spiked around Fukushima as the disaster widened, forcing the evacuation of 200,000 people. There was a rise in radiation levels on the outskirts of Tokyo, with a population of 30 million, 135 miles (210 kilometers) to the south.
Back-up diesel generators that might have averted the disaster were positioned in a basement, where they were quickly overwhelmed by waves. The cascade of events at Fukushima had been predicted in a report published in the U.S. several decades ago:
The 1990 report by the U.S. Nuclear Regulatory Commission, an independent agency responsible for safety at the country's power plants, identified earthquake-induced diesel generator failure and power outage leading to failure of cooling systems as one of the “most likely causes” of nuclear accidents from an external event.
The report was cited in a 2004 statement by Japan's Nuclear and Industrial Safety Agency, but it seems adequate measures to address the risk were not taken by TEPCO. Katsuhiko Ishibashi, a seismology professor at Kobe University, has said that Japan's history of nuclear accidents stems from an overconfidence in plant engineering. In 2006, he resigned from a government panel on nuclear reactor safety, because the review process was rigged and “unscientific”.
According to the International Atomic Energy Agency, Japan "underestimated the danger of tsunamis and failed to prepare adequate backup systems at the Fukushima Daiichi nuclear plant". This repeated a widely held criticism in Japan that "collusive ties between regulators and industry led to weak oversight and a failure to ensure adequate safety levels at the plant". The IAEA also said that the Fukushima disaster exposed the lack of adequate backup systems at the plant. Once power was completely lost, critical functions like the cooling system shut down. Three of the reactors "quickly overheated, causing meltdowns that eventually led to explosions, which hurled large amounts of radioactive material into the air".
Louise Fréchette and Trevor Findlay have said that more effort is needed to ensure nuclear safety and improve responses to accidents:
The multiple reactor crises at Japan's Fukushima nuclear power plant reinforce the need for strengthening global instruments to ensure nuclear safety worldwide. The fact that a country that has been operating nuclear power reactors for decades should prove so alarmingly improvisational in its response and so unwilling to reveal the facts even to its own people, much less the International Atomic Energy Agency, is a reminder that nuclear safety is a constant work-in-progress.
David Lochbaum, chief nuclear safety officer with the Union of Concerned Scientists, has repeatedly questioned the safety of the Fukushima I Plant's General Electric Mark 1 reactor design, which is used in almost a quarter of the United States' nuclear fleet.
A report from the Japanese Government to the IAEA says the "nuclear fuel in three reactors probably melted through the inner containment vessels, not just the core". The report says the "inadequate" basic reactor design — the Mark-1 model developed by General Electric — included "the venting system for the containment vessels and the location of spent fuel cooling pools high in the buildings, which resulted in leaks of radioactive water that hampered repair work".
Following the Fukushima emergency, the European Union decided that reactors across all 27 member nations should undergo safety tests.
According to UBS AG, the Fukushima I nuclear accidents are likely to hurt the nuclear power industry's credibility more than the Chernobyl disaster in 1986:
The accident in the former Soviet Union 25 years ago 'affected one reactor in a totalitarian state with no safety culture,' UBS analysts including Per Lekander and Stephen Oldfield wrote in a report today. 'At Fukushima, four reactors have been out of control for weeks – casting doubt on whether even an advanced economy can master nuclear safety.'
The Fukushima accident exposed some troubling nuclear safety issues:
Despite the resources poured into analyzing crustal movements and having expert committees determine earthquake risk, for instance, researchers never considered the possibility of a magnitude-9 earthquake followed by a massive tsunami. The failure of multiple safety features on nuclear power plants has raised questions about the nation's engineering prowess. Government flip-flopping on acceptable levels of radiation exposure confused the public, and health professionals provided little guidance. Facing a dearth of reliable information on radiation levels, citizens armed themselves with dosimeters, pooled data, and together produced radiological contamination maps far more detailed than anything the government or official scientific sources ever provided.
As of January 2012, questions also linger as to the extent of damage to the Fukushima plant caused by the earthquake even before the tsunami hit. Any evidence of serious quake damage at the plant would "cast new doubt on the safety of other reactors in quake-prone Japan".
Two government advisers have said that "Japan's safety review of nuclear reactors after the Fukushima disaster is based on faulty criteria and many people involved have conflicts of interest". Hiromitsu Ino, Professor Emeritus at the University of Tokyo, says
"The whole process being undertaken is exactly the same as that used previous to the Fukushima Dai-Ichi accident, even though the accident showed all these guidelines and categories to be insufficient".
In March 2012, Prime Minister Yoshihiko Noda acknowledged that the Japanese government shared the blame for the Fukushima disaster, saying that officials had been blinded by a false belief in the country's "technological infallibility", and were all too steeped in a "safety myth".
Other accidents
Serious nuclear and radiation accidents include the Chalk River accidents (1952, 1958 & 2008), Mayak disaster (1957), Windscale fire (1957), SL-1 accident (1961), Soviet submarine K-19 accident (1961), Three Mile Island accident (1979), Church Rock uranium mill spill (1979), Soviet submarine K-431 accident (1985), Therac-25 accidents (1985–1987), Goiânia accident (1987), Zaragoza radiotherapy accident (1990), Costa Rica radiotherapy accident (1996), Tokaimura nuclear accident (1999), Sellafield THORP leak (2005), and the Flerus IRE cobalt-60 spill (2006).
Health impacts
Four hundred and thirty-seven nuclear power stations are presently in operation but, unfortunately, five major nuclear accidents have occurred in the past. These accidents occurred at Kyshtym (1957), Windscale (1957), Three Mile Island (1979), Chernobyl (1986), and Fukushima (2011). A report in The Lancet says that the effects of these accidents on individuals and societies are diverse and enduring:
"Accumulated evidence about radiation health effects on atomic bomb survivors and other radiation-exposed people has formed the basis for national and international regulations about radiation protection. However, past experiences suggest that common issues were not necessarily physical health problems directly attributable to radiation exposure, but rather psychological and social effects. Additionally, evacuation and long-term displacement created severe health-care problems for the most vulnerable people, such as hospital inpatients and elderly people."
In spite of accidents like these, studies have shown that deaths attributable to nuclear power occur mostly in uranium mining, and that nuclear energy has generated far fewer deaths than the high pollution levels that result from the use of conventional fossil fuels. Uranium mining itself, however, remains a hazardous industry, with many accidents and fatalities.
Journalist Stephanie Cooke says that it is not useful to make comparisons just in terms of number of deaths, as the way people live afterwards is also relevant, as in the case of the 2011 Japanese nuclear accidents:
"You have people in Japan right now that are facing either not returning to their homes forever, or if they do return to their homes, living in a contaminated area for basically ever... It affects millions of people, it affects our land, it affects our atmosphere ... it's affecting future generations ... I don't think any of these great big massive plants that spew pollution into the air are good. But I don't think it's really helpful to make these comparisons just in terms of number of deaths".
The Fukushima accident forced more than 80,000 residents to evacuate from neighborhoods around the plant.
A survey by the Iitate, Fukushima local government obtained responses from some 1,743 people who have evacuated from the village, which lies within the emergency evacuation zone around the crippled Fukushima Daiichi Plant. It shows that many residents are experiencing growing frustration and instability due to the nuclear crisis and an inability to return to the lives they were living before the disaster. Sixty percent of respondents stated that their health and the health of their families had deteriorated after evacuating, while 39.9 percent reported feeling more irritated compared to before the disaster.
"Summarizing all responses to questions related to evacuees' current family status, one-third of all surveyed families live apart from their children, while 50.1 percent live away from other family members (including elderly parents) with whom they lived before the disaster. The survey also showed that 34.7 percent of the evacuees have suffered salary cuts of 50 percent or more since the outbreak of the nuclear disaster. A total of 36.8 percent reported a lack of sleep, while 17.9 percent reported smoking or drinking more than before they evacuated."
Radioactive components of nuclear waste may lead to cancer.
For example, iodine-131 was released during the Chernobyl and Fukushima disasters. It concentrates in leafy vegetation after being absorbed from the soil, and it passes into animals' milk when the animals eat that vegetation. When iodine-131 enters the human body, it migrates to the thyroid gland in the neck and can cause thyroid cancer.
Other radionuclides in nuclear waste can lead to cancer as well. For example, strontium-90 causes breast cancer and leukemia, and plutonium-239 causes liver cancer.
Improvements to nuclear fission technologies
Redesigns of fuel pellets and cladding are being undertaken which can further improve the safety of existing power plants.
Newer reactor designs intended to provide increased safety have been developed over time. These designs include those that incorporate passive safety and Small Modular Reactors. While these reactor designs "are intended to inspire trust, they may have an unintended effect: creating distrust of older reactors that lack the touted safety features".
The next nuclear plants to be built will likely be Generation III or III+ designs, and a few such are already in operation in Japan. Generation IV reactors would have even greater improvements in safety. These new designs are expected to be passively safe or nearly so, and perhaps even inherently safe (as in the PBMR designs).
Improvements made in some (though not all) designs include three sets of emergency diesel generators and associated emergency core cooling systems rather than just one pair, quench tanks (large coolant-filled tanks above the core that open into it automatically), and double containment (one containment building inside another).
Approximately 120 reactors, such as all those in Switzerland before, and all reactors in Japan after, the Fukushima accident, incorporate filtered containment venting systems on the containment structure. These systems are designed to relieve containment pressure during an accident by releasing gases to the environment while retaining most of the fission products in the filter structures.
However, safety risks may be the greatest when nuclear systems are the newest, and operators have less experience with them. Nuclear engineer David Lochbaum explained that almost all serious nuclear accidents occurred with what was at the time the most recent technology. He argues that "the problem with new reactors and accidents is twofold: scenarios arise that are impossible to plan for in simulations; and humans make mistakes". As one director of a U.S. research laboratory put it, "fabrication, construction, operation, and maintenance of new reactors will face a steep learning curve: advanced technologies will have a heightened risk of accidents and mistakes. The technology may be proven, but people are not".
Developing countries
There are concerns about developing countries "rushing to join the so-called nuclear renaissance without the necessary infrastructure, personnel, regulatory frameworks and safety culture". Some countries with nuclear aspirations, like Nigeria, Kenya, Bangladesh and Venezuela, have no significant industrial experience and will require at least a decade of preparation even before breaking ground at a reactor site.
Precipitated by a 2010 Nuclear Security Summit convened by the Obama administration, China and the United States launched a number of initiatives to secure potentially dangerous, Chinese-supplied, nuclear material in countries such as Ghana or Nigeria. Through these initiatives, China and the US have converted Chinese-origin Miniature Neutron Source Reactors (MNSRs) from using highly enriched uranium to using low-enriched uranium fuel (which is not directly usable in weapons, thereby making reactors more proliferation resistant).
China and the United States collaborated to build the China Center of Excellence on Nuclear Security, which opened in 2015. The Center is a forum for nuclear security exchange, training, and demonstration in the Asia Pacific region.
Nuclear security and terrorist attacks
Nuclear power plants, civilian research reactors, certain naval fuel facilities, uranium enrichment plants, and fuel fabrication plants, are vulnerable to attacks which could lead to widespread radioactive contamination. The attack threat is of several general types: commando-like ground-based attacks on equipment which if disabled could lead to a reactor core meltdown or widespread dispersal of radioactivity; and external attacks such as an aircraft crash into a reactor complex, or cyber attacks.
The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. If terrorist groups could sufficiently damage safety systems to cause a core meltdown at a nuclear power plant, and/or sufficiently damage spent fuel pools, such an attack could lead to widespread radioactive contamination. The Federation of American Scientists have said that if nuclear power use is to expand significantly, nuclear facilities will have to be made extremely safe from attacks that could release massive quantities of radioactivity into the community. New reactor designs have features of passive safety, which may help. In the United States, the NRC carries out "Force on Force" (FOF) exercises at all Nuclear Power Plant (NPP) sites at least once every three years.
Nuclear reactors become preferred targets during military conflict and, over the past three decades, have been repeatedly attacked during military air strikes, occupations, invasions and campaigns. Various acts of civil disobedience since 1980 by the peace group Plowshares have shown how nuclear weapons facilities can be penetrated, and the group's actions represent extraordinary breaches of security at nuclear weapons plants in the United States. The National Nuclear Security Administration has acknowledged the seriousness of the 2012 Plowshares action. Non-proliferation policy experts have questioned "the use of private contractors to provide security at facilities that manufacture and store the government's most dangerous military material". Nuclear weapons materials on the black market are a global concern, and there is worry about the possible detonation of a small, crude nuclear weapon by a militant group in a major city, with significant loss of life and property. Stuxnet is a computer worm discovered in June 2010 that is believed to have been created by the United States and Israel to attack Iran's nuclear facilities.
Nuclear fusion research
Nuclear fusion power is a developing technology still under research. It relies on fusing rather than fissioning (splitting) atomic nuclei, using very different processes compared to current nuclear power plants. Nuclear fusion reactions have the potential to be safer and generate less radioactive waste than fission. These reactions appear potentially viable, though technically quite difficult and have yet to be created on a scale that could be used in a functional power plant. Fusion power has been under theoretical and experimental investigation since the 1950s.
Construction of the International Thermonuclear Experimental Reactor facility began in 2007, but the project has run into many delays and budget overruns. The facility is now not expected to begin operations until the year 2027 – 11 years after initially anticipated. A follow-on commercial nuclear fusion power station, DEMO, has been proposed. There are also proposals for a power plant based upon a different fusion approach, that of an inertial fusion power plant.
Fusion powered electricity generation was initially believed to be readily achievable, as fission power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. In 2010, more than 60 years after the first attempts, commercial power production was still believed to be unlikely before 2050.
More stringent safety standards
Matthew Bunn, the former US Office of Science and Technology Policy adviser, and Olli Heinonen, the former Deputy Director General of the IAEA, have said that there is a need for more stringent nuclear safety standards, and propose six major areas for improvement:
operators must plan for events beyond design bases;
more stringent standards for protecting nuclear facilities against terrorist sabotage;
a stronger international emergency response;
international reviews of security and safety;
binding international standards on safety and security; and
international co-operation to ensure regulatory effectiveness.
Coastal nuclear sites must also be further protected against rising sea levels, storm surges, flooding, and possible eventual "nuclear site islanding".
See also
Lists of nuclear disasters and radioactive incidents
Broken Arrow (nuclear)
Deep geological repository
Design basis accident
Environmental impact of nuclear power
International Nuclear Events Scale
Journey to the Safest Place on Earth
Nuclear terrorism
Nuclear accidents in the United States
Nuclear criticality safety
RELAP5-3D, a reactor design and simulation tool to help prevent accidents.
Nuclear fuel response to reactor accidents
Nuclear holocaust
Nuclear power debate
Nuclear power plant emergency response team
Nuclear whistleblowers
Nuclear weapon
Micro nuclear reactor
Passive nuclear safety
Yucca Mountain nuclear waste repository
Safety code (nuclear reactor)
Material unaccounted for
References
External links
International Atomic Energy Agency website
Nuclear Safety Info Resources
Nuclear Safety Discussion Forums
The Nuclear Energy Option, online book by Bernard L. Cohen. Emphasis on risk estimates of nuclear power.
Environmental impact of nuclear power
Nuclear weapons
Safety practices | Nuclear safety and security | [
"Technology"
] | 12,406 | [
"Environmental impact of nuclear power"
] |
4,222,668 | https://en.wikipedia.org/wiki/Witt%27s%20theorem | "Witt's theorem" or "the Witt theorem" may also refer to the Bourbaki–Witt fixed point theorem of order theory.
In mathematics, Witt's theorem, named after Ernst Witt, is a basic result in the algebraic theory of quadratic forms: any isometry between two subspaces of a nonsingular quadratic space over a field k may be extended to an isometry of the whole space. An analogous statement holds also for skew-symmetric, Hermitian and skew-Hermitian bilinear forms over arbitrary fields. The theorem applies to classification of quadratic forms over k and in particular allows one to define the Witt group W(k) which describes the "stable" theory of quadratic forms over the field k.
Statement
Let V be a finite-dimensional vector space over a field k of characteristic different from 2, together with a non-degenerate symmetric or skew-symmetric bilinear form b. If f : U → U′ is an isometry between two subspaces of V, then f extends to an isometry of V.
Witt's theorem implies that the dimension of a maximal totally isotropic subspace (null space) of V is an invariant, called the index or Witt index of b, and moreover, that the isometry group of (V, b) acts transitively on the set of maximal isotropic subspaces. This fact plays an important role in the structure theory and representation theory of the isometry group and in the theory of reductive dual pairs.
Witt's cancellation theorem
Let (V, q), (V1, q1), (V2, q2) be three quadratic spaces over a field k. Assume that
(V, q) ⊕ (V1, q1) ≅ (V, q) ⊕ (V2, q2).
Then the quadratic spaces (V1, q1) and (V2, q2) are isometric:
(V1, q1) ≅ (V2, q2).
In other words, the direct summand (V, q) appearing in both sides of an isomorphism between quadratic spaces may be "cancelled".
Witt's decomposition theorem
Let (V, q) be a quadratic space over a field k. Then
it admits a Witt decomposition:
(V, q) ≅ (V0, 0) ⊕ (Va, qa) ⊕ (Vh, qh),
where V0 is the radical of q, (Va, qa) is an anisotropic quadratic space and (Vh, qh) is a split quadratic space. Moreover, the anisotropic summand, termed the core form, and the hyperbolic summand in a Witt decomposition of (V, q) are determined uniquely up to isomorphism.
Quadratic forms with the same core form are said to be similar or Witt equivalent.
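For concreteness, here is a small worked example (added for illustration; it is not part of the original article) showing how the decomposition and cancellation statements apply to a real quadratic form. It assumes only the standard notation for the hyperbolic plane and rank-one forms.

```latex
% Worked example over k = \mathbb{R} (requires amsmath/amssymb).
% Take V = \mathbb{R}^3 with the non-degenerate form q(x,y,z) = x^2 - y^2 + z^2.
\[
  q(x,y,z) = x^2 - y^2 + z^2, \qquad
  (V, q) \;\cong\; \underbrace{\mathbb{H}}_{x^2 - y^2}
          \;\oplus\; \underbrace{\langle 1 \rangle}_{z^2},
\]
% where \mathbb{H} is the hyperbolic plane (the split summand) and
% \langle 1 \rangle is anisotropic over \mathbb{R}.  The radical is 0, the core
% form is \langle 1 \rangle, and the Witt index is 1.  Witt cancellation then
% says that any form q' with \mathbb{H} \oplus q' \cong \mathbb{H} \oplus
% \langle 1 \rangle must itself be isometric to \langle 1 \rangle.
```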
Citations
References
Emil Artin (1957) Geometric Algebra, page 121 via Internet Archive
Theorems in linear algebra
Quadratic forms | Witt's theorem | [
"Mathematics"
] | 515 | [
"Theorems in algebra",
"Theorems in linear algebra",
"Quadratic forms",
"Number theory"
] |
4,223,023 | https://en.wikipedia.org/wiki/Lacing%20%28drugs%29 | Lacing or cutting, in drug culture, refer to the act of using a substance (referred to as the lacing agent or cutting agent) to adulterate substances independent of the reason. The resulting substance is laced or cut.
Some street drugs are commonly laced with other chemicals for various reasons, but it is most commonly done to bulk up the original product or to sell other, cheaper drugs in the place of something more expensive. Individuals sometimes lace their own drugs with another substance to combine or alter the physiological or psychoactive effects.
Overview
The classical model of drug cutting refers to the way that illicit drugs were diluted at each stage of the chain of distribution.
Drug markets have changed considerably since the 1980s; greater competition, and a shift from highly structured (and thus controlled) to greatly fragmented markets, has generated competition among dealers in terms of purity. Many drugs that reach the street are now only cut at the manufacture/producer stage, and this may be more a matter of lacing the drug with another substance designed to appeal to the consumer, as opposed to simple diluents that increase the profit for the seller. The extent of cutting can vary significantly over time but for the last 15 years drugs such as heroin and cocaine have often sat at the 50% purity level. Heroin purity sitting at 50% does not mean 50% cutting agents; other adulterants could include other opiate by-products of making heroin from opium. Coomber, after having street heroin seizures from the UK re-analysed, reported that nearly 50% of the samples had no cutting agents present at all. This means that 50% of street heroin in the UK in 1995 had worked its way from producer to user without being cut at any stage, although other adulterants may have been present. Other research outlined how drug dealers have other ways of making profit without having to resort to cutting the drugs they sell.
Cocaine has been cut with various substances ranging from flour and powdered milk to ground drywall, mannitol, baking soda, and other common, easily obtainable substances.
Most hard drugs are adulterated to some degree. Some street drugs can be as low as 10–15% of the active drug, with the other (85–90%) not necessarily being the cutting agent. In fact a heroin sample of only 20% purity may have no cutting agents in it at all. The other 80% may be impurities produced in the manufacturing process and substances created as by products of this process and/or degradation of the drug if improperly stored.
When choosing a cutting agent, the drug manufacturer or dealer would ideally attempt to find a chemical that is inexpensive, easy to obtain, relatively non-toxic, and mimics the physical attributes of the drug to be adulterated. For example, if a drug is soluble in water, the preferred adulterant would also be water-soluble. Similar melting and boiling points are also important if the drug is to be smoked.
Types of lacing agents
Non-psychoactive lacing agents
Visually mimics
Some fake drugs consist of substances from relatively harmless sources, such as grocery store goods like flour, oregano or allergy pills. Even despite the substances' harmlessness, legal penalties for the crime of selling them can include time in jail.
Flavor masker
Sometimes a flavor masker is added to give a more pleasant experience.
Psychoactive mimics
Lacing/cutting agents may be psychoactive.
Certain fake drugs include other controlled drugs, or they may include synthetic drugs with similar properties. Uncertainty about the identity of the substance may increase the risk of an overdose.
A related, yet distinct, problem is the trade of counterfeit medications with pills including substances such as fentanyl which can be used recreationally.
Reasons for lacing
Illegal drug trade
Drugs may be sold to end users who are unaware they have been laced or are unaware what was used to lace them. At various points in the supply chain, in order to maximize profitability, many drugs are adulterated with cutting agents. Substances with similar physical and/or chemical properties can be used so the end product most closely resembles what it is purported to be. Inert substances with similar physical properties can be used to increase weight without changing the look and feel. Less expensive or easier to obtain compounds with similar chemical properties may be used to lace heavily adulterated drugs while still maintaining some psychoactive potency.
Mickey Finn
In slang, a Mickey Finn—or simply a Mickey—is a drink laced with a psychoactive drug or incapacitating agent (especially chloral hydrate) given to someone without their knowledge, with intent to incapacitate them.
Poly drug use
Drugs may also be laced with the end user being made aware of the lacing. In this case, rather than as an adulteration, the lacing is intended to make the product more desirable. Sometimes less potent, often less expensive drugs, are laced with a small amount of a more potent, often more expensive drug. This may be used to facilitate the ingestion of drugs or to allow the simultaneous ingestion of multiple drugs. Cigarettes laced with PCP allow users to take in the liquid PCP through smoking and some multi drug users report intentionally buying marijuana laced with methamphetamine.
Commonly laced substances
Dietary supplements
CBD
Cannabidiol (CBD) is often cut with synthetic cannabinoids.
Street drugs
Depressants
Heroin
Heroin is commonly cut with quinine, caffeine, dimethocaine, lidocaine, procaine, lactose, inositol, dextrose, mannitol, and starch.
Other opioids are sometimes sold as heroin or cut with heroin. Fentanyl sold as or laced into heroin has made the news in the past due to the numerous fatalities it causes when it appears on the market. Recently, fentanyl and close analogues have been produced in pure powder form very cheaply. Dealers may cut heroin with fentanyl, or sell fentanyl as heroin, because of the low street cost of fentanyl compared with heroin. The potency of such mixtures (especially if made carelessly) can be far above that of pure heroin, and users frequently overdose as a result. Gray death is a street drug in the United States. Samples have been found to contain the designer drug U-47700, heroin and opioids including fentanyl and carfentanil.
α-Methylfentanyl
In 1976, α-Methylfentanyl ("China White") began to appear mixed with heroin, as an additive, and the mixture was sometimes also called "China White". It was first identified in the bodies of two drug overdose victims in Orange County, California, in December 1979, who appeared to have died from opiate overdose but tested negative for any known drugs of this type. Over the next year, there were 13 more deaths, and eventually the responsible agent was identified as α-methylfentanyl.
Stimulants
Stimulants are drugs that speed or give a mental boost to the consumer.
Cocaine
Black cocaine, and cocaine paste, are impure forms of cocaine.
The most common cocaine adulterants found in 1998 in samples in Rome, Italy were lidocaine and caffeine. Cocaine is sometimes mixed with methylamphetamine, methylphenidate, and ephedrine, but is usually mixed with non-psychoactive chemicals such as mannitol, inositol, pectin, glucose, lactose, saccharin, white rice flour, and maltodextrin. Cocaine adulterated with the veterinary antiparasitic levamisole has been linked to cases of agranulocytosis, including 2 deaths, according to an alert from the Substance Abuse and Mental Health Services Administration (SAMHSA).
The emergence of fentanyl-laced cocaine has led to an increase in cocaine overdose fatalities in New York City.
Methamphetamine
MSM is sometimes used as a cutting agent for illicitly manufactured methamphetamine.
Psychedelics
Cannabis
Cannabis products that are laced are usually laced with synthetic cannabinoids:
Counterfeit cannabis-liquid (c-liquid) for e-cigarettes: Synthetic cannabinoids are increasingly offered in e-cigarette form as "c-liquid".
Counterfeit cannabis buds: Hemp buds (or low-potency cannabis buds) laced with synthetic cannabinoids.
Counterfeit cannabis edible: The Florida Poison Information Center in Jacksonville warned parents in September 2020 that the number of people poisoned by fake marijuana edibles and candies had tripled.
Counterfeit hash oil: Several school kids in Greater Manchester collapsed after vaping synthetic cannabinoids mis-sold as THC vape.
Counterfeit hashish: In 2020, counterfeit hashish was found to contain 4F-MDMB-BINACA and 5F-MDMB-PINACA (5F-ADB).
Less common psychoactive substances used to adulterate cannabis:
Erectile dysfunction drugs: In the Netherlands two chemical analogs of sildenafil (Viagra) were found in adulterated marijuana.
Methamphetamine: psychiatrist Dr Bill MacEwan believes that drug dealers in British Columbia are intentionally lacing cannabis with methamphetamine to make it more addictive. He had some psychiatric patients that claimed they only smoked pot but their drug tests were positive for methamphetamine use.
PCP: Rarely, cannabis (especially that of low quality) is laced with PCP, particularly in the United States. However, it is not always done surreptitiously. Dealers who do so often (but not always) advertise their wares as being "enhanced" with other substances, and charge more money than they would otherwise, even if they do not say exactly what the lacing agents are. Such concoctions are often called "fry", "wet", "illy", "sherm", "water-water", "dust(ed)", "super weed", "grecodine" or other names.
Weight cutting agents:
Binding substances: Sometimes cannabis is adulterated with other binding substances, including industrial glues such as neoprene, tar, ammonia, bitumen, petroleum-derived hydrocarbons, dog food or even human or animal excrement, to make it cheaper, resulting in poorer quality.
Sand, sugar, brix fertilizers, hair spray, fertilizers, pesticides and fungicides.
Microscopic glass beads: Cannabis buds were found to be contaminated with glass beads in 2007, a product known as grit weed.
Lead: In 2008, 30 German teenagers were hospitalized after the marijuana which they smoked was found to have been contaminated with lead, which was added in order to increase its weight.
Shoe polish: Hash has been cut with shoe polish.
Vitamin E acetate: Although harmless when used orally, high levels of the substance cause vaping-associated pulmonary injury when inhaled.
Ecstasy
Black market ecstasy pills are frequently found to contain other drugs in place of or in addition to methylenedioxymethylamphetamine (MDMA). Since the slang term "ecstasy" usually refers only to MDMA, any pill which contains other compounds may be considered adulterated. 3,4-Methylenedioxyamphetamine (MDA), methylenedioxyethamphetamine (MDEA), amphetamine, methylamphetamine, benzylpiperazine (BZP), trifluoromethylphenylpiperazine (TFMPP), caffeine, ephedrine, pseudoephedrine, and dextromethorphan (DXM) are all commonly found in pills being sold as ecstasy. Less common drugs in ecstasy include diphenhydramine, acetaminophen, 5-MeO-DiPT, 2C-B, procaine, and phencyclidine (PCP). Ecstasy pills sometimes contain dimethylamylamine to increase its stimulant effects. Ecstasy pills might also contain a low dose of 2C-I to potentiate its euphoric effects. Pharmaceutical pills are sometimes sold as ecstasy, as well as pills that contain no psychoactive chemicals at all. Ecstasy sometimes contains 10 mg to 20 mg of baclofen to reduce overheating caused by ecstasy. para-Methoxyamphetamine (PMA or "Dr. Death", a drug that causes so much overheating that it can kill within 40 minutes) is sometimes sold as ecstasy. There is one published case of an ecstasy tablet being adulterated with 8 mg of strychnine, a toxic alkaloid which was used in very low doses (less than 1 mg) as a stimulant and performance-enhancing drug in the past. Recently, several groups advocating for drug safety through education have made reagent testing products available to confirm what substances there are.
LSD
LSD is virtually never laced with other chemicals, but other lysergamides such as ALD-52 are sometimes sold as LSD-25. DOB, DOI, and other closely related drugs are sometimes sold as LSD. Several other highly potent hallucinogens such as Bromo-DragonFLY or 25I-NBOMe can be found in the form of blotters. LSD is also tasteless in normal dosages, so detection is only possible after ingestion or reagent testing. For these reasons, it is not uncommon to find blotters sold as LSD completely devoid of psychoactive substances.
Prescription medication
As the sources of prescription medication on the street are not verifiable through legitimate channels, misrepresentation of prescription medications is a common practice.
Deaths
Case reports in commercial products
Alcohol
In June 2022, the Dutch Food and Consumer Product Safety Authority warned that 3-liter bottles of Moët & Chandon Ice Impérial champagne had been found to contain MDMA; one person in Germany died after drinking from such a bottle.
Polydrug intoxication deaths
A drug called Voodoo, which has gained popularity among Egyptian youth, intoxicated seventy-one individuals and killed two in 2017. The drug samples contained synthetic cannabinoids, amphetamine, tramadol, methadone, MDA, benzodiazepines, morphine derivatives, and penitrem A (a neurotoxin).
Testing
Reagent testing
Reagent testing kits are available online and also sold at some head shops. These kits claim to be able to identify common adulterants in ecstasy.
Professional lab tests
There are services available for testing the contents of an ecstasy pill that can tell the user what chemicals are contained in the pill and at what ratio. The results are then posted on their website along with every other pill that they have tested. The tests are considered to be highly accurate. Their services were at one time free, but when they ran out of funding they had to charge a fee for every pill tested.
See also
Darknet market
Date rape drug
Drug checking
Isopropylbenzylamine
Pill testing
Surrogate alcohol
References
Further reading
Coomber, R. (1997) Vim in the Veins – Fantasy or Fact: The Adulteration of Illicit Drugs, Addiction Research, Vol 5, No. 3. pp. 195-212
Coomber, R. (1997) ‘Adulteration of Drugs: The Discovery of a Myth', Contemporary Drug Problems, Vol 24, No. 2. pp. 239-271
Causes of death
Drug culture
Adulteration | Lacing (drugs) | [
"Chemistry"
] | 3,159 | [
"Adulteration",
"Drug safety"
] |
4,223,137 | https://en.wikipedia.org/wiki/Reed%20pipe | A reed pipe (also referred to as a lingual pipe) is an organ pipe that is sounded by a vibrating brass strip known as a reed. Air under pressure (referred to as wind) is directed towards the reed, which vibrates at a specific pitch. This is in contrast to flue pipes, which contain no moving parts and produce sound solely through the vibration of air molecules. Reed pipes are common components of pipe organs.
Stop
Reed pipes include all stops of the "Reed" class, and some stops from the "Hybrid" class. The reed stops of an organ are collectively called the "reed-work".
Construction
A reed pipe comprises a metal tongue (the reed) which rests against a shallot, in which is carved a tunnel. The reed and shallot are held in place by a wooden wedge. This assembly protrudes from the underside of the block and hangs down into the boot. A tuning wire is inserted through the boot and is bent to hold the reed against the shallot. The wire is moved up or down using a tuning knife in order to change the length of the tongue that is permitted to vibrate, thereby changing the pitch produced by the pipe. The resonator joins with the upper opening of the shallot and extends above the boot. The resonator may be made in a wide variety of lengths, shapes, and configurations, depending on the desired tone quality.
An en chamade is a specific type of reed which is mounted horizontally on the organ case rather than stood vertically inside the organ case. This is done to project the tone more directly at the listener. In cases where this cannot be done, hooded reeds (generally trumpets) are used. This method of construction projects the sound in the same manner with a vertical resonator which turns at a 90-degree angle at the top.
In places where a full-length resonator will not fit, a technique called mitering is used, wherein organ pipes are created so that instead of standing straight up, they appear to make a loop in the middle of the resonator. This is done by joining several small pieces of metal together.
Actuation
As wind enters the boot, it travels over the reed, causing it to vibrate against the shallot. This produces the pipe's sound. The wind passes through the shallot and up into the resonator, which focuses and refines the sound wave produced by the reed. The length of the air column as well as the length, mass, and stiffness of the reed itself, determine the frequency of the instrument.
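As a rough illustration of the air-column contribution described above, the sketch below treats a cylindrical resonator as an idealized quarter-wave tube (closed at the reed end) and compares it with an open, half-wave tube. This is a simplified back-of-the-envelope model, not how organ builders actually voice pipes: it ignores end corrections and the reed/shallot interaction that sets the pitch of a real reed pipe, and the 2.4 m length is only an assumed example value.

```python
# Idealized estimate of the frequency a cylindrical reed-pipe resonator
# reinforces. This is a rough sketch only: it ignores end corrections,
# temperature, and the reed/shallot interaction that actually sets the pitch.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def quarter_wave_frequency(resonator_length_m: float) -> float:
    """Fundamental of a tube closed at one end (quarter-wave resonator)."""
    return SPEED_OF_SOUND / (4.0 * resonator_length_m)

def half_wave_frequency(resonator_length_m: float) -> float:
    """Fundamental of a tube open at both ends (half-wave resonator)."""
    return SPEED_OF_SOUND / (2.0 * resonator_length_m)

if __name__ == "__main__":
    # A nominal 2.4 m resonator treated as a closed (quarter-wave) tube:
    print(f"{quarter_wave_frequency(2.4):.1f} Hz")   # ~35.7 Hz
    # The same length treated as an open (half-wave) tube:
    print(f"{half_wave_frequency(2.4):.1f} Hz")      # ~71.5 Hz
```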
Free reeds
A less-common type of reed construction is the free reed. The term refers to two types of reeds where the tongue does not beat directly against the shallot in order to produce the reed tone, which creates a unique sound (these are most commonly used on nineteenth-century German or French organs). In one case, the free reed stop will appear from the outside like a normal reed (complete with boot, tuning wire, and resonator, etc.). The only difference lies in the action of the tongue (see above), which beats "through" the shallot (hence the German term for the reed — durchschlagend). In the other form of the reed, an enclosed boot does not exist (as in normal reed pipes); therefore, all the tongues are held together in the same chamber, as in the harmonica, accordion, or harmonium. This arrangement makes it possible to change the volume produced without changing its pitch by varying the wind pressure, which is not possible with normal organ pipes. Volume adjustment was available to the organist by means of a balanced expression pedal which varied the wind pressure delivered to the free reed stop. This type of free reed was popular among certain organ builders of the nineteenth century due to the increased interest in a more expressive aesthetic.
Tonal characteristics
The tonal characteristics of reed pipes are determined by several factors, the most important of which is the interaction between the shallot and the tongue. The thickness and curve of the tongue itself play an important role in determining the tonal quality of the pipe. When voicing a reed pipe, the voicer will take great care in shaping the curve of the tongue, because this controls how the tongue beats against the shallot. Whether the shallot is cylindrical or tapered (and, in the latter case, whether or not the taper is inverted) greatly affects the pipe's timbre. Likewise, the "cut" (referring to the depth of the shallot and the shape of the opening) and the closed-end shape (whether the closed end of the shallot is flat, domed, or Schiffschen) determine whether the tone is more Baroque or more Romantic. In addition, the type of block (whether a standard shape or a French "double-block") in which the reed assembly is set has an effect on the sound.
Scaling is important when determining the final tone color of a reed pipe, though it is not of primary importance as it is in flue pipe construction. This is because reed pipe resonators simply reinforce certain partials of the sound wave; the air column inside the resonator is not the primary vibrator. The shape of the resonator, however, is quite important: an inverted-conical resonator (such as is typical with a Trumpet rank) produces more harmonics than does a cylindrical resonator (like that of a Clarinet rank).
There are generally two main types of reed stops: chorus reeds (such as the Trumpet, Clairon and Bombarde), whose main function is to blend with the flue stops and reinforce the full organ; and solo reeds or orchestral reeds (such as the Clarinet, the Oboe, and the Cor Anglais), which often (but not always) imitate orchestral instruments, and are used for quieter, solo passages (similar to woodwinds in an orchestra).
References
External links
Encyclopedia of Organ Stops
Pipe organ components
"Technology"
] | 1,238 | [
"Pipe organ components",
"Components"
] |
4,223,273 | https://en.wikipedia.org/wiki/Economic%20capital | In finance, mainly for financial services firms, economic capital (ecap) is the amount of risk capital, assessed on a realistic basis, which a firm requires to cover the risks that it is running or collecting as a going concern, such as market risk, credit risk, legal risk, and operational risk. It is the amount of money that is needed to secure survival in a worst-case scenario. Firms and financial services regulators should then aim to hold risk capital of an amount equal at least to economic capital.
Typically, economic capital is calculated by determining the amount of capital that the firm needs to ensure that its realistic balance sheet stays solvent over a certain time period with a pre-specified probability. Therefore, economic capital is often calculated as value at risk. The balance sheet, in this case, would be prepared showing market value (rather than book value) of assets and liabilities.
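To make the value-at-risk formulation concrete, here is a minimal sketch assuming a simulated one-year loss distribution; the lognormal model, the 99.9% confidence level, and the definition of economic capital as the loss quantile minus the expected loss are illustrative assumptions, not a regulatory prescription.

```python
# Toy economic-capital estimate via value at risk (illustrative only).
# Losses are simulated from an assumed lognormal distribution; a real firm
# would use its own loss models for market, credit and operational risk.
import numpy as np

def economic_capital(losses: np.ndarray, confidence: float = 0.999) -> float:
    """Capital needed to cover losses up to `confidence`, beyond the mean."""
    var = np.quantile(losses, confidence)   # value at risk at the confidence level
    expected_loss = losses.mean()           # assumed covered by pricing/provisions
    return var - expected_loss

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    simulated_losses = rng.lognormal(mean=10.0, sigma=1.0, size=1_000_000)
    ecap = economic_capital(simulated_losses, confidence=0.999)
    print(f"Economic capital at 99.9%: {ecap:,.0f}")
```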
The first accounts of economic capital date back to the ancient Phoenicians, who took rudimentary tallies of frequency and severity of illnesses among rural farmers to gain an intuition of expected losses in productivity. These calculations were advanced by correlations to climate change, political outbreaks, and birth rate change.
The concept of economic capital differs from regulatory capital in the sense that regulatory capital is the mandatory capital the regulators require to be maintained while economic capital is the best estimate of required capital that financial institutions use internally to manage their own risk and to allocate the cost of maintaining regulatory capital among different units within the organization.
In social science
In social science, economic capital is distinguished in relation to other types of capital which may not necessarily reflect a monetary or exchange-value. These forms of capital include natural capital, cultural capital and social capital; the latter two represent a type of power or status that an individual can attain in a capitalist society via formal education or through social ties. Non-economic forms of capital have been variously discussed most famously by sociologist Pierre Bourdieu.
See also
Asset allocation
Basel I
Basel II
Capital structure
Financial risk management
Financial services conglomerate
RAROC, risk-adjusted return on capital
RORAC, return on risk-adjusted capital
Solvency II
References
External links
FDIC.gov, Economic Capital and the Assessment of Capital Adequacy Federal Deposit Insurance Corporation
BIS.org, "Basel Committee, Bank for International Settlements"
Economic Capital - A Preamble
CEIOPS"
Actuarial science
Financial risk | Economic capital | [
"Mathematics"
] | 487 | [
"Applied mathematics",
"Actuarial science"
] |
4,223,285 | https://en.wikipedia.org/wiki/Trachyandesite | Trachyandesite is an extrusive igneous rock with a composition between trachyte and andesite. It has little or no free quartz, but is dominated by sodic plagioclase and alkali feldspar. It is formed from the cooling of lava enriched in alkali metals and with an intermediate content of silica.
The term trachyandesite had begun to fall into disfavor by 1985 but was revived to describe extrusive igneous rocks falling into the S3 field of the TAS classification. These are divided into sodium-rich benmoreite and potassium-rich latite.
Trachyandesitic magma can produce explosive Plinian eruptions, such as happened at Tambora in 1815. The 2010 eruption of Eyjafjallajökull (VEI-4), which disrupted European and transatlantic air travel from 15 to 20 April 2010, was for some time dominated by trachyandesite.
Petrology
Trachyandesite is characterized by a silica content near 58% and a total alkali oxide content near 9%. This places trachyandesite in the S3 field of the TAS diagram. When it is possible to identify the minerals present, trachyandesite is characterized by a high content of sodic plagioclase, typically andesine, and contains at least 10% alkali feldspar. Common mafic accessory minerals are amphibole, biotite or pyroxene. Small amounts of nepheline may be present and apatite is a common accessory mineral. Trachyandesite is not a recognized rock type in the QAPF classification, which is based on the actual mineral content. However, latite is recognized in this classification, while benmoreite would likely fall into either the latite or the andesite fields.
Trachyandesite magmas can have a relatively high sulfur content, and their eruption can inject great quantities of sulfur into the stratosphere. The sulfur may take the form of anhydrite phenocrysts in the magma. The 1982 El Chichón eruption produced trachyandesite pumice rich in anhydrite, and released 2.2 × 10⁷ metric tons of sulfur.
Varieties
Sodium-rich trachyandesite (with %Na2O > %K2O + 2) is called benmoreite, while the more potassic form is called latite. Feldspathoid-bearing latite is sometimes referred to as tristanite. Basaltic trachyandesite is transitional to basalt and likewise comes in two varieties, mugearite (sodium-rich) and shoshonite (potassium-rich).
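The subdivision described above can be written as a short decision rule. The sketch below assumes the analysis already plots in the trachyandesite (S3) field of the TAS diagram and applies only the sodium/potassium criterion stated here; the oxide values in the example are hypothetical.

```python
# Subdivide a trachyandesite analysis by the Na2O/K2O criterion given above.
# Assumes the rock already falls in the S3 (trachyandesite) field of the
# TAS diagram; oxide values are weight percent.

def trachyandesite_variety(na2o_wt: float, k2o_wt: float) -> str:
    if na2o_wt > k2o_wt + 2.0:
        return "benmoreite (sodium-rich)"
    return "latite (potassium-rich)"

# Example analyses (hypothetical values):
print(trachyandesite_variety(na2o_wt=5.8, k2o_wt=2.9))  # benmoreite
print(trachyandesite_variety(na2o_wt=4.0, k2o_wt=3.5))  # latite
```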
Occurrence
Trachyandesite is a member of the alkaline magma series, in which alkaline basaltic magma experiences fractional crystallization while still underground. This process removes calcium, magnesium, and iron from the magma. As a result, trachyandesite is common wherever alkali magma is erupted, including late eruptions of oceanic islands and in continental rift valleys and mantle plumes.
Trachyandesite is found in the Yellowstone area as part of the Absaroka Volcanic Supergroup, and has been erupted in arc volcanism in Mesoamerica and at Mount Tambora.
References
External links
Photomicrograph of a thin section of trachyandesite from France, in crossed-polarised light
Intermediate rocks
Volcanic rocks | Trachyandesite | [
"Chemistry"
] | 721 | [
"Intermediate rocks",
"Igneous rocks by composition"
] |
4,223,880 | https://en.wikipedia.org/wiki/Ultrapotassic%20igneous%20rocks | Ultrapotassic igneous rocks are a class of rare, volumetrically minor, generally ultramafic or mafic silica-depleted igneous rocks.
While there are debates on the exact classifications of ultrapotassic rocks, they are defined by using the chemical screens K2O/Na2O > 3 in much of the scientific literature. However caution is indicated in interpreting the use of the term "ultrapotassic", and the nomenclature of these rocks continues to be debated, with some classifications using K2O/Na2O > 2 to indicate a rock is ultrapotassic.
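As a small illustration of how the competing screens differ in practice, the sketch below implements the K2O/Na2O check with the threshold left as a parameter; the example analysis is hypothetical.

```python
# Simple ultrapotassic screen on a whole-rock analysis (weight percent oxides).
# The threshold is deliberately a parameter: some workers require
# K2O/Na2O > 3, others > 2, as discussed above.

def is_ultrapotassic(k2o_wt: float, na2o_wt: float, threshold: float = 3.0) -> bool:
    return na2o_wt > 0 and (k2o_wt / na2o_wt) > threshold

# A hypothetical analysis that passes the looser screen but not the stricter one:
print(is_ultrapotassic(k2o_wt=7.0, na2o_wt=2.8, threshold=3.0))  # False (ratio 2.5)
print(is_ultrapotassic(k2o_wt=7.0, na2o_wt=2.8, threshold=2.0))  # True
```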
Conditions of formation
The magmas that produce ultrapotassic rocks are produced by a variety of mechanisms and from a variety of sources, but generally occur in a heterogenous, anomalous, phlogopite-bearing upper mantle.
The following conditions are favorable for the formation of ultrapotassic magmas.
partial melting at a great depth
low degrees of partial melting
lithophile element (K, Ba, Cs, Rb) enrichment in sources
enriched peridotite (variety harzburgite), especially in potassium
pyroxene and phlogopite-rich volumes within the mantle, not from peridotite alone
carbon dioxide or water (each condition leading to a distinctive magma);
reaction of melts with surrounding rock as they rise from their sources
Mantle sources of ultrapotassic magmas may contain subducted sediments, or the sources may have been enriched in potassium by melts or fluids partly derived from subducted sediments. Phlogopite and/or potassic amphibole are typical in the sources from which many such magmas have been derived. Ultrapotassic granites are uncommon and may be produced by melting of the continental crust above upwelling mafic magma, such as at rift zones.
Types of ultrapotassic rocks
Lamprophyres and melilitic rocks
Kimberlite
Lamproite
Orangeite (see Group II kimberlite)
Feldspathoid-bearing rocks such as leucitites
K-feldspar enriched leucogranites
Vaugnerite and Durbachite
Economic importance
The economic importance of ultrapotassic rocks is wide and varied. Because kimberlites, lamproites and lamprophyres are all produced at depths of 120 km or greater, they can carry diamonds to the surface as xenocrysts and are therefore a major source of diamond deposits. Additionally, ultrapotassic granites are a known host for granite-hosted gold mineralization as well as significant porphyry-style mineralization. Ultrapotassic A-type intracontinental granites may also be associated with fluorite and columbite–tantalite mineralization.
References
Ultrapotassic rocks
Igneous petrology
Geochemistry | Ultrapotassic igneous rocks | [
"Chemistry"
] | 592 | [
"Ultrapotassic rocks",
"nan",
"Igneous rocks by composition"
] |
4,223,925 | https://en.wikipedia.org/wiki/Zirconium%28IV%29%20chloride | Zirconium(IV) chloride, also known as zirconium tetrachloride, () is an inorganic compound frequently used as a precursor to other compounds of zirconium. This white high-melting solid hydrolyzes rapidly in humid air.
Structure
Unlike molecular TiCl4, solid ZrCl4 adopts a polymeric structure wherein each Zr is octahedrally coordinated. This difference in structures is responsible for the disparity in their properties: TiCl4 is distillable, but ZrCl4 is a solid. In the solid state, ZrCl4 adopts a tape-like linear polymeric structure, the same structure adopted by HfCl4. This polymer degrades readily upon treatment with Lewis bases, which cleave the Zr-Cl-Zr linkages.
Synthesis
Zirconium tetrachloride is prepared by treating zirconium oxide with carbon in the presence of chlorine at high temperature:
ZrO2 + 2 C + 2 Cl2 → ZrCl4 + 2 CO
A laboratory scale process uses carbon tetrachloride in place of carbon and chlorine:
ZrO2 + 2 CCl4 → ZrCl4 + 2 COCl2
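As an illustrative mass balance for the carbon/chlorine route (a rough calculation from standard atomic masses, not a process specification), the sketch below estimates how much tetrachloride and how much chlorine correspond to one kilogram of zirconium dioxide.

```python
# Mass balance for ZrO2 + 2 C + 2 Cl2 -> ZrCl4 + 2 CO (illustrative only).
# Standard atomic masses in g/mol.
M = {"Zr": 91.224, "O": 15.999, "Cl": 35.453, "C": 12.011}

M_ZrO2  = M["Zr"] + 2 * M["O"]           # ~123.22 g/mol
M_ZrCl4 = M["Zr"] + 4 * M["Cl"]          # ~233.04 g/mol
M_Cl2   = 2 * M["Cl"]                    # ~70.91 g/mol

basis_kg_ZrO2 = 1.0                      # per kilogram of zirconium dioxide
mol = basis_kg_ZrO2 * 1000 / M_ZrO2      # moles of ZrO2

print(f"ZrCl4 produced: {mol * M_ZrCl4 / 1000:.2f} kg")   # ~1.89 kg
print(f"Cl2 required:  {2 * mol * M_Cl2 / 1000:.2f} kg")  # ~1.15 kg
print(f"C required:    {2 * mol * M['C'] / 1000:.2f} kg") # ~0.19 kg
```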
Applications
Precursor to zirconium metal
ZrCl4 is an intermediate in the conversion of zirconium minerals to metallic zirconium by the Kroll process. In nature, zirconium minerals usually exist as oxides (reflected also by the tendency of all zirconium chlorides to hydrolyze). For their conversion to bulk metal, these refractory oxides are first converted to the tetrachloride, which can be distilled at high temperatures. The purified ZrCl4 can be reduced with Zr metal to produce zirconium(III) chloride.
Other uses
ZrCl4 is the most common precursor for chemical vapor deposition of zirconium dioxide and zirconium diboride.
In organic synthesis, zirconium tetrachloride is used as a weak Lewis acid for the Friedel-Crafts reaction, the Diels-Alder reaction and intramolecular cyclisation reactions. It is also used in water-repellent treatments for textiles and other fibrous materials.
Properties and reactions
Hydrolysis of ZrCl4 gives the hydrated hydroxy chloride cluster called zirconyl chloride. This reaction is rapid and virtually irreversible, consistent with the high oxophilicity of zirconium(IV). For this reason, manipulations of ZrCl4 typically require air-free techniques.
ZrCl4 is the principal starting compound for the synthesis of many organometallic complexes of zirconium. Because of its polymeric structure, ZrCl4 is usually converted to a molecular complex before use. It forms a 1:2 complex with tetrahydrofuran: CAS [21959-01-3], mp 175–177 °C. Sodium cyclopentadienide (NaC5H5) reacts with ZrCl4(THF)2 to give zirconocene dichloride, ZrCl2(C5H5)2, a versatile organozirconium complex. One of the most curious properties of ZrCl4 is its high solubility in the presence of methylated benzenes, such as durene. This solubilization arises through the formation of π-complexes.
The log (base 10) of the vapor pressure of solid zirconium tetrachloride (from 480 to 689 K) is given by the equation log10(P) = −5400/T + 11.766, where the pressure is measured in torr and the temperature in kelvins. The log (base 10) of the vapor pressure of liquid zirconium tetrachloride (from 710 to 741 K) is given by the equation log10(P) = −3427/T + 9.088. The pressure at the melting point is 14,500 torr.
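The two correlations quoted above are straightforward to evaluate. The sketch below reproduces them (pressures in torr, temperatures in kelvins) and checks that extrapolating the lower-temperature correlation to about 710 K reproduces the roughly 14,500 torr quoted at the melting point.

```python
# Vapor pressure of ZrCl4 from the correlations quoted above (pressure in torr).

def vp_low_range_torr(T_kelvin: float) -> float:
    """Correlation for the 480-689 K range: log10(P) = -5400/T + 11.766."""
    return 10 ** (-5400.0 / T_kelvin + 11.766)

def vp_high_range_torr(T_kelvin: float) -> float:
    """Correlation for the 710-741 K range: log10(P) = -3427/T + 9.088."""
    return 10 ** (-3427.0 / T_kelvin + 9.088)

print(f"{vp_low_range_torr(600):.0f} torr")    # ~583 torr at 600 K
# Extrapolating the low-range correlation to the ~710 K melting point gives
# roughly 14,500 torr, matching the figure quoted in the text:
print(f"{vp_low_range_torr(710):,.0f} torr")
```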
References
Zirconium(IV) compounds
Chlorides
Metal halides
Coordination complexes | Zirconium(IV) chloride | [
"Chemistry"
] | 872 | [
"Chlorides",
"Inorganic compounds",
"Coordination complexes",
"Coordination chemistry",
"Salts",
"Metal halides"
] |
4,224,324 | https://en.wikipedia.org/wiki/Origin%20of%20water%20on%20Earth | The origin of water on Earth is the subject of a body of research in the fields of planetary science, astronomy, and astrobiology. Earth is unique among the rocky planets in the Solar System in having oceans of liquid water on its surface. Liquid water, which is necessary for all known forms of life, continues to exist on the surface of Earth because the planet is at a far enough distance (known as the habitable zone) from the Sun that it does not lose its water, but not so far that low temperatures cause all water on the planet to freeze.
It was long thought that Earth's water did not originate from the planet's region of the protoplanetary disk. Instead, it was hypothesized water and other volatiles must have been delivered to Earth from the outer Solar System later in its history. Recent research, however, indicates that hydrogen inside the Earth played a role in the formation of the ocean. The two ideas are not mutually exclusive, as there is also evidence that water was delivered to Earth by impacts from icy planetesimals similar in composition to asteroids in the outer edges of the asteroid belt.
History of water on Earth
One factor in estimating when water appeared on Earth is that water is continually being lost to space. H2O molecules in the atmosphere are broken up by photolysis, and the resulting free hydrogen atoms can sometimes escape Earth's gravitational pull. When the Earth was younger and less massive, water would have been lost to space more easily. Lighter elements like hydrogen and helium are expected to leak from the atmosphere continually, but isotopic ratios of heavier noble gases in the modern atmosphere suggest that even the heavier elements in the early atmosphere were subject to significant losses. In particular, xenon is useful for calculations of water loss over time. Not only is it a noble gas (and therefore is not removed from the atmosphere through chemical reactions with other elements), but comparisons between the abundances of its nine stable isotopes in the modern atmosphere reveal that the Earth lost at least one ocean of water early in its history, between the Hadean and Archean eons.
Any water on Earth during the latter part of its accretion would have been disrupted by the Moon-forming impact (~4.5 billion years ago), which likely vaporized much of Earth's crust and upper mantle and created a rock-vapor atmosphere around the young planet. The rock vapor would have condensed within two thousand years, leaving behind hot volatiles which probably resulted in a majority carbon dioxide atmosphere with hydrogen and water vapor. Afterward, liquid water oceans may have existed despite the high surface temperature, owing to the increased atmospheric pressure of the CO2 atmosphere. As the cooling continued, most CO2 was removed from the atmosphere by subduction and dissolution in ocean water, but levels oscillated wildly as new surface and mantle cycles appeared.
Geological evidence also helps constrain the time frame for liquid water existing on Earth. A sample of pillow basalt (a type of rock formed during an underwater eruption) was recovered from the Isua Greenstone Belt and provides evidence that water existed on Earth 3.8 billion years ago. In the Nuvvuagittuq Greenstone Belt, Quebec, Canada, rocks dated at 3.8 billion years old by one study and 4.28 billion years old by another show evidence of the presence of water at these ages. If oceans existed earlier than this, any geological evidence has yet to be discovered (which may be because such potential evidence has been destroyed by geological processes like crustal recycling). More recently, in August 2020, researchers reported that sufficient water to fill the oceans may have always been on the Earth since the beginning of the planet's formation.
Unlike rocks, minerals called zircons are highly resistant to weathering and geological processes and so are used to understand conditions on the very early Earth. Mineralogical evidence from zircons has shown that liquid water and an atmosphere must have existed 4.404 ± 0.008 billion years ago, very soon after the formation of Earth. This presents somewhat of a paradox, as the cool early Earth hypothesis suggests temperatures were cold enough to freeze water between about 4.4 billion and 4.0 billion years ago. Other studies of zircons found in Australian Hadean rock point to the existence of plate tectonics as early as 4 billion years ago. If true, that implies that rather than a hot, molten surface and an atmosphere full of carbon dioxide, early Earth's surface was much as it is today (in terms of thermal insulation). The action of plate tectonics traps vast amounts of CO2, thereby reducing greenhouse effects, leading to a much cooler surface temperature and the formation of solid rock and liquid water.
Earth's water inventory
While the majority of Earth's surface is covered by oceans, those oceans make up just a small fraction of the mass of the planet. The mass of Earth's oceans is estimated to be 1.37 × 10²¹ kg, which is 0.023% of the total mass of Earth, 6.0 × 10²⁴ kg. An additional 5.0 × 10²⁰ kg of water is estimated to exist in ice, lakes, rivers, groundwater, and atmospheric water vapor. A significant amount of water is also stored in Earth's crust, mantle, and core. Unlike molecular H2O that is found on the surface, water in the interior exists primarily in hydrated minerals or as trace amounts of hydrogen bonded to oxygen atoms in anhydrous minerals. Hydrated silicates on the surface transport water into the mantle at convergent plate boundaries, where oceanic crust is subducted underneath continental crust. While it is difficult to estimate the total water content of the mantle due to limited samples, approximately three times the mass of the Earth's oceans could be stored there. Similarly, the Earth's core could contain four to five oceans' worth of hydrogen.
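The relative magnitudes quoted in this paragraph can be verified with a few lines of arithmetic; the mantle and core figures in the sketch are simply the rough multiples stated above, not independent estimates.

```python
# Back-of-the-envelope check of the water-inventory figures quoted above.
ocean_mass_kg    = 1.37e21
earth_mass_kg    = 6.0e24
surface_other_kg = 5.0e20   # ice, lakes, rivers, groundwater, atmospheric vapor

print(f"Oceans as a share of Earth's mass: {100 * ocean_mass_kg / earth_mass_kg:.3f}%")
# -> about 0.023%, as stated.

mantle_water_kg = 3 * ocean_mass_kg              # "approximately three oceans" (rough)
core_hydrogen_as_water_kg = 4.5 * ocean_mass_kg  # "four to five oceans" (rough midpoint)
print(f"Mantle estimate: {mantle_water_kg:.2e} kg")
print(f"Core estimate:   {core_hydrogen_as_water_kg:.2e} kg")
```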
Hypotheses for the origins of Earth's water
Extraplanetary sources
Water has a much lower condensation temperature than other materials that compose the terrestrial planets in the Solar System, such as iron and silicates. The region of the protoplanetary disk closest to the Sun was very hot early in the history of the Solar System, and it is not feasible that oceans of water condensed with the Earth as it formed. Further from the young Sun where temperatures were lower, water could condense and form icy planetesimals. The boundary of the region where ice could form in the early Solar System is known as the frost line (or snow line), and is located in the modern asteroid belt, between about 2.7 and 3.1 astronomical units (AU) from the Sun. It is therefore necessary that objects forming beyond the frost line–such as comets, trans-Neptunian objects, and water-rich meteoroids (protoplanets)–delivered water to Earth. However, the timing of this delivery is still in question.
One hypothesis claims that Earth accreted (gradually grew by accumulation of) icy planetesimals about 4.5 billion years ago, when it was 60 to 90% of its current size. In this scenario, Earth was able to retain water in some form throughout accretion and major impact events. This hypothesis is supported by similarities in the abundance and the isotope ratios of water between the oldest known carbonaceous chondrite meteorites and meteorites from Vesta, both of which originate from the Solar System's asteroid belt. It is also supported by studies of osmium isotope ratios, which suggest that a sizeable quantity of water was contained in the material that Earth accreted early on. Measurements of the chemical composition of lunar samples collected by the Apollo 15 and 17 missions further support this, and indicate that water was already present on Earth before the Moon was formed.
One problem with this hypothesis is that the noble gas isotope ratios of Earth's atmosphere are different from those of its mantle, which suggests they were formed from different sources. To explain this observation, a so-called "late veneer" theory has been proposed in which water was delivered much later in Earth's history, after the Moon-forming impact. However, the current understanding of Earth's formation allows for less than 1% of Earth's material accreting after the Moon formed, implying that the material accreted later must have been very water-rich. Models of early Solar System dynamics have shown that icy asteroids could have been delivered to the inner Solar System (including Earth) during this period if Jupiter migrated closer to the Sun.
Yet a third hypothesis, supported by evidence from molybdenum isotope ratios from a 2019 study, suggests that the Earth gained most of its water from the same interplanetary collision that caused the formation of the Moon.
The evidence from 2019 shows that the molybdenum isotopic composition of the Earth's mantle originates from the outer Solar System, likely having brought water to Earth. The explanation is that Theia, the planet said in the giant-impact hypothesis to have collided with Earth 4.5 billion years ago forming the Moon, may have originated in the outer Solar System rather than in the inner Solar System, bringing water and carbon-based materials with it.
Geochemical analysis of water in the Solar System
Isotopic ratios provide a unique "chemical fingerprint" that is used to compare Earth's water with reservoirs elsewhere in the Solar System. One such isotopic ratio, that of deuterium to hydrogen (D/H), is particularly useful in the search for the origin of water on Earth. Hydrogen is the most abundant element in the universe, and its heavier isotope deuterium can sometimes take the place of a hydrogen atom in molecules like H2O. Most deuterium was created in the Big Bang or in supernovae, so its uneven distribution throughout the protosolar nebula was effectively "locked in" early in the formation of the Solar System. By studying the different isotopic ratios of Earth and of other icy bodies in the Solar System, the likely origins of Earth's water can be researched.
Earth
The deuterium to hydrogen ratio for ocean water on Earth is known very precisely to be (1.5576 ± 0.0005) × 10⁻⁴. This value represents a mixture of all of the sources that contributed to Earth's reservoirs, and is used to identify the source or sources of Earth's water. The ratio of deuterium to hydrogen has increased over the Earth's lifetime to between 2 and 9 times the ratio at the Earth's origin, because the lighter isotope is more likely to leak into space in atmospheric loss processes. Hydrogen beneath the Earth's crust is thought to have a D/H ratio more representative of the original D/H ratio upon formation of the Earth, because it is less affected by those processes. Analysis of subsurface hydrogen contained in recently erupted lava suggests that the primordial Earth's D/H ratio was about 218‰ lower than the current ratio. No process is known that can decrease Earth's D/H ratio over time. This loss of the lighter isotope is one explanation for why Venus has such a high D/H ratio, as that planet's water was vaporized during the runaway greenhouse effect and subsequently lost much of its hydrogen to space. Because Earth's D/H ratio has increased significantly over time, the D/H ratio of water originally delivered to the planet was lower than at present. This is consistent with a scenario in which a significant proportion of the water on Earth was already present during the planet's early evolution.
Asteroids
Multiple geochemical studies have concluded that asteroids are most likely the primary source of Earth's water. Carbonaceous chondrites—which are a subclass of the oldest meteorites in the Solar System—have isotopic levels most similar to ocean water. The CI and CM subclasses of carbonaceous chondrites specifically have hydrogen and nitrogen isotope levels that closely match Earth's seawater, which suggests water in these meteorites could be the source of Earth's oceans. Two 4.5 billion-year-old meteorites found on Earth that contained liquid water alongside a wide diversity of deuterium-poor organic compounds further support this. Earth's current deuterium to hydrogen ratio also matches ancient eucrite chondrites, which originate from the asteroid Vesta in the outer asteroid belt. CI, CM, and eucrite chondrites are believed to have the same water content and isotope ratios as ancient icy protoplanets from the outer asteroid belt that later delivered water to Earth.
A further study of asteroid particles supported the theory that a significant source of Earth's water is hydrogen carried on solar-wind particles, which combines with oxygen on asteroids and then arrives on Earth in space dust. Using atom probe tomography, the study found hydroxide and water molecules on the surface of a single grain from particles retrieved from the asteroid 25143 Itokawa by the Japanese space probe Hayabusa.
Comets
Comets are kilometer-sized bodies made of dust and ice that originate from the Kuiper belt (20–50 AU) and the Oort cloud (>5,000 AU) but have highly elliptical orbits which bring them into the inner Solar System. Their icy composition and their trajectories through the inner Solar System make them a target for remote and in situ measurements of D/H ratios.
It is implausible that Earth's water originated only from comets, since isotope measurements of the deuterium to hydrogen (D/H) ratio in comets Halley, Hyakutake, Hale–Bopp, 2002T7, and Tuttle, yield values approximately twice that of oceanic water. Using this cometary D/H ratio, models predict that less than 10% of Earth's water was supplied from comets.
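One way to see where a figure of this order comes from is a simple two-end-member isotope mass balance. The sketch below is illustrative only: the ocean and comet D/H values follow the text above, while the chondritic end-member value is an assumed round number for CI/CM-like material, not a figure taken from this article:
# mass balance: f * R_comet + (1 - f) * R_chondrite = R_ocean, solve for f
R_ocean = 1.5576e-4       # D/H of ocean water, quoted earlier in the article
R_comet = 2 * R_ocean     # roughly twice oceanic, per the comet measurements above
R_chondrite = 1.4e-4      # assumed illustrative value for a CI/CM-like end member

f_comet = (R_ocean - R_chondrite) / (R_comet - R_chondrite)
print(f"cometary fraction of Earth's hydrogen: {f_comet:.1%}")   # ~9%, i.e. under 10%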
Other, shorter period comets (<20 years) called Jupiter family comets likely originate from the Kuiper belt, but have had their orbital paths influenced by gravitational interactions with Jupiter or Neptune. 67P/Churyumov–Gerasimenko is one such comet that was the subject of isotopic measurements by the Rosetta spacecraft, which found the comet has a D/H ratio three times that of Earth's seawater. Another Jupiter family comet, 103P/Hartley 2, has a D/H ratio which is consistent with Earth's seawater, but its nitrogen isotope levels do not match Earth's.
See also
Notes
Jörn Müller, Harald Lesch (2003): Woher kommt das Wasser der Erde? - Urgaswolke oder Meteoriten. Chemie in unserer Zeit 37(4), pg. 242 – 246, ISSN 0009-2851
Parts of this article were translated from the original article from the German Wikipedia, on 4/3/06
References
External links
(archived copy)
Nature journal: "Earth has water older than the Sun"
Origins of water
Origins
Beginnings
Hadean
Hadean events
Scientific problems
Water | Origin of water on Earth | [
"Physics",
"Environmental_science"
] | 3,089 | [
"Beginnings",
"Hydrology",
"Physical quantities",
"Time",
"Water",
"Spacetime"
] |
4,224,429 | https://en.wikipedia.org/wiki/Truncation%20%28geometry%29 | In geometry, a truncation is an operation in any dimension that cuts polytope vertices, creating a new facet in place of each vertex. The term originates from Kepler's names for the Archimedean solids.
Uniform truncation
In general any polyhedron (or polytope) can also be truncated with a degree of freedom as to how deep the cut is, as shown in Conway polyhedron notation truncation operation.
A special kind of truncation, usually implied, is a uniform truncation, a truncation operator applied to a regular polyhedron (or regular polytope) which creates a resulting uniform polyhedron (uniform polytope) with equal edge lengths. There are no degrees of freedom, and it represents a fixed geometric figure, just like the regular polyhedra.
In general all single ringed uniform polytopes have a uniform truncation. For example, the icosidodecahedron, represented as Schläfli symbols r{5,3} or , and Coxeter-Dynkin diagram or has a uniform truncation, the truncated icosidodecahedron, represented as tr{5,3} or , . In the Coxeter-Dynkin diagram, the effect of a truncation is to ring all the nodes adjacent to the ringed node.
A uniform truncation performed on the regular triangular tiling {3,6} results in the regular hexagonal tiling {6,3}.
Truncation of polygons
A truncated n-sided polygon will have 2n sides (edges). A regular polygon uniformly truncated will become another regular polygon: t{n} is {2n}. A complete truncation (or rectification), r{3}, is another regular polygon in its dual position.
A regular polygon can also be represented by its Coxeter-Dynkin diagram, , and its uniform truncation , and its complete truncation . The graph represents Coxeter group I2(n), with each node representing a mirror, and the edge representing the angle π/n between the mirrors, and a circle is given around one or both mirrors to show which ones are active.
Star polygons can also be truncated. A truncated pentagram {5/2} will look like a pentagon, but is actually a double-covered (degenerate) decagon ({10/2}) with two sets of overlapping vertices and edges. A truncated great heptagram {7/3} gives a tetradecagram {14/3}.
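A small computational sketch of the t{n} → {2n} rule above: for a regular n-gon with edge length s and interior angle θ, cutting each corner back by x = s / (2(1 + sin(θ/2))) makes the new corner edges and the shortened original edges equal, so the result is a regular 2n-gon. The Python code below (an illustration, with names chosen here rather than taken from any reference) checks this for the square, t{4} = {8}:
import numpy as np

def truncate_regular_polygon(n):
    """Uniform truncation of a unit-circumradius regular n-gon: t{n} -> {2n}."""
    verts = [np.array([np.cos(2 * np.pi * k / n), np.sin(2 * np.pi * k / n)]) for k in range(n)]
    s = np.linalg.norm(verts[1] - verts[0])        # original edge length
    interior = (n - 2) * np.pi / n                 # interior angle of the n-gon
    x = s / (2 * (1 + np.sin(interior / 2)))       # cut depth that equalises all edges
    new_verts = []
    for k in range(n):
        prev, cur, nxt = verts[k - 1], verts[k], verts[(k + 1) % n]
        new_verts.append(cur + (prev - cur) * x / s)   # cut point toward the previous vertex
        new_verts.append(cur + (nxt - cur) * x / s)    # cut point toward the next vertex
    return new_verts

octagon = truncate_regular_polygon(4)              # t{4} = {8}
edges = [np.linalg.norm(octagon[i] - octagon[(i + 1) % len(octagon)]) for i in range(len(octagon))]
print(np.allclose(edges, edges[0]))                # True: all eight edges have equal length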
Uniform truncation in regular polyhedra and tilings and higher
When "truncation" applies to platonic solids or regular tilings, usually "uniform truncation" is implied, which means truncating until the original faces become regular polygons with twice as many sides as the original form.
This sequence shows an example of the truncation of a cube, using four steps of a continuous truncating process between a full cube and a rectified cube. The final polyhedron is a cuboctahedron. The middle image is the uniform truncated cube; it is represented by a Schläfli symbol t{p,q,...}.
A bitruncation is a deeper truncation, removing all the original edges, but leaving an interior part of the original faces. Example: a truncated octahedron is a bitruncated cube: t{3,4} = 2t{4,3}.
A complete bitruncation, called a birectification, reduces original faces to points. For polyhedra, this becomes the dual polyhedron. Example: an octahedron is a birectification of a cube: {3,4} = 2r{4,3}.
Another type of truncation, cantellation, cuts edges and vertices, removing the original edges, replacing them with rectangles, removing the original vertices, and replacing them with the faces of the dual of the original regular polyhedra or tiling.
Higher dimensional polytopes have higher truncations. Runcination cuts faces, edges, and vertices. In 5 dimensions, sterication cuts cells, faces, and edges.
Edge-truncation
Edge-truncation is a beveling, or chamfer for polyhedra, similar to cantellation, but retaining the original vertices, and replacing edges by hexagons. In 4-polytopes, edge-truncation replaces edges with elongated bipyramid cells.
Alternation or partial truncation
Alternation or partial truncation removes only some of the original vertices.
In partial truncation, or alternation, half of the vertices and connecting edges are completely removed. The operation applies only to polytopes with even-sided faces. Faces are reduced to half as many sides, and square faces degenerate into edges. For example, the tetrahedron is an alternated cube, h{4,3}.
Diminishment is a more general term used in reference to Johnson solids for the removal of one or more vertices, edges, or faces of a polytope, without disturbing the other vertices. For example, the tridiminished icosahedron starts with a regular icosahedron with 3 vertices removed.
Other partial truncations are symmetry-based; for example, the tetrahedrally diminished dodecahedron.
Generalized truncations
The linear truncation process can be generalized by allowing parametric truncations that are negative, or that go beyond the midpoint of the edges, causing self-intersecting star polyhedra, and can parametrically relate to some of the regular star polygons and uniform star polyhedra.
Shallow truncation - Edges are reduced in length, faces are truncated to have twice as many sides, while new facets are formed, centered at the old vertices.
Uniform truncation is a special case of this with equal edge lengths. An example is the truncated cube, t{4,3}, whose square faces become octagons, with new triangular faces at the vertices.
Antitruncation - A reverse shallow truncation, truncated outwards off the original edges, rather than inward. This results in a polytope which looks like the original, but has parts of the dual dangling off its corners, instead of the dual cutting into its own corners.
Complete truncation or rectification - The limit of a shallow truncation, where edges are reduced to points. The cuboctahedron, r{4,3}, is an example.
Hypertruncation - A form of truncation that goes past the rectification, inverting the original edges, and causing self-intersections to appear.
Quasitruncation - A form of truncation that goes even farther than hypertruncation, where the inverted edge becomes longer than the original edge. It can be generated from the original polytope by treating all the faces as retrograde, i.e. going backwards round the vertex. For example, quasitruncating the square gives a regular octagram (t{4/3}={8/3}), and quasitruncating the cube gives the uniform stellated truncated hexahedron, t{4/3,3}.
See also
Uniform polyhedron
Uniform 4-polytope
Bitruncation (geometry)
Rectification (geometry)
Alternation (geometry)
Conway polyhedron notation
Truncated cone
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, (pp. 145–154 Chapter 8: Truncation)
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
External links
Polyhedra Names, truncation
Polytopes | Truncation (geometry) | [
"Physics"
] | 1,660 | [
"Tessellation",
"Truncated tilings",
"Symmetry"
] |
4,224,672 | https://en.wikipedia.org/wiki/Canadian%20Military%20Engineers | The Canadian Military Engineers (CME; ) is the military engineering personnel branch of the Canadian Armed Forces. The members of the branch that wear army uniform comprise the Corps of Royal Canadian Engineers (RCE; ).
The mission of the Canadian Military Engineers is to contribute to the survival, mobility, and combat effectiveness of the Canadian Armed Forces. Their roles are to conduct combat operations, support the Canadian Forces in war and peace, support national development, provide assistance to civil authorities, and support international aid programs. Military engineers' responsibilities encompass the use of demolitions and land mines, the design, construction and maintenance of defensive works and fortifications, urban operations (hostile room entry), breaching obstacles, establishing/maintaining lines of communication, and bridging. They also provide water, power and other utilities, provide fire, aircraft crash and rescue services, hazardous material operations, and develop maps and other engineering intelligence. In addition, military engineers are experts in deception and concealment, as well as in the design and development of equipment necessary to carry out these operations.
The official role of the combat engineer is to allow friendly troops to live, move and fight on the battlefield and deny that to the enemy.
History
Local militia engineering companies 1855–1903
With the passing of the 1855 Militia Act, volunteer militia engineering companies formed within local militia units:
Halifax: two companies
Montreal: one company (1st Volunteer Militia Engineering Company)
Ottawa: one company
Quebec: one company
Creation
Following the Boer War, the Canadian Government realized that the defence of Canada required more than just a single infantry battalion and a few artillery batteries as part of the permanent defence force. In 1903 the Royal Canadian Engineers were founded as the basis of the permanent-force military engineers, while the militia's Canadian Engineers were created under the leadership of a former Royal Military College of Canada officer cadet, Lieutenant-Colonel Paul Weatherbe.
First World War
One of the first tasks completed by the engineers after the declaration of war upon Germany in 1914 was for the rapid development of the Valcartier training site in Quebec. At its peak size, 30,000 men were stationed here before the 1st Canadian Division was deployed to England.
When the 1st Division arrived on the front in Belgium they were accompanied by field companies of the Canadian Engineers (men recruited into the service after the start of the war were part of the Militia branch and not the regulars). These troops were responsible for the construction of defences, sanitation systems, water supplies, bridging, and assisting with trench raids. Canadian Engineers also served in the Middle East fighting the Turks.
One of the most important functions of the Sappers in the war was to dig tunnels for mines underneath enemy trenches, with which to plant explosives to destroy them. At the Battle of Vimy Ridge, and particularly at the Battle of Messines, several such mines were used to win the battle. The Canadian Military Engineers contributed three tunnelling companies to the British Expeditionary Force: 1st Canadian Tunnelling Company, 2nd Canadian Tunnelling Company and 3rd Canadian Tunnelling Company. One was formed from men on the battlefield, while two other companies first trained in Canada and were then shipped to France.
The only Victoria Cross the Canadian Engineers have ever received was earned by Captain C. N. Mitchell for actions on 8 October 1918 at Canal de I'Escaut, north-east of Cambrai.
In total, more than 40,000 Canadians served as Engineers in the war, with 14,000 on the front on the last day of the war.
On 1 June 2022, the perpetuation of No. 2 Construction Battalion, CEF, was assigned to the CME, with 4 Engineer Support Regiment having the honour of publicly recognizing the perpetuation.
Between the wars
On demobilization, the permanent force of Engineers was reduced to 38 officers and 249 other ranks. As a matter of honour, King George V, the Canadian monarch, bestowed on the organization the right to use the prefix "Royal" before its name in 1932. On 29 April 1936, the Militia and Permanent components were joined to form the Corps of Royal Canadian Engineers. On this date the Militia adopted the cap badge used by the regulars.
Second World War
The Corps of Royal Canadian Engineers expanded dramatically in size to support Canada's war effort.
On August 31, 1939, the Permanent Force engineers included 50 officers (with 14 seconded to other branches of the Canadian Army) and 323 other ranks; the maximum size of the Corps was reached in 1944, when it included 210 officers and 6283 other ranks.
In keeping with British Army practice, company-sized units in the two armoured divisions were called "squadrons" following cavalry terminology. The following units were deployed in Canada and in Europe:
1st Canadian Infantry Division
1st Field Company
3rd Field Company
4th Field Company
2nd Field Park Company
2nd Canadian Infantry Division
2nd Field Company
7th Field Company
11th Field Company
1st Field Park Company
3rd Canadian Infantry Division
6th Field Company
16th Field Company
18th Field Company
3rd Field Park Company
4th Canadian Armoured Division
8th Field Squadron
9th Field Squadron
6th Field Park Squadron
5th Canadian Armoured Division
1st Field Squadron
10th Field Squadron
4th Field Park Squadron
6th Canadian Infantry Division in Pacific Command
20th Field Company
25th Field Company
26th Field Company
7th Field Park Company
7th Canadian Infantry Division in Atlantic Command
15th Field Company
23rd Field Company
27th Field Company
5th Field Park Company
8th Canadian Infantry Division in Pacific Command
21st Field Company
24th Field Company
I Canadian Corps
12th Field Company
13th Field Company
14th Field Company
9th Field Park Company
1st Drilling Company
II Canadian Corps
29th Field Company
30th Field Company
31st Field Company
8th Field Park Company
2nd Drilling Company
First Canadian Army
First Canadian Army Troops Engineers
5th Field Company (unit code 1207)
20th Field Company (unit code 1208)
23rd Field Company (unit code 1209)
10th Field Park Company (unit code 1210)
2nd Canadian Army Troops Engineers
32nd Field Company
33rd Field Company
34th Field Company
11th Field Park Company
No. 1 Workshop and Park Company
1st Field (Air) Survey Company
2nd Field Survey Company
3rd Field (Reproduction) Survey Company
General Headquarters (GHQ) and Line of Communication (LoC) Troops
1st Mechanical Equipment Company
1st Mechanical Equipment Park Company
2nd Battalion
3rd Battalion
1st Road Construction Company
2nd Road Construction Company
No. 1 Railway Operating Company
No. 1 Railway Workshop Company
Other units
1st Chemical Warfare Company (in Canada, September 1942 – 31 August 1943)
2nd Chemical Warfare Company (in Canada, September 1942 – 31 August 1943)
No.1 Tunnelling Company R.C.E. (in Gibraltar)
No.2 Tunnelling Company R.C.E. (in Gibraltar)
The senior officers of the Corps in World War II were as follows:
Chief Engineer, First Canadian Army
Major-General Charles Sumner Lund Hertzberg (6 April 1942 – 23 June 1943)
Brigadier James Learmont Melville (24 June 1943 – October 1943)
Brigadier Allister Thompson MacLean (20 October 1943 – 1 September 1944)
Brigadier Geoffrey Walsh (2 September 1944 – 20 July 1945)
Colonel Henry Lloyd Meuser (Acting Chief Engineer, 21 July 1945 – 31 December 1945)
Chief Engineer, I Canadian Corps
Brigadier Charles Sumner Lund Hertzberg (25 December 1940 – 6 April 1942)
Brigadier James Learmont Melville (6 April 1942 – October 1943)
Brigadier Alan Burton Connelly (1943–1944)
Brigadier Colin Alexander Campbell (27 July 1944 – 23 April 1945)
Brigadier John Despard Christian (24 April 1945 – 17 July 1945)
Chief Engineer, II Canadian Corps
Brigadier Allister Thompson MacLean (1943)
Brigadier William Norman Archibald Bostock (1943–1944)
Brigadier Geoffrey Walsh (13 February 1944 – 1 September 1944)
Brigadier Dudley Kingdon Black (2 September 1944 – 16 June 1945)
Korea
Post-Korea, Unification and the Cold War
The branch maintained a military band in its ranks from 1953 to 1968. During its 15 years in existence, the band performed for members of the Canadian royal family, Governors General of Canada including Georges Vanier, and American President Lyndon B. Johnson. In 1968, the band was dissolved, with most of its members being sent to the Royal Canadian Navy.
On 1 February 1968, the Canadian Army, Royal Canadian Navy, and Royal Canadian Air Force were officially unified as the Canadian Armed Forces. As such the Royal Canadian Engineers, Royal Canadian Navy Civil Engineers and Royal Canadian Air Force Construction Division were amalgamated. However, the new branch went under the name Royal Canadian Engineers until 1973 when the branch was officially named as the Canadian Military Engineers.
The present day structure of army field units was set on 17 June 1977 with the creation of 1 Combat Engineer Regiment (1 CER), 2 CER, 4 ESR and 5 CER. The new regiments were each created from one of the squadrons of the former 1 Field Engineer Regiment.
21st century
The role of the Canadian Military Engineers has been expanding. The regular force component has been expanding the size of their units, due to the current missions of the Canadian Armed Forces.
In April 1997, Canada's Primary Reserve reorganized into ten brigade groups and in November 1997, the first reserve combat engineer regiment was created by converting an armoured reconnaissance regiment. A number of years later the three field engineer regiments, and seven independent field engineer squadrons were reorganized into combat engineer regiments. Three Canadian brigade groups had more than one engineer unit, and one (38 Canadian Brigade Group) did not have any units at all. Now the field engineer regiments have been redesignated or amalgamated to become combat engineer regiments, and the field engineer squadrons have either been amalgamated to make new combat engineer regiments or reroled as generic engineer squadrons.
38 CBG previously had 21st Field Engineer Squadron, based in Flin Flon, Manitoba. It was however disbanded in 1995. In 2003, the Fort Garry Horse in Winnipeg, Manitoba, began hosting what became 31 Engineer Squadron in 2012. The brigade formed 46 Engineer Squadron in Saskatoon in 2012, which was a subunit of the North Saskatchewan Regiment until it gained full strength. Both squadrons are now subunits of 38 Combat Engineer Regiment.
The deployment in Afghanistan required considerable use of engineers for road clearance, explosive ordnance disposal, heavy equipment, and combat support. By the end of the deployment 16 members of the RCE were killed in Afghanistan.
In April 2013, the title Corps of Royal Canadian Engineers was brought back for the army element of the branch.
Customs and traditions
Colonel-in-chief
Queen Elizabeth II, Queen of Canada, was the colonel-in-chief of the CME until her death in 2022. King George V, Edward VIII, and George VI all served as previous colonels-in-chief of the Royal Canadian Engineers.
Mottos
King George V granted the CME the same mottoes as the Royal Engineers.
(Latin, "Everywhere") serves as a substitution for the battle honours the corps would have obtained if they were a line regiment.
(Latin, "Whither right and glory lead")
Cap badge
From shortly after their creation until 1967, the Royal Canadian Engineers had a nearly identical cap badge to the Royal Engineers. This consisted of the Cipher of the Reigning monarch, surrounded by the Garter, surmounted by the crown with the words Royal Canadian Engineers on the scroll at the bottom, and surrounded by maple leaves instead of laurels.
The cap badge came to its current form after unification. Since the Royal Canadian Engineer cap badge was representative only of the army, a new one was developed, which is almost identical to that worn by the (Army's) non-permanent Canadian Engineers prior to the Great War (which was not bilingual and did not use enamel). In bilingual format, the words Engineers and appear on the cap badge indicating the bilingual nature of the CME. The word also appears, a motto inherited by engineers and artillery in the Canadian military from their British forebears.
From the 1960s to the late 1980s or early 1990s, the branch badge was enamel-highlighted cast metal with a prong-type slider to attach to both the beret and forage cap. The collar dogs (worn only on army uniforms after introduction of distinctive environmental uniforms) were miniatures of the cap badge. By 1998, the metal cap badge had been replaced by an embroidered cloth version which was sewn directly to the beret. Collar dogs were replaced by a crouching beaver over the motto . Left- and right-facing beavers are required for a complete set.
Chimo
The CME/RCE greeting or toast is "chimo". This expression is also often used as a closing on correspondence between engineers. The word chimo is derived from an Inuktitut greeting that means "hello," "goodbye," "peace be with you," and similar sentiments. This salutation was used in the Ungava region of northern Quebec and shares the same derivation as Fort Chimo (today Kuujjuaq) on Ungava Bay in northern Quebec. The current spelling and pronunciation result from the English and French languages importing the loanword from Inuktitut. On April 1, 1946, the Canadian Army assumed responsibility for the portions of the Alaska Highway that lay within Canadian boundaries. This section of the highway was renamed the "Northwest Highway System" and the responsibility for maintenance was given to the Royal Canadian Engineers for the next 20 years. The soldiers of the CME/RCE adopted the greeting of "chimo" and in 1973 it became the cheer of the CME.
CME Flag
The present CME flag was created at the time of unification. It measures six "units" long by three "units" high, and is in the colours of brick red and royal blue.
Engineer Prayer
The Engineer Prayer was created for 2 Field Engineer Regiment by Major Hugh Macdonald, the unit's padre. It goes as follows:
Patron saint
The Canadian Military Engineers have no patron saint, but Engineers often take part in artillery celebrations honouring St. Barbara, the patron saint of the artillery. Engineers, along with the artillery and miners, celebrate her feast day on December 4. St. Barbara is the patroness of artillerymen, fireworks manufacturers, firemen, and stonemasons, and is invoked against sudden death, fires, and storms (especially lightning storms).
Equipment
The CME/RCE has various equipment for use in supporting the Canadian Forces at home and on deployment overseas.
For more refer to Engineering and support vehicles of the Canadian Forces.
Training
Canadian Forces School of Military Engineering
The Canadian Forces School of Military Engineering (CFSME) at CFB Gagetown in Oromocto, New Brunswick is responsible for the conduct of 85 different courses that span all ranks and occupations within the Field, Construction and Airfield Engineer organizations. CFSME is the Canadian Forces Centre of Excellence in Engineer Training and home of the Engineers.
Units
Regular Force units
Reserve Force units
Order of precedence
See also
List of Canadian organizations with royal prefix
Royal Engineers, Columbia Detachment
Royal Engineers
References
External links
Canadian Forces Recruiting
Canadian Forces and Department of National Defence
Administrative corps of the Canadian Army
Military units and formations established in 1903
Army units and formations of Canada in World War I
Army units and formations of Canada in World War II
Canadian Armed Forces personnel branches
Military history of Canada
Military engineer corps | Canadian Military Engineers | [
"Engineering"
] | 3,040 | [
"Engineering units and formations",
"Military engineer corps"
] |
4,224,687 | https://en.wikipedia.org/wiki/164%20%28number%29 | 164 (one hundred [and] sixty-four) is the natural number following 163 and preceding 165.
In mathematics
164 is a zero of the Mertens function.
In base 10, 164 is the smallest number that can be expressed as a concatenation of two squares in two different ways: as 1 concatenate 64 or 16 concatenate 4.
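The claim is easy to confirm by brute force. The short Python search below (illustrative code written for this purpose; it disallows leading zeros in either block) finds 164 as the first number admitting two such splittings:
from math import isqrt

def is_square_chunk(s):
    # reject empty chunks and chunks with a leading zero such as "04"
    if not s or (len(s) > 1 and s[0] == "0"):
        return False
    return isqrt(int(s)) ** 2 == int(s)

def square_splits(n):
    s = str(n)
    return [(s[:i], s[i:]) for i in range(1, len(s))
            if is_square_chunk(s[:i]) and is_square_chunk(s[i:])]

n = 1
while len(square_splits(n)) < 2:
    n += 1
print(n, square_splits(n))    # 164 [('1', '64'), ('16', '4')]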
External links
Number Facts and Trivia: 164
The Number 164
The Positive Integer 164
Integers | 164 (number) | [
"Mathematics"
] | 91 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
4,224,977 | https://en.wikipedia.org/wiki/Automatic%20group | In mathematics, an automatic group is a finitely generated group equipped with several finite-state automata. These automata represent the Cayley graph of the group. That is, they can tell whether a given word representation of a group element is in a "canonical form" and can tell whether two elements given in canonical words differ by a generator.
More precisely, let G be a group and A be a finite set of generators. Then an automatic structure of G with respect to A is a set of finite-state automata:
the word-acceptor, which accepts for every element of G at least one word in representing it;
multipliers, one for each generator a in A (together with one for the identity), which accept a pair (w1, w2), for words wi accepted by the word-acceptor, precisely when w1·a = w2 in G.
The property of being automatic does not depend on the set of generators.
Properties
Automatic groups have a word problem solvable in quadratic time. More strongly, a given word can actually be put into canonical form in quadratic time, based on which the word problem may be solved by testing whether the canonical forms of two words represent the same element (using the multiplier for the identity).
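As a toy illustration of the canonical-form idea (a sketch only, not the multiplier automata and not a general construction), take the free abelian group Z² with generators x and y, writing their inverses as X and Y. Canonical words consist of a block of x's or X's followed by a block of y's or Y's; the word acceptor is then a regular language, and the word problem reduces to computing and comparing canonical forms:
import re

# Word acceptor for canonical words in Z^2 = <x, y | xy = yx>, e.g. "xxY" or "XXXyy"
WORD_ACCEPTOR = re.compile(r"^(x*|X*)(y*|Y*)$")

def accepts(word):
    """Does the word acceptor accept this word (i.e. is it in canonical form)?"""
    return bool(WORD_ACCEPTOR.match(word))

def canonical(word):
    """Rewrite an arbitrary word over {x, X, y, Y} into its canonical form."""
    m = word.count("x") - word.count("X")
    n = word.count("y") - word.count("Y")
    return ("x" * m if m >= 0 else "X" * -m) + ("y" * n if n >= 0 else "Y" * -n)

def same_element(u, v):
    """Word problem: do u and v represent the same element of Z^2?"""
    return canonical(u) == canonical(v)

assert accepts("XXyyy") and not accepts("yx")
assert same_element("xyX", "y")   # x y x^-1 = y, since the group is abelian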
Automatic groups are characterized by the fellow traveler property. Let d(x, y) denote the distance between x and y in the Cayley graph of G. Then, G is automatic with respect to a word acceptor L if and only if there is a constant C such that for all accepted words u and v which differ by at most one generator, the distance between the respective prefixes of u and v is bounded by C. In other words, d(u(k), v(k)) ≤ C for every k, where w(k) denotes the k-th prefix of w (or w itself if k > |w|). This means that when reading the words synchronously, it is possible to keep track of the difference between both elements with a finite number of states (the neighborhood of the identity with diameter C in the Cayley graph).
Examples of automatic groups
The automatic groups include:
Finite groups. To see this take the regular language to be the set of all words in the finite group.
Euclidean groups
All finitely generated Coxeter groups
Geometrically finite groups
Examples of non-automatic groups
Baumslag–Solitar groups
Non-Euclidean nilpotent groups
Not every CAT(0) group is biautomatic
Biautomatic groups
A group is biautomatic if it has two multiplier automata, for left and right multiplication by elements of the generating set, respectively. A biautomatic group is clearly automatic.
Examples include:
Hyperbolic groups.
Any Artin group of finite type, including braid groups.
Automatic structures
The idea of describing algebraic structures with finite-automata can be generalized from groups to other structures. For instance, it generalizes naturally to automatic semigroups.
References
Further reading
.
Computability theory
Properties of groups
Combinatorics on words
Computational group theory | Automatic group | [
"Mathematics"
] | 578 | [
"Mathematical structures",
"Mathematical logic",
"Properties of groups",
"Combinatorics",
"Algebraic structures",
"Computability theory",
"Combinatorics on words"
] |
4,224,990 | https://en.wikipedia.org/wiki/Marjorie%20Clarke | Marjorie J. "Maggie" Clarke is an American environmental scientist who specializes in recycling participation, waste prevention methods, waste-to-energy/incinerator emissions controls, environmental impacts of the World Trade Center fires and collapse, and community botanical gardening. Since the September 11, 2001 attacks she has focused on increasing participation in New York City's waste prevention and recycling programs.
Early life and education
She was born on July 14, 1953, in Miami, Florida. She graduated in 1975 with a B.A. in geology from Smith College. She received an M.S. in environmental science from Johns Hopkins University in 1978 and an M.S. in energy technology from New York University in 1982. She completed a Ph.D. in environmental sciences in 2000.
Career and research
Clarke was the Department of Sanitation's specialist on emissions from incinerators from 1984 to 1988 and served on a National Academy of Sciences committee on Health Effects of Waste Incineration.
From 2002 to 2004, she was a scientist-in-residence and adjunct assistant professor at Lehman College, and an adjunct professor at Hunter College, City University of New York from 1996 to 2005.
Clarke is a persistent questioner of United States Environmental Protection Agency's claims about the safety of the World Trade Center site.
She also conceived and garnered support for a New York City local law to eliminate 2,200 apartment building incinerators, which was signed into law in 1989.
NGO participation
Clarke has been chair or vice chair of the Manhattan Citizens' Solid Waste Advisory Board for 8 of the years since its inception in 1990. She co-founded and has been president of the Riverside-Inwood Neighborhood Garden (RING), a volunteer botanical garden in Upper Manhattan, since 1984.
See also
Health effects arising from the September 11, 2001 attacks
References
External links
www.maggieclarkeenvironmental.com - Papers and testimony
1953 births
Living people
American environmentalists
American women environmentalists
American environmental scientists
People from Miami
Smith College alumni
Johns Hopkins University alumni
New York University alumni
CUNY Graduate Center alumni
Lehman College faculty
Hunter College faculty
American scientists
21st-century American women | Marjorie Clarke | [
"Environmental_science"
] | 422 | [
"American environmental scientists",
"Environmental scientists"
] |
4,225,013 | https://en.wikipedia.org/wiki/Comparison%20of%20webmail%20providers | The following tables compare general and technical information for a number of notable webmail providers who offer a web interface in English.
The list does not include web hosting providers who may offer email server and/or client software as part of a hosting package, or telecommunication providers (mobile network operators, internet service providers) who may offer mailboxes exclusively to their customers.
General
General information on webmail providers and products
Supported protocols
Digital rights
Verification
How much information users must provide to verify and complete the registration when opening an account (green means less personal information requested):
Secure delivery
Features to reduce the risk of third-party tracking and interception of the email content; measures to increase the deliverability of correct outbound messages.
Other
Unique features
Features
See also
Comparison of web search engines - often merged with webmail by companies that host both services
References
Webmail Providers
Network software comparisons | Comparison of webmail providers | [
"Technology"
] | 174 | [
"Online services comparisons",
"Computing comparisons"
] |
4,225,132 | https://en.wikipedia.org/wiki/Bion%20%28satellite%29 | The Bion satellites (), also named Biocosmos, is a series of Soviet (later Russian) biosatellites focused on space medicine.
Bion space program
Bion precursor flights and Bion flights
The Soviet biosatellite program began in 1966 with Kosmos 110, and resumed in 1973 with Kosmos 605. Cooperation in space ventures between the Soviet Union and the United States was initiated in 1971, with the signing of the United States/Soviet Union Science and Applications Agreement (which included an agreement on space research cooperation). The Soviet Union first offered to fly U.S. experiments on a Kosmos biosatellite in 1974, only a few years after the termination (in 1969) of the U.S. biosatellite program. The offer was realized in 1975 when the first joint U.S./Soviet research was carried out on the Kosmos 782 mission.
The Bion spacecraft were based on the Zenit spacecraft and launches began in 1973 with primary emphasis on the problems of radiation effects on human beings. Launches in the program included Kosmos 110, 605, 690, 782, plus Nauka modules flown on Zenit-2M reconnaissance satellites. Additional experimental equipment could be carried in the external Nauka module.
The Soviet/Russian Bion program provided U.S. investigators a platform for launching Fundamental Space Biology and biomedical experiments into space. The Bion program, which began in 1966, included a series of missions that flew biological experiments using primates, rodents, insects, cells, and plants on a biosatellite in near Earth orbit. NASA became involved in the program in 1975 and participated in 9 of the 11 Bion missions. NASA ended its participation in the program with the Bion No.11 mission launched in December 1996. The collaboration resulted in the flight of more than 100 U.S. experiments, one-half of all U.S. life sciences flight experiments accomplished with non-human subjects.
The missions ranged from five days (Bion 6 / Kosmos 1514) to around 22 days (Bion 1 and Kosmos 110).
Bion-M
In 2005, the Bion program was resumed with three new satellites of the modified Bion-M type; the first flight was launched on 19 April 2013 from Baikonur Cosmodrome, Kazakhstan. The first satellite of the new series, Bion-M1, featured an aquarium by the German Aerospace Center (DLR) and carried 45 mice, 18 Mongolian gerbils, 15 geckos, snails, fish and micro-organisms into orbit for 30 days before re-entry and recovery. All the gerbils died due to a hardware failure, but the condition of the rest of the experiments, including all the geckos, was satisfactory. Half the mice died, as had been predicted.
Bion-M2 is scheduled to launch no earlier than March 2025 on a Soyuz 2.1a rocket to an altitude of 800 km. The orbiter will carry 75 mice and studies will focus on how they are affected at the molecular level by space radiation.
Launch history
See also
BIOPAN
Biosatellite program
EXPOSE
Foton-M2
Interkosmos
List of Kosmos satellites
List of microorganisms tested in outer space
O/OREOS
OREOcube
Tanpopo
Zond 5
References
External links
Zenit Satellites - Bion variant
Astronautix, Bion
TsSKB, Bion images (Russian)
R. W. Ballard, and J. P. Connolly; U.S./U.S.S.R. joint research in space biology and medicine on Kosmos biosatellites, FASEB J. 4: 5-9 (Overview of Bion 1 to 9)
Satellites formerly orbiting Earth
Satellites of the Soviet Union
Satellites of Russia
Animals in space
Astrobiology space missions
Biosatellites
Animal testing in the Soviet Union | Bion (satellite) | [
"Chemistry",
"Biology"
] | 797 | [
"Animal testing",
"Space-flown life",
"Animals in space"
] |
4,225,396 | https://en.wikipedia.org/wiki/Urban%20wilderness | Urban wilderness refers to informal green spaces within urban areas that distant enough from urbanized areas so that human activities cannot be registered. Urban wilderness areas within cities have been shown to beneficially impact the public's perception of wilderness and nature, making this an important element to future city planning
Overview
Key traits of urban wilderness that differentiate it from other urban green spaces:
Involves green spaces that are far enough removed from the urban areas, so human actions cannot be noticed.
Supports biodiversity - Urban wilderness efforts aim to enhance/improve a regions' local biodiversity through careful management plans.
A high degree of self-regulation - vegetation can survive with minimal interference or management by humans.
Various urban wilderness areas have been established throughout the world. Examples include the Knoxville Urban Wilderness in Knoxville, TN, Purgatory Creek Natural Area in San Marcos, TX, the Danube-Auen National Park in Vienna and Lower Austria, the Turkey Mountain Urban Wilderness Area in Tulsa, and the Milwaukee River Greenway in Milwaukee, WI.
History
The nineteenth and twentieth centuries saw the urbanization of cities. Jacob Riis and other reformers fought for parks in urban areas.
While many societies had traditions of intense urban plantings, such as the rooftops of pre-conquistador Mexico City, these traditions did not reemerge on a larger scale in the industrialized world until the creation of naturalistic urban parks, such as the ones by Calvert Vaux and Frederick Law Olmsted.
More recently, groups such as squatters and Reclaim The Streets have performed guerrilla plantings, worked in and on abandoned buildings, and torn holes in highway asphalt to fill with soil and flowers. These actions have been effective in creating new planted zones in economically stagnant areas like urban Eastern Germany, where abandoned buildings have been reverted to forest-like conditions.
See also
Hundertwasserhaus
Green roof
Green wall
Urban forestry
Urban ecology
Urban agriculture
Urban prairie
References
Urban planning
Habitats
Urban forestry | Urban wilderness | [
"Engineering"
] | 389 | [
"Urban planning",
"Architecture"
] |
4,225,483 | https://en.wikipedia.org/wiki/Kosmos%20110 | Kosmos 110 ( meaning Kosmos 110) was a Soviet spacecraft launched on 22 February 1966 from the Baikonur Cosmodrome aboard a Voskhod rocket. It carried two dogs, Veterok ("Breeze") and Ugolyok ("Little piece of coal"). It was one of the more eye-catching and popular experiments of the long series of Russian Kosmos satellites.
Mission
The launch of Kosmos 110 was conducted using a Voskhod 11A57 s/n R15000-06 carrier rocket, which flew from Site 31/6 at Baikonour. The launch occurred at 20:09:36 GMT on 22 February 1966. Kosmos 110 separated from its launch vehicle into a low Earth orbit with a perigee of , an apogee of , an inclination of 51.9°, and an orbital period of 95.3 minutes.
It incorporated a re-entry body (capsule) for landing scientific instruments and test objects. It was a biological satellite that made a sustained biomedical experiment through the Van Allen radiation belts with the dogs Veterok and Ugolyok. In addition to the two dogs, several species of plants, moisturized prior to launch, were also carried. On 16 March 1966, after 22 days in orbit around the Earth, they landed safely and were recovered by recovery forces at 14:09 GMT. The dogs had orbited the Earth 330 times.
Results from the mission showed that while some beans germinated poorly, lettuce grew larger overall with 50% more yield, and Chinese cabbage showed greater mass. The seeds that germinated during the flight were the first to do so in space.
Overall the mission showed that long duration space flight had definite but variable effects on plants, with some producing better results than on Earth.
The two dogs showed severe dehydration, weight loss, loss of muscle and coordination and took several weeks to fully recover.
This spaceflight of record-breaking duration was not surpassed by humans until Skylab 2 in June 1973 and still stands as the longest space flight by dogs.
See also
1966 in spaceflight
Animals in space
Russian space dogs
References
Kosmos satellites
1966 in the Soviet Union
1966 in spaceflight
Spacecraft launched in 1966
Life in outer space | Kosmos 110 | [
"Astronomy"
] | 467 | [
"Life in outer space",
"Outer space"
] |
4,225,577 | https://en.wikipedia.org/wiki/N-Ethylmaleimide | N-Ethylmaleimide (NEM) is an organic compound that is derived from maleic acid. It contains the amide functional group, but more importantly it is an alkene that is reactive toward thiols and is commonly used to modify cysteine residues in proteins and peptides.
Organic chemistry
NEM is a Michael acceptor in the Michael reaction, which means that it adds nucleophiles such as thiols. The resulting thioether features a strong C–S bond and the reaction is virtually irreversible. Reaction with thiols occurs in the pH range 6.5–7.5; at a more alkaline pH, NEM may react with amines or undergo hydrolysis. NEM has been widely used to probe the functional role of thiol groups in enzymology. NEM is an irreversible inhibitor of all cysteine peptidases, with alkylation occurring at the active site thiol group (see schematic).
Case studies
NEM blocks vesicular transport. In lysis buffers, 20 to 25 mM of NEM is used to inhibit de-sumoylation of proteins for Western Blot analysis. NEM has also been used as an inhibitor of deubiquitinases.
N-Ethylmaleimide was used by Arthur Kornberg and colleagues to knock out DNA polymerase III in order to compare its activity to that of DNA polymerase I (pol III and I, respectively). Kornberg had been awarded the Nobel Prize for discovering pol I, then believed to be the mechanism of bacterial DNA replication, although in this experiment he showed that pol III was the actual replicative machinery.
NEM activates ouabain-insensitive Cl-dependent K efflux in low-K sheep and goat red blood cells. This discovery contributed to the molecular identification of K–Cl cotransport (KCC) in human embryonic cells transfected by KCC1 isoform cDNA, 16 years later. Since then, NEM has been widely used as a diagnostic tool to uncover or manipulate the membrane presence of K–Cl cotransport in cells of many species in the animal kingdom. Despite repeated unsuccessful attempts to identify chemically the target thiol group, at physiological pH, NEM may form adducts with thiols within protein kinases that phosphorylate KCC at specific serine and threonine residues primarily within the C-terminal domain of the transporter. The ensuing dephosphorylation of KCC by protein phosphatases leads to activation of KCC.
References
External links
The MEROPS online database for peptidases and their inhibitors: NEM
The bifunctional analogues such as p-NN′-phenylenebismaleimide can be used as cross-linking reagent for cystine residues. see Lutter, L. C., Zeichhardt, H., Kurland, C. G. & Stoffier, G. (1972) Mol. Gen. Genet. 119, 357-366.
Maleimides
Biochemistry
Biochemistry methods
Reagents
Reagents for biochemistry
Enzyme inhibitors
Protease inhibitors | N-Ethylmaleimide | [
"Chemistry",
"Biology"
] | 675 | [
"Biochemistry methods",
"Reagents for biochemistry",
"Biochemistry",
"nan"
] |
4,225,622 | https://en.wikipedia.org/wiki/Zenit-2M | The Zenit-2M, Zenit-2SB, Zenit-2SLB or Zenit-2FG was a Ukrainian expendable carrier rocket derived from the Zenit-3SL. It was a member of the Zenit family of rockets, which were designed by the Yuzhmash.
Development
The Zenit 2M was a modernised version of the Zenit-2, incorporating modifications and upgrades made to the design for the Sea Launch programme.
Launches of Zenit-2M rockets were conducted from Baikonur Cosmodrome Site 45/1. Commercial launches are conducted by Land Launch, and use the designation 2SLB, however as of 2011, no commercial launches have been ordered and no launch of 2SLB has taken place as of 2023. Launches conducted by Roskosmos or the Russian Space Forces use the designation 2M. The designation 2SB can also be applied to the rocket when it is being used as part of a larger vehicle, such as the Zenit-3SLB.
The first launch of a Zenit-2M occurred on 29 June 2007, carrying the last Tselina-2 ELINT satellite for the Russian Space Forces, Tselina-2 satellites having been previously launched by older Zenit-2 rockets. The second launch, carrying the Fobos-Grunt and Yinghuo-1 spacecraft, was conducted on 8 November 2011, using a modified configuration designated the Zenit-2FG. This configuration incorporated the payload fairing used on the Zenit-3F rocket, and a special adaptor for the Fobos-Grunt spacecraft, which incorporated a Fregat-derived propulsion system. The Zenit-2 and Zenit-2M, however, were supplanted by the Zenit-3SLB after 2008.
See also
List of Zenit launches
References
Zenit (rocket family)
Vehicles introduced in 2007 | Zenit-2M | [
"Astronomy"
] | 394 | [
"Rocketry stubs",
"Astronomy stubs"
] |
4,225,907 | https://en.wikipedia.org/wiki/Docstring | In programming, a docstring is a string literal specified in source code that is used, like a comment, to document a specific segment of code. Unlike conventional source code comments, or even specifically formatted comments like docblocks, docstrings are not stripped from the source tree when it is parsed and are retained throughout the runtime of the program. This allows the programmer to inspect these comments at run time, for instance as an interactive help system, or as metadata.
Languages that support docstrings include Python, Lisp, Elixir, Clojure, Gherkin, Julia and Haskell.
Implementation examples
Elixir
Documentation is supported at language level, in the form of docstrings. Markdown is Elixir's de facto markup language of choice for use in docstrings:
defmodule MyModule do
@moduledoc """
Documentation for my module. With **formatting**.
"""
@doc "Hello"
def world do
"World"
end
end
Lisp
In Lisp, docstrings are known as documentation strings. The Common Lisp standard states that a particular implementation may choose to discard docstrings whenever they want, for whatever reason. When they are kept, docstrings may be viewed and changed using the DOCUMENTATION function. For instance:
(defun foo () "hi there" nil)
(documentation #'foo 'function) => "hi there"
Python
The common practice of documenting a code object at the head of its definition is captured by the addition of docstring syntax in the Python language.
The docstring for a Python code object (a module, class, or function) is the first statement of that code object, immediately following the definition (the 'def' or 'class' statement). The statement must be a bare string literal, not any other kind of expression. The docstring for the code object is available on that code object's __doc__ attribute and through the help function.
The following Python file shows the declaration of docstrings within a Python source file:
"""The module's docstring"""
class MyClass:
"""The class's docstring"""
def my_method(self):
"""The method's docstring"""
def my_function():
"""The function's docstring"""
Assuming that the above code was saved as mymodule.py, the following is an interactive session showing how the docstrings may be accessed:
>>> import mymodule
>>> help(mymodule)
The module's docstring
>>> help(mymodule.MyClass)
The class's docstring
>>> help(mymodule.MyClass.my_method)
The method's docstring
>>> help(mymodule.my_function)
The function's docstring
>>>
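The same strings can also be read directly from the __doc__ attribute mentioned above, for example (continuing the session):
>>> mymodule.my_function.__doc__
"The function's docstring"
>>> mymodule.MyClass.__doc__
"The class's docstring"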
Tools using docstrings
cobra -doc (Cobra)
doctest (Python)
Epydoc (Python)
Pydoc (Python)
Sphinx (Python)
See also
Literate programming – alternative code commenting paradigm
Plain Old Documentation – Perl documentation
References
External links
Python Docstrings at Epydoc's SourceForge page
Documentation in GNU Emacs Lisp
Section from the doxygen documentation about Python docstrings
Programming constructs
Lisp (programming language)
Python (programming language)
Source code documentation formats
String (computer science) | Docstring | [
"Mathematics",
"Technology"
] | 747 | [
"Sequences and series",
"Computer science",
"Mathematical structures",
"String (computer science)"
] |
4,226,265 | https://en.wikipedia.org/wiki/Fluorine-18 | Fluorine-18 (18F, also called radiofluorine) is a fluorine radioisotope which is an important source of positrons. It has a mass of 18.0009380(6) u and its half-life is 109.771(20) minutes. It decays by positron emission 96.7% of the time and electron capture 3.3% of the time. Both modes of decay yield stable oxygen-18.
Natural occurrence
is a natural trace radioisotope produced by cosmic ray spallation of atmospheric argon as well as by reaction of protons with natural oxygen: 18O + p → 18F + n.
Synthesis
In the radiopharmaceutical industry, fluorine-18 is made using either a cyclotron or linear particle accelerator to bombard a target, usually of natural or enriched [18O]water with high energy protons (typically ~18 MeV). The fluorine produced is in the form of a water solution of [18F]fluoride, which is then used in a rapid chemical synthesis of various radio pharmaceuticals. The organic oxygen-18 pharmaceutical molecule is not made before the production of the radiopharmaceutical, as high energy protons destroy such molecules (radiolysis). Radiopharmaceuticals using fluorine must therefore be synthesized after the fluorine-18 has been produced.
History
The first published synthesis and report of the properties of fluorine-18 came in 1937 from Arthur H. Snell, who produced it by the nuclear reaction 20Ne(d,α)18F in the cyclotron laboratories of Ernest O. Lawrence.
Chemistry
Fluorine-18 is often substituted for a hydroxyl group in a radiotracer parent molecule, due to similar steric and electrostatic properties. This may however be problematic in certain applications due to possible changes in the molecule polarity.
Applications
Fluorine-18 is one of the early tracers used in positron emission tomography (PET), having been in use since the 1960s.
Its significance is due to both its short half-life and the emission of positrons when decaying.
Major medical uses of fluorine-18 include: positron emission tomography (PET) to image the brain and heart; imaging of the thyroid gland; use as a radiotracer to image bones and to find cancers that have metastasized from other locations in the body; and radiation therapy for treating internal tumors.
Tracers include sodium fluoride, which can be useful for skeletal imaging as it displays high and rapid bone uptake accompanied by very rapid blood clearance, resulting in a high bone-to-background ratio in a short time, and fluorodeoxyglucose (FDG), in which 18F substitutes for a hydroxyl group.
New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer. A Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a human protein, PSMA and non-immunogenic, and a small molecule that is positron-emitting (18F) and fluorescent for dual modality PET and fluorescence imaging of genome modified cells, e.g. cancer, CRISPR/Cas9, or CAR T-cells, in an entire mouse. The dual-modality small molecule targeting PSMA was tested in humans and found the location of primary and metastatic prostate cancer, fluorescence-guided removal of cancer, and detects single cancer cells in tissue margins.
References
Isotopes of fluorine
Medicinal radiochemistry
Positron emitters
Medical isotopes | Fluorine-18 | [
"Chemistry"
] | 785 | [
"Medicinal radiochemistry",
"Isotopes of fluorine",
"Isotopes",
"Medicinal chemistry",
"Chemicals in medicine",
"Medical isotopes"
] |
4,226,525 | https://en.wikipedia.org/wiki/Grid%20cell | A grid cell is a type of neuron within the entorhinal cortex that fires at regular intervals as an animal navigates an open area, allowing it to understand its position in space by storing and integrating information about location, distance, and direction. Grid cells have been found in many animals, including rats, mice, bats, monkeys, and humans.
Grid cells were discovered in 2005 by Edvard Moser, May-Britt Moser, and their students Torkel Hafting, Marianne Fyhn, and Sturla Molden at the Centre for the Biology of Memory (CBM) in Norway. They were awarded the 2014 Nobel Prize in Physiology or Medicine together with John O'Keefe for their discoveries of cells that constitute a positioning system in the brain. The arrangement of spatial firing fields, all at equal distances from their neighbors, led to a hypothesis that these cells encode a neural representation of Euclidean space. The discovery also suggested a mechanism for dynamic computation of self-position based on continuously updated information about position and direction.
To detect grid cell activity in a typical rat experiment, an electrode which can record single-neuron activity is implanted in the dorsomedial entorhinal cortex and collects recordings as the rat moves around freely in an open arena. The resulting data can be visualized by marking the rat's position on a map of the arena every time that neuron fires an action potential. These marks accumulate over time to form a set of small clusters, which in turn form the vertices of a grid of equilateral triangles. The regular triangle pattern distinguishes grid cells from other types of cells that show spatial firing. By contrast, if a place cell from the rat hippocampus is examined in the same way, then the marks will frequently only form one cluster (one "place field") in a given environment, and even when multiple clusters are seen, there is no perceptible regularity in their arrangement.
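A common idealisation of such a rate map in modelling work is a sum of three plane waves whose wave vectors are 60° apart, which peaks on a triangular lattice. The sketch below is only an illustrative model; the parameter names and values are assumptions made here, not experimental measurements:
import numpy as np

def grid_rate(x, y, spacing=0.3, orientation=0.0, phase=(0.0, 0.0)):
    """Idealised grid-cell rate map: three cosine gratings 60 degrees apart."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number giving the chosen field spacing
    rate = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3    # grating axes at 0, 60 and 120 degrees
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        rate = rate + np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return np.maximum(rate, 0.0)               # rectified: peaks form a triangular lattice

# evaluate over a 1 m x 1 m open arena on a 200 x 200 grid of positions
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 200), np.linspace(0.0, 1.0, 200))
rate_map = grid_rate(xs, ys, spacing=0.3)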
Background of discovery
In 1971 John O'Keefe and Jonathon Dostrovsky reported the discovery of place cells in the rat hippocampus—cells that fire action potentials when an animal passes through a specific small region of space, which is called the place field of the cell. This discovery, although controversial at first, led to a series of investigations that culminated in the 1978 publication of a book by O'Keefe and his colleague Lynn Nadel called The Hippocampus as a Cognitive Map (a phrase that also appeared in the title of the 1971 paper)—the book argued that the hippocampal neural network instantiates cognitive maps as hypothesized by the psychologist Edward C. Tolman. This theory aroused a great deal of interest, and motivated hundreds of experimental studies aimed at clarifying the role of the hippocampus in spatial memory and spatial navigation.
Because the entorhinal cortex provides by far the largest input to the hippocampus, it was clearly important to understand the spatial firing properties of entorhinal neurons. The earliest studies, such as Quirk et al. (1992), described neurons in the entorhinal cortex as having relatively large and fuzzy place fields. But the Mosers thought it was possible that a different result would be obtained if recordings were made from a different part of the entorhinal cortex. The entorhinal cortex is a strip of tissue running along the back edge of the rat brain from the ventral to the dorsal sides. Anatomical studies had shown that different sectors of the entorhinal cortex project to different levels of the hippocampus: the dorsal end of the EC projects to the dorsal hippocampus, the ventral end to the ventral hippocampus. This was relevant because several studies had shown that place cells in the dorsal hippocampus have considerably sharper place fields than cells from more ventral levels. But every study of entorhinal spatial activity before 2004 had made use of electrodes implanted near the ventral end of the EC. Accordingly, together with Marianne Fyhn, Sturla Molden, and Menno Witter, the Mosers set out to examine spatial firing from the different dorsal-to-ventral levels of the entorhinal cortex. They found that in the dorsal part of medial entorhinal cortex (MEC), cells had sharply defined place fields like in the hippocampus but the cells fired at multiple locations. The arrangement of the firing fields showed hints of regularity, but the size of the environment was too small for spatial periodicity to be visible in this study.
The next set of experiments, reported in 2005, made use of a larger environment, which led to the recognition that the cells were actually firing in a hexagonal grid pattern. The study showed that cells at similar dorsal-to-ventral MEC levels had similar grid spacing and grid orientation, but that the phase of the grid (the offset of the grid vertices relative to the x and y axes) appeared to be randomly distributed between cells. The periodic firing pattern was expressed independently of the configuration of landmarks, in darkness as well as in the presence of visible landmarks and independently of changes in the animal’s speed and direction, leading the authors to suggest that grid cells expressed a path-integration-dependent dynamic computation of the animal’s location.
For their discovery of grid cells, May-Britt Moser, and Edvard Moser were awarded the Nobel Prize in Physiology or Medicine in 2014, alongside John O'Keefe.
Properties
Grid cells are neurons that fire when a freely moving animal traverses a set of small regions (firing fields) which are roughly equal in size and arranged in a periodic triangular array that covers the entire available environment. Cells with this firing pattern have been found in all layers of the dorsocaudal medial entorhinal cortex (dMEC), but cells in different layers tend to differ in other respects. Layer II contains the largest density of pure grid cells, in the sense that they fire equally regardless of the direction in which an animal traverses a grid location. Grid cells from deeper layers are intermingled with conjunctive cells and head direction cells (i.e. in layers III, V and VI there are cells with a grid-like pattern that fire only when the animal is facing a particular direction).
Grid cells that lie next to one another (i.e., cells recorded from the same electrode) usually show the same grid spacing and orientation, but their grid vertices are displaced from one another by apparently random offsets. But cells recorded from separate electrodes at a distance from one another typically show different grid spacings. Cells located more ventrally (farther from the dorsal border of the MEC) generally have larger firing fields at each grid vertex, and correspondingly greater spacing between the grid vertices. The total range of grid spacings is not well established: the initial report described a roughly twofold range of grid spacings (from 39 cm to 73 cm) across the dorsalmost part (upper 25%) of the MEC, but there are indications of considerably larger grid scales in more ventral zones. Brun et al. (2008) recorded grid cells from multiple levels in rats running along an 18-meter track, and found that the grid spacing expanded from about 25 cm in their dorsalmost sites to about 3 m at the ventralmost sites. These recordings only extended 3/4 of the way to the ventral tip, so it is possible that even larger grids exist. Such multi-scale representations have been shown to be information theoretically desirable.
Grid-cell activity does not require visual input, since grid patterns remain unchanged when all the lights in an environment are turned off. But when visual cues are present they exert strong control over the alignment of the grids: rotating a cue card on the wall of a cylinder causes grid patterns to rotate by the same amount. Grid patterns appear on the first entrance of an animal into a novel environment, and then usually remain stable. When an animal is moved into a completely different environment, grid cells maintain their grid spacing, and the grids of neighboring cells maintain their relative offsets.
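As a hedged illustration (a common idealization from the modeling literature, not a claim about this article's sources), a grid cell's firing-rate map is often approximated as a sum of three plane waves whose wave vectors are rotated 60° from one another; the peaks of such a sum form a triangular lattice:

$$ G(\mathbf{x}) \;\propto\; \sum_{i=1}^{3} \cos\!\big(\mathbf{k}_i \cdot (\mathbf{x} - \mathbf{x}_0)\big), \qquad \mathbf{k}_i = \frac{4\pi}{\sqrt{3}\,\lambda}\,\big(\cos\theta_i,\ \sin\theta_i\big), \quad \theta_i = \theta + 60^{\circ}(i-1), $$

where λ is the grid spacing, θ the grid orientation and $\mathbf{x}_0$ the spatial phase; these three parameters correspond directly to the spacing, orientation and offset described above.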
Interactions with hippocampal place cells
When a rat is moved to a different environment, the spatial activity patterns of hippocampal place cells usually show "complete remapping"—that is, the pattern of place fields reorganizes in a way that bears no detectable resemblance to the pattern in the original environment. If the features of an environment are altered less radically, however, the place field pattern may show a lesser degree of change, referred to as "rate remapping", in which many cells alter their firing rates but the majority of cells retain place fields in the same locations as before. This was examined using simultaneous recordings of hippocampal and entorhinal cells, and found that in situations where the hippocampus shows rate remapping, grid cells show unaltered firing patterns, whereas when the hippocampus shows complete remapping, grid cell firing patterns show unpredictable shifts and rotations.
Theta rhythmicity
Neural activity in nearly every part of the hippocampal system is modulated by the hippocampal theta rhythm, which has a frequency range of about 6–9 Hz in rats. The entorhinal cortex is no exception: like the hippocampus, it receives cholinergic and GABAergic input from the medial septal area, the central controller of theta. Grid cells, like hippocampal place cells, show strong theta modulation. Grid cells from layer II of the MEC also resemble hippocampal place cells in that they show phase precession—that is, their spike activity advances from late to early phases of the theta cycle as an animal passes through a grid vertex. A recent model of grid cell activity explained this phase precession by assuming the presence of 1-dimensional attractor network composed of stellate cells. Most grid cells from layer III do not precess, but their spike activity is largely confined to half of the theta cycle. The grid cell phase precession is not derived from the hippocampus, because it continues to appear in animals whose hippocampus has been inactivated by an agonist of GABA.
Possible functions
Many species of mammals can keep track of spatial location even in the absence of visual, auditory, olfactory, or tactile cues, by integrating their movements—the ability to do this is referred to in the literature as path integration. A number of theoretical models have explored mechanisms by which path integration could be performed by neural networks. In most models, such as those of Samsonovich and McNaughton (1997) or Burak and Fiete (2009), the principal ingredients are (1) an internal representation of position, (2) internal representations of the speed and direction of movement, and (3) a mechanism for shifting the encoded position by the right amount when the animal moves. Because cells in the MEC encode information about position (grid cells) and movement (head direction cells and conjunctive position-by-direction cells), this area is currently viewed as the most promising candidate for the place in the brain where path integration occurs. However, the question remains unresolved, as in humans the entorhinal cortex does not appear to be required for path integration. Burak and Fiete (2009) showed that a computational simulation of the grid cell system was capable of performing path integration to a high level of accuracy. However, more recent theoretical work has suggested that grid cells might perform a more general denoising process not necessarily related to spatial processing.
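Stated as a formula (a restatement of the definition just given rather than new material), path integration amounts to updating an internal position estimate from the self-motion signal:

$$ \hat{\mathbf{x}}(t) = \hat{\mathbf{x}}(0) + \int_{0}^{t} \mathbf{v}(\tau)\,\mathrm{d}\tau, $$

where $\mathbf{v}$ is the animal's velocity as encoded by speed and head-direction signals; the accumulated error of this integral is what external landmarks are thought to correct.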
Hafting et al. (2005) suggested that a place code is computed in the entorhinal cortex and fed into the hippocampus, which may make the associations between place and events that are needed for the formation of memories.
In contrast to a hippocampal place cell, a grid cell has multiple firing fields, with regular spacing, which tessellate the environment in a hexagonal pattern. The unique properties of grid cells are as follows:
Grid cells have firing fields dispersed over the entire environment (in contrast to place fields which are restricted to certain specific regions of the environment)
The firing fields are organized into a hexagonal lattice
Firing fields are generally equally spaced apart, such that the distance from one firing field to all six adjacent firing fields is approximately the same (though when an environment is resized, the field spacing may shrink or expand differently in different directions; Barry et al. 2007)
Firing fields are equally positioned, such that the six neighboring fields are located at approximately 60 degree increments
The grid cells are anchored to external landmarks, but persist in darkness, suggesting that grid cells may be part of a self-motion–based map of the spatial environment.
A convergent evolution analogy has been argued to exist between grid cells and the decomposition of images in JPEG compression into superimposed discrete cosine basis functions. According to this interpretation, the joint activity of all grid cells provides an optimal compression of an animal's topography. The result of this is that "grid cells and even same scale modules of grid cells do not function as isolated entities, but rather integrate their information across the many grid scales in order to form a unique compressed representation of the animal’s spatial location."
See also
Boundary cell, discovered in 2008.
List of distinct cell types in the adult human body
References
External links
Mosers Group
Movie of a grid cell
Neurons
Perception
Neurology
Brain
Spatial cognition | Grid cell | [
"Physics"
] | 2,790 | [
"Spacetime",
"Space",
"Spatial cognition"
] |
4,226,856 | https://en.wikipedia.org/wiki/Amadeus%20IT%20Group | Amadeus IT Group, S.A. is a major Spanish multinational technology company that provides software for the global travel and tourism industry. It is the world's leading provider of travel technology, focusing on developing software for airlines, hotels, travel agencies, and other travel-related businesses.
The company is structured around two areas: its global distribution system and its Information Technology business. Amadeus provides search, pricing, booking, ticketing and other processing services in real-time to travel providers and travel agencies through its Amadeus CRS distribution business area. It also offers computer software that automates processes such as reservations, inventory management software and departure control systems. It services customers including airlines, hotels, tour operators, insurers, car rental and railway companies, ferry and cruise lines, travel agencies and individual travellers directly.
Amadeus processed 945 million billable travel transactions in 2011.
The parent company of Amadeus IT Group, holding over 99.7% of the firm, is Amadeus IT Holding S.A. It was listed on the Spanish stock exchanges on 29 April 2010.
Amadeus has central sites in Madrid, Spain (corporate headquarters and marketing), Sophia Antipolis, France (product development), London, UK (product development), Breda, Netherlands (development), Erding, Germany (Data center) and Bangalore, India (product development) as well as regional offices in Bangkok, Buenos Aires, Dubai, Miami, Istanbul, Singapore, and Sydney. At market level, Amadeus maintains customer operations through 173 local Amadeus Commercial Organisations (ACOs) covering 195 countries. The Amadeus group employs 14,200 employees worldwide, and listed in Forbes' list of "The World's Largest Public Companies" as No. 985.
History
Amadeus was originally created as a neutral global distribution system (GDS) by Air France, Iberia, Lufthansa and SAS in 1987 in order to connect providers' content with travel agencies and consumers in real time. The creation of Amadeus was intended to offer a European alternative to Sabre, an American GDS. The first Amadeus system was built from core reservation system code coming from System One, an American GDS that competed with Sabre but went bankrupt, and a copy of the Air France pricing engine. These systems ran under IBM TPF and Unisys, respectively. At first, the systems were dedicated to airline reservations and centered on the PNR (Passenger Name Record), the passenger's travel file. Gradually the PNR was opened up to additional travel industries (hotels, rail, cars, cruises, ferries, insurance, etc.).
Initially a private partnership, Amadeus went public in October 1999, becoming listed on the Paris, Frankfurt and Madrid stock exchanges. The firm diversified its operations with information technologies (IT) to deliver services beyond sales and reservation functionalities, centered on streamlining the operational and distribution requirements of its customer base.
Since 2004, the company has invested €1 billion in R&D with its technology increasingly using open systems which provide clients with more flexibility and features. , 85% of its software portfolio was open system based and it expects by the end of 2016 to have fully migrated away from mainframe-based TPF software.
In 2005, Amadeus was delisted from the Paris, Frankfurt and Madrid stock exchanges when BC Partners and Cinven bought their stake from three of the four founding airlines and the rest of the capital floated from institutional and minority shareholders. The transition from distribution system to technology provider was reflected by the change in its corporate name to Amadeus IT Group in 2006. In 2009, Amadeus invested about €257 million in R&D. Amadeus was listed on the Spanish Stock Exchanges on 29 April 2010.
Amadeus has acquired:
2000: Vacation.com, the largest US marketing network for leisure travel
2001: E-Travel, Inc., a supplier of hosted technology products for corporate travel
2002: SMART AB, a travel distribution company in Northern Europe
2003: Airline Automation (AAI), a robotic PNR processing company. In 2006, its name was changed to Amadeus Revenue Integrity.
2004-2008: Opodo, a European travel website, which it sold in February 2011 for €450 million
2005: Optims, a European hotel software company
2006: TravelTainment, a leisure content provider
2008: Onerail, a rail IT software supplier
2013: Travel Audience GmbH, an online advertising firm
2014: Newmarket International, an IT provider for hotels
2014: UFIS, an airport IT provider
2014: i:FAO, a corporate travel buying software system
2015: iTesso, a Property Management System provider for hotels
2015: AirIT, property and revenue management software for airports
2017: Navitaire, a software provider for rail and low-cost airlines
2017: Pyton, an online booking engine supplier
2018: TravelClick, a provider of cloud-based services for the hotel industry, for $1.52 billion
2022: Kambr, an airline revenue management solutions provider
2024: Voxel, electronic invoice and B2B payment solutions provider
In September 2014, Air France sold a 3% stake in the firm for $438 million. In November 2017, Amadeus invested in global mapping tech provider AVUXI.
Data centre
Amadeus has its own data centre in Erding, Germany, two strategic operation centres in Miami and Sydney and local competency centres in Germany, Thailand, India, Poland, Colombia, Ukraine, and the United Kingdom.
Vulnerability discovered
On January 15, 2019, the hacker and activist Noam Rotem discovered a major vulnerability affecting nearly half of all airlines worldwide. While booking a flight with Israeli national carrier El Al, he came across a significant security breach that allows anyone to access and change private information on flight bookings. The same breach was then discovered to include 44% of the international carriers market, potentially affecting tens of millions of travelers.
Operations
Distribution
Amadeus CRS is the largest GDS provider in the worldwide travel and tourism industry, with an estimated market share of 37% in 2009. As of December 2010, over travel agencies worldwide use the Amadeus system and airline sales offices use it as their internal sales and reservations system. Amadeus gives access to bookable content from 435 airlines (including 60 low-cost carriers), 29 car rental companies (representing car rental locations), 51 cruise lines and ferry operators, 280 hotel chains and hotels, 200 tour operators, 103 rail operators and 116 travel insurance companies.
Information Technology
Amadeus Altéa Customer Management System (CMS) is a software suite for airlines' sales and reservations, inventory management and departure control systems. Using it, airlines outsource their IT operations onto a community platform which allows them to share information with both airline alliance and codeshare agreement partners.
It consists of four main modules: Altéa Reservation, Altéa Inventory, Altéa Departure Control; and Altéa e-commerce.
In 2009, 238 million passengers were boarded by airlines using the system. It is developing similar systems for rail companies, hotel chains, airport operators and aircraft ground handling companies.
Contribution to open source projects
According to a May 2015 investigation, Amadeus has contributed to the Docker open source software project.
Business model and other business lines
The business model of Amadeus is booking fee or transaction based, which means that a fee is taken for each confirmed net booking made in the Amadeus CRS.
In late 1990s, a business division specialized in e-commerce was created.
In 2000, Amadeus was awarded the development of two new operational applications for British Airways and Qantas: the inventory management and the departure control systems. These products were outside of the core expertise domain of Amadeus and were built with the expertise of the airlines.
In March 2015, Amadeus announced that Blacklane, a Berlin-based professional driver service available worldwide, would become their first fully integrated taxi and transfer service provider.
References
Software companies of Spain
Information technology companies of Spain
Hospitality companies established in 1987
Software companies established in 1987
Travel technology
Companies based in Madrid
IBEX 35
Companies listed on the Madrid Stock Exchange
Spanish brands
Spanish companies established in 1987
1999 initial public offerings
Computer reservation systems | Amadeus IT Group | [
"Technology"
] | 1,632 | [
"Computer reservation systems",
"Computer systems"
] |
4,226,883 | https://en.wikipedia.org/wiki/Photopolymer | A photopolymer or light-activated resin is a polymer that changes its properties when exposed to light, often in the ultraviolet or visible region of the electromagnetic spectrum. These changes are often manifested structurally, for example hardening of the material occurs as a result of cross-linking when exposed to light. An example is shown below depicting a mixture of monomers, oligomers, and photoinitiators that conform into a hardened polymeric material through a process called curing.
A wide variety of technologically useful applications rely on photopolymers; for example, some enamels and varnishes depend on photopolymer formulation for proper hardening upon exposure to light. In some instances, an enamel can cure in a fraction of a second when exposed to light, as opposed to thermally cured enamels which can require half an hour or longer. Curable materials are widely used for medical, printing, and photoresist technologies.
Changes in structural and chemical properties can be induced internally by chromophores that the polymer subunit already possesses, or externally by addition of photosensitive molecules. Typically a photopolymer consists of a mixture of multifunctional monomers and oligomers in order to achieve the desired physical properties, and therefore a wide variety of monomers and oligomers have been developed that can polymerize in the presence of light either through internal or external initiation. Photopolymers undergo a process called curing, where oligomers are cross-linked upon exposure to light, forming what is known as a network polymer. The result of photo-curing is the formation of a thermoset network of polymers. One of the advantages of photo-curing is that it can be done selectively using high energy light sources, for example lasers, however, most systems are not readily activated by light, and in this case a photoinitiator is required. Photoinitiators are compounds that upon radiation of light decompose into reactive species that activate polymerization of specific functional groups on the oligomers. An example of a mixture that undergoes cross-linking when exposed to light is shown below. The mixture consists of monomeric styrene and oligomeric acrylates.
Most commonly, photopolymerized systems are typically cured through UV radiation, since ultraviolet light is more energetic. However, the development of dye-based photoinitiator systems have allowed for the use of visible light, having the potential advantages of being simpler and safer to handle. UV curing in industrial processes has greatly expanded over the past several decades. Many traditional thermally cured and solvent-based technologies can be replaced by photopolymerization technologies. The advantages of photopolymerization over thermally cured polymerization include higher rates of polymerization and environmental benefits from elimination of volatile organic solvents.
There are two general routes for photoinitiation: free radical and ionic. The general process involves doping a batch of neat polymer with small amounts of photoinitiator, followed by selective radiation of light, resulting in a highly cross-linked product. Many of these reactions do not require solvent which eliminates termination path via reaction of initiators with solvent and impurities, in addition to decreasing the overall cost.
Ionic mechanism
In ionic curing processes, an ionic photoinitiator is used to activate the functional group of the oligomers that are going to participate in cross-linking. Typically photopolymerization is a very selective process and it is crucial that the polymerization takes place only where it is desired to do so. In order to satisfy this, liquid neat oligomer can be doped with either anionic or cationic photoinitiators that will initiate polymerization only when radiated with light. Monomers, or functional groups, employed in cationic photopolymerization include: styrenic compounds, vinyl ethers, N-vinyl carbazoles, lactones, lactams, cyclic ethers, cyclic acetals, and cyclic siloxanes. The majority of ionic photoinitiators fall under the cationic class; anionic photoinitiators are considerably less investigated. There are several classes of cationic initiators, including onium salts, organometallic compounds and pyridinium salts. As mentioned earlier, one of the drawbacks of the photoinitiators used for photopolymerization is that they tend to absorb in the short UV region. Photosensitizers, or chromophores, that absorb in a much longer wavelength region can be employed to excite the photoinitiators through an energy transfer. Other modifications to these types of systems are free radical assisted cationic polymerization. In this case, a free radical is formed from another species in solution that reacts with the photoinitiator in order to start polymerization. Although there are a diverse group of compounds activated by cationic photoinitiators, the compounds that find most industrial uses contain epoxides, oxetanes, and vinyl ethers. One of the advantages to using cationic photopolymerization is that once the polymerization has begun it is no longer sensitive to oxygen and does not require an inert atmosphere to perform well.
Photolysis (in the reaction scheme, M denotes the monomer)
Cationic photoinitiators
The proposed mechanism for cationic photopolymerization begins with the photoexcitation of the initiator. Once excited, both homolytic cleavage and dissociation of a counter anion take place, generating a cationic radical (R), an aryl radical (R') and an unaltered counter anion (X). The abstraction of a Lewis acid by the cationic radical produces a very weakly bound hydrogen and a free radical. The acid is further deprotonated by the anion (X) in solution, generating a Lewis acid with the starting anion (X) as a counter ion. It is thought that the acidic proton generated is what ultimately initiates the polymerization.
Onium salts
Since their discovery in the 1970s aryl onium salts, more specifically iodonium and sulfonium salts, have received much attention and have found many industrial applications. Other less common onium salts include ammonium and phosphonium salts.
A typical onium compound used as a photoinitiator contains two or three arene groups for iodonium and sulfonium respectively. Onium salts generally absorb short wavelength light in the UV region, spanning 225–300 nm. One characteristic that is crucial to the performance of the onium photoinitiators is that the counter anion is non-nucleophilic. Since the Brønsted acid generated during the initiation step is considered the active initiator for polymerization, there is a termination route where the counter ion of the acid could act as the nucleophile instead of the functional groups on the oligomer. Common counter anions include non-nucleophilic fluorinated anions such as BF4−, PF6−, and SbF6−. There is an indirect relationship between the size of the counter ion and percent conversion.
Organometallic
Although less common, transition metal complexes can act as cationic photoinitiators as well. In general, the mechanism is more simplistic than the onium ions previously described. Most photoinitiators of this class consist of a metal salt with a non-nucleophilic counter anion. For example, ferrocinium salts have received much attention for commercial applications. The absorption band for ferrocinium salt derivatives are in a much longer, and sometimes visible, region. Upon radiation the metal center loses one or more ligands and these are replaced by functional groups that begin the polymerization. One of the drawbacks of this method is a greater sensitivity to oxygen. There are also several organometallic anionic photoinitiators which react through a similar mechanism. For the anionic case, excitation of a metal center is followed by either heterolytic bond cleavage or electron transfer generating the active anionic initiator.
Pyridinium salts
Generally pyridinium photoinitiators are N-substituted pyridine derivatives, with a positive charge placed on the nitrogen. The counter ion is in most cases a non-nucleophilic anion. Upon radiation, homolytic bond cleavage takes place generating a pyridinium cationic radical and a neutral free radical. In most cases, a hydrogen atom is abstracted from the oligomer by the pyridinium radical. The free radical generated from the hydrogen abstraction is then terminated by the free radical in solution. This results in a strong pyridinium acid that can initiate polymerization.
Free radical mechanism
Nowadays, most radical photopolymerization pathways are based on addition reactions of carbon double bonds in acrylates or methacrylates, and these pathways are widely employed in photolithography and stereolithography.
Before the free radical nature of certain polymerizations was determined, certain monomers were observed to polymerize when exposed to light. The first to demonstrate the photoinduced free radical chain reaction of vinyl bromide was Ivan Ostromislensky, a Russian chemist who also studied the polymerization of synthetic rubber. Subsequently, many compounds were found to become dissociated by light and found immediate use as photoinitiators in the polymerization industry.
In the free radical mechanism of radiation curable systems, light absorbed by a photoinitiator generates free-radicals which induce cross-linking reactions of a mixture of functionalized oligomers and monomers to generate the cured film
Photocurable materials that form through the free-radical mechanism undergo chain-growth polymerization, which includes three basic steps: initiation, chain propagation, and chain termination. The three steps are depicted in the scheme below, where R• represents the radical that forms upon interaction with radiation during initiation, and M is a monomer. The active monomer that is formed is then propagated to create growing polymeric chain radicals. In photocurable materials the propagation step involves reactions of the chain radicals with reactive double bonds of the prepolymers or oligomers. The termination reaction usually proceeds through combination, in which two chain radicals are joined, or through disproportionation, which occurs when an atom (typically hydrogen) is transferred from one radical chain to another resulting in two polymeric chains.
Initiation

$$\mathrm{R^\bullet + M \longrightarrow RM^\bullet}$$

Propagation

$$\mathrm{RM^\bullet + M_{\mathit{n}} \longrightarrow RM^\bullet_{\mathit{n}+1}}$$

Termination

combination:
$$\mathrm{RM^\bullet_{\mathit{n}} + {}^\bullet M_{\mathit{m}}R \longrightarrow RM_{\mathit{n}}M_{\mathit{m}}R}$$

disproportionation:
$$\mathrm{RM^\bullet_{\mathit{n}} + {}^\bullet M_{\mathit{m}}R \longrightarrow RM_{\mathit{n}} + M_{\mathit{m}}R}$$
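As a hedged aside (standard free-radical polymerization kinetics, not taken from this article), applying the steady-state assumption that radicals are consumed by termination as fast as they are generated gives the textbook rate law for this scheme:

$$ R_p = k_p\,[\mathrm{M}]\left(\frac{R_i}{2k_t}\right)^{1/2}, $$

where $k_p$ and $k_t$ are the propagation and termination rate constants, [M] is the monomer concentration and $R_i$ is the rate of radical generation; because $R_i$ is proportional to the absorbed light intensity, the cure rate of a radical photopolymer scales roughly with the square root of that intensity.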
Most composites that cure through radical chain growth contain a diverse mixture of oligomers and monomers with functionality that can range from 2-8 and molecular weights from 500 to 3000. In general, monomers with higher functionality result in a tighter crosslinking density of the finished material. Typically these oligomers and monomers alone do not absorb sufficient energy for the commercial light sources used, therefore photoinitiators are included.
Free-radical photoinitiators
There are two types of free-radical photoinitators: A two component system where the radical is generated through abstraction of a hydrogen atom from a donor compound (also called co-initiator), and a one-component system where two radicals are generated by cleavage. Examples of each type of free-radical photoinitiator is shown below.
Benzophenone, xanthones, and quinones are examples of abstraction type photoinitiators, with common donor compounds being aliphatic amines. The resulting R• species from the donor compound becomes the initiator for the free radical polymerization process, while the radical resulting from the starting photoinitiator (benzophenone in the example shown above) is typically unreactive.
Benzoin ethers, Acetophenones, Benzoyl Oximes, and Acylphosphines are some examples of cleavage-type photoinitiators. Cleavage readily occurs for the species, giving two radicals upon absorption of light, and both radicals generated can typically initiate polymerization. Cleavage type photoinitiators do not require a co-initiator, such as aliphatic amines. This can be beneficial since amines are also effective chain transfer species. Chain-transfer processes reduce the chain length and ultimately the crosslink density of the resulting film.
Oligomers and monomers
The properties of a photocured material, such as flexibility, adhesion, and chemical resistance, are provided by the functionalized oligomers present in the photocurable composite. Oligomers are typically epoxides, urethanes, polyethers, or polyesters, each of which provide specific properties to the resulting material. Each of these oligomers are typically functionalized by an acrylate. An example shown below is an epoxy oligomer that has been functionalized by acrylic acid. Acrylated epoxies are useful as coatings on metallic substrates and result in glossy hard coatings. Acrylated urethane oligomers are typically abrasion resistant, tough, and flexible, making ideal coatings for floors, paper, printing plates, and packaging materials. Acrylated polyethers and polyesters result in very hard solvent resistant films, however, polyethers are prone to UV degradation and therefore are rarely used in UV curable material. Often formulations are composed of several types of oligomers to achieve the desirable properties for a material.
The monomers used in radiation curable systems help control the speed of cure, crosslink density, final surface properties of the film, and viscosity of the resin. Examples of monomers include styrene, N-Vinylpyrrolidone, and acrylates. Styrene is a low cost monomer and provides a fast cure, N-vinylpyrrolidone results in a material that is highly flexible when cured and has low toxicity, and acrylates are highly reactive, allowing for rapid cure rates, and are highly versatile with monomer functionality ranging from monofunctional to tetrafunctional. Like oligomers, several types of monomers can be employed to achieve the desired properties of the final material.
Applications
Photopolymerization has wide-ranging applications, from imaging to biomedical uses.
Dentistry
Dentistry is one field in which free radical photopolymers have found wide usage as adhesives, sealant composites, and protective coatings. These dental composites are based on a camphorquinone photoinitiator and a matrix containing methacrylate oligomers with inorganic fillers such as silicon dioxide. Resin cements are utilized in luting cast ceramic, full porcelain, and veneer restorations that are thin or translucent, which permits visible light penetration in order to polymerize the cement. Light-activated cements may be radiolucent and are usually provided in various shades since they are utilized in esthetically demanding situations.
Conventional halogen bulbs, argon lasers and xenon arc lights are currently used in clinical practice. A new technological approach for curing light-activated oral biomaterials using a light curing unit (LCU) is based on blue light-emitting diodes (LED). The main benefits of LED LCU technology are the long lifetime of LED LCUs (several thousand hours), no need for filters or a cooling fan, and virtually no decrease of light output over the lifetime of the unit, resulting in consistent and high quality curing. Simple depth of cure experiments on dental composites cured with LED technology show promising results.
Medical uses
Photocurable adhesives are also used in the production of catheters, hearing aids, surgical masks, medical filters, and blood analysis sensors. Photopolymers have also been explored for uses in drug delivery, tissue engineering and cell encapsulation systems. Photopolymerization processes for these applications are being developed to be carried out in vivo or ex vivo. In vivo photopolymerization would provide the advantages of production and implantation with minimal invasive surgery. Ex vivo photopolymerization would allow for fabrication of complex matrices and versatility of formulation. Although photopolymers show promise for a wide range of new biomedical applications, biocompatibility with photopolymeric materials must still be addressed and developed.
3D printing
Stereolithography, digital imaging, and 3D inkjet printing are just a few 3D printing technologies that make use of photopolymerization pathways. 3D printing usually utilizes CAD-CAM software, which creates a 3D computer model to be translated into a 3D plastic object. The image is cut in slices; each slice is then reconstructed through radiation curing of the liquid polymer, converting the image into a solid object. Photopolymers used in 3D imaging processes require sufficient cross-linking and should ideally be designed to have minimal volume shrinkage upon polymerization in order to avoid distortion of the solid object. Common monomers utilized for 3D imaging include multifunctional acrylates and methacrylates, often combined with a non-polymeric component in order to reduce volume shrinkage. A competing composite mixture of epoxide resins with cationic photoinitiators is becoming increasingly used since their volume shrinkage upon ring-opening polymerization is significantly below those of acrylates and methacrylates. Free-radical and cationic polymerizations composed of both epoxide and acrylate monomers have also been employed, gaining the high rate of polymerization from the acrylic monomer, and better mechanical properties from the epoxy matrix.
Photoresists
Photoresists are coatings, or oligomers, that are deposited on a surface and are designed to change properties upon irradiation of light. These changes either polymerize the liquid oligomers into insoluble cross-linked network polymers or decompose the already solid polymers into liquid products. Polymers that form networks during photopolymerization are referred to as negative resist. Conversely, polymers that decompose during photopolymerization are referred to as positive resists. Both positive and negative resists have found many applications including the design and production of micro-fabricated chips. The ability to pattern the resist using a focused light source has driven the field of photolithography.
Negative resists
As mentioned, negative resists are photopolymers that become insoluble upon exposure to radiation. They have found a variety of commercial applications, especially in the area of designing and printing small chips for electronics. A characteristic found in most negative tone resists is the presence of multifunctional branches on the polymers used. Radiation of the polymers in the presence of an initiator results in the formation of a chemically resistant network polymer. A common functional group used in negative resists is the epoxy group. An example of a widely used polymer of this class is SU-8. SU-8 was one of the first polymers used in this field, and found applications in wire board printing. In the presence of a cationic photoinitiator, the SU-8 photopolymer forms networks with other polymers in solution.
SU-8 is an example of an intramolecular photopolymerization forming a matrix of cross-linked material. Negative resists can also be made using co-polymerization. In the event that two different monomers, or oligomers, are in solution with multiple functionalities, it is possible for the two to polymerize and form a less soluble polymer.
Manufacturers also use light curing systems in OEM assembly applications such as specialty electronics or medical device applications.
Positive resists
Exposure of a positive resist to radiation changes the chemical structure such that it becomes a liquid or more soluble. These changes in chemical structure are often rooted in the cleavage of specific linkers in the polymer. Once irradiated, the "decomposed" polymers can be washed away using a developer solvent leaving behind the polymer that was not exposed to light. This type of technology allows the production of very fine stencils for applications such as microelectronics. In order to have these types of qualities, positive resists utilize polymers with labile linkers in their back bone that can be cleaved upon irradiation, or use a photo-generated acid to hydrolyze bonds in the polymer. A polymer that decomposes upon irradiation to a liquid or more soluble product is referred to as a positive tone resist. Common functional groups that can be hydrolyzed by a photo-generated acid catalyst include polycarbonates and polyesters.
Fine printing
Photopolymers can be used to generate printing plates, which are then pressed onto paper-like metal type. This is often used in modern fine printing to achieve the effect of embossing (or the more subtly three-dimensional effect of letterpress printing) from designs created on a computer without needing to engrave designs into metal or cast metal type. It is often used for business cards.
Repairing leaks
Industrial facilities are utilizing light-activated resin as a sealant for leaks and cracks. Some light-activated resins have unique properties that make them ideal as a pipe repair product. These resins cure rapidly on any wet or dry surface.
Fishing
Light-activated resins recently gained a foothold with fly tiers as a way to create custom flies in a short period of time, with very little clean up involved.
Floor refinishing
Light-activated resins have found a place in floor refinishing applications, offering an instant return to service not available with any other chemical due to the need to cure at ambient temperatures. Because of application constraints, these coatings are exclusively UV cured with portable equipment containing high intensity discharge lamps. Such UV coatings are now commercially available for a variety of substrates, such as wood, vinyl composition tile and concrete, replacing traditional polyurethanes for wood refinishing and low durability acrylics for VCT.
Environment Pollution
Washing the polymer plates after they have been exposed to ultra-violet light may result in monomers entering the sewer system, eventually adding to the plastic content of the oceans. Current water purification installations are not able to remove monomer molecules from sewer water. Some monomers, such as styrene, are toxic or carcinogenic.
References
Polymers
Photochemistry
Adhesives | Photopolymer | [
"Chemistry",
"Materials_science"
] | 4,698 | [
"Polymers",
"nan",
"Polymer chemistry"
] |
4,227,415 | https://en.wikipedia.org/wiki/Costume%20Designers%20Guild | The Costume Designers Guild (CDG), is a union of professional costume designers, assistant designers, and illustrators working in film, television, commercials and other media. Founded in 1953, the CDG comprises over 1,200 members as of 2023. As the Local 892 of the International Alliance of Theatrical Stage Employees (IATSE), the union protects member’s wages and working conditions through collective bargaining.
The CDG has published a quarterly publication, The Costume Designer Magazine, since 2005. The Costume Designers Guild Awards recognizes excellence in costume design in motion pictures, television, and commercials, and other media.
Costume Designers Guild Awards
Award categories
Film:
Excellence in Contemporary Film
Excellence in Period Film
Excellence in Sci-Fi/Fantasy Film
Television:
Excellence in Contemporary Television
Excellence in Period Television
Excellence in Sci-Fi/Fantasy Television
Excellence in Short Form Design
Special Awards:
Career Achievement Award
Additional Awards:
Spotlight Award
Distinguished Collaborator Award
Discontinued categories:
Excellence in Period/Fantasy Film (awarded 1999–2004)
Excellence in Television Movie or Miniseries (awarded 2005–2014)
List of winners
Excellence in Costume Design for a Contemporary Film
This award has been presented at each of the annual awards.
Excellence in Costume Design for a Period Film
This award has been presented at each annual awards. The awards from 1999 to 2004 were for Period and Fantasy films combined.
Excellence in Costume Design for a Fantasy Film
This award was a part of the Excellence in Costume Design for a Period Film until 2005.
Excellence in Costume Design for a Contemporary Television Series
Excellence in Costume Design for a Period Television Series
Excellence in Costume Design for a Fantasy Television Series
Best Costume Design – Period or Fantasy TV Series
This award was presented at each annual awards from 2000–14, before being split into Period Television Series and Fantasy Television Series in 2015.
Excellence in Short Form Design
This award has been presented at each annual awards from 2003–Present.
Best Costume Design – TV Film or Miniseries
This award was first presented at the 8th annual awards, for 2005 filmmaking.
Hall of Fame Award
See also
26th Annual Awards 2024
25th Annual Awards 2023
24th Annual Awards 2022
23rd Annual Awards 2021
22nd Annual Awards 2020
21st Annual Awards 2019
20th Annual Awards 2018
19th Annual Awards 2017
18th Annual Awards 2016
17th Annual Awards 2015
16th Annual Awards 2014
15th Annual Awards 2013
14th Annual Awards 2012
13th Annual Awards 2011
12th Annual Awards 2010
11th Annual Awards 2009
10th Annual Awards 2008
9th Annual Awards 2007
8th Annual Awards 2006
7th Annual Awards 2005
6th Annual Awards 2004
5th Annual Awards 2003
4th Annual Awards 2002
3rd Annual Awards 2001
2nd Annual Awards 2000
1st Annual Awards 1999
References
External links
Official website
Costume Designers Guild collection, Margaret Herrick Library, Academy of Motion Picture Arts and Sciences
1953 establishments in California
Costume design
International Alliance of Theatrical Stage Employees
Trade unions established in 1953
Guilds in the United States | Costume Designers Guild | [
"Engineering"
] | 559 | [
"Costume design",
"Design"
] |
4,227,970 | https://en.wikipedia.org/wiki/Strong%20monad | In category theory, a strong monad is a monad on a monoidal category with an additional natural transformation, called the strength, which governs how the monad interacts with the monoidal product.
Strong monads play an important role in theoretical computer science where they are used to model computation with side effects.
Definition
A (left) strong monad is a monad (T, η, μ) over a monoidal category (C, ⊗, I) together with a natural transformation tA,B : A ⊗ TB → T(A ⊗ B), called (tensorial) left strength, such that the coherence diagrams relating the strength to the left unitor, the associator, the unit η and the multiplication μ commute for every object A, B and C.
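Spelled out equationally (a standard presentation, supplied here for concreteness since the original diagrams are not reproduced), the four commuting diagrams amount to:

$$
\begin{aligned}
T(\lambda_A)\circ t_{I,A} &= \lambda_{TA},\\
t_{A,B\otimes C}\circ(\mathrm{id}_A\otimes t_{B,C})\circ\alpha_{A,B,TC} &= T(\alpha_{A,B,C})\circ t_{A\otimes B,C},\\
t_{A,B}\circ(\mathrm{id}_A\otimes\eta_B) &= \eta_{A\otimes B},\\
t_{A,B}\circ(\mathrm{id}_A\otimes\mu_B) &= \mu_{A\otimes B}\circ T(t_{A,B})\circ t_{A,TB},
\end{aligned}
$$

where λ denotes the left unitor and α the associator of (C, ⊗, I).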
Commutative strong monads
For every strong monad T on a symmetric monoidal category, a right strength natural transformation t′A,B : TA ⊗ B → T(A ⊗ B) can be defined by transporting the left strength along the symmetry γ, namely t′A,B = T(γB,A) ∘ tB,A ∘ γTA,B.
A strong monad T is said to be commutative when the two canonical morphisms TA ⊗ TB → T(A ⊗ B), obtained by applying the left and right strengths in the two possible orders and then the multiplication μ, coincide for all objects A and B.
Properties
The Kleisli category of a commutative monad is symmetric monoidal in a canonical way; see corollary 7 in Guitart and corollary 4.3 in Power & Robinson. When a monad is strong but not necessarily commutative, its Kleisli category is a premonoidal category.
One interesting fact about commutative strong monads is that they are "the same as" symmetric monoidal monads. More explicitly,
a commutative strong monad defines a symmetric monoidal monad whose monoidal structure map TA ⊗ TB → T(A ⊗ B) is the common composite of the two strengths followed by the multiplication,
and conversely a symmetric monoidal monad defines a commutative strong monad whose strength A ⊗ TB → T(A ⊗ B) is obtained by precomposing the monoidal structure map with ηA ⊗ idTB,
and the conversion between one and the other presentation is bijective.
References
External links
Strong monad at the nLab
Adjoint functors
Monoidal categories | Strong monad | [
"Mathematics"
] | 357 | [
"Monoidal categories",
"Mathematical structures",
"Category theory"
] |
4,228,351 | https://en.wikipedia.org/wiki/Background%20Intelligent%20Transfer%20Service | Background Intelligent Transfer Service (BITS) is a component of Microsoft Windows XP and later iterations of the operating systems, which facilitates asynchronous, prioritized, and throttled transfer of files between machines using idle network bandwidth. It is most commonly used by recent versions of Windows Update, Microsoft Update, Windows Server Update Services, and System Center Configuration Manager to deliver software updates to clients, Microsoft's anti-virus scanner Microsoft Security Essentials (a later version of Windows Defender) to fetch signature updates, and is also used by Microsoft's instant messaging products to transfer files. BITS is exposed through the Component Object Model (COM).
Technology
BITS uses idle bandwidth to transfer data. Normally, BITS transfers data in the background, i.e., BITS will only transfer data whenever there is bandwidth which is not being used by other applications. BITS also supports resuming transfers in case of disruptions.
BITS version 1.0 supports only downloads. From version 1.5, BITS supports both downloads and uploads. Uploads require the IIS web server, with BITS server extension, on the receiving side.
Transfers
BITS transfers files on behalf of requesting applications asynchronously, i.e., once an application requests the BITS service for a transfer, it will be free to do any other task, or even terminate. The transfer will continue in the background as long as the network connection is there and the job owner is logged in. BITS jobs do not transfer when the job owner is not signed in.
BITS suspends any ongoing transfer when the network connection is lost or the operating system is shut down. It resumes the transfer from where it left off when (the computer is turned on later and) the network connection is restored. BITS supports transfers over SMB, HTTP and HTTPS.
Bandwidth
BITS attempts to use only spare bandwidth. For example, when applications use 80% of the available bandwidth, BITS will use only the remaining 20%. BITS constantly monitors network traffic for any increase or decrease in network traffic and throttles its own transfers to ensure that other foreground applications (such as a web browser) get the bandwidth they need.
Note that BITS does not necessarily measure the actual bandwidth. BITS versions 3.0 and up will use Internet Gateway Device counters, if available, to more accurately calculate available bandwidth. Otherwise, BITS will use the speed as reported by the NIC to calculate bandwidth. This can lead to bandwidth calculation errors, for example when a fast network adapter (10 Mbit/s) is connected to the network via a slow link (56 kbit/s).
Jobs
BITS uses a queue to manage file transfers. A BITS session has to be started from an application by creating a Job. A job is a container, which has one or more files to transfer. A newly created job is empty. Files must be added, specifying both the source and destination URIs. While a download job can have any number of files, upload jobs can have only one. Properties can be set for individual files. Jobs inherit the security context of the application that creates them.
BITS provides API access to control jobs. A job can be programmatically started, stopped, paused, resumed, and queried for status. Before starting a job, a priority has to be set for it to specify when the job is processed relative to other jobs in the transfer queue. By default, all jobs are of Normal priority. Jobs can optionally be set to High, Low, or Foreground priority. Background transfers are optimized by BITS, which increases and decreases (or throttles) the rate of transfer based on the amount of idle network bandwidth that is available. If a network application begins to consume more bandwidth, BITS decreases its transfer rate to preserve the user's interactive experience, except for Foreground priority downloads.
Scheduling
BITS schedules each job to receive only a finite time slice, for which only that job is allowed to transfer, before it is temporarily paused to give another job a chance to transfer. Higher priority jobs get a higher chunk of time slice. BITS uses round-robin scheduling to process jobs in the same priority and to prevent a large transfer job from blocking smaller jobs.
When a job is newly created, it is automatically suspended (or paused). It has to be explicitly resumed to be activated. Resuming moves the job to the queued state. On its turn to transfer data, it first connects to the remote server and then starts transferring. After the job's time slice expires, the transfer is temporarily paused, and the job is moved back to the queued state. When the job gets another time slice, it has to connect again before it can transfer. When the job is complete, BITS transfers ownership of the job to the application that created it.
BITS includes a built-in mechanism for error handling and recovery attempts. Errors can be either fatal or transient; either moves a job to the respective state. A transient error is a temporary error that resolves itself after some time. For a transient error, BITS waits for some time and then retries. For fatal errors, BITS transfers control of the job to the creating application, with as much information regarding the error as it can provide.
Command-line interface tools
BITSAdmin command
Microsoft provides a BITS Administration Utility (BITSAdmin) command-line utility to manage BITS jobs. The utility is part of Windows Vista and later. It is also available as a part of the Windows XP Service Pack 2 Support Tools or Windows Server 2003 Service Pack 1 Support Tools.
Usage example:
C:\>bitsadmin /transfer myDownloadJob /download /priority normal https://example.com/file.zip C:\file.zip
PowerShell BitsTransfer
In Windows 7, the BITSAdmin utility is deprecated in favor of Windows PowerShell cmdlets. The BitsTransfer PowerShell module provides eight cmdlets with which to manage BITS jobs.
The following example is the equivalent of the BITSAdmin example above:
PS C:\> Start-BitsTransfer -Source "https://example.com/file.zip" -Destination "C:\file.zip" -DisplayName "myDownloadJob"
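For asynchronous jobs, the same module can create, monitor and complete a transfer explicitly. The following is only a sketch using the BitsTransfer cmdlets (Start-BitsTransfer, Get-BitsTransfer, Complete-BitsTransfer); the URL and file path are placeholders, and error states such as TransientError are not handled:

# Create the job asynchronously; the call returns a job object immediately.
PS C:\> $job = Start-BitsTransfer -Source "https://example.com/file.zip" -Destination "C:\file.zip" -Asynchronous -DisplayName "myDownloadJob"
# Poll until BITS reports that all bytes have been transferred.
PS C:\> while ((Get-BitsTransfer -JobId $job.JobId).JobState -ne "Transferred") { Start-Sleep -Seconds 5 }
# Completing the job moves the temporary file to its final destination.
PS C:\> Complete-BitsTransfer -BitsJob $job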
List of non-Microsoft applications that use BITS
AppSense – Uses BITS to install Packages on clients.
BITS Download Manager – A download manager for Windows that creates BITS Jobs.
BITSync – An open source utility that uses BITS to perform file synchronization on Server Message Block network shares.
Civilization V – Uses BITS to download mod packages.
Endless OS installer for Windows – Uses BITS to download OS images.
Eve Online – Uses BITS to download all the patches post-Apocrypha (March 10, 2009). It is also now used in the client repair tool.
Some Google services including Chrome, Gears, Pack, Flutter updater and YouTube Uploader used BITS.
Firefox (since version 68) for updates.
KBOX Systems Management Appliance – A systems management appliance that can use BITS to deliver files to Windows systems.
RSS Bandit – Uses BITS to download attachments in web feeds.
Oxygen media platform – Uses BITS to distribute Media Content and Software Updates.
SharpBITS – An open source download manager for Windows that handles BITS jobs.
WinBITS – An open source Downloader for Windows that downloads files by creating BITS Jobs.
Novell ZENworks Desktop Management – A systems management software that can use BITS to deliver application files to workstations.
Specops Deploy/App – A systems management software that (when available) uses BITS for delivering packages to the clients in the background.
See also
List of Microsoft Windows components
Comparison of file transfer protocols
References
External links
Background Intelligent Transfer Service in Windows Server 2008
Fix Background Intelligent Transfer Service in Windows 10
BITS version history
bitsadmin | Microsoft Docs
Distributed data storage
Network file transfer protocols
Hypertext Transfer Protocol clients
Windows services
Windows administration | Background Intelligent Transfer Service | [
"Technology"
] | 1,622 | [
"Windows commands",
"Computing commands"
] |
4,228,371 | https://en.wikipedia.org/wiki/Vertical%20wind%20tunnel | A vertical wind tunnel (VWT) is a wind tunnel that moves air up in a vertical column. Unlike standard wind tunnels, which have test sections that are oriented horizontally, as experienced in level flight, a vertical orientation enables gravity to be countered by drag instead of lift, as experienced in an aircraft spin or by a skydiver at terminal velocity.
Although vertical wind tunnels have been built for aerodynamic research, the most high-profile are those used as recreational wind tunnels, frequently advertised as indoor skydiving or bodyflight, which have also become a popular training tool for skydivers.
Recreational vertical wind tunnels
A recreational wind tunnel enables human beings to experience the sensation of flight without planes or parachutes, through the force of wind being generated vertically. Air moves upwards at approximately 195 km/h (120 mph or 55 m/s), the terminal velocity of a falling human body belly-downwards. A vertical wind tunnel is frequently called 'indoor skydiving' due to the popularity of vertical wind tunnels among skydivers, who report that the sensation is extremely similar to skydiving. The human body 'floats' in midair in a vertical wind tunnel, replicating the physics of 'body flight' or 'bodyflight' experienced during freefall.
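The quoted airspeed corresponds roughly to the terminal velocity at which aerodynamic drag balances a flyer's weight. A back-of-the-envelope estimate follows; the mass, air density, and drag-area values are illustrative assumptions, not measured figures:

import math

g = 9.81          # gravitational acceleration, m/s^2
m = 80.0          # assumed flyer mass, kg
rho = 1.2         # assumed air density near sea level, kg/m^3
cd_area = 0.45    # assumed drag coefficient x frontal area for a belly-down body, m^2

# Terminal velocity: weight m*g equals drag 0.5*rho*cd_area*v^2
v = math.sqrt(2 * m * g / (rho * cd_area))
print(round(v), "m/s,", round(v * 3.6), "km/h")   # roughly 54 m/s, about 194 km/h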
History
The first human to fly in a vertical wind tunnel was Jack Tiffany, in 1964, at Wright-Patterson Air Force Base, located in Greene and Montgomery counties, Ohio.
In 1982, Jean St-Germain, an inventor from Drummondville, Quebec, sold a vertical wind tunnel concept to both Les Thompson and Marvin Kratter, both of whom went on to build their own wind tunnels. Soon after, St-Germain sold the franchising rights to Kratter for $1.5 million. Originally known as the "Aérodium", the concept was patented as the "Levitationarium" by St-Germain in the USA in 1984 and 1994, under Patent Nos. 4,457,509 and 5,318,481, respectively.
The first reference, in print, to a Vertical Wind Tunnel specifically for parachuting was published in CANPARA (the Canadian Sport Parachuting Magazine) in 1979.
St-Germain then helped build two wind tunnels in America. The first vertical wind tunnel intended purely for commercial use opened in the summer of 1982 in Las Vegas, Nevada. Later that same year, a second wind tunnel opened in Pigeon Forge, Tennessee. Both facilities opened and operated under the name Flyaway Indoor Skydiving. In 2005, Keith Fields, who had managed Flyaway for 15 years, purchased the Las Vegas facility and later renamed it "Vegas Indoor Skydiving".
In the 1990s, William Kitchen, an inventor living in Orlando, Florida, filed patents for a vertical wind tunnel and founded the US company "Sky Venture" in July 1998. This tunnel was specifically designed to simulate the free-fall skydiving experience. Popularity grew quickly, and the Orlando site was visited by former U.S. President George H.W. Bush. As the initial location continued to rise in popularity, the rights were sold to Alan Metni, who divided the company into a manufacturing and distribution company (Sky Venture) and a public experience company (iFly), which now operates or has licensed tunnels at over 80 locations around the world, including 5 cruise ships, with more in the works.
Another milestone in vertical wind tunnel history was 'Wind Machine' at the closing ceremonies of the 2006 Torino Winter Olympics. This was a custom-built unit by Aerodium (Latvia/Canada) for the closing ceremony. Many people had never seen a vertical wind tunnel before, and were fascinated by the flying humans with no wires.
A vertical wind tunnel performance in Moscow's Red Square was shown in 2009 during the presentation of logotype of Sochi 2014 Winter Olympics. In 2010, a vertical wind tunnel was shown at the Latvian exhibition of Expo 2010 in Shanghai, China.
Types
Outdoor vertical wind tunnels can either be portable or stationary. Portable vertical wind tunnels are often used in movies and demonstrations, and are often rented for large events such as conventions and state fairs. Portable units offer a dramatic effect for the flying person and the spectators, because there are no walls around the flight area. These vertical wind tunnels allow people to fly with a full or partial outdoor/sky view. Outdoor vertical wind tunnels may also have walls or netting around the wind column, to keep beginner tunnel flyers from falling out of the tunnel.
Stationary indoor vertical wind tunnels include recirculating and non-recirculating types. Non-recirculating vertical wind tunnels usually suck air through inlets near the bottom of the building, through the bodyflight area, and exhaust through the top of the building. Recirculating wind tunnels form an aerodynamic loop with turning vanes, similar to a scientific wind tunnel, but using a vertical loop with a bodyflight chamber within a vertical part of the loop. Recirculating wind tunnels are usually built in climates that are too cold for non-recirculating wind tunnels. The airflow of an indoor vertical wind tunnel is usually smoother and more controlled than that of an outdoor unit. Indoor tunnels are more temperature-controllable, so they are operated year-round even in cold climates.
Various propellers and fan types can be used as the mechanism to move air through a vertical wind tunnel. Motors can either be diesel-powered or electric-powered, and typically provide a vertical column of air between 6 and 16 feet wide. A control unit allows for air speed adjustment by a controller in constant view of the flyers. Wind speed can be adjusted at many vertical wind tunnels, usually between 130 and 300 km/h (80 and 185 mph, or 35 and 80 m/s), to accommodate the abilities of an individual and to compensate for variable body drag during advanced acrobatics.
Safety and market appeal
Indoor skydiving also appeals to the mass-market audience of people who are afraid of heights, since in a vertical wind tunnel one floats only a few meters above trampoline-type netting. Indoor vertical wind tunnels contain the person within a chamber through the use of walls. While wind tunnel flying is considered a low-impact activity, it does exert some strain on the flier's back, neck, and shoulders, so people with shoulder dislocations or back or neck problems should check with a doctor first. While actual skydiving out of an aircraft is subject to age limitations, which vary from country to country and even from state to state in the US, bodyflying has no set lower or upper age limit.
Competitions
A number of competitions based on indoor skydiving have emerged, such as the FAI World Cup of Indoor Skydiving and the Windoor Wind Games.
References
External links
Indoor Skydiving Source - Complete Wind Tunnel Database & Resource
First published article about a Vertical Wind Tunnel specifically designed for Free Fall, Skydiving, Sport Parachuting in CANPARA, the Canadian Sport Parachuting Magazine, at the end of 1979
Dropzone.com Indoor Section - Information Resource and Wind Tunnel Database
Air sports
Aerodynamics
Wind tunnels
Parachuting | Vertical wind tunnel | [
"Chemistry",
"Engineering"
] | 1,442 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
4,228,754 | https://en.wikipedia.org/wiki/United%20States%20National%20Grid | The United States National Grid (USNG) is a multi-purpose location system of grid references used in the United States. It provides a nationally consistent "language of location", optimized for local applications, in a compact, user friendly format. It is similar in design to the national grid reference systems used in other countries. The USNG was adopted as a national standard by the Federal Geographic Data Committee (FGDC) of the US Government in 2001.
Overview
While latitude and longitude are well suited to describing locations over large areas of the Earth's surface, most practical land navigation situations occur within much smaller, local areas. As such, they are often better served by a local Cartesian coordinate system, in which the coordinates represent actual distance units on the ground, using the same units of measurement from two perpendicular coordinate axes. This can improve human comprehension by providing reference of scale, as well as making actual distance computations more efficient.
Paper maps often are published with overlaid rectangular (as opposed to latitude/longitude) grids to provide a reference to identify locations. However, these grids, if non-standard or proprietary (such as so-called "bingo" grids with references such as "B-4"), are typically not interoperable with each other, nor can they usually be used with GPS.
The goal of the USNG is to provide a uniform, nationally consistent rectangular grid system that is interoperable across maps at different scales, as well as with GPS and other location based systems. It is intended to provide a frame of reference for describing and communicating locations that is easier to use than latitude/longitude for many practical applications, works across jurisdictional boundaries, and is simple to learn, teach, and use. It is also designed to be both flexible and scalable so that location references are as compact and concise as possible.
The USNG is intended to supplement—not to replace—other location systems such as street addresses. It can be applied to printed maps and to computer mapping and other (GIS) applications. It has found increasing acceptance especially in emergency management, search and rescue, and other public safety applications; yet, its utility is by no means limited to those fields.
Description and functioning
The USNG is an alpha-numeric reference system that overlays the UTM coordinate system. A number of brief tutorial references explain the system in detail, with examples. Briefly, an example of a full USNG spatial address (grid reference) is: 18S UJ 23371 06519 (This example, used by the FGDC, is the full one-meter grid reference of the Jefferson Pier in Washington, DC.)
This full form (15 characters) uniquely identifies a single one-meter grid square out of the entire surface of the Earth. It consists of three parts (each of which follows the "read-right-then-up" paradigm familiar from other "X,Y" coordinates):
Grid Zone Designation (GZD); for a world-wide unique address. This consists of up to 2 digits (6-degree longitude UTM zone) for West to East, followed by a letter (8-degree latitude band) from South to North; in this example, "18S".
100,000-meter (100 km) Square Identification; for regional areas. This consists of two letters, the first West to East, the second South to North; in this example, "UJ".
Grid Coordinates; for local areas. This part consists of an even number of digits, in this example, 23371 06519, and specifies a location within the 100 km grid square, relative to its lower-left corner. Split in half, the first part (here 23371), called the "easting", gives the displacement east of the left edge of the square; the second part (here 06519), called the "northing", gives a distance north of the bottom edge of the containing square.
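As an illustration of this three-part structure, a minimal parsing sketch in Python; the splitting logic is simplified and assumes a well-formed, space-separated reference such as the FGDC example above:

ref = "18S UJ 23371 06519"
gzd, square, easting, northing = ref.split()   # Grid Zone Designation, 100 km square ID, easting, northing
print(gzd)       # 18S   -> UTM zone 18, latitude band S
print(square)    # UJ    -> 100,000-meter square identification
print(easting)   # 23371 -> meters east of the square's left edge
print(northing)  # 06519 -> meters north of the square's bottom edge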
Users determine the required precision, so a grid reference is typically truncated to fewer than the full 10 digits when less precision is required. These values represent a point position (southwest corner) for an area of refinement:
Ten digits..... 23371 06519 ..Locating a point within a 1 m square
Eight digits..... 2337 0651 ...Locating a point within a 10 m square
Six digits......... 233 065 .....Locating a point within a 100 m square
Four digits......... 23 06 .......Locating a point within a 1000 m (1 km) square
Two digits........... 2 0 .........Locating a point within a 10000 m (10 km) square
Note that when going from a higher- to a lower-precision grid reference, it is important to truncate rather than round when removing the unneeded digits. Because one is always measuring from the lower-left corner of the 100 km square, this ensures that a lower-precision grid reference is a square that contains all of the higher-precision references contained within it.
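Because truncation (not rounding) is what preserves this containment property, reducing precision is simply a matter of dropping trailing digits from both the easting and the northing. A minimal sketch; the function name is illustrative and not part of any standard library:

def truncate_usng(easting, northing, digits):
    # Keep only the leading `digits` characters of each half; never round.
    return easting[:digits] + " " + northing[:digits]

print(truncate_usng("23371", "06519", 4))  # "2337 0651" -> 10 m square
print(truncate_usng("23371", "06519", 2))  # "23 06"     -> 1 km square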
In addition to truncating references (on the right) when less precision is required, another powerful feature of USNG is the ability to omit (on the left) the Grid Zone Designation, and possibly even the 100 km Square Identification, when one or both of these are unambiguously understood; that is, when operating within a known regional or local area. For example:
Full USNG: 18S UJ 23371 06519 (world-wide unique reference to 1 meter precision)
Without Grid Zone Designation: UJ 2337 0651 (when regional area is understood; here to 10 meter precision)
Without 100 km Square Identification: 233 065 (when local area is understood; here to 100 meter precision)
Thus in practical usage, USNG references are typically very succinct and compact, making them convenient (and less error prone) for communication.
History
Rectangular, distance-based (Cartesian) coordinate systems have long been recognized for their practical utility for land measurement and geolocation over local areas. In the United States, the Public Land Survey System (PLSS), created in 1785 in order to survey land newly ceded to the nation, introduced a rectangular coordinate system to improve on the metes-and-bounds survey basis used earlier in the original colonies. In the first half of the 20th century, State Plane Coordinate Systems (SPCS) brought the simplicity and convenience of Cartesian coordinates to state-level areas, providing high accuracy (low distortion) survey-grade coordinates for use primarily by state and local governments. (Both of these planar systems remain in use today for specialized purposes.)
Internationally, during the period between World Wars I and II, several European nations mapped their territory with national-scale grid systems optimized for the geography of each country, such as the Ordnance Survey National Grid (British National Grid). Near the end of World War II, the Universal Transverse Mercator (UTM) coordinate system extended this grid concept around the globe, dividing it into 60 zones of 6 degrees longitude each. Circa 1949, the US further refined UTM for ease of use (and combined it with the Universal Polar Stereographic system covering polar areas) to create the Military Grid Reference System (MGRS), which remains the geocoordinate standard used across the militaries of NATO countries.
In the 1990s, a US grass-roots citizen effort led to the Public X-Y Mapping Project, a not-for-profit organization created specifically to promote the acceptance of a national grid for the United States. The Public XY Mapping Project developed the idea, conducting informal tests and surveys to determine which coordinate reference system best met the requirements of national consistency and ease of human use. Based on its findings, a standard based on the MGRS was adopted and brought to the Federal Geographic Data Committee (FGDC) in 1998. After an iterative review process and public comment period, the USNG was adopted by the FGDC as standard FGDC-STD-011-2001 in December 2001.
Since then, the USNG has seen gradual but steadily increasing adoption both in formal standards and in practical use and applications, in public safety and in other fields.
Advantages over latitude/longitude
Users encountering the USNG (or similar grid reference systems) sometimes question why they are used instead of latitude and longitude coordinates, with which they may be more familiar. Proponents note that, in contrast to latitude and longitude coordinates, the USNG provides:
Coordinate units that represent actual distances on the ground
Equal distance units in both east–west and north–south directions
An intuitive sense of scale and distance, across a local area
Simpler distance calculation (by the Pythagorean theorem rather than spherical trigonometry; a worked sketch follows this list)
A single unambiguous representation instead of the three (3) formats of latitude and longitude, each in widespread use, and each having punctuation sub-variants:
degrees-minutes-seconds (DMS): N 38°53'23.3", W 077°02'11.6"
degrees-minutes-decimal minutes (DMM or DDM): 38°53.388' N, 077°02.193' W
decimal degrees (DDD or DD): 38.88980°, -077.03654°
This format ambiguity has led to confusion with potentially serious consequences, particularly in emergency situations.
References comprising only alphanumeric characters (letters and positive numbers). (Spaces have no significance but are allowed for readability.)
No negative numbers, hemisphere indicators (+, -, N, S, E, W), decimal points (.), or special symbols (°, ′, ″, :).
A familiar "read right then up" convention of XY Cartesian coordinates.
An explicit convention for shortening references (at two levels) when the local or regional area is already unambiguously known.
A reference to a definite grid square with variable, explicit precision (size), rather than to a point with (usually) unspecified precision implicit in number of decimal places.
All of the above also lead to USNG references being typically very succinct and compact, with flexibility to convey precise location information in a short sequence of characters that is easily relayed in writing or by voice.
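For example, the distance between two references that share the same grid zone and 100 km square can be computed directly with the Pythagorean theorem. In the sketch below, the two 100 m references are hypothetical values used only for illustration:

import math

def usng_distance_m(e1, n1, e2, n2):
    # Each digit group of length d covers squares of 10**(5 - d) meters;
    # with 3-digit (100 m) references the unit is 100 m.
    unit = 10 ** (5 - len(e1))
    de = (int(e2) - int(e1)) * unit
    dn = (int(n2) - int(n1)) * unit
    return math.hypot(de, dn)

# "233 065" to "240 072": 700 m east and 700 m north, roughly 990 m apart.
print(round(usng_distance_m("233", "065", "240", "072")))  # 990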
Limitations
As with any projection that seeks to represent the curved Earth as a flat surface, distortions and tradeoffs will inevitably occur. The USNG attempts to balance and minimize these, consistent with making the grid as useful as possible for its intended purpose of efficiently communicating practical locations. Since the UTM (the basis for USNG) is not a single projection, but rather a set of 6-degree longitudinal zones, there will necessarily be a local discontinuity along each of the 'seam' meridians between zones. However, every point continues to have a well-defined, unique geoaddress, and there are established conventions to minimize confusion near zone intersections. The six-degree zone width of UTM strikes a balance between the frequency of these discontinuities versus distortion of scale, which would increase unacceptably if the zones were made wider. (UTM further uses a 0.9996 scale factor at the central meridian, growing to 1.0000 at two meridians offset from the center, and increasing toward the zone boundaries, so as to minimize the overall effect of scale distortion across the zone breadth.) The USNG is not intended for surveying, for which a higher-precision (lower-distortion) coordinate system such as SPCS would be more appropriate. Also, since USNG north-south grid lines are (by design) a fixed distance from the zone central meridian, only the central meridian itself will be aligned with "true north". Other grid lines establish a local "grid north", which will differ from true north by a small amount. The amount of this deviation, which is indicated on USGS topographic maps, is typically much less than the magnetic declination (between true north and magnetic north), and is small enough that it can be disregarded in most land navigation situations.
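As a rough illustration of what the 0.9996 central-meridian scale factor means in practice (ignoring elevation and using the nominal factor only):

central_meridian_scale = 0.9996
true_distance_m = 1000.0                                  # 1 km measured on the ground
grid_distance_m = true_distance_m * central_meridian_scale
print(grid_distance_m)                                    # 999.6
print(round(true_distance_m - grid_distance_m, 1))        # about 0.4 m shortfall per km at the central meridian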
Adoption and current applications
Standards
Since its adoption as a national standard in 2001, the USNG has itself been incorporated into standards and operating procedures of other organizations:
In 2011, the US Government's National Search and Rescue Committee (NSARC) released Version 1.0 of the Land Search and Rescue Addendum to the National Search and Rescue Supplement to the International Aeronautical and Maritime Search and Rescue Manual. This document specifies the US National Grid as the primary standard coordinate reference system to be used for all land-based search and rescue (SAR) activities in the US.
In 2015, the Federal Emergency Management Agency (FEMA) issued FEMA Directive 092–5, "Use of the United States National Grid (USNG)":
"POLICY STATEMENT: FEMA will use the United States National Grid (USNG) as its standard geographic reference system for land-based operations and will encourage use of the USNG among whole community partners."
A number of state and local Emergency Management agencies have also adopted the USNG for their operations.
Other organizations including the National Fire Protection Association (NFPA) and the Society of Automotive Engineers (SAE) have incorporated the USNG into specific standards issued by those organizations.
Gridded maps
The utility of almost every large or medium scale map (paper or electronic) can be greatly enhanced by having an overlaid coordinate grid. The USNG provides such a grid that is universal, interoperable, non-proprietary, works across all jurisdictions, and can readily be used with GPS receivers and other location service applications.
In addition to providing a convenient means to identify and communicate specific locations (points and areas), an overlaid USNG grid also provides an orientation, and—because it is distance based—a scale of distance that is present across the map.
USGS topographic maps have for decades been published with 1000-meter UTM tick marks in the map collar, and sometimes with full grid lines across the map. Recent editions of these maps (those referenced to the North American datum of 1983, or NAD83) are compatible with USNG, and current editions also contain a standard USNG information box in the collar which identifies the GZD(s) (Grid Zone Designators) and the 100 km Grid Square ID(s) covering the area of the particular map. USNG can now be found on various pre-printed and custom-printed maps available for purchase, or generated from various mapping software packages.
Software applications
A growing number of software applications incorporate or refer to the US National Grid. See the External Links section below for links to some of these, including The National Map (USGS). These applications include conventional mapping applications with overlaid USNG grid and/or coordinate readouts, and several 'you-are-here' mobile applications which give the user's current USNG coordinates, such as USNGapp.org and FindMeSAR.com.
Mission Manager, the most widely used incident management software tool for first responders, integrates the USNG in its functionality.
Search and rescue (SAR)
As noted above under Standards, since 2011 the USNG has been designated by the US Government's National Search and Rescue Committee (NSARC) as the primary coordinate reference system to be used for all land-based search and rescue (SAR) activities in the US. (Latitude and longitude [DMM variant] may be used as the secondary system for land responders; especially when coordinating with air and sea based responders who may use it as their primary system, and USNG as secondary.)
The National Association for Search and Rescue (NASAR) is moving its education and certification testing programming towards USNG. Other organizations such as the National Alliance for Public Safety GIS (NAPSG) also provide USNG SAR training.
FEMA Urban Search and Rescue (USAR) task forces including Florida Task Force 4 (FL-TF4) and Iowa Task Force 1 (IA-TF1) have incorporated the USNG into their training and operations.
Emergency Location Marker (ELM)
Responders are often faced with significant geolocation issues when responding to an emergency without a street address. This is particularly true in the recreational trail environment:
34% of U.S. response calls go to a location without a street address – recreational trails are a leading category.
Trails with location signs typically employ an approach unique to that park or trail system, and
Locally unique marking systems have no value to responders unless those locations are readily available via dispatch and response systems.
In response to these issues, in 2009, a project funded by the nonprofit SharedGeo and University of Minnesota/Minnesota Department of Transportation Local Operational Research Assistance (OPERA) grant program got underway which had the following objectives:
Develop a standardized Emergency Location Marker (ELM) which can be used anywhere in the nation in a variety of scenarios,
Align the marking system with established federal and state cartographic and signage standards,
Ensure the format leverages GPS instead of requiring constant updating of Computer Aided Dispatch (CAD) systems,
Use a consistent approach which over time will become instantly recognizable by the public, and
Involve multiple stakeholders during development to ensure a "Best Practices" outcome.
After three years of field research and vetting by multiple focus groups of trail users, responders, and geospatial experts, a design based on USNG was adopted.
This format, which can be used anywhere in the United States, was originally offered in three sizes to conform to federal, state and local signage standards:
6" x 9" (15 cm x 23 cm) -- for non-motorized trails
9" x 12" (23 cm x 30 cm) -- for motorized trails
12" x 12" (30 cm x 30 cm) -- for trail heads and huts
In the years since introduction, the USNG ELM program now includes vertical ELM versions for breakaway scenarios (e.g. mountain bike trails), ELM information signs, ELM stickers to retrofit trail posts, and corresponding apps such as USNGapp.org.
USNG ELM implementations can be found in Minnesota, Florida, Georgia, Hawaii, Michigan, and other states.
First responders
The USNG can increase the effectiveness of all types of emergency response, ranging from missing persons searches to off-road medical responses. In Lake County MN, with 900 miles of recreational trails, dispatchers and first responders have been provided the tools and training to use USNG as their primary means of geo-location. The goal of this education for responders and the public is to "Take the 'Search' out of 'Search and Rescue.'"
In addition to ELM signs, notices at trailheads encourage hikers and off-road vehicle operators to "Download this USNG App" on their cell phones. Trail maps including USNG grid lines allow responders to interpolate locations from 911 callers who give their coordinates from ELMs or GPS apps. Cell phones also provide responders the opportunity to counsel lost or injured persons to determine their location by downloading USNG apps on the spot. This saves time and effort for responders and patients alike who are not on roads or addressed locations. When multiple teams of responders are working in close vicinity, such as during woods searches for lost individuals, communicating with USNG allows them to truncate their coordinate string to eight digits, giving their location within 10 meters without the use of decimals, special symbols or unit descriptors, and intuitively estimate the distance and direction between teams for better coordination.
Emergency management
Emergency managers coordinate response to and recovery from all types of natural hazards and man-made threats. In large scale events, where responders may be imported from many jurisdictions, coordination of geo-location formats is mandatory. The USNG is used to reduce confusion and improve efficiency in response to wildfires, floods and hurricanes and other events.
As noted above, in 2015 the Federal Emergency Management Agency (FEMA) issued FEMA Directive 092–5, "Use of the United States National Grid (USNG)": "POLICY STATEMENT: FEMA will use the United States National Grid (USNG) as its standard geographic reference system for land-based operations and will encourage use of the USNG among whole community partners." "Lessons learned from several large-scale disasters within the past three decades highlight the need for a common, geographic reference system in order to anticipate resource requirements, facilitate decision-making, and accurately deploy resources. ... Decision support tools that apply the USNG enable emergency managers to locate positions and identify areas of interest or operations where traditional references (i.e., landmarks or street signs) may be destroyed, damaged, or missing due to the effects of a disaster." The USNG is also seen as a tool for enhancing situational awareness and facilitating a common operating picture in emergency scenarios.
The Department of Defense also has recognized the role of the civil USNG standard for the Armed Forces in support of homeland security and homeland defense.
Asset identification and mapping
Organizations such as public utilities, transportation departments, emergency responders, and others own or rely upon fixed, field-based assets which they need to track, inventory, maintain, and locate efficiently when needed. Examples include fire hydrants, overhead utility poles, storm drains, roadside signs, and many others.
Assigning unique identifiers is a common method for identifying and referencing particular assets. A strategically assigned asset identifier can include location information, thereby assuring both that the name is unique and that the location of the asset is always known. The USNG offers a method to locate any place or any object in the world with a brief alphanumeric code, which can be shortened depending on the known service area, and enhanced with a prefix code to identify the type of asset. Organizations have successfully fielded this type of USNG-based asset naming recently:"The Mohawk Valley Water Authority serves 40,000 customers in the Greater Utica Area in Central New York. We have 700+ miles of pipe, 28 storage tanks, 21 pump stations, and numerous fire hydrants. We communicate hydrant status information internally and with many fire departments. We need to name these items meaningfully. We have tried several naming conventions—both sequential and hierarchical—with confusing and disappointing results. We converted to USNG asset naming and have used this successfully for over 4 years!" -- Elisabetta T. DeGeronimo, Watershed/GIS Coordinator at Mohawk Valley Water Authority, Utica, New York
--
"Hundreds of thousands of roadside assets—culverts, drains, signs on ground mounts, signs on overhead support structures, signs on span wires, and guide rails—are found along the routes maintained by the New York State Department of Transportation. In the past, the existence of these assets was only recorded in construction plans and the minds and memories of dedicated career staff. Our new asset naming convention, based upon the U.S. National Grid, benefits the entire department and particularly the field forces." -- Mary Susan Knauss, Senior Transportation Analyst, Office of Transportation Management, New York State Department of Transportation, Albany, New York These and other contributors at Florida State University and elsewhere have collaborated to produce a manual to guide GIS users and others through the practical steps of naming assets using the USNG.
Recreation and other uses
There has been a concerted outreach to educate the public in the uses and advantages of USNG. Sharing USNG maps and apps with friends and families encourages them to keep each other informed of their locations when traveling off-road (i.e., in wilderness or on the water) for work or recreation. In addition, USNG can be used to mark and communicate locations in busy or remote urban areas, including where to meet friends in a wooded park, locating a car in a mall parking lot, or requesting help inside a large warehouse or business complex. One doesn't even need compass directions.
Scientific research fieldwork can also benefit.
Future direction and initiatives
The USNG has seen steady but gradually increasing adoption and use since the standard was approved in 2001. Formal adoption by other standards bodies has taken place, while practical adoption in actual use has been more uneven in achieving its full potential. In 2018, the USNG Institute (USNGI) was established "to study and report on USNG implementation efforts taking place across the United States", as was a USNG Implementation Working Group (USNG IWG) to assist and coordinate implementation efforts.
Further adoption of USNG for public safety and the Emergency Location Marker (ELM) system may depend in part on greater coordination of USNG adoption at Public Safety Answering Points (PSAPs, or 911 centers), in their procedures and Computer-Aided-Dispatch (CAD) systems. Currently such implementations, being generally under local control, have been more fragmented than some national adoption initiatives.
Proponents of the USNG envision many other ways in which it could play roles in improving safety, convenience, and quality of life.
See also
Cartesian coordinate system
Grid reference
Ordnance Survey National Grid (British National Grid)
Irish national grid reference system
Spatial Reference System
List of National Coordinate Reference Systems
Universal Transverse Mercator coordinate system (UTM)
Military Grid Reference System (MGRS)
Federal Geographic Data Committee (FGDC)
Public Land Survey System (PLSS)
State Plane Coordinate System (SPCS)
References
Further reading
A Quick Guide to Using USNG Coordinates (MapTools)
How to Read US National Grid (USNG) Coordinates (FGDC/NGA)
How to Read USNG Spatial Addresses (FGDC)
A Quick Guide to the USNG (NAPSG via USNG Center)
United States National Grid Standard (FGDC-STD-011-2001) (FGDC, official standard)
FEMA Directive 092-5: Use of the United States National Grid (USNG) (FEMA policy directive)
Implementation Guide to the USNG (NAPSG)
Emergency Location Marker (ELM) system (USNG Florida on Medium)
Hikers, Know Your Grid! (USNG Florida on Medium)
911 Caller Location Solutions (USNG Florida on Medium)
Why PSAPs Should Be Using The U.S. National Grid To Find 911 Callers (Kova Corp)
An Introduction to Standards-Based GIT and the US National Grid
Instructions for GIS Asset Naming Using the U.S. National Grid (USNG)
External links
General information sites about the USNG:
U.S. National Grid Information Center
USNG home page at the Federal Geographic Data Committee (FGDC)
USNG resources at the NAPSG Foundation
USNG resources at ESRI
USNG Florida
USNG Iowa
USNG resources at Florida Division of Emergency Management
USNG resources at Minnesota Geospatial Information Office
USNG resources at Dakota County (MN)
USNG resources at Clinton County (OH)
Online mapping and coordinate conversion sites:
USNGapp.org and FindMeSAR.com (mobile applications that give the user's current coordinates, e.g., for relay on calls for help)
GISsurfer (a general purpose web map with a USNG overlay and more)
GISsurfer: USNG and MGRS Coordinates (documentation, including "Why are USNG coordinates important?")
NAPSG Situational Awareness Viewer (select Grid Overlay button in toolbar for USNG)
The National Map Viewer (USGS; set coordinate display to USNG)
NOAA/NWS Enhanced Data Display (EDD) (with USNG coordinate display enabled)
Utility to convert latitude and longitude to USNG (NOAA/NGS)
Programmer resource: JavaScript utility for converting between lat/long and MGRS/USNG
Emergency Location Marker (ELM) system brief introductory videos:
Cook & Lake Counties (MN) (49s)
Cobb County (GA):
"Cobb County Expands Trail Marker Program" (1m 59s)
"Cobb's Trail Marker Program EXPLAINED!" (3m 20s)
"Cobb's Trail Markers Now at Kennesaw Mountain [National Battlefield Park]!" (2m 10s)
Fire Engineering's USNG Video Series
Geography of the United States
Cartography of the United States
Geographic coordinate systems
Geocodes | United States National Grid | [
"Mathematics"
] | 5,845 | [
"Geographic coordinate systems",
"Coordinate systems"
] |
4,228,767 | https://en.wikipedia.org/wiki/World%27s%20Largest%20Buffalo | World's Largest Buffalo is a sculpture of an American Bison located in Jamestown, North Dakota, United States, at the Frontier Village. It is visible from Interstate 94, overlooking the city from above the James River valley. The statue is a significant tourist draw for Jamestown and the source of its nickname, The Buffalo City.
Description
The sculpture is tall and long and weighs . It was constructed with stucco and cement around a steel beam frame shaped with wire mesh.
The sculpture is detailed and complete in many respects. It is modeled after a male bison in mid-stride and is anatomically correct.
History
The sculpture was commissioned in 1959 by local businessman Harold Newman, designed by Elmer Petersen, Jamestown College Art Professor and sculptor, and constructed under Petersen's supervision by professional construction workers and community members.
The final construction cost was approximately US$8,500 in 1969, a significant overrun from initial estimates closer to $4,600. The concrete slab that lies under the sculpture was added later and was not included in the initial cost.
When originally constructed, the statue stood alone on a hill south of Jamestown. Beginning in the mid-1960s, the city began expanding the site with the collection of a small number of historic buildings moved there in an attempt to recreate the look of a small Midwestern town in the 1800s. Named Frontier Village, the project has grown over the years to encompass several acres (hectares) with a complex of buildings and other attractions, including the National Buffalo Museum. However, the Buffalo remains the featured attraction.
In June 2007, the city of Jamestown received a grant of $16,500 from Hampton Hotels' Save-A-Landmark program to refurbish the buffalo. The money was used to repaint the buffalo to look more lifelike and to enlarge the horns. Original designer Elmer Petersen directly oversaw the renovation.
On July 24, 2010, the World's Largest Buffalo was named "Dakota Thunder", after a contest that drew more than 3,500 entries.
References
External links
Outdoor sculptures in North Dakota
Buildings and structures in Jamestown, North Dakota
Tourist attractions in Stutsman County, North Dakota
Roadside attractions in North Dakota
1959 sculptures
Sculptures of bison
Stucco sculptures
Construction records
Colossal statues in the United States
1959 establishments in North Dakota
Animal sculptures in North Dakota | World's Largest Buffalo | [
"Engineering"
] | 460 | [
"Construction",
"Construction records"
] |
11,009,033 | https://en.wikipedia.org/wiki/Type%20II%20supernova | A Type II supernova or SNII (plural: supernovae) results from the rapid collapse and violent explosion of a massive star. A star must have at least eight times, but no more than 40 to 50 times, the mass of the Sun to undergo this type of explosion. Type II supernovae are distinguished from other types of supernovae by the presence of hydrogen in their spectra. They are usually observed in the spiral arms of galaxies and in H II regions, but not in elliptical galaxies; those are generally composed of older, low-mass stars, with few of the young, very massive stars necessary to cause a supernova.
Stars generate energy by the nuclear fusion of elements. Unlike the Sun, massive stars possess the mass needed to fuse elements that have an atomic mass greater than hydrogen and helium, albeit at increasingly higher temperatures and pressures, causing correspondingly shorter stellar life spans. The degeneracy pressure of electrons and the energy generated by these fusion reactions are sufficient to counter the force of gravity and prevent the star from collapsing, maintaining stellar equilibrium. The star fuses increasingly higher mass elements, starting with hydrogen and then helium, progressing up through the periodic table until a core of iron and nickel is produced. Fusion of iron or nickel produces no net energy output, so no further fusion can take place, leaving the nickel–iron core inert. Due to the lack of energy output creating outward thermal pressure, the core contracts due to gravity until the overlying weight of the star can be supported largely by electron degeneracy pressure.
When the compacted mass of the inert core exceeds the Chandrasekhar limit of about 1.4 times the mass of the Sun, electron degeneracy is no longer sufficient to counter the gravitational compression. A cataclysmic implosion of the core takes place within seconds. Without the support of the now-imploded inner core, the outer core collapses inwards under gravity and reaches a velocity of up to 23% of the speed of light, and the sudden compression increases the temperature of the inner core to up to 100 billion kelvins. Neutrons and neutrinos are formed via inverse beta decay, releasing about 10^46 joules (100 foe) in a ten-second burst. The collapse of the inner core is halted by the repulsive nuclear force and neutron degeneracy, causing the implosion to rebound and bounce outward. The energy of this expanding shock wave is sufficient to disrupt the overlying stellar material and accelerate it to escape velocity, forming a supernova explosion. The shock wave and extremely high temperature and pressure rapidly dissipate but are present for long enough to allow for a brief period during which the production of elements heavier than iron occurs. Depending on the initial mass of the star, the remnants of the core form a neutron star or a black hole. Because of the underlying mechanism, the resulting supernova is also described as a core-collapse supernova.
There exist several categories of Type II supernova explosions, which are categorized based on the resulting light curve—a graph of luminosity versus time—following the explosion. Type II-L supernovae show a steady (linear) decline of the light curve following the explosion, whereas Type II-P display a period of slower decline (a plateau) in their light curve followed by a normal decay. Type Ib and Ic supernovae are a type of core-collapse supernova for a massive star that has shed its outer envelope of hydrogen and (for Type Ic) helium. As a result, they appear to be lacking in these elements.
Formation
Stars far more massive than the sun evolve in complex ways. In the core of the star, hydrogen is fused into helium, releasing thermal energy that heats the star's core and provides outward pressure that supports the star's layers against collapse – a situation known as stellar or hydrostatic equilibrium. The helium produced in the core accumulates there. Temperatures in the core are not yet high enough to cause it to fuse. Eventually, as the hydrogen at the core is exhausted, fusion starts to slow down, and gravity causes the core to contract. This contraction raises the temperature high enough to allow a shorter phase of helium fusion, which produces carbon and oxygen, and accounts for less than 10% of the star's total lifetime.
In stars of less than eight solar masses, the carbon produced by helium fusion does not fuse, and the star gradually cools to become a white dwarf. If they accumulate more mass from another star, or some other source, they may become Type Ia supernovae. But a much larger star is massive enough to continue fusion beyond this point.
The cores of these massive stars directly create temperatures and pressures needed to cause the carbon in the core to begin to fuse when the star contracts at the end of the helium-burning stage. The core gradually becomes layered like an onion, as progressively heavier atomic nuclei build up at the center, with an outermost layer of hydrogen gas, surrounding a layer of hydrogen fusing into helium, surrounding a layer of helium fusing into carbon via the triple-alpha process, surrounding layers that fuse to progressively heavier elements. As a star this massive evolves, it undergoes repeated stages where fusion in the core stops, and the core collapses until the pressure and temperature are sufficient to begin the next stage of fusion, reigniting to halt collapse.
{| class="wikitable"
|+ Core-burning nuclear fusion stages for a 25-solar mass star
!rowspan="2"| Process
!rowspan="2"| Main fuel
!rowspan="2"| Main products
!colspan="3"| star
|-
!style="font-weight: normal"| Temperature(K)
!style="font-weight: normal"| Density(g/cm3)
!style="font-weight: normal"| Duration
|-
|| hydrogen burning
|| hydrogen
|| helium
| style="text-align:center;"|
| style="text-align:center;"| 10
| style="text-align:center;"|
|-
|| triple-alpha process
|| helium
|| carbon, oxygen
| style="text-align:center;"|
| style="text-align:center;"| 2000
| style="text-align:center;"|
|-
|| carbon-burning process
|| carbon
|| Ne, Na, Mg, Al
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 1000 years
|-
|| neon-burning process
|| neon
|| O, Mg
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 3 years
|-
|| oxygen-burning process
|| oxygen
|| Si, S, Ar, Ca
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 0.3 years
|-
|| silicon-burning process
|| silicon
|| nickel (decays into iron)
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 5 days
|}
Core collapse
The factor limiting this process is the amount of energy that is released through fusion, which is dependent on the binding energy that holds together these atomic nuclei. Each additional step produces progressively heavier nuclei, which release progressively less energy when fusing. In addition, from carbon-burning onwards, energy loss via neutrino production becomes significant, leading to a higher rate of reaction than would otherwise take place. This continues until nickel-56 is produced, which decays radioactively into cobalt-56 and then iron-56 over the course of a few months. As iron and nickel have the highest binding energy per nucleon of all the elements, energy cannot be produced at the core by fusion, and a nickel-iron core grows. This core is under huge gravitational pressure. As there is no fusion to further raise the star's temperature to support it against collapse, it is supported only by degeneracy pressure of electrons. In this state, matter is so dense that further compaction would require electrons to occupy the same energy states. However, this is forbidden for identical fermion particles, such as the electron – a phenomenon called the Pauli exclusion principle.
When the core's mass exceeds the Chandrasekhar limit of about 1.4 times the mass of the Sun, degeneracy pressure can no longer support it, and catastrophic collapse ensues. The outer part of the core reaches velocities of up to 23% of the speed of light as it collapses toward the center of the star. The rapidly shrinking core heats up, producing high-energy gamma rays that decompose iron nuclei into helium nuclei and free neutrons via photodisintegration. As the core's density increases, it becomes energetically favorable for electrons and protons to merge via inverse beta decay, producing neutrons and elementary particles called neutrinos. Because neutrinos rarely interact with normal matter, they can escape from the core, carrying away energy and further accelerating the collapse, which proceeds over a timescale of milliseconds. As the core detaches from the outer layers of the star, some of these neutrinos are absorbed by the star's outer layers, beginning the supernova explosion.
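The electron-capture (inverse beta decay) reaction referred to above can be written as:

$p + e^{-} \rightarrow n + \nu_{e}$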
For Type II supernovae, the collapse is eventually halted by short-range repulsive neutron-neutron interactions, mediated by the strong force, as well as by degeneracy pressure of neutrons, at a density comparable to that of an atomic nucleus. When the collapse stops, the infalling matter rebounds, producing a shock wave that propagates outward. The energy from this shock dissociates heavy elements within the core. This reduces the energy of the shock, which can stall the explosion within the outer core.
The core collapse phase is so dense and energetic that only neutrinos are able to escape. As the protons and electrons combine to form neutrons by means of electron capture, an electron neutrino is produced. In a typical Type II supernova, the newly formed neutron core has an initial temperature of about 100 billion kelvins, about 10^4 times the temperature of the Sun's core. Much of this thermal energy must be shed for a stable neutron star to form, otherwise the neutrons would "boil away". This is accomplished by a further release of neutrinos. These 'thermal' neutrinos form as neutrino-antineutrino pairs of all flavors, and total several times the number of electron-capture neutrinos. The two neutrino production mechanisms convert the gravitational potential energy of the collapse into a ten-second neutrino burst, releasing about 10^46 joules (100 foe).
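The order of magnitude of this energy release can be checked against the gravitational binding energy gained by collapsing roughly a Chandrasekhar mass of material down to neutron-star size. The uniform-density formula and the 12 km radius below are simplifying assumptions used only for this estimate:

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.4 * 1.989e30   # roughly a Chandrasekhar mass, kg
R = 12e3             # assumed neutron-star radius, m

# Binding energy of a uniform-density sphere: (3/5) * G * M^2 / R
E = 0.6 * G * M**2 / R
print(f"{E:.1e} J")  # about 2.6e+46 J, the same order of magnitude as the ~10^46 J quoted above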
Through a process that is not clearly understood, about 1%, or 10^44 joules (1 foe), of the energy released (in the form of neutrinos) is reabsorbed by the stalled shock, producing the supernova explosion. Neutrinos generated by a supernova were observed in the case of Supernova 1987A, leading astrophysicists to conclude that the core collapse picture is basically correct. The water-based Kamiokande II and IMB instruments detected antineutrinos of thermal origin, while the gallium-71-based Baksan instrument detected neutrinos (lepton number = 1) of either thermal or electron-capture origin.
When the progenitor star is below about – depending on the strength of the explosion and the amount of material that falls back – the degenerate remnant of a core collapse is a neutron star. Above this mass, the remnant collapses to form a black hole. The theoretical limiting mass for this type of core collapse scenario is about . Above that mass, a star is believed to collapse directly into a black hole without forming a supernova explosion, although uncertainties in models of supernova collapse make calculation of these limits uncertain.
Theoretical models
The Standard Model of particle physics is a theory which describes three of the four known fundamental interactions between the elementary particles that make up all matter. This theory allows predictions to be made about how particles will interact under many conditions. The energy per particle in a supernova is typically 1–150 picojoules (tens to hundreds of MeV). The per-particle energy involved in a supernova is small enough that the predictions gained from the Standard Model of particle physics are likely to be basically correct. But the high densities may require corrections to the Standard Model. In particular, Earth-based particle accelerators can produce particle interactions which are of much higher energy than are found in supernovae, but these experiments involve individual particles interacting with individual particles, and it is likely that the high densities within the supernova will produce novel effects. The interactions between neutrinos and the other particles in the supernova take place with the weak nuclear force, which is believed to be well understood. However, the interactions between the protons and neutrons involve the strong nuclear force, which is much less well understood.
The major unsolved problem with Type II supernovae is that it is not understood how the burst of neutrinos transfers its energy to the rest of the star producing the shock wave which causes the star to explode. From the above discussion, only one percent of the energy needs to be transferred to produce an explosion, but explaining how that one percent of transfer occurs has proven extremely difficult, even though the particle interactions involved are believed to be well understood. In the 1990s, one model for doing this involved convective overturn, which suggests that convection, either from neutrinos from below, or infalling matter from above, completes the process of destroying the progenitor star. Heavier elements than iron are formed during this explosion by neutron capture, and from the pressure of the neutrinos pressing into the boundary of the "neutrinosphere", seeding the surrounding space with a cloud of gas and dust which is richer in heavy elements than the material from which the star originally formed.
Neutrino physics, which is modeled by the Standard Model, is crucial to the understanding of this process. The other crucial area of investigation is the hydrodynamics of the plasma that makes up the dying star; how it behaves during the core collapse determines when and how the shockwave forms and when and how it stalls and is reenergized.
In fact, some theoretical models incorporate a hydrodynamical instability in the stalled shock known as the "Standing Accretion Shock Instability" (SASI). This instability comes about as a consequence of non-spherical perturbations oscillating the stalled shock thereby deforming it. The SASI is often used in tandem with neutrino theories in computer simulations for re-energizing the stalled shock.
Computer models have been very successful at calculating the behavior of Type II supernovae when the shock has been formed. By ignoring the first second of the explosion, and assuming that an explosion is started, astrophysicists have been able to make detailed predictions about the elements produced by the supernova and of the expected light curve from the supernova.
Light curves for Type II-L and Type II-P supernovae
When the spectrum of a Type II supernova is examined, it normally displays Balmer absorption lines – reduced flux at the characteristic frequencies where hydrogen atoms absorb energy. The presence of these lines is used to distinguish this category of supernova from a Type I supernova.
When the luminosity of a Type II supernova is plotted over a period of time, it shows a characteristic rise to a peak brightness followed by a decline. These light curves have an average decay rate of 0.008 magnitudes per day, much lower than the decay rate for Type Ia supernovae. Type II is subdivided into two classes, depending on the shape of the light curve. The light curve for a Type II-L supernova shows a steady (linear) decline following the peak brightness. By contrast, the light curve of a Type II-P supernova has a distinctive flat stretch (called a plateau) during the decline, representing a period where the luminosity decays at a slower rate. The net luminosity decay rate is lower, at 0.0075 magnitudes per day for Type II-P, compared to 0.012 magnitudes per day for Type II-L.
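Because astronomical magnitudes are logarithmic, these decay rates translate into luminosity ratios via L2/L1 = 10^(-0.4 * delta_m). A quick comparison of the two subclasses over a 100-day decline, treating the quoted rates as constant (a simplification):

days = 100
for name, rate in [("Type II-P (plateau phase)", 0.0075), ("Type II-L", 0.012)]:
    delta_m = rate * days                 # total fading in magnitudes
    fraction = 10 ** (-0.4 * delta_m)     # remaining fraction of the initial luminosity
    print(f"{name}: {delta_m:.2f} mag fainter, about {fraction:.0%} of the original brightness")
# Output: the II-P rate leaves roughly 50% of the brightness, the II-L rate roughly 33%.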
The difference in the shape of the light curves is believed to be caused, in the case of Type II-L supernovae, by the expulsion of most of the hydrogen envelope of the progenitor star. The plateau phase in Type II-P supernovae is due to a change in the opacity of the exterior layer. The shock wave ionizes the hydrogen in the outer envelope – stripping the electron from the hydrogen atom – resulting in a significant increase in the opacity. This prevents photons from the inner parts of the explosion from escaping. When the hydrogen cools sufficiently to recombine, the outer layer becomes transparent.
Type IIn supernovae
The "n" denotes narrow, which indicates the presence of narrow or intermediate width hydrogen emission lines in the spectra. In the intermediate width case, the ejecta from the explosion may be interacting strongly with gas around the star – the circumstellar medium. The estimated circumstellar density required to explain the observational properties is much higher than that expected from the standard stellar evolution theory. It is generally assumed that the high circumstellar density is due to the high mass-loss rates of the Type IIn progenitors. The estimated mass-loss rates are typically higher than per year. There are indications that they originate as stars similar to luminous blue variables with large mass losses before exploding. SN 1998S and SN 2005gl are examples of Type IIn supernovae; SN 2006gy, an extremely energetic supernova, may be another example.
Some supernovae of type IIn show interactions with the circumstellar medium, which leads to an increased temperature of the circumstellar dust. This warm dust can be observed as a brightening in the mid-infrared light. If the circumstellar medium extends further from the supernova, the mid-infrared brightening can cause an infrared echo, causing the brightening to last more than 1000 days. These kinds of supernovae belong to the rare 2010jl-like supernovae, named after the archetypal SN 2010jl. Most 2010jl-like supernovae were discovered with the decommissioned Spitzer Space Telescope and the Wide-Field Infrared Survey Explorer (e.g. SN 2014ab, SN 2017hcc).
Type IIb supernovae
A Type IIb supernova has a weak hydrogen line in its initial spectrum, which is why it is classified as a Type II. However, later on the H emission becomes undetectable, and there is also a second peak in the light curve that has a spectrum which more closely resembles a Type Ib supernova. The progenitor could have been a massive star that expelled most of its outer layers, or one which lost most of its hydrogen envelope due to interactions with a companion in a binary system, leaving behind the core that consisted almost entirely of helium. As the ejecta of a Type IIb expands, the hydrogen layer quickly becomes more transparent and reveals the deeper layers.
The classic example of a Type IIb supernova is SN 1993J, while another example is Cassiopeia A. The IIb class was first introduced (as a theoretical concept) by Woosley et al. in 1987, and the class was soon applied to SN 1987K and SN 1993J.
See also
History of supernova observation
Supernova remnant
References
External links
Type 2 | Type II supernova | [
"Chemistry",
"Astronomy"
] | 4,154 | [
"Supernovae",
"Astronomical events",
"Explosions"
] |
11,009,044 | https://en.wikipedia.org/wiki/Dinner%20in%20the%20Sky | Dinner in the Sky is a Belgian-based novelty restaurant service which uses a crane to hoist its diners, table, and waiting staff into the air. Forbes magazine called it one of the world's ten most unusual restaurants.
Dinner in the Sky has mobile services available in 60 nations, and has operations in various cities including Paris and Las Vegas.
History
In 2007, David Ghysels, the owner of a marketing and communications company, partnered with Stefan Kerkhofs, a bungee jumping organizer, to create an aerial-based dinner for the Jeunes Restaurateurs d'Europe association. Shortly afterwards, Ghysels and Kerkhofs began receiving telephone calls from people around the world who wished to replicate their aerial dinner concept; the two men subsequently chose to franchise their idea. Ghysels said, "People were getting bored with just going to the same old restaurants."
In 2008, Las Vegas resident Michael Hinden and his wife Janeen discovered Dinner in the Sky during a trade fair. On 31 December 2008, the Hindens tested the concept in Las Vegas as part of a New Year's Eve party for their friends and business partners. In March 2009, Michael Hinden began operating a Las Vegas-based Dinner in the Sky on West Sahara Avenue during weekends.
By August 2009, Dinner in the Sky operated in more than a dozen countries, including Canada and China. At that time, Hinden planned to move his restaurant to the Las Vegas Strip, at the site of a vacant building previously used as a sales office for the nearby Trump International Hotel. Hinden, who had 15 employees working for his restaurant service, hoped to begin operating six days a week at the new location. However, Steve Wynn, owner of the Wynn and Encore properties across the street, objected to the plan, calling Dinner in the Sky a "carnival-like attraction." Boyd Gaming also opposed the relocation, which would place the restaurant near its Echelon Place project. Hinden's relocation plans were rejected by county officials who noted safety concerns and felt that such a restaurant did not belong on the Las Vegas Strip.
In January 2013, plans were underway for a new, permanent location in Las Vegas, near CityCenter. The new location was to cost $4 million, and would include the conversion of an office into a ground-based restaurant and bar. A groundbreaking ceremony for the Las Vegas location took place in June 2013. The Las Vegas restaurant was the company's first permanent location.
References
External links
Official website
Dinner in the Sky Video
Amusement rides
Restaurants in Belgium | Dinner in the Sky | [
"Physics",
"Technology"
] | 526 | [
"Physical systems",
"Machines",
"Amusement rides"
] |
11,009,131 | https://en.wikipedia.org/wiki/Short-tailed%20chinchilla | The short-tailed chinchilla (Chinchilla chinchilla) is a small rodent in the family Chinchillidae and is classified as an endangered species by the IUCN. Originating in South America, it belongs to the genus Chinchilla, which comprises two species: the long-tailed chinchilla and the short-tailed chinchilla. Although the short-tailed chinchilla was once found in Chile, Argentina, Peru, and Bolivia, the geographical distribution of the species has since contracted. Today, the species survives in the Andes Mountains of northern Chile, and small populations have also been found in southern Bolivia.
The short-tailed chinchilla is characterized by its grayish-blue fur, which is extremely dense and plush. Its short, furry tail distinguishes it from the long-tailed chinchilla. Compared to C. lanigera, C. chinchilla has smaller, more rounded ears and a slightly larger, stockier body.
Chinchillas have been exploited by humans for centuries. Commercial hunting of short-tailed chinchillas for fur began in Chile in 1828, feeding growing demand in Europe and the United States. As the demand for chinchilla pelts rose, the species' numbers declined, leading to its apparent extinction in 1917. In 1929 a ban on hunting chinchillas was enacted, but it was not strictly enforced until 1983. Despite the species' rediscovery in the wild in 1953, the population of short-tailed chinchillas has continued to decline, and the species has been categorized as endangered. Numerous threats to short-tailed chinchillas exist, including illegal hunting, habitat loss, firewood harvesting, and mining. In the last few decades, chinchillas have become increasingly popular as exotic pets, which has led to an increase in hunting and trapping.
Appearance and characteristics
Short-tailed chinchillas can be distinguished from long-tailed chinchillas by comparing general body length, head size, tail length, and ear size. Upon closer observation, short-tailed chinchillas have a larger body, a thicker neck, wider shoulders, and smaller ears than long-tailed chinchillas. Tail length distinguishes the two most clearly: the short-tailed chinchilla's tail measures up to 100 mm, whereas the long-tailed chinchilla's tail measures up to 130 mm. They have broad heads with vestigial cheek pouches.
The short-tailed chinchilla has a body measuring between 23 and 38 cm long and weighs around 400–800 g. Before maturity, short-tailed chinchillas weigh anywhere between 113 and 170 g. Short-tailed chinchillas bred as pets are typically larger, measuring almost twice the size of those in the wild. In both wild and domestic short-tailed chinchillas, females are larger than males, and this difference in size is more pronounced in domesticated chinchillas. Apart from this size difference, the sexes appear the same.
Short-tailed chinchillas are covered in a thick coat of extremely fine hair. The fur is very soft and plush because of the high number of hairs growing from each follicle: a single follicle can hold 50 hairs, compared to the one hair per follicle typical of humans. Chinchilla fur is extremely valuable and is considered the softest in the world. Fur color varies by individual, ranging across violet, sapphire, blue-grey, beige, brown, ebony, gray, white, cream, and pearl, with each hair having a black tip. Typically, the underbelly is a cream or off-white shade.
The tail is usually bushy and has coarser hair. The dense coat allows the species to survive the cold temperatures of its habitat in the Andes Mountains. Because the coat is extremely thick, it limits evaporative water loss and helps chinchillas maintain body warmth. The fur is also so dense that fleas and other parasites cannot penetrate the hair and will often die of suffocation. However, chinchillas cannot pant or sweat, and with such a dense coat they are prone to overheating, especially in the care of humans. Their natural cooling mechanism is pumping blood through their ears, which have finer hair than the rest of their bodies.
Chinchillas are extremely well adapted to their environment, with short front legs and long, powerful hind legs that aid in climbing and jumping in the mountains. Short-tailed chinchillas can jump across six-foot crevices, and their large feet, with foot pads and weak claws, allow them to move over rock crevices without slipping. Their vibrissae are extremely long in comparison to their body size, measuring around 100 mm. Short-tailed chinchillas have large eyes with vertical slit pupils, which give them a clear, wide view at night. Another prominent feature is their large ears, which help them hear faint sounds and listen for predators.
Behavior
Although not much is known about short-tailed chinchilla behavior because of the species' shy nature, they are known to be extremely intelligent creatures. In nature, they are timid and stay hidden throughout the day to avoid predators. Chinchillas are crepuscular, emerging at dawn and dusk to find food, and they navigate and forage through the darkness using their vibrissae. At dawn, chinchillas sunbathe and groom themselves by taking dust baths. In the wild, chinchillas living in the Andes Mountains roll in volcanic ash to coat their fur and prevent matting caused by oils from their skin. Owners of pet chinchillas often provide them with dust or sand baths to help distribute oils, remove dirt, and keep their fur soft.
Social interactions
Chinchillas are social creatures, normally living in colonies that may range from several to a hundred individuals, in groups called herds.
Reproduction
Short-tailed chinchillas have one mating partner and are considered monogamous. Because females are slightly larger than males, they often dominate them; pairs mate twice a year. The breeding season runs from November to May in the Northern Hemisphere. The gestation period lasts about 128 days.
Females may have up to two litters a year; a third is possible but unusual. Litter size ranges from one to six offspring, called kits, with two being the average. Newborn chinchillas are capable of eating plant food and are weaned at 6 weeks old. Short-tailed chinchillas reach sexual maturity relatively quickly, at an average age of 8 months, though it has been observed as early as 5.5 months in pet or captive chinchillas. In the wild, short-tailed chinchillas typically have a lifespan of 8–10 years, compared with 15–20 years in captivity.
An interesting behavior has been observed among females: lactating females sometimes feed the young of other females that are unable to produce milk. Unlike the males of many rodent species, father chinchillas also take on a caring and nurturing role, looking after the offspring while the mother is collecting food.
Defense mechanisms
Although they are not usually aggressive, pet chinchillas can develop a nipping tendency if handled improperly. If grabbed or bitten by a predator, a chinchilla can release tufts of its hair in order to escape, leaving the predator with a mouthful of fur; this is called a "fur slip." In pet chinchillas, fur slip can occur when owners hold them too tightly or when the animal is stressed.
Vocal sounds
Short-tailed chinchillas communicate by vocalizing and have specific calls. Chinchillas emit ten distinct sounds, each varying with the context of the situation. They make a whistle-like sound, growl, or chatter their teeth to warn others of danger, and they have also been known to emit hiss-and-spit noises when provoked and a cooing sound while mating.
Habitat
Short-tailed chinchillas primarily live in self-dug burrows or in crevices of rocky areas with shrubs and grasses nearby, usually mountainous grasslands. Typically, their habitat has a sparse cover of thorny shrubs, cacti, and patches of succulents. Chinchillas live in arid climates at high altitudes where temperatures drop sharply at night. Because of these environmental conditions, chinchillas have adapted to expend less energy by having a low metabolic rate. They are largely crepuscular, foraging for food at dusk and dawn.
Distribution
Historically, short-tailed chinchillas lived in the Andes Mountains and were native to Peru, Chile, Bolivia, and Argentina, although there has been speculation that they have become regionally extinct in Bolivia and Peru. In Bolivia, chinchillas ranged across the La Paz, Oruro, and Potosi regions, with the last wild specimens captured near Sabaya and Caranga; however, a small population was recently discovered near the Laguna Colorada basin. Today, the only recorded sightings of short-tailed chinchillas have been in the Andes Mountains of northern Chile, where the species persists. In Chile, known chinchilla populations have been seen near the towns of El Laco and Morro Negro, both near the Llullaillaco volcano in the Antofagasta region, as well as near Nevado Tres Cruces National Park in the Atacama region.
Range
Their range extends through the relatively barren areas of the Andes Mountains at an elevation of 9,800 to over 16,000 feet (3,000 to 5,000 meters).
Diet
The chinchilla's diet is heavily plant-based, consisting mainly of the grasses and shrubs found on mountainsides. Short-tailed chinchillas are herbivores and mainly feed on high-fiber vegetation, specifically foliage, leaves, shrubs, seeds, nuts, grasses, herbs, flowers, and grains. They compete for this food with other species, mainly grazers such as goats and cattle, and will sometimes also feed on insects. Their diet changes with the season depending on what is available, chiefly the perennial Chilean needle-grass. Short-tailed chinchillas obtain their drinking water from morning dew or from the flesh of plants such as cacti. While eating, the short-tailed chinchilla sits upright and grasps its food in its front feet. Chinchillas are prone to overeating when an excess of food is available, so pet owners must be careful not to overfeed them. Chinchillas also gnaw on whatever they can find to file down their constantly growing teeth.
History/Spread to U.S.
Chinchillas were hunted and kept as pets by the ancient Incas. In the 1700s, commercial hunting of chinchillas began in Chile. Short-tailed chinchillas were first brought to the U.S. in the 1920s by a mining engineer named Mathias F. Chapman. Chapman loved chinchillas and received permission from the government of Chile to import 12 individuals of the species to the U.S. He took care to acclimatize the chinchillas to their new environment: over the course of a year, he brought them to a lower altitude and fed them food from their natural habitat.
Population threats
Predators
Chinchillas have natural predators in the wild, both on the ground and in the sky. Birds such as owls and hawks may swoop down and snatch chinchillas, while on the ground snakes, wild cats, and foxes hunt them as prey. In the three recognized populations, the Andean fox is the main predator. However, chinchillas are agile and can run up to 15 mph, so they can often escape predators.
Habitat destruction
Short-tailed chinchillas are affected by human activities such as mining and firewood extraction. Mining operations are a significant threat because they destroy chinchilla habitat; in Chile, gold fields have been discovered whose exploitation would disrupt chinchilla populations. One critical threat is the burning and harvesting of the algarrobilla shrub, a component of their natural habitat. Because chinchillas are so closely adapted to their environment, any long-term environmental change threatens the species' survival. While hunting the species for its pelts, fur traders used dynamite to destroy burrows and force the chinchillas out, killing many in the process. The combined impact of these pressures has led to a 90% decrease in the short-tailed chinchilla population and has caused the species to disappear from three of the four countries where it was once found.
Fur trade
Many chinchillas are hunted for their fur and meat, and the species has been bred for the pet and fur trades. Chinchilla fur is very fine and dense: a single hair follicle can hold 50 hairs, while humans have one hair per follicle. Chinchilla fur is highly luxurious and in demand in the fur industry. Commercial hunting began in 1829 and increased every year by about half a million skins, as demand for fur and skins grew in the United States and Europe: "[t]he continuous and intense harvesting rate [...] was not sustainable and the number of chinchillas hunted declined until the resource was considered economically extinct by 1917."
Once the hunting started, demand for chinchilla skins skyrocketed in the United States and Europe, causing an unsustainable decline in living chinchillas. The supply slowly diminished, with the last wild short-tailed chinchilla seen in 1953, which caused skin prices to rise drastically. Short-tailed chinchillas were especially sought after because of their higher-quality fur and larger size compared to long-tailed chinchillas.
Exotic pet trade
Because short-tailed chinchillas are so rare and their wild colonies were only recently rediscovered, they are absent from the pet trade. Instead, their close relatives, long-tailed chinchillas, are frequently kept as pets and are often mistaken for their short-tailed cousins. Even potential early-generation hybrids between the two species are thought to have been absent from the wildlife trade for a long time, if they were ever present at all.
Conservation
The short-tailed chinchilla population has declined by 90% over the years because of hunting and trapping to supply the fur trade. In the early 20th century, humans hunted chinchillas for their skins in great numbers, killing over 20 million individuals. By the 1960s, both species of chinchilla, C. lanigera and C. chinchilla, were considered extinct in the wild. It was not until 1983 that specimens of short-tailed chinchillas were rediscovered.
Short-tailed chinchillas were hunted most heavily during the early 1900s, when South American fur traders were selling chinchilla pelts to Europeans. To meet the growing demand for chinchilla fur in Europe, Andean fur traders had to hunt in great numbers. As the chinchilla fur trade became increasingly lucrative, people began to quit their jobs as miners and farmers to become hunters.
Many inhumane hunting techniques were used to acquire chinchilla skins, ranging from hunting with dogs to pushing burning thorny shrubs into burrows; others crushed chinchillas with large boulders. Throughout the late 1800s and early 1900s, half a million chinchilla skins were being exported by Chile. However, because of these practices, only about a third of the exported skins could be sold. Buenos Aires exported the majority of the skins, including the pelts coming from Bolivia. At this rate of exploitation, the short-tailed chinchilla became extinct in Peru, Bolivia, and Argentina.
To this day, only three populations are known. The short-tailed chinchilla is regionally extinct except in Chile, although small groups have been rediscovered in Bolivia, where the species is listed as Critically Endangered. In Peru and Argentina, C. chinchilla is still listed as Critically Endangered or Endangered rather than Extinct, and in Chile the species is classified as Endangered. Within Chile, C. chinchilla is found in three regions: in the Tarapacá region it is considered "Extinguished," and in the Antofagasta and Atacama regions it is "Endangered."
Specific measures
In 1929, the first protection law prohibiting the hunting of chinchillas was passed in Chile, but it was not effectively enforced until the establishment in 1983 of the Reserva Nacional Las Chinchillas in Auco, Chile. Because of the impending extinction of short-tailed chinchillas, conservation measures had already been attempted in Chile in the 1890s, but these measures were unregulated. A 1910 treaty between Chile, Bolivia, and Peru brought the first international effort to ban the hunting and commercial harvesting of chinchillas; unfortunately, this effort led to large price increases, which caused a further decline of the remaining populations.
Today, the short-tailed chinchilla is still considered Endangered by the IUCN. Even though commercial hunting has been illegal for the last 100 years, C. chinchilla has not recovered or returned to its former range. The populations that remain are small and isolated, which has caused reproductive isolation and led to inbreeding depression and low genetic diversity; this has lowered the species' genetic fitness and further increased its risk of extinction. However, several individuals from a wild population were transferred to a breeding program in order to increase the genetic diversity of captive populations. Groups such as "Save the Wild Chinchillas" help to raise awareness of the current status of short-tailed chinchillas. To save the species, more research and surveys are needed to locate other populations; if these actions are not taken, short-tailed chinchillas risk extinction within a matter of years.
Captivity
Short-tailed chinchillas are difficult to breed experimentally in captivity, with high rates of sterility. Attempts to crossbreed long-tailed and short-tailed chinchillas in captivity have produced only a few individuals.
References
Chinchilla
Mammals of the Andes
Mammals of Bolivia
Mammals of Chile
Mammals of Peru
EDGE species
Mammals described in 1829
Taxa named by Hinrich Lichtenstein | Short-tailed chinchilla | [
"Biology"
] | 3,869 | [
"EDGE species",
"Biodiversity"
] |
11,009,172 | https://en.wikipedia.org/wiki/Uridine%20diphosphate%20glucuronic%20acid | UDP-glucuronic acid is a sugar used in the creation of polysaccharides and is an intermediate in the biosynthesis of ascorbic acid (except in primates and guinea pigs). It also participates in heme degradation in humans.
It is made from UDP-glucose by UDP-glucose 6-dehydrogenase (EC 1.1.1.22) using NAD+ as a cofactor. It is the source of the glucuronosyl group in glucuronosyltransferase reactions.
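The overall reaction is commonly written as a two-step oxidation of the glucose C6 hydroxymethyl group to a carboxylate: UDP-glucose + 2 NAD+ + H2O → UDP-glucuronate + 2 NADH + 3 H+.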
See also
Glucuronic acid
UDP
References
Glucuronide esters
Coenzymes
Nucleotides | Uridine diphosphate glucuronic acid | [
"Chemistry",
"Biology"
] | 151 | [
"Biotechnology stubs",
"Coenzymes",
"Biochemistry stubs",
"Organic compounds",
"Biochemistry"
] |