| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,023,468 | https://en.wikipedia.org/wiki/Chabazite | Chabazite () is a tectosilicate mineral of the zeolite group, closely related to gmelinite, with the chemical formula . Recognized varieties include Chabazite-Ca, Chabazite-K, Chabazite-Na, and Chabazite-Sr, depending on the prominence of the indicated cation.
Chabazite crystallizes in the triclinic crystal system with typically rhombohedral shaped crystals that are pseudo-cubic. The crystals are typically twinned, and both contact twinning and penetration twinning may be observed. They may be colorless, white, orange, brown, pink, green, or yellow. The hardness ranges from 3 to 5 and the specific gravity from 2.0 to 2.2. The luster is vitreous.
It was named chabasie in 1792 by Bosc d'Antic and later changed to the current spelling.
Chabazite occurs most commonly in voids and amygdules in basaltic rocks.
Chabazite is found in India, Iceland, the Faroe Islands, the Giants Causeway in Northern Ireland, Bohemia, Italy, Germany, along the Bay of Fundy in Nova Scotia, Oregon, Arizona, and New Jersey.
Synthetic chabazite
Many different materials that are isostructural with the chabazite mineral have been synthesized in laboratories. SSZ-13 is a CHA type zeolite with an Si/Al ratio of 14. This is a composition not found in nature.
References
External links
Webmineral
Mineral Galleries
Mindat.org
Tectosilicates
Calcium minerals
Sodium minerals
Potassium minerals
Magnesium minerals
Aluminium minerals
Zeolites
Trigonal minerals
Minerals in space group 166
Minerals described in 1792 | Chabazite | [
"Chemistry"
] | 362 | [
"Hydrate minerals",
"Hydrates"
] |
1,023,479 | https://en.wikipedia.org/wiki/WDDX | WDDX (Web Distributed Data eXchange) is a programming language-, platform- and transport-neutral data interchange mechanism designed to pass data between different environments and different computers.
History
WDDX was created by Simeon Simeonov of Allaire Corporation in 1998, initially for the ColdFusion server environment.
WDDX was open-sourced later that year.
Usage
WDDX is functionally comparable to XML-RPC and WIDL. The specification supports simple data types such as number, string, boolean, etc., and complex aggregates of these in forms such as structures, arrays and recordsets (row/column data, typically coming from database queries).
The data is encoded into XML using an XML 1.0 DTD, producing a platform-independent but relatively bulky representation. The XML-encoded data can then be sent to another computer using HTTP, FTP, or other transmission mechanism. The receiving computer must have WDDX-aware software to translate the encoded data into the receiver's native data representation. WDDX can also be used to serialize data structures to storage (file system or database). Many applications use WDDX to pass complex data to browsers where it can be manipulated with JavaScript, as an alternative to JSON.
Example from php.net:
<wddxPacket version='1.0'>
<header comment='PHP'/>
<data>
<struct>
<var name='pi'>
<number>3.1415926</number>
</var>
<var name='cities'>
<array length='3'>
<string>Austin</string>
<string>Novato</string>
<string>Seattle</string>
</array>
</var>
</struct>
</data>
</wddxPacket>
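The packet format above maps directly onto ordinary data structures. As a rough illustration only (this is not an official WDDX library, and the helper below handles only the types shown in the example), a packet of this shape could be generated in Python as follows:
import xml.etree.ElementTree as ET

def to_wddx(data, comment="PHP"):
    # Serialize a flat dict of numbers, strings and lists of strings into a
    # WDDX-style packet; only the types appearing in the example above are handled.
    packet = ET.Element("wddxPacket", version="1.0")
    ET.SubElement(packet, "header", comment=comment)
    struct = ET.SubElement(ET.SubElement(packet, "data"), "struct")
    for name, value in data.items():
        var = ET.SubElement(struct, "var", name=name)
        if isinstance(value, (int, float)):
            ET.SubElement(var, "number").text = str(value)
        elif isinstance(value, str):
            ET.SubElement(var, "string").text = value
        elif isinstance(value, list):
            array = ET.SubElement(var, "array", length=str(len(value)))
            for item in value:
                ET.SubElement(array, "string").text = str(item)
    return ET.tostring(packet, encoding="unicode")

print(to_wddx({"pi": 3.1415926, "cities": ["Austin", "Novato", "Seattle"]}))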
Adoption
WDDX is mainly used by ColdFusion and, as of February 2022, is still supported by Adobe.
Outside ColdFusion, libraries exist to read or write this format in Ruby, Python, PHP, Java, C++, .NET, ActionScript, Lisp, Haskell, and Perl.
PHP used to offer comprehensive support for WDDX, which could be used as a format to store session information, until version 7.4. It has since been removed from the base language but remains available through PECL. The rationale was the lack of standardization of the format and the fact that newer formats such as JSON had become more mainstream. A vulnerability in the extension was fixed in 2007.
Notes
External links
GCA98 WDDX Presentation
Using WDDX with Flash
XML-based standards
Markup languages
Data serialization formats | WDDX | [
"Technology"
] | 554 | [
"Computer standards",
"XML-based standards"
] |
1,023,548 | https://en.wikipedia.org/wiki/Diopside | Diopside is a monoclinic pyroxene mineral with composition CaMgSi2O6. It forms complete solid solution series with hedenbergite (CaFeSi2O6) and augite, and partial solid solutions with orthopyroxene and pigeonite. It forms variably colored, but typically dull green, crystals in the monoclinic prismatic class. It has two distinct prismatic cleavages at 87° and 93°, typical of the pyroxene series. It has a Mohs hardness of six, a Vickers hardness of 7.7 GPa at a load of 0.98 N, and a specific gravity of 3.25 to 3.55. It is transparent to translucent with indices of refraction of nα=1.663–1.699, nβ=1.671–1.705, and nγ=1.693–1.728. The optic angle is 58° to 63°.
Formation
Diopside is found in ultramafic (kimberlite and peridotite) igneous rocks, and diopside-rich augite is common in mafic rocks, such as olivine basalt and andesite. Diopside is also found in a variety of metamorphic rocks, such as in contact metamorphosed skarns developed from high silica dolomites. It is an important mineral in the Earth's mantle and is common in peridotite xenoliths erupted in kimberlite and alkali basalt.
Mineralogy and occurrence
Diopside is a precursor of chrysotile (white asbestos) by hydrothermal alteration and magmatic differentiation; it can react with hydrous solutions of magnesium and chlorine to yield chrysotile by heating at 600 °C for three days. Some vermiculite deposits, most notably those in Libby, Montana, are contaminated with chrysotile (as well as other forms of asbestos) that formed from diopside.
At relatively high temperatures, there is a miscibility gap between diopside and pigeonite, and at lower temperatures, between diopside and orthopyroxene. The calcium/(calcium+magnesium+iron) ratio in diopside that formed with one of these other two pyroxenes is particularly sensitive to temperature above 900 °C, and compositions of diopside in peridotite xenoliths have been important in reconstructions of temperatures in the Earth's mantle.
Chrome diopside, a chromium-bearing variety, is a common constituent of peridotite xenoliths, and dispersed grains are found near kimberlite pipes; as such, it is a prospecting indicator for diamonds. Occurrences are reported in Canada, South Africa, Russia, Brazil, and a wide variety of other locations. In the US, chromian diopside localities are described in the serpentinite belt in northern California, in kimberlite in the Colorado-Wyoming State Line district, in kimberlite in the Iron Mountain district, Wyoming, in lamprophyre at Cedar Mountain in Wyoming, and in numerous anthills and outcrops of the Tertiary Bishop Conglomerate in the Green River Basin of Wyoming. Much of the chromian diopside from the Green River Basin localities and from several of the State Line kimberlites has been of gem quality.
As a gem
Gemstone-quality diopside is found in two forms: black star diopside and chrome diopside (which includes chromium, giving it a rich green color). At 5.5–6.5 on the Mohs scale, chrome diopside is relatively soft and easily scratched. Due to its deep green color, the gem is sometimes referred to as Siberian emerald, although the two are gemologically unrelated, emerald being a precious stone and diopside a semi-precious stone.
Green diopside crystals included within a white feldspar matrix are also sold as gemstones, usually as beads or cabochons. This stone is often marketed as 'green spot jasper' or 'green spot stone'.
Violane is a manganese-rich variety of diopside, violet to light blue in color.
Etymology and history
Diopside derives its name from the Greek dis, "twice", and òpsè, "face" in reference to the two ways of orienting the vertical prism.
Diopside was discovered and first described about 1800 by the Brazilian naturalist Jose Bonifacio de Andrada e Silva.
Potential uses
Diopside-based ceramics and glass-ceramics have potential applications in various technological areas. A diopside-based glass-ceramic named 'silceram' was produced by scientists from Imperial College, UK during the 1980s from blast furnace slag and other waste products; this glass-ceramic is a potential structural material. Similarly, diopside-based ceramics and glass-ceramics have potential applications in the fields of biomaterials, nuclear waste immobilization, and sealing materials for solid oxide fuel cells.
References
S. Carter, C.B. Ponton, R.D. Rawlings, P.S. Rogers, Microstructure, chemistry, elastic properties and internal-friction of silceram glass-ceramics, Journal of Materials Science 23 (1988) 2622–2630.
T. Nonami, S. Tsutsumi, Study of diopside ceramics for biomaterials, Journal of Materials Science: Materials in Medicine 10 (1999) 475–479.
A. Goel, D.U. Tulyaganov, V.V. Kharton, A.A. Yaremchenko, J.M.F. Ferreira, Electrical behaviour of aluminosilicate glass-ceramic sealants and their interaction with metallic SOFC interconnects, Journal of Power Sources 195 (2010) 522–526.
Hurlbut, Cornelius S.; Klein, Cornelis, 1985, Manual of Mineralogy, 20th ed., Wiley, pp 403–404,
Mindat: Chromian diopside, with locales
Webmineral
Chrome Diopside on gemstone.org
Pyroxene group
Magnesium minerals
Calcium minerals
Inosilicates
Monoclinic minerals
Minerals in space group 15
Luminescent minerals | Diopside | [
"Chemistry"
] | 1,288 | [
"Luminescence",
"Luminescent minerals"
] |
1,023,575 | https://en.wikipedia.org/wiki/Glioblastoma | Glioblastoma, previously known as glioblastoma multiforme (GBM), is the most aggressive and most common type of cancer that originates in the brain, and has a very poor prognosis for survival. Initial signs and symptoms of glioblastoma are nonspecific. They may include headaches, personality changes, nausea, and symptoms similar to those of a stroke. Symptoms often worsen rapidly and may progress to unconsciousness.
The cause of most cases of glioblastoma is not known. Uncommon risk factors include genetic disorders, such as neurofibromatosis and Li–Fraumeni syndrome, and previous radiation therapy. Glioblastomas represent 15% of all brain tumors. They are thought to arise from astrocytes. The diagnosis typically is made by a combination of a CT scan, MRI scan, and tissue biopsy.
There is no known method of preventing the cancer. Treatment usually involves surgery, after which chemotherapy and radiation therapy are used. The medication temozolomide is frequently used as part of chemotherapy. High-dose steroids may be used to help reduce swelling and decrease symptoms. Surgical removal (decompression) of the tumor is linked to increased survival, but only by some months.
Despite maximum treatment, the cancer almost always recurs. The typical duration of survival following diagnosis is 10–13 months, with fewer than 5–10% of people surviving longer than five years. Without treatment, survival is typically three months. It is the most common cancer that begins within the brain and the second-most common brain tumor, after meningioma, which is benign in most cases. About 3 in 100,000 people develop the disease per year. The average age at diagnosis is 64, and the disease occurs more commonly in males than females.
Tumors of the central nervous system are the 10th leading cause of death worldwide, with up to 90% being brain tumors. Glioblastoma multiforme (GBM) is derived from astrocytes and accounts for 49% of all malignant central nervous system tumors, making it the most common form of central nervous system cancer. Despite countless efforts to develop new therapies for GBM over the years, the median survival of GBM patients worldwide is 8 months; standard-of-care treatment with radiation and chemotherapy beginning shortly after diagnosis improves median survival to around 14 months and yields a five-year survival rate of 5–10%. The five-year survival rate for individuals with any form of primary malignant brain tumor is 20%. Even when all detectable traces of the tumor are removed through surgery, most patients with GBM experience recurrence of their cancer.
Signs and symptoms
Common symptoms include seizures, headaches, nausea and vomiting, memory loss, changes to personality, mood or concentration, and localized neurological problems. The kinds of symptoms produced depend more on the location of the tumor than on its pathological properties. The tumor can start producing symptoms quickly, but occasionally is an asymptomatic condition until it reaches an enormous size.
Risk factors
The cause of most cases is unclear. The best known risk factor is exposure to ionizing radiation, and CT scan radiation is an important cause. About 5% develop from certain hereditary syndromes.
Genetics
Uncommon risk factors include genetic disorders such as neurofibromatosis, Li–Fraumeni syndrome, tuberous sclerosis, or Turcot syndrome. Previous radiation therapy is also a risk. For unknown reasons, it occurs more commonly in males.
Environmental
Other associations include exposure to smoking, pesticides, and working in petroleum refining or rubber manufacturing.
Glioblastoma has been associated with the viruses SV40, HHV-6, and cytomegalovirus (CMV). Infection with an oncogenic CMV may even be necessary for the development of glioblastoma.
Other
Research has been done to see if consumption of cured meat is a risk factor. No risk had been confirmed as of 2003. Similarly, exposure to formaldehyde, and residential electromagnetic fields, such as from cell phones and electrical wiring within homes, have been studied as risk factors. As of 2015, they had not been shown to cause GBM.
Pathogenesis
The cellular origin of glioblastoma is unknown. Because of the similarities in immunostaining of glial cells and glioblastoma, gliomas such as glioblastoma have long been assumed to originate from glial-type stem cells found in the subventricular zone. More recent studies suggest that astrocytes, oligodendrocyte progenitor cells, and neural stem cells could all serve as the cell of origin.
GBMs usually form in the cerebral white matter, grow quickly, and can become very large before producing symptoms. Since the function of glial cells in the brain is to support neurons, they have the ability to divide, to enlarge, and to extend cellular projections along neurons and blood vessels. Once cancerous, these cells are predisposed to spread along existing paths in the brain, typically along white-matter tracts, blood vessels and the perivascular space. The tumor may extend into the meninges or ventricular wall, leading to high protein content in the cerebrospinal fluid (CSF) (> 100 mg/dl), as well as an occasional pleocytosis of 10 to 100 cells, mostly lymphocytes. Malignant cells carried in the CSF may spread (rarely) to the spinal cord or cause meningeal gliomatosis. However, metastasis of GBM beyond the central nervous system is extremely unusual. About 50% of GBMs occupy more than one lobe of a hemisphere or are bilateral. Tumors of this type usually arise from the cerebrum and may exhibit the classic infiltration across the corpus callosum, producing a butterfly (bilateral) glioma.
Glioblastoma classification
Brain tumor classification has been traditionally based on histopathology at macroscopic level, measured in hematoxylin-eosin sections. The World Health Organization published the first standard classification in 1979 and has been doing so since. The 2007 WHO Classification of Tumors of the Central Nervous System was the last classification mainly based on microscopy features. The new 2016 WHO Classification of Tumors of the Central Nervous System was a paradigm shift: some of the tumors were defined also by their genetic composition as well as their cell morphology.
In 2021, the fifth edition of the WHO Classification of Tumors of the Central Nervous System was released. This update eliminated the classification of secondary glioblastoma and reclassified those tumors as Astrocytoma, IDH mutant, grade 4. Only tumors that are IDH wild type are now classified as glioblastoma.
Molecular alterations
There are currently three molecular subtypes of glioblastoma that were identified based on gene expression:
Classical: Around 97% of tumors in this subtype carry extra copies of the epidermal growth factor receptor (EGFR) gene, and most have higher than normal expression of EGFR, whereas the gene TP53 (p53), which is often mutated in glioblastoma, is rarely mutated in this subtype. Loss of heterozygosity in chromosome 10 is also frequently seen in the classical subtype alongside chromosome 7 amplification.
The proneural subtype often has high rates of alterations in TP53 (p53), and in PDGFRA the gene encoding a-type platelet-derived growth factor receptor.
The mesenchymal subtype is characterized by high rates of mutations or other alterations in NF1, the gene encoding neurofibromin 1 and fewer alterations in the EGFR gene and less expression of EGFR than other types.
Initial analyses of gene expression had revealed a fourth, neural subtype. However, further analyses revealed that this subtype is not tumor-specific and is potentially the result of contamination by normal cells.
Many other genetic alterations have been described in glioblastoma, and the majority of them are clustered in two pathways, the RB and the PI3K/AKT pathways; 68–78% and 88% of glioblastomas have alterations in these pathways, respectively.
Another important alteration is methylation of MGMT, a "suicide" DNA repair enzyme. Methylation impairs DNA transcription and expression of the MGMT gene. Since the MGMT enzyme can repair only one DNA alkylation due to its suicide repair mechanism, reserve capacity is low and methylation of the MGMT gene promoter greatly affects DNA-repair capacity. MGMT methylation is associated with an improved response to treatment with DNA-damaging chemotherapeutics, such as temozolomide.
Studies using genome-wide profiling have revealed glioblastomas to have a remarkable genetic variety.
At least three distinct paths in the development of glioblastomas have been identified with the aid of molecular investigations.
The first pathway involves the amplification and mutational activation of receptor tyrosine kinase (RTK) genes, leading to the dysregulation of growth factor signaling. Epidermal growth factor (EGF), vascular endothelial growth factor (VEGF), and platelet-derived growth factor (PDGF) are all recognized by transmembrane proteins called RTKs. Additionally, RTKs can function as receptors for hormones, cytokines, and other signaling molecules.
The second method involves activating the intracellular signaling system known as phosphatidylinositol-3-OH kinase (PI3K)/AKT/mTOR, which is crucial for controlling cell survival.
The third pathway is defined by p53 and retinoblastoma (Rb) tumor suppressor pathway inactivation.
Cancer stem cells
Glioblastoma cells with properties similar to progenitor cells (glioblastoma cancer stem cells) have been found in glioblastomas. Their presence, coupled with the glioblastoma's diffuse nature, results in difficulty in removing them completely by surgery, and is therefore believed to be a possible cause of resistance to conventional treatments and of the high recurrence rate. Glioblastoma cancer stem cells share some resemblance with neural progenitor cells, both expressing the surface receptor CD133. CD44 can also be used as a cancer stem cell marker in a subset of glioblastoma tumour cells. Glioblastoma cancer stem cells appear to exhibit enhanced resistance to radiotherapy and chemotherapy mediated, at least in part, by up-regulation of the DNA damage response.
Metabolism
The IDH1 gene encodes the enzyme isocitrate dehydrogenase 1 and, under the current classification, is not mutated in glioblastoma. As such, these tumors behave more aggressively than IDH1-mutated astrocytomas.
Ion channels
Furthermore, GBM exhibits numerous alterations in genes that encode for ion channels, including upregulation of gBK potassium channels and ClC-3 chloride channels. By upregulating these ion channels, glioblastoma tumor cells are hypothesized to facilitate increased ion movement over the cell membrane, thereby increasing H2O movement through osmosis, which aids glioblastoma cells in changing cellular volume very rapidly. This is helpful in their extremely aggressive invasive behavior because quick adaptations in cellular volume can facilitate movement through the sinuous extracellular matrix of the brain.
MicroRNA
As of 2012, RNA interference, usually microRNA, was under investigation in tissue culture, pathology specimens, and preclinical animal models of glioblastoma. Additionally, experimental observations suggest that microRNA-451 is a key regulator of LKB1/AMPK signaling in cultured glioma cells and that miRNA clustering controls epigenetic pathways in the disease.
Tumor vasculature
GBM is characterized by abnormal vessels that present disrupted morphology and functionality. The high permeability and poor perfusion of the vasculature result in a disorganized blood flow within the tumor and can lead to increased hypoxia, which in turn facilitates cancer progression by promoting processes such as immunosuppression.
Diagnosis
When viewed with MRI, glioblastomas often appear as ring-enhancing lesions. The appearance is not specific, however, as other lesions such as abscess, metastasis, tumefactive multiple sclerosis, and other entities may have a similar appearance. Definitive diagnosis of a suspected GBM on CT or MRI requires a stereotactic biopsy or a craniotomy with tumor resection and pathologic confirmation. Because the tumor grade is based upon the most malignant portion of the tumor, biopsy or subtotal tumor resection can result in undergrading of the lesion. Imaging of tumor blood flow using perfusion MRI and measuring tumor metabolite concentration with MR spectroscopy may add diagnostic value to standard MRI in select cases by showing increased relative cerebral blood volume and increased choline peak, respectively, but pathology remains the gold standard for diagnosis and molecular characterization.
Distinguishing glioblastoma from high-grade astrocytoma is important. Glioblastomas occur spontaneously (de novo) and have not progressed from a lower-grade glioma, as high-grade astrocytomas have. Glioblastomas have a worse prognosis and different tumor biology, and may have a different response to therapy, which makes this a critical evaluation for determining patient prognosis and therapy. Astrocytomas carry a mutation in IDH1 or IDH2, whereas this mutation is not present in glioblastoma. Thus, IDH1 and IDH2 mutations are a useful tool to distinguish glioblastomas from astrocytomas, since histopathologically they are similar and the distinction without molecular biomarkers is unreliable. IDH-wildtype glioblastomas usually have lower OLIG2 expression compared with IDH-mutant lower grade astrocytomas. In patients aged over 55 years with a histologically typical glioblastoma, without a pre-existing lower grade glioma, with a non-midline tumor location and with retained nuclear ATRX expression, immunohistochemical negativity for IDH1 R132H suffices for the classification as IDH-wild-type glioblastoma. In all other instances of diffuse gliomas, a lack of IDH1 R132H immunopositivity should be followed by IDH1 and IDH2 DNA sequencing to detect or exclude the presence of non-canonical mutations. IDH-wild-type diffuse astrocytic gliomas without microvascular proliferation or necrosis should be tested for EGFR amplification, TERT promoter mutation and a +7/–10 cytogenetic signature as molecular characteristics of IDH-wild-type glioblastomas.
Prevention
There are no known methods of preventing glioblastoma. As is the case for most gliomas, and unlike some other forms of cancer, the disease arises without previous warning and there are no known ways to prevent it.
Treatment
Treating glioblastoma is difficult due to several complicating factors:
The tumor cells are resistant to conventional therapies.
The brain is susceptible to damage from conventional therapy.
The brain has a limited capacity to repair itself.
Many drugs cannot cross the blood–brain barrier to act on the tumor.
Treatment of primary brain tumors consists of palliative (symptomatic) care and therapies intended to improve survival.
Symptomatic therapy
Supportive treatment focuses on relieving symptoms and improving the patient's neurologic function. The primary supportive agents are anticonvulsants and corticosteroids.
Historically, around 90% of patients with glioblastoma underwent anticonvulsant treatment, although only an estimated 40% of patients required this treatment. Neurosurgeons have recommended that anticonvulsants not be administered prophylactically, and that clinicians wait until a seizure occurs before prescribing this medication. Those receiving phenytoin concurrently with radiation may have serious skin reactions such as erythema multiforme and Stevens–Johnson syndrome.
Corticosteroids, usually dexamethasone, can reduce peritumoral edema (through rearrangement of the blood–brain barrier), diminishing mass effect and lowering intracranial pressure, with a decrease in headache or drowsiness.
Surgery
Surgery is the first stage of treatment of glioblastoma. An average GBM tumor contains 10¹¹ cells, which is on average reduced to 10⁹ cells after surgery (a reduction of 99%). Benefits of surgery include resection for a pathological diagnosis, alleviation of symptoms related to mass effect, and potentially removing disease before secondary resistance to radiotherapy and chemotherapy occurs.
The greater the extent of tumor removal, the better. In retrospective analyses, removal of 98% or more of the tumor has been associated with a significantly longer healthier time than if less than 98% of the tumor is removed. The chances of near-complete initial removal of the tumor may be increased if the surgery is guided by a fluorescent dye known as 5-aminolevulinic acid. GBM cells are widely infiltrative through the brain at diagnosis, and despite a "total resection" of all obvious tumor, most people with GBM later develop recurrent tumors either near the original site or at more distant locations within the brain. Other modalities, typically radiation and chemotherapy, are used after surgery in an effort to suppress and slow recurrent disease through damaging the DNA of rapidly proliferative GBM cells.
Between 60 and 85% of glioblastoma patients report cancer-related cognitive impairment following surgery, which refers to problems with executive functioning, verbal fluency, attention, and speed of processing. These symptoms may be managed with cognitive behavioral therapy, physical exercise, yoga and meditation.
Radiotherapy
Subsequent to surgery, radiotherapy becomes the mainstay of treatment for people with glioblastoma. It is typically performed along with temozolomide. A pivotal clinical trial carried out in the early 1970s showed that among 303 GBM patients randomized to radiation or best medical therapy, those who received radiation had a median survival more than double that of those who did not. Subsequent clinical research has attempted to build on the backbone of surgery followed by radiation. Whole-brain radiotherapy does not improve outcomes compared with the more precise and targeted three-dimensional conformal radiotherapy. A total radiation dose of 60–65 Gy has been found to be optimal for treatment.
GBM tumors are well known to contain zones of tissue exhibiting hypoxia, which are highly resistant to radiotherapy. Various approaches to chemotherapy radiosensitizers have been pursued with limited success. Newer research approaches have included preclinical and clinical investigations into the use of oxygen diffusion-enhancing compounds such as trans sodium crocetinate as radiosensitizers, and a clinical trial of this approach was underway. Boron neutron capture therapy has been tested as an alternative treatment for glioblastoma, but is not in common use.
Chemotherapy
Most studies show no benefit from the addition of chemotherapy. However, a large clinical trial of 575 participants randomized to standard radiation versus radiation plus temozolomide chemotherapy showed that the group receiving temozolomide survived a median of 14.6 months as opposed to 12.1 months for the group receiving radiation alone. This treatment regimen is now standard for most cases of glioblastoma where the person is not enrolled in a clinical trial. Temozolomide seems to work by sensitizing the tumor cells to radiation, and appears more effective for tumors with MGMT promoter methylation. High doses of temozolomide in high-grade gliomas yield low toxicity, but the results are comparable to those of standard doses. Antiangiogenic therapy with medications such as bevacizumab controls symptoms, but does not appear to affect overall survival in those with glioblastoma. A 2018 systematic review found that the overall benefit of anti-angiogenic therapies was unclear. In elderly people with newly diagnosed glioblastoma who are reasonably fit, concurrent and adjuvant chemoradiotherapy gives the best overall survival but is associated with a greater risk of haematological adverse events than radiotherapy alone.
Immunotherapy
Phase 3 clinical trials of immunotherapy treatments for glioblastoma have largely failed.
Other procedures
Alternating electric field therapy is an FDA-approved therapy for newly diagnosed and recurrent glioblastoma. In 2015, initial results from a phase-III randomized clinical trial of alternating electric field therapy plus temozolomide in newly diagnosed glioblastoma reported a three-month improvement in progression-free survival, and a five-month improvement in overall survival compared to temozolomide therapy alone, representing the first large trial in a decade to show a survival improvement in this setting. Despite these results, the efficacy of this approach remains controversial among medical experts. However, increasing understanding of the mechanistic basis through which alternating electric field therapy exerts anti-cancer effects and results from ongoing phase-III clinical trials in extracranial cancers may help facilitate increased clinical acceptance to treat glioblastoma in the future.
Prognosis
The most common length of survival following diagnosis is 10 to 13 months (although recent research points to a median survival of 15 months), with fewer than 1–3% of people surviving longer than five years. In the United States between 2012 and 2016, five-year survival was 6.8%. Without treatment, survival is typically three months. Complete cures are extremely rare, but have been reported.
Increasing age (> 60 years) carries a worse prognostic risk. Death is usually due to widespread tumor infiltration with cerebral edema and increased intracranial pressure.
A good initial Karnofsky performance score (KPS) and MGMT methylation are associated with longer survival. A DNA test can be conducted on glioblastomas to determine whether or not the promoter of the MGMT gene is methylated. Patients with a methylated MGMT promoter have longer survival than those with an unmethylated MGMT promoter, due in part to increased sensitivity to temozolomide.
Long-term benefits have also been associated with those patients who receive surgery, radiotherapy, and temozolomide chemotherapy. However, much remains unknown about why some patients survive longer with glioblastoma. Age under 50 is linked to longer survival in GBM, as are 98%+ resection, use of temozolomide chemotherapy, and better KPS scores. A recent study confirms that younger age is associated with a much better prognosis, with a small fraction of patients under 40 years of age achieving a population-based cure. Cure is thought to occur when a person's risk of death returns to that of the normal population, and in GBM, this is thought to occur after 10 years.
UCLA Neuro-oncology publishes real-time survival data for patients with this diagnosis.
According to a 2003 study, GBM prognosis can be divided into three subgroups dependent on KPS, the age of the patient, and treatment.
Epidemiology
About three per 100,000 people develop the disease a year, although regional frequency may be much higher. The frequency in England doubled between 1995 and 2015.
It is the second-most common central nervous system tumor after meningioma. It occurs more commonly in males than females. Although the median age at diagnosis is 64, in 2014, the broad category of brain cancers was second only to leukemia in people in the United States under 20 years of age.
History
The term glioblastoma multiforme was introduced in 1926 by Percival Bailey and Harvey Cushing, based on the idea that the tumor originates from primitive precursors of glial cells (glioblasts), and the highly variable appearance due to the presence of necrosis, hemorrhage, and cysts (multiform).
Research
Gene therapy
Gene therapy has been explored as a method to treat glioblastoma, and while animal models and early-phase clinical trials have been successful, as of 2017, all gene-therapy drugs that had been tested in phase-III clinical trials for glioblastoma had failed. Scientists have developed the core–shell nanostructured LPLNP-PPT system (long persistent luminescence nanoparticles; PPT refers to polyetherimide, PEG and the trans-activator of transcription) for effective gene delivery and tracking, with positive results. The delivered gene encodes TRAIL, the human tumor necrosis factor-related apoptosis-inducing ligand, to induce apoptosis of cancer cells, more specifically glioblastoma cells. Although this study was still in clinical trials in 2017, it has shown diagnostic and therapeutic functionalities, and has generated great interest for clinical applications in stem-cell-based therapy.
Other gene therapy approaches have also been explored in the context of glioblastoma, including suicide gene therapy. Suicide gene therapy is a two-step approach in which a foreign enzyme gene is delivered to the cancer cells and then activated with a prodrug, producing toxic products in the cancer cells that induce cell death. This approach has had success in animal models and small clinical studies but has not shown a survival benefit in larger clinical studies. Using newer, more efficient delivery vectors and suicide gene–prodrug systems could improve the clinical benefit of these types of therapies.
Oncolytic virotherapy
Oncolytic virotherapy is an emerging novel treatment that is under investigation at both preclinical and clinical stages. Several viruses, including herpes simplex virus, adenovirus, poliovirus, and reovirus, are currently being tested in phase I and II clinical trials for glioblastoma therapy and have been shown to improve overall survival.
Intranasal drug delivery
Direct nose-to-brain drug delivery is being explored as a means to achieve higher, and hopefully more effective, drug concentrations in the brain. A clinical phase-I/II study with glioblastoma patients in Brazil investigated the natural compound perillyl alcohol for intranasal delivery as an aerosol. The results were encouraging and, as of 2016, a similar trial has been initiated in the United States.
See also
Adegramotide
Asunercept
Glioblastoma Foundation
Lomustine
List of people with brain tumors
References
External links
Information about glioblastoma from the American Brain Tumor Association
Aging-associated diseases
Brain tumor
Cancer
Oncology
Wikipedia medicine articles ready to translate | Glioblastoma | [
"Biology"
] | 5,572 | [
"Senescence",
"Aging-associated diseases"
] |
1,023,624 | https://en.wikipedia.org/wiki/Volumetric%20flask | A volumetric flask (measuring flask or graduated flask) is a piece of laboratory apparatus, a type of laboratory flask, calibrated to contain a precise volume at a certain temperature. Volumetric flasks are used for precise dilutions and preparation of standard solutions. These flasks are usually pear-shaped, with a flat bottom, and made of glass or plastic. The flask's mouth is either furnished with a plastic snap/screw cap or fitted with a joint to accommodate a PTFE or glass stopper. The neck of volumetric flasks is elongated and narrow with an etched ring graduation marking. The marking indicates the volume of liquid contained when filled up to that point. The marking is typically calibrated "to contain" (marked "TC" or "IN") at 20 °C and indicated correspondingly on a label. The flask's label also indicates the nominal volume, tolerance, precision class, relevant manufacturing standard and the manufacturer's logo. Volumetric flasks are of various sizes, containing from a fraction of a milliliter to hundreds of liters of liquid.
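A typical dilution made with a volumetric flask follows the relation C1V1 = C2V2. The short sketch below works through a hypothetical example (the concentrations and flask size are illustrative, not taken from any standard):
def stock_volume_needed(c_stock, c_target, v_flask):
    # Volume of stock solution (same units as v_flask) to transfer into the flask
    # so that c_stock * v_stock = c_target * v_flask after filling to the mark.
    return c_target * v_flask / c_stock

# Hypothetical example: prepare 250.0 mL of a 0.100 M solution from a 1.00 M stock.
print(stock_volume_needed(1.00, 0.100, 250.0))  # -> 25.0 mL of stock, then dilute to the mark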
Classes
Calibration and toleration standards for volumetric flasks are defined in the following standard specifications and practices: ASTM E288, E542, E694, ISO 1042, and GOST 1770-74. According to these specifications, volumetric flasks come in two different classes. The higher standard flasks (Class A, Class 1, USP or equivalent depending on the country) are made with a more accurately placed graduation mark, and have a unique serial number for traceability. Where this is not required, a lower standard (Class B or equivalent) is used for qualitative or educational work.
Modifications
Volumetric flasks are generally colourless but may be amber-coloured for the handling of light-sensitive compounds such as silver nitrate or vitamin A.
A modification of the volumetric flask exists for dealing with large quantities of solids that are to be transferred into a volumetric vessel for dissolution. Such a flask has a wide mouth and is known as a Kohlrausch volumetric flask. This kind of volumetric flask is commonly used in analysis of the sugar content in sugar beets.
While conventional volumetric flasks have a single mark, industrial volumetric tests in analytical chemistry and food chemistry may employ specialized volumetric flasks with multiple marks to combine several accurately measured volumes.
A highly specialized kind of the volumetric flask is the Le Chatelier flask for use with the volumetric procedure in specific gravity determination.
See also
Graduated cylinder
Babcock bottle
Burette
References
Further reading
Laboratory glassware
Volumetric instruments | Volumetric flask | [
"Technology",
"Engineering"
] | 562 | [
"Volumetric instruments",
"Measuring instruments"
] |
1,023,906 | https://en.wikipedia.org/wiki/Phytoalexin | Phytoalexins are antimicrobial substances, some of which are antioxidative as well. They are defined not by their having any particular chemical structure or character, but by the fact that they are defensively synthesized de novo by plants that produce the compounds rapidly at sites of pathogen infection. In general phytoalexins are broad spectrum inhibitors; they are chemically diverse, and different chemical classes of compounds are characteristic of particular plant taxa. Phytoalexins tend to fall into several chemical classes, including terpenoids, glycosteroids, and alkaloids; however, the term applies to any phytochemicals that are induced by microbial infection.
Function
Phytoalexins are produced in plants to act as toxins to the attacking organism. They may puncture the cell wall, delay maturation, disrupt metabolism or prevent reproduction of the pathogen in question. Their importance in plant defense is indicated by an increase in susceptibility of plant tissue to infection when phytoalexin biosynthesis is inhibited. Mutants incapable of phytoalexin production exhibit more extensive pathogen colonization as compared to wild types. As such, host-specific pathogens capable of degrading phytoalexins are more virulent than those unable to do so.
When a plant cell recognizes particles from damaged cells or particles from the pathogen, the plant launches a two-pronged resistance: a general short-term response and a delayed long-term specific response.
As part of the induced resistance, the short-term response, the plant deploys reactive oxygen species such as superoxide and hydrogen peroxide to kill invading cells. In pathogen interactions, the common short-term response is the hypersensitive response, in which cells surrounding the site of infection are signaled to undergo apoptosis, or programmed cell death, in order to prevent the spread of the pathogen to the rest of the plant.
Long-term resistance, or systemic acquired resistance (SAR), involves communication of the damaged tissue with the rest of the plant using plant hormones such as jasmonic acid, ethylene, abscisic acid, or salicylic acid. The reception of the signal leads to global changes within the plant, which induce expression of genes that protect from further pathogen intrusion, including enzymes involved in the production of phytoalexins. Often, if jasmonates or ethylene (both gaseous hormones) are released from the wounded tissue, neighboring plants also manufacture phytoalexins in response. For herbivores, common vectors for plant diseases, these and other wound response aromatics seem to act as a warning that the plant is no longer edible. Also, in accordance with the old adage, "an enemy of my enemy is my friend", the aromatics may alert natural enemies of the plant invaders to the presence thereof.
Recent research
Allixin (3-hydroxy-5-methoxy-6-methyl-2-pentyl-4H-pyran-4-one), a non-sulfur-containing compound having a γ-pyrone skeletal structure, was the first compound isolated from garlic as a phytoalexin, a product induced in plants by continuous stress. This compound has been shown to have unique biological properties, such as anti-oxidative effects, anti-microbial effects, anti-tumor promoting effects, inhibition of aflatoxin B2 DNA binding, and neurotrophic effects. Allixin showed an anti-tumor promoting effect in vivo, inhibiting skin tumor formation by TPA in DMBA-initiated mice. Hence, allixin and/or its analogs may be expected to be useful compounds for cancer prevention or as chemotherapy agents for other diseases.
Role of natural phenols in the plant defense against fungal pathogens
Polyphenols, especially isoflavonoids and related substances, play a role in the plant defense against fungal and other microbial pathogens.
In Vitis vinifera grape, trans-resveratrol is a phytoalexin produced against the growth of fungal pathogens such as Botrytis cinerea and delta-viniferin is another grapevine phytoalexin produced following fungal infection by Plasmopara viticola. Pinosylvin is a pre-infectious stilbenoid toxin (i.e. synthesized prior to infection), contrary to phytoalexins which are synthesized during infection. It is present in the heartwood of Pinaceae. It is a fungitoxin protecting the wood from fungal infection.
Sakuranetin is a flavanone, a type of flavonoid. It can be found in Polymnia fruticosa and rice, where it acts as a phytoalexin against spore germination of Pyricularia oryzae. In Sorghum, the SbF3'H2 gene, encoding a flavonoid 3'-hydroxylase, seems to be expressed in pathogen-specific 3-deoxyanthocyanidin phytoalexin synthesis, for example in Sorghum-Colletotrichum interactions.
6-Methoxymellein is a dihydroisocoumarin and a phytoalexin induced in carrot slices by UV-C, that allows resistance to Botrytis cinerea and other microorganisms.
Danielone is a phytoalexin found in the papaya fruit. This compound showed high antifungal activity against Colletotrichum gloesporioides, a pathogenic fungus of papaya.
Stilbenes are produced in Eucalyptus sideroxylon in response to pathogen attack. Such compounds may be implicated in the hypersensitive response of plants. High levels of polyphenols in some woods can explain their natural preservation against rot.
Avenanthramides are phytoalexins produced by Avena sativa in its response to Puccinia coronata var. avenae f. sp. avenae, the oat crown rust. (Avenanthramides were formerly called avenalumins.)
See also
Alexin (humoral immunity)
Allicin
Garlic
Plant defense against herbivory
Pterostilbene
Salvestrol
References
Further reading
External links
Signals Regulating Multiple Responses to Wounding and Herbivores Guy L. de Bruxelles and Michael R Roberts
The Myriad Plant Responses to Herbivores Linda L. Walling
The Chemical Defenses of Higher Plants Gerald A. Rosenthal
Induced Systemic Resistance (ISR) Against Pathogens in the Context of Induced Plant Defences Martin Heil
Notes from the Underground Donald R. Strong and Donald A. Phillips
Relationships Among Plants, Insect Herbivores, Pathogens, and Parasitoids Expressed by Secondary Metabolites Loretta L. Mannix | Phytoalexin | [
"Chemistry"
] | 1,422 | [
"Phytoalexins",
"Chemical ecology"
] |
1,023,920 | https://en.wikipedia.org/wiki/NTSC-J | NTSC-J or "System J" is the informal designation for the analogue television standard used in Japan. The system is based on the US NTSC (NTSC-M) standard with minor differences. While NTSC-M is an official CCIR and FCC standard, NTSC-J and "System J" are colloquial designations.
The system was introduced by NHK and NTV, with regular color broadcasts starting on September 10, 1960.
NTSC-J was replaced by digital broadcasts in 44 of the country's 47 prefectures on 24 July 2011. Analogue broadcasting ended on 31 March 2012 in the three prefectures devastated by the 2011 Tōhoku earthquake and tsunami (Iwate, Miyagi, Fukushima) and the subsequent Fukushima Daiichi nuclear disaster.
The term NTSC-J is also incorrectly and informally used to distinguish regions for console video games played on televisions using this standard (see Marketing definition below).
Technical definition
Japan implemented the NTSC standard with slight differences. The black and blanking levels of the NTSC-J signal are identical to each other (both at 0 IRE, similar to the PAL video standard), while in American NTSC the black level is slightly higher (7.5 IRE) than blanking level - because of the way this appears in the waveform, the higher black level is also called pedestal. This small difference doesn't cause any incompatibility problems, but needs to be compensated by a slight change of the TV brightness setting in order to achieve proper images.
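As a minimal sketch of the pedestal difference described above (illustrative only; sync, chroma and gamma are ignored, and the function name is an assumption), the same normalized luma value maps to slightly different IRE levels under the two systems, which is why a small brightness adjustment compensates for it:
def luma_to_ire(y, pedestal_ire=7.5):
    # Map normalized luma y in [0, 1] to an IRE level.
    # NTSC-M places black at the 7.5 IRE pedestal; NTSC-J puts black at 0 IRE (blanking).
    return pedestal_ire + (100.0 - pedestal_ire) * y

print(luma_to_ire(0.0, pedestal_ire=7.5))  # 7.5 IRE: NTSC-M black level
print(luma_to_ire(0.0, pedestal_ire=0.0))  # 0.0 IRE: NTSC-J black level (same as blanking)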
YIQ color encoding in NTSC-J uses slightly different equations and ranges from regular NTSC: I has a range of 0 to ±334 (±309 on NTSC-M), and Q has a range of 0 to ±293 (±271 on NTSC-M).
The YCbCr conversion equations for NTSC-J likewise differ slightly from those used with NTSC-M.
NTSC-J also uses a white reference (color temperature) of 9300K instead of the usual NTSC-US standard of 6500K.
The over-the-air RF frequencies used in Japan do not match those of the US NTSC standard. On VHF, the frequency spacing for each channel is 6 MHz, as in North America, South America, the Caribbean, South Korea, Taiwan, Burma (Myanmar) and the Philippines, except between channels 7 and 8 (which overlap). Channels 1 through 3 are reallocated for the expansion of the Japanese FM band. On UHF, the frequency spacing for each channel in Japan is the same, but the channel numbers are 1 lower than in the other areas mentioned - for example, channel 13 in Japan is on the same frequency as channel 14 elsewhere. For more information see Television channel frequencies. Channels 13-62 are used for analog and digital TV broadcasting.
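The one-channel UHF offset described above amounts to a simple shift in the channel-to-frequency mapping. The sketch below gives the lower band edge of the 6 MHz UHF channels only (VHF channel plans and exact carrier offsets are more involved and are not modeled):
def uhf_lower_edge_mhz(channel, japan=False):
    # Lower edge of a 6 MHz analog UHF TV channel.
    # North America starts UHF at channel 14 = 470 MHz; Japan starts at channel 13 = 470 MHz.
    first_channel = 13 if japan else 14
    return 470 + 6 * (channel - first_channel)

print(uhf_lower_edge_mhz(14))              # 470 MHz (North American channel 14)
print(uhf_lower_edge_mhz(13, japan=True))  # 470 MHz (the same frequency is Japanese channel 13)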
The encoding of the stereo subcarrier also differs between NTSC-M/MTS and Japanese EIAJ MTS broadcasts.
Marketing definition
The term NTSC-J was informally used to distinguish regions for console video games played on televisions using this standard. NTSC-J is used as the name of the video gaming region of Japan (hence the "J"), South East Asia (some countries only), Taiwan, Hong Kong, Macau, the Philippines and South Korea (now NTSC-K; formerly grouped with South East Asia alongside Hong Kong, Taiwan, Japan, etc.).
Most games designated as part of this region will not run on hardware designated as part of the NTSC-U, PAL (or PAL-E, "E" stands for Europe) or NTSC-C (for China) mostly due to the regional differences of the PAL (SECAM was also used in the early 1990s) and NTSC standards. Many older video game systems do not allow games from different regions to be played (accomplished by various forms of regional lockout); however more modern consoles either leave protection to the discretion of publishers, such as Microsoft's Xbox 360, or discontinue its use entirely, like Sony's PlayStation 3 (with a few exceptions).
China received its own designation due to fears of an influx of illegal copies flooding out of China, which is notorious for its rampant copyright infringements. There is also concern of copyright protection through regional lockout built into the video game systems and games themselves, as the same product can be edited by different publishers from one continent to another.
See also
Television in Japan
Broadcast television systems
ATSC
BTSC
EIAJ MTS
NTSC
Clear-Vision
PAL
SECAM
Related topics
RCA
Oldest television station
NTSC-C
NTSC-US
References
Television technology
Japanese inventions | NTSC-J | [
"Technology"
] | 950 | [
"Information and communications technology",
"Television technology"
] |
1,023,992 | https://en.wikipedia.org/wiki/Autokinetic%20effect | The autokinetic effect (also referred to as autokinesis and the autokinetic illusion) is a phenomenon of visual perception in which a stationary, small point of light in an otherwise dark or featureless environment appears to move. It was first recorded in 1799 by Alexander von Humboldt who observed illusory movement of a star in a dark sky, although he believed the movement was real. It is presumed to occur because motion perception is always relative to some reference point, and in darkness or in a featureless environment there is no reference point, so the position of the single point is undefined. The direction of the movements does not appear to be correlated with involuntary eye movements, but may be determined by errors between eye position and that specified by efference copy of the movement signals sent to the extraocular muscles. Richard Gregory suggested that, with lack of peripheral information, eye movements which correct movements due to muscle fatigue are wrongly interpreted as movement of the perceived light.
The amplitude of the movements is also undefined. Individual observers set their own frames of reference to judge amplitude (and possibly direction). Because the phenomenon is labile, it has been used to show the effects of social influence or suggestion on judgements. For example, if an observer who would otherwise say the light is moving one foot overhears another observer say the light is moving one yard, then the first observer will report that the light moved one yard. Discovery of the influence of suggestion on the autokinetic effect is often attributed to Sherif (1935), but it was recorded by Adams (1912), if not others.
Alexander von Humboldt observed the phenomenon in 1799 while looking at stars with the naked eye, but thought it was a real movement of the stars. Thus, he named them "Sternschwanken", meaning "swinging stars". It was not until 1857 that G. Schweitzer showed that it was a subjective phenomenon: several observers all simultaneously viewing the same star reported different directions of the movement.
Many sightings of UFOs have been attributed to the autokinetic effect when looking at stars or planets.
The US Navy started studying autokinesis in 1945 in an attempt to explain vertigo experiences reported by pilots, but this "kinetic illusion" is now categorized as a vestibular-induced illusion: see vestibular system.
In literature
An evocative passage appears in H. G. Wells' novel The War of the Worlds. Although Wells ascribes the apparent "swimming" of the planet to telescope vibration and eye fatigue, it is likely that the autokinetic effect is also being described.
In aviation
The effect is well known as an illusion affecting pilots who fly at night. It is particularly dangerous for pilots flying in formation or rejoining a refueling tanker at night. Steps that can be taken to prevent or overcome the phenomenon include:
Shifting your gaze frequently to avoid prolonged fixation on light sources.
Attempting to view a target with a reference to stationary structures or landmarks.
Making eye, head, and body movements to eliminate the illusion.
Monitoring the flight instruments to prevent or resolve any perceptual conflict.
In combat
In his book documenting the opening stages of the second Gulf War from his position embedded with the 1st Marine Reconnaissance Battalion, Evan Wright documents an incident during which, at night in the Iraqi desert, the Marines observed the lights of a town approximately 40 kilometers away. These lights appeared to be moving and were suspected of belonging to a large combat force moving out to attack the marines. An airstrike was called in on the estimated position of the lights—estimated to be around 15 kilometers away—which resulted in no enemy assets being destroyed. It was later suggested by Major Shoup of the battalion that this misidentification was a result of autokinesis. In the HBO mini-series based on the book, this information was imparted to the viewer by the character of Sergeant Brad Colbert, who in both versions correctly deduced that the lights belonged to a town.
Night fighter and night bomber crews during the Second World War reported encounters with mysterious aerial phenomena, nicknamed foo fighters, which may have been caused by autokinesis or a similar effect.
Autostasis
The opposite of autokinesis is autostasis, in which a moving bright light in a dark sky appears stationary.
See also
Spatial disorientation
Ganzfeld effect
References
Bibliography
Adams, H. F. (1912). Autokinetic sensations. Psychological Monographs, 14, 1-45.
Schweitzer, G. (1857). Über das Sternschwanken. Bulletin de la Société impériale des naturalistes de Moscou. 30: 440–457; 31: 477–500. Source: Skeptic, Volume 17. No. 2 2012, pages 38–43.
Sherif, M. (1935). A study of some social factors in perception. Archives of Psychology, 27(187) .
U.S. Air Force (2000). Flying Operations, Instrument Flight Procedures. Air Force Manual 11-217. Volume 1, 29 December 2000.
Fundamentals of Aerospace Medicine, second edition, by Roy L. DeHart. Port City Press, 1996.
Generation Kill by Evan Wright. (2005) Chapter 17, Page 236.
Gregory, Richard L. and Oliver L. Zangwill. 1963. "The Origin of the Autokinetic Effect." Quarterly Journal of Experimental Psychology, 15, 255–261.
Vision
Illusions
Optical illusions | Autokinetic effect | [
"Physics"
] | 1,118 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
1,024,033 | https://en.wikipedia.org/wiki/Potassium%20perchlorate | Potassium perchlorate is the inorganic salt with the chemical formula KClO4. Like other perchlorates, this salt is a strong oxidizer when the solid is heated at high temperature although it usually reacts very slowly in solution with reducing agents or organic substances. This colorless crystalline solid is a common oxidizer used in fireworks, ammunition percussion caps, explosive primers, and is used variously in propellants, flash compositions, stars, and sparklers. It has been used as a solid rocket propellant, although in that application it has mostly been replaced by the more performant ammonium perchlorate.
KClO4 has a relatively low solubility in water (1.5 g in 100 mL of water at 25 °C).
Production
Potassium perchlorate is prepared industrially by treating an aqueous solution of sodium perchlorate with potassium chloride. This single precipitation reaction exploits the low solubility of KClO4, which is about 1/100 as much as the solubility of NaClO4 (209.6 g/100 mL at 25 °C).
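A back-of-envelope estimate shows why this precipitation is effective. The sketch below uses the solubility quoted above and a hypothetical batch size (common-ion and temperature effects are ignored, so it is a first-order estimate only):
M_KCLO4 = 138.55        # g/mol, approximate molar mass of KClO4
SOLUBILITY_KCLO4 = 1.5  # g per 100 mL of water at 25 °C (from the text above)

def precipitated_fraction(moles_kclo4_formed, water_ml):
    # Fraction of the KClO4 formed that drops out of solution as solid.
    dissolved_g = SOLUBILITY_KCLO4 * water_ml / 100.0
    formed_g = moles_kclo4_formed * M_KCLO4
    return max(0.0, (formed_g - dissolved_g) / formed_g)

# Hypothetical batch: 1.0 mol of KClO4 formed in 1 L of solution.
print(precipitated_fraction(1.0, 1000.0))  # ≈ 0.89, i.e. most of the product precipitates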
It can also be produced by bubbling chlorine gas through a solution of potassium chlorate and potassium hydroxide, and by the reaction of perchloric acid with potassium hydroxide; however, this is not used widely due to the dangers of perchloric acid.
Another preparation involves the electrolysis of a potassium chlorate solution, causing KClO4 to form and precipitate at the anode. This procedure is complicated by the low solubility of both potassium chlorate and potassium perchlorate, the latter of which may precipitate onto the electrodes and impede the current.
Oxidizing properties
KClO4 is an oxidizer in the sense that it exothermically "transfers oxygen" to combustible materials, greatly increasing their rate of combustion relative to that in air. Thus, it reacts with glucose to give carbon dioxide, water molecules and potassium chloride:
3 KClO4 + C6H12O6 → 6 CO2 + 6 H2O + 3 KCl
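A quick check of the mass balance implied by this equation, using approximate molar masses (a sketch; values are rounded), gives the oxidizer-to-fuel ratio of the mixture:
M_KCLO4 = 39.10 + 35.45 + 4 * 16.00              # ≈ 138.55 g/mol
M_GLUCOSE = 6 * 12.011 + 12 * 1.008 + 6 * 16.00  # ≈ 180.16 g/mol

# 3 KClO4 + C6H12O6 -> 6 CO2 + 6 H2O + 3 KCl
oxidizer_to_fuel = 3 * M_KCLO4 / M_GLUCOSE
print(round(oxidizer_to_fuel, 2))  # ≈ 2.31 g of KClO4 per gram of glucose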
The conversion of solid glucose into hot gaseous products is the basis of the explosive force of this and other such mixtures. With sugar, KClO4 yields a low explosive, provided the necessary confinement; otherwise such mixtures simply deflagrate with an intense purple flame characteristic of potassium. Flash compositions used in firecrackers usually consist of a mixture of aluminium powder and potassium perchlorate. This mixture, sometimes called flash powder, is also used in ground and air fireworks.
As an oxidizer, potassium perchlorate can be used safely in the presence of sulfur, whereas potassium chlorate cannot. The greater reactivity of chlorate is typical – perchlorates are kinetically poorer oxidants. Chlorate produces chloric acid (HClO3), which is highly unstable and can lead to premature ignition of the composition. Correspondingly, perchloric acid (HClO4) is quite stable.
In one commercial use, potassium perchlorate is mixed 50/50 with potassium nitrate to make Pyrodex, a black-powder substitute. When not compressed within a muzzle-loading firearm or in a cartridge, the mixture burns at a sufficiently slow rate to keep it from being categorized, like black powder, as a "low explosive", allowing it to be classed instead as a "flammable" material.
Debated medical use
Potassium perchlorate can be used as an antithyroid agent to treat hyperthyroidism, usually in combination with one other medication. This application exploits the similar ionic radius and hydrophilicity of perchlorate and iodide.
The administration of known goitrogenic substances can also be used to reduce the biological uptake of iodine (whether the nutritional non-radioactive iodine-127 or radioactive iodine, most commonly iodine-131 with a half-life of 8.02 days), as the body cannot discriminate between different iodine isotopes. Perchlorate ions, a common water contaminant in the USA due to the aerospace industry, have been shown to reduce iodine uptake and are thus classified as goitrogenic. The perchlorate ion is a competitive inhibitor of the process by which iodide is actively accumulated into the thyroid follicular cells. Studies involving healthy adult volunteers determined that at levels above 7 micrograms per kilogram per day (μg/(kg·d)), perchlorate begins to temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition"; perchlorate is thus a known goitrogen). The reduction of the iodide pool by perchlorate has a dual effect – reduction of excess hormone synthesis and hyperthyroidism on the one hand, and reduction of thyroid-inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single-dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland.
Treatment of thyrotoxicosis (including Graves' disease) with 600-2,000 mg potassium perchlorate (430-1,400 mg perchlorate) daily for periods of several months, or longer, was once a common practice, particularly in Europe, and perchlorate use at lower doses to treat thyroid problems continues to this day. Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/d was discovered not to control thyrotoxicosis in all subjects.
Current regimens for treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days.
Prophylaxis with perchlorate-containing water at concentrations of 17 ppm, corresponding to an intake of 0.5 mg/(kg·d) for a person of 70 kg consuming 2 litres of water per day, was found to reduce baseline radioiodine uptake by 67%. This is equivalent to ingesting a total of just 35 mg of perchlorate ions per day. In another related study, in which subjects drank 1 litre of perchlorate-containing water per day at a concentration of 10 ppm (i.e. 10 mg of perchlorate ions ingested daily), an average 38% reduction in iodine uptake was observed.
However, the average perchlorate absorption in perchlorate plant workers subjected to the highest exposure has been estimated at approximately 0.5 mg/(kg·d); as in the above paragraph, a 67% reduction of iodine uptake would therefore be expected. Studies of chronically exposed workers have nonetheless thus far failed to detect any abnormalities of thyroid function, including iodine uptake. This may well be attributable to sufficient daily exposure to, or intake of, stable iodine-127 among these workers, and to the short 8-hour biological half-life of perchlorate in the body.
To completely block the uptake of iodine-131 (half-life = 8.02 days), the purposeful addition of perchlorate ions to a public water supply at dosages of 0.5 mg/(kg·d), or a water concentration of 17 ppm, would therefore be grossly inadequate for truly reducing radioiodine uptake. Perchlorate ion concentrations in a regional water supply would need to be much higher: at least 7.15 mg per kg of body weight per day, or a water concentration of 250 ppm (assuming people drink 2 litres of water per day), to be truly beneficial to the population in preventing bioaccumulation when exposed to iodine-131 contamination, independent of the availability of iodate or iodide compounds.
The distribution of perchlorate tablets, or the addition of perchlorate to the water supply, would need to continue for 80–90 days (roughly 10 half-lives of 8.02 days) after the release of iodine-131. After this time, the radioactive iodine-131 would have decayed to less than 1/1000 of its initial activity, at which point the danger from the biological uptake of iodine-131 is essentially over.
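The dose figures quoted above follow from simple arithmetic relating water concentration, daily water intake and body mass. The short sketch below is illustrative only; the 2 litres per day and 70 kg values are the assumptions already stated in the text:

# Convert a perchlorate water concentration (ppm, roughly mg/L) into a daily dose.
def dose_mg_per_kg_day(concentration_ppm, litres_per_day=2.0, body_mass_kg=70.0):
    return concentration_ppm * litres_per_day / body_mass_kg

print(dose_mg_per_kg_day(17))    # ~0.49 mg/(kg*d), the prophylaxis level discussed above
print(dose_mg_per_kg_day(250))   # ~7.14 mg/(kg*d), the level argued to be needed

# Iodine-131 remaining after 80-90 days (about 10 half-lives of 8.02 days)
half_life_days = 8.02
for days in (80, 90):
    fraction = 0.5 ** (days / half_life_days)
    print(f"after {days} d: {fraction:.5f} of initial activity")  # roughly 1/1000 or less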
Limitations and criticisms
Perchlorate administration could therefore represent a possible alternative to the distribution of iodide tablets in the case of a large-scale nuclear accident releasing substantial quantities of iodine-131 into the atmosphere. However, the advantages are not always clear and would depend on the extent of the hypothetical accident. As with the intake of stable iodide to rapidly saturate the thyroid gland before it accumulates radioactive iodine-131, a careful cost-benefit analysis must first be carried out by the nuclear safety authorities. Indeed, blocking the thyroid activity of a whole population for three months can itself have negative consequences for human health, especially for young children.
The decision to administer perchlorate or stable iodine therefore cannot be left to individual initiative; in the case of a major nuclear accident it falls under the authority of the government.
Adding perchlorate or iodide directly to the public drinking water supply is also probably as restrictive as distributing tablets.
See also
Chlorate
Iodide
References
Further reading
External links
WebBook page for KClO4
Potassium compounds
Perchlorates
Pyrotechnic oxidizers | Potassium perchlorate | [
"Chemistry"
] | 1,975 | [
"Perchlorates",
"Salts"
] |
1,024,043 | https://en.wikipedia.org/wiki/Paul%20Poberezny | Paul Howard Poberezny (September 14, 1921 – August 22, 2013) was an American aviator, entrepreneur, and aircraft designer. He founded the Experimental Aircraft Association (EAA) in 1953, and spent the greater part of his life promoting homebuilt aircraft.
Poberezny is widely considered the first person to have popularized the tradition of aircraft homebuilding in the United States. Through his work founding EAA and the organization's annual convention in Oshkosh, Wisconsin, he had the reputation of helping inspire millions of people to get involved in grassroots aviation. Many credit his legacy with the growth and sustainment of the US general aviation industry in the later part of the 20th century and into the early 21st. For the last two decades of his tenure as chairman of the EAA from 1989–2009, he worked closely with his son, aerobatic pilot and EAA president Tom Poberezny, to expand the organization and create several new programs within it, including an aviation education program for youth and the EAA Museum, among other initiatives.
In addition to his longtime experience as a military aviator (earning all seven types of pilot wings offered by the armed services), Poberezny was also an instructor, air show, air race and test pilot who frequently test flew his own homebuilt designs as well as various aircraft built by the EAA, such as the EAA Biplane. He flew for more than 70 years of his life in over 500 different types of aircraft, and was inducted into the National Aviation Hall of Fame in 1999. He also received the Wright Brothers Memorial Trophy in 2002 and was ranked fourth on Flying's list of the 51 Heroes of Aviation, the highest-ranked living person on the list at the time of its release. Poberezny died of cancer in 2013, at the age of 91.
Early life
Paul Poberezny was the oldest of three children born to Peter Poberezny, a Ukrainian migrant born and raised in Terebovlia, and Jettie Dowdy, who hailed from the southern United States. Born in Leavenworth County, Kansas, Paul grew up poor in a tar paper shack in Milwaukee, Wisconsin and never experienced indoor plumbing until he went to school. He became interested in aviation at an early age and built model airplanes as his first educational experience into aircraft design. He then learned how to fly and repair aircraft in high school, starting with a WACO Primary Glider and Porterfield 35 monoplane, and followed by an American Eagle biplane after high school. Having never attended college, Poberezny once described learning to fly and maintain the Eagle as the closest thing he ever had to a college education experience.
Experimental Aircraft Association
Poberezny founded the Experimental Aircraft Association out of his Hales Corners, Wisconsin home in 1953. It started as a predominantly aircraft-homebuilding organization run from his basement, but later grew to encompass all aspects of general aviation internationally. Poberezny retired as EAA President in 1989, remaining as chairman of the organization until 2009. As of 2017, the organization had approximately 200,000 members in more than 100 countries.
In 1953, the EAA released a two-page newsletter named The Experimenter (later renamed Sport Aviation). The newsletter was first published and written by Paul and his wife Audrey Poberezny along with other volunteers. The now-monthly magazine focuses on experimental homebuilding and other general aviation topics, including antique, war, and classic aircraft.
EAA's annual convention and fly-in (now known as EAA AirVenture Oshkosh) in Oshkosh, Wisconsin attracts a total attendance in excess of 600,000 people and 10,000 aircraft, and hosts some 1,000 forums and workshops annually, making it the largest event of its kind in the world. It was first held in 1953 at what is now Timmerman Field in Milwaukee, and attracted only a handful of airplanes. Towards the late '50s, the event outgrew Timmerman Field and was moved to the Rockford, Illinois Municipal Airport (now Chicago Rockford International Airport). There, attendance at the fly-in continued to grow until the Rockford airport was too small to accommodate the crowds, and so it was moved to Oshkosh's Wittman Regional Airport in 1970.
Paul's son, aerobatic world champion Tom Poberezny, was the chairman of the annual EAA AirVenture Convention from 1977 to August 2011, and was president of EAA from 1989 to September 2010. In March 2009, Paul stepped down as Chairman of EAA and his son took on these duties as well. Tom had a large impact on the expansive growth of the organization and convention over the more than two decades that he led them with his father.
The EAA spawned the creation of numerous aviation programs and activities within the organization, including a technical counselor program, flight advisor program, youth introduction-to-aviation program (the Young Eagles), National Cadet Special Activity program as part of the Civil Air Patrol (National Blue Beret), and more. In addition, AirVenture has nearly a $200 million annual economic impact on the surrounding region of Wisconsin and inspired the formation of other similar events such as Tannkosh in Germany and Sun 'n Fun in Florida, as well as similar organizations such as the Aircraft Kit Industry Association founded by pioneer homebuilder Richard VanGrunsven.
Military career
Poberezny served for 30 years in the Wisconsin Air National Guard and United States Air Force, including active duty during World War II and the Korean War. He retired with the rank of lieutenant colonel, and attained all seven aviation wings offered by the military: glider pilot, service pilot, rated pilot, liaison pilot, senior pilot, Army aviator, and command pilot.
Aircraft experience
Poberezny flew over 500 aircraft types, including over 170 home-built planes throughout his life. He was introduced to aviation in 1936 at the age of 16 with the gift of a donated damaged WACO Primary Glider that he rebuilt and taught himself to fly. A high school teacher owned the glider and offered to pay Poberezny to repair it. He hauled it to his father's garage, borrowed books on building/repairing airplanes, and completed the restoration soon after. A friend used his car to tow the glider into the air with Poberezny at the controls; it rose to around a hundred feet when he released the tow rope and coasted to a gentle landing in a bed of alfalfa. A year later, Poberezny soloed at age 17 in a 1935 Porterfield and soon co-owned an American Eagle biplane.
After returning home from World War II, Poberezny could not afford to buy his own aircraft, so he decided to build one himself. In 1955, he wrote a series of articles for the publication Mechanix Illustrated, where he described how an individual could buy a set of plans and build an airplane at home. In the magazine were also photos of himself fabricating the Baby Ace, an amateur-built aircraft (and the first to be marketed as a "homebuilt") that he bought the rights to for US$200 a few years prior. The articles became extremely popular and gave the concept of homebuilding worldwide acclaim.
He designed, modified, and built several home-built aircraft, and had more than 30,000 hours of flight time in his career. Aircraft that he designed and built include:
Acro Sport I & II
"Little Audrey"
Poberezny P-5 Pober Sport
Pober Jr Ace
Pober Pixie
Pober Super Ace
Poberezny made the first test flight of the EAA Biplane example Parkside Eagle in 1971, which was constructed by students of Parkside High School in Michigan.
His 1944 North American F-51D Mustang, dubbed Paul I, which he flew at air shows and air races from 1977–2003, is on display at the EAA Aviation Museum in Oshkosh.
Personal life and death
In 1996, Poberezny teamed with his daughter Bonnie, her husband Chuck Parnall, and Bill Blake to write Poberezny: The Story Begins, a recounting of the early years of Paul and Audrey, including the founding of EAA. Paul Poberezny died of cancer on August 22, 2013, in Oshkosh, Wisconsin, at age 91. His estate in Oshkosh is preserved by Aircraft Spruce & Specialty Co. and was opened to public tours beginning in the summer of 2017. Audrey Poberezny died on November 1, 2020, at age 95, and Tom Poberezny died on July 25, 2022, at age 75, severing the last direct link between EAA and the Poberezny family that founded it.
Awards and legacy
In 1971 Poberezny was the first recipient of the Duane and Judy Cole Award, presented to individuals that promote sport aviation. In 1978 he was named an honorary fellow of the Society of Experimental Test Pilots, in 1986 he was inducted into the Wisconsin Aviation Hall of Fame, and in 1987, the National Aeronautic Association (NAA) awarded him the Elder Statesman of Aviation. In 1997 he was inducted into the International Air & Space Hall of Fame and in 1999, the National Aviation Hall of Fame in Dayton, Ohio. He received the NBAA's 2001 Award for Meritorious Service to Aviation and the 2002 Wright Brothers Memorial Trophy. In 2008 the Wisconsin Historical Society named him as a "Wisconsin History Maker", recognizing his unique contributions to the state's history. Flying Magazine ranked Poberezny at number 4 on their 2013 list of the 51 Heroes of Aviation, putting him ahead of figures like Bob Hoover, Amelia Earhart, Jimmy Doolittle, and even Chuck Yeager. At the time of its release, just one month before his death, Poberezny was the highest-ranked living person on the list.
Many prominent aviation figures have praised Poberezny's legacy as being crucial to the maturation of the general aviation industry and to aviation advocacy at large. Radio newscaster and pilot Paul Harvey said that the Poberezny family "militantly manned the ramparts against those who would fence off the sky", and airshow pilot Julie Clark noted Poberezny as inspiring her and "countless thousands of others to get involved in the promotion of aviation." The Klapmeier brothers, fellow Wisconsinites who founded Cirrus Aircraft in the mid-1980s with a homebuilt design, also credited Poberezny and the EAA as essential to their success:
See also
One Six Right (2005 documentary)
Project Schoolflight
Timothy Prince
Burt Rutan
Steve Wittman
References
External links
EAA - The Spirit of Aviation official website
Paul Poberezny official EAA biography
Profile in the National Aviation Hall of Fame
Biography in the Wisconsin Aviation Hall of Fame
Biography at FirstFlight.org (verified 3/2006)
Wright Award announcement (verified 3/2006)
Poberezny obituary in The New York Times
1921 births
2013 deaths
American aerospace engineers
American aviation businesspeople
American people of Ukrainian descent
Aviators from Kansas
Aviators from Wisconsin
People from Leavenworth County, Kansas
People from Oshkosh, Wisconsin
National Aviation Hall of Fame inductees
Deaths from cancer in Wisconsin
Writers from Kansas
Writers from Wisconsin
American aviation pioneers
Aircraft designers
Military personnel from Wisconsin
United States Army Air Forces pilots of World War II
National Guard (United States) officers
Experimental Aircraft Association
People from Hales Corners, Wisconsin
Engineers from Milwaukee | Paul Poberezny | [
"Engineering"
] | 2,366 | [
"Experimental Aircraft Association",
"Aerospace engineering organizations"
] |
1,024,131 | https://en.wikipedia.org/wiki/Jean%20Bourgain | Jean Louis, baron Bourgain (1954–2018) was a Belgian mathematician. He was awarded the Fields Medal in 1994 in recognition of his work on several core topics of mathematical analysis such as the geometry of Banach spaces, harmonic analysis, ergodic theory and nonlinear partial differential equations from mathematical physics.
Biography
Bourgain received his PhD from the Vrije Universiteit Brussel in 1977. He was a faculty member at the University of Illinois Urbana-Champaign, a professor at the Institut des Hautes Études Scientifiques at Bures-sur-Yvette in France from 1985 until 1995, and a professor at the Institute for Advanced Study in Princeton, New Jersey from 1994 until 2018. He was an editor for the Annals of Mathematics. From 2012 to 2014, he was a visiting scholar at UC Berkeley.
His research work included several areas of mathematical analysis such as the geometry of Banach spaces, harmonic analysis, analytic number theory, combinatorics, ergodic theory, partial differential equations and spectral theory, and later also group theory. He proved the uniqueness of the solutions for the initial value problem of the Korteweg–De Vries equation. He formulated what became known as the Bourgain slicing problem in high-dimensional convex geometry. In 1985, he proved Bourgain's embedding theorem in metric dimension reduction, which states that every metric space on n points can be embedded into a Euclidean space of dimension O(log n) with distortion O(log n). Together with Vitali Milman, he contributed to progress on Mahler's conjecture in 1987. In 2000, Bourgain connected the Kakeya problem to arithmetic combinatorics. As a researcher, he was the author or coauthor of more than 500 articles.
Together with Ciprian Demeter and Larry Guth, he proved Vinogradov's mean-value theorem in 2015.
Bourgain was diagnosed with pancreatic cancer in late 2014. He died of it on 22 December 2018 at a hospital in Bonheiden, Belgium.
Awards and recognition
Bourgain received several awards during his career, the most notable being the Fields Medal in 1994.
In 2009 Bourgain was elected a foreign member of the Royal Swedish Academy of Sciences.
In 2010, he received the Shaw Prize in Mathematics.
In 2012, he and Terence Tao received the Crafoord Prize in Mathematics from the Royal Swedish Academy of Sciences.
In 2015, he was made a baron by king Philippe of Belgium.
In 2016, he received the 2017 Breakthrough Prize in Mathematics.
In 2017, he received the 2018 Leroy P. Steele Prizes.
Selected publications
Articles
(See Banach space and martingale.)
(See Sobolev space.)
(See Lindelöf hypothesis.)
Books
(Bourgain's research on nonlinear dispersive equations was, according to Carlos Kenig, "deep and influential".)
References
External links
1954 births
2018 deaths
Fields Medalists
Members of the French Academy of Sciences
Members of the Royal Swedish Academy of Sciences
Foreign associates of the National Academy of Sciences
Functional analysts
Mathematical analysts
Institute for Advanced Study faculty
University of Illinois Urbana-Champaign faculty
20th-century Belgian mathematicians
Belgian mathematicians
Vrije Universiteit Brussel alumni | Jean Bourgain | [
"Mathematics"
] | 647 | [
"Mathematical analysis",
"Mathematical analysts"
] |
1,024,256 | https://en.wikipedia.org/wiki/Reflection%20attack | In computer security, a reflection attack is a method of attacking a challenge–response authentication system that uses the same protocol in both directions. That is, the same challenge–response protocol is used by each side to authenticate the other side. The essential idea of the attack is to trick the target into providing the answer to its own challenge.
Attack
The general attack outline is as follows:
The attacker initiates a connection to a target.
The target attempts to authenticate the attacker by sending it a challenge.
The attacker opens another connection to the target, and sends the target this challenge as its own.
The target responds to the challenge.
The attacker sends that response back to the target on the original connection.
If the authentication protocol is not carefully designed, the target will accept that response as valid, thereby leaving the attacker with one fully authenticated channel connection (the other one is simply abandoned).
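The following sketch (an illustration only, not from the source) simulates the steps above against a naive mutual challenge–response scheme in which the response is a keyed MAC over the bare nonce; the party names, class structure and use of HMAC-SHA-256 are assumptions made for the example:

import hmac, hashlib, os

SHARED_KEY = b"secret shared between the two honest parties"

def naive_response(key, nonce):
    # Naive scheme: the response depends only on the nonce, not on who is answering.
    return hmac.new(key, nonce, hashlib.sha256).digest()

class Target:
    """Honest party that both issues challenges and answers incoming ones."""
    def __init__(self, key):
        self.key = key
    def issue_challenge(self):
        return os.urandom(16)
    def answer_challenge(self, nonce):
        return naive_response(self.key, nonce)
    def verify(self, nonce, response):
        return hmac.compare_digest(response, naive_response(self.key, nonce))

target = Target(SHARED_KEY)

# Steps 1-2: the attacker connects; the target issues a challenge.
challenge = target.issue_challenge()
# Steps 3-4: the attacker opens a second connection, reflects the challenge back,
# and the target obligingly computes the answer itself.
reflected_answer = target.answer_challenge(challenge)
# Steps 5-6: the attacker replays that answer on the first connection and is
# accepted, without ever knowing SHARED_KEY.
print(target.verify(challenge, reflected_answer))  # True -> attack succeeds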
Solution
Some of the most common solutions to this attack are described below:
The responder includes its own identifier in the response; if it ever receives a response containing its own identifier, it rejects it.
Alice initiates a connection to Bob.
Bob challenges Alice by sending a nonce N.
Alice responds by sending back the MAC calculated on her identifier and the nonce using the shared key Kab.
Bob checks the message and verifies the MAC, confirming that it was computed over Alice's identifier (A, not B), so that it cannot be a message he himself sent in the past, and over the same nonce that he sent in his challenge; only then does he accept the message.
Require the initiating party to first respond to challenges before the target party responds to its challenges.
Require the key or protocol to be different between the two directions.
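A corresponding sketch of the first countermeasure (again purely illustrative, with assumed names): binding the responder's identifier into the MAC means a reflected response is computed under the wrong identity, so verification fails:

import hmac, hashlib, os

SHARED_KEY = b"secret shared between Alice and Bob"

def make_response(key, identity, nonce):
    # Response = MAC(key, identity || nonce): the answer states who is answering.
    return hmac.new(key, identity + b"|" + nonce, hashlib.sha256).digest()

def bob_verifies(key, nonce, resp):
    # Bob only accepts a MAC bound to Alice's identifier ("A"), never his own ("B"),
    # and only for the nonce he actually issued.
    return hmac.compare_digest(resp, make_response(key, b"A", nonce))

nonce = os.urandom(16)

# Honest run: Alice binds her own identifier into the response.
print(bob_verifies(SHARED_KEY, nonce, make_response(SHARED_KEY, b"A", nonce)))  # True

# Reflection attempt: Bob's own answer to the reflected challenge carries "B",
# so replaying it back to him no longer verifies.
reflected = make_response(SHARED_KEY, b"B", nonce)
print(bob_verifies(SHARED_KEY, nonce, reflected))  # False -> attack blocked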
See also
Replay attack
Man-in-the-middle attack
Pass the hash
References
Computer security exploits
Computer access control protocols | Reflection attack | [
"Technology"
] | 388 | [
"Computer security exploits"
] |
1,024,314 | https://en.wikipedia.org/wiki/Mental%20accounting | Mental accounting (or psychological accounting) is a model of consumer behaviour developed by Richard Thaler that attempts to describe the process whereby people code, categorize and evaluate economic outcomes. Mental accounting incorporates the economic concepts of prospect theory and transactional utility theory to evaluate how people create distinctions between their financial resources in the form of mental accounts, which in turn impacts the buyer decision process and reaction to economic outcomes. People are presumed to make mental accounts as a self-control strategy to manage and keep track of their spending and resources. People budget money into mental accounts for savings (e.g., saving for a home) or expense categories (e.g., gas money, clothing, utilities). People also are assumed to make mental accounts to facilitate savings for larger purposes (e.g., a home or college tuition). Mental accounting can result in people demonstrating greater loss aversion for certain mental accounts, resulting in cognitive bias that incentivizes systematic departures from consumer rationality. An increased understanding of mental accounting helps explain why decision making differs depending on the resource involved and why people react differently to similar outcomes.
As Thaler puts it, “All organizations, from General Motors down to single person households, have explicit and/or implicit accounting systems. The accounting system often influences decisions in unexpected ways”. Particularly, individual expenses will usually not be considered in conjunction with the present value of one’s total wealth; they will be instead considered in the context of two accounts: the current budgetary period (this could be a monthly process due to bills, or yearly due to an annual income), and the category of expense. People can even have multiple mental accounts for the same kind of resource. A person may use different monthly budgets for grocery shopping and eating out at restaurants, for example, and constrain one kind of purchase when its budget has run out while not constraining the other kind of purchase, even though both expenditures draw on the same fungible resource (income).
One detailed application of mental accounting, the Behavioral Life Cycle Hypothesis posits that people mentally frame assets as belonging to either current income, current wealth or future income and this has implications for their behavior as the accounts are largely non-fungible and marginal propensity to consume out of each account is different.
Utility, value and transaction
In mental accounting theory, framing means that the way a person subjectively frames a transaction in their mind will determine the utility they receive or expect. The concept of framing is adopted from prospect theory, which is commonly used by mental accounting theorists (Richard Thaler included) as the value function in their analysis. In prospect theory, the value function is concave for gains (implying an aversion to risk), indicating decreasing marginal utility with accumulation of gain, and convex for losses (implying a risk-seeking attitude). A concave value function for gains incentivizes risk-averse behavior because the marginal gain diminishes as gains accumulate. Conversely, a convex value function for losses means that the impact of a loss is more detrimental than an equivalent gain is beneficial, thus incentivizing risk-seeking behavior in order to avoid loss. These properties of the value function underlie the concept of loss aversion, which asserts that people are more likely to make decisions in order to minimize loss than to maximise gain.
Given the Prospect Theory framework, how do people interpret, or 'account for', multiple transactions/outcomes of the form (x, y)? They can either view the outcomes jointly and receive v(x + y), in which case the outcomes are integrated, or receive v(x) + v(y), in which case we say that the outcomes are segregated. The choice to integrate or segregate multiple outcomes can be beneficial or detrimental to overall utility depending on the correctness of application. Because the value function has different slopes for gains and losses, utility is maximized in different ways, depending on how the two outcomes x and y are coded (as gains or as losses):

1) Multiple gains: x and y are both considered gains. Here, v(x) + v(y) > v(x + y). Thus, we want to segregate multiple gains.

2) Multiple losses: x and y are both considered losses. Here, v(x) + v(y) < v(x + y). We want to integrate multiple losses.

3) Mixed gain: one of x and y is a gain and one is a loss, with the gain the larger of the two. In this case, v(x) + v(y) < v(x + y), so utility is maximized when we integrate a mixed gain.

4) Mixed loss: again, one of x and y is a gain and one is a loss, but now the loss is significantly larger than the gain. In this case, v(x) + v(y) > v(x + y), so we do not want to integrate a mixed loss when the loss is significantly larger than the gain. This is often referred to as a "silver lining", a reference to the folk maxim "every cloud has a silver lining". When the loss is just barely larger than the gain, integration may be preferred.
Integration and segregation of outcomes are means of framing that can affect the overall utility derived from multiple outcomes. Mental accounting reflects the tendency of people to mentally segregate their financial resources into different categories. When financial losses or gains occur in different mental accounts, people are affected differently than if the same loss or gain were integrated across their entire financial portfolio. Following the cases above, segregating multiple gains and mixed losses maximizes utility, whereas segregating multiple losses and mixed gains (rather than integrating them) reduces it.
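The four patterns can be made concrete with a stylized value function. In the sketch below (an illustration only; the functional form and the parameters alpha and lam are assumptions in the spirit of prospect theory, chosen so that all four cases appear at these magnitudes, not empirical estimates), each pair of outcomes is evaluated both separately and as a single combined amount:

# Stylized prospect-theory value function: concave for gains, convex and steeper
# for losses (loss aversion).
def v(x, alpha=0.5, lam=1.5):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def compare(x, y, label):
    segregated = v(x) + v(y)   # outcomes booked in separate mental accounts
    integrated = v(x + y)      # outcomes booked once, as a combined amount
    better = "segregate" if segregated > integrated else "integrate"
    print(f"{label:<30} segregated={segregated:8.2f}  integrated={integrated:8.2f}  -> {better}")

compare(50, 100, "multiple gains")                # -> segregate
compare(-50, -100, "multiple losses")             # -> integrate
compare(200, -50, "mixed gain")                   # -> integrate
compare(30, -200, "mixed loss (silver lining)")   # -> segregate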
There are two values attached to any transaction - acquisition value and transaction value. Acquisition value is the money that one is ready to part with for physically acquiring some good. Transaction value is the value one attaches to having a good deal. If the price that one is paying is equal to the mental reference price for the good, the transaction value is zero. If the price is lower than the reference price, the transaction utility is positive. Total utility received from a transaction, then, is the sum of acquisition utility and transaction utility.
Pain of Paying
A more proximal psychological mechanism through which mental accounting influences spending is through its influence on the pain of paying that is associated with spending money from a mental account. Pain of paying is a negative affective response associated with a financial loss. Prototypical examples are the unpleasant feeling that one experiences when watching the fare increase on a taximeter or at the gas pump. When considering an expense, consumers appear to compare the cost of the expense to the size of an account that it would deplete (e.g., numerator vs. denominator). A $30 t-shirt, for example, would be a subjectively larger expense when drawn from $50 in one's wallet than $500 in one's checking account. The larger the fraction, the more pain of paying the purchase appears to generate and the less likely consumers are to then exchange money for the good. Other evidence of the relation between pain of paying and spending include the lower debt held by consumers who report experiencing a higher pain of paying for the same goods and services than consumers who report experiencing less pain of paying.
Main principles of mental accounts
Richard Thaler divided the concept mental accounting into two main principles; segregation of gains and losses, and account reference points. Both principles utilize concepts related to utility and pain of paying to interpret how people evaluate economic outcomes.
Segregation of gains and losses
A main principle of mental accounting is the assertion that people frame gains and losses by segregating into different mental accounts rather than integrating into their overall account. The impact of this tendency means that outcomes can be framed based on the context of a decision. In mental accounting the framing of a decision reduces from the overall account to a smaller segregated account which can incentivize purchase decisions. An example of this was posed by Thaler where people were more inclined to drive 20 minutes to save $5 on a $15 purchase than on a $125 purchase. The principle applies to mental accounting where if gains and losses are viewed relative to a smaller segregated account then the outcome is viewed differently.
Account reference points
Account reference points refer to the tendency for people to set a reference point on a current decision based on prior outcome in the same mental account. As a result the impact of prior outcomes integrate into the current decision when determining overall utility. An example was posed by Thaler where gamblers were more inclined to make risk-seeking bets on the last race of the day. This phenomenon was justified by the assertion that gamblers segregate the gains and losses from each day into separate accounts and integrate gains and losses for each day in an account. It can then be interpreted that end-of-day risk-seeking bets is an example of loss aversion where gamblers attempt to equalize their daily account.
Practical implications
Since the inception of the concept, mental accounting has been applied to interpret consumer behavior, particularly in the contexts of online shopping, consumer reward points, and public taxation policy.
Psychology
Mental accounting is subject to many logical fallacies and cognitive biases, which hold many implications such as overspending.
Credit cards and cash payments
Another example of mental accounting is the greater willingness to pay for goods when using credit cards than cash. Swiping a credit card defers the payment to a later date (when we pay our monthly bill) and integrates it into a large existing sum (our bill to that point). This delay causes the payment to stick in our memory less clearly and saliently. Furthermore, the payment is no longer perceived in isolation; rather, it is seen as a (relatively) small increase of an already large credit card bill. For example, it might be a change from $120 to $125, instead of a regular, out-of-pocket $5 cost. And as we can see from the value function, the difference V(-$125) – V(-$120) is smaller in magnitude than V(-$5), so the pain of paying is reduced.
Marketing
Mental accounting can be useful for marketers to predict customer response to bundling of pricing and segregation of products. People respond more positively to incentives and costs when gains are segregated, losses are integrated, net losses are segregated (the silver lining principle), and net gains are integrated. Automotive dealers, for example, benefit from these principles when they bundle optional features into a single price but segregate each feature included in the bundle (e.g. velvet seat covers, aluminum wheels, anti-theft car lock). Cellular phone companies can use principles of mental accounting when deciding how much to charge consumers for a new smartphone and how much to give them for their trade-in. When the cost of the phone is large and the value of the phone to be traded in is low, it is better to charge consumers a slightly higher price for the phone and return that money to them as a higher value on their trade-in. Conversely, when the cost of the phone and the value of the trade-in are more comparable, because consumers are loss averse, it is better to charge them less for the new phone and offer them less for the trade-in.
Public policy
Mental accounting can also be utilized in public economics and public policy. Inherently, the way that people (and therefore tax-payers and voters) perceive decisions and outcomes will be influenced by their process of mental accounting. Policy-makers and public economists could potentially apply mental accounting concepts when crafting public systems, trying to understand and identify market failures, redistribute wealth or resources in a fair way, reduce the saliency of sunk costs, limiting or eliminating the Free-rider problem, or even just when delivering bundles of multiple goods or services to taxpayers. The following examples exist where mental accounting applied to public policy and programs produced positive outcomes.
A good example of the importance of considering mental accounting while crafting public policy is demonstrated by authors Justine Hastings and Jesse Shapiro in their analysis of the SNAP (Supplemental Nutritional Assistance Program). They "argue that these findings are not consistent with households treating SNAP funds as fungible with non-SNAP funds, and we support this claim with formal tests of fungibility that allow different households to have different consumption functions" Put differently, their data supports Thaler's (and the concept of mental accounting's) claim that the principle of fungibility is often violated in practice. Furthermore, they find SNAP to be very effective, calculating a marginal propensity to consume SNAP-eligible food (MPCF) out of benefits received by SNAP of 0.5 to 0.6. This is much higher than the MPCF out of cash transfers, which is usually around 0.1.
The implications of taxation policy for taxpayers were examined through mental accounting principles in Optimal Taxation with Behavioral Insights. The research paper applied the framework of the three pillars of optimal taxation and incorporated mental accounting concepts (as well as misperceptions and internalities). Outcomes included novel economic insights, including the application of nudges within optimal taxation frameworks, and challenges to the Diamond-Mirrlees productive efficiency result and the Atkinson-Stiglitz uniform commodity taxation proposition, finding that they are more likely to fail with behavioral agents.
In the paper Public vs. Private Mental Accounts: Experimental Evidence from Savings Groups in Colombia, it was demonstrated that mental accounting can be exploited to help nudge people towards saving more. The study found that publicly creating a savings goal greatly increased the savings rate of participants when compared to the control and those who set savings goals privately. The power of the labeling effect was observed to vary based on the savings success history of the participants.
Mental accounting plays a powerful role in our decision-making processes. It is important for public policy experts, researchers, and policy-makers continue to explore the ways that it can be utilized to benefit public welfare.
See also
Decision making
Behavioral economics
Framing effect (psychology)
Micropayment
Preference
Psychological pricing
Transaction cost
Sunk cost
References
Bibliography
Behavioral finance | Mental accounting | [
"Biology"
] | 2,841 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
1,024,323 | https://en.wikipedia.org/wiki/Pyrgeometer | A pyrgeometer is a device that measures near-surface infra-red (IR) radiation, approximately from 4.5 μm to 100 μm on the electromagnetic spectrum (thereby excluding solar radiation).
It measures the resistance/voltage changes in a material that is sensitive to the net energy transfer by radiation that occurs between itself and its surroundings (which can be either in or out). By also measuring its own temperature and making some assumptions about the nature of its surroundings it can infer a temperature of the local atmosphere with which it is exchanging radiation.
Since the mean free path of IR radiation in the atmosphere is ~25 meters, this device typically measures IR flux in the nearest 25 meter layer.
Pyrgeometer components
A pyrgeometer consists of the following major components:
A thermopile sensor which is sensitive to radiation in a broad range from 200 nm to 100 μm
A silicon dome or window with a solar blind filter coating. It has a transmittance between 4.5 μm and 50 μm that eliminates solar shortwave radiation.
A temperature sensor to measure the body temperature of the instrument.
A sun shield to minimize heating of the instrument due to solar radiation.
Measurement of long wave downward radiation
The atmosphere and the pyrgeometer (in effect its sensor surface) exchange long wave IR radiation. This results in a net radiation balance according to:

E_net = E_in - E_out

Where (in SI units):

E_net = net radiation at sensor surface [W/m2]

E_in = long-wave radiation received from the atmosphere [W/m2]

E_out = long-wave radiation emitted by the sensor surface [W/m2]
The pyrgeometer's thermopile detects the net radiation balance between the incoming and outgoing long wave radiation flux and converts it to a voltage according to the equation below.

E_net = U_emf / S

Where (in SI units):

E_net = net radiation at sensor surface [W/m2]

U_emf = thermopile output voltage [V]

S = sensitivity/calibration factor of instrument [V/W/m2]

The value for S is determined during calibration of the instrument. The calibration is performed at the production factory with a reference instrument traceable to a regional calibration center.
To derive the absolute downward long wave flux, the temperature of the pyrgeometer has to be taken into account. It is measured using a temperature sensor inside the instrument, near the cold junctions of the thermopile. The pyrgeometer is considered to approximate a black body, and it therefore emits long wave radiation according to:

E_out = σ · T^4

Where (in SI units):

E_out = long-wave radiation emitted by the sensor surface [W/m2]

σ = Stefan–Boltzmann constant [W/(m2·K4)]

T = absolute temperature of pyrgeometer detector [K]
From the calculations above the incoming long wave radiation can be derived. This is usually done by rearranging the equations above to yield the so-called pyrgeometer equation by Albrecht and Cox:

E_in = U_emf / S + σ · T^4

Where all the variables have the same meaning as before.
As a result, the detected voltage and instrument temperature yield the total global long wave downward radiation.
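A worked illustration of these relations is sketched below; the numerical values for the thermopile voltage, sensitivity and instrument temperature are invented for the example (sensitivities of real instruments are typically of the order of microvolts per W/m2):

# Pyrgeometer equation E_in = U_emf / S + sigma * T^4, with invented example values.
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant [W/(m^2*K^4)]

U_emf = -350e-6           # thermopile output voltage [V]; negative: net radiative loss to the sky
S = 10e-6                 # sensitivity / calibration factor [V/(W/m^2)]
T = 288.15                # pyrgeometer body temperature [K] (15 degrees C)

E_net = U_emf / S         # net radiation at the sensor surface [W/m^2]
E_out = SIGMA * T ** 4    # long-wave radiation emitted by the sensor [W/m^2]
E_in = E_net + E_out      # downward long-wave radiation from the atmosphere [W/m^2]

print(f"E_net = {E_net:6.1f} W/m^2")   # -35.0
print(f"E_out = {E_out:6.1f} W/m^2")   # ~390.9
print(f"E_in  = {E_in:6.1f} W/m^2")    # ~355.9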
Usage
Pyrgeometers are frequently used in meteorology and climatology studies. The atmospheric long-wave downward radiation is of interest for research into long-term climate change.
The signals are generally detected using a data logging system, capable of taking high resolution samples in the millivolt range.
See also
Pyranometer
Radiometer
References
Electromagnetic radiation meters
Radiometry | Pyrgeometer | [
"Physics",
"Technology",
"Engineering"
] | 714 | [
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Electromagnetic radiation meters",
"Electromagnetic spectrum",
"Measuring instruments",
"Radiometry"
] |
1,024,337 | https://en.wikipedia.org/wiki/Steatorrhea | Steatorrhea (or steatorrhoea) is the presence of excess fat in feces. Stools may be bulky and difficult to flush, have a pale and oily appearance, and can be especially foul-smelling. An oily anal leakage or some level of fecal incontinence may occur. There is increased fat excretion, which can be measured by determining the fecal fat level.
Causes
Impaired digestion or absorption can result in fatty stools.
Possible causes include exocrine pancreatic insufficiency, with poor digestion from lack of lipases, loss of bile salts, which reduces micelle formation, and small intestinal disease-producing malabsorption. Various other causes include certain medicines that block fat absorption or indigestible or excess oil/fat in diet.
The absence of bile secretion can cause the feces to turn gray or pale, since bile is responsible for the brownish color of feces. Bile also plays a role in fat absorption: bile salts emulsify dietary lipids so that pancreatic lipases can hydrolyze them in the small intestine. Without bile acids this process is impaired, leading to fat malabsorption and making steatorrhea more likely. Other features of fat malabsorption may also occur, such as reduced bone density, difficulty with vision under low light levels, bleeding, bruising, and slow blood clotting times.
Associated diseases
Conditions affecting the pancreas. Exocrine pancreatic insufficiency can be caused by chronic pancreatitis, cystic fibrosis and pancreatic cancer (if it obstructs biliary outflow).
Conditions affecting bile salts. Obstruction of the bile ducts by gallstones (choledocholithiasis), primary sclerosing cholangitis, liver damage (hepatitis, intrahepatic cholestasis), hypolipidemic drugs, or changes following gallbladder removal (cholecystectomy).
Conditions producing intestinal malabsorption. These include celiac disease, bacterial overgrowth, tropical sprue, giardiasis (a protozoan parasite infection), Zollinger-Ellison syndrome, short bowel syndrome, inflammatory bowel disease and abetalipoproteinemia.
Other causes: Drugs that can produce steatorrhea include orlistat, a slimming pill, or as adverse effect of octreotide or lanreotide, used to treat acromegaly or other neuroendocrine tumors. It can be found in Graves' disease / hyperthyroidism.
Medications
Orlistat (also known by trade names Xenical and Alli) is a diet pill that works by blocking the enzymes that digest fat. As a result, some fat cannot be absorbed from the gut and is excreted in the feces instead of being metabolically digested and absorbed, sometimes causing oily anal leakage. Vytorin (ezetimibe/simvastatin) tablets can cause steatorrhea in some people.
Excess whole nuts in diet
Some studies have shown that stool lipids are increased when whole nuts are eaten, compared to nut butters, oils or flour and that lipids from whole nuts are significantly less well absorbed.
Natural fats
Consuming jojoba oil has been documented to cause steatorrhea and anal leakage because it is indigestible.
Consuming escolar and oilfish (sometimes mislabelled as butterfish) will often cause steatorrhea, also referred to as gempylotoxism or gempylid fish poisoning or keriorrhea.
Artificial fats
The fat substitute Olestra, used to reduce digestible fat in some foods, was reported to cause leakage in some consumers during the test-marketing phase. As a result, the product was reformulated before general release to a hydrogenated form that is not liquid at physiologic temperature. The U.S. Food and Drug Administration warning indicated excessive consumption of Olestra could result in "loose stools"; however, this warning has not been required since 2003.
Diagnosis
Steatorrhea should be suspected when the stools are bulky, floating and foul-smelling. Specific tests are needed to confirm that these properties are in fact due to excessive levels of fat. Fats in feces can be measured over a defined time (often five days). Other tests include the (13)C-mixed triglycerides test and fecal elastase, to detect possible fat maldigestion due to exocrine pancreatic insufficiency, or various specific tests to detect other causes of malabsorption such as celiac disease.
Treatment
Treatments are mainly correction of the underlying cause, as well as digestive enzyme supplements.
See also
Rectal discharge
Keriorrhea
Fecal leakage
Steatocrit
References
Feces
Diarrhea
Gastrointestinal tract disorders
Diseases of intestines
Conditions diagnosed by stool test
Symptoms and signs: Digestive system and abdomen
Colorectal surgery
Steatorrhea-related diseases | Steatorrhea | [
"Biology"
] | 1,074 | [
"Excretion",
"Feces",
"Animal waste products"
] |
1,024,349 | https://en.wikipedia.org/wiki/Altbib | Bibliography on Alternatives to the Use of Live Vertebrates in Biomedical Research and Testing, or Altbib, is a bibliography available online to assist in identifying methods and procedures helpful in supporting the development, testing, application, and validation of alternatives to the use of vertebrates in biomedical research and toxicology testing. The bibliography is produced from MEDLARS database searches analyzed by experts from the Toxicology and Environmental Health Information Program (TEHIP) of the Specialized Information Services Division (SIS) of the National Library of Medicine.
See also
Alternatives to animal testing
References
Alternatives to animal testing
Published bibliographies | Altbib | [
"Chemistry"
] | 122 | [
"Animal testing",
"Alternatives to animal testing"
] |
1,024,472 | https://en.wikipedia.org/wiki/Joseph%20Dietzgen | Peter Josef Dietzgen (December 9, 1828 – April 15, 1888) was a German socialist philosopher, Marxist and journalist.
Dietzgen was born in Blankenberg in the Rhine Province of Prussia. He was the first of five children of father Johann Gottfried Anno Dietzgen (1794–1887) and mother Anna Margaretha Lückerath (1808–1881). He was, like his father, a tanner by profession, inheriting his uncle's business in Siegburg. Entirely self-educated, he developed the notion of dialectical materialism independently from Marx and Engels as an independent philosopher of socialist theory. He had one son, Eugene Dietzgen.
Life
Early on in his youth, Joseph Dietzgen worked with the famed Forty-Eighters of the 1848 German Revolution. It was there that he first met Karl Marx and other socialist revolutionaries, and began his career as a socialist philosopher. Following the failure of the 1848 Revolution he spent some time in the United States from 1849 to 1851, returning once again for a visit from 1859 to 1861. While in the New World he traversed the American South and witnessed first hand the lynchings which had come to characterize the slave states. During the period between his travels, Dietzgen joined the Alliance of Communists with Karl Marx back in Germany in 1852. In 1853, after marrying his wife Cordula Finke, he established his tannery business in Winterscheid (today part of Ruppichteroth), Germany. When he returned to the United States in 1859 he set up another tannery in Montgomery, Alabama. From 1864 to 1868, he lived with his son Eugene in St. Petersburg, where he was manager of the state tannery. He worked with the Tsar of Russia on improvement of the Russian methods. During his time spent in Russia he wrote one of his earliest texts, The Nature of Human Brain-Work, which was published in 1869. Upon his first reading of the text, Marx forwarded a copy to Engels, remarking, "My opinion is that J. Dietzgen would do better to condense all his ideas into two printer's sheets and have them published under his own name as a tanner. If he publishes them in the size he is proposing, he will discredit himself with his lack of dialectical development and his way of going round in circles." While he traveled, his wife managed the family tannery business back in Germany until he returned in mid-1869. Once he was back home, he was visited by Marx and his daughter, who proclaimed that Joseph had become "the Philosopher" of socialism. By 1870, Marx had embraced Dietzgen as a friend, and later praised him and his theory of dialectical materialism in the 2nd edition of the first volume of Das Kapital.
On June 8, 1878, Dietzgen was arrested following the publication of a lecture he gave in Cologne: The future of the social democracy. He spent 3 months in prison on remand before his trial was held. Although Joseph was released along with copies of his article, he was re-arrested twice and finally released. In 1881 Joseph sent his son Eugene to the United States in order to avoid the Kaiser's upcoming army draft, to safeguard his articles and documents, as well as to secure a family home in the new world. Young Eugene was 19 when he arrived in New York, but quickly jump started a thriving family business in Chicago, the Eugene Dietzgen Company. It became one of the world's top drafting and surveying supply manufacturers and distributors and remained such through most of the 20th century. The company still exists today as a division of Nashua Paper, and its two buildings still stand in Chicago's now trendy Printer's Row and Lincoln Park areas. During this period, Eugene and Joseph kept in close contact through extensive letters which are currently being documented and published. In the same year, Joseph ran for the elections of the German Reichstag (the parliament), but emigrated in 1884 to New York City. He moved to Chicago two years later, where he became editor at the Arbeiterzeitung. Unfortunately Joseph's death in 1888 marked an end to his son's dependency, but his family line would continue to be part of some of the biggest engagements of the 20th century; from World War I, to the 1936 Berlin Olympics, to the heart of World War II.
Dietzgen's words and life have for some underscored the unity that existed on the political left at the time of the First International, before Anarchists and Marxists were later divided: "For my part, I lay little stress on the distinction, whether a man is an anarchist or a socialist, because it seems to me that too much weight is attributed to this difference." This suggests he took a more conciliatory, or at least more aloof, view of the disputes of the moment (see Anarchism and Marxism).
Philosophy
Dietzgen's most significant contribution is generally described as a specific philosophical theory of dialectical materialism, drawing on Georg Wilhelm Friedrich Hegel's concept of dialectic and on materialism, in particular that of Ludwig Feuerbach (himself earlier a Young Hegelian). Similar positions were developed independently by Karl Marx and Friedrich Engels in their writings. According to his preface to Dietzgen's "The Positive Outcome of Philosophy", it seems the Communist Manifesto in particular significantly influenced the development of his thought prior to his earliest philosophical works.
In the earlier "Thirteenth Letter on Logic", Joseph Dietzgen gave the following summary of his philosophical positions:
"The red thread winding through all these letters deals with the following points: The instrument of thought is a thing like all other common things, a part or attribute of the universe. It belongs particularly to the general category of being and is an apparatus which produces a detailed picture of human experience by categorical classification or distinction. In order to use this apparatus correctly, one must fully grasp the fact that the world unit is multiform and that all multiformity is a unit.
It is the solution of the riddle of the ancient Eleatic philosophy: How can the one be contained in the many, and the many in one?"
An explicit evocation of the Eleatics (Parmenides, Zeno of Elea and Melissus of Samos) in particular is distinctive, and sets the language apart from the "mainstream" of dialectical materialism as it is more commonly presented.
After his death, Joseph's son Eugen gave the following view of the relevance of his father's philosophy:
If the founders of historical materialism, and their followers, in a whole series of convincing historical investigations, proved the connection between economic and spiritual development, and the dependence of the latter, in the final analysis, on economic relations, nevertheless they did not prove that this dependence of the spirit is rooted in its nature and in the nature of the universe. Marx and Engels thought that they had ousted the last spectres of idealism from the understanding of history. This was a mistake, for the metaphysical spectres found a niche for themselves in the unexplained essence of the human spirit and in the universal whole which is closely associated with the latter. Only a scientifically verified criticism of cognition could eject idealism from here. (p iv)
This prompted a negative reaction from Georgi Plekhanov, one of the earliest Russian Marxists (as well as co-founder of the Iskra magazine and the Russian Social-Democratic Labor Party, and himself the author of "The Monist View of History", an attempt at an interpretation of historical materialism and a "reconstruction" of the views of Karl Marx):
"Despite all our respect for the noble memory of the German worker-philosopher, and despite our personal sympathy for his son, we find ourselves compelled to protest resolutely against the main idea of the preface from which we have just quoted. In it, the relationship of Joseph Dietzgen to Marx and Engels is quite wrongly stated."
It is of some note that Vladimir Lenin extensively quoted the writings of Joseph Dietzgen in his later notorious polemic against Ernst Mach (and, more pertinently and directly, against his rival Alexander Bogdanov), Materialism and Empiriocriticism: Critical Comments on a Reactionary Philosophy (a work later made part of the exemplary canon, in particular during the rule of Joseph Stalin). Besides the text mentioned, Lenin also made notes concerning Dietzgen among those later grouped as his Philosophical Notebooks (Collected Works, Vol. 38, Lawrence & Wishart, 1980).
In the note on pages 403-406 he compared him unfavourably to Feuerbach:
...To be does not mean to exist in thought. In this respect Feuerbach's philosophy is far clearer than the philosophy of Dietzgen. "The proof that something exists," Feuerbach remarks, "has no other meaning than that something exists not in thought alone."
Death
Dietzgen died at home smoking a cigar. He had taken a stroll in Lincoln Park, and was having a political discussion in a "vivacious and excited" manner about the "imminent collapse of capitalist production". He stopped in mid-sentence with his hand in the air – dead of paralysis of the heart. He is buried at the Waldheim Cemetery (now Forest Home Cemetery), in Forest Park, Chicago, near the graves of those executed after the Haymarket Affair (popularly known as the Haymarket Martyrs).
Legacy
Anton Pannekoek, the Dutch astronomer and council communist (a left-communist, belonging to the tendency that Lenin attacked in "Left-Wing" Communism: An Infantile Disorder), noted that, in "Materialism and Empirio-criticism", Lenin cited Dietzgen's penultimate work (the "Letters on Logic"), but not his final one ("The Positive Outcome of Philosophy").
In his 1938 book on Lenin, written after the work had already been given the status of a paradigm of philosophy in the USSR, Pannekoek included a highly critical response to the text. In particular, Pannekoek charged that Lenin had completely ignored Dietzgen's last composed philosophical work and therefore misunderstood the development of Dietzgen's thought.
In his polemic against Lenin, Pannekoek appeals to Dietzgen as an authority on Marxist philosophy. The writings of Dietzgen are most discussed and attract the most interest in the context of these debates, but have fallen into obscurity in present-day philosophy.
Dietzgen figured on a commemorative postage stamp issued in the German Democratic Republic.
Major works
Das Wesen der menschlichen Kopfarbeit (1869); English: "The Nature of Human Brainwork".
"The Religion of Social Democracy" (in six sermons from 1870 to 1875).
"Scientific Socialism" (1873).
"The Ethics of Social Democracy" (1875).
"Social Democratic Philosophy" (1876).
"The Inconceivable: a Special Chapter in Social-Democratic Philosophy" (1877).
"The Limits of Cognition" (1877).
"Our Professors on the Limits of Cognition" (1878).
"Letters on Logic" (addressed to Eugen Dietzgen) (1880–1884).
"Excursions of a Socialist into the Domain of Epistemology" (1886).
"The Positive Outcome of Philosophy" (1887).
More recent editions:
Nature of Human Brain Work: An Introduction to Dialectics, Left Bank Books, Reprint 1984
Philosophical Essays on Socialism and Science, Religion, Ethics; Critique-Of-Reason and the World-At-Large, Kessinger Publications, 2004,
The Positive Outcome of Philosophy; The Nature of Human Brain Work; Letters on Logic, Kessinger Publications, 2007,
Collected writings
Josef Dietzgen, Sämtliche Schriften, hrsg. von Eugen Dietzgen, 4. Auflage, Berlin, 1930
Joseph Dietzgen, Schriften in drei Bänden, hrsg. von der Arbeitsgruppe für Philosophie an der Akademie der Wissenschaften der DDR zu Berlin, Berlin, 1961–1965
Secondary literature
English
Anton Pannekoek: "The Standpoint and Significance of Josef Dietzgen's Philosophical Works" – Introduction to Joseph Dietzgen, The Positive Outcome of Philosophy, Chicago, 1928
German
SPD-Protokollnotizen S. 176; Liebknecht 1988, Biographisches Lexikon 1970, Dietzgen 1930, Friedrich Ebert-Stiftung, Digitale Bibliothek
P. Dr. Gabriel Busch O.S.B.: Im Spiegel der Sieg, Verlag Abtei Michaelsberg, Siegburg 1979
Josef Dietzgen, Sämtliche Schriften, hrsg. von Eugen Dietzgen, 4. Auflage, Berlin, 1930
Joseph Dietzgen, Schriften in drei Bänden, hrsg. von der Arbeitsgruppe für Philosophie an der Akademie der Wissenschaften der DDR zu Berlin, Berlin, 1961–1965
Otto Finger, Joseph Dietzgen – Beitrag zu den Leistungen des deutschen Arbeiterphilosophen, Berlin, 1977
Gerhard Huck, Joseph Dietzgen (1828–1888) – Ein Beitrag zur Ideengeschichte des Sozialismus im 19. Jahrhundert, in der Reihe Geschichte und Gesellschaft, Bochumer Historische Schriften, Band 22, Stuttgart, 1979,
Horst Gräbner, Joseph Dietzgens publizistische Tätigkeit, unveröffentlichte Magisterarbeit an der J-W-G-Universität, Frankfurt/M, 1982
Anton Pannekoek, "Die Stellung u. Bedeutung von J. Dietzgens philosophischen Arbeiten" in: Josef Dietzgen, Das Wesen der menschlichen Kopfarbeit; Eine abermalige Kritik der reinen und praktischen Vernunft, Stuttgart: J. H. W. Dietz Nachf., 1903
Dutch
Jasper Schaaf, De dialectisch-materialistische filosofie van Joseph Dietzgen, Kampen, 1993
References
External links
Joseph Dietzgen Archive
Dietzgen Family History Page – includes interviews with Dietzgen's granddaughter and a 145-page typescript of Dietzgen's 1880–84 correspondence with his son
Joseph Dietzgen's Philosophy
Eugene Dietzgen Corporation History Page
1828 births
1888 deaths
People from Hennef (Sieg)
People from the Rhine Province
German atheists
Members of the International Workingmen's Association
19th-century German philosophers
Marxist theorists
Atheist philosophers
Materialists
American Marxists
19th-century atheists
Burials at Forest Home Cemetery, Chicago | Joseph Dietzgen | [
"Physics"
] | 3,094 | [
"Materialism",
"Matter",
"Materialists"
] |
1,024,614 | https://en.wikipedia.org/wiki/Taylor%E2%80%93Proudman%20theorem | In fluid mechanics, the Taylor–Proudman theorem (after Geoffrey Ingram Taylor and Joseph Proudman) states that when a solid body is moved slowly within a fluid that is steadily rotated with a high angular velocity , the fluid velocity will be uniform along any line parallel to the axis of rotation. must be large compared to the movement of the solid body in order to make the Coriolis force large compared to the acceleration terms.
Derivation
The Navier–Stokes equations for steady flow, with zero viscosity and with the Coriolis force included as a body force, are

$$\rho(\mathbf{u}\cdot\nabla)\mathbf{u} = \mathbf{F} - \nabla p - 2\rho\,\boldsymbol{\Omega}\times\mathbf{u},$$

where $\mathbf{u}$ is the fluid velocity, $\rho$ is the fluid density, and $p$ the pressure. If we assume that the remaining body force $\mathbf{F} = \nabla\Phi$ is derivable from a scalar potential, that the advective term on the left may be neglected (reasonable if the Rossby number is much less than unity), and that the flow is incompressible (density is constant), the equations become:

$$2\rho\,\boldsymbol{\Omega}\times\mathbf{u} = \nabla\Phi - \nabla p,$$

where $\boldsymbol{\Omega}$ is the angular velocity vector. If the curl of this equation is taken, the result is the Taylor–Proudman theorem:

$$(\boldsymbol{\Omega}\cdot\nabla)\mathbf{u} = \mathbf{0}.$$

To derive this, one needs the vector identities

$$\nabla\times(\mathbf{A}\times\mathbf{B}) = \mathbf{A}(\nabla\cdot\mathbf{B}) - (\mathbf{A}\cdot\nabla)\mathbf{B} + (\mathbf{B}\cdot\nabla)\mathbf{A} - \mathbf{B}(\nabla\cdot\mathbf{A})$$

and

$$\nabla\times(\nabla p) = \mathbf{0}$$

and

$$\nabla\times(\nabla\Phi) = \mathbf{0}$$

(because the curl of the gradient is always equal to zero).

Note that $\nabla\cdot\boldsymbol{\Omega} = 0$ is also needed (angular velocity is divergence-free).

The vector form of the Taylor–Proudman theorem is perhaps better understood by expanding the dot product:

$$\Omega_x\frac{\partial\mathbf{u}}{\partial x} + \Omega_y\frac{\partial\mathbf{u}}{\partial y} + \Omega_z\frac{\partial\mathbf{u}}{\partial z} = \mathbf{0}.$$

In coordinates for which $\Omega_x = \Omega_y = 0$, the equations reduce to

$$\frac{\partial\mathbf{u}}{\partial z} = \mathbf{0},$$

if $\Omega_z \neq 0$. Thus, all three components of the velocity vector are uniform along any line parallel to the z-axis.
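The curl step can also be checked symbolically. The following sketch (a non-authoritative illustration, assuming Python with the sympy library available) forms the balance $2\rho\,\boldsymbol{\Omega}\times\mathbf{u} + \nabla p$, takes its curl componentwise, imposes incompressibility, and recovers terms proportional to $\Omega\,\partial\mathbf{u}/\partial z$:

import sympy as sp

x, y, z, rho, W = sp.symbols('x y z rho Omega', real=True)
u = sp.Function('u')(x, y, z)
v = sp.Function('v')(x, y, z)
w = sp.Function('w')(x, y, z)
p = sp.Function('p')(x, y, z)

U = sp.Matrix([u, v, w])      # fluid velocity field
Om = sp.Matrix([0, 0, W])     # constant rotation, axis along z

def curl(F):
    # componentwise curl of a 3-vector of scalar fields
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

grad_p = sp.Matrix([sp.diff(p, x), sp.diff(p, y), sp.diff(p, z)])

# Balance used above: 2*rho*(Omega x u) + grad(p) = 0 (the grad(Phi) term
# has zero curl anyway, so it is omitted here).
balance = 2 * rho * Om.cross(U) + grad_p

# Take the curl; curl(grad p) cancels by symmetry of mixed partials.
c = curl(balance)

# Impose incompressibility: du/dx = -(dv/dy + dw/dz)
c = c.subs(sp.diff(u, x), -sp.diff(v, y) - sp.diff(w, z))

print(c.applyfunc(sp.simplify))
# Each component comes out as -2*rho*Omega*d(u_i)/dz, so requiring the curl
# to vanish gives (Omega . grad) u = 0: no variation along the rotation axis.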
Taylor column
The Taylor column is an imaginary cylinder projected above and below a real cylinder that has been placed parallel to the rotation axis (anywhere in the flow, not necessarily in the center). The flow will curve around the imaginary cylinders just like the real one, due to the Taylor–Proudman theorem, which states that the flow in a rotating, homogeneous, inviscid fluid is two-dimensional in the plane orthogonal to the rotation axis, and thus there is no variation in the flow along that axis, often taken to be the $z$ axis.
The Taylor column is a simplified, experimentally observed effect of what transpires in the Earth's atmosphere and oceans.
History
The result known as the Taylor-Proudman theorem was first derived by Sydney Samuel Hough (1870-1923), a mathematician at Cambridge University, in 1897. Proudman published another derivation in 1916 and Taylor in 1917, then the effect was demonstrated experimentally by Taylor in 1923.
References
Eponymous theorems of physics
Fluid dynamics | Taylor–Proudman theorem | [
"Physics",
"Chemistry",
"Engineering"
] | 501 | [
"Equations of physics",
"Chemical engineering",
"Eponymous theorems of physics",
"Piping",
"Physics theorems",
"Fluid dynamics"
] |
1,024,667 | https://en.wikipedia.org/wiki/Ensemble%20%28fluid%20mechanics%29 | In continuum mechanics, an ensemble is an imaginary collection of notionally identical experiments.
Each member of the ensemble will have nominally identical boundary conditions and fluid properties. If the flow is turbulent, the details of the fluid motion will differ from member to member because the experimental setup will be microscopically different, and these slight differences become magnified as time progresses. Members of an ensemble are, by definition, statistically independent of one another. The concept of ensemble is useful in thought experiments and to improve theoretical understanding of turbulence.
A good image to have in mind is a typical fluid mechanics experiment such as a mixing box. Imagine a million mixing boxes, distributed over the earth; at a predetermined time, a million fluid mechanics engineers each start one experiment, and monitor the flow. Each engineer then sends his or her results to a central database. Such a process would give results that are close to the theoretical ideal of an ensemble.
It is common to speak of ensemble average or ensemble averaging when considering a fluid mechanical ensemble.
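As a rough numerical illustration of the distinction (not taken from the article; the signal and numbers are invented), the following Python sketch builds a modest stand-in for the mixing-box ensemble and averages across members at fixed time, which is a different operation from averaging a single member over time:

import numpy as np

rng = np.random.default_rng(0)
n_members, n_times = 10_000, 200          # 10,000 stand-in "mixing boxes"
t = np.linspace(0.0, 10.0, n_times)

mean_flow = np.sin(t)                                      # repeatable part of the flow
fluctuations = rng.normal(0.0, 0.5, (n_members, n_times))  # member-to-member turbulent differences
realisations = mean_flow + fluctuations                    # one row per ensemble member

ensemble_avg = realisations.mean(axis=0)   # average over members at each instant
time_avg = realisations.mean(axis=1)       # average of each single member over time

print(np.abs(ensemble_avg - mean_flow).max())  # shrinks as n_members grows
print(time_avg[:3])                            # per-member time averages, a different quantity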
For a completely unrelated type of averaging, see Reynolds-averaged Navier–Stokes equations (the two types of averaging are often confused).
See also
Statistical ensemble (mathematical physics)
References
Continuum mechanics | Ensemble (fluid mechanics) | [
"Physics"
] | 245 | [
"Classical mechanics",
"Continuum mechanics"
] |
1,024,714 | https://en.wikipedia.org/wiki/Thermos%20bomb | Thermos bomb was the informal name for the AR-4, an air dropped anti-personnel mine used by the Italian Air Force during World War II. Large numbers were used against Malta and in the Middle East. It was named for its superficial appearance to a Thermos bottle, a popular brand of vacuum flask. The bomb was a cylinder long and weighing . It could be fitted with a very sensitive motion-sensitive fuze that would detonate if any attempt was made to move it. It could be lethal in the open to approximately . Because of this, unexploded Thermos bombs were normally destroyed where they fell, either by attaching a long piece of string to them and giving it a jerk, or detonating a small explosive charge near them.
A later variant of the fuze introduced a long time delay, which triggered between 60 and 80 hours after the fuze had armed.
References
Area denial weapons
Anti-personnel weapons
Italian inventions
World War II aerial bombs of Italy
Military equipment introduced from 1940 to 1944 | Thermos bomb | [
"Engineering"
] | 212 | [
"Area denial weapons",
"Military engineering"
] |
1,024,739 | https://en.wikipedia.org/wiki/Medical%20examiner | The medical examiner is an appointed official in some American jurisdictions that investigates deaths that occur under unusual or suspicious circumstances, to perform post-mortem examinations, and in some jurisdictions to initiate inquests. They are necessarily trained in pathology.
In the US, there are two death investigation systems: first, the coroner system based on English law; and second, the medical examiner system, which evolved from the coroner system during the latter half of the 19th century. The type of system varies across jurisdictions, with over 2,000 separate jurisdictions for investigating unnatural deaths. In 2002, 22 states had a medical examiner system, 11 states had a coroner system, and 18 states had a mixed system. Since the 1940s, the medical examiner system has gradually replaced the coroner system and serves about 48% of the US population. The largest medical examiner's office in the United States is located in Baltimore, Maryland.
The coroner is not necessarily a medical doctor. They may be a lawyer or a layperson. In the 19th century, the public became dissatisfied with lay coroners and demanded that the coroner be replaced by a physician. In 1918, New York City introduced the office of the Chief Medical Examiner and appointed physicians experienced in the field of pathology. In 1959, the medical subspecialty of forensic pathology was formally certified.
The types of death reportable to the system are determined by federal, state, or local laws. Commonly, these include violent, suspicious, sudden, and unexpected deaths, death when no physician or practitioner was present or treating the decedent, inmates in public institutions, those in custody of law enforcement, deaths during or immediately following therapeutic or diagnostic procedures or deaths due to neglect.
Duties
A medical examiner's duties vary by location, but typically include:
investigating human organs like the stomach, liver, and brain
determining cause of death
examining the condition of the body
studying tissue, organs, cells, and bodily fluids
issuing death certificates
maintaining death records
responding to deaths in mass disasters
working closely with law enforcement
identifying unknown dead
performing other functions depending on local law.
In some jurisdictions, a coroner performs these and other duties. It is common for a medical examiner to visit crime scenes or to testify in court. Medical examiners specialize in forensic knowledge and rely on this during their work. In addition to studying cadavers, they are also trained in toxicology, DNA technology and forensic serology (blood analysis). Pulling from each area of knowledge, a medical examiner is an expert in determining a cause of death. This information can help law enforcement solve cases and is crucial to their ability to track criminals in the event of a homicide or other related events.
Within the United States, there is a mixture of coroner and medical examiner systems, and in some states, dual systems. The requirements to hold office vary widely between jurisdictions.
Qualifications
United Kingdom
In England and Wales, a new statutory Medical Examiner system based in NHS Acute Trusts commenced in 2019 and is expected to be fully operational several years later. A medical examiner is always a medical doctor, whereas a coroner is a judicial officer.
Pilot studies in Sheffield and seven other areas, which involved medical examiners looking at more than 27,000 deaths since 2008, found 25% of hospital death certificates were inaccurate and 20% of causes of death were wrong. Suzy Lishman, president of the Royal College of Pathologists, said it was crucial there was "independent scrutiny of causes of death".
United States
Qualifications for medical examiners in the US vary from jurisdiction to jurisdiction. In Wisconsin, for example, some counties do not require individuals to have any special educational or medical training to hold this office. In most jurisdictions, a medical examiner is required to have a medical degree, although in many there is no requirement for specialized training in pathology. Other jurisdictions have stricter requirements, including additional education in pathology, law, and forensic pathology. Medical examiners are typically appointed officers.
Education
In the United States, medical examiners require extensive training in order to become experts in their field. After high school, the additional schooling may take 11–18 years. They must attend a college or university to earn a bachelor's degree sufficient for admission to medical school; biology is the most common major. A medical degree (MD or DO) is often required to become a medical examiner. To enter medical school, the MCAT (Medical College Admission Test) is usually required, after which medical school takes another four years, with the first two dedicated to academics and the remaining two used to gain clinical experience.
To become experts in pathology, specifically, additional training is required after medical school. The first step is to complete pathological forensic training. This usually consists of anatomic and clinical pathology training which takes anywhere from four to five years to complete. After this, the physician may complete an anatomic pathology residency or a fellowship. Before practicing as a medical examiner, the physician must also become board certified through the American Board of Pathology.
Career
The general job outlook for medical examiners in the United States is considered to be excellent. Remuneration varies by location, but is estimated to range between $105,000 and $500,000.
Shortage
In the United States, there are fewer than 500 board-certified forensic pathologists, but the National Commission on Forensic Science estimates the country needs 1,100–1,200 to perform the needed number of autopsies. The shortage is attributed to the nature of the work and the higher pay in other medical specialties. It has caused long delays in some states and resulted in fewer investigations and less thorough investigations in some cases.
See also
Coroner
List of fictional medical examiners
References
Further reading
See also the links at the bottom of the linked article.
Coroners
Forensic occupations
Health care occupations
Pathology
People involved with death and dying | Medical examiner | [
"Biology"
] | 1,166 | [
"Pathology"
] |
1,024,742 | https://en.wikipedia.org/wiki/D-Link | D-Link Systems, Inc. (formerly Datex Systems, Inc.) is a Taiwanese multinational manufacturer of networking hardware and telecoms equipments. It was founded in 1986 and headquartered in Taipei, Taiwan.
History
Datex Systems was founded in 1986 in Taipei, Taiwan.
In 1992, the company changed its name to D-Link.
D-Link went public and became the first networking company on the Taiwan Stock Exchange in 1994. It is now also publicly traded on the New York Stock Exchange.
In 1988, D-Link released the industry's first peer-to-peer LANSmart Network Operating System, able to run concurrently with early networking systems such as Novell's NetWare and TCP/IP, which most small network operating systems could not do at the time.
In 2007, it was the leading networking company in the small to medium business (SMB) segment worldwide, with a 21.9% market share. In March 2008, it became the market leader in Wi-Fi product shipments worldwide, with 33% of the total market. In 2007, the company was featured in the "Info Tech 100" list of the world's best IT companies. It was also ranked as the ninth best IT company in the world for shareholder returns by BusinessWeek. In the same year, D-Link released one of the first Wi-Fi Certified 802.11n draft 2.0 routers (DIR-655), which subsequently became one of the most successful draft 802.11n routers.
In May 2013, D-Link released its flagship draft 802.11ac Wireless AC1750 Dual-Band Router (DIR-868L), which at that point had attained the fastest-ever wireless throughput as tested by blogger Tim Higgins.
In April 2019, D-Link was named Gartner Peer Insights Customers’ Choice for Wired and Wireless LAN Access Infrastructure.
In June 2020, D-Link joined the Taiwan Steel Group.
In 2021, D-Link announced that it had become the agent for international information security brand Cyberbit in Taiwan, and it launched the new EAGLE PRO AI series transforming home Wi-Fi experiences.
In 2022, D-Link obtained the TRUSTe Privacy seal, certification of ISO/IEC 27001:2013 and BS 10012. It also obtained the GHG Part 1 certification of ISO 14064-1 2018. Moreover, D-Link established the "D-Link Group Scholarship" with National Taiwan University of Science and Technology to encourage foreign students to study in Taiwan.
Examples of D-Link products
Controversies
Backdoors
D-Link systematically includes backdoors in its equipment that compromise its users' security. One of the prominent examples is xmlset_roodkcableoj28840ybtide, which contains the substring roodkcab – the word "backdoor" written backwards.
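The naming claim is easy to verify with a short Python snippet (purely string manipulation; it performs no network access):

ua = "xmlset_roodkcableoj28840ybtide"
print(ua[::-1])                 # 'editby04882joelbackdoor_teslmx'
print("backdoor" in ua[::-1])   # True: the reversed string spells out "backdoor"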
In January 2013, version v1.13 of the firmware for the DIR-100 revA was reported to include a backdoor. If a specific user agent string was passed in an HTTP request to the router, normal authentication was bypassed. It was reported that this backdoor had been present for some time. This backdoor was, however, closed soon after with a security patch issued by the company.
On 17 June 2024, information about the CVE-2024-6045 backdoor was disclosed.
Vulnerabilities
In January 2010, it was reported that HNAP vulnerabilities had been found on some D-Link routers. D-Link was also criticized for its response, which was deemed confusing as to which models were affected and which downplayed the seriousness of the risk. However, the company issued fixes for these router vulnerabilities soon after.
Computerworld reported in January 2015 that ZynOS, a firmware used by some D-Link routers (as well as ZTE, TP-Link, and others), is vulnerable to DNS hijacking by an unauthenticated remote attacker, specifically when remote management is enabled. Affected models had already been phased out by the time the vulnerability was discovered, and the company also issued a firmware patch for affected devices for those still using older hardware.
Later in 2015, it was reported that D-Link leaked the private keys used to sign firmware updates for the DCS-5020L security camera and a variety of other D-Link products. The key expired in September 2015, but had been published online for seven months. The initial investigation did not produce any evidence that the certificates were abused.
Also in 2015, D-Link was criticized for more HNAP vulnerabilities, and worse, introducing new vulnerabilities in their "fixed" firmware updates.
On 5 January 2017, the Federal Trade Commission sued D-Link for failing to take reasonable steps to secure their routers and IP cameras, as D-Link marketing was misleading customers into believing their products were secure. The complaint also says security gaps could allow hackers to watch and record people on their D-Link cameras without their knowledge, target them for theft, or record private conversations. D-Link has denied these accusations and has enlisted Cause of Action Institute to file a motion against the FTC for their "baseless" charges. On 2 July 2019, the case was settled with D-Link not found to be liable for any of the alleged violations. D-Link agreed to continue to make security enhancements in its software security program and software development, with biennial, independent, third-party assessments, approved by the FTC.
On 18 January 2021 Sven Krewitt, researcher at Risk Based Security, discovered multiple pre-authentication vulnerabilities in D-Link's DAP-2020 Wireless N Access Point product. D-Link confirmed these vulnerabilities in a support announcement and provided a patch to hot-fix the product's firmware.
In April 2024, D-Link acknowledged a security vulnerability that affected all hardware revisions of four models of network attached storage devices. Because the products have reached their end of service life date, the company stated in a release that the products are no longer supported and that a fix would not be offered.
Server misuse
In 2006, D-Link was accused of NTP vandalism, when it was found that its routers were sending time requests to a small NTP server in Denmark, incurring thousands of dollars of costs to its operator. D-Link initially refused to accept responsibility. Later, D-Link products were found also to be abusing other time servers, including some operated by the US military and NASA. However, no malicious intent was discovered, and eventually D-Link and the site's owner Poul-Henning Kamp were able to agree to an amicable settlement regarding access to Kamp's GPS.Dix.dk NTP Time Server site, with existing products gaining authorized access to Kamp's server.
GPL violation
On 6 September 2006, the gpl-violations.org project prevailed in court litigation against D-Link Germany GmbH regarding D-Link's inappropriate and copyright infringing use of parts of the Linux kernel. D-Link Germany GmbH was ordered to pay plaintiff's costs. Following the judgement, D-Link agreed to a cease and desist request, ending distribution of the product, and paying legal costs.
See also
List of companies of Taiwan
References
Notes
Citations
External links
Manufacturing companies established in 1986
Computer companies established in 1986
Taiwanese companies established in 1986
Computer companies of Taiwan
Computer hardware companies
Companies based in Taipei
1994 initial public offerings
Companies listed on the Taiwan Stock Exchange
Electronics companies of Taiwan
Multinational companies headquartered in Taiwan
Networking companies
Networking hardware
Networking hardware companies
Routers (computing)
Taiwanese brands
Telecommunications equipment vendors | D-Link | [
"Technology",
"Engineering"
] | 1,582 | [
"Computer hardware companies",
"Wireless networking",
"Computer networks engineering",
"D-Link",
"Networking hardware",
"Computers"
] |
1,024,815 | https://en.wikipedia.org/wiki/CYP2D6 | Cytochrome P450 2D6 (CYP2D6) is an enzyme that in humans is encoded by the CYP2D6 gene. CYP2D6 is primarily expressed in the liver. It is also highly expressed in areas of the central nervous system, including the substantia nigra.
CYP2D6, a member of the cytochrome P450 mixed-function oxidase system, is one of the most important enzymes involved in the metabolism of xenobiotics in the body. In particular, CYP2D6 is responsible for the metabolism and elimination of approximately 25% of clinically used drugs, via the addition or removal of certain functional groups – specifically, hydroxylation, demethylation, and dealkylation. CYP2D6 also activates some prodrugs. The enzyme also metabolizes several endogenous substances, such as N,N-dimethyltryptamine, hydroxytryptamines, neurosteroids, and both m-tyramine and p-tyramine, which it metabolizes into dopamine in the brain and liver.
Considerable variation exists in the efficiency and amount of CYP2D6 enzyme produced between individuals. Hence, for drugs that are metabolized by CYP2D6 (that is, are CYP2D6 substrates), certain individuals will eliminate these drugs quickly (ultrarapid metabolizers) while others slowly (poor metabolizers). If a drug is metabolized too quickly, it may decrease the drug's efficacy, while if the drug is metabolized too slowly, toxicity may result. So, the dose of the drug may have to be adjusted to take into account the speed at which it is metabolized by CYP2D6. Individuals who exhibit an ultrarapid metabolizer phenotype metabolize prodrugs, such as codeine or tramadol, more rapidly, leading to higher than therapeutic levels. A case study of the death of an infant breastfed by an ultrarapid metabolizer mother taking codeine impacted postnatal pain relief clinical practices, but was later debunked. These drugs may also cause serious toxicity in ultrarapid metabolizer patients when used to treat other post-operative pain, such as after tonsillectomy. Other drugs may function as inhibitors of CYP2D6 activity or inducers of CYP2D6 enzyme expression, leading to decreased or increased CYP2D6 activity respectively. If such a drug is taken at the same time as a second drug that is a CYP2D6 substrate, the first drug may affect the elimination rate of the second through what is known as a drug-drug interaction.
Gene
The gene is located on chromosome 22q13.1, near two cytochrome P450 pseudogenes (CYP2D7P and CYP2D8P). Among them, CYP2D7P originated from CYP2D6 in the stem lineage of great apes and humans, while CYP2D8P originated from CYP2D6 in the stem lineage of catarrhines and New World monkeys. Alternatively spliced transcript variants encoding different isoforms have been found for this gene.
Genotype/phenotype variability
CYP2D6 shows the largest phenotypical variability among the CYPs, largely due to genetic polymorphism. The genotype accounts for normal, reduced, and non-existent CYP2D6 function in subjects. Pharmacogenomic tests are now available to identify patients with variations in the CYP2D6 allele and have been shown to have widespread use in clinical practice.
The CYP2D6 function in any particular subject may be described as one of the following:
poor metabolizer – little or no CYP2D6 function
intermediate metabolizers – metabolize drugs at a rate somewhere between the poor and extensive metabolizers
extensive metabolizer – normal CYP2D6 function
ultrarapid metabolizer – multiple copies of the CYP2D6 gene are expressed, so greater-than-normal CYP2D6 function occurs
A patient's CYP2D6 phenotype is often clinically determined via the administration of debrisoquine (a selective CYP2D6 substrate) and subsequent plasma concentration assay of the debrisoquine metabolite (4-hydroxydebrisoquine).
The type of CYP2D6 function of an individual may influence the person's response to different doses of drugs that CYP2D6 metabolizes. The nature of the effect on the drug response depends not only on the type of CYP2D6 function, but also on the extent to which processing of the drug by CYP2D6 results in a chemical that has an effect that is similar, stronger, or weaker than the original drug, or no effect at all. For example, if CYP2D6 converts a drug that has a strong effect into a substance that has a weaker effect, then poor metabolizers (weak CYP2D6 function) will have an exaggerated response to the drug and stronger side-effects; conversely, if CYP2D6 converts a different drug into a substance that has a greater effect than its parent chemical, then ultrarapid metabolizers (strong CYP2D6 function) will have an exaggerated response to the drug and stronger side-effects. Information about how human genetic variation of CYP2D6 affects response to medications can be found in databases such as PharmGKB and the Clinical Pharmacogenetics Implementation Consortium (CPIC).
Genetic basis of variability
The variability in metabolism is due to multiple different polymorphisms of the CYP2D6 allele, located on chromosome 22. Subjects possessing certain allelic variants will show normal, decreased, or no CYP2D6 function, depending on the allele. Pharmacogenomic tests are now available to identify patients with variations in the CYP2D6 allele and have been shown to have widespread use in clinical practice. The current known alleles of CYP2D6 and their clinical function can be found in databases such as PharmVar.
Ethnic factors in variability
Ethnicity is a factor in the occurrence of CYP2D6 variability. Reduced activity of the liver cytochrome CYP2D6 enzyme occurs in approximately 7–10% of white populations, and is lower in most other ethnic groups, such as Asians and African-Americans, at about 2% each. A complete lack of CYP2D6 enzyme activity, wherein the individual has two copies of polymorphisms that result in no CYP2D6 activity at all, occurs in about 1–2% of the population. The occurrence of CYP2D6 ultrarapid metabolizers appears to be greater among Middle Eastern and North African populations. In Ethiopia, a particularly high percentage (30%) of the population are ultrarapid metabolizers. As a result, the analgesic codeine is banned in Ethiopia due to the high rate of adverse events associated with ultrarapid metabolism of codeine in this population.
Caucasians of European descent predominantly (around 71%) carry functional CYP2D6 alleles, producing extensive metabolism, while functional alleles represent only around 50% of the allele frequency in populations of Asian descent.
This variability is accounted for by differences in the prevalence of various CYP2D6 alleles among the populations: approximately 10% of whites are intermediate metabolizers, due to decreased CYP2D6 function, because they appear to have one (heterozygous) non-functional CYP2D6*4 allele, while approximately 50% of Asians possess the decreased-function CYP2D6*10 allele.
Ligands
Following is a table of selected substrates, inducers and inhibitors of CYP2D6. Where classes of agents are listed, there may be exceptions within the class.
Inhibitors of CYP2D6 can be classified by their potency, such as the following (a short illustrative sketch appears after this list):
Strong inhibitor being one that causes at least a 5-fold increase in the plasma AUC values of sensitive substrates metabolized through CYP2D6, or more than 80% decrease in clearance thereof.
Moderate inhibitor being one that causes at least a 2-fold increase in the plasma AUC values of sensitive substrates metabolized through CYP2D6, or 50-80% decrease in clearance thereof.
Weak inhibitor being one that causes at least a 1.25-fold but less than 2-fold increase in the plasma AUC values of sensitive substrates metabolized through CYP2D6, or 20-50% decrease in clearance thereof.
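A minimal helper (illustrative only; the function name is invented, and the thresholds simply restate the AUC cut-offs above, applied to an observed fold-increase in the plasma AUC of a sensitive CYP2D6 substrate) might look like:

def classify_cyp2d6_inhibitor(auc_fold_increase: float) -> str:
    """Classify inhibitor potency from the fold-increase in plasma AUC."""
    if auc_fold_increase >= 5:
        return "strong"
    if auc_fold_increase >= 2:
        return "moderate"
    if auc_fold_increase >= 1.25:
        return "weak"
    return "below the weak-inhibitor cut-off"

for ratio in (6.0, 2.4, 1.3, 1.1):
    print(ratio, "->", classify_cyp2d6_inhibitor(ratio))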
Dopamine biosynthesis
References
Further reading
External links
Flockhart Lab Cyp2D6 Substrates Page at IUPUI
PharmGKB: Annotated PGx Gene Information for CYP2D6
Pharmvar Gene:CYP2D6
2
EC 1.14.14
Amphetamine
Pharmacogenomics | CYP2D6 | [
"Chemistry"
] | 1,921 | [
"Pharmacology",
"Pharmacogenomics"
] |
1,025,034 | https://en.wikipedia.org/wiki/Dakuten%20and%20handakuten | The , colloquially , is a diacritic most often used in the Japanese kana syllabaries to indicate that the consonant of a mora should be pronounced voiced, for instance, on sounds that have undergone rendaku (sequential voicing).
The handakuten (半濁点), colloquially maru (丸), is a diacritic used with kana for morae pronounced with [h] or [ɸ] to indicate that they should instead be pronounced with [p].
Glyphs
The dakuten resembles a quotation mark, while the handakuten is a small circle, similar to a degree sign, both placed at the top right corner of a kana character:
Both the dakuten and handakuten glyphs are drawn identically in hiragana and katakana scripts. The combining characters are rarely used in full-width Japanese characters, as Unicode and all common multibyte Japanese encodings provide precomposed glyphs for all possible dakuten and handakuten character combinations in the standard hiragana and katakana ranges. However, combining characters are required in half-width kana, which does not provide any precomposed characters in order to fit within a single byte.
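As a concrete illustration of the precomposed-versus-combining distinction (standard-library Python; the code points used are the usual ones for か, が and the combining marks):

import unicodedata

base = "\u304B"         # か  HIRAGANA LETTER KA
combining = "\u3099"    # COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK (dakuten)
precomposed = "\u304C"  # が  HIRAGANA LETTER GA

composed = unicodedata.normalize("NFC", base + combining)
print(composed == precomposed)                    # True: NFC folds the pair into が
print(len(base + combining), len(precomposed))    # 2 code points vs. 1
print(unicodedata.name("\u309A"))                 # the combining handakuten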
The similarity between the dakuten and quotation marks (") is not a problem, as written Japanese uses corner brackets (「」).
Phonetic shifts
The following summarizes the phonetic shifts indicated by the dakuten and handakuten: with the dakuten, k → g, s → z, t → d, and h → b; with the handakuten, h → p. Literally, morae with dakuten are dakuon (濁音, "muddy sounds"), while those without are seion (清音, "clear sounds"). However, the handakuten (lit. "half-muddy mark") does not follow this pattern.
Handakuten on ka, ki, ku, ke, ko (rendered as か゚, き゚, く゚, け゚, こ゚) represent the sound of ng in singing ([ŋ]), which is an allophone of /g/ in many dialects of Japanese. They are not used in normal Japanese writing, but may be used by linguists and in dictionaries (or to represent characters in fiction who speak that way). This is called bidakuon (鼻濁音). Another rare application of handakuten is on the r-series, to mark them as explicitly l: ら゚, り゚, る゚, れ゚, ろ゚, and so forth. This is only done in technical or pedantic contexts, as many Japanese speakers cannot tell the difference between r and l. Additionally, linguists sometimes use to represent in cases when a speaker pronounces at the beginning of a word as a moraic nasal.
In katakana only, the dakuten may also be added to the character u and a small vowel character to create a [v] sound, as in ヴァ va. However, a hiragana version of this character also exists (ゔ), with somewhat sporadic compatibility across platforms. As [v] does not exist in Japanese, this usage applies only to some modern loanwords and remains relatively uncommon, and e.g. Venus is typically transliterated as ビーナス (bīnasu) instead of ヴィーナス (vīnasu). Japanese speakers, however, pronounce both the same, with [b] or [β], an occasional allophone of intervocalic /b/.
An even less common method is to add dakuten to the w-series, reviving the mostly obsolete characters for () and (). is represented by using /u/, as above; becomes despite its normally being silent. Precomposed characters exist for this method as well ( ), although most IMEs do not have a convenient way to enter them.
In Ainu texts, handakuten can be used with the katakana to make it a /t͡s/ sound, ce [t͡se] (which is interchangeable with ), and is used with small fu to represent a final p, . In addition, handakuten can be combined with either katakana or (tsu and to) to make a [tu̜] sound, or .
In Miyakoan, handakuten can be used with (normally [i]) to represent the vowel .
In informal writing, dakuten is occasionally used on vowels to indicate a shocked or strangled articulation; for example, on or . Dakuten can also be occasionally used with to indicate a guttural hum, growl, or similar sound.
Kana iteration marks
The dakuten can also be added to hiragana and katakana iteration marks, indicating that the previous kana is repeated with voicing:
Both signs are relatively rare, but can occasionally be found in personal names such as Misuzu () or brand names such as Isuzu (いすゞ). In these cases the pronunciation is identical to writing the kana out in full. A longer, multi-character iteration mark called the kunojiten (), only used in vertical writing, may also have a dakuten added ().
Other communicative representations
Representations of Dakuten
Representations of Handakuten
Voiced morae and semi-voiced morae do not have independent names in radiotelephony and are signified by the unvoiced name followed by "ni dakuten" or "ni handakuten".
Full Braille representation
History
The kun'yomi pronunciation of the character (daku in on'yomi) is nigori; hence the dakuten may also be called the nigori-ten. This character, meaning "muddy", stems from historical Chinese phonology, where consonants were traditionally classified as "fully clear" (, voiceless unaspirated obstruent), "partly clear" (, voiceless aspirated obstruent), "fully muddy" (, voiced obstruent) and "partly muddy" (, voiced sonorant) (see Middle Chinese § Initials and w:zh:清濁音). Unlike in Chinese where "clear" and "muddy" were phonological, in Japanese, these terms are purely orthographic: a is simply a kana with a "muddy mark", or a dakuten; a or is simply a kana with a "half muddy mark", or a handakuten; a is any other kana without either of these marks. In fact, the "partly clear/half muddy" consonant in Japanese would be considered "fully clear" in Chinese, while "clear" Japanese consonants such as , , , and would be "partly muddy" in Chinese. Meiji-era descriptions of the Japanese "sound" system (either the actual phonology, or the orthography) in terms of "clear" and "muddy" always referenced the kana spelling and the two diacritics dakuten and handakuten.
Dakuten were used sporadically since the start of written Japanese; their use tended to become more common as time went on. The modern practice of using dakuten in all cases of voicing in all writing only came into being in the Meiji period.
The handakuten is an innovation by Portuguese Jesuits, who first used it in the Rakuyōshū. These Jesuits needed to accurately transcribe Japanese sounds, which the Japanese tended to neglect by making no distinction between /h/, /b/ and /p/ in their own writing.
See also
Tsu (kana)
Sokuon
Dagesh (Hebrew diacritic)
References
External links
and on Japanese Wikipedia
(Trans.: Phonetic Kana with Dakuten) and (Trans.: Phonetic Kana with Handakuten)
Kana
Japanese phonology
Japanese writing system terms
Diacritics | Dakuten and handakuten | [
"Mathematics"
] | 1,544 | [
"Symbols",
"Diacritics"
] |
1,025,066 | https://en.wikipedia.org/wiki/Far%20infrared | Far infrared (FIR) or long wave refers to a specific range within the infrared spectrum of electromagnetic radiation. It encompasses radiation with wavelengths ranging from 15 μm (micrometers) to 1 mm, which corresponds to a frequency range of approximately 20 THz to 300 GHz. This places far infrared radiation within the CIE IR-B and IR-C bands. The longer wavelengths of the FIR spectrum overlap with a range known as terahertz radiation. Different sources may use different boundaries to define the far infrared range. For instance, astronomers often define it as wavelengths between 25 μm and 350 μm. Infrared photons possess significantly lower energy than photons in the visible light spectrum, with tens to hundreds of times less energy.
Applications
Astronomy
Objects within a temperature range of approximately 5 K to 340 K emit radiation in the far infrared range as a result of black-body radiation, in accordance with Wien's displacement law. This characteristic is utilized in the observation of interstellar gases, which are frequently associated with the formation of new stars.
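As a rough worked example of that statement (Wien's displacement law, peak wavelength = b / T with b ≈ 2.898 × 10⁻³ m·K; the temperatures are simply the end points quoted above plus one intermediate value):

b = 2.897771955e-3  # Wien's displacement constant, m*K

for T in (5, 100, 340):  # temperatures in kelvin
    peak_um = b / T * 1e6
    print(f"T = {T:>3} K  ->  peak wavelength ~ {peak_um:.1f} micrometres")

# 5 K peaks near 580 um and 340 K near 8.5 um, roughly spanning the
# 15 um - 1 mm far-infrared band discussed above.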
The brightness observed in far infrared images of the center of the Milky Way galaxy arises from the high density of stars in that region, which heats the surrounding dust and induces radiation emission in the far infrared spectrum. Excluding the center of the Milky Way galaxy, the galaxy M82 is the most prominent far-infrared object in the sky, with its central region emitting amounts of far infrared light equivalent to the combined emissions of all the stars in the Milky Way. The source responsible for heating the dust at the center of M82 remains unknown.
Human body detection
Certain human proximity sensors utilize passive infrared sensing within the far infrared wavelength range to detect the presence of stationary and/or moving human bodies.
Heating
Infrared (IR) heating is a method of heating an area that can give more efficient results than gas or electric convection heating. Studies show IR heats faster, more uniformly, and more efficiently than a traditional convection system. Increasingly, IR heating is utilised as part of scheme designs to achieve spot, zonal and smart heating within occupation zones within a building. Though multiple applications of long wave or FIR heating exist, a common implementation is the radiant panel heater. Radiant panel heaters typically contain a grid of resistance wire or ribbons sandwiched between a thin plate of electrical insulation on the emitting side and thermal insulation on the back side. Owing to their size and flexibility, infrared panel heaters can be fitted on walls and ceilings for added space-saving benefits. Electric FIR panel heaters are shown to have up to 98.5% efficiency from supply to production of heat, with satisfactory thermal comfort, thermostatic control, and low initial investment.
Therapeutic modality
Researchers have observed that among all forms of radiant heat, only far-infrared radiation transfers energy solely in the form of heat that can be sensed by the human body. They have found that this type of radiant heat can penetrate the skin up to a depth of approximately . In the field of biomedicine, experiments have been conducted using fabrics woven with FIR-emitting ceramics embedded in their fibers. These studies have indicated a potential delay in the onset of fatigue induced by muscle contractions in participants. The researchers have suggested that the emission of far-infrared radiation by these ceramics (referred to as cFIR) could facilitate cellular repair.
Certain heating pads have been marketed to provide "far infrared" therapy, which is claimed to offer deeper penetration. However, the infrared radiation emitted by an object is determined by its temperature. Therefore, all heating pads emit the same type of infrared radiation if they are at the same temperature. Higher temperatures will result in greater infrared radiation, but caution must be exercised to avoid burns.
References
External links
Infrared | Far infrared | [
"Physics"
] | 755 | [
"Infrared",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
1,025,249 | https://en.wikipedia.org/wiki/Government%20Open%20Systems%20Interconnection%20Profile | The Government Open Systems Interconnection Profile (GOSIP) was a specification that profiled open networking products for procurement by governments in the late 1980s and early 1990s.
Timeline
1988 - GOSIP: Government Open Systems Interconnection Profile published by CCTA, an agency of UK government
1988 - UK's CCTA commences work with France and West Germany on European Procurement Handbook (EPHOS)
1990 - The US specification requiring Open Systems Interconnection (OSI) protocols was first published as Federal Information Processing Standards document FIPS 146-1. The requirement for US Government vendors to demonstrate their support for this profile led them to join the formal interoperability and conformance testing for networking products, which had been done by industry professionals at the annual InterOp show since 1980.
1990 - Publication of European Procurement Handbook (EPHOS), intended to be a European GOSIP
1991 - 4th and final version of UK GOSIP released
1993 - Australia and New Zealand GOSIP Version 3 - 1993 Government Open Systems Interconnection Profile
1995 - FIPS 146-2 allowed "...other specifications based on open, voluntary standards such as those cited in paragraph 3 ("...such as those developed by the Internet Engineering Task Force (IETF)... and the International Telecommunications Union, Telecommunication Standardization Sector (ITU–T))"
In practice, from 1995 interest in OSI implementations declined, and worldwide the deployment of standards-based networking services since have been predominantly based on the Internet protocol suite. However, the Defense Messaging System continued to be based on the OSI protocols X.400 and X.500, due to their integrated security capabilities.
See also
OSI model
ISO Development Environment (ISODE)
Protocol Wars
References
Computer standards | Government Open Systems Interconnection Profile | [
"Technology"
] | 362 | [
"Computer standards"
] |
1,025,259 | https://en.wikipedia.org/wiki/Spark%20chamber | A spark chamber is a particle detector: a device used in particle physics for detecting electrically charged particles. They were most widely used as research tools from the 1930s to the 1960s and have since been superseded by other technologies such as drift chambers and silicon detectors. Today, working spark chambers are mostly found in science museums and educational organisations, where they are used to demonstrate aspects of particle physics and astrophysics.
Spark chambers consist of a stack of metal plates placed in a sealed box filled with a gas such as helium, neon or a mixture of the two. When a charged particle from a cosmic ray travels through the box, it ionises the gas between the plates. Ordinarily this ionisation would remain invisible. However, if a high enough voltage can be applied between each adjacent pair of plates before that ionisation disappears, then sparks can be made to form along the trajectory taken by the ray, and the cosmic ray in effect becomes visible as a line of sparks. In order to control when this voltage is applied, a separate detector (often containing a pair of scintillators placed above and below the box) is needed. When this trigger senses that a cosmic ray has just passed, it fires a fast switch to connect the high voltage to the plates. The high voltage cannot be connected to the plates permanently, as this would lead to arc formation and continuous discharging.
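A toy sketch of the coincidence-trigger logic described above (not taken from any real detector; the window length and hit times are invented for illustration):

COINCIDENCE_WINDOW_NS = 30.0  # how close in time the two scintillator hits must be

def should_fire_hv(top_hit_ns, bottom_hit_ns):
    """Pulse the plates only when both paddles fire within the window."""
    if top_hit_ns is None or bottom_hit_ns is None:
        return False
    return abs(top_hit_ns - bottom_hit_ns) <= COINCIDENCE_WINDOW_NS

print(should_fire_hv(1000.0, 1012.0))  # True  -> apply the high voltage, sparks trace the track
print(should_fire_hv(1000.0, None))    # False -> no through-going particle, plates stay quiet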
As research devices, spark chamber detectors have lower resolution than bubble chamber detectors. However they can be made highly selective with the help of auxiliary detectors, making them useful in searching for very rare events.
Related devices
A streamer chamber is a type of detector closely related to the spark chamber. In a spark chamber one looks at a stack of parallel plates edge-on. For this reason, best viewing is afforded when the particle comes in perpendicularly to the plates. A streamer chamber, in contrast, typically has only two plates, at least one of which is transparent (e.g. wire mesh or a conductive glass). Particles come in roughly parallel to the plane of these plates. A much shorter high-voltage pulse is used than with a spark chamber, so there is insufficient time for sparks to form. Instead very dim streamers of ionised gas are formed. These can be seen when image enhancement is applied.
See also
Electric spark
Cloud chamber
Bubble chamber
External links
University of Cambridge Spark Chambers
Spark Chamber Project - McGill University
"How does a spark chamber work?" - From an exhibitor at the 2011 Royal Society Summer Science Exhibition.
"How does a spark chamber work?" - University of Birmingham
Enhanced image of streamers taken in a streamer chamber
Particle detectors | Spark chamber | [
"Technology",
"Engineering"
] | 532 | [
"Particle detectors",
"Measuring instruments"
] |
1,025,265 | https://en.wikipedia.org/wiki/Gender%20of%20connectors%20and%20fasteners | In electrical and mechanical trades and manufacturing, each half of a pair of mating connectors or fasteners is conventionally assigned the designation male or female, a distinction referred to as its gender. The female connector is generally a receptacle that receives and holds the male connector. Alternative terminology such as plug and socket or jack are sometimes used, particularly for electrical connectors.
The assignment is a direct analogy with male and female genitalia, the part bearing one or more protrusions or which fits inside the other being designated male, in contrast to the part containing the corresponding indentations, or fitting outside the other, being designated female. Extension of the analogy results in the verb to mate being used to describe the process of connecting two corresponding parts together.
In some cases (notably electrical power connectors), the gender of connectors is selected according to rigid rules, to enforce a sense of one-way directionality (e.g. a flow of power from one device to another). This gender distinction is implemented to enhance safety or ensure proper functionality by preventing unsafe or non-functional configurations from being set up.
In terms of mathematical graph theory, an electrical power distribution network made up of plugs and sockets is a directed tree, with the directionality arrows corresponding to the female-to-male transfer of electrical power through each mated connection. This is an example where male and female connectors have been deliberately designed and assigned to physically enforce a safe network topology.
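A small sketch of that graph-theoretic view (device names invented for illustration; plain Python, treating each mated connection as a directed edge from the power-giving female side to the power-receiving male side):

from collections import defaultdict

edges = [
    ("wall outlet", "power strip"),
    ("power strip", "lamp"),
    ("power strip", "laptop adapter"),
    ("laptop adapter", "laptop"),
]

nodes = {n for edge in edges for n in edge}
parents, children = {}, defaultdict(list)
for src, dst in edges:
    # in a directed tree every device draws power through exactly one plug
    assert dst not in parents, f"{dst} would be fed from two sources"
    parents[dst] = src
    children[src].append(dst)

roots = nodes - parents.keys()

# every device must be reachable from the single supply
reachable, stack = set(), list(roots)
while stack:
    n = stack.pop()
    if n not in reachable:
        reachable.add(n)
        stack.extend(children[n])

print("supply (root):", roots)  # {'wall outlet'}
print("is a directed tree:", len(roots) == 1 and reachable == nodes)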
In other contexts, such as plumbing, one-way flow is not enforced through connector gender assignment. Flows through piping networks can be bidirectional, as in underground water distribution networks which have designed-in redundancy. In plumbing situations where one-way flow is desired, it is implemented through other means (e.g. air gaps or one-way check valves), and not through male-female gender schemes.
Early mentions of the metaphor
The Talmud describes arrow heads and mating shafts as potentially being either male or female, depending on their construction, i.e. a prong on a male arrow head fits into a hollowed-out shaft and vice versa. This is owing to a prohibition on using a female shaft as s'chach, because of its susceptibility, as a receptacle, to impurity.
18th-century dictionaries and encyclopedias mention male and female screws or cochleae. A 1736 builder's manual mentions screw genders as metaphors for convex and concave shapes.
Mechanical fasteners
In mechanical design, the prototypical "male" component is a threaded bolt, but an alignment post, a mounting boss, or a sheet metal tab connector can also be considered as male. Correspondingly, a threaded nut, an alignment hole, a mounting recess, or sheet metal slot connector is considered to be female.
While some mechanical designs are "one-off" custom setups not intended to be repeated, there is an entire fastener industry devoted to manufacturing mass-produced or semi-custom components. To avoid unnecessary confusion, conventional definitions of fastener gender have been defined and agreed upon.
Modular construction toys
Several common construction toys embody gendered (and in some cases, genderless) mechanical interconnections. For example, the canonical Lego plastic blocks have "female" indentations on the lower surface, and "male" bosses or protrusions on the upper surfaces. Meccano and Erector have many gendered connections, starting with the nut-and-bolt fasteners they use frequently.
Stickle bricks, using interlocking plastic protrusions, are effectively genderless (while nonetheless maintaining an asymmetry). Lincoln logs use a very simple form of genderless connections. Kapla or KEVA planks are extremely simple genderless systems interconnected only by gravity.
Mathematicians have begun to classify well-known construction sets using group theory to study the combinatoric possibilities of structures that can be built.
Plumbing
In plumbing fittings, the "M" or "F" usually comes at the beginning rather than the end of the abbreviated designation. For example:
MIPT denotes male iron pipe thread;
FIPT denotes female iron pipe thread.
A short length of pipe having an MIP thread at both ends is sometimes called a nipple. A short pipe fitting having an FIP thread at both ends is sometimes called a coupling.
Hermaphroditic connections, which may include both male and female elements in a single unit, are used for some specialized tubing fittings, such as Storz fire hose connectors. A picture of such fittings appears below. Interchangeable garden hose fittings made by GEKA are also hermaphroditic, relying on a rubber gasket to make the final connection.
Downspout
Downspouts (also called downpipes, rain conductors, or leaders) are used to convey rainwater from roof gutters to the ground through hollow pipes or tubes. These tubes usually come in sections, joined by inserting the male end (often crimped with a special tool to slightly reduce its size) into the female end of the next section. These connections are usually not sealed or caulked, instead relying on gravity to move the rainwater from the male end and into the receiving female connection located directly below.
Ductwork
Sheet metal ductwork for conveying air in HVAC systems typically uses gendered connections. Typically, the airflow through a ductwork connection is from male to female. However, since one-way flow is implemented by forced-air fans or blowers, "backwards" gendered connections can be seen frequently in some systems, since all connections are typically sealed with duct sealing mastic or tape to prevent leakage anyway. The flow convention is usually loosely adhered to for simplicity of design, and to reduce the number of gender changer fittings required, but exceptions are made whenever expedient.
Electrical and electronic
Although the gender of tubing and plumbing fittings is usually obvious, this may not be true of electrical connectors because of their more complex and varying constructions. Instead, connector gender is conventionalized and thus can be somewhat obscure to the uninitiated. For example, the female D-subminiature connector body projects outward from the mounting plane of the chassis, and this protrusion could be erroneously construed as male. Instead, the "maleness" of the D-subminiature connectors is defined by specific presence of male pins, rather than by the protrusion of the connector (this is also true for many other pin-based connectors like XLR). The male/female distinction is more obvious with ring crimp lug connectors which are placed around a screw post, but again with spade or split ring crimp lug connectors the end alone is not obviously female.
Further confusion can be caused by the term "jack", which is used for both female and male connectors and typically refers to the fixed (panel) side of a connector pair. IEEE STD 100, IEEE-315-1975 and IEEE 200-1975 (replaced by ASME Y14.44-2008) define "plug" and "jack" by location or mobility, rather than gender.
A connector in a fixed location is a "jack", and a moveable connector is a "plug". The distinction is relative, so a portable radio is considered stationary compared to the cable from the headphones; the radio has a jack, and the headphone cable has a plug. Where the relationship is equal, such as when two flexible cables are connected (an "inline" connection), each is considered a plug. Jacks use the reference designator prefix of J and plugs use the reference designator prefix of P.
It is common practice to use female connectors for jacks, so the informal gender-based usage often happens to agree with the functional description of the technical standards. However, this is not always the case; often-seen exceptions include a computer's AC Power Inlet and EIA232 DE9 Serial Port, or the male coaxial power jacks for connecting external power adapters to portable equipment.
To summarize, it is considered best practice to use "male" and "female" for connector gender, and "plug" and "jack" for connector function or mobility.
Variant usages
In the United Kingdom, many Commonwealth countries, and some non-English-speaking countries such as France, the word "jack" may refer to the plug on the end of a removable cable. These connectors were originally referred to as "jack plugs", or plugs intended to be mated with fixed receptacles, or sockets (which North Americans would call "jacks"), but the second word was dropped. This variant usage is in direct contradiction to common usage and official standards in North America.
For example, in the UK, the connector on the end of a headphone lead is known as a "jack", that plugs into a socket on the main unit. The same usage also generally occurs in Italy, where the English word "jack" is commonly used to indicate the connector on the end of a headphone lead.
In Romania, female connectors are known as mamă ("mother") and male connectors are tată ("father").
Abbreviations and alternate terminology
The standard letters "M" and "F" are commonly used in part numbers to designate connector gender. For example, in Switchcraft XLR microphone or hydrophone connectors, the part numbers are denoted as follows:
A3F = Audio 3-pin female connector;
A3M = Audio 3-pin male connector.
The terms plug, pin, and prong are also often used for "male" connectors, and receptacle, socket, and slot are used for "female" connectors. In many cases these terms are more common than male and female, especially in documentation intended for the non-specialist. These nearly synonymous terms can cause a fair amount of confusion when the designations are shortened in labels.
For example, a female high-density D-subminiature connector with a size 1 shell can be named DE15F or DE15S (see accompanying pictures). Both terms mean the same thing but could be construed to be completely different items. Similarly, a male standard-density D-sub with a size 1 shell can be named DE9M or DE9P; a female standard-density D-sub with a size 2 shell can be named DA15F or DA15S; a male high-density D-sub with a size 3 shell can be named DB44M or DB44P; and so forth.
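The overlap between these designations lends itself to a simple lookup. The following short Python sketch (illustrative only; the table and helper function are assumptions for this example, not part of any established library) decodes a few D-subminiature part numbers into shell size, pin count and gender, treating the "F"/"S" and "M"/"P" suffixes as synonyms:

# Illustrative decoder for D-subminiature designations such as "DE15F" or "DB44P".
# The shell-size letters and the F/S and M/P suffix synonyms follow the text above.
SHELL_SIZES = {"E": 1, "A": 2, "B": 3, "C": 4, "D": 5}    # shell letter -> size number
GENDER_SUFFIX = {"F": "female", "S": "female",            # socket/female synonyms
                 "M": "male", "P": "male"}                 # pin/plug/male synonyms

def decode_dsub(designation):
    """Split a designation like 'DE15F' into shell size, pin count and gender."""
    shell = SHELL_SIZES[designation[1]]
    pins = int(designation[2:-1])
    gender = GENDER_SUFFIX[designation[-1].upper()]
    return {"shell_size": shell, "pins": pins, "gender": gender}

print(decode_dsub("DE15F") == decode_dsub("DE15S"))   # True: different labels, same connector
print(decode_dsub("DB44P"))   # {'shell_size': 3, 'pins': 44, 'gender': 'male'}

Such a mapping makes explicit that the apparent difference between, say, DE15F and DE15S is purely one of naming convention.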
Gender selection in electronic design
Electronic designers often select female jack connectors for fixed mounting on electronic equipment they design. This is usually done because female connectors are more resistant to damage or contamination, by virtue of their concealed or recessed electrical contacts. A damaged motherboard connector can result in the scrapping of an expensive piece of electronic equipment. The risk of damage is reduced by relegating the more exposed male contacts to connecting cables, which can be repaired or replaced at lower cost.
For example, in an RS-232 serial port, the male connector is more mechanically fragile than the female connector. Cost and reliability considerations probably drove the design decision to use female jack connectors on many computer terminals (and some personal computers) for the serial port, despite being in direct violation of the connector gender convention explicitly specified in the RS-232 standard for "DTE" (data terminal equipment) connections. This confusing reversal of the RS-232 connector gender convention has caused many hours of frustration for ill-informed end users, as they tried to troubleshoot non-functional serial port equipment connections.
Safety
In electrical connections where voltage or current is sufficient to cause injury, the part permanently connected to the power source is invariably female, with concealed contacts, to prevent inadvertent touching of live conductors by people or animals, or by conductive items that may cause a short circuit. A male plug, with fully exposed protruding contacts, is installed on the cord of (or directly onto) the device receiving the power. Devices that need to be robust against mechanical damage may use a special male IEC 60320 C14 connector (see the gallery above), which is recessed below the surface of a mounting panel, providing the desired physical protection while conforming to safety regulations.
In the case of consumer-level AC power, connector gender is used to implicitly enforce safe use of power connectors. Because of this consideration, it is illegal under electrical code to make or use any gender changers to connect AC line power to consumer-level equipment.
A double-ended male connector for utility-supplied ("mains") electrical power is extremely dangerous, and sometimes is called a "suicide cable" or "widowmaker cord". Some hardware shops explicitly refuse to make or sell them when asked by customers who have mistakenly hung a string of Christmas lights backwards and wish to connect the socket end to a wall socket, or who intend to connect a generator or inverter to their home's electrical circuit in the event of a utility power outage. The exposed prongs on the live end of the cable pose serious electrical shock and fire hazards, and when improperly used in a generator setup may cause the equipment to burn out when utility power is restored. It can also backfeed power into the grid, potentially damaging utility equipment or even electrocuting linemen attempting to restore power.
Similarly, an exposed connection on a jumper cable for a 12V automotive battery can be hazardous, because of the potentially high current and energy involved. Accidental shorting of the wire to vehicle ground can cause sparks, rapid heating, or even a battery explosion.
In low-voltage use such as for data communications, electrical shock hazard is not an issue, and male or female connectors are used based on other engineering factors such as convenience of use, cost, or ease of manufacturing. For example, the common "patch cables" used for Ethernet (and the similar cords used for telephones) typically have male modular plugs on both ends, to connect to jacks on equipment or mounted in walls.
An example of a design tradeoff in power connector selection is a coaxial power connector, which is usually set up so that power is fed from the female plug into the male jack (which is typically a part of the electronic device accepting the power). Although the plug is female, with a partially recessed center contact, it is still possible for casual accidental contact with a metallic object to short-circuit the power source. Depending on the design of the power adapter, it may react to a short circuit by shutting down temporarily, or instead by blowing out an internal safety fuse.
In this example, the marginal reliability of the connector choice was deemed to be acceptable by the equipment designer, since the power adapter supplies low voltage that does not pose an electric shock hazard. The potential fire hazard from accidental short-circuiting is addressed by the internal safety fuse, although this requires that a failed power adapter be completely replaced. In a different design, if the power adapter were intended to supply a voltage sufficient to cause electrical shock, the semi-exposed center contact of the female plug would be considered unacceptably hazardous, requiring a different choice of connector.
Ambiguous gender
Some electrical connectors are hermaphroditic because they include both male and female elements in a single unit intended to interconnect freely, without regard for gender. See the discussion of genderless connectors in this article for more detailed information.
As an additional complication, certain electronic connector designs may incorporate combinations of male and female pins in a single connector body, for mating with a complementary connector with opposite gender pins in corresponding positions. For example, CEE 7/5 sockets have a male Ground pin. In these unusual cases, gender is often defined by the shape of the connector body, rather than the mixed-gender connector pins and sockets. These types of connectors are not strictly speaking hermaphroditic, since mating connectors are not freely interchangeable. An informal term that has been used for these connectors is "bisexual", in addition to the more official terminology, mixed-gender. Thus, for example, one can have a mixed-gender female plug that connects to a mixed-gender male jack (though a reversed gender assignment of connectors would be a more typical design choice in this example).
Male connector pins are often protected by a shell (also called a shroud, surround, or shield), which may envelop the entire female connector when mated. RF connectors often have multiple layers of interlocking shells to properly connect the shields of coaxial and triaxial cable. In such cases, the gender is assigned based on the innermost connecting point, with the exception of reverse polarity BNC or TNC connectors, where the outer shell determines the gender and the innermost connecting points are opposite to those of a standard connector; for example, a female RP-TNC connector has a solid innermost pin.
Another ambiguous situation arises with the connectors used for USB, FireWire (IEEE-1394), HDMI, and Thunderbolt serial data bus connections. Close examination of these connectors reveals that the contact "pins" are not actually pins, but instead are conductive surfaces that slide past each other when they mate. Therefore, the traditional pin and socket nomenclature is not applicable. Instead, most computer hardware people fall back to referring to the wrap-around metal shield on the plug connector as if it were a connector pin. By this convention, the connectors on serial bus cables are "male plugs", and the corresponding connectors on equipment are "female jacks".
A casual glance at a USB "Type A" plug connector may give the false impression that it is hermaphroditic. However, a physical attempt to mate two USB "Type A" cables with each other reveals the fact that the connectors will not interconnect. Classifying according to mathematical graph theory, USB buses are directed trees, whereas FireWire buses have a true bus network topology. This difference is reflected in the bus connectors used, in that USB cables are asymmetrical (one end Type A, other end Type B) while FireWire cables may have identical connectors at both ends. The more-recent USB-C cables may also have identical connectors on both ends (in which case the logical A and B ends are negotiated between the attached devices rather than being directly indicated by the plugs).
An example of a connector where the contacts themselves are hermaphroditic is the ELCO Varicon, wherein the contacts are bifurcated in recessed cross-shaped wells, and mate with one another axially at a 90 degree rotation. In this example, the plugs have the contacts oriented transversely and the sockets are arranged longitudinally.
Genderless (hermaphroditic)
By definition, a hermaphroditic connector includes mating surfaces having simultaneous male and female aspects, involving complementary paired identical parts each containing both protrusions and indentations. These mating surfaces are mounted into identical fittings which can freely mate with any other, without regard for gender (provided that the size and type are already matched). Alternative names include hermaphrodite, androgynous, genderless, sexless, combination (or combo), two-in-one, two-way, and other descriptive terms. Several of these latter alternate names are ambiguous in meaning, and should not be used unless carefully defined in context. True hermaphroditic connectors should not be confused with mixed gender connectors, which are described elsewhere in this article.
Another closely related type is the stackable connector for electronics, which typically has male pins on one surface and complementary female sockets on the opposite surface, allowing multiple units to be stacked up like plastic milk crates. Examples of this include stackable banana plugs, and interconnect cables specified for the IEEE-488 instrumentation bus. Stackable mezzanine bus connectors are used on some modular microcomputer accessory boards for systems such as the Arduino add-on daughterboards called "shields". The older PC/104 embedded PC modules use a similar stackable format for interconnection. Stackable connectors are not classified as hermaphroditic in the strictest sense, but are often described as such in looser usage.
The hermaphroditic design is useful when multiple complex or lengthy components must be arbitrarily connected in various combinations. For example, if hoses have hermaphroditic fittings, they can be connected without having to pull a lengthy hose and reverse it because it has the wrong gender to connect to another hose. Some military fiber optical cables also have hermaphroditic connectors to prevent "wrong gender" connector problems in field deployments. In a similar fashion, railcars are usually equipped with hermaphroditic railway coupling mechanisms that allow either end of the vehicle to be connected to a train without having to turn the railcar around first.
For the same reason, several spacecraft docking mechanisms are designed to be "androgynous",
including the Androgynous Peripheral Attach System, the NASA Docking System, and the Chinese Docking Mechanism.
In the absence of genderless connectors, gender changer fittings might be used to enable certain connections. The designer of a connection system may use one or both schemes to allow arbitrary connectivity, or even combine both schemes into a single system.
When an enforced sense of unidirectionality or "one-way flow" is required for safety or other reasons (for example, AC electrical power connections), a strict assignment of connector genders is implemented to prevent undesired configurations, and gender changers are banned.
Some commonly seen examples of hermaphroditic connectors include the SAE connector for 12 V DC power, jackhammer air hose connectors, and the Anderson Powerpole series of modular high-current power connectors. The now-obsolete IBM Token Ring connector was another widespread example. The General Radio Corporation developed a hermaphroditic coaxial radio frequency connector often called the "GR connector".
Some audio multicore cables are fitted with hermaphroditic multipin quick-disconnect connectors for ease of use in the field. One style of this audio signal cable is fitted on both ends with connectors that are each populated half with pins and half with sockets. The advantage to the user is that it does not matter which end connects to the stage and which to the audio mixer, facilitating faster set up. Another style of connector uses hybrid male/female pins with a receiving slot fitted in the center of each two-tine pin, and relies on 90-degree rotation of the pin axes to mate. The connector housings themselves are sexed male and female.
Gender changers
Devices used for mating two connectors of the same gender have a wide variety of terms, including for example: "gender changer", "gender mender", "gender bender", and "gender blender". A specific gender changer can be referred to by either the gender of its connectors, or the gender which it is designed to connect to, resulting in a thoroughly ambiguous terminology. Thus a "male gender changer" might have female connectors to mate two male ends, or male connectors to mate two female ends.
Adding to this potential for confusion, some gender changers also combine additional functions such as cross-over pin-outs. Active cables may incorporate embedded systems for communications protocol or logic level changes, which technically makes them "adapters", but this distinction is sometimes neglected in marketing materials or common usage.
Examples
Coaxial power connector, for low-voltage DC connections
A power cord on an appliance terminates in a (male) plug; it connects to a (female) socket in a wall or on an extension cord.
Coaxial cables used for video or other high-frequency signals are normally terminated, at both ends, in a connector comprising an inner pin and an outer fixed or rotating shell; these are conventionally reckoned as male.
A threaded nut is female and a bolt is male.
Connectors for air brake hoses on heavy trucks and railroad equipment use genderless "gladhand connectors". In railroad air brake use, this makes the orientation of rolling stock irrelevant, and is used with the standard North American railroad coupler that connects cars together, also genderless.
See also
Mating connection
Piping and plumbing fittings
Screw thread
Sex bolt
Twistlock – standardized fasteners for shipping containers
References
Gender in language
Electrical connectors
Plumbing | Gender of connectors and fasteners | [
"Engineering"
] | 5,076 | [
"Construction",
"Plumbing",
"Fasteners"
] |
1,025,272 | https://en.wikipedia.org/wiki/Bohr%E2%80%93Einstein%20debates | The Bohr–Einstein debates were a series of public disputes about quantum mechanics between Albert Einstein and Niels Bohr. Their debates are remembered because of their importance to the philosophy of science, insofar as the disagreements—and the outcome of Bohr's version of quantum mechanics becoming the prevalent view—form the root of the modern understanding of physics. Most of Bohr's version of the events at the Solvay Conference in 1927 and other places was first written down by Bohr decades later in an article titled "Discussions with Einstein on Epistemological Problems in Atomic Physics". Based on the article, the philosophical issue of the debate was whether Bohr's Copenhagen interpretation of quantum mechanics, which centered on his belief in complementarity, was valid in explaining nature. Despite their differences of opinion and the succeeding discoveries that helped solidify quantum mechanics, Bohr and Einstein maintained a mutual admiration that was to last the rest of their lives.
Although Bohr and Einstein disagreed, they were great friends all their lives and enjoyed using each other as a foil.
Pre-revolutionary debates
Einstein was the first physicist to say that Max Planck's discovery of the energy quanta would require a rewriting of the laws of physics. To support his point, in 1905 Einstein proposed that light sometimes acts as a particle which he called a light quantum (see photon and wave–particle duality). Bohr was one of the most vocal opponents of the photon idea and did not openly embrace it until 1925. The photon appealed to Einstein because he saw it as a physical reality (although a confusing one) behind the numbers presented by Planck mathematically in 1900. Bohr disliked it because it made the choice of mathematical solution arbitrary. Bohr did not like a scientist having to choose between equations. This disagreement was perhaps the first real Bohr-Einstein debate. Einstein had proposed the photon in 1905, and Arthur Compton provided experimental confirmation in 1922 with his Compton effect, but Bohr refused to believe the photon existed even then. Bohr continued to dispute the existence of the quantum of light (photon) and along with Hans Kramers and John C. Slater elaborated the BKS theory in 1924. However, after the 1925 Bothe–Geiger coincidence experiment, BKS was proved to be wrong and Einstein's hypothesis was proven to be correct.
The quantum revolution
The quantum revolution of the mid-1920s occurred under the direction of both Einstein and Bohr, and their post-revolutionary debates were about making sense of the change. Werner Heisenberg's Umdeutung paper in 1925 reinterpreted old quantum theory in terms of matrix-like operators, removing the Newtonian elements of space and time from any underlying reality. In parallel, Erwin Schrödinger redeveloped quantum theory in terms of a wave mechanics formulation, leading to the Schrödinger equation. When Schrödinger sent a preprint of his new equation to Einstein, Einstein wrote back hailing his equation as a decisive advance of “true genius.” Then in 1926, Max Born, collaborating with Heisenberg, proposed that the mechanics was to be understood as yielding probabilities, without any causal explanation.
Both Einstein and Schrödinger rejected Born's interpretation, with its renunciation of causality, which had been a key feature of science prior to old quantum theory and was still a feature of general relativity. In a 1926 letter to Max Born, Einstein famously remarked that the theory said a lot but hardly brought one closer to the secret of the "old one", and that he, at any rate, was convinced that God does not play dice.
At first, even Heisenberg had heated disputes with Bohr, arguing that his matrix mechanics was not compatible with Schrödinger's wave mechanics. Bohr, in turn, was at first opposed to Heisenberg's uncertainty principle. But by the Fifth Solvay Conference, held in October 1927, Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein's skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.
Einstein's refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparently random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known, but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated. Einstein himself was a statistical thinker, but he insisted that more remained to be discovered and clarified. Einstein worked the rest of his life to discover a new theory that would make sense of quantum mechanics and return causality to science, what many now call the theory of everything. Bohr, meanwhile, was dismayed by none of the elements that troubled Einstein. He made his own peace with the contradictions by proposing a principle of complementarity that assigns properties only as a result of measurements.
Post-revolution: First stage
As mentioned above, Einstein's position underwent significant modifications over the course of the years. In the first stage, Einstein refused to accept quantum indeterminism and sought to demonstrate that the uncertainty principle could be violated, suggesting ingenious thought experiments which should permit the accurate determination of incompatible variables, such as position and velocity, or to explicitly reveal simultaneously the wave and the particle aspects of the same process. (The main source and substance for these thought experiments is solely from Bohr's account twenty years later.) Bohr admits: “As regards the account of the conversations I am of course aware that I am relying only on my own memory, just as I am prepared for the possibility that many features of the development of quantum theory, in which Einstein has played so large a part, may appear to himself in a different light.”
Einstein's argument
The first serious attack by Einstein on the "orthodox" conception took place during the Fifth Solvay International Conference on "Electrons and Photons" in 1927. Einstein pointed out how it was possible to take advantage of the (universally accepted) laws of conservation of energy and of impulse (momentum) in order to obtain information on the state of a particle in a process of interference which, according to the principle of indeterminacy or that of complementarity, should not be accessible.
In order to follow his argumentation and to evaluate Bohr's response, it is convenient to refer to the experimental apparatus illustrated in figure A. A beam of light perpendicular to the X axis (here aligned vertically) propagates in the direction z and encounters a screen S1 with a narrow (relative to the wavelength of the ray) slit. After having passed through the slit, the wave function diffracts with an angular opening that causes it to encounter a second screen S2 with two slits. The successive propagation of the wave results in the formation of the interference figure on the final screen F.
At the passage through the two slits of the second screen S2, the wave aspects of the process become essential. In fact, it is precisely the interference between the two terms of the quantum superposition corresponding to states in which the particle is localized in one of the two slits which produces zones of constructive and destructive interference (in which the wave function is nullified). It is also important to note that any experiment designed to evidence the "corpuscular" aspects of the process at the passage of the screen S2 (which, in this case, reduces to the determination of which slit the particle has passed through) inevitably destroys the wave aspects, implies the disappearance of the interference figure and the emergence of two concentrated spots of diffraction which confirm our knowledge of the trajectory followed by the particle.
At this point Einstein brings into play the first screen as well and argues as follows: since the incident particles have velocities (practically) perpendicular to the screen S1, and since it is only the interaction with this screen that can cause a deflection from the original direction of propagation, by the law of conservation of impulse which implies that the sum of the impulses of two systems which interact is conserved, if the incident particle is deviated toward the top, the screen will recoil toward the bottom and vice versa. In realistic conditions the mass of the screen is so large that it will remain stationary, but, in principle, it is possible to measure even an infinitesimal recoil. If we imagine taking the measurement of the impulse of the screen in the direction X after every single particle has passed, we can know, from the fact that the screen will be found recoiled toward the top (bottom), whether the particle in question has been deviated toward the bottom or top, and therefore through which slit in S2 the particle has passed. But since the determination of the direction of the recoil of the screen after the particle has passed cannot influence the successive development of the process, we will still have an interference figure on the screen F. The interference takes place precisely because the state of the system is the superposition of two states whose wave functions are non-zero only near one of the two slits. On the other hand, if every particle passes through only the slit b or the slit c, then the set of systems is the statistical mixture of the two states, which means that interference is not possible. If Einstein is correct, then there is a violation of the principle of indeterminacy.
This thought experiment was begun in a simpler form during the general discussion portion of the actual proceedings during the 1927 Solvay conference. In those official proceedings, Bohr's reply is recorded as: “I feel myself in a very difficult position because I don’t understand precisely the point that Einstein is trying to make.” Einstein had explained, “it could happen that the same elementary process produces an action in two or several places on the screen. But the interpretation, according to which psi squared expresses the probability that this particular particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance.” It is clear from this that Einstein was referring to separability (in particular, and most importantly local causality, i.e. locality), not indeterminacy. In fact, Paul Ehrenfest wrote a letter to Bohr stating that the 1927 thought experiments of Einstein had nothing to do with the uncertainty principle, as Einstein had already accepted these “and for a long time never doubted.”
Bohr's response
Bohr evidently misunderstood Einstein's argument about the quantum mechanical violation of relativistic causality (locality) and instead focused on the consistency of quantum indeterminacy. Bohr's response was to illustrate Einstein's idea more clearly using the diagram in Figure C. (Figure C shows a fixed screen S1 that is bolted down. Then try to imagine one that can slide up or down along a rod instead of a fixed bolt.) Bohr observes that extremely precise knowledge of any (potential) vertical motion of the screen is an essential presupposition in Einstein's argument. In fact, if its velocity in the direction X before the passage of the particle is not known with a precision substantially greater than that induced by the recoil (that is, if it were already moving vertically with an unknown and greater velocity than that which it derives as a consequence of the contact with the particle), then the determination of its motion after the passage of the particle would not give the information we seek. However, Bohr continues, an extremely precise determination of the velocity of the screen, when one applies the principle of indeterminacy, implies an inevitable imprecision of its position in the direction X. Before the process even begins, the screen would therefore occupy an indeterminate position at least to a certain extent (defined by the formalism). Now consider, for example, the point d in figure A, where the interference is destructive. Any displacement of the first screen would make the lengths of the two paths, a–b–d and a–c–d, different from those indicated in the figure. If the difference between the two paths varies by half a wavelength, at point d there will be constructive rather than destructive interference. The ideal experiment must average over all the possible positions of the screen S1, and, for every position, there corresponds, for a certain fixed point F, a different type of interference, from the perfectly destructive to the perfectly constructive. The effect of this averaging is that the pattern of interference on the screen F will be uniformly grey. Once more, our attempt to evidence the corpuscular aspects in S2 has destroyed the possibility of interference in F, which depends crucially on the wave aspects.
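Bohr's conclusion that the fringes become "uniformly grey" can be illustrated with a rough numerical sketch (an illustration only, under simplifying assumptions such as a far-field two-slit pattern and a Gaussian spread of positions for the first screen; it is not a reconstruction of Bohr's own analysis). Averaging the two-slit intensity over an uncertainty in the position of S1 comparable to the wavelength washes the interference out:

import numpy as np

# Two-slit intensity on the final screen F, with an extra phase shift caused by a
# vertical displacement 'delta' of the first screen S1 (a shift of half a wavelength
# turns destructive interference into constructive, as described in the text).
lam, d, L = 500e-9, 200e-6, 1.0            # wavelength, slit spacing, screen distance (assumed values)
x = np.linspace(-5e-3, 5e-3, 1000)          # positions on the final screen

def intensity(x, delta):
    return np.cos(np.pi * d * x / (lam * L) + np.pi * delta / lam) ** 2

sharp = intensity(x, 0.0)                              # S1 perfectly localized: full-contrast fringes
offsets = np.random.normal(0.0, lam, size=2000)         # S1 position uncertain by about one wavelength
blurred = np.mean([intensity(x, o) for o in offsets], axis=0)

print(sharp.max() - sharp.min())      # close to 1: well-defined fringes
print(blurred.max() - blurred.min())  # small: the fringes wash out into nearly uniform grey

The fringe contrast collapses as soon as the indeterminacy in the screen position becomes comparable to the wavelength, which is exactly the trade-off Bohr invoked.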
As Bohr recognized, for the understanding of this phenomenon "it is decisive that, contrary to genuine instruments of measurement, these bodies along with the particles would constitute, in the case under examination, the system to which the quantum-mechanical formalism must apply. With respect to the precision of the conditions under which one can correctly apply the formalism, it is essential to include the entire experimental apparatus. In fact, the introduction of any new apparatus, such as a mirror, in the path of a particle could introduce new effects of interference which influence essentially the predictions about the results which will be registered at the end." Further along, Bohr attempts to resolve this ambiguity concerning which parts of the system should be considered macroscopic and which not:
In particular, it must be very clear that...the unambiguous use of spatiotemporal concepts in the description of atomic phenomena must be limited to the registration of observations which refer to images on a photographic lens or to analogous practically irreversible effects of amplification such as the formation of a drop of water around an ion in a dark room.
Bohr's argument about the impossibility of using the apparatus proposed by Einstein to violate the principle of indeterminacy depends crucially on the fact that a macroscopic system (the screen S1) obeys quantum laws. On the other hand, Bohr consistently held that, in order to illustrate the microscopic aspects of reality, it is necessary to set off a process of amplification, which involves macroscopic apparatuses, whose fundamental characteristic is that of obeying classical laws and which can be described in classical terms. This ambiguity would later come back in the form of what is still called today the measurement problem.
However, Bohr in his article refuting the EPR paper, states “there is no question of a mechanical disturbance of the system under investigation.” Heisenberg quotes Bohr as saying, “I find all such assertions as ‘observation introduces uncertainty into the phenomenon’ inaccurate and misleading.” Manjit Kumar's book on the Bohr–Einstein debates finds these assertions by Bohr contrary to his arguments. Others, such as the physicist Leon Rosenfeld, did find Bohr's argument convincing.
Uncertainty principle applied to time and energy
In many textbook examples and popular discussions of quantum mechanics, the principle of indeterminacy is explained by reference to the pair of variables position and velocity (or momentum). It is important to note that the wave nature of physical processes implies that there must exist another relation of indeterminacy: that between time and energy. In order to comprehend this relation, it is convenient to refer to the experiment illustrated in
Figure D, which results in the propagation of a wave which is limited in spatial extension. Assume that, as illustrated in the figure, a ray which is extremely extended longitudinally is propagated toward a screen with a slit furnished with a shutter which remains open only for a very brief interval of time Δt. Beyond the slit, there will be a wave of limited spatial extension which continues to propagate toward the right.
A perfectly monochromatic wave (such as a musical note which cannot be divided into harmonics) has infinite spatial extent. In order to have a wave which is limited in spatial extension (which is technically called a wave packet), several waves of different frequencies must be superimposed and distributed continuously within a certain interval of frequencies around an average value, such as ν₀.
It then happens that at a certain instant, there exists a spatial region (which moves over time) in which the contributions of the various fields of the superposition add up constructively. Nonetheless, according to a precise mathematical theorem, as we move far away from this region, the phases of the various fields, at any specified point, are distributed randomly and destructive interference is produced. The region in which the wave has non-zero amplitude is therefore spatially limited. It is easy to demonstrate that, if the wave has a spatial extension equal to Δx (which means, in our example, that the shutter has remained open for a time Δt = Δx/v, where v is the velocity of the wave), then the wave contains (or is a superposition of) various monochromatic waves whose frequencies cover an interval Δν which satisfies the relation:
Δν ≥ 1/Δt
Remembering that in the Planck relation, frequency and energy are proportional:
E = hν
it follows immediately from the preceding inequality that the particle associated with the wave should possess an energy which is not perfectly defined (since different frequencies are involved in the superposition) and consequently there is indeterminacy in energy:
ΔE = h Δν ≥ h/Δt
From this it follows immediately that:
ΔE Δt ≥ h
which is the relation of indeterminacy between time and energy.
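As a rough numerical check of this relation (a sketch with assumed numbers, not data from any experiment): a shutter that stays open for one nanosecond produces a packet whose frequency spread is at least about 10⁹ Hz, and hence an energy indeterminacy of a few micro-electronvolts:

# Numerical illustration of the time-energy indeterminacy relation ΔE·Δt ≳ h.
h = 6.626e-34          # Planck constant, in J·s
eV = 1.602e-19         # joules per electronvolt

dt = 1e-9              # shutter open for 1 nanosecond (assumed value)
dnu = 1.0 / dt         # minimum frequency spread of the resulting wave packet, in Hz
dE = h * dnu           # corresponding energy indeterminacy, in joules

print(dE)              # about 6.6e-25 J
print(dE / eV * 1e6)   # about 4.1 micro-electronvolts
print(dE * dt)         # about 6.6e-34 J·s: the product is of the order of h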
Einstein's second criticism
At the sixth Congress of Solvay in 1930, the indeterminacy relation just discussed was Einstein's target of criticism. His idea contemplates the existence of an experimental apparatus which was subsequently designed by Bohr in such a way as to emphasize the essential elements and the key points which he would use in his response.
Einstein considers a box (called Einstein's box, or Einstein's light box; see figure) containing electromagnetic radiation and a clock which controls the opening of a shutter which covers a hole made in one of the walls of the box. The shutter uncovers the hole for a time Δt which can be chosen arbitrarily. During the opening, we are to suppose that a photon, from among those inside the box, escapes through the hole. In this way a wave of limited spatial extension has been created, following the explanation given above. In order to challenge the indeterminacy relation between time and energy, it is necessary to find a way to determine with adequate precision the energy that the photon has brought with it. At this point, Einstein turns to the mass–energy equivalence of special relativity: E = mc². From this it follows that knowledge of the mass of an object provides a precise indication about its energy. The argument is therefore very simple: if one weighs the box before and after the opening of the shutter and if a certain amount of energy has escaped from the box, the box will be lighter. The variation in mass multiplied by c²
will provide precise knowledge of the energy emitted.
Moreover, the clock will indicate the precise time at which the event of the particle's emission took place. Since, in principle, the mass of the box can be determined to an arbitrary degree of accuracy, the energy emitted can be determined with a precision as accurate as one desires. Therefore, the product ΔE Δt can be rendered less than what is implied by the principle of indeterminacy.
The idea is particularly acute and the argument seemed unassailable. It's important to consider the impact of all of these exchanges on the people involved at the time. Leon Rosenfeld, who had participated in the Congress, described the event several years later:
It was a real shock for Bohr...who, at first, could not think of a solution. For the entire evening he was extremely agitated, and he continued passing from one scientist to another, seeking to persuade them that it could not be the case, that it would have been the end of physics if Einstein were right; but he couldn't come up with any way to resolve the paradox. I will never forget the image of the two antagonists as they left the club: Einstein, with his tall and commanding figure, who walked tranquilly, with a mildly ironic smile, and Bohr who trotted along beside him, full of excitement...The morning after saw the triumph of Bohr.
Bohr's triumph
The triumph of Bohr consisted in his demonstrating, once again, that Einstein's subtle argument was not conclusive, but even more so in the way that he arrived at this conclusion by appealing precisely to one of the great ideas of Einstein: the principle of equivalence between gravitational mass and inertial mass, together with the time dilation of special relativity, and, as a consequence of these, the gravitational redshift. Bohr showed that, in order for Einstein's experiment to function, the box would have to be suspended on a spring in the middle of a gravitational field. In order to obtain a measurement of the weight of the box, a pointer would have to be attached to the box which corresponded with the index on a scale. After the release of a photon, a mass could be added to the box to restore it to its original position and this would allow us to determine the energy that was lost when the photon left. The box is immersed in a gravitational field of strength g, and the gravitational redshift affects the speed of the clock, yielding uncertainty in the time required for the pointer to return to its original position. Bohr gave the following calculation establishing the uncertainty relation ΔE Δt ≥ h.
Let the uncertainty in the mass be denoted by Δm. Let the error in the position of the pointer be Δq. Adding the load to the box imparts a momentum that we can measure with an accuracy Δp, where Δp ≈ h/Δq. Clearly Δp < g T Δm, where T is the time taken by the weighing procedure, and therefore g T Δm Δq > h. By the redshift formula (which follows from the principle of equivalence and the time dilation), the uncertainty in the time is Δt = g T Δq/c², and, since ΔE = c² Δm, it follows that ΔE Δt = g T Δm Δq > h. We have therefore proven the claimed ΔE Δt ≥ h.
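The cancellation of the factors of c² in this chain can be checked symbolically (a sketch of the algebra above using the sympy library; the symbol names are arbitrary):

import sympy as sp

h, g, c, T, dm, dq = sp.symbols('h g c T dm dq', positive=True)

dt = g * T * dq / c**2           # gravitational redshift: uncertainty in the clock reading
dE = c**2 * dm                   # mass-energy equivalence: uncertainty in the emitted energy

product = sp.simplify(dE * dt)   # the factors of c**2 cancel
print(product)                   # T*dm*dq*g

# The weighing condition Δp ≈ h/Δq < g*T*Δm is equivalent to g*T*Δm*Δq > h,
# so the product printed above can never be made smaller than h.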
More recent analyses of the photon box debate question Bohr's understanding of Einstein's thought experiment, referring instead to a prelude to the EPR paper, focusing on inseparability rather than indeterminism being at issue.
Post-revolution: Second stage
The second phase of Einstein's "debate" with Bohr and the orthodox interpretation is characterized by an acceptance of the fact that it is, as a practical matter, impossible to simultaneously determine the values of certain incompatible quantities, but a rejection of the claim that this implies that these quantities do not actually have precise values. Einstein rejects the probabilistic interpretation of Born and insists that quantum probabilities are epistemic and not ontological in nature. As a consequence, the theory must be incomplete in some way. He recognizes the great value of the theory, but suggests that it "does not tell the whole story", and, while providing an appropriate description at a certain level, it gives no information on the more fundamental underlying level:
I have the greatest consideration for the goals which are pursued by the physicists of the latest generation which go under the name of quantum mechanics, and I believe that this theory represents a profound level of truth, but I also believe that the restriction to laws of a statistical nature will turn out to be transitory....Without doubt quantum mechanics has grasped an important fragment of the truth and will be a paragon for all future fundamental theories, for the fact that it must be deducible as a limiting case from such foundations, just as electrostatics is deducible from Maxwell's equations of the electromagnetic field or as thermodynamics is deducible from statistical mechanics.
These thoughts of Einstein would set off a line of research into hidden variable theories, such as the Bohm interpretation, in an attempt to complete the edifice of quantum theory. If quantum mechanics can be made complete in Einstein's sense, it cannot be done locally; this fact was demonstrated by John Stewart Bell with the formulation of Bell's inequality in 1964. Although the Bell inequality ruled out local hidden variable theories, Bohm's theory was not ruled out. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories, though not Bohmian mechanics itself.
Post-revolution: Third stage
The argument of EPR
In 1935 Einstein, Boris Podolsky and Nathan Rosen developed an argument, published in the journal Physical Review with the title Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?, based on an entangled state of two systems. Before coming to this argument, it is necessary to formulate another hypothesis that comes out of Einstein's work in relativity: the principle of locality. The elements of physical reality which are objectively possessed cannot be influenced instantaneously at a distance.
David Bohm picked up the EPR argument in 1951. In his textbook Quantum Theory, he reformulated it in terms of an entangled state of two particles, which can be summarized as follows:
1) Consider a system of two photons which at time t are located, respectively, in the spatially distant regions A and B and which are also in the entangled state of polarization described below (the resulting correlations are illustrated numerically in the sketch following this list):
|Ψ, t⟩ = (1/√2)( |1, V⟩|2, V⟩ + |1, H⟩|2, H⟩ ),
where |V⟩ and |H⟩ denote states of vertical and horizontal polarization.
2) At time t the photon in region A is tested for vertical polarization. Suppose that the result of the measurement is that the photon passes through the filter. According to the reduction of the wave packet, the result is that, at time t + dt, the system becomes
|Ψ, t + dt⟩ = |1, V⟩|2, V⟩.
3) At this point, the observer in A who carried out the first measurement on photon 1, without doing anything else that could disturb the system or the other photon ("assumption (R)", below), can predict with certainty that photon 2 will pass a test of vertical polarization. It follows that photon 2 possesses an element of physical reality: that of having a vertical polarization.
4) According to the assumption of locality, it cannot have been the action carried out in A which created this element of reality for photon 2. Therefore, we must conclude that the photon possessed the property of being able to pass the vertical polarization test before and independently of the measurement of photon 1.
5) At time t, the observer in A could have decided to carry out a test of polarization at 45°, obtaining a certain result, for example, that the photon passes the test. In that case, he could have concluded that photon 2 turned out to be polarized at 45°. Alternatively, if the photon did not pass the test, he could have concluded that photon 2 turned out to be polarized at 135°. Combining one of these alternatives with the conclusion reached in 4, it seems that photon 2, before the measurement took place, possessed both the property of being able to pass with certainty a test of vertical polarization and the property of being able to pass with certainty a test of polarization at either 45° or 135°. These properties are incompatible according to the formalism.
6) Since natural and obvious requirements have forced the conclusion that photon 2 simultaneously possesses incompatible properties, this means that, even if it is not possible to determine these properties simultaneously and with arbitrary precision, they are nevertheless possessed objectively by the system. But quantum mechanics denies this possibility and it is therefore an incomplete theory.
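The perfect correlations on which steps 2 to 5 rely can be made concrete with a small numerical sketch (illustrative only: it simply evaluates the standard quantum predictions for the entangled polarization state written in step 1, and is not a model of any particular experiment):

import numpy as np

# Single-photon polarization states (vertical, horizontal) and the entangled pair.
V = np.array([1.0, 0.0])
H = np.array([0.0, 1.0])
pair = (np.kron(V, V) + np.kron(H, H)) / np.sqrt(2)    # (|VV> + |HH>)/sqrt(2)

def passes(angle_deg):
    """Projector onto 'passes a polarizer set at angle_deg' for a single photon."""
    a = np.radians(angle_deg)
    p = np.array([np.cos(a), np.sin(a)])
    return np.outer(p, p)

def both_pass(angle1, angle2):
    """Probability that photon 1 passes a test at angle1 and photon 2 a test at angle2."""
    proj = np.kron(passes(angle1), passes(angle2))
    return float(pair @ proj @ pair)

print(both_pass(0, 0))    # 0.5: photon 1 passes half the time, and photon 2 then always passes too
print(both_pass(0, 90))   # ~0 (up to rounding): the photons never give opposite outcomes in the same basis
print(both_pass(45, 45))  # 0.5: the same perfect agreement holds in the 45-degree basis

The sketch reproduces the point of steps 3 and 5: whichever basis is chosen for photon 1, the outcome for photon 2 in that same basis can be predicted with certainty.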
Bohr's response
Bohr's response to this argument was published, five months later than the original publication of EPR, in the same journal, Physical Review, and with exactly the same title as the original. The crucial point of Bohr's answer is distilled in a passage which he later had republished in Paul Arthur Schilpp's book Albert Einstein, scientist-philosopher in honor of the seventieth birthday of Einstein. Bohr attacks assumption (R) of EPR by stating:
The statement of the criterion in question is ambiguous with regard to the expression "without disturbing the system in any way". Naturally, in this case no mechanical disturbance of the system under examination can take place in the crucial stage of the process of measurement. But even in this stage there arises the essential problem of an influence on the precise conditions which define the possible types of prediction which regard the subsequent behaviour of the system...their arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete...This description can be characterized as a rational use of the possibilities of an unambiguous interpretation of the process of measurement compatible with the finite and uncontrollable interaction between the object and the instrument of measurement in the context of quantum theory.
Bohr's presentation of his argument was hard to follow for many of the scientists (although his views were generally accepted). Rosenfeld, who had worked closely with Bohr for many years, later explains Bohr's argument in a way that is perhaps more accessible:
In the case of the two particles, it is true that the measurement carried out on the first particle does not cause any direct physical disturbance of the second; but the measurement decisively affects the nature of verifiable predictions we will be able to make about this second particle. (...) [A]s long as we do not carry out any measurement (...) we have no control at all over this correlation [between the two particles]. If we really want the system to be subject to study and communication, we must carry out some measurement. If we now observe the position of the first particle, the correlation between the positions of the particles can be used to give us information about where the second particle is, but we have no way of making use of the correlation between the momenta of the particles (...). If we observe the momentum of the first particle, it is just the opposite. We retain control over the momentum correlation, but lose it over the position correlation. The two different measurements define two complementary phenomena that can never be reconciled into a single description of the given two-particle system.
Confirmatory experiments
In the years after Einstein presented the EPR argument, many physicists performed experiments to test whether the correlations he derided as "spooky action at a distance" are in fact observed in nature. The first experiment to demonstrate this was carried out in 1949, when physicists Chien-Shiung Wu and her colleague Irving Shaknov demonstrated such correlations using photon pairs; their work was published at the start of 1950.
Later, in 1975, Alain Aspect proposed in an article an experiment meticulous enough to be irrefutable: "Proposed experiment to test the non-separability of quantum mechanics". This led Aspect, together with his assistant Gérard Roger, Jean Dalibard and Philippe Grangier (two young physics students at the time), to set up several increasingly complex experiments between 1980 and 1982 that further established quantum entanglement. Finally, in 1998, the Geneva experiment tested the correlation between two detectors set 30 kilometres apart, virtually across the whole city, using the Swiss optical fibre telecommunication network. The distance gave the necessary time to switch the angles of the polarizers, so the switching could be made completely random. Furthermore, the two distant polarizers were entirely independent. The measurements were recorded on each side, and compared after each experiment by dating each measurement using an atomic clock. The experiment once again verified entanglement under the strictest and most ideal conditions possible. Whereas Aspect's experiment implied that a hypothetical coordination signal would have to travel at twice the speed of light, Geneva's implied a factor of 10 million times c.
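The factor of ten million follows from simple arithmetic (a sketch using the figures quoted above; the timing agreement is an inferred order of magnitude, not a reported instrument specification):

# Back-of-envelope check of the "10 million times c" figure for the Geneva test.
c = 3.0e8        # speed of light, m/s
d = 30e3         # detector separation, m (about 30 km, as quoted above)
dt = 1e-11       # assumed timing agreement between the two detections, ~10 picoseconds

print((d / dt) / c)   # ~1e7: a coordinating signal would have needed ten million times c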
Post-revolution: Fourth stage
In his last writing on the topic, Einstein further refined his position, making it completely clear that what really disturbed him about the quantum theory was the problem of the total renunciation of all minimal standards of realism, even at the microscopic level, that the acceptance of the completeness of the theory implied. Since the early days of quantum theory the assumption of locality and Lorentz invariance guided his thoughts and led to his determination that if we demand strict locality then hidden variables are naturally implied apropos EPR. Bell, starting from this EPR logic (which is widely misunderstood or forgotten), showed that local hidden variables imply a conflict with experiment. Ultimately what was at stake for Einstein was the assumption that physical reality be universally local. Although the majority of experts in the field agree that Einstein was wrong, the current understanding is still not complete (see Interpretation of quantum mechanics).
See also
Bell test experiments
Bell's theorem
Complementarity
Copenhagen interpretation
Double-slit experiment
Einstein's thought experiments
Invariant set postulate
Quantum eraser
Schrödinger's cat
Uncertainty principle
Wheeler's delayed choice experiment
Superdeterminism
References
Further reading
Boniolo, G., (1997) Filosofia della Fisica, Mondadori, Milan.
Bolles, Edmund Blair (2004) Einstein Defiant, Joseph Henry Press, Washington, D.C.
Born, M. (1971) The Born–Einstein Letters, Walker and Company, New York.
Ghirardi, Giancarlo, (1997) Un'Occhiata alle Carte di Dio, Il Saggiatore, Milan.
Schilpp, P.A. (1951) Albert Einstein: Philosopher-Scientist, Northwestern University and Southern Illinois University, Open Court.
Quantum measurement
Albert Einstein
Philosophy of physics
History of physics
Niels Bohr
Scientific debates | Bohr–Einstein debates | [
"Physics"
] | 6,755 | [
"Philosophy of physics",
"Quantum measurement",
"Applied and interdisciplinary physics",
"Quantum mechanics"
] |
1,025,345 | https://en.wikipedia.org/wiki/Defense%20Message%20System | The Defense Message System or Defense Messaging System (DMS) is a deployment of secure electronic mail and directory services in the United States Department of Defense. DMS was intended to replace the AUTODIN network, and is based on implementations of the OSI X.400 mail, X.500 directory and X.509 public key certificates, with several extensions to meet the specific needs of military messaging.
DMS is sometimes operated in conjunction with third-party products, such as the Navy's DMDS (Defense Message Dissemination System), a profiling system that takes a message and forwards it, based on message criteria, to parties that are required to take action on it. This combination has met with success among the upper echelons of command, since parties do not have to wait for messaging center operators to route the messages to the proper channels for action. The Navy also uses the Navy Regional Enterprise Messaging System (NREMS). NREMS uses an AMHS backend to send secure organizational messages via a web interface to Naval commands.
The US Army's version of DMS is run solely on an AMHS platform both for CONUS and OCONUS operations. The Pentagon Telecommunications Center (PTC) is the hub for CONUS operations and there are several AMHS sites OCONUS for strategic messaging. In the tactical environment the Army deploys an independent Tactical Message Systems (TMS) that is also built on an AMHS platform for secure messaging capability in austere environments when communications with OCONUS AMHS sites are unavailable.
DMS has been coordinated by the Defense Information Systems Agency (DISA), and testing began in 1995. DMS has many third-party vendor products, such as DMDS, DMDS Proxy MR, CP-XP (the CommPower XML Portal), AMHS (Automated Message Handling System), MMHS, and CMS 1.0.
See also
Defense Switched Network
GOSIP
External links
Defense Message System at Defense Information Systems Agency
Defense Messaging System at Joint Interoperability Test Command
Defense Message System at GlobalSecurity.org
Navy Moving Towards Web-based Naval Messaging
PM DMS-Army streamlines tactical message system, receives defense acquisition executive recognition
Military communications
Telecommunications equipment of the Cold War
Email | Defense Message System | [
"Engineering"
] | 463 | [
"Military communications",
"Telecommunications engineering"
] |
1,025,384 | https://en.wikipedia.org/wiki/Automatic%20Digital%20Network | The Automatic Digital Network System, known as AUTODIN, is a legacy data communications service in the United States Department of Defense. AUTODIN originally consisted of numerous AUTODIN Switching Centers (ASCs) located in the United States and in countries such as England and Japan.
Background
The design of the system, originally named "ComLogNet", was begun in 1958 by a team from Western Union, RCA and IBM. The customer was the U.S. Air Force and the system's purpose was to improve the speed and reliability of logistics traffic (spare parts for missiles) between five logistics centers and roughly 350 bases and contractor locations. An implementation contract was awarded in the fall of 1959, with Western Union as prime contractor and system integrator, RCA building the 5 switching center computers, and IBM supplying the compound terminals, which provided for both IBM punched card and Teletype data entry. The first site became operational in 1962. During the implementation the government realized the broader value of the system and transferred it to the Defense Communications Agency (DCA), which renamed it "AUTODIN". In 1962 the government solicited competitive bids for a 9 center expansion, which was won by Philco-Ford.
Deployment started in 1966. On March 22, 1968, an AUTODIN multimedia terminal in Europe became operational at Ramstein Air Base in Germany. This system linked more than 300 Air Force bases, materiel areas, depots and other authorized agencies into a single communications network. In the ASCs, the Philco-Ford OL9 computers remained in use, with periodic technological updates, until the late 1980s. In the 1988 to 1990 timeframe, a Department of Defense initiative favoring "off the shelf" hardware led to the replacement of the Philco-Ford processors by DEC VAX 11/780 series systems.
In 1982, a follow-on project, AUTODIN II, was terminated in favor of using ARPANET technology for the Defense Data Network (including a military subnet known as MILNET).
AUTODIN Switching Centers have been replaced by various hardware/software combinations. The following are some examples:
A program called NOVA that operates circuits and routes messages. The system is designed to run at 2400 baud, though speeds up to 9600 baud are possible; it can also run down to 15 baud if communications systems require it.
A series of hardware/software systems called DABS (DoDIIS Autodin Bypass System), which allow the transmission of messages over serial connections at up to 9600 baud, as well as over TCP/IP connections that carry messages across Ethernet at speeds limited only by the network bandwidth.
In 1996, DoD decided to phase out AUTODIN by December 31, 1999. Early in the 21st century, all but one of the AUTODIN Switching Centers had been shut down. The intention is to transition secure messaging traffic to the Defense Message System.
See also
Defense Switched Network
Defense Message System
Western Union
AUTOVON contemporaneous voice network
STARCOM (communications system), U.S. Army Signal Corps predecessor, replaced by AUTODIN
References
Wide area networks
History of telecommunications in the United States
Military communications
Western Union | Automatic Digital Network | [
"Engineering"
] | 636 | [
"Military communications",
"Telecommunications engineering"
] |
1,025,417 | https://en.wikipedia.org/wiki/Electromagnetic%20theories%20of%20consciousness | Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon.
Overview
Theorists differ in how they relate consciousness to electromagnetism. Electromagnetic field theories (or "EM field theories") of consciousness propose that consciousness results when a brain produces an electromagnetic field with specific characteristics. Susan Pockett and Johnjoe McFadden have proposed EM field theories; William Uttal has criticized McFadden's and other field theories.
In general, quantum mind theories do not treat consciousness as an electromagnetic phenomenon, with a few exceptions.
AR Liboff has proposed that "incorporating EM field-mediated communication into models of brain function has the potential to reframe discussions surrounding consciousness".
Also related are E. Roy John's work and Andrew and Alexander Fingelkurts' theory, the "Operational Architectonics framework of brain-mind functioning".
Cemi theory
The starting point for McFadden and Pockett's theory is the fact that every time a neuron fires to generate an action potential, and a postsynaptic potential in the next neuron down the line, it also generates a disturbance in the surrounding electromagnetic field. McFadden has proposed that the brain's electromagnetic field creates a representation of the information in the neurons. Studies undertaken towards the end of the 20th century are argued to have shown that conscious experience correlates not with the number of neurons firing, but with the synchrony of that firing. McFadden views the brain's electromagnetic field as arising from the induced EM field of neurons. The synchronous firing of neurons is, in this theory, argued to amplify the influence of the brain's EM field fluctuations to a much greater extent than would be possible with the unsynchronized firing of neurons.
McFadden thinks that the EM field could influence the brain in a number of ways. Redistribution of ions could modulate neuronal activity, given that voltage-gated ion channels are a key element in the progress of axon spikes. Neuronal firing is argued to be sensitive to the variation of as little as one millivolt across the cell membrane, or the involvement of a single extra ion channel. Transcranial magnetic stimulation is similarly argued to have demonstrated that weak EM fields can influence brain activity.
McFadden proposes that the digital information from neurons is integrated to form a conscious electromagnetic information (cemi) field in the brain. Consciousness is suggested to be the component of this field that is transmitted back to neurons, and communicates its state externally. Thoughts are viewed as electromagnetic representations of neuronal information, and the experience of free will in our choice of actions is argued to be our subjective experience of the cemi field acting on our neurons.
McFadden's view of free will is deterministic. Neurons generate patterns in the EM field, which in turn modulate the firing of particular neurons. There is only conscious agency in the sense that the field or its download to neurons is conscious, but the processes of the brain themselves are driven by deterministic electromagnetic interactions. The feel of subjective experience or qualia corresponds to a particular configuration of the cemi field. This field representation is in this theory argued to integrate parts into a whole that has meaning, so a face is not seen as a random collection of features, but as somebody's face. The integration of information in the field is also suggested to resolve the binding/combination problem.
In 2013, McFadden published two updates to the theory. In the first, 'The CEMI Field Theory: Closing the Loop' McFadden cites recent experiments in the laboratories of Christof Koch and David McCormick which demonstrate that external EM fields, that simulate the brain's endogenous EM fields, influence neuronal firing patterns within brain slices. The findings are consistent with a prediction of the cemi field theory that the brain's endogenous EM field - consciousness - influences brain function. In the second, 'The CEMI Field Theory Gestalt Information and the Meaning of Meaning', McFadden claims that the cemi field theory provides a solution to the binding problem of how complex information is unified within ideas to provide meaning: the brain's EM field unifies the information encoded in millions of disparate neurons.
Susan Pockett has advanced a theory, which has a similar physical basis to McFadden's, with consciousness seen as identical to certain spatiotemporal patterns of the EM field. However, whereas McFadden argues that his deterministic interpretation of the EM field is not out-of-line with mainstream thinking, Pockett suggests that the EM field comprises a universal consciousness that experiences the sensations, perceptions, thoughts and emotions of every conscious being in the universe. However, while McFadden thinks that the field is causal for actions, albeit deterministically, Pockett does not see the field as causal for our actions.
Quantum brain dynamics
The concepts underlying this theory derive from work in the 1960s by the physicists Hiroomi Umezawa and Herbert Fröhlich. More recently, their ideas have been elaborated by Mari Jibu and Kunio Yasue. Water comprises 70% of the brain, and quantum brain dynamics (QBD) proposes that the electric dipoles of the water molecules constitute a quantum field, referred to as the cortical field, with corticons as the quanta of the field. This cortical field is postulated to interact with quantum coherent waves generated by the biomolecules in neurons, which are suggested to propagate along the neuronal network. The idea of quantum coherent waves in the neuronal network derives from Fröhlich. He viewed these waves as a means by which order could be maintained in living systems, and argued that the neuronal network could support long-range correlation of dipoles. This theory suggests that the cortical field not only interacts with the neuronal network, but also to a good extent controls it.
The proponents of QBD differ somewhat as to the way in which consciousness arises in this system. Jibu and Yasue suggest that the interaction between the energy quanta (corticons) of the quantum field and the biomolecular waves of the neuronal network produces consciousness. However, another theorist, Giuseppe Vitiello, proposes that the quantum states produce two poles, a subjective representation of the external world and also the internal self.
Advantages
Locating consciousness in the brain's EM field, rather than the neurons, has the advantage of neatly accounting for how information located in millions of neurons scattered through the brain can be unified into a single conscious experience (called the binding problem): the information is unified in the EM field. In this way, EM field consciousness can be considered to be "joined-up information". This theory accounts for several otherwise puzzling facts, such as the finding that attention and awareness tend to be correlated with the synchronous firing of multiple neurons rather than the firing of individual neurons. When neurons fire together, their EM fields generate stronger EM field disturbances; so synchronous neuron firing will tend to have a larger impact on the brain's EM field (and thereby consciousness) than the firing of individual neurons. However their generation by synchronous firing is not the only important characteristic of conscious electromagnetic fields—in Pockett's original theory, spatial pattern is the defining feature of a conscious (as opposed to a non-conscious) field.
Objections
In a circa-2002 publication of The Journal of Consciousness Studies, the electromagnetic theory of consciousness faced an uphill battle for acceptance among cognitive scientists.
"No serious researcher I know believes in an electromagnetic theory of consciousness", Bernard Baars wrote in an e-mail. Baars is a neurobiologist and co-editor of Consciousness and Cognition, another scientific journal in the field. "It's not really worth talking about scientifically", he was quoted as saying.
McFadden acknowledges that his theory, which he calls the "cemi field theory", is far from proven but he argues that it is certainly a legitimate line of scientific inquiry. His article underwent peer review before publication.
The field theories of consciousness do not appear to have been as widely discussed as other quantum consciousness theories, such as those of Penrose, Stapp or Bohm. However, David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why particular macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no particular reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either. Despite the existence of transcranial magnetic stimulation for medical purposes, Y. H. Sohn, A. Kaelin-Lang and M. Hallett have denied such an influence, and Jeffrey Gray later stated in his book Consciousness: Creeping up on the Hard Problem that tests looking for the influence of electromagnetic fields on brain function have been universally negative in their results. However, a number of studies have found clear neural effects from EM stimulation at the field strengths listed below (a unit-conversion check follows the list).
Dobson, et al. (2000): 1.8 millitesla = 18,000 mG
Thomas, et al. (2007): 400 microtesla = 4000 milligauss
Huesser, et al. (1997): 0.1 millitesla = 1000 mG
Bell, et al. (2007) 0.78 Gauss = 780 mG
Marino, et al. (2004): 1 Gauss = 1000 mG
Carrubba, et al. (2008): 1 Gauss = 1000 mG
Jacobson (1994): 5 picotesla = 0.00005 mG
Sandyk (1999): Picotesla range
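The field strengths above are quoted in SI units and converted to milligauss. As a quick arithmetic check only, the following Python sketch performs the same conversion using 1 T = 10,000 G and 1 G = 1,000 mG; the study names and values are copied from the list, while the helper function itself is an illustrative assumption rather than anything from the cited studies.

TESLA_TO_MILLIGAUSS = 1e7  # 1 tesla = 10,000 gauss = 10,000,000 milligauss

def tesla_to_mg(value_tesla):
    """Convert a magnetic flux density from tesla to milligauss."""
    return value_tesla * TESLA_TO_MILLIGAUSS

fields_in_tesla = {
    "Dobson et al. (2000)": 1.8e-3,   # 1.8 millitesla
    "Thomas et al. (2007)": 400e-6,   # 400 microtesla
    "Huesser et al. (1997)": 0.1e-3,  # 0.1 millitesla
    "Jacobson (1994)": 5e-12,         # 5 picotesla
}

for study, tesla in fields_in_tesla.items():
    print(f"{study}: {tesla_to_mg(tesla):g} mG")
# Prints 18000, 4000, 1000 and 5e-05 mG, matching the figures quoted above.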
In April 2022, the results of two related experiments at the University of Alberta and Princeton University were announced at The Science of Consciousness conference, providing further evidence to support quantum processes operating within microtubules. In a study Stuart Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics hasten the duration of a process called delayed luminescence, in which microtubules and tubulins trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules further than expected, which did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid-filled extracellular space. Nevertheless, University of Oxford quantum physicist Vlatko Vedral said that this connection with consciousness is a really long shot.
Also in 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness.
Influence on brain function
The different EM field theories disagree as to the role of the proposed conscious EM field on brain function. In McFadden's cemi field theory, as well as in Drs Fingelkurts' Brain-Mind Operational Architectonics theory, the brain's global EM field modifies the electric charges across neural membranes, and thereby influences the probability that particular neurons will fire, providing a feed-back loop that drives free will. However, in the theories of Susan Pockett and E. Roy John, there is no necessary causal link between the conscious EM field and our consciously willed actions.
References to "Mag Lag" also known as the subtle effect on cognitive processes of MRI machine operators who sometimes have to go into the scanner room to check the patients and deal with issues that occur during the scan could suggest a link between magnetic fields and consciousness. Memory loss and delays in information processing have been reported, in some cases several hours after exposure.
One hypothesis is that magnetic fields in the 0.5-9 tesla range can affect the ion permeability of neural membranes; this could account for many of the reported effects, since it would affect many different brain functions.
Implications for artificial intelligence
If true, the theory has major implications for efforts to design consciousness into artificial intelligence machines. Current microprocessor technology is designed to transmit information linearly along electrical channels, and more general electromagnetic effects are seen as a nuisance and damped out. If this theory is right, however, this approach is directly counterproductive to creating an artificially conscious computer, which on some versions of the theory would instead have electromagnetic fields that synchronized its outputs, or in the original version of the theory would have spatially patterned electromagnetic fields.
See also
Orchestrated objective reduction
Quantum mind
Quantum neural network
References
External links
The electromagnetic field theory of consciousness, Scholarpedia
Consciousness Based on Wireless?
Electromagnetism
Obsolete scientific theories | Electromagnetic theories of consciousness | [
"Physics"
] | 2,696 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
1,025,510 | https://en.wikipedia.org/wiki/Coxiella%20burnetii | Coxiella burnetii is an obligate intracellular bacterial pathogen, and is the causative agent of Q fever. The genus Coxiella is morphologically similar to Rickettsia, but with a variety of genetic and physiological differences. C. burnetii is a small Gram-negative, coccobacillary bacterium that is highly resistant to environmental stresses such as high temperature, osmotic pressure, and ultraviolet light. These characteristics are attributed to a small cell variant form of the organism that is part of a biphasic developmental cycle, including a more metabolically and replicatively active large cell variant form. It can survive standard disinfectants, and is resistant to many other environmental changes like those presented in the phagolysosome.
History and naming
Research in the 1920s and 1930s identified what appeared to be a new type of Rickettsia, isolated from ticks, that was able to pass through filters. The first description of what may have been Coxiella burnetii was published in 1930 by Hideyo Noguchi, but since his samples did not survive, it remains unclear as to whether it was the same organism. The definitive descriptions were published in the late 1930s as part of research into the cause of Q fever, by Edward Holbrook Derrick and Macfarlane Burnet in Australia, and Herald Rea Cox and Gordon Davis at the Rocky Mountain Laboratory (RML) in the United States.
The RML team proposed the name Rickettsia diaporica, derived from the Greek word for having the ability to pass through filter pores, to avoid naming it after either Cox or Davis if indeed Noguchi's description had priority. Around the same time, Derrick proposed the name Rickettsia burnetii, in recognition of Burnet's contribution in identifying the organism as a Rickettsia. As it became clear that the species differed significantly from other Rickettsia, it was first elevated to a subgenus named after Cox, Coxiella, and then in 1948 to its own genus of that name, proposed by Cornelius B. Philip, another RML researcher. Research in the 1960s–1970s by French Canadian-American microbiologist and virologist Paul Fiset was instrumental in the development of the first successful Q fever vaccine.
Coxiella was difficult to study because it could not be reproduced outside a host. However, in 2009, scientists reported a technique allowing the bacteria to grow in an axenic culture and suggested the technique may be useful for study of other pathogens.
Pathogenesis
Of the many C. burnetii strains, two of the most studied are the Nine Mile phase I and Priscilla phase I strains. In recent years, more strains have been studied. Nonetheless, it has been demonstrated that the Nine Mile strain is one of the most virulent strains of C. burnetii, with as few as four organisms needed to cause infection. This is particularly relevant as murine rodents are poorly susceptible to C. burnetii, necessitating a higher dose and a more virulent strain to inoculate murine rodents for disease study.
The ID50 (the dose needed to infect 50% of experimental subjects) is one via inhalation; i.e., inhalation of one organism will yield disease in 50% of the population. This is an extremely low infectious dose (only 1-10 organisms required), making C. burnetii one of the most infectious known organisms. Disease occurs in two stages: an acute stage that presents with headaches, chills, and respiratory symptoms, and an insidious chronic stage.
C. burnetii infection begins within the alveoli. Upon inhalation, it targets alveolar macrophages and passively enters them via actin-dependent phagocytosis. After initial binding, it is suggested that C. burnetii enters phagocytotic cells via passive actin-dependent phagocytosis and enters non-professional phagocytes via an active zipper mechanism. C. burnetii exploits the αVβ3 integrin to enter using RAC1-dependent phagocytosis, which is believed to have evolved as a mechanism to avoid the induction of an inflammatory response.
Following infection, C. burnetii has a biphasic developmental cycle, which consists of small cell variant (SCV) and large cell variant (LCV) morphological forms, which are both infectious. As the SCV is metabolically repressed and resistant to many environmental stressors, it is likely the form that initiates natural infections. Having entered a host cell, C. burnetii SCVs transit through the phagolysosomal maturation pathway. In the first six hours post-infection, endosomes, autophagosomes, and lysosomes containing acid phosphatase fuse with the nascent phagosome to form early PV, which fosters the transition from SCV to LCV. Resultantly, C. burnetii is metabolically activated and produces the T4SS to translocate effector proteins into the host cytoplasm. After 6 days, C. burnetii transitions back to SCV.
While most infections clear up spontaneously, treatment with tetracycline or doxycycline appears to reduce the symptomatic duration and reduce the likelihood of chronic infection. A combination of erythromycin and rifampin is highly effective in curing the disease, and vaccination with Q-VAX vaccine (CSL) is effective for prevention of it.
The bacteria use a type IVB secretion system known as Icm/Dot (intracellular multiplication / defect in organelle trafficking genes) to inject over 100 effector proteins into the host. These effectors increase the bacteria's ability to survive and grow inside the host cell by modulating many host cell pathways, including blocking cell death, inhibiting immune reactions, and altering vesicle trafficking. In Legionella pneumophila, which uses the same secretion system and also injects effectors, survival is enhanced because these proteins interfere with fusion of the bacteria-containing vacuole with the host's degradation endosomes.
Use as a biological weapon
The United States ended its biological warfare program in 1969. When it did, C. burnetii was one of seven agents it had standardized as biological weapons.
Genomics
At least 75 completely sequenced genomes of Coxiella burnetii strains exist, which contain about 2.1 Mbp of DNA each and encode around 2,100 open reading frames; 746 (or about 35%) of these genes have no known function.
In bacteria small regulatory RNAs are activated during stress and virulence conditions. Coxiella burnetii small RNAs (CbSRs 1, 11, 12, and 14) are encoded within intergenic region (IGR). CbSRs 2, 3, 4 and 9 are located antisense to identified ORFs. The CbSRs are up-regulated during intracellular growth in host cells.
All C. burnetii isolates either carry one of four conserved independently-replicating large plasmids (QpH1, QpDG, QpRS, or QpDV) or a chromosomal element derived from QpRS. QpH1 carries virulence factors important for the bacterium's survival inside mouse macrophages and Vero cells; growth on axenic media is unaffected. QpH1 also contains a toxin-antitoxin system. Among all plasmids, 8 conserved genes code for proteins that are inserted into the host cell via the secretion system.
Additional images
References
External links
Coxiella burnetii genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID
Legionellales
Biological agents
Gram-negative bacteria
Bacteria described in 1939 | Coxiella burnetii | [
"Biology",
"Environmental_science"
] | 1,624 | [
"Biological agents",
"Toxicology",
"Biological warfare"
] |
1,025,538 | https://en.wikipedia.org/wiki/Findability | Findability is the ease with which information contained on a website can be found, both from outside the website (using search engines and the like) and by users already on the website. Although findability has relevance outside the World Wide Web, the term is usually used in that context. Most relevant websites do not come up in the top results because designers and engineers do not cater to the way ranking algorithms work currently. Its importance can be determined from the first law of e-commerce, which states "If the user can’t find the product, the user can’t buy the product." As of December 2014, out of 10.3 billion monthly Google searches by Internet users in the United States, an estimated 78% are made to research products and services online.
Findability encompasses aspects of information architecture, user interface design, accessibility and search engine optimization (SEO), among others.
Introduction
Findability is similar to discoverability, which is defined as the ability of something, especially a piece of content or information, to be found. It is different from web search in that the word find refers to locating something in a known space while 'search' is in an unknown space or not in an expected location.
Mark Baker, the author of Every Page is Page One, mentions that findability "is a content problem, not a search problem". Even when the right content is present, users often find themselves deep within the content of a website but not in the right place. He further adds that findability is intractable and perfect findability is unattainable, but that we need to focus on reducing the finding effort that users must expend for themselves.
Findability can be divided into external findability and on-site findability, based on where the customers need to find the information.
History
Heather Lutze is thought to have created the term in the early 2000s. The popularization of the term findability for the Web is usually credited to Peter Morville. In 2005 he defined it as: "the ability of users to identify an appropriate Web site and navigate the pages of the site to discover and retrieve relevant information resources", though it appears to have been first coined in a public context referring to the web and information retrieval by Alkis Papadopoullos in a 2005 article entitled "Findability".
External findability
External findability is the domain of Internet marketing and search engine optimization (SEO) tactics. External findability can be very influential for businesses. Smaller companies may have trouble influencing external findability, due to being less well known to consumers. Other means are taken to make sure that they are found in search results.
Several factors affect external findability:
Search engine indexing: As the very first step, webpages need to be found by indexing crawlers in order to be shown in the search results. It is therefore helpful to avoid factors that may lead to webpages being ignored by indexing crawlers. Those factors may include elements that require user interaction, such as entering log-in credentials. Algorithms for indexing vary by search engine, which means the number of webpages of a website successfully indexed may be very different between Google's and Yahoo!'s search engines. Also, in countries like China, government policies could significantly influence the indexing algorithms. In this case, local knowledge about laws and policies could be valuable.
Page descriptions in search results: Once the webpages are successfully indexed by web crawlers and show in the search results with decent ranking, the next step is to attract customers to click the link to the web pages. However, the customers can't see the whole web pages at this point; they can only see an excerpt of the webpage's content and metadata. Therefore, displaying meaningful information in a limited space, usually a couple of sentences, in search results is important for increasing click traffic of the webpages, and thus the findability of the web content on your webpages.
Keyword matching: At a semantic level, the terminology used by the searcher and the content producer may differ. Bridging the gap between the terms used by customers and those used by developers is helpful for making web content findable to more potential content consumers (a small sketch of this idea follows this list).
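As a rough illustration of the keyword-matching point above, the following Python sketch expands a searcher's query with a hand-made synonym table before matching it against page text. The synonym map, page text and containment test are invented for illustration only; real search systems rely on stemming, thesauri, query expansion and ranking rather than a simple substring check.

SYNONYMS = {
    "cheap": {"affordable", "budget", "low-cost"},
    "laptop": {"notebook", "portable computer"},
}

def expand_query(terms):
    """Return the original query terms plus any known synonyms."""
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

def matches(query_terms, page_text):
    """True if any expanded query term appears in the page text."""
    text = page_text.lower()
    return any(term in text for term in expand_query(query_terms))

print(matches({"cheap", "laptop"}, "Affordable notebook computers on sale"))
# True: "affordable" and "notebook" bridge the vocabulary gap between searcher and producer.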
On-site findability
On-site findability is concerned with the ability of a potential customer to find what they are looking for within a specific site. More than 90 percent of customers use internal search on a website rather than browsing. Of those, only 50 percent find what they are looking for. Improving the quality of on-site search greatly improves the business of the website. Several factors affect findability on a website:
Site search: If searchers within a site do not find what they are looking for, they tend to leave rather than browse through the website. Users who had successful site searches are twice as likely to ultimately convert.
Related links and products: User experience can be enhanced by trying to understand the needs of the customer and provide suggestions for other, related information.
Site match to customer needs and preferences: Site design, content creation, and recommendations are major factors for affecting the customer experience.
Evaluation and measures
Baseline findability is the existing findability before changes are made in order to improve it. This is measured by participants who represent the customer base of the website, who try to locate a sample set of items using the existing navigation of the website.
In order to evaluate how easily information can be found by searching a site using a search engine or information retrieval system, retrievability measures were developed, and similarly, navigability measures now measure ease of information access through browsing a site (e.g. PageRank, MNav, InfoScent (see Information foraging), etc.).
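PageRank, one of the navigability measures mentioned above, can be computed with a short power iteration. The Python sketch below runs it over a tiny hypothetical site graph; the link structure, damping factor and iteration count are illustrative assumptions rather than a definitive implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

site = {
    "home": ["products", "about"],
    "products": ["home", "contact"],
    "about": ["home"],
    "contact": [],
}
for page, score in sorted(pagerank(site).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")

Pages that attract more links from well-linked pages (here, "home") end up easier to reach by browsing, which is the intuition behind treating such scores as navigability measures.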
Findability also can be evaluated via the following techniques:
Usability testing: Conducted to find out how and why users navigate through a website to accomplish tasks.
Tree testing: An information architecture based technique, to determine if critical information can be found on the website.
Closed card sorting: A usability technique based on information architecture, for evaluating the strength of categories.
Click testing: Accounts for the implicit data collected through clicks on the user interface.
Beyond findability
Findability Sciences defines a findability index in terms of each user's influence, context, and sentiments. For seamless search, current websites focus on a combination of structured hypertext-based information architectures and rich Internet application-enabled visualization techniques.
See also
References
Further reading
Morville, P. (2005) Ambient findability. Sebastopol, CA: O'Reilly
Wurman, R.S. (1996). Information architects. New York: Graphis.
External links
: a collection of links to people, software, organizations, and content related to findability
The age of findability (article)
Use Old Words When Writing for Findability (article on the findability impact of a site's choice of words)
Building Findable Websites: Web Standards SEO and Beyond (book)
The Findability Formula: The Easy, Non-Technical Guide to Search Engine Marketing by Heather Lutze
Web design
Knowledge representation
Information science
Information architecture | Findability | [
"Engineering"
] | 1,448 | [
"Design",
"Web design"
] |
1,025,765 | https://en.wikipedia.org/wiki/Type-in%20program | A type-in program or type-in listing was computer source code printed in a home computer magazine or book. It was meant to be entered via the keyboard by the reader and then saved to cassette tape or floppy disk. The result was a usable game, utility, or application program.
Type-in programs were common in the home computer era from the late 1970s through the early 1990s, when the RAM of 8-bit systems was measured in kilobytes and most computer owners did not have access to networks such as bulletin board systems.
Magazines such as Softalk, Compute!, ANALOG Computing, and Ahoy! dedicated much of each issue to type-in programs. The magazines could contain multiple games or other programs for a fraction of the cost of purchasing commercial software on removable media, but the user had to spend up to several hours typing each one in. Most listings were either in a system-specific BASIC dialect or machine code. Machine code programs were long lists of decimal or hexadecimal numbers, often in the form of DATA statements in BASIC. Most magazines had error checking software to make sure a program was typed correctly.
Type-in programs did not carry over to 16-bit computers such as the Amiga and Atari ST in a significant way, as both programs and data (such as graphics) became much larger. It became common to include a covermount 3½-inch floppy disk or CD-ROM with each issue of a magazine.
Description
A reader would take a printed copy of the program listing, such as from a magazine or book, sit down at a computer, and manually enter the lines of code. Computers of this era automatically booted into a programming environment – even the commands to load and run a prepackaged program were really programming commands executed in direct mode. After typing the program in, the user would be able to run it and also to save it to disk or a cassette for future use. Users were often cautioned to save the program before running it, as errors could result in a crash requiring a reboot, which would render the program irretrievable unless it had been saved. While some type-in programs were short, simple utility or demonstration programs, many type-ins were fully functional games or application software, sometimes rivaling commercial packages.
Type-ins were usually written in BASIC or a combination of a BASIC loader and machine code. In the latter case, the opcodes and operands of the machine code part were often simply given as DATA statements within the BASIC program, and were loaded using a POKE loop, since few users had access to an assembler. In some cases, a special program for entering machine code numerically was provided. Programs with a machine code component sometimes included assembly language listings for users who had assemblers and who were interested in the internal workings of the program.
The downside of type-ins was labor. The work required to enter a medium-sized type-in was on the order of hours. If the resulting program turned out not to be to the user's taste, it was quite possible that the user spent more time keying in the program than using it. Additionally, type-ins were error-prone, both for users and for the magazines. This was especially true of the machine code parts of BASIC programs, which were nothing but line after line of data, e.g. DATA statements in the BASIC language. In some cases where the version of ASCII used on the type of computer the program was published for included printable characters for each value from 0–255, the code could have been printed using strings that contained the glyphs that the values mapped to, or a mnemonic such as [SHIFT-R] instructing the user which keys to press. While a BASIC program would often stop with an error at an incorrect statement, the machine code parts of a program could fail in untraceable ways. This made the correct entry of programs difficult.
Other solutions existed for the tedium of typing in seemingly-endless lines of code. Freelance authors wrote most magazine type-in programs and, in the accompanying article, often provided readers a mailing address to send a small sum (US$3 was typical) to buy the program on disk or tape. By the mid-1980s, recognising this demand from readers, many US-published magazines offered all of each issue's type-ins on an optional disk, often with a bonus program or two. Some of these disks became electronic publications in their own right, outlasting their parent magazine as happened with Loadstar. Some UK magazines occasionally offered a free flexi disc that played on a turntable connected to the microcomputer's cassette input. Other input methods, such as the Cauzin Softstrip, were tried, without much success.
Not all type-ins were long. Run magazine's "Magic" column specialized in one-liner programs for the Commodore 64. These programs were often graphic demos or meant to illustrate a technical quirk of the computer's architecture; the text accompanying the graphics demo programs would avoid explicitly describing the resultant image, enticing the reader to type it in.
History
Type-in programs preceded the home computer era. As David H. Ahl wrote in 1983:
Upon Ahl's departure from DEC in July 1974, he initiated a bimonthly magazine titled Creative Computing while serving as an educational marketing manager at AT&T. The inaugural issue was released in October of that year, and by the fourth year, a team of eight individuals were working on it. The magazine featured computer games and its debut coincided with the introduction of the Altair 8800 - the first widely accessible computer kit - which was announced in January 1975, according to Ahl.
Most early computer magazines published type-in programs. The professional and business-oriented journals such as Byte and Popular Computing printed them less frequently, often as a test program to illustrate a technical topic covered in the magazine rather than an application for general use. Consumer-oriented publications such as Compute! and Family Computing ran several each issue. The programs were sometimes specific to a given home computer and sometimes compatible with several computers. Platform-specific magazines such as Compute!'s Gazette (VIC-20 and Commodore 64) and Antic (Atari 8-bit computers), since they only had to print one version of each program, were able to print more, longer listings.
Although type-in programs were usually copyrighted, like the many games in BASIC Computer Games, authors often encouraged users to modify them, adding capabilities or otherwise changing them to suit their needs. Many authors used the article accompanying the type-ins to suggest modifications for the reader and programmer to perform. Users would sometimes send their changes back into the magazine for later publication. This could be considered a predecessor to open source software, but today most open source licenses specify that code be available in a machine-readable format.
Antic stated in 1985 that its staff "spends a good portion of our time diligently combing the incoming submissions for practical application programs. We receive a lot of disk directory programs, recipe file storers, mini word processors, and other rehashed versions of old ideas". While most type-ins were simple games or utilities and likely only to hold a user's interest for a short time, some were very ambitious, rivaling commercial software. Perhaps the most famous example is the type-in word processor SpeedScript, published by Compute!'s Gazette and Compute! for several 8-bit computers starting in 1984. Compute! also published SpeedScript, along with some accessory programs, in book form. It retained a following into the next decade as users refined and added capabilities to it.
Compute! discontinued type-in programs in May 1988, stating "As computers and software have grown more powerful, we've realized it's not possible to offer top quality type-in programs for all machines. And we also realize that you're less inclined to type in those programs". As the cost of cassette tapes and floppy disks declined, and as the sophistication of commercial programs and the technical capabilities of the computers they ran on steadily increased, the importance of the type-in declined. In Europe, magazine covermount disks became common, and type-ins became virtually non-existent.
Validation software
To prevent errors when typing in listings, most publications provided short programs to verify that code was entered correctly. These were specific to a magazine or family of magazines, and different validation programs were usually used for BASIC source and binary data. Compute! and Compute!'s Gazette printed a short listing in each issue for The Automatic Proofreader to check BASIC programs, while ANALOG Computing used D:CHECK (for disk) and C:CHECK (for cassette tape).
For binary listings, Compute! and Ahoy! provided MLX and Flankspeed respectively, which were both interactive programs for entering data. The MIKBUG machine code monitor for the Motorola 6800 of the late 1970s incorporated a checksum into its hexadecimal program listings. ANALOG Computing presented machine code programs as BASIC DATA statements, then prepended a short program to compute checksums. Running the program output a list of values to be checked against those printed in the magazine. Upon successful validation, the program was saved as a binary file and the BASIC code no longer needed.
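The exact checksum algorithms differed by magazine and machine, and the listing below is a hypothetical minimal sketch in the same spirit rather than a reconstruction of The Automatic Proofreader, MLX, D:CHECK or Flankspeed. Written in Python for illustration, it reduces a typed line, and the numeric values of a BASIC-style DATA line, to single numbers that a reader could compare against figures printed beside the listing.

def line_checksum(line):
    """Sum the character codes of a typed line, modulo 256."""
    return sum(ord(ch) for ch in line.strip()) % 256

def data_checksum(line):
    """Sum the numeric values on a BASIC-style DATA line, modulo 256."""
    body = line.strip()
    if body.upper().startswith("DATA"):
        body = body[4:]
    return sum(int(v) for v in body.split(",")) % 256

typed = "DATA 169, 1, 141, 32, 208, 96"
print(line_checksum(typed))  # compare with the printed per-line value
print(data_checksum(typed))  # compare with the printed data-line value

A mistyped digit changes the checksum, so the reader can spot a bad line before saving the program to tape or disk, although simple sums of this kind cannot catch transposed values.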
See also
Cauzin Softstrip
Notes
References
External links
Full text of classic type in program books
Classic Computer Magazine Archive
THE TYPE FANTASTIC (TTFn): The Sinclair magazine type-in programs archive – By Jim Grimwood; original archive by Michael Bruhn
List of Commodore 64 Type-In Games Books
First encounter: COMPUTE! magazine and its glorious, tedious type-in code - by Nate Anderson; Ars Technica
Comprehensive list of type-in program checkers
Home computer software
Type-in program
History of software | Type-in program | [
"Technology"
] | 2,014 | [
"History of software",
"History of computing"
] |
1,025,802 | https://en.wikipedia.org/wiki/The%20Physicists | The Physicists () is a satiric drama/tragic comedy written in 1961 by Swiss writer Friedrich Dürrenmatt. The play was mainly written as a result of the Second World War and many advances in science and nuclear technology. The play deals with questions of scientific ethics and humanity's general ability to manage its intellectual responsibility. It is often recognized as his most impressive yet most easily understood work.
The play was first performed in Zürich in 1962 and published the same year by Verlags AG "Die Arche". It was translated into English by James Kirkup, and published in the US in 1964 by Grove Press, under its Evergreen imprint.
Synopsis
The story is set in the drawing room of Les Cerisiers sanatorium, which is a psychiatric home for the mentally ill, run by a doctor and psychologist, Mathilde von Zahnd. The main room, where the play is set, is connected to three other rooms, each of which is inhabited by a patient. These three men, who are all “physicists” by trade, are permitted use of the drawing room, where they are monitored and checked on by the female nurses. Herbert Georg Beutler is the first patient, and he is convinced that he is Sir Isaac Newton. The second patient is Ernst Heinrich Ernesti, who believes himself to be Albert Einstein. The third patient is Johann Wilhelm Möbius, and he believes that he is regularly visited by the biblical King Solomon. In reality, “Sir Isaac Newton” and “Albert Einstein” are spies from different sides of the cold war conflict.
Once the play begins, it is revealed to the audience that "Einstein" has just killed one of his nurses, and the police are examining the scene. The “Inspector” continuously questions Fräulein Mathilde von Zahnd and indirectly insults the “mentally ill” patients.
It is revealed through their discussion that this is the second murder of a nurse by one of the three patients in just three months. The first was committed by "Newton", and is later revealed to have been a result of the nurse's discovery of "Newton's" real identity.
The motives behind the two murders become clearer as the play advances into the second act, where it is revealed with startling abruptness that not even one of the three patients is actually mad; they are all only faking insanity for various reasons.
Möbius is actually an incredibly brilliant physicist whose discoveries include such fabled results as a solution to the problem of gravitation, a "Unitary Theory of Elementary Particles", and the "Principle of Universal Discovery". Fearing what humanity could do with these powerful discoveries, he feigned madness, in hopes that he might be put in a home for the mentally ill and thus be protected along with his knowledge.
However, he failed to avoid attention which he so dearly feared. "Einstein" and "Newton" are both spies, representatives of two different countries, and they have infiltrated the Les Cerisiers in order to secure Möbius' documents and, if possible, the man himself. Each spy had to murder a nurse to protect his secrets and to strengthen his simulation of madness, as well as to further conceal their identities.
In the play's climactic scene, the three men reveal their secrets, and each of the two spies attempts to convince Möbius to come with them. Möbius, however, persuades them that the secrets he has discovered are too terrible for man to know and assures them that their efforts are in vain because he recently burned all the papers that he developed during his time in the sanatorium. After much debate, the three men finally agree that they are content to protect humanity by living out the rest of their lives in captivity, while furthering and serving physics.
These noble plans are quickly changed by the play's final plot twist; Fräulein Doktor Mathilde von Zahnd enters the room and reveals to the three men that she has eavesdropped on their entire conversation. Furthermore, she admits to knowing about Möbius for years and has been secretly copying his documents. She reveals that she has been using his scientific discoveries to construct an international empire which she would later rule. She believes that King Solomon is speaking to her, and she believes that with his guidance and Möbius' discoveries, she can become the most powerful woman on earth.
The story ends with a sense of impending doom. Möbius, "Newton", and "Einstein" have been tricked and trapped. The play ends with each of the men speaking to the audience, emphasizing their plight and the plight of all humanity. In their eyes, humanity was lost and could not be helped.
Adaptations
A 1963 radio play adaptation by Schweizer Radio DRS.
It was adapted for Australian TV.
A German television adaptation was produced by Süddeutscher Rundfunk and directed by Fritz Umgelter.
In 1988, a TV movie The Physicists () was produced by the Lentelefilm studio in the USSR.
BBC radio version 17/10/1963 (repeated 8/11/1963 & 5/3/1972) produced by William Glen-Doepel; and BBC World Service radio versions from c1981 & 7/7/1991.
In January 2013, BBC Radio 3 broadcast an adaptation by Matt Thompson with Samantha Bond as Doctor von Zahnd, Geoffrey Whitehead as the inspector, John Hodgkinson as Möbius, Thom Tuck as Newton, John Bett as Einstein, and Madeleine Worrall as both Nurse Monika and Mrs Rose.
1964 Australian TV version
The play was adapted for Australian TV in 1964 by the Australian Broadcasting Commission in Melbourne. Australian drama was relatively rare at the time and it was common for local versions of overseas plays to be produced.
Premise
At a mental asylum in Europe, police investigate the murder of two nurses who were assigned to three inmates, all physicists: Mobius, Beutler, and Ernesti. Mobius imagines himself to be in contact with King Solomon. Beutler and Ernesti maintain they are Newton and Einstein.
Cast
Terry Norris as Beutler
Wynn Roberts as Mobius
Robert Peach as Ernesti
Syd Conabere
Brian James
Patricia Kennedy as psychiatrist
Gerda Nicolson
Elizabeth Wing
Production
The play had been first produced in London in January 1963 and made its Australian stage premiere at St Martins Theatre, Sydney, in October 1963.
It was one of 20 TV plays produced by the ABC in 1964. Chris Muir described the play as "full of the unexpected and rich in dramatic climaxes. It is also a play of black humour. Dürrenmatt, while making us laugh at ourselves, makes us feel uncomfortable in the process by showing us our failings often through grotesque imagery."
Reception
The critic for the Sydney Morning Herald wrote that the production:
Sifted the convincing effects in the play from the chaff of its thriller-comedy element. The light relief dialogue is there for the purpose of keeping a puzzled live audience amused, and on television this doubtful sprinkling of humour did not come through; similarly the two murders and police investigations rang false in such unrealistic treatment. Christopher Muir... followed Duerrenmatt's directions closely, imposing on television the geometrical pattern of the asylum common room with its three cell doors and the curiously clockwork behaviour of the characters. As though seen under a magnifying glass, the gripping features of the play showed clear and sharp; the only real and understandable figure, fortunately one central to the play, was given a worthy portrayal by Wynn Roberts (although one of his big scenes was cut). This was Mobius, the genius impelled by both fear and courage. Tension is well supplied to the second half of the play by the unexpected twists of the plot, and the cold, lucid arguments of the three physicists were excellently focused in this production.
References
External links
1961 plays
German-language plays
Plays about science
Plays by Friedrich Dürrenmatt
Satirical plays
Fiction about mental health
Plays adapted into radio programs
Plays adapted into television shows
Swiss speculative fiction works
Cultural depictions of Isaac Newton
Cultural depictions of Albert Einstein
Apocalyptic literature
Australian Broadcasting Corporation television plays
Television plays filmed in Melbourne
Television plays directed by Chris Muir
1964 Australian television plays | The Physicists | [
"Astronomy"
] | 1,674 | [
"Cultural depictions of Isaac Newton",
"Cultural depictions of astronomers"
] |
1,025,835 | https://en.wikipedia.org/wiki/Hardy%20Spicer | Hardy Spicer is a brand of automotive transmission or driveline equipment best known for its mechanical constant velocity universal joint originally manufactured in Britain by Hardy employing patents belonging to US-based Spicer Manufacturing. Hardy and Spicer soon became partners. Later Spicer became Dana Holding Corporation.
Since the commercial success of front wheel drive cars began in the 1960s the industry manufacturing universal joints has grown enormously.
The Hardy Spicer and Laycock Engineering group of businesses, later known as Birfield, have been part of the GKN Driveline group since 1966.
History
Ed. J Hardy Limited was founded and later formed into a limited liability company by Birmingham-born cycle-parts manufacturer Edward John Hardy (1874–1950) in 1903 to import components for British motor manufacturers from France. The French industry was then dominant.
Bearings
Bound Brook Bearings of Bound Brook, New Jersey sold to Ed J Hardy and Company in 1922 the rights to manufacture their oil-less bearings and oil retaining bearings and sell them in Europe and the British Empire.
Flexible coupling
Just before the first World War Hardy designed, patented and made a flexible laminated fabric and rubber coupling which soon became standard on British cars and trucks. A licence to manufacture the Hardy flexible coupling in USA was granted to the Thermoid Rubber Company.
Mechanical universal joint
More powerful engines and higher speeds required a mechanical universal joint. In USA, already with a link to Thermoid, Hardy established a contact with Spicer Manufacturing Corporation of Toledo, Ohio. Spicer took a share of Ed. J Hardy Limited in exchange for British patent rights and all engineering data of the Spicer mechanical joint and in 1926 the name of Ed. J Hardy & Co was changed to Hardy, Spicer and Co Limited.
Other businesses
The Phosphor Bronze Company was bought in 1937 for its manufacture of high grade non-ferrous castings and the following year Hardy, Spicer elected to make their own forgings in their own forging plant. The plant's name was Forgings and Presswork (Birmingham) Limited. Sheffield's Laycock Engineering also made a flexible coupling known as Layrub as well as being a large manufacturer of garage and railway equipment.
Birfield Industries
In 1939 Hardy-Spicer joined with Laycock Engineering, both becoming subsidiaries of a new holding company named Birfield Industries Limited, incorporated for that purpose by Laycock Engineering's chairman, Herbert Hill (1901–1987).
Constant velocity joints
Herbert Hill pushed his team to make continuous improvements to the basic Rzeppa constant velocity joint and was rewarded in the 1960s when much of the world's motor industry switched to front wheel drive using Birfield joints, the CV joints now made by GKN Driveline and currently installed in more than one-third of all new cars worldwide.
Notable improvements to the original Rzeppa design have been the elimination of the need for a splined coupling and Birfield's modifications to the ball grooves and their track-steered ball cage introduced with BMC's Minis in 1959.
Rear axles
Salisbury Axle in USA was also part of the Spicer Group and in 1939 the Salisbury Transmission Company was formed in Britain to manufacture hypoid rear axles and, in the late 1950s, Powr-Lok limited-slip differentials.
Automotive clutches
Laycock's principal product became spring diaphragm clutches.
From the late 1940s into the 1970s it made under de Normanville patents an add-on epicyclic overdrive unit which saved manufacturers from incorporating a 4th or 5th gear within their gearboxes.
Birfield multinational group 1964
Group members included Hardy Spicer; Laycock Engineering; Forgings & Presswork; Salisbury Transmission; Kent Alloys; Bound Brook; Felco Hoists; Hardy Spicer Walterscheid; John C Carlson; T B Ford; Shotton Bros; Birfield Filtration; R Jones & Co; Oddy Engineering; Birfield Machine Tools; Birfield Extrusions; Foundry Mechanisations (Baillot); A E Callaghan & Son; Micron Sprayers; Birfield Industries.
Overseas group members: Bound Brook Bearing Corporation of America; Nordiska Kardan AB; Birfield (Ireland); Birfield (Nederland) Transmissie; Felco France.
Overseas interests: 37.5 per cent of Uni-Cardan AG; 21 per cent of France's Glaenzer-Spicer SA, in any case controlled by Uni-Cardan; Birfield Transmissioni SpA in Italy, jointly owned with Glaenzer-Spicer; an association with Toyo Bearing of Japan; and lesser interests in businesses in many other industrialised countries.
GKN Driveline
In 1966 Guest Keen & Nettlefold, seeing advantage in amalgamating with its local competition and wanting to pre-empt an expected bid from USA's TRW Inc., bought Birfield, the sole UK supplier of CVJs.
There were also significant advantages in the amalgamation, such as Hardy Spicer's strength in the EU and USA, two parts of the world in which GKN was weak.
As GKN Driveline, its constant velocity joints take nearly 50 per cent of the world market, and Driveline employs about 22,000 people at 46 locations across 23 countries.
References
External links
CVJ
Mechanisms (engineering) | Hardy Spicer | [
"Engineering"
] | 1,094 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
1,025,901 | https://en.wikipedia.org/wiki/Reynolds%20decomposition | In fluid dynamics and turbulence theory, Reynolds decomposition is a mathematical technique used to separate the expectation value of a quantity from its fluctuations.
Decomposition
For example, for a quantity $u$ the decomposition would be

$u(x,y,z,t) = \bar{u}(x,y,z) + u'(x,y,z,t)$

where $\bar{u}$ denotes the expectation value of $u$ (often called the steady component, or the time, spatial or ensemble average), and $u'$ denotes the deviations from the expectation value (or fluctuations). The fluctuations are defined as the expectation value subtracted from the quantity, such that their time average equals zero.

The expected value, $\bar{u}$, is often found from an ensemble average, which is an average taken over multiple experiments under identical conditions. The expected value is also sometimes denoted $\langle u \rangle$, but it is also often seen with the over-bar notation.
Direct numerical simulation, or resolution of the Navier–Stokes equations completely in $(x, y, z, t)$, is only possible on extremely fine computational grids and small time steps even when Reynolds numbers are low, and becomes prohibitively computationally expensive at high Reynolds numbers. Due to computational constraints, simplifications of the Navier–Stokes equations are useful to parameterize turbulent motions that are smaller than the computational grid, allowing larger computational domains.
Reynolds decomposition allows the simplification of the Navier–Stokes equations by substituting in the sum of the steady component and perturbations to the velocity profile and taking the mean value. The resulting equation contains a nonlinear term known as the Reynolds stresses which gives rise to turbulence.
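As a rough numerical illustration of the decomposition (a minimal Python sketch with invented signals, not tied to any particular flow or source), the code below splits sampled velocity components into their time means and fluctuations, checks that the fluctuations average to zero, and estimates the correlation $\overline{u'v'}$ that appears in the Reynolds stress term.

import numpy as np

rng = np.random.default_rng(0)
n_samples = 2000

# Synthetic velocity samples: a steady part plus correlated "turbulent" noise
noise = rng.standard_normal(n_samples)
u = 5.0 + 0.8 * noise                                       # u = U + u'
v = 1.0 + 0.3 * noise + 0.2 * rng.standard_normal(n_samples)  # v = V + v'

u_mean, v_mean = u.mean(), v.mean()          # expectation via the time average
u_prime, v_prime = u - u_mean, v - v_mean    # fluctuations about the mean

print(np.isclose(u_prime.mean(), 0.0))       # True: fluctuations average to zero
print((u_prime * v_prime).mean())            # estimate of the correlation <u'v'>

Because u and v share part of their noise, the time average of u'v' is non-zero; this is the kind of nonlinear correlation that the Reynolds stress term represents.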
See also
Reynolds-averaged Navier–Stokes equations
References
Turbulence | Reynolds decomposition | [
"Chemistry"
] | 294 | [
"Turbulence",
"Fluid dynamics"
] |
1,025,941 | https://en.wikipedia.org/wiki/Emergency%20exit | An emergency exit in a building or other structure is a special exit used during emergencies such as fires. The combined use of regular and emergency exits allows for faster evacuation, and emergency exits provide alternative means of evacuation if regular exits are inaccessible.
Emergency exits must:
Be clearly marked (usually with signage that is normally illuminated, or is illuminated by a backup power source if central power fails)
Be in easily-accessible locations
Direct people to safe areas (usually outside)
Be regularly maintained and free of obstructions (they may not be used for storage)
Be secured to prevent unauthorized entry during normal operations
An emergency exit's path usually ends in an outward-opening door with a crash bar with exit signs pointing to it. It is usually a door to an area outside of the building, but may also lead to an adjoining, fire-isolated structure with clear exits of its own.
A fire escape is a special kind of emergency exit consisting of stairs and/or extendable ladders mounted on the outside of a building.
Buildings
Local building codes or building regulations often dictate the number of fire exits required for a building of a given size, including the number of stairwells. For any buildings bigger than a private house, modern codes invariably specify at least two sets of stairs, completely isolated from each other so that if one becomes impassable due to smoke or flames, the other remains usable.
The traditional way to satisfy this requirement was to construct two separate stairwell stacks, each occupying its own footprint within each floorplan. Each stairwell is internally configured into an arrangement often called a "U-return" or "return" design. The two stairwells may be constructed next to each other, separated by a fireproof partition, or optionally the two stairwells may be located at some distance from each other within the floorplan. The traditional arrangement has the advantage of being easily understood by building occupants and occasional visitors.
Some architects save space while still meeting the exit requirement, by housing two stairwells in a "double helix" or "scissors stairs" configuration whereby two stairwells occupy the same floor footprint, but are intertwined while being separated by fireproof partitions along their entire run. However, this design deposits anybody descending the stack into alternating locations on each successive floor, and this can be very disorienting. Some building codes recommend using a color-coded stripe and signage to distinguish otherwise identical-looking stairwells from each other, and to make following a quick exit path easier.
In older buildings that predate modern fire codes, and which lack space for a second stairwell, having intertwining stairs so close to each other may not allow firefighters going up and evacuees going down to use separate stairways.
For example, Westfield Stratford City uses a scissors stairway configuration in its upper car park. This part of the building has eight storeys: LG, G, and 1 are part of the shopping centre; 2 has some offices and a storage area; CP1, CP2, CP3, and CP4 are a multi-storey car park. The floors are served by the main public lifts and escalators, and by 1 set of a double-helix stairway and lift per , going into the service areas. The main public escalators do not count as fire exits, as the doors may be locked during less busy periods. The building has one fire exit per of floor space.
Knowing where the emergency exits are in buildings can save lives. Some buildings, such as schools, have fire drills to practice using emergency exits. Many disasters could have been prevented if people had known where fire escapes were and if emergency exits had not been blocked. For example, in the September 11, 2001, attacks on the World Trade Center, some of the emergency exits inside the building were inaccessible, while others were locked. In the Stardust Disaster and the 2006 Moscow hospital fire, the emergency exits were locked and most windows were barred shut. In the case of the Station Nightclub, the premises were over capacity on the night the fire broke out, the front exit was poorly designed (right outside the door, the concrete approach split 90 degrees and a railing ran along the edge), and an emergency exit swung inward, not outward as code requires.
In many countries, it is required that all new commercial buildings include well-marked emergency exits. Some older buildings must be retrofitted with fire escapes. In countries where emergency exits are not standard, or the standards are not enforced, fires will often result in a much greater loss of life.
Signage
The UK Health and Safety (Safety Signs and Signals) Regulations 1996 define a fire safety sign as an illuminated sign or acoustic signal that provides information on escape routes and emergency exits. Well-designed emergency exit signs are necessary for emergency exits to be effective.
Fire escape signs usually display the word "EXIT" or the equivalent word in the local language with large, well-lit, green letters, or the green pictorial "running-man" symbol developed and adopted in Japan around 1980 and introduced in 2003 by ISO 7010. The pictorial green "running-man" sign is mandatory in Japan, the European Union, South Korea, Australia, New Zealand, and Canada, and is increasingly common elsewhere.
Some states in the United States currently require exit signs to be colored red, even though the color red in signage usually indicates hazards, prohibited actions, or an instruction to stop, while green indicates a safe place or action, or permission to proceed. Older building codes in Canada required red exit signs, but new installations of red signs are no longer allowed.
Emergency door release
An emergency door release call point (or a pull station in the United States) is used to disengage locking devices such as electromagnets, bolt locks, and electric locks while also ensuring positive security and failsafe operation.
Nightclubs, restaurants, and similar venues
Worldwide, there have been repeated mass casualties in nightclubs and related venues where large numbers of people may gather. A violent personal dispute, fire, terrorist attack, or other incident can cause a mass panic or stampede for the exits. If the exits are blocked, locked, hidden, or inadequate, large numbers of casualties and deaths can result. The 1942 Cocoanut Grove fire in Boston caused over 400 deaths from a flash fire in a blacked-out nightclub with only a single obvious exit through a revolving door. Building codes and life safety regulations were extensively reformed in the US in the following years, and influenced changes in many other countries as well.
However, mass casualty incidents still occur in the US and elsewhere in the world, due to inadequate enforcement of safety rules. For a list of some of the most notable incidents, see Template:Club fires.
Blocked exits
Firefighters have cited overzealous security guards who told people during a fire that they were not allowed to use emergency exits. The practice is actually quite common in the absence of fires as well. Some skyscrapers have stairwells with standard emergency exit signs on each door, which then lock upon closing. Users of these stairwells can become trapped if they do not know that the only door that opens from the inside is the one on the ground floor.
A further problem, now very common in the US, is that retail stores close one of their main entrances/exits at night with makeshift heavy metal barriers, signage, paper notes, or junk placed in front of the doors. Some actually lock their exits. A large array of signage and mechanical exit systems has also been devised, including signs that read, contradictorily, "This is not an exit" or "Do not use this exit", or that warn users a heavy penalty will be assessed for non-emergency use.
Some systems do not allow the exit to be opened until the user signals the intention to exit (through a button or lever) for some amount of time, such as 20 seconds. It is also common for these exits to remain completely locked until somebody tests them.
Some have alarms activated when they are opened, to alert staff of unauthorized use during non-emergencies. On many exits, the user may have to push against a crash bar or other door opening device for a period of time to unlock the door. Many exits have a sign reading, "Emergency exit only, alarm will sound if opened", to warn of the fact that it is an emergency exit only.
Aircraft
In aircraft terms, an "exit" is any one of the main doors (entry doors on the port side of the aircraft and service doors on the starboard side) and an "emergency exit" is defined as an exit that is only ever used in an emergency (such as overwing exits and permanently-armed exits).
In the early years, the emergency exit was a hatch in the ceiling of the aircraft. In the 1928 KLM Fokker F.III crash at Waalhaven, the passengers did not know the location of this emergency exit; one passenger could not escape in time and died. As a result, the investigative committee advised making the emergency exit more visible inside the cabin.
Passengers seated in exit rows may be called upon to assist and open exits in the event of an emergency.
The number and type of exits on an aircraft is regulated through strict rules within the industry, and is based on whether the aircraft is single or twin-aisled; the maximum passenger load; and the maximum distance from a seat to an exit. The goal of these regulations is to make possible the evacuation of an airliner's designed maximum occupancy of passengers and crew within 90 seconds even if half of the available exits are blocked.
Any aircraft where the emergency exit door sill height is above that which would make unaided escape possible is fitted with an automatic inflatable evacuation slide, which allows occupants to slide to the ground safely.
Ventral exits must allow the same rate of egress as a Type I exit; tailcone exits are aft of the fuselage.
Aircraft for fewer than 19 passengers must have one sufficient exit on each side of the fuselage, and two per side for larger aircraft, with exits spaced no more than a specified distance apart.
In November 2019, the EASA allowed "Type-A+" exits with a dual-lane evacuation slide, increasing maximum accommodation to 480 seats, up from 440, with four pairs of doors on the A350-1000, and up to 460 on the A330-900.
History
Following the events of the Victoria Hall disaster in Sunderland, England, in 1883 in which more than 180 children died because a door had been bolted at the bottom of a stairwell, the British government began legal moves to enforce minimum standards for building safety. This slowly led to the legal requirement that venues must have a minimum numbers of outward opening emergency exits as well as locks which could be opened from the inside.
These moves were not globally copied for some time. For example, in the United States, 146 factory workers died in the Triangle Shirtwaist Factory fire in 1911 when they were stopped by locked exits, and 492 people died in the Cocoanut Grove fire in a Boston nightclub in 1942. This led to regulations requiring that exits of large buildings open outward, and that enough emergency exits be provided to accommodate the building's capacity.
Similar disasters around the world also resulted in public fury and calls for changes to emergency regulations and enforcement. An investigation was launched by the Argentine federal government after 194 people were killed during the 2004 República Cromañón nightclub fire in Buenos Aires, Argentina. The emergency exits had been chained shut by the owners, to prevent people from sneaking into the nightclub without paying.
References
External links
Information on fire exit signs in Britain
Architectural elements
Construction law
Safety | Emergency exit | [
"Technology",
"Engineering"
] | 2,347 | [
"Building engineering",
"Construction",
"Architectural elements",
"Construction law",
"Components",
"Architecture"
] |
1,025,943 | https://en.wikipedia.org/wiki/Logical%20schema | A logical data model or logical schema is a data model of a specific problem domain expressed independently of a particular database management product or storage technology (physical data model) but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags. This is as opposed to a conceptual data model, which describes the semantics of an organization without reference to technology.
Overview
Logical data models represent the abstract structure of a domain of information. They are often diagrammatic in nature and are most typically used in business processes that seek to capture things of importance to an organization and how they relate to one another. Once validated and approved, the logical data model can become the basis of a physical data model and form the design of a database.
Logical data models should be based on the structures identified in a preceding conceptual data model, since this describes the semantics of the information context, which the logical model should also reflect. Even so, since the logical data model anticipates implementation on a specific computing system, the content of the logical data model is adjusted to achieve certain efficiencies.
The term 'Logical Data Model' is sometimes used as a synonym of 'domain model' or as an alternative to the domain model. While the two concepts are closely related, and have overlapping goals, a domain model is more focused on capturing the concepts in the problem domain rather than the structure of the data associated with that domain.
History
When ANSI first laid out the idea of a logical schema in 1975, the choices were hierarchical and network. The relational model – where data is described in terms of tables and columns – had just been recognized as a data organization theory but no software existed to support that approach. Since that time, an object-oriented approach to data modelling – where data is described in terms of classes, attributes, and associations – has also been introduced.
Logical data model topics
Reasons for building a logical data structure
Helps common understanding of business data elements and requirements
Provides foundation for designing a database
Facilitates avoidance of data redundancy and thus prevents data and business transaction inconsistency
Facilitates data re-use and sharing
Decreases development and maintenance time and cost
Confirms a logical process model and helps impact analysis.
Conceptual, logical and physical data model
A logical data model is sometimes incorrectly called a physical data model, which is not what the ANSI people had in mind. The physical design of a database involves deep use of particular database management technology. For example, a table/column design could be implemented on a collection of computers, located in different parts of the world. That is the domain of the physical model.
Conceptual, logical and physical data models are very different in their objectives, goals and content. Key differences are noted below.
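A minimal sketch of a logical data model expressed as Python dataclasses may make the distinction concrete; the entities (Customer, Order, OrderLine) and their attributes are illustrative assumptions, not drawn from the text above, and storage details such as indexes, tablespaces, or engines, which belong to a physical model, are deliberately absent:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Logical model: entities, attributes, and relationships only.
# A physical model would add indexes, partitioning, storage engines, etc.

@dataclass
class Customer:
    customer_id: int          # identifier attribute
    name: str
    email: str

@dataclass
class OrderLine:
    product_code: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    order_id: int
    customer: Customer        # relationship: each Order belongs to one Customer
    order_date: date
    lines: List[OrderLine] = field(default_factory=list)  # one-to-many relationship

# The same logical structure could later be implemented as relational tables,
# XML documents, or object classes without changing this description.
alice = Customer(customer_id=1, name="Alice", email="alice@example.com")
order = Order(order_id=100, customer=alice, order_date=date(2024, 1, 15))
order.lines.append(OrderLine(product_code="SKU-1", quantity=2, unit_price=9.99))
```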
See also
DODAF
Core architecture data model
Database design
Entity-relationship model
Database schema
Object-role modeling
FCO-IM
References
External links
Building a Logical Data Model By George Tillmann, DBMS, June 1995.
Data modeling | Logical schema | [
"Engineering"
] | 597 | [
"Data modeling",
"Data engineering"
] |
1,026,482 | https://en.wikipedia.org/wiki/IEEE%20802.10 | IEEE 802.10 is a former standard for security functions that could be used in both local area networks and metropolitan area networks based on IEEE 802 protocols.
802.10 specifies security association management and key management, as well as access control, data confidentiality and data integrity.
The IEEE 802.10 standards were withdrawn in January 2004 and this working group of the IEEE 802 is not currently active. Security for wireless networks was standardized in 802.11i.
The Cisco Inter-Switch Link (ISL) protocol for supporting VLANs on Ethernet and similar LAN technologies was based on IEEE 802.10; in this application 802.10 has largely been replaced by IEEE 802.1Q.
The standard, as developed, had eight parts:
a. Model, including security management
b. Secure Data Exchange (SDE) protocol
c. Key Management
d. (now incorporated into 'a')
e. SDE Over Ethernet 2.0
f. SDE Sublayer Management
g. SDE Security Labels
h. SDE PICS Conformance.
Parts b, e, f, g, and h are incorporated in IEEE Standard 802.10-1998.
External links
IEEE Std 802.10-1998
IEEE 802
Computer security standards | IEEE 802.10 | [
"Technology",
"Engineering"
] | 248 | [
"Cybersecurity engineering",
"Computer network stubs",
"Computer security standards",
"Computer standards",
"Computing stubs"
] |
1,026,521 | https://en.wikipedia.org/wiki/Pusher%20configuration | In aeronautical and naval engineering, pusher configuration is the term used to describe a drivetrain of air- or watercraft with propulsion device(s) after the engine(s). This is in contrast to the more conventional tractor configuration, which places them in front.
Though the term is most commonly applied to aircraft, its most ubiquitous propeller example is a common outboard motor for a small boat.
"Pusher configuration" describes the specific (propeller or ducted fan) thrust device attached to a craft, whether aerostats (airships) or aerodynes (aircraft, WIG, paramotor, rotorcraft) or other types such as hovercraft, airboats, and propeller-driven snowmobiles.
History
The rubber-powered "Planophore", designed by Alphonse Pénaud in 1871, was an early successful model aircraft with a pusher propeller.
Many early aircraft (especially biplanes) were "pushers", including the Wright Flyer (1903), the Santos-Dumont 14-bis (1906), the Voisin-Farman I (1907), and the Curtiss Model D used by Eugene Ely for the first ship landing on January 18, 1911. Henri Farman's pusher Farman III and its successors were so influential in Britain that pushers in general became known as the "Farman type". Other early pusher configurations were variations on this theme.
The classic "Farman" pusher had the propeller "mounted (just) behind the main lifting surface" with the engine fixed to the lower wing or between the wings, immediately forward of the propeller in a stub fuselage (that also contained the pilot) called a nacelle. The main difficulty with this type of pusher design was attaching the tail (empennage). This needed to be in the same general location as on a tractor aircraft, but its support structure had to avoid the propeller.
The earliest examples of pushers relied on a canard but this has serious aerodynamic implications that the early designers were unable to resolve. Typically, mounting the tail was done with a complex wire-braced framework that created a lot of drag. Well before the beginning of the First World War, this drag was recognized as just one of the factors that would ensure that a Farman-style pusher would have an inferior performance to an otherwise similar tractor type.
The U.S. Army banned pusher aircraft in late 1914 after several pilots died in crashes of aircraft of this type. From about 1912 onwards, the great majority of new U.S. landplane designs were tractor biplanes, with pushers of all types becoming regarded as old-fashioned on both sides of the Atlantic. However, new pusher designs continued to appear right up to the armistice, such as the Vickers Vampire, although few entered service after 1916.
At least up to the end of 1916, however, pushers (such as the Airco DH.2 fighter) were still favored as gun-carrying aircraft by the British Royal Flying Corps, because a forward-firing gun could be used without being obstructed by the arc of the propeller. With the successful introduction of Fokker's mechanism for synchronizing the firing of a machine gun with the blades of a moving propeller, followed quickly by the widespread adoption of synchronization gears by all the combatants in 1916 and 1917, the tractor configuration became almost universally favored, and pushers were reduced to the tiny minority of new aircraft designs that had a specific reason for using the arrangement.
Both the British and French continued to use pusher-configured bombers, though there was no clear preference either way until 1917. Such aircraft included (apart from the products of the Farman company) the Voisin bombers (3,200 built), the Vickers F.B.5 "Gunbus", and the Royal Aircraft Factory F.E.2; however, even these found themselves being shunted into training roles before disappearing entirely. Possibly the last fighter to use the Farman pusher configuration was the 1931 Vickers Type 161 COW gun fighter.
During the long eclipse of the configuration the use of pusher propellers continued in aircraft which derived a small benefit from the installation and could have been built as tractors. Biplane flying boats had for some time often been fitted with engines located above the fuselage to offer maximum clearance from the water, often driving pusher propellers to avoid spray and the hazards involved by keeping them well clear of the cockpit. The Supermarine Walrus was a late example of this layout.
The so-called push/pull layout, combining the tractor and pusher configurations—that is, with one or more propellers facing forward and one or more others facing back—was another idea that continues to be used from time to time as a means of reducing the asymmetric effects of an outboard engine failure, such as on the Farman F.222, but at the cost of a severely reduced efficiency on the rear propellers, which were often smaller and attached to lower-powered engines as a result.
By the late 1930s, the widespread adoption of all-metal stressed skin construction of aircraft meant, at least in theory, that the aerodynamic penalties that had limited the performance of pushers (and indeed any unconventional layout) were reduced; however, any improvement that boosts pusher performance also boosts the performance of conventional aircraft, and pushers remained a rarity in operational service; the gap was narrowed but never closed entirely.
During World War II, experiments were conducted with pusher fighters by most of the major powers. Difficulties remained, particularly that a pilot having to bail out of a pusher was liable to pass through the propeller arc. This meant that of all the types concerned, only the relatively conventional Swedish SAAB 21 of 1943 went into series production. Other problems related to the aerodynamics of canard layouts, which had been used on most of the pushers, proved more difficult to resolve. One of the world's first ejection seats was (per force) designed for this aircraft, which later re-emerged with a jet engine.
The largest pusher aircraft to fly was the Convair B-36 "Peacemaker" of 1946, which was also the largest bomber ever operated by the United States. It had six 28-cylinder Pratt & Whitney Wasp Major radial engines mounted in the wing, each driving a pusher propeller located behind the trailing edge of the wing, plus four jet engines.
Although the vast majority of propeller-driven aircraft continue to use a tractor configuration, there has been in recent years something of a revival of interest in pusher designs: in light homebuilt aircraft such as Burt Rutan's canard designs since 1975, ultralights such as the Quad City Challenger (1983), flexwings, paramotors, powered parachutes, and autogyros. The configuration is also often used for unmanned aerial vehicles, due to requirements for a forward fuselage free of any engine interference.
The Aero Dynamics Sparrow Hawk was another homebuilt aircraft constructed chiefly in the 1990s.
Configurations
Airships are the oldest type of pusher aircraft, going back to Frenchman Henri Giffard's pioneering airship of 1852.
Pusher aircraft have been built in many different configurations. In the vast majority of fixed-wing aircraft, the propeller or propellers are still located just behind the trailing edge of the "main lifting surface", or below the wing (paramotors) with the engine being located behind the crew position.
Conventional aircraft layout have a tail (empennage) for stabilization and control. The propeller may be close to the engine, as the usual direct drive:
The propeller may be ahead of the tail: inside the framework (Farman III), in line with the fuselage (RFB Fantrainer), between tail booms (Cessna Skymaster), above the fuselage on wing (Quad City Challenger), on nacelle or axial pod (Lake Buccaneer), or coaxially around rear fuselage (Gallaudet D-4).
The propeller may be located behind the vertical tail, under the horizontal tail (Prescott Pusher).
Engines and propellers may be located on wings (Piaggio P.180 Avanti) or on lateral pods (Embraer/FMA CBA 123 Vector).
The engine may be buried in a forward remote location, driving the propeller by drive shaft or belt:
The propeller may be located ahead of the tail, behind the wing (Eipper Quicksilver) or inside the airframe (Rhein Flugzeugbau RW 3 Multoplan).
The propeller may be located inside the tail, either cruciform or ducted fan (Marvelette).
The propeller may be located at the rear, behind a conventional tail (Bede BD-5).
The propeller may be located above the fuselage such as on many small flying boats (Lake Buccaneer)
In canard designs a smaller wing is sited forward of the aircraft's main wing. This class mainly uses a direct drive, either single-engine axial propeller, or twin engines with a symmetrical layout, or an in line layout (push-pull) as the Rutan Voyager.
In tailless aircraft such as Lippisch Delta 1 and Westland-Hill Pterodactyl types I and IV, horizontal stabilizers at the rear of the aircraft are absent. Flying wings like the Northrop YB-35 are tailless aircraft without a distinct fuselage. In these installations, the engines are either mounted in nacelles or the fuselage on tailless aircraft, or buried in the wing on flying wings, driving propellers behind the trailing edge of the wing, often by extension shaft.
Almost without exception, flexwing aircraft, paramotors, and powered parachutes use a pusher configuration.
Other craft with pusher configurations run on flat surfaces, land, water, snow, or ice. Thrust is provided by propellers and ducted fans, located to the rear of the vehicle. These include:
Hovercraft, lifted by an air cushion, such as the 58 passengers SR.N6.
Airboats, flat bottomed vessels planing on water.
Aerosledges, also known as the aerosleigh, propeller-driven sledge, or propeller-driven snowmobile.
In aircraft
Advantages
The drive shaft of a pusher engine is in compression in normal operation, which places less stress on it than being in tension in a tractor configuration.
Practical requirements
Placing the cockpit forward of the wing to balance the weight of the engine(s) aft improves visibility for the crew. In military aircraft, front armament could be used more easily on account of the gun not needing to synchronize itself with the propeller, although the risk that spent casings fly into the props at the back somewhat offset this advantage.
Aircraft where the engine is carried by, or very close to, the pilot (such as paramotors, powered parachutes, autogyros, and flexwing trikes) place the engine behind the pilot to minimize the danger to the pilot's arms and legs. These two factors mean that this configuration was widely used for early combat aircraft, and remains popular today among ultralight aircraft, unmanned aerial vehicles (UAVs), and radio-controlled airplanes.
Aerodynamics
A pusher may have a shorter fuselage and hence a reduction in both fuselage wetted area and weight.
In contrast to tractor layout, a pusher propeller at the end of the fuselage is stabilizing. A pusher needs less stabilizing vertical tail area and hence presents less weathercock effect; at takeoff roll, it is generally less sensitive to crosswind.
When there is no tail within the slipstream, unlike a tractor, there is no rotating propwash around the fuselage inducing a side force to the fin. At takeoff, a canard pusher pilot does not have to apply rudder input to balance this moment.
Efficiency can be gained by mounting a propeller behind the fuselage, because it re-energizes the boundary layer developed on the body, and reduces the form drag by keeping the flow attached to the fuselage. However, it is usually a minor gain compared to the airframe's detrimental effect on propeller efficiency.
Wing profile drag may be reduced due to the absence of prop-wash over any section of the wing.
Safety
The engine is mounted behind the crew and passenger compartments, so fuel oil and coolant leaks will vent behind the aircraft, and any engine fire will be directed behind the aircraft. Similarly, propeller failure is less likely to directly endanger the crew.
A pusher ducted fan system offers a supplementary safety feature attributed to enclosing the rotating fan in the duct, therefore making it an attractive option for various advanced UAV configurations or for small/personal air vehicles or for aircraft models.
Disadvantages
Structural and weight considerations
A pusher design with an empennage behind the propeller is structurally more complex than a similar tractor type. The increased weight and drag degrades performance compared with a similar tractor type. Modern aerodynamic knowledge and construction methods may reduce but never eliminate the difference. A remote or buried engine requires a drive shaft and associated bearings, supports, and torsional vibration control, and adds weight and complexity.
Center of gravity and landing gear considerations
To maintain a safe center of gravity (CG) position, there is a limit to how far aft an engine can be installed. The forward location of the crew may balance the engine weight and will help determine the CG. As the CG location must be kept within defined limits for safe operation load distribution must be evaluated before each flight.
Due to a generally high thrust line needed for propeller ground clearance, negative (down) pitching moments, and in some cases the absence of prop-wash over the tail, a higher speed and a longer roll may be required for takeoff compared to tractor aircraft. The Rutan answer to this problem is to lower the nose of the aircraft at rest such that the empty center of gravity is then ahead of the main wheels. In autogyros, a high thrust line results in a control hazard known as power push-over.
Aerodynamic considerations
Due to the generally-high thrust line to ensure ground clearance, a low-wing pusher layout may suffer power-change-induced pitch changes, also known as pitch/power coupling. Pusher seaplanes with especially high thrust lines and tailwheels may find the vertical tail masked from the airflow, severely reducing control at low speeds, such as when taxiing. The absence of prop-wash over the wing reduces the lift and increases takeoff roll length. Pusher engines mounted on the wing may obstruct sections of the wing trailing edge, reducing the total width available for control surfaces such as flaps and ailerons. When a propeller is mounted in front of the tail, changes in engine power alter the airflow over the tail and can give strong pitch or yaw changes.
Propeller ground clearance and foreign object damage
Due to the pitch rotation at takeoff, the propeller diameter may have to be reduced (with a loss of efficiency) and/or landing gear made longer and heavier. Many pushers have ventral fins or skids beneath the propeller to prevent the propeller from striking the ground, at an added cost in drag and weight. On tailless pushers such as the Rutan Long-EZ, the propeller arc is very close to the ground while flying nose-high during takeoff or landing. Objects on the ground kicked up by the wheels can pass through the propeller disc, causing damage or accelerated wear to the blades; in extreme cases, the blades may strike the ground.
When an airplane flies in icing conditions, ice can accumulate on the wings. If an airplane with wing-mounted pusher engines experiences icing, the props will ingest shredded chunks of ice, endangering the propeller blades and parts of the airframe that can be struck by ice violently redirected by the props. In early pusher combat aircraft, spent ammunition casings caused similar problems, and devices for collecting them had to be devised.
Propeller efficiency and noise
The propeller passes through the fuselage wake, wing wake, and other flight surface downwashes—moving asymmetrically through a disk of irregular airspeed. This reduces propeller efficiency and causes vibration inducing structural propeller fatigue and noise.
Prop efficiency is usually at least 2–5% less and in some cases more than 15% less than an equivalent tractor installation. Full-scale wind tunnel investigation of the canard Rutan VariEze showed a propeller efficiency of 0.75 compared to 0.85 for a tractor configuration, a loss of 12%. Pusher props are noisy, and cabin noise may be higher than tractor equivalent (Cessna XMC vs Cessna 152). Propeller noise may increase because the engine exhaust flows through the props. This effect may be particularly pronounced when using turboprop engines due to the large volume of exhaust they produce.
Engine cooling and exhaust
Power-plant cooling design is more complex in pusher engines than for the tractor configuration, where the propeller forces air over the engine or radiator. Some aviation engines have experienced cooling problems when used as pushers. To counter this, auxiliary fans may be installed, adding additional weight. The engine of a pusher exhausts forward of the propeller, and in this case, the exhaust may contribute to corrosion or other damage to the propeller. This is usually minimal, and may be mainly visible in the form of soot stains on the blades.
Safety
Propeller
In case of propeller/tail proximity, a blade break can hit the tail or produce destructive vibrations, leading to a loss of control.
Crew members risk striking the propeller while attempting to bail out of a single-engined airplane with a pusher prop. At least one early ejector seat was designed specifically to counter this risk. Some modern light aircraft include a parachute system that saves the entire aircraft, thus averting the need to bail out.
Engine
Engine location in the pusher configuration might endanger the aircraft's occupants in a crash or crash-landing in which engine momentum projects through the cabin. For example, with the engine placed directly behind the cabin, during a nose-on impact, the engine momentum may carry the engine through the firewall and cabin, and might injure some cabin occupants.
Aircraft loading
Spinning propellers are always a hazard during ground operations, such as loading or boarding the airplane. The tractor configuration leaves the rear of the plane as a relatively safe working area, while a pusher is dangerous to approach from behind, since a spinning propeller may suck in objects and people near the front of its disc, with fatal results for both the plane and the people drawn in. Even more hazardous are unloading operations, especially mid-air operations such as dropping supplies by parachute or skydiving, which are next to impossible with a pusher-configuration airplane, especially if the propellers are mounted on the fuselage or sponsons.
See also
Ducted fan
List of pusher aircraft by configuration
List of pusher aircraft by configuration and date
References
Notes
Citations
Sources
External links
Aircraft configurations | Pusher configuration | [
"Engineering"
] | 3,874 | [
"Aircraft configurations",
"Aerospace engineering"
] |
1,026,522 | https://en.wikipedia.org/wiki/Boltzmann%20equation | The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872.
The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element ) centered at the position , and has momentum nearly equal to a given momentum vector (thus occupying a very small region of momentum space ), at an instant of time.
The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation.
The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.
Overview
The phase space and density function
The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component px, py, pz. The entire space is 6-dimensional: a point in this space is (r, p) = (x, y, z, px, py, pz), and each coordinate is parameterized by time t. The small volume ("differential volume element") is written
\[ d^3\mathbf{r}\, d^3\mathbf{p} = dx\, dy\, dz\, dp_x\, dp_y\, dp_z . \]
Since the probability of N molecules, which all have r and p within d³r d³p, is in question, at the heart of the equation is a quantity f which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This is a probability density function: f(r, p, t), defined so that
\[ dN = f(\mathbf{r}, \mathbf{p}, t)\, d^3\mathbf{r}\, d^3\mathbf{p} \]
is the number of molecules which all have positions lying within a volume element d³r about r and momenta lying within a momentum-space element d³p about p, at time t. Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region:
\[ N = \iiint\limits_{\text{momenta}} \iiint\limits_{\text{positions}} f(\mathbf{r}, \mathbf{p}, t)\, d^3\mathbf{r}\, d^3\mathbf{p}, \]
which is a 6-fold integral. While f is associated with a number of particles, the phase space is for one particle (not all of them, which is usually the case with deterministic many-body systems), since only one r and p is in question. It is not part of the analysis to use r1, p1 for particle 1, r2, p2 for particle 2, etc. up to rN, pN for particle N.
It is assumed the particles in the system are identical (so each has an identical mass ). For a mixture of more than one chemical species, one distribution is needed for each, see below.
Principal statement
The general equation can then be written as
\[ \frac{\partial f}{\partial t} = \left(\frac{\partial f}{\partial t}\right)_{\text{force}} + \left(\frac{\partial f}{\partial t}\right)_{\text{diff}} + \left(\frac{\partial f}{\partial t}\right)_{\text{coll}}, \]
where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.
Note that some authors use the particle velocity v instead of momentum p; they are related in the definition of momentum by p = mv.
The force and diffusion terms
Consider particles described by , each experiencing an external force not due to other particles (see the collision term for the latter treatment).
Suppose at time some number of particles all have position within element and momentum within . If a force instantly acts on each particle, then at time their position will be and momentum . Then, in the absence of collisions, must satisfy
Note that we have used the fact that the phase space volume element is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume changes, so
where is the total change in . Dividing () by and taking the limits and , we have
The total differential of f is:
\[ df = \frac{\partial f}{\partial t}\, dt + \nabla f \cdot d\mathbf{r} + \frac{\partial f}{\partial \mathbf{p}} \cdot d\mathbf{p}, \]
where ∇ is the gradient operator, · is the dot product,
\[ \frac{\partial f}{\partial \mathbf{p}} = \hat{\mathbf{e}}_x \frac{\partial f}{\partial p_x} + \hat{\mathbf{e}}_y \frac{\partial f}{\partial p_y} + \hat{\mathbf{e}}_z \frac{\partial f}{\partial p_z} \]
is a shorthand for the momentum analogue of ∇, and êx, êy, êz are Cartesian unit vectors.
Final statement
Dividing the total differential by dt and substituting into the relation above gives:
\[ \frac{\partial f}{\partial t} + \frac{\mathbf{p}}{m}\cdot\nabla f + \mathbf{F}\cdot\frac{\partial f}{\partial \mathbf{p}} = \left(\frac{\partial f}{\partial t}\right)_{\text{coll}}. \]
In this context, F(r, t) is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right-hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called the Vlasov equation.
This equation is more useful than the principal one above, yet still incomplete, since cannot be solved unless the collision term in is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions.
The collision term (Stosszahlansatz) and molecular chaos
Two-body collision term
A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stosszahlansatz" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:
\[ \left(\frac{\partial f}{\partial t}\right)_{\text{coll}} = \iint g\, I(g, \Omega)\, \big[ f(\mathbf{r}, \mathbf{p}'_A, t)\, f(\mathbf{r}, \mathbf{p}'_B, t) - f(\mathbf{r}, \mathbf{p}_A, t)\, f(\mathbf{r}, \mathbf{p}_B, t) \big]\, d\Omega\, d^3\mathbf{p}_B, \]
where pA and pB are the momenta of any two particles (labeled as A and B for convenience) before a collision, p′A and p′B are the momenta after the collision,
g = |pB − pA| is the magnitude of the relative momenta (see relative velocity for more on this concept), and I(g, Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turn through an angle θ into the element of the solid angle dΩ, due to the collision.
Simplifications to the collision term
Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation is therefore modified to the BGK form:
\[ \frac{\partial f}{\partial t} + \frac{\mathbf{p}}{m}\cdot\nabla f + \mathbf{F}\cdot\frac{\partial f}{\partial \mathbf{p}} = \nu\,(f_0 - f), \]
where ν is the molecular collision frequency, and f0 is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called the "relaxation time approximation".
General equation (for a mixture)
For a mixture of chemical species labelled by indices the equation for species is
where , and the collision term is
where , the magnitude of the relative momenta is
and is the differential cross-section, as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase-space element.
Applications and extensions
Conservation equations
The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy. For a fluid consisting of only one kind of particle, the number density is given by
\[ n = \int f\, d^3\mathbf{p}. \]
The average value of any function A is
\[ \langle A \rangle = \frac{1}{n} \int A\, f\, d^3\mathbf{p}. \]
Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus and , where is the particle velocity vector. Define as some function of momentum only, whose total value is conserved in a collision. Assume also that the force is a function of position only, and that f is zero for . Multiplying the Boltzmann equation by A and integrating over momentum yields four terms, which, using integration by parts, can be expressed as
where the last term is zero, since is conserved in a collision. The values of correspond to moments of velocity (and momentum , as they are linearly dependent).
Zeroth moment
Letting A = m, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation:
\[ \frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_j}(\rho V_j) = 0, \]
where ρ = mn is the mass density, and Vi = ⟨vi⟩ is the average fluid velocity.
First moment
Letting , the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation:
where is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure).
Second moment
Letting , the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation:
where is the kinetic thermal energy density, and is the heat flux vector.
Hamiltonian mechanics
In Hamiltonian mechanics, the Boltzmann equation is often written more generally as
\[ \hat{\mathbf{L}}[f] = \mathbf{C}[f], \]
where \(\hat{\mathbf{L}}\) is the Liouville operator (there is an inconsistent definition between the Liouville operator as defined here and the one in the article linked) describing the evolution of a phase space volume and \(\mathbf{C}\) is the collision operator. The non-relativistic form of \(\hat{\mathbf{L}}\) is
\[ \hat{\mathbf{L}}_{\mathrm{NR}} = \frac{\partial}{\partial t} + \frac{\mathbf{p}}{m}\cdot\nabla + \mathbf{F}\cdot\frac{\partial}{\partial \mathbf{p}}. \]
Quantum theory and violation of particle number conservation
It is possible to write down relativistic quantum Boltzmann equations for relativistic quantum systems in which the number of particles is not conserved in collisions. This has several applications in physical cosmology, including the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis. It is not a priori clear that the state of a quantum system can be characterized by a classical phase space density f. However, for a wide class of applications a well-defined generalization of f exists which is the solution of an effective Boltzmann equation that can be derived from first principles of quantum field theory.
General relativity and astronomy
The Boltzmann equation is of use in galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe.
Its generalization in general relativity is
\[ \hat{\mathbf{L}}_{\mathrm{GR}} = p^\alpha \frac{\partial}{\partial x^\alpha} - \Gamma^{\alpha}{}_{\beta\gamma}\, p^\beta p^\gamma \frac{\partial}{\partial p^\alpha}, \]
where \(\Gamma^{\alpha}{}_{\beta\gamma}\) is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant phase space as opposed to fully contravariant phase space.
In physical cosmology the fully covariant approach has been used to study the cosmic microwave background radiation. More generically the study of processes in the early universe often attempt to take into account the effects of quantum mechanics and general relativity. In the very dense medium formed by the primordial plasma after the Big Bang, particles are continuously created and annihilated. In such an environment quantum coherence and the spatial extension of the wavefunction can affect the dynamics, making it questionable whether the classical phase space distribution f that appears in the Boltzmann equation is suitable to describe the system. In many cases it is, however, possible to derive an effective Boltzmann equation for a generalized distribution function from first principles of quantum field theory. This includes the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis.
Solving the equation
Exact solutions to the Boltzmann equations have been proven to exist in some cases; this analytical approach provides insight, but is not generally usable in practical problems.
Instead, numerical methods (including finite elements and lattice Boltzmann methods) are generally used to find approximate solutions to the various forms of the Boltzmann equation. Example applications range from hypersonic aerodynamics in rarefied gas flows to plasma flows. An application of the Boltzmann equation in electrodynamics is the calculation of the electrical conductivity - the result is in leading order identical with the semiclassical result.
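A minimal numerical sketch of the BGK relaxation-time approximation mentioned above, assuming a spatially homogeneous, force-free gas; the velocity grid, collision frequency, and target-Maxwellian parameters are arbitrary demonstration values, not a production solver:

```python
import numpy as np

# Relax a 1-D, spatially homogeneous, force-free velocity distribution toward a
# Maxwellian using the BGK collision term  df/dt = nu * (f0 - f).

m, kB, T = 1.0, 1.0, 1.0            # particle mass, Boltzmann constant, temperature
nu = 2.0                            # assumed collision frequency
v = np.linspace(-6.0, 6.0, 241)     # velocity grid
dv = v[1] - v[0]

# Initial non-equilibrium distribution: two displaced bumps.
f = np.exp(-(v - 2.0) ** 2) + 0.5 * np.exp(-(v + 3.0) ** 2)
n = f.sum() * dv                    # number density carried by f

# Target Maxwellian. For simplicity it matches only the density (zero mean
# velocity, fixed T), so momentum and energy are not conserved in this toy
# model; a full BGK target would match density, mean velocity and temperature.
f0 = n * np.sqrt(m / (2 * np.pi * kB * T)) * np.exp(-m * v ** 2 / (2 * kB * T))

dt, steps = 0.01, 500
for _ in range(steps):
    f += dt * nu * (f0 - f)         # explicit Euler step of the BGK relaxation

print(f"density drift:             {abs(f.sum() * dv - n):.2e}")
print(f"max deviation from target: {np.max(np.abs(f - f0)):.3e}")
```

As expected for this approximation, the deviation from the target Maxwellian decays roughly as exp(−νt) while the density remains essentially constant.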
Close to local equilibrium, solution of the Boltzmann equation can be represented by an asymptotic expansion in powers of Knudsen number (the Chapman–Enskog expansion). The first two terms of this expansion give the Euler equations and the Navier–Stokes equations. The higher terms have singularities. The problem of developing mathematically the limiting processes, which lead from the atomistic view (represented by Boltzmann's equation) to the laws of motion of continua, is an important part of Hilbert's sixth problem.
Limitations and further uses of the Boltzmann equation
The Boltzmann equation is valid only under several assumptions. For instance, the particles are assumed to be pointlike, i.e. without having a finite size. There exists a generalization of the Boltzmann equation that is called the Enskog equation. The collision term is modified in Enskog equations such that particles have a finite size, for example they can be modelled as spheres having a fixed radius.
No further degrees of freedom besides translational motion are assumed for the particles. If there are internal degrees of freedom, the Boltzmann equation has to be generalized and might possess inelastic collisions.
Many real fluids like liquids or dense gases have besides the features mentioned above more complex forms of collisions, there will be not only binary, but also ternary and higher order collisions. These must be derived by using the BBGKY hierarchy.
Boltzmann-like equations are also used for the movement of cells. Since cells are composite particles that carry internal degrees of freedom, the corresponding generalized Boltzmann equations must have inelastic collision integrals. Such equations can describe invasions of cancer cells in tissue, morphogenesis, and chemotaxis-related effects.
See also
Vlasov equation
The Vlasov–Poisson equation
Landau kinetic equation
Fokker–Planck equation
Williams–Boltzmann equation
Derivation of Navier–Stokes equation from LBE
Derivation of Jeans equation from BE
Jeans's theorem
H-theorem
Notes
References
. Very inexpensive introduction to the modern framework (starting from a formal deduction from Liouville and the Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy (BBGKY) in which the Boltzmann equation is placed). Most statistical mechanics textbooks like Huang still treat the topic using Boltzmann's original arguments. To derive the equation, these books use a heuristic explanation that does not bring out the range of validity and the characteristic assumptions that distinguish Boltzmann's from other transport equations like Fokker–Planck or Landau equations.
External links
The Boltzmann Transport Equation by Franz Vesely
Boltzmann gaseous behaviors solved
Eponymous equations of physics
Partial differential equations
Statistical mechanics
Transport phenomena
Equation
1872 in science
1872 in Germany
Thermodynamic equations | Boltzmann equation | [
"Physics",
"Chemistry",
"Engineering"
] | 3,191 | [
"Transport phenomena",
"Physical phenomena",
"Thermodynamic equations",
"Equations of physics",
"Chemical engineering",
"Eponymous equations of physics",
"Thermodynamics",
"Statistical mechanics"
] |
1,026,606 | https://en.wikipedia.org/wiki/Sweet%20crude%20oil | Sweet crude oil is a type of petroleum. The New York Mercantile Exchange designates petroleum with less than 0.5% sulfur as sweet.
Petroleum containing higher levels of sulfur is called sour crude oil.
Sweet crude oil contains small amounts of hydrogen sulfide and carbon dioxide. High-quality, low-sulfur crude oil is commonly used for processing into gasoline and is in high demand, particularly in industrialized nations. Light sweet crude oil is the most sought-after version of crude oil as it contains a disproportionately large fraction that is directly processed (fractionation) into gasoline (naphtha), kerosene, and high-quality diesel (gas oil).
The term sweet originates from the fact that a low level of sulfur provides the oil with a relatively sweet taste and pleasant smell, compared to sulfurous oil. Nineteenth-century prospectors would taste and smell small quantities of oil to determine its quality.
Producers
Producers of sweet crude oil include:
Asia/Pacific:
The Far East/Oceania:
Australia
Brunei
China
India
Indonesia
Malaysia
New Zealand
Vietnam
The Middle East
Iran
Iraq
Saudi Arabia
United Arab Emirates
North America:
Canada
United States
Europe:
Russia
Azerbaijan
The North Sea area:
Norway
United Kingdom (Brent Crude)
England
Scotland
Africa:
North Africa:
Algeria
Libya
Western Africa
Nigeria
Ghana
Central Africa
Angola
Democratic Republic of the Congo
Republic of the Congo
South Sudan
South America:
The Guianas:
Suriname, Guyana Basin
Andean Region
Colombia
Peru
Southern Cone
Argentina
Brazil
Pricing
The term "price of oil", as used in the U.S. media, generally means the cost per barrel (42 U.S. gallons) of West Texas Intermediate Crude, to be delivered to Cushing, Oklahoma during the upcoming month. This information is available from NYMEX or the U.S. Energy Information Administration.
See also
Petroleum Classification
Light crude oil
Sour crude oil
Mazut
List of crude oil products
Oil price increases since 2003
References
External links
EIA oil prices
NYMEX website
Petroleum | Sweet crude oil | [
"Chemistry"
] | 400 | [
"Petroleum",
"Chemical mixtures"
] |
10,971,731 | https://en.wikipedia.org/wiki/Step%20%28unit%29 | A step (, ) was a Roman unit of length equal to 2½ Roman feet () or ½ Roman pace (). Following its standardization under Agrippa, one step was roughly equivalent to .
The Byzantine pace (βῆμα, bḗma) was an adaptation of the Roman step, a distance of 2½ Greek feet.
Similarly, the US customary pace is a distance of 2½ feet (30 inches, or 0.762 m).
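A short worked conversion, assuming the commonly cited value of 0.296 m for the post-Agrippa Roman foot (the US customary figures follow directly from 2½ ft = 30 in):

\begin{align*}
1\ \text{step (gradus)} &= 2.5\ \text{Roman feet} \approx 2.5 \times 0.296\ \text{m} = 0.74\ \text{m},\\
1\ \text{US customary pace} &= 2.5\ \text{ft} = 30\ \text{in} = 30 \times 2.54\ \text{cm} = 76.2\ \text{cm} = 0.762\ \text{m}.
\end{align*}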
See also
Pace (unit)
Roman and Byzantine units
US customary units
References
Units of length
Ancient Roman units of measurement | Step (unit) | [
"Mathematics"
] | 105 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
10,972,219 | https://en.wikipedia.org/wiki/Gun%20chronograph | A ballistic chronograph or gun chronograph is a measuring instrument used to measure the velocity of a projectile in flight, typically fired from a gun or other firearm. The instrument is often useful for tasks such as gauging the utility of a firearm or safety of non-lethal projectiles fired from items such as a paintball gun or BB gun.
History
Benjamin Robins (1707–1751) invented the ballistic pendulum that measures the momentum of the projectile fired by a gun. Dividing the momentum by the projectile mass gives the velocity. Robins published his results as New Principles of Gunnery in 1742. The ballistic pendulum could make only one measurement per firing because the device catches the projectile. The gun's accuracy also limited how far down range a measurement could be made.
Alessandro Vittorio Papacino d'Antoni published results in 1765 using a wheel chronometer. This used a horizontal spinning wheel with a vertical paper mounted on the rim. The bullet was fired across the diameter of the wheel so that it pierced the paper on both sides, and the angular difference along with the rotation speed of the wheel was used to compute the bullet velocity.
An early chronograph that measures velocity directly was built in 1804 by Grobert, a colonel in the French Army. This used a rapidly rotating axle with two disks mounted on it about 13 feet apart. The bullet was fired parallel to the axle, and the angular displacement of the holes in the two disks, together with the rotational speed of the axle, yielded the bullet velocity.
describes Bashforth's chronograph that could make many measurements over long distances:
In 1865 the Rev. Francis Bashforth, M. A., who had then been recently appointed Professor of Applied Mathematics to the advanced class of artillery officers at Woolwich, began a series of experiments for determining the resistance of the air to the motion of both spherical and oblong projectiles, which he continued from time to time until 1880. As the instruments then in use for measuring velocities were incapable of giving the times occupied by a shot in passing over a series of successive equal spaces, he began his labors by inventing and constructing a chronograph to accomplish this object, which was tried late in 1865 in Woolwich Marshes, with ten screens, and with perfect success.
The Bashforth screens were made with several threads and series connected switches. A projectile passing through a screen would break one or more threads, the broken thread caused a switch to momentarily (about 20 ms) interrupt a current as the switch arm moved from its weighted position to its unweighted position, and the momentary interruption would be recorded on a paper chart.
The first electronic ballistic chronograph was invented by Kiryako ("Jerry") Arvanetakis in the 1950s. As consulting engineer under contract by NACA (later NASA), he was asked to find a way to accurately measure the velocity of various projectiles fired at hyper-velocities into a variety of engineered materials in anticipation of crewed space flight. His first design was an open rectangular frame of square aluminum tubing with a screen of fine copper wire at both ends. Breaking the first wire started charging a capacitor, breaking the second wire stopped it. Measuring the accumulated voltage and knowing the rate of charge the elapsed time could be accurately calculated.
Modern chronograph
The modern chronograph consists of two sensing areas framed by rods topped by diffusing screens or artificial lighting above (or below), along with optical sensors that detect the passage of the bullet. The time it takes the bullet to travel the distance between the sensors is measured electronically, and from this the velocity is calculated and displayed.
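A minimal sketch of the timing arithmetic behind the two-screen chronograph and the d'Antoni-style spinning-wheel chronometer described above; the sensor spacing, timestamps, wheel size, rotation speed, and angles are made-up example values, not data from any particular instrument:

```python
def velocity_from_screens(spacing_m: float, t_start_s: float, t_stop_s: float) -> float:
    """Modern optical chronograph: velocity = sensor spacing / elapsed time."""
    return spacing_m / (t_stop_s - t_start_s)

def velocity_from_spinning_wheel(diameter_m: float, rpm: float, angle_deg: float) -> float:
    """d'Antoni-style wheel chronometer: the bullet crosses the wheel's diameter
    while the wheel turns through angle_deg at the given rotation speed."""
    seconds_per_rev = 60.0 / rpm
    elapsed = seconds_per_rev * (angle_deg / 360.0)
    return diameter_m / elapsed

# Screens 0.30 m apart, bullet trips them 0.001 s apart -> 300 m/s.
print(f"{velocity_from_screens(0.30, 0.000000, 0.001000):.1f} m/s")

# 1.0 m wheel spinning at 600 rpm, bullet holes offset by 8.0 degrees -> 450 m/s.
print(f"{velocity_from_spinning_wheel(1.0, 600.0, 8.0):.1f} m/s")
```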
Advanced ballistic chronographs include a type employing Doppler radar to measure bullets in free flight at various distances; another is a device mounted at the end of a barrel, which uses magnetic field sensors for the measurement of a bullet's velocity as it exits the muzzle.
See also
Aberdeen chronograph
References
Further reading
Horology
Firearm terminology
Ballistics | Gun chronograph | [
"Physics"
] | 823 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Horology",
"Time",
"Spacetime",
"Ballistics"
] |
10,972,685 | https://en.wikipedia.org/wiki/N-Acetylaspartylglutamic%20acid | N-Acetylaspartylglutamic acid (''N''-acetylaspartylglutamate or NAAG) is a peptide neurotransmitter and the third-most-prevalent neurotransmitter in the mammalian nervous system. NAAG consists of N-acetylaspartic acid (NAA) and glutamic acid coupled via a peptide bond.
NAAG was discovered as a nervous system-specific peptide in 1965 by Curatolo and colleagues but initially disregarded as a neurotransmitter and not extensively studied. However it meets the criteria for a neurotransmitter, including being concentrated in neurons, packed in synaptic vesicles, released in a calcium-dependent manner, and hydrolyzed in the synaptic space by enzymatic activity.
NAAG activates a specific receptor, the metabotropic glutamate receptor type 3. It is synthesized enzymatically from its two precursors and catabolized by NAAG peptidases in the synapse. The inhibition of the latter enzymes has potentially important therapeutic effects in animal models of several neurologic conditions and disorders.
Under the INN spaglumic acid, NAAG is used as an antiallergic medication in eye drops and nasal preparations.
Research history
After its discovery in 1965, NAAG was disregarded as a neurotransmitter for several reasons. First, neuropeptides were not considered neurotransmitters until years later. Second, it did not seem to directly affect membrane potential, so it was classified as a metabolic intermediate. The importance of brain peptides became clearer with the discovery of endogenous opioids. Whereas the ability of NAAG to interact with NMDA receptors in a manner relevant to physiology is controversial, its primary receptor was long believed to be the mGluR3. Its interaction with the mGluR3 causes an activation of G proteins that reduce the concentration of the second messengers cAMP and cGMP in both the nerve cells and glia. This can lead to several changes in the cellular activity, including regulation of gene expression, reduction in the release of transmitter, and inhibition of long-term potentiation. However, stimulation of the mGluR3 by NAAG has been questioned after significant glutamate contamination was found in commercially available NAAG.
According to one publication, NAAG can be differentiated from NAA in vivo by MR spectroscopy at 3 Tesla.
Biosynthesis
NAAG synthetase activity mediates the biosynthesis of NAAG from glutamate and NAA, but little is known about the mechanism or regulation of this enzyme, and no NAAG synthetase activity has been isolated in cell-free preparations. Since other neuropeptides and nearly all vertebrate peptides are synthesized by post-translational processing, NAAG synthetase activity is relatively unique. As with NAA, the synthesis of NAAG is primarily restricted to neurons, although glial cells also contain and synthesize this peptide. In vitro, NAAG synthesis appears to be regulated by the availability of its precursor, NAA. In addition, during differentiation of neuroblastoma cells, it has been shown that a protein kinase A (PKA) activator will increase the quantity of NAAG, while a protein kinase C (PKC) activator will decrease its concentration. This finding suggests that PKA and PKC have opposing regulatory effects on the NAAG synthetase enzyme.
Catabolism
NAAG is catabolized via NAAG peptidase activity. Two enzymes with NAAG peptidase activity have been cloned, glutamate carboxypeptidase II and glutamate carboxypeptidase III. These enzymes mediate the hydrolysis of NAAG to NAA and glutamate. Their inhibition can produce therapeutic benefits. Two main types of inhibitors of this enzyme are known: compounds related to 2-(phosphonomethyl)pentanedioic acid (2-PMPA) and urea-based analogs of NAAG, including ZJ43, ZJ17, and ZJ11. In rat models, ZJ43 and 2-PMPA reduce perception of inflammatory and neuropathic pain when administered systemically, intracerebrally, or locally, suggesting that NAAG modulates neurotransmission in pain circuits via mGlu3 receptors. The inhibition of NAAG hydrolysis increases the concentration of NAAG in the synaptic space analogous to the effects of MAOIs in increasing the concentration of serotonin. This elevated NAAG gives greater activation of presynaptic mGluR3 receptors, which decrease release of transmitter (glutamate) in the pain signaling pathways of the spinal cord and brain. In the case of traumatic brain injury, the injection of a NAAG peptidase inhibitor reduces neuron and astrocyte death in the hippocampus nearest the site of the injury. In a mouse model of amyotrophic lateral sclerosis (ALS), the chronic inhibition of NAAG peptidase activity delayed the onset of ALS symptoms and slowed the progress of the neuronal death. To model schizophrenia, animals were injected with phencyclidine (PCP) and, therefore, exhibited symptoms of the disorder, such as social withdrawal and motor responses. Upon injection with ZJ43, these behaviors were decreased, suggesting that an increase in NAAG in the synapse — and its subsequent activation of mGluR3 receptors — has potential as a co-therapy for schizophrenia. In these cases, NAAG peptidase inhibition reduces the adverse effects in these disorders. Future research focuses on the role of NAAG in pain perception, brain injury, and schizophrenia while developing NAAG peptidase inhibitors with even greater ability to cross the blood–brain barrier.[16][17]
See also
Aspartate
Glutamate
References
16. Neale JH, Olszewski R. (2019) "A role for N-acetylaspartylglutamate (NAAG) and mGluR3 in cognition" Neurobiol Learn Mem. 2019 Feb;158:9-13. doi: 10.1016/j.nlm.2019.01.006. PMID: 30630041.
17. Neale JH, Yamamoto T. (2020) "N-acetylaspartylglutamate (NAAG) and glutamate carboxypeptidase II: An abundant peptide neurotransmitter-enzyme system with multiple clinical applications" Prog Neurobiol.184:101722. doi: 10.1016/j.pneurobio.2019.101722. PMID: 31730793
Neuropeptides
Neurotransmitters | N-Acetylaspartylglutamic acid | [
"Chemistry"
] | 1,453 | [
"Neurochemistry",
"Neurotransmitters"
] |
10,974,112 | https://en.wikipedia.org/wiki/Alternatives%20to%20car%20use | Established alternatives to car use include cycling, walking, kick scooters, rollerblading, skateboarding, twikes and (electric or internal combustion) motorcycles. Other alternatives are public transport vehicles (buses, guided buses, trolleybuses, trains, subways, monorails, tramways).
History
Prior to the popularity of car use which dominated motorised transport (and consequently urban planning) from around the 1950s onwards, several transportation modes were used. Pedestrianism for both short and long distances was used, but also travel by horse especially for long distances. Trams, especially powered trams, achieved widespread popularity in the 19th century. Carriages, used for centuries, are still used but mainly for tourism.
Public transport
The public transport with the highest modal share worldwide is travelling by bus followed by travelling by rail due to infrastructure cost. A pedestrian form of public transport is a walking bus predominantly used by schools. An attempt to transform private transport by bicycle into public transport has been bicycle sharing schemes. Effectively they are renting access to a fleet.
Bicycle-sharing systems have been implemented in over 1000 cities worldwide, and are especially common in many European and Chinese cities of all sizes. Similar programs have been implemented across the United States as well, including large cities like Washington, D.C., and New York City, as well as smaller cities like Buffalo, New York and Fort Collins, Colorado.
Personal rapid transit is a scheme that has been discussed, in which small, automated vehicles would run on special elevated tracks spaced within walking distance throughout a city, and could provide direct service to a chosen station without stops. However, despite several concepts existing for decades personal rapid transit has failed to gain significant ground and several prototypes and experimental systems have been dismantled as failures.
Private transport
Unmotorised
The private transport with the highest modal share, worldwide that is unmotorised, is pedestrianism followed by cycling.
Motorised
Another possibility is forms of personal transport such as the electric skateboard/mountainboard, electric kick scooter, or personal transporters, such as self-balancing unicycles (i.e. Segway PT and others), which could serve as an alternative to cars and bicycles if they prove to be socially accepted.
Electric or internal combustion motorcycles (which also include scooters) are also an option. Internal combustion motorcycles do, however, create some degree of local air pollution. That said, the degree of local air pollution varies considerably depending on which fuel (i.e. gasoline, LPG, CNG/biogas, hydrogen) is injected into the internal combustion engine. The fuel can be freely chosen, and existing motorcycle engines can be converted to run on these fuels. Hydrogen, for instance, is described as being "near-emissionless" when burned in an internal combustion engine.
Also, velomobiles exist (including electric assisted versions), which compared to regular bicycles have the benefit of being enclosed (hence protecting the driver from the weather), and the potential of being motorized, which can allow one to travel greater distances (at a faster speed).
Benefits
All of these alternative modes of transport pollute less than at least the petroleum-powered car and contribute to transport sustainability. They also provide other significant benefits such as reduced traffic-related injuries and fatalities, reduced space requirements, both for parking and driving, reduced resource usage and pollution related to both manufacturing and driving, increased social inclusion, increased economic and social equity, and more livable streets and cities. Some alternative modes of transportation, especially cycling, also provide regular, low-impact exercise, tailored to the needs of human bodies. Public transport is also linked to increased exercise, because they are combined in a multi-modal transport chain that includes walking or cycling.
A study that assessed the costs and benefits of introducing Low Traffic Neighbourhoods in London found that the benefits exceed the costs by approximately 100 times over the first 20 years, with the difference growing over time. The health benefits amount to "£4,800 per local adult" but generally become apparent only 1–2 years after a scheme is introduced.
See also
Bicycle parking station
Bike lane
Bus rapid transit
Car costs
Car dependency
Car-free movement
Cargo bike
Cycling infrastructure
Effects of the car on societies
Electric bicycle
Environmental effects of transport
Environmental movement
Environmentalism
Green transport hierarchy
Green vehicle
Human–electric hybrid vehicle
List of bicycle-sharing systems
List of emerging technologies – Transport
Manufacturing emissions of electric cars
Noise pollution
Removal of curbside parking spaces: frees up space for bicycle lanes
Sustainable transport
Walkability
References
Sustainable transport
Cycling | Alternatives to car use | [
"Physics"
] | 917 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
10,975,396 | https://en.wikipedia.org/wiki/Des-gamma%20carboxyprothrombin | Des-gamma carboxyprothrombin (DCP), also known as protein induced by vitamin K absence/antagonist-II (PIVKA-II), is an abnormal form of the coagulation protein, prothrombin. Normally, the prothrombin precursor undergoes post-translational carboxylation (addition of a carboxylic acid group) by gamma-glutamyl carboxylase in the liver prior to secretion into plasma. DCP/PIVKA-II may be detected in people with deficiency of vitamin K (due to poor nutrition or malabsorption) and in those taking warfarin or other medication that inhibits the action of vitamin K.
Diagnostic use
Hepatocellular carcinoma
A 1984 study first described the use of DCP as a marker of hepatocellular carcinoma (HCC); it was present in 91% of HCC patients, while not being detectable in other liver diseases. The DCP level did not change with the administration of vitamin K, suggesting a defect in gamma-carboxylation activity rather than vitamin K deficiency. A number of subsequent studies have since confirmed this phenomenon.
A 2007 comparison of various HCC tumor markers found DCP the least sensitive to risk factors for HCC (such as cirrhosis), and hence the most useful in predicting HCC. It differentiates HCC from non-malignant liver diseases. Moreover, it has been demonstrated that a combined analysis of DCP and Alpha-fetoprotein (AFP) can lead to a better prediction in early stages of HCC.
Despite many years of use in Japan, it was not until 2003 that an American study reevaluated its use in an American patient series. It also identified HCC at an earlier stage.
Anticoagulant intoxication
A 1987 report described the use of DCP determination in the detection of intoxication with acenocoumarol, a vitamin K antagonist.
References
Coagulation system
Tumor markers | Des-gamma carboxyprothrombin | [
"Chemistry",
"Biology"
] | 418 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
10,976,022 | https://en.wikipedia.org/wiki/Euclidean%20shortest%20path | The Euclidean shortest path problem is a problem in computational geometry: given a set of polyhedral obstacles in a Euclidean space, and two points, find the shortest path between the points that does not intersect any of the obstacles.
Two dimensions
In two dimensions, the problem can be solved in polynomial time in a model of computation allowing addition and comparisons of real numbers, despite theoretical difficulties involving the numerical precision needed to perform such calculations. These algorithms are based on two different principles, either performing a shortest path algorithm such as Dijkstra's algorithm on a visibility graph derived from the obstacles or (in an approach called the continuous Dijkstra method) propagating a wavefront from one of the points until it meets the other.
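A minimal sketch of the visibility-graph approach mentioned above is given below, in Python. It assumes points are (x, y) tuples and that a caller-supplied predicate visible(p, q) reports whether the open segment between p and q avoids every obstacle interior; the graph is built on the obstacle vertices plus the two query points and searched with Dijkstra's algorithm. The names are illustrative and not taken from any particular library.

import heapq
from math import dist

def euclidean_shortest_path(start, goal, obstacle_vertices, visible):
    """Shortest obstacle-avoiding path via a visibility graph plus Dijkstra.
    `visible(p, q)` must return True when segment pq misses all obstacle interiors."""
    nodes = [start, goal] + list(obstacle_vertices)
    adj = {p: [] for p in nodes}
    for i, p in enumerate(nodes):            # build the visibility graph
        for q in nodes[i + 1:]:
            if visible(p, q):
                w = dist(p, q)               # edge weight = Euclidean distance
                adj[p].append((q, w))
                adj[q].append((p, w))
    best, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:                              # Dijkstra's algorithm
        d, p = heapq.heappop(heap)
        if d > best.get(p, float("inf")):
            continue
        for q, w in adj[p]:
            if d + w < best.get(q, float("inf")):
                best[q], prev[q] = d + w, p
                heapq.heappush(heap, (d + w, q))
    if goal not in best:
        return None, float("inf")            # goal unreachable
    path, node = [goal], goal
    while node != start:                     # walk the predecessor chain back to the start
        node = prev[node]
        path.append(node)
    return path[::-1], best[goal]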
Higher dimensions
In three (and higher) dimensions the problem is NP-hard in the general case, but there exist efficient approximation algorithms that run in polynomial time based on the idea of finding a suitable sample of points on the obstacle edges and performing a visibility graph calculation using these sample points.
There are many results on computing shortest paths that stay on a polyhedral surface. Given two points s and t, say on the surface of a convex polyhedron, the problem is to compute a shortest path that never leaves the surface and connects s with t. This is a generalization of the two-dimensional problem, but it is much easier than the general three-dimensional problem.
Variants
There are variations of this problem in which the obstacles are weighted: one may pass through an obstacle, but doing so incurs an extra cost. The standard problem is the special case in which the obstacles have infinite weight. This is termed the weighted region problem in the literature.
See also
Shortest path problem, in a graph of edges and vertices
Any-angle path planning, in a grid space
Notes
References
External links
Implementation of Euclidean Shortest Path algorithm in Digital Geometric Kernel software
Geometric algorithms
Computational geometry | Euclidean shortest path | [
"Mathematics"
] | 398 | [
"Geometry stubs",
"Computational mathematics",
"Combinatorics",
"Computational geometry",
"Geometry",
"Combinatorics stubs"
] |
10,976,170 | https://en.wikipedia.org/wiki/EF-4 | Elongation factor 4 (EF-4) is an elongation factor that is thought to back-translocate on the ribosome during the translation of RNA to proteins. It is found near-universally in bacteria and in eukaryotic endosymbiotic organelles including the mitochondria and the plastid. Responsible for proofreading during protein synthesis, EF-4 is a recent addition to the nomenclature of bacterial elongation factors.
Prior to its recognition as an elongation factor, EF-4 was known as leader peptidase A (LepA), as it is the first cistron on the operon carrying the bacterial leader peptidase. In eukaryotes it is traditionally called GUF1 (GTPase of Unknown Function 1). It has the preliminary EC number 3.6.5.n1.
Evolutionary background
LepA has a highly conserved sequence. LepA orthologs have been found in bacteria and almost all eukaryotes. The conservation in LepA has been shown to cover the entire protein. More specifically, the amino acid identity of LepA among bacterial orthologs ranges from 55%-68%.
Two forms of LepA have been observed; one form of LepA branches with mitochondrial LepA sequences, while the second form branches with cyanobacterial orthologs. These findings demonstrate that LepA is significant for bacteria, mitochondria, and plastids. LepA is absent from archaea.
Structure
The gene encoding LepA is known to be the first cistron as part of a bicistron operon. LepA is a polypeptide of 599 amino acids with a molecular weight of 67 kDa. The amino acid sequence of LepA indicates that it is a G protein, which consists of five known domains. The first four domains are strongly related to domains I, II, III, and V of primary elongation factor EF-G. However, the last domain of LepA is unique. This specific domain resides on the C-terminal end of the protein structure. This arrangement of LepA has been observed in the mitochondria of yeast cells to human cells.
Function
LepA is suspected to improve the fidelity of translation by recognizing a ribosome with mistranslocated tRNA and consequently inducing a back-translocation. By back-translocating a ribosome that has reached the post-translocational (POST) state incorrectly, LepA gives the EF-G factor a second opportunity to catalyze translocation. Back-translocation by LepA occurs at a similar rate to an EF-G-dependent translocation. As mentioned above, EF-G's structure is highly analogous to LepA's structure; LepA's function is thus similarly analogous to EF-G's function. However, Domain IV of EF-G has been shown through several studies to occupy the decoding sequence of the A site after the tRNAs have been translocated from the A and P sites to the P and E sites. Thus, domain IV of EF-G prevents back-movement of the tRNA. Despite the structural similarities between LepA and EF-G, LepA lacks this Domain IV. Thus LepA reduces the activation barrier between the PRE and POST states in a similar way to EF-G but is, at the same time, able to catalyze a back-translocation rather than a canonical translocation.
Activity
LepA exhibits uncoupled GTPase activity. This activity is stimulated by the ribosome to the same extent as the activity of EF-G, which is known to have the strongest ribosome-dependent GTPase activity among all characterized G proteins involved in translation. Conversely, uncoupled GTPase activity occurs when the ribosome stimulation of GTP cleavage is not directly dependent on protein synthesis. In the presence of GTP, LepA works catalytically. On the other hand, in the presence of the nonhydrolysable GTP – GDPNP – the LepA action becomes stoichiometric, saturating at about one molecule per 70S ribosomes. This data demonstrates that GTP cleavage is required for dissociation of LepA from the ribosome, which is demonstrative of a typical G protein. At low concentrations of LepA (less than or equal to 3 molecules per 70S ribosome), LepA specifically recognizes incorrectly translocated ribosomes, back-translocates them, and thus allows EF-G to have a second chance to catalyze the correct translocation reaction. At high concentrations (about 1 molecule per 70S ribosome), LepA loses its specificity and back-translocates every POST ribosome. This places the translational machinery in a nonreproductive mode. This explains the toxicity of LepA when it is found in a cell in high concentrations. Hence, at low concentrations LepA significantly improves the yield and activity of synthesized proteins; however, at high concentrations LepA is toxic to cells.
Additionally, LepA has an effect on peptide bond formation. Through various studies in which functional derivatives of ribosomes were mixed with puromycin (an analog of the 3' end of an aa-tRNA) it was determined that adding LepA to a post transcriptionally modified ribosome prevents dipeptide formation as it inhibits the binding of aa-tRNA to the A site.
Experimental data
There have been various experiments elucidating the structure and function of LepA. One notable study is termed the "toeprinting experiment": this experiment helped to determine LepA's ability to back-translocate. In this case, a primer was extended via reverse transcription along mRNA that was ribosome-bound. The primers from modified mRNA strands from various ribosomes were extended with and without LepA. An assay was then conducted with both PRE and POST states, and cleavage studies revealed enhanced positional cleavage in the POST state as opposed to the PRE state. Since the POST state had been in the presence of LepA (plus GTP), it was determined that the strong signal characteristic of the POST state was the result of LepA, which then brought the signal down to the level of the PRE state. Such a study demonstrated that the ribosome, upon binding to the LepA–GTP complex, assumes the PRE-state configuration.
See also
Prokaryotic elongation factors
Eukaryotic elongation factors
References
Investigation into the biological function of the highly conserved GTPase LepA PhD thesis.
External links
Protein biosynthesis | EF-4 | [
"Chemistry"
] | 1,384 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
10,977,211 | https://en.wikipedia.org/wiki/Diprenorphine | Diprenorphine (brand name Revivon; former developmental code name M5050), also known as diprenorfin, is a non-selective, high-affinity, weak partial agonist of the μ- (MOR), κ- (KOR), and δ-opioid receptor (DOR) (with equal affinity) which is used in veterinary medicine as an opioid antagonist. It is used to reverse the effects of super-potent opioid analgesics such as etorphine and carfentanil that are used for tranquilizing large animals. The drug is not approved for use in humans.
Diprenorphine is the strongest opioid antagonist that is commercially available (some 100 times more potent as an antagonist than nalorphine), and is used for reversing the effects of very strong opioids for which the binding affinity is so high that naloxone does not effectively or reliably reverse the narcotic effects. These super-potent opioids, with the single exception of buprenorphine (which has an improved safety profile due to its partial agonist character), are not used in humans because the dose for a human is so small that it would be difficult to measure properly, so there is an excessive risk of overdose leading to fatal respiratory depression. However, conventional opioid derivatives are not strong enough to rapidly tranquilize large animals, such as elephants and rhinos, so drugs such as etorphine and carfentanil are available for this purpose.
Diprenorphine is considered to be the specific reversing agent/antagonist for etorphine and carfentanil, and is normally used to remobilise animals once veterinary procedures have been completed. Since diprenorphine also has partial agonistic properties of its own, it should not be used on humans if they are accidentally exposed to etorphine or carfentanil. Naloxone or naltrexone is the preferred human opioid receptor antagonist.
In theory, diprenorphine could also be used as an antidote for treating overdose of certain opioid derivatives which are used in humans, particularly buprenorphine for which the binding affinity is so high that naloxone does not reliably reverse the narcotic effects. However, diprenorphine is not generally available in hospitals; instead a vial of diprenorphine is supplied with etorphine or carfentanil specifically for reversing the effects of the drug, so the use of diprenorphine for treating a buprenorphine overdose is not usually carried out in practice.
Because diprenorphine is a weak partial agonist of the opioid receptors rather than a silent antagonist, it can produce some opioid effects in the absence of other opioids at sufficient doses. Moreover, due to partial agonism of the KOR, where it appears to possess significantly greater intrinsic activity relative to the MOR, diprenorphine can produce sedation as well as, in humans, hallucinations.
References
Delta-opioid receptor antagonists
Ethers
Kappa-opioid receptor agonists
Kappa-opioid receptor antagonists
4,5-Epoxymorphinans
Mu-opioid receptor antagonists
Hydroxyarenes
Semisynthetic opioids
Tertiary alcohols | Diprenorphine | [
"Chemistry"
] | 697 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
10,977,339 | https://en.wikipedia.org/wiki/Xenon-135 | Xenon-135 (135Xe) is an unstable isotope of xenon with a half-life of about 9.2 hours. 135Xe is a fission product of uranium and it is the most powerful known neutron-absorbing nuclear poison (2 million barns; up to 3 million barns under reactor conditions), with a significant effect on nuclear reactor operation. The ultimate yield of xenon-135 from fission is 6.3%, though most of this is from fission-produced tellurium-135 and iodine-135.
135Xe effects on reactor restart
In a typical nuclear reactor fueled with uranium-235, the presence of 135Xe as a fission product presents designers and operators with problems due to its large neutron cross section for absorption. Because absorbing neutrons can impair a nuclear reactor's ability to increase power, reactors are designed to mitigate this effect and operators are trained to anticipate and react to these transients. This practice dates to the first fission piles, constructed by the Manhattan Project during the Second World War. Enrico Fermi suspected that 135Xe would act as a powerful neutron poison and followed the advice of Emilio Segrè by contacting his student Chien-Shiung Wu. Wu's unpublished paper on 135Xe verified Fermi's guess that it absorbed neutrons and was the cause of the disruptions to the B Reactor then in use at Hanford, Washington to breed plutonium for the American implosion bomb.
During periods of steady state operation at a constant neutron flux level, the 135Xe concentration builds up to its equilibrium value for that reactor power in about 40 to 50 hours. When the reactor power is increased, 135Xe concentration initially decreases because the burn up is increased at the new higher power level. Because 95% of the 135Xe production is from decay of 135I, which has a 6.57 hour half-life, the production of 135Xe remains constant; at this point, the 135Xe concentration reaches a minimum. The concentration then increases to the new equilibrium level (more accurately steady state level) for the new power level in roughly 40 to 50 hours. During the initial 4 to 6 hours following the power change, the magnitude and the rate of change of concentration is dependent upon the initial power level and on the amount of change in power level; the 135Xe concentration change is greater for a larger change in power level. When reactor power is decreased, the process is reversed.
Iodine-135 is a fission product of uranium with a yield of about 6% (counting also the 135I produced almost immediately from decay of fission-produced tellurium-135). This 135I decays with a 6.57 hour half-life to 135Xe. Thus, in an operating nuclear reactor, 135Xe is being continuously produced. 135Xe has a very large neutron absorption cross-section, so in the high-neutron-flux environment of a nuclear reactor core, the 135Xe soon absorbs a neutron and becomes effectively stable 136Xe. (The half-life of 136Xe is greater than 10²¹ years, and it is not treated as a radioisotope.) Thus, in about 50 hours, the 135Xe concentration reaches equilibrium where its creation by 135I decay is balanced with its destruction by neutron absorption.
When reactor power is decreased or shut down by inserting neutron-absorbing control rods, the reactor neutron flux is reduced and the equilibrium shifts initially towards higher 135Xe concentration. The 135Xe concentration peaks about 11.1 hours after reactor power is decreased. Since 135Xe has a 9.2 hour half-life, the 135Xe concentration gradually decays back to low levels over 72 hours.
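The build-up over roughly 40 to 50 hours and the post-shutdown peak around 10–11 hours both follow from a simple pair of balance equations: 135I is produced in proportion to the fission rate and decays to 135Xe, which is removed by its own decay and by neutron capture. The sketch below, in Python, integrates these equations with rounded values taken from this article (about 6% yield via iodine, about 0.3% direct, roughly 3 million barn, flux 10¹⁴ n·cm⁻²·s⁻¹); it is a qualitative illustration with arbitrary concentration units, not a reactor-physics calculation.

from math import log

# Rounded constants (see text); concentrations are in arbitrary units.
LAMBDA_I = log(2) / (6.57 * 3600)   # 1/s, decay constant of 135I
LAMBDA_X = log(2) / (9.17 * 3600)   # 1/s, decay constant of 135Xe
SIGMA_X = 3.0e-18                   # cm^2, capture cross section under reactor conditions
YIELD_I = 0.060                     # fission yield reaching 135Xe via 135I
YIELD_X = 0.003                     # (near-)direct fission yield of 135Xe

def step(iodine, xenon, flux, dt):
    """One explicit-Euler step of the coupled 135I / 135Xe balance equations.
    The fission rate is taken as simply proportional to flux (arbitrary normalisation)."""
    d_i = YIELD_I * flux - LAMBDA_I * iodine
    d_x = YIELD_X * flux + LAMBDA_I * iodine - (LAMBDA_X + SIGMA_X * flux) * xenon
    return iodine + d_i * dt, xenon + d_x * dt

iodine = xenon = 0.0
dt = 60.0                                    # s
for _ in range(int(50 * 3600 / dt)):         # ~50 h at full flux: approach equilibrium
    iodine, xenon = step(iodine, xenon, 1.0e14, dt)
peak, t_peak = xenon, 0.0
for n in range(int(72 * 3600 / dt)):         # 72 h after shutdown (flux = 0)
    iodine, xenon = step(iodine, xenon, 0.0, dt)
    if xenon > peak:
        peak, t_peak = xenon, (n + 1) * dt / 3600
print(t_peak)   # post-shutdown xenon peak roughly 10 hours after shutdown with these inputs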
The temporarily high level of 135Xe with its high neutron absorption cross-section makes it difficult to restart the reactor for several hours. The neutron-absorbing 135Xe acts like a control rod, reducing reactivity. The inability of a reactor to be started due to the effects of 135Xe is sometimes referred to as xenon-precluded start-up, and the reactor is said to be "poisoned out". The period of time that the reactor is unable to overcome the effects of 135Xe is called the "xenon dead time".
If sufficient reactivity control authority is available, the reactor can be restarted, but the xenon burn-out transient must be carefully managed. As the control rods are extracted and criticality is reached, neutron flux increases many orders of magnitude and the 135Xe begins to absorb neutrons and be transmuted to 136Xe. The reactor burns off the nuclear poison. As this happens, the reactivity and neutron flux increase, and the control rods must be gradually reinserted to counter the loss of neutron absorption by the 135Xe. Otherwise, the reactor neutron flux will continue to increase, burning off even more xenon poison, on a path to runaway criticality. The time constant for this burn-off transient depends on the reactor design, power level history of the reactor for the past several days, and the new power setting. For a typical step up from 50% power to 100% power, 135Xe concentration falls for about 3 hours.
Xenon poisoning was a contributing factor to the Chernobyl disaster; during a run-down to a lower power, a combination of operator error and xenon poisoning caused the reactor thermal power to fall to near-shutdown levels. The crew's resulting efforts to restore power placed the reactor in a highly unsafe configuration. A flaw in the SCRAM system inserted positive reactivity, causing a thermal transient and a steam explosion that tore the reactor apart.
Reactors using continuous reprocessing like many molten salt reactor designs might be able to extract 135Xe from the fuel and avoid these effects. Fluid fuel reactors cannot develop xenon inhomogeneity because the fuel is free to mix. Also, the Molten Salt Reactor Experiment demonstrated that spraying the liquid fuel as droplets through a gas space during recirculation can allow xenon and krypton to leave the fuel salts. Removing 135Xe from neutron exposure improves neutron economy, but causes the reactor to produce more of the long-lived fission product 135Cs. The long lived (but 76000 times less radioactive) caesium-135 condenses in a separate tank after the decay of 135Xe, and is physically separate from the 30.05 year half life caesium-137 (137Cs) produced in the fuel, and it is practical to handle them separately (fission yield is approximately 6% for both).
Decay and capture products
A 135Xe atom that does not capture a neutron undergoes beta decay to 135Cs, one of the 7 long-lived fission products, while a 135Xe that does capture a neutron becomes almost-stable 136Xe.
The probability of capturing a neutron before decay varies with the neutron flux, which itself depends on the kind of reactor, fuel enrichment and power level; and the 135Cs / 136Xe ratio switches its predominant branch very near usual reactor conditions.
Estimates of the proportion of 135Xe during steady-state reactor operation that captures a neutron include 90%, 39%–91% and "essentially all".
For instance, in a (somewhat high) neutron flux of 10¹⁴ n·cm⁻²·s⁻¹, a capture cross section of roughly 3×10⁻¹⁸ cm² (about 3 million barn, the value under reactor conditions quoted above) would lead to a capture probability of about 3×10⁻⁴ s⁻¹ per nucleus, which corresponds to a mean time to capture of about one hour. Compared to the 9.17 hour half-life of 135Xe, this nearly ten-to-one ratio means that under such conditions, essentially all 135Xe would capture a neutron before decay. But if the neutron flux is lowered to one-tenth of this value, as in CANDU reactors, the ratio would be roughly 50-50, and about half the 135Xe would decay to 135Cs before neutron capture.
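Written as a formula, the fraction of 135Xe nuclei that capture a neutron before decaying is σφ / (σφ + λ), where φ is the neutron flux and λ the decay constant. A quick check in Python with the rounded values above (illustrative, not exact) reproduces the flux dependence described in this section.

from math import log

LAMBDA_XE = log(2) / (9.17 * 3600)   # 1/s, 135Xe decay constant
SIGMA_XE = 3.0e-18                   # cm^2, approximate capture cross section (see text)

def capture_fraction(flux):
    """Fraction of 135Xe nuclei that capture a neutron before beta-decaying."""
    capture_rate = SIGMA_XE * flux                   # 1/s per nucleus
    return capture_rate / (capture_rate + LAMBDA_XE)

print(capture_fraction(1.0e14))   # ~0.93: essentially all captured at high flux
print(capture_fraction(1.0e13))   # ~0.59: roughly half decay at one-tenth the flux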
136Xe from neutron capture ends up as part of the eventual stable fission xenon which also includes 134Xe, 132Xe, and 131Xe produced by fission and beta decay rather than neutron capture.
Nuclei of 133Xe, 137Xe, and 135Xe that have not captured a neutron all beta decay to isotopes of caesium. Fission produces 133Xe, 137Xe, and 135Xe in roughly equal amounts but, after neutron capture, fission caesium contains more stable 133Cs (which however can become 134Cs on further neutron activation) and highly radioactive 137Cs than 135Cs.
Spatial xenon oscillations
Large thermal reactors with low flux coupling between regions may experience spatial power oscillations because of the non-uniform presence of xenon-135. Xenon-induced spatial power oscillations occur as a result of rapid perturbations to power distribution that cause the xenon and iodine distribution to be out of phase with the perturbed power distribution. This results in a shift in xenon and iodine distributions that causes the power distribution to change in an opposite direction from the initial perturbation.
The instantaneous production rate of xenon-135 is dependent on the iodine-135 concentration and therefore on the local neutron flux history. On the other hand, the destruction rate of xenon-135 is dependent on the instantaneous local neutron flux.
The combination of delayed generation and high neutron-capture cross section produces a diversity of impacts on nuclear reactor operation. The mechanism is described in the following four steps.
An initial lack of symmetry (for example, axial symmetry, in the case of axial oscillations) in the core power distribution (for example as a result of significant control rods movement) causes an imbalance in fission rates within the reactor core, and therefore, in the iodine-135 buildup and the xenon-135 absorption.
In the high-flux region, xenon-135 burnout allows the flux to increase further, while in the low-flux region, the increase in xenon-135 causes a further reduction in flux. The iodine concentration increases where the flux is high and decreases where the flux is low. This shift in the xenon distribution is such as to increase (decrease) the multiplication properties of the region in which the flux has increased (decreased), thus enhancing the flux tilt.
As soon as the iodine-135 levels build up sufficiently, decay to xenon reverses the initial situation. Flux decreases in this area, and the former low-flux region increases in power.
Repetition of these patterns can lead to xenon oscillations moving about the core with periods on the order of about 24 hours.
With little change in overall power level, these oscillations can significantly change the local power levels. This oscillation may go unnoticed and reach dangerous local flux levels if only the total power of the core is monitored. Therefore, most PWRs use tandem power range excore neutron detectors to monitor upper and lower halves of the core separately.
See also
Isotopes of xenon
Shutdown (nuclear reactor)
References
Further reading
"Xenon Poisoning" or Neutron Absorption in Reactors
Fission products
Isotopes of xenon
Neutron poisons | Xenon-135 | [
"Chemistry"
] | 2,309 | [
"Nuclear fission",
"Isotopes",
"Fission products",
"Nuclear fallout",
"Isotopes of xenon"
] |
10,977,382 | https://en.wikipedia.org/wiki/Isothermal%20transformation%20diagram | Isothermal transformation diagrams (also known as time-temperature-transformation (TTT) diagrams) are plots of temperature versus time (usually on a logarithmic scale). They are generated from percentage transformation-vs time measurements, and are useful for understanding the transformations of an alloy steel at elevated temperatures.
An isothermal transformation diagram is only valid for one specific composition of material, and only if the temperature is held constant during the transformation, and strictly with rapid cooling to that temperature. Though usually used to represent transformation kinetics for steels, they also can be used to describe the kinetics of crystallization in ceramic or other materials. Time-temperature-precipitation diagrams and time-temperature-embrittlement diagrams have also been used to represent kinetic changes in steels.
The isothermal transformation (IT) diagram, or C-curve, is associated with mechanical properties, microconstituents/microstructures, and heat treatments in carbon steels. Diffusional transformations, such as austenite transforming to a cementite and ferrite mixture, can be described by a sigmoidal curve; for example, the beginning of the pearlitic transformation is represented by the pearlite start (Ps) curve, and the transformation is complete at the pearlite finish (Pf) curve. Nucleation requires an incubation time. As the temperature decreases below the eutectoid temperature, the rate of nucleation increases while the rate of microconstituent growth decreases, so the overall transformation rate reaches a maximum at the bay or nose of the curve. Below the nose, the decrease in diffusion rate due to low temperature offsets the effect of the increased driving force due to the greater difference in free energy. As a result of the transformation, the microconstituents pearlite and bainite form; pearlite forms at higher temperatures and bainite at lower.
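The sigmoidal transformation kinetics referred to above are commonly modelled with the Johnson–Mehl–Avrami–Kolmogorov (Avrami) equation, X(t) = 1 − exp(−k·tⁿ); on an isothermal transformation diagram, the start and finish curves then correspond to fixed transformed fractions, often taken as roughly 1% and 99%. The short Python sketch below inverts that relation for a single hold temperature; the values of k and n are placeholders, not data for any particular steel.

from math import exp, log

def transformed_fraction(t, k, n):
    """Avrami (JMAK) equation: fraction transformed after time t."""
    return 1.0 - exp(-k * t**n)

def time_for_fraction(x, k, n):
    """Invert the Avrami equation: time needed to reach transformed fraction x."""
    return (-log(1.0 - x) / k) ** (1.0 / n)

# Placeholder kinetic parameters for one isothermal hold temperature.
k, n = 1.0e-3, 3.0                           # assumed rate constant (s^-n) and Avrami exponent
t_start = time_for_fraction(0.01, k, n)      # ~1% transformed (a "Ps"-like point), about 2 s here
t_finish = time_for_fraction(0.99, k, n)     # ~99% transformed (a "Pf"-like point), about 17 s here
print(t_start, t_finish)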
Austenite is only slightly undercooled when quenched to just below the eutectoid temperature. When given more time, the stable microconstituents ferrite and cementite can form. Coarse pearlite is produced when atoms diffuse rapidly after the phases that form pearlite have nucleated. The transformation is complete at the pearlite finish time (Pf).
However, greater undercooling by rapid quenching results in formation of martensite or bainite instead of pearlite. This is possible provided the cooling rate is such that the cooling curve intersects the martensite start temperature or the bainite start curve before intersecting the Ps curve. The martensite transformation being a diffusionless shear transformation is represented by a straight line to signify the martensite start temperature.
See also
Continuous cooling transformation
Phase diagram
References
Materials science and Engineering, an introduction. William D. Callister Jr, 7th Ed, Wiley and sons publishing. Pages 258, 326, 462
The Science and Engineering of Materials. Donald R. Askeland, Pradeep P. Fulay, Wendelin J. Wright, 6th Ed, Cengage Learning. Pages 470–5.
Metallurgy
Phase transitions
Diagrams | Isothermal transformation diagram | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 607 | [
"Physical phenomena",
"Phase transitions",
"Metallurgy",
"Phases of matter",
"Critical phenomena",
"Materials science",
"nan",
"Statistical mechanics",
"Matter"
] |
10,977,572 | https://en.wikipedia.org/wiki/Haradh%20gas%20plant | The Haradh Gas Plant is one of the major gas plants in Saudi Arabia. It is located near Haradh village, 300 km southwest of Dhahran. The plant has a capacity of producing 1.6 BSCFD of natural gas and 170,000 BBL/day of condensate (oil). The plant processes only non-associated gas. The plant is considered to be a mid-size, when compared to other sister plants in the region. However, the amount of oil processed is considered to be relatively large.
The plant started operating in April 2003.
References
Haradh
Natural gas in Saudi Arabia
Natural gas plants | Haradh gas plant | [
"Chemistry"
] | 129 | [
"Natural gas technology",
"Natural gas plants"
] |
10,977,794 | https://en.wikipedia.org/wiki/Technetium-99 | Technetium-99 (99Tc) is an isotope of technetium which decays with a half-life of 211,000 years to stable ruthenium-99, emitting beta particles, but no gamma rays. It is the most significant long-lived fission product of uranium fission, producing the largest fraction of the total long-lived radiation emissions of nuclear waste. Technetium-99 has a fission product yield of 6.0507% for thermal neutron fission of uranium-235.
The metastable technetium-99m (99mTc) is a short-lived (half-life about 6 hours) nuclear isomer used in nuclear medicine, produced from molybdenum-99. It decays by isomeric transition to technetium-99, a desirable characteristic, since the very long half-life and type of decay of technetium-99 imposes little further radiation burden on the body.
Radiation
The weak beta emission is stopped by the walls of laboratory glassware. Soft X-rays are emitted when the beta particles are stopped, but as long as the body is kept more than 30 cm away these should pose no problem. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk.
Role in nuclear waste
Due to its high fission yield, relatively long half-life, and mobility in the environment, technetium-99 is one of the more significant components of nuclear waste. Measured in becquerels per amount of spent fuel, it is the dominant producer of radiation in the period from about 10⁴ to 10⁶ years after the creation of the nuclear waste. The next shortest-lived fission product is samarium-151 with a half-life of 90 years, though a number of actinides produced by neutron capture have half-lives in the intermediate range.
Releases
An estimated 160 TBq (about 250 kg) of technetium-99 was released into the environment up to 1994 by atmospheric nuclear tests. The amount of technetium-99 from civilian nuclear power released into the environment up to 1986 is estimated to be on the order of 1000 TBq (about 1600 kg), primarily by outdated methods of nuclear fuel reprocessing; most of this was discharged into the sea. In recent years, reprocessing methods have improved to reduce emissions, but the primary release of technetium-99 into the environment is by the Sellafield plant, which released an estimated 550 TBq (about 900 kg) from 1995–1999 into the Irish Sea. From 2000 onwards the amount has been limited by regulation to 90 TBq (about 140 kg) per year.
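The correspondence between activities and masses quoted above follows from the specific activity of technetium-99, A = λ·N_A/M per gram, using only the 211,000-year half-life and a molar mass of about 99 g/mol. A short Python check with rounded values (a sketch, not a precise radiochemical calculation) is shown below.

from math import log

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7
HALF_LIFE_S = 211_000 * SECONDS_PER_YEAR     # s, half-life of 99Tc
MOLAR_MASS = 99.0                            # g/mol

decay_const = log(2) / HALF_LIFE_S                         # 1/s
specific_activity = decay_const * AVOGADRO / MOLAR_MASS    # Bq per gram, ~6e8

def mass_kg(activity_bq):
    """Mass of 99Tc (kg) corresponding to a given activity (Bq)."""
    return activity_bq / specific_activity / 1000.0

print(specific_activity)                 # ~6.3e8 Bq/g
for tbq in (160, 1000, 550, 90):
    print(tbq, "TBq ->", round(mass_kg(tbq * 1e12)), "kg")
# 160 TBq ~ 250 kg and 1000 TBq ~ 1600 kg, consistent with the figures quoted above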
In the environment
The long half-life of technetium-99 and its ability to form an anionic species make it (along with 129I) a major concern when considering long-term disposal of high-level radioactive waste. Many of the processes designed to remove fission products from medium-active process streams in reprocessing plants are designed to remove cationic species like caesium (e.g., 137Cs, 134Cs) and strontium (e.g., 90Sr). Hence the pertechnetate escapes through these treatment processes. Current disposal options favor burial in geologically stable rock. The primary danger with such a course is that the waste is likely to come into contact with water, which could leach radioactive contamination into the environment. The natural cation-exchange capacity of soils tends to immobilize plutonium, uranium, and caesium cations. However, the anion-exchange capacity is usually much smaller, so minerals are less likely to adsorb the pertechnetate and iodide anions, leaving them mobile in the soil. For this reason, the environmental chemistry of technetium is an active area of research.
Separation of technetium-99
Several methods have been proposed for technetium-99 separation including: crystallization, liquid-liquid extraction, molecular recognition methods, volatilization, and others.
In 2012 the crystalline compound Notre Dame Thorium Borate-1 (NDTB-1) was presented by researchers at the University of Notre Dame. It can be tailored to safely absorb radioactive ions from nuclear waste streams. Once captured, the radioactive ions can then be exchanged for higher-charged species of a similar size, recycling the material for re-use. Lab results using the NDTB-1 crystals removed approximately 96 percent of technetium-99.
Transmutation of technetium to stable ruthenium-100
An alternative disposal method, transmutation, has been demonstrated at CERN for technetium-99. This transmutation process bombards the technetium (99Tc as a metal target) with neutrons, forming the short-lived 100Tc (half-life 16 seconds), which decays by beta decay to stable ruthenium-100 (100Ru). Given the relatively high market value of ruthenium and the particularly undesirable properties of technetium, this type of nuclear transmutation appears particularly promising.
See also
Isotopes of technetium
List of elements facing shortage
Technetium-99m
References
Fission products
Isotopes of technetium
Radiochemistry
Radiopharmaceuticals | Technetium-99 | [
"Chemistry"
] | 1,088 | [
"Nuclear fission",
"Medicinal radiochemistry",
"Isotopes",
"Fission products",
"Nuclear fallout",
"Radiopharmaceuticals",
"Radiochemistry",
"Chemicals in medicine",
"Radioactivity",
"Isotopes of technetium"
] |
10,977,940 | https://en.wikipedia.org/wiki/Photobiotin | Photobiotin is a derivative of biotin used as a biochemical tool. It is composed of a biotin group, a linker group, and a photoactivatable aryl azide group.
The photoactivatable group provides nonspecific labeling of proteins, DNA and RNA probes or other molecules. Biotinylation of DNA and RNA with photoactivatable biotin is easier and less expensive than enzymatic methods since the DNA and RNA does not degrade. Photobiotin is most effectively activated by light at 260-475 nm.
References
Billingsley, M. and J. Polli. “Preparation, characterization and biological properties of biotinylated derivatives of calmodulin.” Biochem J. 275 Pt 3(1991): 733–743
"EZ-Link Photoactivatable Biotin." Pierce Biotechnology, Inc. Rockford, IL: June, 2003.
"Components of Avidin-Biotin Technology: A Handbook." Pierce Biotechnology, Inc. Rockford, IL: June, 2003.
"Photobiotin acetate." Sigma-Aldrich, Co. 2006.
"Photoprobe biotin", Vector Laboratories, Inc., www.vectorlabs.com.
Biotechnology | Photobiotin | [
"Biology"
] | 261 | [
"nan",
"Biotechnology"
] |
10,977,950 | https://en.wikipedia.org/wiki/Desmetramadol | Desmetramadol (), also known as O-desmethyltramadol (O-DSMT), is an opioid analgesic and the main active metabolite of tramadol. Tramadol is demethylated by the liver enzyme CYP2D6 to desmetramadol in the same way as codeine, and so similarly to the variation in effects seen with codeine, individuals who have a less active form of CYP2D6 will tend to have reduced analgesic effects from tramadol. Because desmetramadol itself does not need to be metabolized to induce an analgesic effect, it can be used in individuals with CYP2D6 inactivating mutations.
Pharmacology
Pharmacodynamics
(+)-Desmetramadol is a G-protein biased μ-opioid receptor full agonist. It shows comparatively far lower affinity for the δ- and κ-opioid receptors. The two enantiomers of desmetramadol show quite distinct pharmacological profiles; both (+) and (−)-desmetramadol are inactive as serotonin reuptake inhibitors, but (−)-desmetramadol retains activity as a norepinephrine reuptake inhibitor, and so the mix of both the parent compound and metabolites contributes significantly to the complex pharmacological profile of tramadol. While the multiple receptor targets can be beneficial in the treatment of pain (especially complex pain syndromes such as neuropathic pain), they increase the potential for drug interactions compared to other opioids, and may also contribute to side effects. Desmetramadol is also an antagonist of the serotonin 5-HT2C receptor, at pharmacologically relevant concentrations, via competitive inhibition. This suggests that the apparent antidepressant properties of tramadol may be at least partially mediated by desmetramadol, thus prolonging the duration of therapeutic benefit. Inhibition of the 5-HT2C receptor is a suggested factor in the mechanism of the antidepressant effects of agomelatine and maprotiline. The potential selectivity and favorable side effect profile of desmetramadol compared to tramadol make it more suitable for use as an antidepressant, although clinical development appears to have stopped. Upon inhibition of the receptor, downstream signaling causes dopamine and norepinephrine release, and the receptor is thought to significantly regulate mood, anxiety, feeding, and reproductive behavior. 5-HT2C receptors regulate dopamine release in the striatum, prefrontal cortex, nucleus accumbens, hippocampus, hypothalamus, and amygdala, among others. Research indicates that some suicide victims have an abnormally high number of 5-HT2C receptors in the prefrontal cortex. There is some mixed evidence that agomelatine, a 5-HT2C antagonist, is an effective antidepressant. Antagonism of 5-HT2C receptors by agomelatine results in an increase of dopamine and norepinephrine activity in the frontal cortex.
Pharmacokinetics
Metabolites
Desmetramadol is metabolized in the liver into the active metabolite N,O-didesmethyltramadol via CYP3A4 and CYP2B6. The inactive tramadol metabolite N-desmethyltramadol is metabolized into the active metabolite N,O-didesmethyltramadol by CYP2D6.
History
Desmetramadol was first synthesized in the research laboratories of Grünenthal GmbH, a pharmaceutical company based in Germany, during the late 1970s.
Its precursor, tramadol, was introduced to the global pharmaceutical market in the early 1980s under various brand names and was adopted as a pain-relieving medication notable for its dual-action characteristics; desmetramadol was identified as the major active metabolite formed during tramadol's metabolism.
Desmetramadol subsequently attracted attention for its distinct pharmacological profile, which is of particular interest in patients whose response to tramadol is limited by individual variations in CYP2D6 enzyme activity.
Society and culture
Recreational use
An herbal remedy called Krypton was found to contain kratom leaf powder and desmetramadol. Krypton was reportedly linked to at least 9 accidental opioid overdose deaths in Sweden during 2010–2011.
Medicinal use
Unusually for a compound that first came to prominence as a recreational designer drug, desmetramadol has recently been reevaluated as a potential novel analgesic drug for use in medicine, with its well studied pharmacology and toxicology as an active metabolite of the widely used analgesic drug tramadol offering advantages over more structurally novel alternatives. Human clinical trials have shown it to offer similar analgesic benefits to drugs such as oxycodone and fentanyl but with reduced respiratory depression and a comparatively favorable safety profile.
Legality
United Kingdom
Desmetramadol was made a Class A drug in the United Kingdom on 26 Feb 2013.
See also
7-Hydroxymitragynine
List of investigational analgesics
Tapentadol
Nortilidine
O-AMKD
TLR4
References
5-HT2C antagonists
Dimethylamino compounds
Analgesics
Cyclohexanols
Designer drugs
Mu-opioid receptor agonists
Norepinephrine reuptake inhibitors
Opioid metabolites
3-Hydroxyphenyl compounds
Synthetic opioids
Human drug metabolites | Desmetramadol | [
"Chemistry"
] | 1,302 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
10,978,350 | https://en.wikipedia.org/wiki/Stalin%20Monument%20%28Budapest%29 | The Stalin Monument (, ) was a statue of Joseph Stalin in Budapest, Hungary. Completed in December 1951 as a "gift to Joseph Stalin from the Hungarians on his seventieth birthday", it was torn down on October 23, 1956, by enraged anti-Soviet crowds during Hungary's October Revolution.
Monument
The monument was erected on the edge of Városliget, the city park of Budapest. The large monument stood 25 metres tall in total. The bronze statue stood eight metres high on a four-metre high limestone base on top of a tribune eighteen metres wide. Stalin was portrayed as a speaker, standing tall and rigid with his right hand at his chest. The sides of the tribune were decorated with relief sculptures depicting the Hungarian people welcoming their leader. The Hungarian sculptor, Sándor Mikus, created the statue and was awarded the Kossuth Prize, the highest distinction that can be attained by a Hungarian artist.
Background
The Stalin monument was built during the classical period of socialist realism, the official art of Stalinism, which was a tool to instill the ideology of the Party into the people. This realistic and didactic aesthetic style celebrated the hard working proletariat and especially the cult of personality surrounding figures like Vladimir Lenin, Stalin and other Eastern European Communist leaders.
Stalin statues sprang up everywhere in Eastern Europe from the 1930s to the 1950s. They were cult objects that demonstrated the almost mystical powers of Stalin. Upon the completion of the Stalin statue, a journalist in Budapest said:
The monument not only demonstrated Stalin's power, but the power of the Hungarian Working People's Party as well. Directly across from Stalin's monument was MÉMOSZ, the house of the builder's union, condemned for its modernist architecture influenced by the West.
After the death of Stalin in 1953, Socialist Realism went into decline, in connection with the political changes initiated by Nikita Khrushchev in 1956 at the 20th Congress of the Communist Party of the Soviet Union, when he denounced Stalin's cult of personality.
Destruction
On October 23, 1956, around two hundred thousand Hungarians gathered in Budapest to demonstrate in sympathy for the Poles who had just gained political reform during the Polish October. The Hungarians broadcast sixteen demands over the radio, one of them being the dismantling of Stalin's statue. A hundred thousand Hungarian revolutionaries demolished the Stalin statue, leaving only his boots, in which they planted a Hungarian flag. The bronze inscribed name of the Hungarians' leader, teacher and "best friend" was ripped off from the pedestal. Before the toppling of the statue, someone had placed a sign over Stalin's mouth that read "RUSSIANS, WHEN YOU RUN AWAY DON'T LEAVE ME BEHIND!" The revolutionaries chanted "Russia go home!" while pulling down the statue. "W.C." and other insulting remarks were scrawled over the fragmented parts of the statue.
The account of the incident by Sándor Kopácsi, head of Budapest's police: "[The demonstrators] placed ... a thick steel rope around the neck of the 25-metre tall Stalin's statue while other people, arriving in trucks with oxygen cylinders and metal cutting blowpipes, were setting to work on the statue's bronze shoes. ... An hour later the statue fell down from its pedestal."
Present
The site of the former Stalin Monument is now occupied by the Monument of the 1956 Revolution, completed in 2006 for the 50th anniversary of the historic event.
A life-sized representation of the Stalin Monument was built in Budapest's Statue Park with the broken bronze boots on top of the pedestal in 2006. This is not an accurate copy of the original but only an artistic recreation by sculptor Ákos Eleőd.
See also
History of Hungary
Hungarian Revolution of 1956
On the Cult of Personality and Its Consequences
Polish October
Socialist realism
Stalin Monument (Prague)
Stalinism
References
Bibliography
Åman, Anders. Architecture and Ideology in Eastern Europe During the Stalin Era. Cambridge, MA: The MIT P, 1992.
Bown, Matthew C. Art Under Stalin. Oxford: Phaidon P Limited, 1991. 73–86.
Demaitre, Ann. "The Great Debate on Socialist Realism" The Modern Language Journal 50.5 (1966): 263–268.
Sinko, Katalin. "Political Rituals: the Raising and Demolition of Monuments." Art and Society in the Age of Stalin. Ed. Peter Gyorgy and Hedvig Turai. Budapest: Corvina Books, 1992. 81.
Terras, Victor. "Phenomenological Observations on the Aesthetics of Socialist Realism" The Slavic and East European Journal 22.4 (Winter, 1979), pp. 445–457.
External links
Day by day account of the 1956 Revolution
American Hungarian Foundation's 1956 Site with Photos/Audio/Video
Reflection of BBC's Reporting of the Hungarian Revolution
Continuance of Stalin's cult of personality in Georgia
Hungarian Revolution of 1956
Buildings and structures in Budapest
Demolished buildings and structures in Hungary
Monuments and memorials in Budapest
Hungary–Soviet Union relations
Colossal statues
Statues of Joseph Stalin
1951 sculptures
Destroyed sculptures
Removed statues
Sculptures in Hungary
History of Budapest
Soviet monuments outside Russia | Stalin Monument (Budapest) | [
"Physics",
"Mathematics"
] | 1,050 | [
"Quantity",
"Colossal statues",
"Physical quantities",
"Size"
] |
10,978,393 | https://en.wikipedia.org/wiki/XIAP | X-linked inhibitor of apoptosis protein (XIAP), also known as inhibitor of apoptosis protein 3 (IAP3) and baculoviral IAP repeat-containing protein 4 (BIRC4), is a protein that stops apoptotic cell death. In humans, this protein (XIAP) is produced by a gene named XIAP gene located on the X chromosome.
XIAP is a member of the inhibitor of apoptosis family of proteins (IAP). IAPs were initially identified in baculoviruses, but XIAP is one of the homologous proteins found in mammals. It is so called because it was first discovered by a 273 base pair site on the X chromosome. The protein is also called human IAP-like Protein (hILP), because it is not as well conserved as the human IAPS: hIAP-1 and hIAP-2. XIAP is the most potent human IAP protein currently identified.
Discovery
Neuronal apoptosis inhibitor protein (NAIP) was the first homolog to baculoviral IAPs that was identified in humans. With the sequencing data of NAIP, the gene sequence for a RING zinc-finger domain was discovered at site Xq24-25. Using PCR and cloning, three BIR domains and a RING finger were found on the protein, which became known as X-linked Inhibitor of Apoptosis Protein. The transcript size of Xiap is 9.0 kb, with an open reading frame of 1.8 kb. Xiap mRNA has been observed in all human adult and fetal tissues "except peripheral blood leukocytes". The XIAP sequences led to the discovery of other members of the IAP family.
Structure
XIAP consists of three major types of structural elements (domains). Firstly, there is the baculoviral IAP repeat (BIR) domain consisting of approximately 70 amino acids, which characterizes all IAP. Secondly, there is a UBA domain, which allows XIAP to bind to ubiquitin. Thirdly, there is a zinc-binding domain, or a "carboxy-terminal RING Finger". XIAP has been characterized with three amino-terminal BIR domains followed by one UBA domain and finally one RING domain. Between the BIR-1 and BIR-2 domains, there is a linker-BIR-2 region that is thought to contain the only element that comes into contact with the caspase molecule to form the XIAP/Caspase-7 complex. In solution the full length form of XIAP forms a homodimer of approximately 114 kDa.
Function
XIAP stops apoptotic cell death that is induced either by viral infection or by overproduction of caspases. Caspases are the enzymes primarily responsible for cell death. XIAP binds to and inhibits caspase 3, 7 and 9. The BIR2 domain of XIAP inhibits caspase 3 and 7, while BIR3 binds to and inhibits caspase 9. The RING domain has E3 ubiquitin ligase activity and enables IAPs to catalyze ubiquitination of themselves, caspase-3, or caspase-7, targeting them for degradation by the proteasome. However, mutations affecting the RING Finger do not significantly affect apoptosis, indicating that the BIR domain is sufficient for the protein's function. When inhibiting caspase-3 and caspase-7 activity, the BIR2 domain of XIAP binds to the active-site substrate groove, blocking access of the normal protein substrate that would result in apoptosis.
Caspases are activated by cytochrome c, which is released into the cytosol by dysfunctioning mitochondria. Studies show that XIAP does not directly affect cytochrome c.
XIAP distinguishes itself from the other human IAPs because it is able to effectively prevent cell death due to "TNF-α, Fas, UV light, and genotoxic agents".
Inhibiting XIAP
XIAP is inhibited by DIABLO (Smac) and HTRA2 (Omi), two death-signaling proteins released into the cytoplasm by the mitochondria. Smac/DIABLO, a mitochondrial protein and negative regulator of XIAP, can enhance apoptosis by binding to XIAP and preventing it from binding to caspases. This allows normal caspase activity to proceed. The binding process of Smac/DIABLO to XIAP and caspase release requires a conserved tetrapeptide motif.
Clinical significance
Deregulation of XIAP can result in "cancer, neurodegenerative disorders, and autoimmunity". High levels of XIAP may function as a tumor marker. In the development of lung cancer NCI-H460, the overexpression of XIAP not only inhibits caspases, but also blocks cytochrome c-mediated apoptosis. In developing prostate cancer, XIAP is one of four IAPs overexpressed in the prostatic epithelium, indicating that a molecule that inhibits all IAPs may be necessary for effective treatment. Apoptotic regulation is an extremely important biological function, as evidenced by "the conservation of the IAPs from humans to Drosophila".
Mutations in the XIAP gene can result in a severe and rare type of inflammatory bowel disease. Defects in the XIAP gene can also result in an extremely rare condition called X-linked lymphoproliferative disease type 2.
Interactions
XIAP has been shown to interact with:
ALS2CR2,
Caspase 3.
Caspase 7,
Caspase-9,
Diablo homolog
HtrA serine peptidase 2,
MAGED1,
MAP3K2,
TAB1, and
XAF1.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Lymphoproliferative Disease, X-Linked
Cell signaling
Programmed cell death
Apoptosis
EC 6.3.2 | XIAP | [
"Chemistry",
"Biology"
] | 1,276 | [
"Senescence",
"Programmed cell death",
"Apoptosis",
"Signal transduction"
] |
10,978,553 | https://en.wikipedia.org/wiki/Iodine-129 | Iodine-129 (129I) is a long-lived radioisotope of iodine that occurs naturally but is also of special interest in the monitoring and effects of man-made nuclear fission products, where it serves as both a tracer and a potential radiological contaminant.
Formation and decay
129I is one of seven long-lived fission products. It is primarily formed from the fission of uranium and plutonium in nuclear reactors. Significant amounts were released into the atmosphere by nuclear weapons testing in the 1950s and 1960s, by nuclear reactor accidents and by both military and civil reprocessing of spent nuclear fuel.
It is also naturally produced in small quantities, due to the spontaneous fission of natural uranium, by cosmic ray spallation of trace levels of xenon in the atmosphere, and by cosmic ray muons striking tellurium-130.
129I decays with a half-life of 16.14 million years, with low-energy beta and gamma emissions, to stable xenon-129 (129Xe).
Long-lived fission product
129I is one of the seven long-lived fission products that are produced in significant amounts. Its yield is 0.706% per fission of 235U. Larger proportions of other iodine isotopes such as 131I are produced, but because these all have short half-lives, iodine in cooled spent nuclear fuel consists of about 5/6 129I and 1/6 the only stable iodine isotope, 127I.
Because 129I is long-lived and relatively mobile in the environment, it is of particular importance in long-term management of spent nuclear fuel. In a deep geological repository for unreprocessed used fuel, 129I is likely to be the radionuclide of most potential impact at long times.
Since 129I has a modest neutron absorption cross-section of 30 barns, and is relatively undiluted by other isotopes of the same element, it is being studied for disposal by nuclear transmutation by re-irradiation with neutrons or gamma irradiation.
Release by nuclear fuel reprocessing
A large fraction of the 129I contained in spent fuel is released into the gas phase when spent fuel is first chopped and then dissolved in boiling nitric acid during reprocessing. At least for civil reprocessing plants, special scrubbers are supposed to retain 99.5% (or more) of the iodine by adsorption, before exhaust air is released into the environment. However, the Northeastern Radiological Health Laboratory (NERHL) found, during their measurements at the first US civil reprocessing plant, which was operated by Nuclear Fuel Services, Inc. (NFS) in Western New York, that "between 5 and 10% of the total 129I available from the dissolved fuel" was released into the exhaust stack. They further wrote that "these values are greater than predicted output (Table 1). This was expected since the iodine scrubbers were not operating during the dissolution cycles monitored."
The Northeastern Radiological Health Laboratory further states that, due to limitations of their measuring systems, the actual release of 129I may have even been higher, "since [129I] losses [by adsorption] probably occurred in the piping and ductwork between the stack and the sampler". Furthermore, the sample taking system used by the NERHL had a bubbler trap for measuring the tritium content of the gas samples before the iodine trap. The NERHL found out only after taking the samples that "the bubbler trap retained 60 to 90% of the 129I sampled". NERHL concluded: "The bubblers located upstream of the ion exchangers removed a major portion of the gaseous 129I before it reached the ion exchange sampler. The iodine removal ability of the bubbler was anticipated, but not in the magnitude that it occurred." The documented release of "between 5 and 10% of the total 129I available from the dissolved fuel" is not corrected for those two measurement deficiencies.
Military isolation of plutonium from spent fuel has also released 129I to the atmosphere: "More than 685,000 curies of iodine 131 spewed from the stacks of Hanford's separation plants in the first three years of operation." As 129I and 131I have very similar physical and chemical properties, and no isotope separation was performed at Hanford, 129I must have also been released there in large quantities during the Manhattan project. As Hanford reprocessed "hot" fuel, that had been irradiated in a reactor only a few months earlier, the activity of the released short-lived 131I, with a half-life time of just 8 days, was much higher than that of the long-lived 129I. However, while all of the 131I released during the times of the Manhattan project has decayed by now, over 99.999% of the 129I is still in the environment.
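As a rough check on that persistence claim, the surviving fraction of a radionuclide after time t is N/N0 = 2^(−t/T½). A minimal sketch in Python, where the roughly 80 years elapsed since the wartime releases is an illustrative assumption:

# Fraction of I-129 remaining after roughly 80 years, given its 16.14-million-year half-life
half_life_years = 16.14e6
elapsed_years = 80                               # illustrative: time since the 1940s releases
remaining = 0.5 ** (elapsed_years / half_life_years)
print(f"I-129 fraction remaining: {remaining:.7f}")       # ~0.9999966, i.e. more than 99.999%
# By contrast, essentially no I-131 (half-life 8 days) survives even a single year:
print(f"I-131 fraction after one year: {0.5 ** (365 / 8):.2e}")   # ~1.9e-14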
Ice borehole data obtained by the University of Bern at the Fiescherhorn glacier in the Alps, at a height of 3950 m, show a fairly steady increase in the 129I deposit rate with time. In particular, the highest values, obtained in 1983 and 1984, are about six times as high as the maximum that was measured during the period of the atmospheric bomb testing in 1961. This strong increase following the conclusion of the atmospheric bomb testing indicates that nuclear fuel reprocessing has been the primary source of atmospheric iodine-129 since then. These measurements lasted until 1986.
Applications
Groundwater age dating
129I is not deliberately produced for any practical purposes. However, its long half-life and its relative mobility in the environment have made it useful for a variety of dating applications. These include identifying older groundwaters based on the amount of natural 129I (or its 129Xe decay product) present, as well as identifying younger groundwaters by the increased anthropogenic 129I levels since the 1960s.
Meteorite age dating
In 1960, physicist John H. Reynolds discovered that certain meteorites contained an isotopic anomaly in the form of an overabundance of 129Xe. He inferred that this must be a decay product of long-decayed radioactive 129I. This isotope is produced in quantity in nature only in supernova explosions. As the half-life of 129I is comparatively short in astronomical terms, this demonstrated that only a short time had passed between the supernova and the time the meteorites had solidified and trapped the 129I. These two events (supernova and solidification of gas cloud) were inferred to have happened during the early history of the Solar System, as the 129I isotope was likely generated before the Solar System was formed, but not long before, and seeded the solar gas cloud isotopes with isotopes from a second source. This supernova source may also have caused collapse of the solar gas cloud.
See also
Isotopes of iodine
Iodine in biology
Xenon tetrachloride
References
Further reading
External links
ANL factsheet
Monitoring iodine-129 in air and milk samples collected near the Hanford Site: an investigation of historical iodine monitoring data
Studies with natural and anthropogenic iodine isotopes: iodine distribution and cycling in the global environment
Some Publications using 129I Data from IsoTrace, 1997-2002
Isotopes of iodine
Fission products
Radionuclides used in radiometric dating | Iodine-129 | [
"Chemistry"
] | 1,538 | [
"Nuclear fission",
"Isotopes of iodine",
"Radionuclides used in radiometric dating",
"Isotopes",
"Fission products",
"Nuclear fallout"
] |
10,978,612 | https://en.wikipedia.org/wiki/Effective%20dimension | In mathematics, effective dimension is a modification of Hausdorff dimension and other fractal dimensions that places it in a computability theory setting. There are several variations (various notions of effective dimension) of which the most common is effective Hausdorff dimension. Dimension, in mathematics, is a particular way of describing the size of an object (contrasting with measure and other, different, notions of size). Hausdorff dimension generalizes the well-known integer dimensions assigned to points, lines, planes, etc. by allowing one to distinguish between objects of intermediate size between these integer-dimensional objects. For example, fractal subsets of the plane may have intermediate dimension between 1 and 2, as they are "larger" than lines or curves, and yet "smaller" than filled circles or rectangles. Effective dimension modifies Hausdorff dimension by requiring that objects with small effective dimension be not only small but also locatable (or partially locatable) in a computable sense. As such, objects with large Hausdorff dimension also have large effective dimension, and objects with small effective dimension have small Hausdorff dimension, but an object can have small Hausdorff but large effective dimension. An example is an algorithmically random point on a line, which has Hausdorff dimension 0 (since it is a point) but effective dimension 1 (because, roughly speaking, it can't be effectively localized any better than a small interval, which has Hausdorff dimension 1).
Rigorous definitions
This article will define effective dimension for subsets of Cantor space 2ω; closely related definitions exist for subsets of Euclidean space Rn. We will move freely between considering a set X of natural numbers, the infinite sequence given by the characteristic function of X, and the real number with binary expansion 0.X.
Martingales and other gales
A martingale on Cantor space 2ω is a function d: 2<ω → R≥0 from finite binary strings to nonnegative reals which satisfies the fairness condition: d(σ) = (d(σ0) + d(σ1))/2 for every finite string σ.
A martingale is thought of as a betting strategy, and the value d(σ) gives the capital of the bettor after seeing a sequence σ of 0s and 1s. The fairness condition then says that the capital after a sequence σ is the average of the capital after seeing σ0 and σ1; in other words the martingale gives a betting scheme for a bookie with 2:1 odds offered on either of two "equally likely" options, hence the name fair.
(Note that this is subtly different from the probability theory notion of martingale. That definition of martingale has a similar fairness condition, which also states that the expected value after some observation is the same as the value before the observation, given the prior history of observations. The difference is that in probability theory, the prior history of observations just refers to the capital history, whereas here the history refers to the exact sequence of 0s and 1s in the string.)
A supermartingale on Cantor space is a function d as above which satisfies a modified fairness condition: d(σ) ≥ (d(σ0) + d(σ1))/2 for every finite string σ.
A supermartingale is a betting strategy where the expected capital after a bet is no more than the capital before a bet, in contrast to a martingale where the two are always equal. This allows more flexibility, and is very similar in the non-effective case, since whenever a supermartingale d is given, there is a modified function d′ which wins at least as much money as d and which is actually a martingale. However it is useful to allow the additional flexibility once one starts talking about actually giving algorithms to determine the betting strategy, as some algorithms lend themselves more naturally to producing supermartingales than martingales.
An s-gale is a function d as above of the form d(σ) = 2^((s−1)|σ|) e(σ) for e some martingale, where |σ| denotes the length of σ.
An s-supergale is a function d as above of the form d(σ) = 2^((s−1)|σ|) e(σ) for e some supermartingale.
An s-(super)gale is a betting strategy where some amount of capital is lost to inflation at each step. Note that s-gales and s-supergales are examples of supermartingales, and the 1-gales and 1-supergales are precisely the martingales and supermartingales.
Collectively, these objects are known as "gales".
A gale d succeeds on a subset X of the natural numbers if lim sup_{n→∞} d(X ↾ n) = ∞, where X ↾ n denotes the n-digit string consisting of the first n digits of X.
A gale d succeeds strongly on X if lim inf_{n→∞} d(X ↾ n) = ∞.
All of these notions of various gales have no effective content, but one must necessarily restrict oneself to a small class of gales, since some gale can be found which succeeds on any given set. After all, if one knows a sequence of coin flips in advance, it is easy to make money by simply betting on the known outcomes of each flip. A standard way of doing this is to require the gales to be either computable or close to computable:
A gale d is called constructive, c.e., or lower semi-computable if the numbers d(σ) are uniformly left-c.e. reals (i.e. can uniformly be written as the limit of an increasing computable sequence of rationals).
The effective Hausdorff dimension of a set of natural numbers X is inf {s ≥ 0 : there is a constructive s-gale which succeeds on X}.
The effective packing dimension of X is inf {s ≥ 0 : there is a constructive s-gale which succeeds strongly on X}.
Kolmogorov complexity definition
Kolmogorov complexity can be thought of as a lower bound on the algorithmic compressibility of a finite sequence (of characters or binary digits). It assigns to each such sequence w a natural number K(w) that, intuitively, measures the minimum length of a computer program (written in some fixed programming language) that takes no input and will output w when run.
The effective Hausdorff dimension of a set of natural numbers X is lim inf_{n→∞} K(X ↾ n)/n.
The effective packing dimension of a set X is lim sup_{n→∞} K(X ↾ n)/n.
From this one can see that both the effective Hausdorff dimension and the effective packing dimension of a set are between 0 and 1, with the effective packing dimension always at least as large as the effective Hausdorff dimension. Every random sequence will have effective Hausdorff and packing dimensions equal to 1, although there are also nonrandom sequences with effective Hausdorff and packing dimensions of 1.
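As a worked illustration of the complexity characterization (a standard example added here, not taken from the original article), interleave a Martin-Löf random sequence R = r1 r2 r3 … with zeros to form X = r1 0 r2 0 r3 0 …. Then K(X ↾ 2n) = n + O(log n), so

lim inf_{n→∞} K(X ↾ n)/n = lim sup_{n→∞} K(X ↾ n)/n = 1/2,

and X has effective Hausdorff dimension and effective packing dimension both equal to 1/2, even though X is clearly not random.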
Comparison to classical dimension
If Z is a subset of 2ω, its Hausdorff dimension is inf {s ≥ 0 : there is an s-gale which succeeds on every element of Z}.
The packing dimension of Z is inf {s ≥ 0 : there is an s-gale which succeeds strongly on every element of Z}.
Thus the effective Hausdorff and packing dimensions of a set X are simply the classical Hausdorff and packing dimensions of {X} (respectively) when we restrict our attention to c.e. gales.
Define the following:
A consequence of the above is that these all have Hausdorff dimension .
and all have packing dimension 1.
and all have packing dimension .
References
Fractals
Measure theory
Metric geometry
Dimension theory
Computable analysis | Effective dimension | [
"Mathematics"
] | 1,406 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
10,979,583 | https://en.wikipedia.org/wiki/Line%20Impedance%20Stabilization%20Network | A line impedance stabilization network (LISN) is a device used in conducted and radiated radio-frequency emission and susceptibility tests, as specified in various electromagnetic compatibility (EMC)/EMI test standards (e.g., by CISPR, International Electrotechnical Commission, CENELEC, U.S. Federal Communications Commission, MIL-STD, DO-160 Sections 20-21-22).
A LISN is a low-pass filter typically placed between an AC or DC power source and the EUT (equipment under test) to create a known impedance and to provide a radio frequency (RF) noise measurement port. It also isolates the unwanted RF signals from the power source. In addition, LISNs can be used to predict conducted emission for diagnostic and pre-compliance testing.
Functions of a LISN
Stable line impedance
The main function of a LISN is to provide a precise impedance to the power input of the EUT, in order to get repeatable measurements of the EUT noise present at the LISN measurement port. This is important because the impedance of the power source and the impedance of the EUT effectively operate as a voltage divider. The impedance of the power source varies, depending on the geometry of the supply wiring behind it.
The anticipated inductance of the power line for the intended installation of the EUT also plays a role in identifying the correct type of LISN needed for testing. For example, a connection in a building will often use 50 μH inductor, whereas in automobile measurement standards a 5 μH inductor is used to emulate a shorter typical wire length.
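As a rough numerical sketch of why the inductor value matters, the EUT-side impedance of an idealized LISN is often approximated as the line inductor in parallel with the 50 Ω measurement path. The simplified model and spot frequencies below are assumptions for illustration only and are not a substitute for the impedance curve specified in CISPR 16-1-2:

import math

def lisn_impedance(freq_hz, inductance_h, r_meas=50.0):
    """Magnitude of the idealized EUT-port impedance: jwL in parallel with 50 ohms."""
    xl = 2 * math.pi * freq_hz * inductance_h      # inductor reactance
    return (xl * r_meas) / math.hypot(xl, r_meas)  # |jXL * R / (jXL + R)|

for f in (10e3, 150e3, 1e6, 30e6):                 # spot frequencies in Hz
    z50u = lisn_impedance(f, 50e-6)                # mains-type 50 uH LISN
    z5u = lisn_impedance(f, 5e-6)                  # automotive-type 5 uH LISN
    print(f"{f/1e3:8.0f} kHz: 50 uH -> {z50u:5.1f} ohm, 5 uH -> {z5u:5.1f} ohm")

In this idealization both networks converge toward 50 Ω at high frequency, while the smaller automotive inductor keeps the impedance low well into the hundreds of kilohertz, which is one reason the two LISN types are not interchangeable.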
Isolation of the power source noise
Another important function of a LISN is to prevent the high-frequency noise of the power source from coupling in the system. A LISN functions as a low-pass filter, which provides high impedance to the outside RF noise while allowing the low-frequency power to flow through to the EUT.
Safe connection of the measuring equipment
Typically, a spectrum analyzer or an EMI receiver is used to take the measurements during an EMC test. The input port of such an equipment is very sensitive and prone to damage if overloaded. A LISN provides a measurement port with, usually, 50 Ω output impedance. The stabilized impedance, the built-in low-pass filter function, and the DC rejection properties of the LISN measurement port makes it easy to couple the high frequency noise signal to the input of the measuring equipment.
LISN types
Under a particular EMC test standard, a specific LISN type is required for evaluating and characterizing the operation of the EUT.
Different types of LISNs are available for analyzing DC, single-phase or 3-phase AC power connections. The main parameters for selecting the proper type of LISN are impedance, insertion loss, voltage rating, current rating, number of power conductors and connector types. The upper frequency limit of the LISN also plays an important role when conducted emissions measurements are used for predicting radiated emissions problems. A 100 MHz LISN is used in those cases.
Notes
References
Sources
MIL-STD 461E, REQUIREMENTS FOR THE CONTROL OF ELECTROMAGNETIC INTERFERENCE CHARACTERISTICS OF SUBSYSTEMS AND EQUIPMENT, 1999
Laboratory equipment
Electronic test equipment | Line Impedance Stabilization Network | [
"Technology",
"Engineering"
] | 683 | [
"Electronic test equipment",
"Measuring instruments"
] |
10,979,820 | https://en.wikipedia.org/wiki/Armillaria%20luteobubalina | Armillaria luteobubalina, commonly known as the Australian honey fungus, is a species of mushroom in the family Physalacriaceae. Widely distributed in southern Australia, the fungus is responsible for a disease known as Armillaria root rot, a primary cause of Eucalyptus tree death and forest dieback. It is the most pathogenic and widespread of the six Armillaria species found in Australia. The fungus has also been collected in Argentina and Chile. Fruit bodies have cream- to tan-coloured caps that grow up to in diameter and stems that measure up to long by thick. The fruit bodies, which appear at the base of infected trees and other woody plants in autumn (March–April), are edible, but require cooking to remove the bitter taste. The fungus is dispersed through spores produced on gills on the underside of the caps, and also by growing vegetatively through the root systems of host trees. The ability of the fungus to spread vegetatively is facilitated by an aerating system that allows it to efficiently diffuse oxygen through rhizomorphs—rootlike structures made of dense masses of hyphae.
Armillaria luteobubalina was first described in 1978, after having been discovered several years earlier growing in a Eucalyptus plantation in southeastern Australia. It distinguished itself from other known Australian Armillaria species by its aggressive pathogenicity. It may take years for infected trees to show signs of disease, leading to an underestimation of disease prevalence. Studies show that the spread of disease in eucalypt forests is associated with infected stumps left following logging operations. Although several methods have been suggested to control the spread of disease, they are largely economically or environmentally unfeasible. Phylogenetic analyses have determined that A. luteobubalina is closely related to A. montagnei and that both of these species are in turn closely related to the Brazilian species A. paulensis. The distribution of A. luteobubalina suggests that it is an ancient species that originated before the separation of the precursor supercontinent Gondwana.
History and phylogeny
Armillaria luteobubalina was first described in 1978 by mycologists Roy Watling and Glen Kile, who studied its effects on a fast-growing plantation of Eucalyptus regnans near Traralgon, Victoria. The plantation, established in 1963, consisted largely of trees with a mean height of about . A cluster of dead and dying trees discovered in 1973 suggested attack by a virulent primary pathogen, that is, one capable of infecting a host before invasion by other, secondary pathogens. This finding was inconsistent with the pathogenic behaviour of the known Armillaria species in Australia at the time, A. mellea and A. elegans. Further study over the next few years showed that the fungus spread by the growth of underground mycelia in root systems, expanding outward from the initial infected stump at an average of per year. Most Australian records of Armillaria infections referred to A. mellea, based on the presence of black rhizomorphs. For over one hundred years, A. mellea was thought to be a pleiomorphic (occurring in various distinct forms) species with a widespread distribution and host range, and variable pathogenicity, which led to great confusion among taxonomists and plant pathologists alike. In 1973, Veikko Hintikka reported a technique to distinguish between Armillaria species by growing them together as single spore isolates on petri dishes and observing changes in the morphology of the cultures. Using similar techniques, mycologists eventually determined that the Armillaria mellea species complex in Europe and North America in fact consisted of five and ten distinct "biological species", respectively.
Watling and Kile compared the macroscopic and microscopic characters of the pathogenic Armillaria with A. polymyces (now known as A. obscura), A. mellea, A. limonea and A. novae-zelandiae and found sufficient differences between them to warrant designating the species as new. Its specific epithet is derived from the Latin lutea "yellow", and was chosen to highlight an important distinguishing characteristic: the strong yellow colour of the cap and lack of reddish or brown tones in the stem typical of other resident Armillaria.
A phylogenetic study of South American Armillaria species concluded that A. luteobubalina is in a lineage that includes A. montagnei, and these are sister to a lineage containing A. paulensis, a species known from a single specimen collected in São Paulo, Brazil. Although they are very similar, specimens of A. luteobubalina have smaller spores than Argentinian specimens of A. montagnei, and their distinctness is well-supported with phylogenetic analysis. Based on analysis of pectic enzymes, A. luteobubalina is closely related to A. limonea, a species found in New Zealand; this result corroborates phylogenetic analyses reported in 2003 and 2006. Molecular analysis of 27 collections of A. luteobubalina from southwest Western Australia and one from Traralgon revealed four distinct polymorphic groups. The genetic variety suggests it is native to Australia.
Description
Up to in diameter, the cap is convex to flattened in shape with a central umbo (a rounded elevation) and is various shades of cream, yellow and tan. The cap surface is covered with darker scales and feels rough to the touch. The cap edge, or margin, is rolled inward in young specimens. The crowded gills are sinuate and white to cream in colour initially, brownish-cream or pinkish brown in maturity, and sometimes with yellow or rust-coloured marks close to the margins. The stem is central (that is, it joins the cap in the centre) and is up to long by thick. It is slightly thicker at its base than its apex, sometimes almost bulb-like. The stem surface is streaked with fibrils that run up and down its length. It has a floppy yellow wool-like ring which may develop irregular, jagged edges with time. The flesh is white, and in the stem has a woolly or stringy consistency. Although it has a hot-bitter taste, Armillaria luteobubalina is edible, and cooking removes the bitterness.
Microscopic characteristics
The spore print is white when fresh, but becomes more cream-coloured when dry. The smooth spores are oval to ellipsoid, hyaline (translucent), non-amyloid (meaning they do not absorb iodine from Melzer's reagent), and typically measure 6.5–7.5 by 4.5–5.5 μm. The basidia (spore-bearing cells) are thin-walled, hyaline, and lack clamp connections at their bases. They are usually four-spored but occasionally two-spored, with sterigmata (projections that attach to the spores) up to 4 μm long. The cheilocystidia (cystidia that occur on the edge of a gill) are mostly club-shaped, thin-walled, hyaline, and measure 15–30 by 6–10 μm.
Similar species
Five other Armillaria species are found in Australia. Within the range of A. luteobubalina, A. hinnulea is restricted to gully habitats. A. fumosa is a rarer species found only in poorly drained or seasonally wet locations. A. luteobubalina and A. montagnei share cap features and a similar unpleasant flavour, but the latter species has an olive-tinged cap, larger spores (9.5–11 by 5.5–7 μm compared to 6.5–7.5 by 4.5–5.5 μm) and a more conspicuous annulus than those found in A. luteobubalina. The morphology of the vegetative structures of A. limonea is distinctly different than A. luteobubalina, and can be used to distinguish the two species. A. novae-zelandiae has a sticky more flattened cap and stem below the ring and is found in wet forests, and A. pallidula is a species with cream gills maturing to pale pink found in tropical Australia arising from dead tree stumps or the roots of dead or living trees. A. luteobubalina is the only Armillaria species which occurs in Western Australia. Distinguishing Australian species is economically important, because A. luteobubalina is more pathogenic than the other members of the genus. A molecular diagnostic test, developed in 2002, can accurately identify each species using DNA extracted from its mycelia. Before this, species identification was limited to times when fruit bodies were in season. This technology also revealed a variation in the molecular material of A. luteobubalina that suggested sexual reproduction.
Habitat and distribution
Armillaria luteobubalina has been recorded in southeastern Australia, from the southeastern corner of Queensland through eastern New South Wales and across Victoria into southeastern South Australia. It also occurs in Tasmania and southwestern Western Australia. Those of the karri forests (consisting largely of the species E. diversicolor) of the southwest have paler and yellower caps than those in the jarrah forests (which contain predominantly Eucalyptus marginata) further north. The fruit bodies arise on wood, especially on stumps or around the base of trees, and often in huge numbers. They usually appear between April and July, although most production occurs in the second half of May. Abundant in woodlands, it can invade gardens and orchards, where it can attack many woody plants. The honey fungus infected and killed many plants near tuart trees (Eucalyptus gomphocephala) which had been cut down near Kings Park in suburban Perth. Armillaria luteobubalina is commonly found in eucalyptus forests in Australia, and is thought to be the most pathogenic and most widespread Armillaria species in the major western Australian forest types. The mushroom has also been reported from southern South America, in Argentina and Chile. A 2003 study of the molecular phylogenetics and pattern of its distribution in South America and Australia indicate that A. luteobubalina is an ancient species, originating before the separation of the precursor supercontinent Gondwana. Genetic differences between isolates in the South American and Australian populations indicate a long period of geographical separation, and the authors suggest that they "later might be regarded as independent taxa".
Root rot
Appearance of infected trees
Trees that are infected by A. luteobubalina show characteristic symptoms both above and below ground. Above the ground, the base of the tree develops inverted V-shaped lesions, and the infected wood undergoes white rot, a fungal wood decay process where the cellulose and lignin of the sapwood are both broken down, leaving the wood stringy. The bark of the stem dies and becomes discoloured up to above the ground. Clusters of fruit bodies appear at the base of the tree in autumn. Crowns may show gradual deterioration, or tree death may occur suddenly. Below the ground, characteristic symptoms of infections include rotting the ends of tree roots, white-rotted sapwood, and the presence of fan-shaped areas of white mycelium below dead or infected bark.
Occurrence
In selectively logged eucalypt forests in the central highlands of Victoria, it has been estimated that about 3–5% of the forest area is "moderately to severely affected" by Armillaria root rot caused by A. luteobubalina. A review of eucalypt plantations planted in New South Wales from 1994 to 2005 found that infection by A. luteobubalina was rare, and only accounted for 1% of mortality in total. In this instance, the cases had been restricted to Eucalyptus nitens on the Dorrigo Plateau. Unlike other Armillaria species found in Australia's native forests, which require a host tree to become weakened by prior infection by a different species, A. luteobubalina is a primary pathogen, and can infect healthy trees. Tree roots may be infected for years before showing above-ground symptoms, making it difficult to accurately assess the true extent of disease in a forest stand. Surveys are usually conducted in autumn, to coincide with the appearance of fruit bodies; infection is assessed by the presence of basal scars on the trees, and the appearance of fruit bodies. Several factors, however—such as cost, variable on-site conditions, and non-symptomatic diseased trees—make it difficult for such surveys to reliably detect all infections. One study showed that above-ground examinations detected only 50% of the trees actually infected, leading to underestimation of the incidence of true infection by 20–40%. The study used more intensive surveying methods to determine that 25- to 30-year-old karri regrowth forests in western Australia showed an average of 40–45% incidence of infection.
Disease spread
Several studies have shown that the spread of Armillaria root rot in eucalypt forests is associated with infected stumps that remain after an area has been logged. Armillaria luteobubalina can persist on these stumps, using them as a source of food for up to 25 or more years. In one case reported in Ovens, Victoria, the disease was spread to blueberry plants (Vaccinium species) via buried fragments of infected Eucalyptus that remained following preparation of the previously forested site for planting. In individual forest stands, fungal infection is usually found in discrete disease patches separated by stands of healthy trees—a discontinuous distribution. Large-scale aerial photography can be used to identify regions of forest infected by the species. The species also causes damage to trees and bushes in coastal dune woodlands, shrubland, and heath communities. It can be found on a wide range of hosts, but is most commonly associated with (in order of decreasing frequency) jarrah (Eucalyptus marginata), bull banksia (Banksia grandis), marri (E. calophylla), Lasiopetalum floribundum, and Acacia saligna. It has also infected scattered populations of wandoo (E. wandoo). The fungus has also been reported to infect Nothofagus species in Argentina, and Pinus radiata in Chile.
Armillaria luteobubalina uses "an elaborate, sophisticated aeration system" that enables it to efficiently deliver oxygen into the rhizomorphs, helping it thrive in low-oxygen environments. When grown in culture, the mycelium develops into a continuous region of tissue with a perforated crust. This tissue is hydrophobic and resistant to becoming waterlogged. Rhizomorphs develop beneath clusters of so-called "air-pores" near the perforations. These gas spaces connect the atmosphere with the central canal of the rhizomorph, to facilitate diffusion of oxygen and satiate the organism's high oxygen requirement during growth. This aeration system is thought to be an important factor in the organism's pathogenicity, allowing it to grow on wet or waterlogged root surfaces and send hyphae or rhizomorphs into live roots or cut stumps, where conditions may be hypoxic. The rhizomorphs have a dichotomous branching pattern, so that they split or bifurcate at various intervals. Experiments and field observations have shown that this allows the fungus to be a more aggressive and virulent pathogen than Armillaria species whose rhizomorphs branch monopodially (where lateral branches grow from a main stem). Although the structure of A. luteobubalina rhizomorphs is specialised for spread in potentially anaerobic conditions, the soil mycelium is adaptive and can amplify the absorptive surface of peripheral hyphae in response to the presence of nutrient-rich soil.
Control
Methods for controlling the spread of Armillaria root rot include physical removal of infected trees, stumps and large roots; fumigation of soil around infected hosts; and injection of fumigants directly into infected hosts. These methods are often not practical due to high cost, introduction of toxic chemicals that affect other organisms, or health and safety issues for the operator. Biological control is another method that has been investigated to control root rot caused by A. luteobubalina. In one study, thinning stumps of Eucalyptus diversicolor were simultaneously inoculated with A. luteobubalina and one of the saprobic wood decay fungi Coriolus versicolor, Stereum hirsutum and Xylaria hypoxylon; all three fungi significantly reduced infection by A. luteobubalinea. These results were echoed in another study of stumps in karri regrowth forests, where it was shown that the presence of other wood decay fungi suppressed the growth of A. luteobubalina on the stump base.
See also
List of Armillaria species
References
luteobubalina
Edible fungi
Fungal tree pathogens and diseases
Fungi described in 1978
Fungi of Australia
Fungi of South America
Taxa named by Roy Watling
Fungus species | Armillaria luteobubalina | [
"Biology"
] | 3,571 | [
"Fungi",
"Fungus species"
] |
10,980,227 | https://en.wikipedia.org/wiki/Snubbing | Snubbing is a type of heavy well intervention performed on oil and gas wells. It involves running the BHA on a pipe string using a hydraulic workover rig. Unlike wireline or coiled tubing, the pipe is not spooled off a drum but made up and broken up while running in and pulling out, much like conventional drill pipe. Due to the large rigup, it is only used for the most demanding of operations when lighter intervention techniques do not offer the strength and durability. The first snubbing unit was primarily designed to work in well control situations to "snub" drill pipe and or casing into, or out of, a well bore when conventional well killing methods could not be used. Unlike conventional drilling and completions operations, snubbing can be performed with the well still under pressure (not killed). When done so, it is called hydraulic workover. It can also be performed without having to remove the Christmas tree from the wellhead.
Rigup
A snubbing rigup is a very tall structure. It consists of a hydraulically powered snubbing unit, which provides the force on a pipe, above a string of multi-layered pressure control components.
At the top of the snubbing unit is the basket, which serves as the control post for the rigup. Below the basket are the hydraulic jacks, which power the pipe into and out of the hole. It consists of two mechanisms for applying force to the pipe in either direction. Each mechanism consists of travelling and stationary slips. The travelling slips are used to move the pipe, while the stationary slips are used to hold the pipe while the travelling slips are repositioned between strokes.
Stripping the pipe
Unlike coiled tubing or wireline, where the wire or tubing is always the same diameter allowing for a single unmoving primary barrier (stuffing box or stripper), snubbing uses a pipe, which will have an enlarged collar at the connection between the joints. Therefore, the pressure control system must be able to accommodate this variable diameter. The stripping rams accomplish this. The first stage of lowering a collar through the stripping system is to close the lower rams so as to seal off the mechanism above from wellbore pressure. The space between the rams can then be bled off allowing the upper rams to be opened. The collar can then pass through the opened upper rams. Once the collar is in between the rams, the upper rams are closed and pressure is equalised either side of the lower rams. The lower rams are then safely opened and the collar is lowered through the rams.
This process is repeated as successive collars are lowered into the well. When pulling out of hole, this procedure is reversed.
Another popular method of stripping tubulars in/out of a wellbore is with the use of an "Annular" Blow Out Preventer (BOP). An Annular BOP consists of a natural or synthetic rubber element with encased metal reinforcement sections. A hydraulic piston pushes the annular element up into a concaved cap which forces the element diameter to decrease in size. When the element diameter is closed sufficiently it forms a seal around the body of the pipe. The upset or larger diameter section of a pipe connection can be pulled or pushed through a closed annular element without damage and while still maintaining a gas tight seal.
Annular BOPs come in various sizes and pressure ratings and are ideal for lower pressure gas wells. Generally, the maximum pressure for stripping pipe through an annular is equal to 40% of the maximum static pressure rating dry or 60% if the pipe is lubricated as it is being stripped through the annular.
Heavy-pipe and light-pipe
Because snubbing is normally done under pressure, initially, the weight of pipe in wellbore is less than the force due to the wellbore pressure. This is described as light-pipe: downward force is required on the pipe to force it in against resistance. Once a sufficient amount of pipe has been run into the hole, the weight becomes sufficient to overpower the wellbore pressure and the pipe naturally wants to fall in the hole; this is heavy-pipe. At this point, the snubbing mechanism is changed over to the one which provides upward force to hold the pipe and lower it controllably into the well.
When pulling out of hole, upward force is initially used to lift the pipe until the equilibrium point, henceforth downward force is used to prevent wellbore pressure from blowing the light-pipe out of hole.
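To make the light-pipe/heavy-pipe balance concrete, the upward force from wellbore pressure is roughly the pressure acting on the pipe's cross-sectional area at the sealing element, and the balance point is reached when the weight of pipe in the hole equals that force. The pipe size, pressure and weight below are illustrative assumptions, not figures from any particular job:

import math

wellhead_pressure_psi = 3000.0
pipe_od_in = 2.375                       # 2 3/8" tubing
pipe_weight_lb_per_ft = 4.7              # nominal weight of 2 3/8" tubing

area_sq_in = math.pi / 4 * pipe_od_in ** 2
upward_force_lbf = wellhead_pressure_psi * area_sq_in
print(f"pressure force pushing the pipe out: {upward_force_lbf:,.0f} lbf")    # ~13,300 lbf

# Depth at which string weight balances the pressure force (buoyancy and friction ignored)
balance_depth_ft = upward_force_lbf / pipe_weight_lb_per_ft
print(f"approximate balance point: {balance_depth_ft:,.0f} ft of pipe in the hole")   # ~2,800 ft

Until that much pipe is in the hole the string is light-pipe and must be pushed in; beyond it the string is heavy-pipe and must be held back.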
Risks
The more complex method of pressure control, as compared to coiled tubing and wireline, naturally invites more opportunity for things to go wrong. One such peril was seen in June 2007 on the Shearwater platform. Snubbing was being used to clean out large pebbles, which had entered the well through a collapsed liner. While pulling out of hole, one stripping ram was not opened sufficiently and a collar on the pipe string caught on the ram. The excessive force applied to the pipe caused it to break apart, dropping the string below the failure into the well. In the time it took to prepare to fish out the pipe, the pebbles that were in the process of being circulated out settled on the pipe, preventing successful fishing.
Although problems such as the one described above can happen, they are extremely rare and always avoidable. In the case above, adequate supervision could have prevented this dramatic consequence of operator error by limiting the allowable hydraulic force to a level below what was required to part the pipe string.
Note
Not all snubbing units are large and time-consuming to rig up. In the Canadian oilfield many companies use small "stand alone" snubbing units which can be broken down and rigged up in less than 3 hours. These units consist of 4 segments which can be placed onto 4 separate trucks. These 4 segments consist of the following:
PUMP TRUCK – The Engine of the truck is used to power the pump (which is a series of valves mounted behind the cab).
-----> The Pump truck has a trailer which is the MUD TANK
SNUBBING BASKET – Once again the trucks engine is used to power the unit's hydraulics. The Basket lies on the bed of the truck behind the cab.
ACCUMULATOR TRUCK – The Accumulator Unit (or Coomie) is run off a PTO that is connected to the trucks engine. The Coomie Unit also pulls a trailer.
------> The trailer off the Coomie Unit becomes the CATWALK and PIPE RACKS.
------> Mounted on the trailer is the TOOL SHED (or JUNK SKID – a small Shipping container full of tools), and also the LMS
(Load Management System), which is used to support the weight of the basket while operational.
PICKER – The Picker, is a truck with a small crane (or Picker) on its back. This Picker is used to Rig up the basket. It also tows a trailer.
------> The trailer for the Picker is the DOGHOUSE. The Doghouse is then split into the TOILET BLOCK and OFFICE, and the GENERATORS
(GEN-SETS) which provide electrical power to the rig.
These units are set up in such a fashion so as to be able to cope with the harsh roads and remote locations required in the Canadian winter.
Units structures and capacities
Units vary in strength; common ratings are 95K, 120K, 150K, 170K, 225K, 340K, 460K and 600K. The number indicates the unit's working strength in pulling force: 150K means the unit is capable of pulling a maximum of 150,000 pounds. This is based on the hydraulic force acting on the unit's piston area. More complex, special-built units also exist, such as the CSU 160, a special-built rig-assist unit, as well as various stand-alone units.
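The rated pulling capacity follows directly from cylinder hydraulics (force = pressure × piston area). The pressure and area below are illustrative assumptions chosen to reproduce a 150K rating, not the specifications of any actual unit:

hydraulic_pressure_psi = 3000.0      # assumed system pressure
total_piston_area_sq_in = 50.0       # assumed combined area of the jack cylinders
rated_force_lbf = hydraulic_pressure_psi * total_piston_area_sq_in
print(f"theoretical pull: {rated_force_lbf:,.0f} lbf")    # 150,000 lbf -> a "150K" unit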
See also
Oil well
Drilling rig
Well intervention
Blowout preventer
Completion
References
Oil wells
Petroleum production | Snubbing | [
"Chemistry"
] | 1,637 | [
"Petroleum technology",
"Oil wells"
] |
10,980,467 | https://en.wikipedia.org/wiki/Unate%20function | A unate function is a type of boolean function which has monotonic properties.
They have been studied extensively in switching theory.
A function f(x1, x2, ..., xn) is said to be positive unate in xi
if, for all possible values of xj, j ≠ i,
f(x1, ..., xi−1, 1, xi+1, ..., xn) ≥ f(x1, ..., xi−1, 0, xi+1, ..., xn).
Likewise, it is negative unate in xi if
f(x1, ..., xi−1, 0, xi+1, ..., xn) ≥ f(x1, ..., xi−1, 1, xi+1, ..., xn).
If, for every xi, f is either positive or negative unate in that variable, then f is said to be unate (note that f may be positive unate in some variables and negative unate in others and still satisfy the definition of a unate function). A function is binate if it is not unate (i.e., it is neither positive unate nor negative unate in at least one of its variables).
For example, the logical disjunction function OR, with boolean values 1 for true and 0 for false, is positive unate in both of its inputs. Conversely, exclusive or (XOR) is non-unate, because the effect of a 0-to-1 transition on input x0 depends on the value of x1: the output rises when x1 = 0 and falls when x1 = 1, so the function is neither positive nor negative unate in x0 (and likewise in x1).
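The definition can be checked mechanically by enumerating all values of the remaining inputs. The sketch below (the helper name and structure are mine, not standard library code) classifies each input of a small boolean function and confirms the OR/XOR claims above:

from itertools import product

def unateness(f, n):
    """For an n-input boolean function f, report each input as
    'positive', 'negative', 'both' (f ignores it) or 'binate'."""
    report = []
    for i in range(n):
        pos = neg = True
        for rest in product((0, 1), repeat=n - 1):
            lo = rest[:i] + (0,) + rest[i:]   # x_i = 0, other inputs fixed
            hi = rest[:i] + (1,) + rest[i:]   # x_i = 1, other inputs fixed
            if f(*lo) > f(*hi):
                pos = False                   # raising x_i lowered the output
            if f(*lo) < f(*hi):
                neg = False                   # raising x_i raised the output
        report.append("both" if pos and neg else
                      "positive" if pos else
                      "negative" if neg else "binate")
    return report

print(unateness(lambda a, b: a | b, 2))   # ['positive', 'positive'] -> unate
print(unateness(lambda a, b: a ^ b, 2))   # ['binate', 'binate']     -> binate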
Positive unateness can also be thought of as the output following the same slope as a changing input (a rising input can only leave the output unchanged or make it rise), while negative unateness corresponds to the opposite slope. A binate output, by contrast, depends on more than one input in such a way that the slope it follows may be the same or the opposite, depending on the values of the other inputs.
Syntax (logic) | Unate function | [
"Mathematics"
] | 260 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic"
] |
10,980,696 | https://en.wikipedia.org/wiki/Separator%20%28oil%20production%29 | The term separator in oilfield terminology designates a pressure vessel used for separating well fluids produced from oil and gas wells into gaseous and liquid components. A separator for petroleum production is a large vessel designed to separate production fluids into their constituent components of oil, gas and water. A separating vessel may be referred to in the following ways: Oil and gas separator, Separator, Stage separator, Trap, Knockout vessel (Knockout drum, knockout trap, water knockout, or liquid knockout), Flash chamber (flash vessel or flash trap), Expansion separator or expansion vessel, Scrubber (gas scrubber), Filter (gas filter). These separating vessels are normally used on a producing lease or platform near the wellhead, manifold, or tank battery to separate fluids produced from oil and gas wells into oil and gas or liquid and gas. An oil and gas separator generally includes the following essential components and features:
A vessel that includes (a) primary separation device and/or section, (b) secondary "gravity" settling (separating) section, (c) mist extractor to remove small liquid particles from the gas, (d) gas outlet, (e) liquid settling (separating) section to remove gas or vapor from oil (on a three-phase unit, this section also separates water from oil), (f) oil outlet, and (g) water outlet (three-phase unit).
Adequate volumetric liquid capacity to handle liquid surges (slugs) from the wells and/or flowlines.
Adequate vessel diameter and height or length to allow most of the liquid to separate from the gas so that the mist extractor will not be flooded.
A means of controlling an oil level in the separator, which usually includes a liquid-level controller and a diaphragm motor valve on the oil outlet.
A back pressure valve on the gas outlet to maintain a steady pressure in the vessel.
Pressure relief devices.
Separators work on the principle that the three components have different densities, which allows them to stratify when moving slowly with gas on top, water on the bottom and oil in the middle. Any solids such as sand will also settle in the bottom of the separator. The functions of oil and gas separators can be divided into the primary and secondary functions which will be discussed later on.
Classification of oil and gas separators
By operating configuration
Oil and gas separators can have three general configurations: vertical, horizontal, and spherical.
Vertical separators can vary in size from 10 or 12 inches in diameter and 4 to 5 feet seam to seam (S to S) up to 10 or 12 feet in diameter and 15 to 25 feet S to S. Horizontal separators may vary in size from 10 or 12 inches in diameter and 4 to 5 feet S to S up to 15 to 16 feet in diameter and 60 to 70 feet S to S. Spherical separators are usually available in 24 or 30 inch up to 66 to 72 inch in diameter.
Horizontal oil and gas separators are manufactured with monotube and dual-tube shells. Monotube units have one cylindrical shell, and dual-tube units have two cylindrical parallel shells with one above the other. Both types of units can be used for two-phase and three-phase service. A monotube horizontal oil and gas separator is usually preferred over a dual-tube unit. The monotube unit has greater area for gas flow as well as a greater oil/gas interface area than is usually available in a dual-tube separator of comparable price. The monotube separator will usually afford a longer retention time because the larger single-tube vessel retains a larger volume of oil than the dual-tube separator. It is also easier to clean than the dual-tube unit. In cold climates, freezing will likely cause less trouble in the monotube unit because the liquid is usually in close contact with the warm stream of gas flowing through the separator. The monotube design normally has a lower silhouette than the dual-tube unit, and it is easier to stack them for multiple-stage separation on offshore platforms where space is limited. Powers et al. (1990) illustrated that vertical separators should be constructed such that the flow stream enters near the top and passes through a gas/liquid separating chamber, even though vertical and horizontal separators are not directly competing alternatives.
By function
The three configurations of separators are available for two-phase operation and three-phase operation. In the two-phase units, gas is separated from the liquid with the gas and liquid being discharged separately. Oil and gas separators are mechanically designed such that the liquid and gas components are separated from the hydrocarbon stream at specific temperature and pressure according to Arnold et al. (2008). In three-phase separators, well fluid is separated into gas, oil, and water with the three fluids being discharged separately. The gas–liquid separation section of the separator is sized based on the maximum droplet size to be removed, using the Souders–Brown equation with an appropriate K factor. The oil-water separation section is sized for a retention time that is provided by laboratory test data, pilot plant operating experience, or field operating experience. In the case where the retention time is not available, the recommended retention time for three-phase separators in API 12J is used. The sizing methods by K factor and retention time give proper separator sizes. According to Song et al. (2010), engineers sometimes need further information for the design conditions of downstream equipment, i.e., liquid loading for the mist extractor, water content for the crude dehydrator/desalter or oil content for the water treatment.
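As a sketch of how K-factor sizing works in practice, the Souders–Brown equation gives the maximum allowable superficial gas velocity, from which a minimum vessel diameter for the gas section follows. The K value, gas rate and fluid densities below are illustrative assumptions only; real designs take them from API 12J, GPSA data or vendor guarantees:

import math

K = 0.35                 # ft/s, assumed K factor for a vertical separator with a mist extractor
rho_liquid = 50.0        # lb/ft3, assumed oil density at separator conditions
rho_gas = 1.0            # lb/ft3, assumed gas density at separator conditions
gas_rate_acfs = 8.0      # assumed gas rate, actual ft3/s at separator pressure and temperature

# Souders-Brown: maximum allowable superficial gas velocity
v_max = K * math.sqrt((rho_liquid - rho_gas) / rho_gas)   # ft/s

# Minimum cross-sectional area and diameter for the gas section
area_min = gas_rate_acfs / v_max                          # ft2
d_min = math.sqrt(4 * area_min / math.pi)                 # ft
print(f"v_max = {v_max:.2f} ft/s, minimum gas-section diameter = {d_min:.2f} ft")   # ~2.45 ft/s, ~2.0 ft

The liquid section is then checked separately against the chosen retention time, and the larger of the two requirements governs the vessel size.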
By operating pressure
Oil and gas separators can operate at pressures ranging from a high vacuum to 4,000 to 5,000 psi. Most oil and gas separators operate in the pressure range of 20 to 1,500 psi. Separators may be referred to as low pressure, medium pressure, or high pressure. Low-pressure separators usually operate at pressures ranging from 10 to 20 up to 180 to 225 psi. Medium-pressure separators usually operate at pressures ranging from 230 to 250 up to 600 to 700 psi. High-pressure separators generally operate in the wide pressure range from 750 to 1,500 psi.
By application
Oil and gas separators may be classified according to application as test separator, production separator, low temperature separator, metering separator, elevated separator, and stage separators (first stage, second stage, etc.).
Test separator A test separator is used to separate and to meter the well fluids. The test separator can be referred to as a well tester or well checker. Test separators can be vertical, horizontal, or spherical. They can be two-phase or three-phase. They can be permanently installed or portable (skid or trailer mounted). Test separators can be equipped with various types of meters for measuring the oil, gas, and/or water for potential tests, periodic production tests, marginal well tests, etc.
Production separator A production separator is used to separate the produced well fluid from a well, group of wells, or a lease on a daily or continuous basis. Production separators can be vertical, horizontal, or spherical. They can be two-phase or three-phase. Production separators range in size from 12 in. to 15 ft in diameter, with most units ranging from 30 in. to 10 ft in diameter. They range in length from 6 to 70 ft, with most from 10 to 40 ft long. In small onshore oilfield applications, a production separator can be integrated in a vapor-tight tank package.
Low-temperature separator A low-temperature separator is a special one in which high-pressure well fluid is jetted into the vessel through a choke or pressure reducing valve so that the separator temperature is reduced appreciably below the well-fluid temperature. The temperature reduction is obtained by the Joule–Thomson effect of expanding well fluid as it flows through the pressure-reducing choke or valve into the separator. The lower operating temperature in the separator causes condensation of vapors that otherwise would exit the separator in the vapor state. Liquids thus recovered require stabilization to prevent excessive evaporation in the storage tanks.
Metering separator The function of separating well fluids into oil, gas, and water and metering the liquids can be accomplished in one vessel. These vessels are commonly referred to as metering separators and are available for two-phase and three-phase operation. These units are available in special models that make them suitable for accurately metering foaming and heavy viscous oil.
Primary functions of oil and gas separators
Separation of oil from gas may begin as the fluid flows through the producing formation into the well bore and may progressively increase through the tubing, flow lines, and surface handling equipment. Under certain conditions, the fluid may be completely separated into liquid and gas before it reaches the oil and gas separator. In such cases, the separator vessel affords only an "enlargement" to permit gas to ascend to one outlet and liquid to descend to another.
Removal of oil from gas
Difference in density of the liquid and gaseous hydrocarbons may accomplish acceptable separation in an oil and gas separator. However, in some instances, it is necessary to use mechanical devices commonly referred to as "mist extractors" to remove liquid mist from the gas before
it is discharged from the separator. Also, it may be desirable or necessary to use some means to remove non solution gas from the oil before the oil is discharged from the separator.
Removal of gas from oil
The physical and chemical characteristics of the oil and its conditions of pressure and temperature determine the amount of gas it will contain in solution. The rate at which the gas is liberated from a given oil is a function of change in pressure and temperature. The volume of gas that an oil and gas separator will remove from crude oil is dependent on (1) physical and chemical characteristics of the crude, (2) operating pressure, (3) operating temperature, (4) rate of throughput, (5) size and configuration of the separator, and (6) other factors.
Agitation, heat, special baffling, coalescing packs, and filtering materials can assist in the removal of nonsolution gas that otherwise may be retained in the oil because of the viscosity and surface tension of the oil. Because it is the lightest phase, gas rises and is removed from the top of the drum. Oil and water are separated by a baffle at the end of the separator, which is set at a height close to the oil–water contact, allowing oil to spill over onto the other side while trapping water on the near side. The two fluids can then be piped out of the separator from their respective sides of the baffle. The produced water is then either injected back into the oil reservoir, disposed of, or treated. The bulk level (gas–liquid interface) and the oil–water interface are determined using instrumentation fixed to the vessel. Valves on the oil and water outlets are controlled to keep the interfaces at their optimum levels for separation to occur, as illustrated in the sketch below. The separator achieves only bulk separation: the smaller droplets of water will not settle by gravity and will remain in the oil stream. Normally the oil from the separator is routed to a coalescer to further reduce the water content.
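The interface-level control just described can be illustrated with a minimal Python sketch of a proportional-only controller; the setpoint, gain and level values are hypothetical and are not taken from any particular installation.

def valve_opening(level_m, setpoint_m, gain=2.0, bias=0.5):
    """Proportional-only controller: fraction open (0-1) for a liquid outlet valve.
    A level above the setpoint opens the valve further to draw the interface back down."""
    error = level_m - setpoint_m
    return min(1.0, max(0.0, bias + gain * error))

# Hypothetical oil-water interface at 1.08 m against a 1.00 m setpoint:
print(round(valve_opening(1.08, 1.00), 2))  # 0.66 -> water outlet valve opens further

A real separator would use a tuned PI or PID loop and the vendor's level instrumentation rather than this bare proportional law.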
Separation of water from oil
The production of water with oil continues to be a problem for engineers and oil producers. Since 1865, when water was first coproduced with hydrocarbons, separating valuable hydrocarbons from disposable water has challenged and frustrated the oil industry. According to Rehm et al. (1983), innovation over the years has led from the skim pit to the stock tank, the gunbarrel, the free-water knockout, the hay-packed coalescer and, most recently, the Performax Matrix Plate Coalescer, an enhanced gravity settling separator. The history of water treating has for the most part been sketchy and spartan. Produced water has little economic value, and arranging for its disposal represents an extra cost for the producer.
Today, oil fields typically produce greater quantities of water than they do oil. Along with greater water production come emulsions and dispersions that are more difficult to treat. The separation process becomes interlocked with a myriad of contaminants as the last drop of oil is recovered from the reservoir. In some instances it is preferable to separate and remove water from the well fluid before it flows through pressure reductions, such as those caused by chokes and valves. Such water removal may prevent difficulties that the water could cause downstream, such as corrosion, a chemical reaction that occurs whenever a gas or liquid chemically attacks an exposed metallic surface. Corrosion is usually accelerated by warm temperatures and by the presence of acids and salts.
Other factors that affect the removal of water from oil include hydrate formation and the formation of tight emulsion that may be difficult to resolve into oil and water. The water can be separated from the oil in a three-phase separator by use of chemicals and gravity separation. If the three-phase separator is not large enough to separate the water adequately, it can be separated in a free-water knockout vessel installed upstream or downstream of the separators.
Secondary functions
Maintenance of optimum pressure on separator
For an oil and gas separator to accomplish its primary functions, pressure must be maintained in the separator so that the liquid and gas can be discharged into their respective processing or gathering systems. Pressure is maintained on the separator by use of a gas backpressure valve on each separator or with one master backpressure valve that controls the pressure on a battery of two or more separators. The optimum pressure to maintain on a separator is the pressure that will result in the highest economic yield from the sale of the liquid and gaseous hydrocarbons.
Maintenance of liquid seal in separator
To maintain pressure on a separator, a liquid seal must be effected in the lower portion of the vessel. This liquid seal prevents loss of gas with the oil and requires the use of a liquid-level controller and a valve.
Methods used to remove oil from gas
Effective oil-gas separation is important not only to ensure that the required export quality is achieved but also to prevent problems in downstream process equipment and compressors. Once the bulk liquid has been knocked out, which can be achieved in many ways, the remaining liquid droplets are separated from the gas by a demisting device. Until recently the main technologies used for this application were reverse-flow cyclones, mesh pads and vane packs. More recently, new devices with higher gas-handling capacity have been developed, which have enabled a potential reduction in the scrubber vessel size. There are several new concepts currently under development in which the fluids are degassed upstream of the primary separator. These systems are based on centrifugal and turbine technology and have additional advantages in that they are compact and motion insensitive, hence ideal for floating production facilities. Below are some of the ways in which oil is separated from gas in separators.
Density difference (gravity separation)
Natural gas is lighter than liquid hydrocarbon. Minute particles of liquid hydrocarbon that are temporarily suspended in a stream of natural gas will, by density difference or force of gravity, settle out of the stream of gas if the velocity of the gas is sufficiently slow. The larger droplets of hydrocarbon will quickly settle out of the gas, but the smaller ones will take longer. At standard conditions of pressure and temperature, the droplets of liquid hydrocarbon may have a density 400 to 1,600 times that of natural gas. However, as the operating pressure and temperature increase, the difference in density decreases. At an operating pressure of 800 psig, the liquid hydrocarbon may be only 6 to 10 times as dense as the gas. Thus, operating pressure materially affects the size of the separator and the size and type of mist extractor required to separate adequately the liquid and gas. The fact that the liquid droplets may have a density 6 to 10 times that of the gas may indicate that droplets of liquid would quickly settle out of and separate from the gas. However, this may not occur because the particles of liquid may be so small that they tend to "float" in the gas and may not settle out of the gas stream in the short period of time the gas is in the oil and gas separator. As the operating pressure on a separator increases, the density difference between the liquid and gas decreases. For this reason, it is desirable to operate oil and gas separators at as low a pressure as is consistent with other process variables, conditions, and requirements.
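The settling behaviour described above is commonly estimated with Stokes' law (see the Stokes' law entry under See also). A minimal Python sketch follows; the droplet size and fluid properties are hypothetical illustrative values, not data from this article.

def stokes_settling_velocity(droplet_diameter_m, liquid_density, gas_density, gas_viscosity, g=9.81):
    """Terminal settling velocity (m/s) of a small liquid droplet in gas, per Stokes' law."""
    return g * droplet_diameter_m ** 2 * (liquid_density - gas_density) / (18.0 * gas_viscosity)

# Hypothetical values: 100-micron droplet, 700 kg/m3 condensate, 50 kg/m3 gas at pressure,
# 1.2e-5 Pa.s gas viscosity.
v = stokes_settling_velocity(100e-6, 700.0, 50.0, 1.2e-5)
print(round(v, 3), "m/s")  # about 0.3 m/s

Note how the calculated velocity falls as the density difference shrinks at higher operating pressure, which is the behaviour the paragraph describes.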
Impingement
If a flowing stream of gas containing liquid mist is impinged against a surface, the liquid mist may adhere to and coalesce on the surface. After the mist coalesces into larger droplets, the droplets will gravitate to the liquid section of the vessel. If the liquid content of the gas is high, or if the mist particles are extremely fine, several successive impingement surfaces may be required to effect satisfactory removal of the mist.
Change of flow direction
When the direction of flow of a gas stream containing liquid mist is changed abruptly, inertia causes the liquid to continue in the original direction of flow. Separation of liquid mist from the gas thus can be effected because the gas will more readily assume the change of flow direction and will flow away from the liquid mist particles. The liquid thus removed may coalesce on a surface or fall to the liquid section below.
Change of flow velocity
Separation of liquid and gas can be effected with either a sudden increase or decrease in gas velocity. Both conditions use the difference in inertia of gas and liquid. With a decrease in velocity, the higher inertia of the liquid mist carries it forward and away from the gas. The liquid may then coalesce on some surface and gravitate to the liquid section of the separator. With an increase in gas velocity, the higher inertia of the liquid causes the gas to move away from the liquid, and the liquid may fall to the liquid section of the vessel.
Centrifugal force
If a gas stream carrying liquid mist flows in a circular motion at sufficiently high velocity, centrifugal force throws the liquid mist outward against the walls of the container. Here the liquid coalesces into progressively larger droplets and finally gravitates to the liquid section below. Centrifugal force is one of the most effective methods of separating liquid mist from gas. However, according to Keplinger (1931), some separator designers have pointed out a disadvantage in that a liquid with a free surface rotating as a whole will have its surface curved around its lowest point lying on the axis of rotation. The resulting false level may cause difficulty in regulating the fluid level control on the separator. This is largely overcome by placing vertical quieting baffles, which should extend from the bottom of the separator to above the outlet. Efficiency of this type of mist extractor increases as the velocity of the gas stream increases. Thus for a given rate of throughput, a smaller centrifugal separator will suffice.
Methods used to remove gas from oil
Because of higher prices for natural gas, the widespread reliance on metering of liquid hydrocarbons, and other reasons, it is important to remove all nonsolution gas from crude oil during field processing. Methods used to remove gas from crude oil in oil and gas separators are discussed below:
Agitation
Moderate, controlled agitation, which can be defined as movement of the crude oil with sudden force, is usually helpful in removing nonsolution gas that may be mechanically locked in the oil by surface tension and oil viscosity. Agitation usually causes the gas bubbles to coalesce and separate from the oil in less time than would be required if agitation were not used.
Heat
Heat, a form of energy transferred from one body to another as a result of a difference in temperature, reduces the surface tension and viscosity of the oil and thus assists in releasing gas that is hydraulically retained in the oil. The most effective method of heating crude oil is to pass it through a heated-water bath. A spreader plate that disperses the oil into small streams or rivulets increases the effectiveness of the heated-water bath. Upward flow of the oil through the water bath affords slight agitation, which is helpful in coalescing and separating entrained gas from the oil. A heated-water bath is probably the most effective method of removing foam bubbles from foaming crude oil. A heated-water bath is not practical in most oil and gas separators, but heat can be added to the oil by direct or indirect fired heaters and/or heat exchangers, or heated free-water knockouts or emulsion treaters can be used to obtain a heated-water bath.
Centrifugal force
Centrifugal force, a fictitious force acting on a particle moving on a circular path with the same magnitude and dimensions as the centripetal force that keeps the particle on its path but pointing in the opposite direction, is effective in separating gas from oil. The heavier oil is thrown outward against the wall of the vortex retainer while the gas occupies the inner portion of the vortex. A properly shaped and sized vortex will allow the gas to ascend while the liquid flows downward
to the bottom of the unit.
Flow measurements
The direction of flow in and around a separator, along with the associated flow instruments, is usually illustrated on the piping and instrumentation diagram (P&ID). These flow instruments include the flow indicator (FI), flow transmitter (FT) and flow controller (FC). Flow is a major process variable in the oil and gas industry: understanding it helps engineers produce better designs and carry out further research with confidence. Mohan et al. (1999) carried out research into the design and development of separators for a three-phase flow system. The purpose of the study was to investigate the complex multiphase hydrodynamic flow behaviour in a three-phase oil and gas separator. A mechanistic model was developed alongside a computational fluid dynamics (CFD) simulator, and these were used to carry out detailed experimentation on the three-phase separator. The experimental and CFD simulation results were then integrated with the mechanistic model. The simulation time for the experiment was 20 seconds, the oil specific gravity was 0.885, and the separator lower part length and diameter were 4 ft and 3 in respectively. The first set of experiments became the basis for detailed investigations and for similar simulation studies at different flow velocities and other operating conditions.
Flow calibration
As stated earlier, flow instruments that function with the separator in an oil and gas environment include the flow indicator, flow transmitter and flow controller. Because of maintenance (discussed later) or high usage, these flowmeters need to be calibrated from time to time. Calibration can be defined as the process of referencing signals of known quantities that have been predetermined to suit the range of measurements required. From a mathematical point of view, calibration standardizes the flowmeters by determining their deviation from a predetermined standard so that proper correction factors can be ascertained. In determining this deviation, the actual flowrate is usually first established either with a master meter, a flowmeter that has been calibrated to a high degree of accuracy, or by weighing the flow to obtain a gravimetric reading of the mass flow.
Another type of meter used is the transfer meter. However, according to Ting et al. (1989), transfer meters have proven to be less accurate when operating conditions differ from their original calibration points. According to Yoder (2000), the types of flowmeters used as master meters include turbine meters, positive displacement meters, venturi meters, and Coriolis meters. In the U.S., master meters are often calibrated at a flow lab that has been certified by the National Institute of Standards and Technology (NIST). NIST certification of a flowmeter lab means that its methods have been approved by NIST. Normally, this includes NIST traceability, meaning that the standards used in the flowmeter calibration process have been certified by NIST or are traceable back to standards that NIST has approved. However, there is a general belief in the industry that the second method, gravimetric weighing of the amount of fluid (liquid or gas) that actually flows through the meter into or out of a container during the calibration procedure, is the most reliable method for measuring the actual amount of flow. The weighing scale used for this method must also be traceable to NIST.
In ascertaining a proper correction factor, there is often no simple hardware adjustment to make the flowmeter start reading correctly. Instead, the deviation from the correct reading is recorded at a variety of flowrates. The data points are plotted, comparing the flowmeter output to the actual flowrate as determined by the standardized National Institute of Standards and Technology master meter or weigh scale.
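A minimal Python sketch of the correction-factor calculation described above; the indicated and reference flowrates are hypothetical calibration data, not measurements from any standard.

def correction_factors(indicated, reference):
    """Meter factor at each calibration point = reference (actual) flow / indicated flow."""
    return [ref / ind for ind, ref in zip(indicated, reference)]

# Hypothetical calibration run (m3/h): meter readings vs. master-meter (or weigh-scale) values.
indicated = [10.2, 25.6, 51.3, 101.9]
reference = [10.0, 25.0, 50.0, 100.0]
for q, f in zip(indicated, correction_factors(indicated, reference)):
    print(f"indicated {q:6.1f} m3/h -> meter factor {f:.4f}")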
Controls and features
Controls
The controls required for oil and gas separators are liquid-level controllers for the oil and the oil/water interface (three-phase operation) and a gas back-pressure control valve with a pressure controller. Although controls are expensive and add to the cost of operating fields with separators, their installation has resulted in substantial savings in overall operating expense, as in the case of the 70 gas wells in Big Piney, Wyoming, cited by Fair (1968). The wells with separators were located above 7,200 ft elevation, ranging upward to 9,000 ft. Control installations were sufficiently automated that field operations around the controllers could be run from a remote-control station at the field office using a distributed control system. All in all, this improved the efficiency of personnel and the operation of the field, with a corresponding increase in production from the area.
Valves
The valves required for oil and gas separators are the oil-discharge control valve, water-discharge control valve (three-phase operation), drain valves, block valves, pressure relief valves, and emergency shutdown (ESD) valves. ESD valves typically stay in the open position for months or years awaiting a command signal to operate. Little attention is paid to these valves outside of scheduled turnarounds, and the pressures of continuous production often stretch these intervals even longer. This leads to build-up or corrosion on the valves that prevents them from moving. For safety-critical applications, it must be ensured that the valves operate upon demand.
Accessories
The accessories required for oil and gas separators are pressure gauges, thermometers, pressure-reducing regulators (for control gas), level sight glasses, safety head with rupture disk, piping, and tubing.
Safety features
Oil and gas separators should be installed at a safe distance from other lease equipment. Where they are installed on offshore platforms or in close proximity to other equipment, precautions should be taken to prevent injury to personnel and damage to surrounding equipment in case the separator or its controls or accessories fail. The following safety features are recommended for most oil and gas separators.
High- and low-liquid-level controls High- and low liquid-level controls normally are float-operated pilots that actuate a valve on the inlet to the separator, open a bypass around the separator, sound a warning alarm, or perform some other pertinent function to prevent damage that might result from high or low liquid levels in the separator.
High- and low-pressure controls High- and low pressure controls are installed on separators to prevent excessively high or low pressures from interfering with normal operations. These high- and low-pressure controls can be mechanical, pneumatic, or electric and can sound a warning, actuate a shut-in valve, open a bypass, or perform other pertinent functions to protect personnel, the separator, and surrounding equipment.
High- and low-temperature controls Temperature controls may be installed on separators to shut in the unit, to open or close a bypass to a heater, or to sound a warning should the temperature in the separator become too high or too low. Such temperature controls are not normally used on separators, but they may be appropriate in special cases. According to Francis (1951), low-temperature control in separators is another tool used by gas producers, finding its application in high-pressure gas fields, usually referred to as "vapour-phase" reservoirs. The low temperatures obtainable from the expansion of these high-pressure gas streams are used to profitable advantage. Compared with a conventional heater and separator installation, the major advantages of low-temperature controls in oil and gas separators are more efficient recovery of hydrocarbon condensate and a greater degree of gas dehydration.
Safety relief valves A spring-loaded safety relief valve is usually installed on all oil and gas separators. These valves normally are set at the design pressure of the vessel. Safety relief valves serve primarily as a warning, and in most instances are too small to handle the full rated fluid capacity of the separator. Full-capacity safety relief valves can be used and are particularly recommended when no safety head (rupture disk) is used on the separator.
Safety heads or rupture disks A safety head or rupture disk is a device containing a thin metal membrane that is designed to rupture when the pressure in the separator exceeds a predetermined value, usually from 1 1/4 to 1 1/2 times the design pressure of the separator vessel. The safety head disk is usually selected so that it will not rupture until the safety relief valve has opened and has proved incapable of preventing excessive pressure buildup in the separator.
Operation and maintenance considerations
Over the life of a production system, the separator is expected to process a wide range of produced fluids. With breakthrough from waterflooding and expanded gas-lift circulation, the produced fluid's water cut and gas–oil ratio are ever changing. In many instances, the separator fluid loading may exceed the original design capacity of the vessel. As a result, many operators find their separator no longer able to meet the required oil and water effluent standards, or experience high liquid carry-over in the gas, according to Power et al. (1990). Some operational and maintenance considerations are discussed below:
Periodic inspection
In refineries and processing plants, it is normal practice to inspect all pressure vessels and piping periodically for corrosion and erosion; where this is done, inspections are carried out at a predetermined frequency, normally decided by a risk-based inspection (RBI) assessment. In the oil fields, this practice is not generally followed, and equipment is often replaced only after actual failure. This policy may create hazardous conditions for operating personnel and surrounding equipment. It is recommended that periodic inspection schedules for all pressure equipment be established and followed to protect against undue failures.
Installation of safety devices
All safety relief devices should be installed as close to the vessel as possible and in such manner that the reaction force from exhausting fluids will not break off, unscrew, or otherwise dislodge the safety device. The discharge from safety devices should not endanger personnel
or other equipment.
Low temperature
Separators should be operated above the hydrate-formation temperature. Otherwise hydrates may form in the vessel and partially or completely plug it, thereby reducing the capacity of the separator. In some instances, when the liquid or gas outlet is plugged or restricted, this causes the safety valve to open or the safety head to rupture. Steam coils can be installed in the liquid section of oil and gas separators to melt hydrates that may form there. This is especially appropriate on low-temperature separators.
Corrosive fluids
A separator handling corrosive fluid should be checked periodically to determine whether remedial work is required. Extreme cases of corrosion may require a reduction in the rated working pressure of the vessel. Periodic hydrostatic testing is recommended, especially if the fluids being handled are corrosive. Expendable anodes can be used in separators to protect them against electrolytic corrosion. Some operators determine separator shell and head thickness with ultrasonic thickness indicators and calculate the maximum allowable working pressure from the remaining metal thickness. This should be done yearly offshore and every two to four years onshore.
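As an illustration of that last calculation, here is a hedged Python sketch using one commonly quoted cylindrical-shell formula for internal pressure (circumferential-stress form). The allowable stress, joint efficiency and geometry values are hypothetical, and a real assessment would follow the applicable pressure-vessel code in full.

def mawp_cylindrical_shell(allowable_stress_psi, joint_efficiency, wall_thickness_in, inside_radius_in):
    """Maximum allowable working pressure (psi) of a cylindrical shell,
    circumferential-stress form: P = S*E*t / (R + 0.6*t)."""
    t, r = wall_thickness_in, inside_radius_in
    return allowable_stress_psi * joint_efficiency * t / (r + 0.6 * t)

# Hypothetical vessel: 20,000 psi allowable stress, 0.85 joint efficiency,
# 0.50 in remaining wall (from an ultrasonic survey), 24 in inside radius.
print(round(mawp_cylindrical_shell(20000, 0.85, 0.50, 24.0), 1), "psi")  # about 350 psi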
Solids separation
Sand and other solids from upstream will tend to settle out in the bottom of the separators. If allowed to accumulate, the solids reduce the volume available for oil/gas/water separation, reducing efficiency. The vessel may be taken offline and drained down and the solids dug out by hand. Alternatively, water sparge pipes in the base of the separator can be used to fluidize the sand, which can then be drained from the drain valves in the base.
See also
Piping and instrumentation diagram
Fluid dynamics
Computational fluid dynamics
Souders–Brown equation
Joule–Thomson effect
Vapor–liquid separator
Natural gas condensate
Oil production plant
Heat
Cyclone separator
Valve
Stokes' law
Safety
External links
The Flottweg Separator – Parameters and influencing factors for the best possible separation results including Separator video
Pictorial illustration of what the internal structure of an Oil and Gas Separator looks like – This shows how the Defoaming Internals, Coalescing Internals, Demister Internals – Wiremesh Demister, Vane Mist Eliminators, Desanding Internals, Vortex Breakers and other internal components of a typical separator are arranged in the separator.
Typical P&ID arrangement for three-phase separator vessels – Piping and instrumentation diagram (P&ID) illustrates the direction of flow in and around an Oil and Gas Separator. It likewise shows the connectivity of other instruments e.g. valves, level controller, level indicator, flow indicator, flow transmitter, pressure indicator, pressure transmitter, etc. around the separator.
Computational fluid dynamics (CFD) simulation illustrating a three-phase oil, gas and water separator – This illustrates the direction of flow in the separator.
Quick calculator for horizontal knock out drum sizing – Based on settling time required for liquid droplets of a given minimum size to be separated.
References
Petroleum technology
Natural gas technology
Industrial equipment | Separator (oil production) | [
"Chemistry",
"Engineering"
] | 7,292 | [
"Petroleum engineering",
"Petroleum technology",
"Natural gas technology",
"nan"
] |
10,980,741 | https://en.wikipedia.org/wiki/Van%20der%20Waals%20molecule | A Van der Waals molecule is a weakly bound complex of atoms or molecules held together by intermolecular attractions such as Van der Waals forces or by hydrogen bonds.
The name originated in the beginning of the 1970s when stable molecular clusters were regularly observed in molecular beam microwave spectroscopy.
Examples
Examples of well-studied vdW molecules are Ar2, H2-Ar, H2O-Ar, benzene-Ar, (H2O)2, and (HF)2.
Others include the largest diatomic molecule He2, and LiHe.
A notable example is the He-HCN complex, studied for its large amplitude motions and the applicability of the adiabatic approximation in separating its angular and radial motions. Research has shown that even in such 'floppy' systems, the adiabatic approximation can be effectively utilized to simplify quantum mechanical analyses.
Supersonic beam spectroscopy
In (supersonic) molecular beams temperatures are very low (usually less than 5 K). At these low temperatures Van der Waals (vdW) molecules are stable and can be investigated by microwave, far-infrared spectroscopy and other modes of spectroscopy.
Also in cold equilibrium gases vdW molecules are formed, albeit in small, temperature dependent concentrations. Rotational and vibrational transitions in vdW molecules have been observed in gases, mainly by UV and IR spectroscopy.
Van der Waals molecules are usually very non-rigid and different versions are separated by low energy barriers, so that tunneling splittings, observable in far-infrared spectra, are relatively large.
Thus, in the far-infrared one may observe intermolecular vibrations, rotations, and tunneling motions of Van der Waals molecules.
The VRT spectroscopic study of Van der Waals molecules is one of the most direct routes to the understanding of intermolecular forces.
In the study of helium-containing van der Waals complexes, the adiabatic or Born–Oppenheimer approximation has been adapted to separate angular and radial motions. Despite the challenges posed by the weak interactions, which lead to large-amplitude motions, research demonstrates that this approximation can still be valid, offering a quicker computational route for diffusion Monte Carlo studies of molecular rotation within ultra-cold helium droplets. The non-rigid nature of these complexes, especially those containing helium, complicates traditional quantum mechanical approaches. However, recent studies have validated the use of the adiabatic approximation for separating different types of molecular motion, even in these 'floppy' systems.
See also
Van der Waals radius
Van der Waals strain
Van der Waals surface
References
Further reading
So far three special issues of Chemical Reviews have been devoted to vdW molecules: I. Vol. 88(6) (1988). II. Vol. 94(7) (1994). III. Vol. 100(11) (2000).
Early reviews of vdW molecules: G. E. Ewing, Accounts of Chemical Research, Vol. 8, pp. 185-192, (1975): Structure and Properties of Van der Waals molecules. B. L. Blaney and G. E. Ewing, Annual Review of Physical Chemistry, Vol. 27, pp. 553-586 (1976): Van der Waals Molecules.
About VRT spectroscopy: G. A. Blake, et al., Review Scientific Instruments, Vol. 62, p. 1693, 1701 (1991). H. Linnartz, W.L. Meerts, and M. Havenith, Chemical Physics, Vol. 193, p. 327 (1995).
Physical chemistry
Molecule | Van der Waals molecule | [
"Physics",
"Chemistry"
] | 745 | [
"Applied and interdisciplinary physics",
"Van der Waals molecules",
"Molecules",
"nan",
"Physical chemistry",
"Matter"
] |
10,980,912 | https://en.wikipedia.org/wiki/Smart%20transducer | A smart transducer is an analog or digital transducer, actuator, or sensor combined with a processing unit and a communication interface.
As sensors and actuators become more complex, they provide support for various modes of operation and interfacing. Some applications additionally require fault-tolerant and distributed computing. Such functionality can be achieved by adding an embedded microcontroller to the classical sensor/actuator, which increases the ability to cope with complexity at a fair price. Typically, these on-board technologies in smart sensors are used for digital processing, either frequency-to-code or analog-to-digital conversions, interfacing functions and calculations. Interfacing functions include decision-making tools like self-adaptation, self-diagnostics, and self-identification functions, but also the ability to control how long and when the sensor will be fully awake, to minimize power consumption, and to decide when to dump and store data.
They are often made using CMOS, VLSI technology and may contain MEMS devices leading to lower cost. They may provide full digital outputs for easier interface or they may provide quasi-digital outputs like pulse-width modulation. In the machine vision field, a single compact unit that combines the imaging functions and the complete image processing functions is often called a smart sensor.
Smart sensors are a crucial element in the Internet of Things (IoT). Within such a network, physical vehicles and devices are embedded with sensors, software and electronics. Data are collected and shared for better integration between digital environments and the physical world. Connectivity between sensors is an important requirement for an IoT innovation to perform well, and interoperability can be seen as a consequence of connectivity: the sensors work together and complement each other.
Improvement over traditional sensors
The key features of smart sensors as part of the IoT that differentiate them from traditional sensors are:
Small size
Self-validation and self-identification
Low power requirements
Self-diagnosis
Self-calibration
Connection to the Internet and other devices
A traditional sensor collects information about an object or a situation and translates it into an electrical signal. It gives feedback on the physical environment, process, or substance in a measurable way, and signals or indicates when a change in this environment occurs. A network of traditional sensors can be divided into three parts: one, the sensors; two, a centralized interface where the data is collected and processed; and three, an infrastructure that connects the network, such as plugs, sockets and wires.
A network of smart sensors can be divided into two parts: (1) the sensors and (2) a centralized interface. The fundamental difference from traditional sensors is that the microprocessors embedded in the smart sensors already process the data. Therefore, less data has to be transmitted, and the data can immediately be used and accessed on different devices. The switch to smart sensors means that the tight coupling between transmission and processing technologies is removed.
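A minimal Python sketch of this on-sensor processing idea follows; the class name, alarm threshold and readings are purely illustrative and do not describe any particular device.

class SmartSensor:
    """Illustrative smart sensor: processes raw samples locally and transmits only a summary."""

    def __init__(self, alarm_threshold):
        self.alarm_threshold = alarm_threshold
        self.buffer = []

    def sample(self, raw_value):
        # Raw samples stay on the sensor's own microcontroller.
        self.buffer.append(raw_value)

    def report(self):
        # Only a summary (and any alarm flag) is transmitted to the centralized interface.
        if not self.buffer:
            return None
        mean = sum(self.buffer) / len(self.buffer)
        alarm = max(self.buffer) > self.alarm_threshold
        self.buffer.clear()
        return {"mean": round(mean, 2), "alarm": alarm}

sensor = SmartSensor(alarm_threshold=80.0)
for value in (71.2, 69.8, 84.5, 70.1):
    sensor.sample(value)
print(sensor.report())  # {'mean': 73.9, 'alarm': True}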
Digital traces
Within a digital environment, actions or activities leave a digital trace. Smart sensors measure these activities in the physical environment and translate them into the digital environment. Therefore, every step within the process becomes digitally traceable. Whenever a mistake is made somewhere in a production process, it can be tracked down using these digital traces. As a result, it becomes easier to track down inefficiencies within a production process and to simplify process innovations, because one can more easily analyze which part of the production process is inefficient. Because all the information is digitized, the company is exposed to cyber attacks; to protect itself from information breaches, ensuring a secure platform is crucial.
Layered modular architecture of digital sensors
The term layered modular architecture refers to the combination of the modular architecture of a product's physical components with the layered architecture of the digital system. There is a contents layer, a service layer, a network layer ((1) logical transmission, (2) physical transport), and a device layer ((1) logical capability, (2) physical machinery). Starting at the device layer, the smart sensor itself is the physical machinery, measuring its physical environment. The logical capability refers to the operating system, which can be Windows, macOS or another operating system used to run the platform. At the network layer, the logical transmission can consist of various transmission methods: Wi-Fi, Bluetooth, NFC, Zigbee and RFID. For smart sensors, physical transport is largely unnecessary, since smart sensors are usually wireless, although charging wires and sockets are still commonly used. The service layer concerns the service provided by the smart sensor. Because the sensors process data themselves and handle multiple things simultaneously, there is not one specific service; they can, for example, signal that certain assets need to be repaired. The contents layer comprises the centralised platforms that are created and used to gain insights and create value.
Usage across industries
Insurance
Traditionally, insurance companies tried to assess the risk of their clients by looking over their application forms, trusting their answers, and then simply covering the risk with a monthly premium. However, due to asymmetric information, it was difficult to accurately determine the risk of a given client. The introduction of smart sensors in the insurance industry is disrupting this traditional practice in multiple ways. Smart sensors generate a large amount of (big) data, which affects the business models of insurance companies as follows.
Smart sensors in clients' homes or in wearables help insurance companies obtain much more detailed information. Wearables can, for example, monitor heart-related metrics, while location-based systems such as security technologies or smart thermostats can generate important data about a client's house. Insurers can use this information to improve risk assessment and risk management, reduce asymmetric information, and ultimately reduce costs.
Additionally, if clients agree upon providing this data of sensors in their homes, they can even get a discount on their premium. This approach of trading information in return for special deals is called bartering and it is one form of data monetization. Data monetization is the act of exchanging information-based products and services for legal tender or something of perceived equivalent value. In other words, data monetization is exploiting opportunities to generate new revenues. Another form of data monetization, which insurers regularly use nowadays, is selling data to third parties.
Manufacturing
One of the recent trends in manufacturing is Industry 4.0, in which data exchange and automation play a crucial role. Machines have long been able to automate certain small tasks (e.g. opening and closing valves). Automation in smart factories goes beyond these easy tasks and increasingly includes complex optimization decisions that humans typically make. For machines to be able to make such decisions, detailed information is imperative, and that is where smart sensors come in.
For manufacturing, efficiency is one of the most important aspects. Smart sensors pull data from assets to which they are connected and process the data continuously. They can provide detailed real-time information about the plant and process and reveal performance issues. If this is just a small performance issue, the smart factory can even solve the problem itself. Smart sensors can predict defects as well, so rather than fixing a problem afterwards, maintenance workers can prevent it. This all leads to outstanding asset efficiency and reduces downtime, which is the enemy of every production process.
Smart sensors can also be applied beyond the factory. For example sensors on objects like vehicles or shipping containers can give detailed information about delivery status. This affects both manufacturing and the whole supply chain.
Automotive
In the last couple of years, the automotive industry has been challenging its 'old' ecosystems. Several new technologies, such as smart sensors, play a crucial role in this process. At present these sensors enable only limited autonomous features, such as automatic parking, obstacle detection and emergency braking, which improve safety. Although many companies are focused on technologies that improve cars and work towards automation, complete disruption of the industry has not yet been reached. Still, some experts expect that autonomous cars requiring no human intervention will dominate the roads within 10 years.
Smart sensors generate data of the car and their surroundings, connect them into a car network, and translate this into valuable information which allows the car to see and interpret the world. Basically, the sensor works as follows. It has to pull physical and environmental data, use that information for calculations, analyze the outcomes and translate it into action. Sensors in other cars have to be connected into the car network and communicate with each other.
However, smart sensors in the automotive industry can also be used in a more sustaining way. Car manufacturers place smart sensors in different parts of the car, which collect and share information. Drivers and manufacturers can use this information to move from scheduled to predictive maintenance. Established firms have a strong focus on these sustaining innovations, but the risk is that they do not see new entrants coming and have difficulty adapting. Therefore, distinguishing between disruptive and sustaining innovation is important and carries different implications for managers.
See also
Ambient intelligence
Edge computing
IEEE 1451
Internet of things
Intelligent sensor
Machine to machine
Sentroller
SensorML
System on a chip
Transducer electronic data sheet
TransducerML
References
External links
IEEE Spectrum: Smart Sensors
Smart devices
Transducers | Smart transducer | [
"Technology"
] | 1,892 | [
"Home automation",
"Smart devices"
] |
10,981,668 | https://en.wikipedia.org/wiki/Spo11 | Spo11 is a protein that in humans is encoded by the SPO11 gene. Spo11, in a complex with mTopVIB, creates double strand breaks to initiate meiotic recombination. Its active site contains a tyrosine which ligates and dissociates with DNA to promote break formation. One Spo11 protein is involved per strand of DNA, thus two Spo11 proteins are involved in each double stranded break event.
Genetic exchange between two DNA molecules by homologous recombination can begin with a break in both strands of DNA—called a double-strand break—and recombination is started by an endonuclease enzyme that cuts the DNA molecule that "receives" the exchanged DNA. In meiosis the enzyme is SPO11, which is related to DNA topoisomerases. Topoisomerases change DNA by transiently breaking one or both strands, passing the unbroken DNA strand or strands through the break and repairing the break; the broken ends of the DNA are covalently linked to topoisomerase. SPO11 is similarly attached to the DNA when it forms double-strand breaks during meiosis.
Meiotic recombination
SPO11 is considered to play a predominant role in initiating meiotic recombination. However, recombination may also occur by alternative SPO11-independent mechanisms that can be studied experimentally using spo11 mutants.
In the budding yeast Saccharomyces cerevisiae, the meiotic defects in recombination and chromosome disjunction of spo11 mutants are alleviated by X-irradiation. This finding indicates that X-ray induced DNA damages can initiate crossover recombination leading to proper disjunction independently of SPO11.
In the worm Caenorhabditis elegans, a homolog of spo11 is ordinarily employed in the initiation of meiotic recombination. However, radiation induced-breaks can also initiate recombination in mutants deleted for this spo11 homolog.
Deamination of cytosine resulting in the dU:dG mismatch is one of the most common single-base-altering lesions in non-replicating DNA. Spo11 mutants of the fission yeast Schizosaccharomyces pombe and C. elegans undergo meiotic crossover recombination and proper chromosome segregation when dU:dG lesions are produced in their DNA. This crossover recombination does not involve the formation of large numbers of double-strand breaks, but does require uracil DNA-glycosylase, an enzyme that removes uracil from the DNA phosphodiester backbone and initiates base excision repair. Thus, it was proposed that base excision repair of DNA damage such as a uracil base, an abasic site, or a single-strand nick is sufficient to initiate meiotic crossover recombination in S. pombe and C. elegans.
In S. pombe, a mutant defective in the spo11 homolog Rec12 is deficient in meiotic recombination. However, recombination can be restored to near normal levels by a deletion in rad2, a gene that encodes an endonuclease involved in Okazaki fragment processing (Farah et al., 2005). Both crossover and non-crossover recombination were increased but double-strand breaks were undetectable. On the basis of the biochemical properties of the rad2 deletion, it was proposed that meiotic recombination can be initiated by DNA lesions other than double-strand breaks, such as nicks and gaps which accumulate during premeiotic DNA replication when Okazaki fragment processing is deficient.
The above findings indicate that DNA damages arising from a variety of sources can be repaired by meiotic recombination and that such a process can occur independently of SPO11.
Absence in some sexual species
The most recent common ancestor of the social amoeba genera Dictyostelium, Polysphondylium and Acytostelium, appears to have lacked the Spo11 gene. Such an ancestor likely lived several hundred million years ago. Dictyostelium discoideum and Polysphondylium pallidum are both capable of meiotic sexual reproduction (see D. discoideum sexual reproduction and P. pallidum sexual reproduction). Bloomfield speculated that dormant cells in the soil might be exposed to many kinds of stress, such as desiccation or radiation, that could induce spontaneous DNA damage. Such damage would make the induction of double-strand breaks by Spo11 redundant for the initiation of recombination during meiosis, and thus explain its absence in this group.
References
Proteins | Spo11 | [
"Chemistry"
] | 1,003 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
11,955,434 | https://en.wikipedia.org/wiki/International%20Maritime%20Hall%20of%20Fame | The International Maritime Hall of Fame is a museum honouring people who have made a large contribution in the maritime field. The hall of fame inducted its first set of honorees in or about 1994. The hall is sponsored by the Maritime Association of the Port of New York and New Jersey.
Inductees
For 1994-2008 inductees, see footnote
Malcom McLean - creator of international standard shipping containers.
2000
See footnote
2001
See footnote
2003
Christopher L. Koch, president and CEO, World Shipping Council
Olav K. Rakkenes, former president and CEO, Atlantic Container Line
Charles G. Raymond, chairman, CEO and president, Horizon Lines
Paul F. Richardson, principal, Paul F. Richardson & Associates
Henk Van Hemmen, president emeritus, Martin, Ottaway Van Hemmen & Dolan Inc.:
John Arnold Witte, Sr., CEO and president, Donjon Marine Co., Inc.
2009
See footnote
2010
See footnote
2011
See footnote
2012 & 2013
See footnote
2014
Peter Friedmann, executive director, Agriculture Transportation Coalition
Jorn Hinge, president and chief executive officer, United Arab Shipping Company (S.A.G.)
Carol Notias Lambos, partner, The Lambos Firm, LLP
Lambros C. Varnavides, managing director and global head of shipping, Royal Bank of Scotland
Wolf von Appen, president, Ultramar Agencia Maritima Ltda., and chairman, Grupo Ultramar
2016
C. Duff Hughes, president, The Vane Brothers Company
Simeon P. Palios, chairman and CEO, Diana Shipping, Inc.
Robert P. (Rob) Kusiciel, vice president of logistics & transportation, Center of Excellence, Honeywell International, Inc.
William (Bill) Payne, vice chairman, NYK Line (NA), Inc. and vice president, NYK Ports, LLC.
Christopher J. Wiernicki, chairman, CEO and president, American Bureau of Shipping, Inc.
2017
Jean-Jacques (JJ) Ruest, executive vice-president and chief marketing officer, Canadian National Railway Company
Harley V. Franco, chairman and chief executive officer, Harley Marine Services
Angeliki N. Frangou, chairman and chief executive officer, Navios Maritime Holdings Inc.
James C. McKenna, president and chief executive officer, Pacific Maritime Association
Madeleine Paquin, president and chief executive officer, Logistec Corporation
2018
Michael A. Jordan, founding principal and technical direction, Liftech
Donato Caruso, of counsel, The Lambos Firm, LLP
Adam M. Goldstein, vice chairman, Royal Caribbean Cruises Ltd
Juergen Pump, president for North America, Hamburg Süd
2019
Harold J. Daggett, international president, International Longshoremen’s Association, AFL-CIO
George Economou, founder, chairman, CEO, DryShips Inc.
John F. Reinhart, chief executive officer, executive director, Virginia Port Authority
Rodolphe J. Saadé, chairman and chief executive officer, CMA CGM S.A.
Richard S. (Rich) Weeks, president and chief executive officer, Weeks Marine, Inc.
2020
Lisa Lutoff-Perlo, president and CEO, Celebrity Cruises Inc.
James R. Mara, president emeritus, Metropolitan Marine Maintenance Contractors’ Association
James I. Newsome III, president and CEO, South Carolina Ports Authority
Dr. Nikolas P. Tsakos, president and CEO, Tsakos Energy Navigation Corp.
Lois K. Zabrocky, president and CEO, International Seaways Inc.
2021
James Angelo Ruggieri, Eur Ing, General Machine Corp.
Larry Wilkerson, United States Coast Guard
Similar Halls
Delaware Maritime Hall of Fame
National Maritime Hall of Fame
Footnotes
External links
International Maritime Hall of Fame Awards 1994-2008
Port of New York and New Jersey
Science and technology halls of fame
Maritime
Maritime museums in the United States
Maritime history organizations | International Maritime Hall of Fame | [
"Technology"
] | 796 | [
"Science and technology awards",
"Science and technology halls of fame"
] |
11,956,019 | https://en.wikipedia.org/wiki/Blaschke%20selection%20theorem | The Blaschke selection theorem is a result in topology and convex geometry about sequences of convex sets. Specifically, given a sequence of convex sets contained in a bounded set, the theorem guarantees the existence of a subsequence and a convex set such that converges to in the Hausdorff metric. The theorem is named for Wilhelm Blaschke.
Alternate statements
A succinct statement of the theorem is that the metric space of convex bodies is locally compact.
Using the Hausdorff metric on sets, every infinite collection of compact subsets of the unit ball has a limit point (and that limit point is itself a compact set).
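For reference, the Hausdorff distance used in these statements has the standard definition (included here for completeness; the notation is not taken from this article):

d_H(X, Y) = \max\left\{ \sup_{x \in X} \inf_{y \in Y} \lVert x - y \rVert,\ \sup_{y \in Y} \inf_{x \in X} \lVert x - y \rVert \right\}

In this metric, the theorem asserts that every sequence of convex bodies contained in a fixed bounded set has a subsequence converging to some convex body.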
Application
As an example of its use, the isoperimetric problem can be shown to have a solution. That is, there exists a curve of fixed length that encloses the maximum area possible. Other problems likewise can be shown to have a solution:
Lebesgue's universal covering problem for a convex universal cover of minimal size for the collection of all sets in the plane of unit diameter,
the maximum inclusion problem,
and the Moser's worm problem for a convex universal cover of minimal size for the collection of planar curves of unit length.
Notes
References
Geometric topology
Compactness theorems
"Mathematics"
] | 276 | [
"Compactness theorems",
"Topology",
"Geometric topology",
"Theorems in topology"
] |
11,956,089 | https://en.wikipedia.org/wiki/Seismic%20migration | Seismic migration is the process by which seismic events are geometrically re-located in either space or time to the location the event occurred in the subsurface rather than the location that it was recorded at the surface, thereby creating a more accurate image of the subsurface. This process is necessary to overcome the limitations of geophysical methods imposed by areas of complex geology, such as: faults, salt bodies, folding, etc.
Migration moves dipping reflectors to their true subsurface positions and collapses diffractions, resulting in a migrated image that typically has an increased spatial resolution and resolves areas of complex geology much better than non-migrated images. A form of migration is one of the standard data processing techniques for reflection-based geophysical methods (seismic reflection and ground-penetrating radar)
The need for migration has been understood since the beginnings of seismic exploration and the very first seismic reflection data from 1921 were migrated. Computational migration algorithms have been around for many years but they have only entered wide usage in the past 20 years because they are extremely resource-intensive. Migration can lead to a dramatic uplift in image quality so algorithms are the subject of intense research, both within the geophysical industry as well as academic circles.
Rationale
Seismic waves are elastic waves that propagate through the Earth with a finite velocity, governed by the elastic properties of the rock in which they are travelling. At an interface between two rock types, with different acoustic impedances, the seismic energy is either refracted, reflected back towards the surface or attenuated by the medium. The reflected energy arrives at the surface and is recorded by geophones that are placed at a known distance away from the source of the waves. When a geophysicist views the recorded energy from the geophone, they know both the travel time and the distance between the source and the receiver, but not the distance down to the reflector.
In the simplest geological setting, with a single horizontal reflector, a constant velocity and a source and receiver at the same location (referred to as zero-offset, where offset is the distance between the source and receiver), the geophysicist can determine the location of the reflection event by using the relationship:

d = vt/2

where d is the distance, v is the seismic velocity (or rate of travel) and t is the measured time from the source to the receiver.
In this case, the distance is halved because it can be assumed that it only took one-half of the total travel time to reach the reflector from the source, then the other half to return to the receiver.
The result gives us a single scalar value, which actually represents a half-sphere of distances, from the source/receiver, which the reflection could have originated from. It is a half-sphere, and not a full sphere, because we can ignore all possibilities that occur above the surface as unreasonable.
In the simple case of a horizontal reflector, it can be assumed that the reflection is located vertically below the source/receiver point (see diagram).
The situation is more complex in the case of a dipping reflector, as the first reflection originates from further up the direction of dip (see diagram) and therefore the travel-time plot will show a reduced dip that is defined by the "migrator's equation":

tan ξa = sin ξ

where ξa is the apparent dip and ξ is the true dip.
Zero-offset data is important to a geophysicist because the migration operation is much simpler, and can be represented by spherical surfaces. When data is acquired at non-zero offsets, the sphere becomes an ellipsoid and is much more complex to represent (both geometrically, as well as computationally).
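A minimal numerical sketch of the two zero-offset relationships above, assuming a constant velocity; the velocity, time and dip values are hypothetical.

import math

def reflector_depth(two_way_time_s, velocity_m_s):
    """Zero-offset depth of a flat reflector: d = v*t/2."""
    return velocity_m_s * two_way_time_s / 2.0

def true_dip_deg(apparent_dip_deg):
    """Migrator's equation tan(apparent dip) = sin(true dip), solved for the true dip.
    Valid while tan(apparent dip) <= 1, i.e. apparent dips of at most 45 degrees."""
    return math.degrees(math.asin(math.tan(math.radians(apparent_dip_deg))))

print(reflector_depth(1.2, 2500.0))   # 1500.0 m
print(round(true_dip_deg(30.0), 1))   # 35.3 degrees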
Use
For a geophysicist, complex geology is defined as anywhere there is an abrupt or sharp contrast in lateral and/or vertical velocity (e.g. a sudden change in rock type or lithology which causes a sharp change in seismic wave velocity).
Some examples of what a geophysicist considers complex geology are: faulting, folding, (some) fracturing, salt bodies, and unconformities. In these situations a form of migration is used called pre-stack migration (PreSM), in which all traces are migrated before being moved to zero-offset. Consequently, much more information is used, which results in a much better image, along with the fact that PreSM honours velocity changes more accurately than post-stack migration.
Types of migration
Depending on budget, time restrictions and the subsurface geology, geophysicists can employ 1 of 2 fundamental types of migration algorithms, defined by the domain in which they are applied: time migration and depth migration.
Time migration
Time migration is applied to seismic data in time coordinates. This type of migration makes the assumption of only mild lateral velocity variations and this breaks down in the presence of most interesting and complex subsurface structures, particularly salt. Some popularly used time migration algorithms are: Stolt migration, Gazdag and Finite-difference migration.
Depth migration
Depth Migration is applied to seismic data in depth (regular Cartesian) coordinates, which must be calculated from seismic data in time coordinates. This method does therefore require a velocity model, making it resource-intensive because building a seismic velocity model is a long and iterative process. The significant advantage to this migration method is that it can be successfully used in areas with lateral velocity variations, which tend to be the areas that are most interesting to petroleum geologists. Some of the popularly used depth migration algorithms are Kirchhoff depth migration, Reverse Time Migration (RTM), Gaussian Beam Migration and Wave-equation migration.
Resolution
The goal of migration is to ultimately increase spatial resolution and one of the basic assumptions made about the seismic data is that it only shows primary reflections and all noise has been removed. In order to ensure maximum resolution (and therefore maximum uplift in image quality) the data should be sufficiently pre-processed before migration. Noise that may be easy to distinguish pre-migration could be smeared across the entire aperture length during migration, reducing image sharpness and clarity.
A further basic consideration is whether to use 2D or 3D migration. If the seismic data has an element of cross-dip (a layer that dips perpendicular to the line of acquisition) then the primary reflection will originate from out-of-plane and 2D migration cannot put the energy back to its origin. In this case, 3D migration is needed to attain the best possible image.
Modern seismic processing computers are more capable of performing 3D migration, so the question of whether to allocate resources to performing 3D migration is less of a concern.
Graphical migration
The simplest form of migration is that of graphical migration. Graphical migration assumes a constant velocity world and zero-offset data, in which a geophysicist draws spheres or circles from the receiver to the event location for all events. The intersection of the circles then form the reflector's "true" location in time or space. An example of such can be seen in the diagram.
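A toy Python illustration of graphical migration under the same constant-velocity, zero-offset assumptions; the grid spacing, velocity and event list are hypothetical.

import math

def graphical_migration(events, velocity, nx, nz, dx, dz):
    """Smear each zero-offset event (surface x, two-way time, amplitude) along its
    semicircle of radius v*t/2; the superposition of circles approximates the image."""
    image = [[0.0] * nx for _ in range(nz)]
    for x_src, t, amp in events:
        radius = velocity * t / 2.0
        for ix in range(nx):
            lateral = ix * dx - x_src
            if abs(lateral) <= radius:
                z = math.sqrt(radius * radius - lateral * lateral)
                iz = int(round(z / dz))
                if 0 <= iz < nz:
                    image[iz][ix] += amp
    return image

# Hypothetical events: (surface position in m, two-way time in s, amplitude).
image = graphical_migration([(500.0, 0.40, 1.0), (600.0, 0.42, 1.0)],
                            velocity=2500.0, nx=100, nz=80, dx=12.5, dz=12.5)

Where many such circles intersect, the stacked amplitudes outline the reflector's true position.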
Technical details
Migration of seismic data is the correction of the flat-geological-layer assumption by a numerical, grid-based spatial convolution of the seismic data to account for dipping events (where geological layers are not flat). There are many approaches, such as the popular Kirchhoff migration, but it is generally accepted that processing large spatial sections (apertures) of the data at a time introduces fewer errors, and that depth migration is far superior to time migration with large dips and with complex salt bodies.
Basically, it repositions/moves the energy (seismic data) from the recorded locations to the locations with the correct common midpoint (CMP). While the seismic data is received at the proper locations originally (according to the laws of nature), these locations do not correspond with the assumed CMP for that location. Though stacking the data without the migration corrections yields a somewhat inaccurate picture of the subsurface, migration is preferred because it produces a more accurate image of the subsurface for drilling and maintaining oilfields. This process is a central step in the creation of an image of the subsurface from active source seismic data collected at the surface, seabed, boreholes, etc., and therefore is used on industrial scales by oil and gas companies and their service providers on digital computers.
Explained in another way, this process attempts to account for wave dispersion from dipping reflectors and also for the spatial and directional seismic wave speed (heterogeneity) variations, which cause wavefields (modelled by ray paths) to bend, wave fronts to cross (caustics), and waves to be recorded at positions different from those that would be expected under straight ray or other simplifying assumptions. Finally, this process often attempts to also preserve and extract the formation interface reflectivity information imbedded in the seismic data amplitudes, so that they can be used to reconstruct the elastic properties of the geological formations (amplitude preservation, seismic inversion). There are a variety of migration algorithms, which can be classified by their output domain into the broad categories of time migration or depth migration, and pre-stack migration or post-stack migration (orthogonal) techniques. Depth migration begins with time data converted to depth data by a spatial geological velocity profile. Post-stack migration begins with seismic data which has already been stacked, and thus already lost valuable velocity analysis information.
See also
Reflection seismology
Seismic Unix, open source software for processing of seismic reflection data
References
Geophysics
Seismology | Seismic migration | [
"Physics"
] | 1,928 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
11,956,097 | https://en.wikipedia.org/wiki/System%20migration | System migration involves moving a set of instructions or programs, e.g., PLC (programmable logic controller) programs, from one platform to another, minimizing reengineering.
Migration of systems can also involve downtime, while the old system is replaced with a new one.
Migration can be from a mainframe computer, which has a closed architecture, to an open system that employs x86 servers. Migration can also be from an open system to a cloud computing platform; the motivation for this is often cost savings. Migration can be simplified by tools that automatically convert data from one form to another. There are also tools to convert the code from one platform to another, to be either compiled or interpreted. Vendors of such tools include Micro Focus and Metamining. An alternative to converting the code is the use of software that can run the code from the old system on the new system. Examples are Oracle Tuxedo Application Rehosting Workbench, Morphis - Transformer and products for LINC 4GL, and Ispirer's products and services for database and application migration.
Migration may also be required when the hardware is no longer available. See JOVIAL.
See also
Data conversion
Data migration
Data transformation
Software migration
Software modernization
List of Linux adopters
References
Software maintenance | System migration | [
"Technology",
"Engineering"
] | 263 | [
"Computer science stubs",
"Software engineering",
"Computer science",
"Software maintenance",
"Computing stubs"
] |
11,956,390 | https://en.wikipedia.org/wiki/Environmental%20Health%20Perspectives | Environmental Health Perspectives (EHP) is a peer-reviewed open access journal published monthly with support from the U.S. National Institute of Environmental Health Sciences (NIEHS). The primary purposes of EHP are to communicate recent scientific findings and trends in the environmental health sciences; to improve the environmental health knowledge base among researchers, administrators, and policy makers; and to inform the public about important topics in environmental health.
References
Environmental social science journals
Academic journals established in 1972
Monthly journals
English-language journals
Open access journals
Environmental health journals
Academic journals published by the United States government | Environmental Health Perspectives | [
"Environmental_science"
] | 117 | [
"Environmental social science journals",
"Environmental science journals",
"Environmental social science stubs",
"Environmental science journal stubs",
"Environmental social science",
"Environmental health journals"
] |
11,956,461 | https://en.wikipedia.org/wiki/United%20States%20House%20Transportation%20Subcommittee%20on%20Railroads%2C%20Pipelines%2C%20and%20Hazardous%20Materials | The Subcommittee on Railroads, Pipelines, and Hazardous Materials is a subcommittee within the House Transportation and Infrastructure Committee.
Jurisdiction
The Subcommittee oversees regulation of railroads by the Surface Transportation Board, including economic regulations; Amtrak; rail safety; the Federal Railroad Administration; and the National Mediation Board, which handles railway labor disputes. It also oversees the Pipeline and Hazardous Materials Safety Administration within the U.S. Department of Transportation, which is responsible for the safety of the nation's oil and gas pipelines as well as the transportation of hazardous materials.
Members, 118th Congress
Historical membership rosters
115th Congress
116th Congress
117th Congress
References
External links
Subcommittee website
Transportation Railroads
United States railroad regulation
Hazardous materials
Transport safety organizations
Pipelines in the United States | United States House Transportation Subcommittee on Railroads, Pipelines, and Hazardous Materials | [
"Physics",
"Chemistry",
"Technology"
] | 150 | [
"Materials",
"Hazardous materials",
"Matter"
] |
11,957,006 | https://en.wikipedia.org/wiki/Tekla | Tekla is a software product family that consists of programs for analysis and design, detailing and project communication. Tekla software is produced by Trimble, the publicly listed US-based technology company.
History
Tekla Corporation was a software engineering company specialised in model-based software products for building, construction and infrastructure management. The company was listed on the Helsinki Stock Exchange from May 2000 until February 2012.
The name Tekla is a given name, used in the Nordic countries, in Poland and in Georgia. However, in this case it is an abbreviation of the Finnish words Teknillinen laskenta, which means technical computation.
In May 2011, California-based business technology specialist Trimble Navigation announced a public tender offer to acquire Tekla for $450 million. The acquisition was completed in February 2012.
In November 2013, Trimble acquired CSC, a UK-based engineering software company and Tekla's former business partner. This added Fastrack, Orion, and Tedds to their software portfolio. In November 2013, CSC was formally merged with Tekla.
In January 2016, Tekla Corporation as an organization changed its name to Trimble.
Software
Tekla engineering software has been around since the late 1960s.
Tekla Structures is 3D building information modeling (BIM) software used in the building and construction industries for steel and concrete detailing, both precast and cast in-situ. The software enables users to create and manage 3D structural models in concrete or steel, and guides them through the process from concept to fabrication. The creation of shop drawings is automated, along with the generation of CNC files, files for controlling reinforcement-bending machines and precast concrete manufacturing, and exports to PLM systems. Tekla Structures is available in different configurations and localised environments to suit different segment- and culture-specific needs.
Tekla Structural Designer is software for the analysis and design of concrete and steel buildings.
Tekla Tedds is an application for automating repetitive structural and civil calculations. The software is used in engineering for creating output such as calculations, sketches and notes.
Tekla BIMsight is a software application for building information model-based construction project collaboration. It can import models from other BIM applications using the Industry Foundation Classes (IFC) format, also DWG and DGN. With Tekla BIMsight, users can perform spatial co-ordination (clash or conflict checking) to avoid design and constructability issues, and communicate with others in their construction project by sharing models and notes.
See also
Comparison of CAD editors for CAE
References
Engineering companies of Finland
Software companies of Finland
Computer-aided design software
Computer-aided engineering software
Building information modeling
Product lifecycle management
Companies based in Espoo
Design companies established in 1966
Electronics companies established in 1966
Technology companies established in 1966
Finnish brands
Finnish companies established in 1966
Companies formerly listed on Nasdaq Helsinki | Tekla | [
"Engineering"
] | 582 | [
"Building engineering",
"Building information modeling"
] |
11,958,355 | https://en.wikipedia.org/wiki/Young%20Generation%20Network | The Young Generation Network, or YGN, is a branch of the Nuclear Institute founded in 1996. It is a British version of the European Young Generation Network created earlier in Sweden by Jan Runermark, a president of ABB Atom who had been concerned with preserving the know-how of retiring nuclear-energy pioneers and who perceived a need for greater efforts to retain young professionals. The YGN, which is open to NI members under the age of 37, organizes lectures, speaking competitions, and facility tours for new nuclear workers in Great Britain. It also conducts its own lobbying efforts, serves as a source for journalists seeking information about the nuclear-industry labour market and promotes careers in science and engineering in schools, colleges and universities.
The UK's objectives are based on those established by the European Nuclear Societies, which are to:
focus on the next generation
promote knowledge in a wide perspective of the nuclear industry
transfer the 'know-how' within generations
provide a platform for:
- personal networks
- exchange of experience
- exchange of best practice across companies
- development of nuclear technology
- recruitment and job opportunities
- career development.
Chairs
Mike Roberts 2019
Rob Ward 2020
Hannah Paterson 2021
Notes
External links
YGN - The NI YGN website, contains lots of information on the work of the YGN and links to useful sites
Nuclear Institute
Nuclear Industry Association - Trade association and information body for the UK civil nuclear industry
European YGN links
References
1 YGN Aims
2 Objectives for a Young Generation Network
Nuclear industry organizations | Young Generation Network | [
"Engineering"
] | 299 | [
"Nuclear industry organizations",
"Nuclear organizations"
] |
11,958,485 | https://en.wikipedia.org/wiki/Britton%E2%80%93Robinson%20buffer | The Britton–Robinson buffer (BRB or PEM) is a "universal" pH buffer used for the pH range from 2 to 12. It has been used historically as an alternative to the McIlvaine buffer, which has a smaller pH range of effectiveness (from 2 to 8).
Universal buffers consist of mixtures of acids of diminishing strength (increasing pKa), so that the change in pH is approximately proportional to the amount of alkali added. It consists of a mixture of 0.04 M boric acid, 0.04 M phosphoric acid and 0.04 M acetic acid that has been titrated to the desired pH with 0.2 M sodium hydroxide. Britton and Robinson also proposed a second formulation that gave an essentially linear pH response to added alkali from pH 2.5 to pH 9.2 (and buffers to pH 12). This mixture consists of 0.0286 M citric acid, 0.0286 M monopotassium phosphate, 0.0286 M boric acid, 0.0286 M veronal and 0.0286 M hydrochloric acid titrated with 0.2 M sodium hydroxide.
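As a rough illustration of the first recipe, the sketch below computes the reagent masses needed for a given volume of the 0.04 M triacid stock before titration with 0.2 M sodium hydroxide. The molar masses are standard values; in practice phosphoric and acetic acid are usually dosed by volume from concentrated liquid stock, which this simple calculation ignores.

```python
# Approximate molar masses in g/mol (standard values).
MOLAR_MASS = {
    "boric acid (H3BO3)": 61.83,
    "phosphoric acid (H3PO4)": 98.00,
    "acetic acid (CH3COOH)": 60.05,
}

def britton_robinson_stock(volume_l, concentration_m=0.04):
    """Grams of each acid for the 0.04 M triacid stock.

    The stock is then titrated to the target pH with 0.2 M NaOH,
    as described in the text.
    """
    moles = concentration_m * volume_l
    return {name: round(moles * mm, 2) for name, mm in MOLAR_MASS.items()}

print(britton_robinson_stock(1.0))
# ~2.47 g boric acid, ~3.92 g phosphoric acid and ~2.40 g acetic acid per litre.
```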
The buffer was invented in 1931 by the English chemist Hubert Thomas Stanley "Kevin" Britton (1892–1960) and the New Zealand chemist Robert Anthony Robinson (1904–1979).
See also
Buffer solution
Good's buffers
References
Acid–base chemistry
Buffer solutions
Chemical tests | Britton–Robinson buffer | [
"Chemistry"
] | 307 | [
"Acid–base chemistry",
"Buffer solutions",
"Chemical tests",
"Equilibrium chemistry",
"nan",
"Analytical chemistry stubs"
] |
11,958,900 | https://en.wikipedia.org/wiki/RechargeIT | RechargeIT is one of five initiatives within Google.org, the charitable arm of Google, created with the aim to reduce CO2 emissions, cut oil use, and stabilize the electrical grid by accelerating the adoption of plug-in electric vehicles. Google.org's official RechargeIT blog has not been updated since 2008.
History
The RechargeIT initiative was unveiled in June 2007. As part of the program Google.org awarded US$1 million in grants and announced plans for a US$10 million request for proposals to fund development, adoption and commercialization of plug-in hybrids, fully electric cars and related vehicle-to-grid (V2G) technology. As part of the program Google established a partnership with Pacific Gas and Electric Co. to develop software for energy management.
Together with the announcement of the initiative, Google also announced that it had switched on the solar panel installation at its Mountain View, California headquarters in order to help the company reduce its environmental footprint and also to power its plug-ins with clean solar electricity. At 1.6 megawatts the project became the largest solar installation at that time on any corporate campus in the U.S. and one of the largest on any corporate site in the world.
The actual rollout of the initiative took place in January 2008.
Google's carsharing program
By early 2010 Google's Mountain View campus had 100 available charging stations for its share-use fleet of converted plug-in hybrids available to its employees through a free carsharing program and for those employees who drive to work in their Tesla Roadster (2008) electric cars. Solar panels are used to generate the electricity, and this pilot program is being monitored on a daily basis and performance results are published on RechargeIT's website.
Driving experiment
In addition to the data collected over two years when the converted plug-ins were driven by Google employees, RechargeIT set up a controlled test using three conventional gasoline vehicles, two regular hybrids and two plug-in conversions, a Ford Escape Hybrid and a Toyota Prius. The results of the seven-week driving experiment for the converted Prius plug-in showed a markedly improved average fuel economy across all trips, with the best figure recorded for city trips, the maximum reached for any of the driving conditions tested.
GFleet and employee shuttles
In order to reduce the carbon footprint of its employees' commute and based on the results of the RechargeIT pilot, the company expanded its corporate carsharing program to create Google GFleet and also introduced shuttle buses powered with biodiesel. In addition, more charging stations were deployed at the Googleplex for employees owning plug-in electric vehicles.
The initial GFleet was made of the converted plug-in hybrids from the RechargeIT initiative, and by mid-2011 Nissan Leafs and Chevrolet Volts were added, expanding the carsharing corporate fleet to more than 30 plug-in electric vehicles. In December 2011, the first production Ford Focus Electric was delivered to Google and incorporated into the GFleet. By early 2012, Honda Fit EVs and Mitsubishi i-MiEVs have also been added to the GFleet. The Fit EV was incorporated as part of Honda's field testing program of its upcoming electric car. Through the partnership Google will analyze the vehicle environmental performance including CO2 reduction, energy consumption and overall energy cost.
Employees who use the biodiesel shuttle system to commute to work at Mountain View, have the GFleet vehicles available for their errands, off-site meetings, and emergencies. Employees can also use GBikes, Google's on-campus bike fleet. As of June 2011, a total of 71 Level 2 chargers were added to the existing 150 Level 1 chargers, bringing the Googleplex total capacity to more than 200 chargers, and another 250 new ones are scheduled to be installed. Google's goal is to electrify 5 percent of the parking spaces—all over campus, free of charge to its employees.
Daily, up to a third of Bay Area employees take the shuttle to work. The corporate coach fleet exceeds the United States Environmental Protection Agency's 2010 bus emission standards. The buses run on 5% biodiesel and are fitted with filtration systems that eliminate many harmful emissions, including nitrogen oxide. Google is testing solar panels on some to power air circulation, so that shuttles can turn off their engines while they wait for passengers, thus reducing fuel use and emissions. As of mid-2011, Google estimated that its GFleet and biodiesel shuttle system resulted in net annual savings of more than 5,400 tonnes of CO2, the equivalent of taking over 2,000 cars per day off the road, or avoiding 14 million vehicle miles every year.
See also
Better Place
CalCars
Google driverless car
Google PowerMeter
Plug-in electric vehicles in California
Plug-in electric vehicles in the United States
United States energy independence
References
External links
RechargeIT home page.
Google.org RechargeIT: Plug-in Hybrids by Google at YouTube. 15 June 2007.
RechargeIT: The Driving Experiment by Google.org at YouTube. 22 July 2008.
Google introduces its G Fleet of Electric Vehicles by Google.org at YouTube. 21 July 2011.
Google.org
Road transportation in California
United States
Sustainable transport | RechargeIT | [
"Physics"
] | 1,085 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
11,959,917 | https://en.wikipedia.org/wiki/Cage%20bed | A cage bed is a bed with either metal bars or netting designed to restrain a person of any given age, including children, within the boundaries of the bed. They were once commonplace in Central and Eastern Europe and used to restrain disabled people, including autistic people and those with learning difficulties, epilepsy, hyperactivity and mental health problems in psychiatric institutions. , the Mental Disability Advocacy Center says cage beds are used in Greece, the Czech Republic and Romania.
Psychiatrists in the Czech Republic previously defended the use of the beds in social care but their use in children's care homes was later banned in the country due to international pressure, and an appeal by J. K. Rowling, who later went on to found Lumos, which promotes an end to the institutionalisation of children worldwide.
See also
Medical restraint
Sedation
References
Beds
Psychiatric restraint
Physical restraint
"Biology"
] | 182 | [
"Beds",
"Behavior",
"Sleep"
] |
11,960,343 | https://en.wikipedia.org/wiki/Artificio%20de%20Juanelo | The ("Gianello's artifice") was the name of two devices built in Toledo in the 16th century by Juanelo Turriano. They were designed to supply the city with a source of readily available water by lifting it from the Tagus () river to the Alcázar. Now in ruins, the precise details of the operation of the devices are unknown, but at the time they were considered engineering wonders.
History
Juanelo Turriano, an Italian-Spanish clock maker, engineer and mathematician, was called to Spain in 1529 and appointed Court Clock Master by the Holy Roman Emperor, Charles V. By 1534 he was working in Toledo, then the capital of the Spanish Empire. Both the Roman aqueduct that had originally supplied the city with water and a giant water wheel constructed by the Moors during the time of the Caliphate of Córdoba had been destroyed and various attempts to supply the demands of the growing city with water by employing the technologies of the time had failed. When Turriano arrived in Toledo, asses were being used to transport pitchers of water from the river to the city. This involved a climb of about 100 metres (330 ft) over uneven ground and was highly inefficient.
Sometime after his arrival in Toledo, Turriano was challenged by Alfonso de Avalos, Marqués de Vasto, to provide a method of transporting water from the Tagus to supply the city. Turriano produced detailed plans for the device but various stoppages and obstacles hindered the construction, among which was the death of Charles V in 1558. Although his successor, Philip II was not as interested in engineering as his father had been and the prestige that Turriano had enjoyed under Charles diminished under his son, Philip nevertheless recognized the value of Turriano, named him and employed him on various engineering projects. Although it appears that at times the construction of the water lifting device may have been abandoned, by 1565 Turriano had managed to re-exert his control over the project and contracted with the city council to provide water to the amount of 1,600 pitchers daily (which amounted to around 12,400 litres (3,275 US gallons)) to the city on a permanent basis within three years.
The exact start date of operation is unknown, but by 1568 the machine was delivering around 14,100 litres (3,725 US gallons) a day, well over the agreed levels. However, the city refused to pay Turriano the agreed price, arguing that since the water was stored at the Alcázar it was for the exclusive use of the royal palace, rather than the town. Frustrated by the council's refusal to pay, and in debt from the costs of the device's construction, Turriano entered into another agreement, underwritten by the Crown, to build a second device for the supply of the city. However, it was agreed that this time he and his heirs would retain the rights for the operation. This second version was completed in 1581, and although the Crown paid the costs of construction, Turriano was unable to cover the costs of maintenance and was forced to give up control of the machine to the city. He died shortly afterwards in 1585.
The machines continued to operate until around 1639 when thefts of parts and lack of maintenance led to both machines falling into disrepair. The first machine was disassembled and the second left standing as a symbol of the city, but operation ceased and water once again had to be transported in pitchers on the backs of asses. Further theft of parts reduced the second machine to ruins, and nowadays little of it remains.
Operation
The device caused a great sensation as the height to which the water was raised was more than double what had been previously achieved. Various waterwheel constructions had previously managed modest lifts, but before the construction of the Artificio, the highest lift had been just under 40 metres (130 ft) at Augsburg using an Archimedean screw.
The details of the construction are the subject of debate, but the most widely accepted design is that proposed by Ladislao Reti, based on fragments of contemporary descriptions. A large water wheel powered a revolving belt with buckets or amphora that transported water to the top of a tower. When the buckets reached the top of the tower they would upend pouring the water into a small tank from where it would travel down to a smaller tower via a pipe. A second water wheel provided mechanical power to pumps that drove a series of cups mounted on arms inside the second tower. The arms of the cups were hollow with an opening at the end which allowed water to run down inside the arm and out of the opposite end. A see-sawing motion of the arms lifted the water to successive levels in the cups. Once the final level was reached the water flowed down a second pipe to a third tower which contained further cups on arms and was also activated by the mechanical power derived from the second water wheel. This final tower raised the water high enough to allow it to flow into the storage tanks at the Alcázar.
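A back-of-envelope calculation gives a sense of the scale of the machine's work. The sketch below uses the delivery and lift figures quoted earlier in the article and the standard hydraulic-power relation, and it ignores friction, spillage and the efficiency of the wheels and cup chains, so the power actually drawn from the river was considerably higher.

```python
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

volume_per_day_l = 14_100   # delivery reported for 1568 (litres/day)
lift_m = 100                # approximate rise from the Tagus to the Alcazar

flow_m3_per_s = (volume_per_day_l / 1000.0) / 86_400
hydraulic_power_w = RHO * G * lift_m * flow_m3_per_s
print(round(hydraulic_power_w))   # roughly 160 W of useful lifting power
```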
References
D. Joaquín Martínez Copeiro del Villar,
"The Codex of Juanelo Turriano (1500-1585)", Technology and Culture 8 (1967): pp. 53–66
José A. Garcia-Diego, "Restoration of Technological Monuments in Spain", Society for the History of Technology., 1972
The new model of the hydraulic machine known as El Artificio de Juanelo in three-dimensional computer simulation
Pumps
Demolished buildings and structures in Spain
1560s works
Toledo, Spain
Italian inventions
Spanish inventions
Buildings and structures demolished in the 17th century
1639 disestablishments in Europe
1560s establishments in Spain | Artificio de Juanelo | [
"Physics",
"Chemistry"
] | 1,167 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
11,960,452 | https://en.wikipedia.org/wiki/Chemically%20strengthened%20glass | Chemically strengthened glass is a type of glass that has increased strength as a result of a post-production chemical process. When broken, it still shatters in long pointed splinters similar to float glass. For this reason, it is not considered a safety glass and must be laminated if safety glass is required. However, chemically strengthened glass is typically six to eight times the strength of float glass. The most common trademark for this kind of glass is Gorilla glass.
History
Glass is one of the oldest materials created by humans, dating back to about 4,000 years ago, when craftsmen working in Mesopotamia, the land between the Tigris and Euphrates Rivers, discovered the art of mixing sand, soda, and lime to make glass. Throughout the ages, humans have explored early ion exchange in glass to decorate and color glass artefacts with silver or copper powders.
It was only around the beginning of the 20th century that the foundations for a possible application of the ion-exchange process in the technical-industrial field were laid. In 1913 Günther Schulze was the first to study the diffusion of silver ions into the glass using silver nitrate salt (AgNO3) as ion source, starting a whole series of studies aimed to understand the chemical and physical nature of the phenomenon and its effects on some physical properties of the glass so treated. In particular, a few years later, in 1918 at the Schott Glass Laboratory, it was demonstrated that ion-exchange produces an increase of the refractive index of the layer of the glass involved in the diffusive process.
After the development of Pyroceram in the 1950s, Corning Inc. began a research and development program called Project Muscle to improve the hardness of glass. By that time, the ion exchange technique had become a well-understood industrial process. S. Donald Stookey started research into using it for strengthening by June 1960, and the topic was discussed at a symposium in Florence in September 1961, but it was Steven Kistler and, independently, Paul Henri Acloque and Jean Paul Tochon of Saint-Gobain who managed to improve the compressive strength threefold in 1962. Replacement of smaller sodium ions (Na+) with larger potassium ones (K+) in the pristine glass matrix was able to prevent or heal over the possible formation of micro/nano-cracks on the specimen surface, increasing its mechanical strength. Soon Corning researchers found that addition of aluminium and zirconium oxides improved the qualities even further. Since then, many efforts have been carried out in this field both at the research and industrial levels.
Process principle
The actual strength of the glass is not significantly altered by the ion exchange process. Instead, a state of beneficial residual stress is introduced by a surface finishing process. Glass is submerged in a bath of a molten potassium salt (typically potassium nitrate) at elevated temperature. This causes sodium ions in the glass surface to be replaced by potassium ions from the bath.
These potassium ions are larger than the sodium ions and therefore wedge into the gaps left by the smaller sodium ions when they migrate to the molten potassium nitrate. This replacement of ions causes the surface of the glass to be in a state of compression and the core in compensating tension. The surface compression of chemically strengthened glass can reach several hundred megapascals.
The strengthening mechanism depends on the fact that the compressive strength of glass is significantly higher than its tensile strength. With both surfaces of the glass already in compression, it takes a certain amount of bending before one of the surfaces can even go into tension. More bending is required to reach the tensile strength. The other surface simply experiences more and more compressive stress. But since the compressive strength is so much larger, no compressive failure is experienced.
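The superposition argument above can be made concrete with a few lines of arithmetic. The sketch below adds the locked-in surface compression to an applied bending stress and reports when the surface first sees net tension; the stress values are illustrative assumptions, not measured properties of any particular glass.

```python
def net_surface_stress(applied_bending_mpa, residual_compression_mpa):
    """Net stress at the convex (stretched) surface under bending.

    Tension is positive, compression negative. The surface only reaches
    net tension once the applied bending stress exceeds the residual
    compression locked in by ion exchange.
    """
    return applied_bending_mpa - residual_compression_mpa

# Illustrative values (assumptions, not data from the article):
residual_compression = 500.0   # MPa of ion-exchange surface compression
tensile_strength = 50.0        # MPa, flaw-limited strength of untreated glass

# Bending stress at which the strengthened surface finally fails in tension:
print(residual_compression + tensile_strength)          # 550 MPa vs ~50 MPa untreated
print(net_surface_stress(300.0, residual_compression))  # -200 MPa: still in compression
```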
Because the surface of chemically strengthened glass is in compression, it is also significantly more scratch resistant than untreated glass. This is why cell phone screens are typically made this way. Since phones are commonly carried in a pocket or purse with items such as keys, scratch resistance is important.
There also exists a more advanced two-stage process for making chemically strengthened glass, in which the glass article is first immersed in a sodium nitrate bath at 450 °C (842 °F), which enriches the surface with sodium ions. This leaves more sodium ions on the glass for the immersion in potassium nitrate to replace with potassium ions. In this way, the use of a sodium nitrate bath increases the potential for surface compression in the finished article.
Chemical strengthening results in a strengthening similar to toughened glass. However, the process does not use extreme variations of temperature and therefore chemically strengthened glass has little or no bow or warp, optical distortion, or strain pattern. This differs from toughened glass, in which slender pieces can be significantly bowed.
Also unlike toughened glass, chemically strengthened glass may be cut after strengthening, but loses its added strength within approximately 20 mm of the cut. Similarly, when the surface of chemically strengthened glass is deeply scratched, this area loses its additional strength.
Another negative of chemically strengthened glass is the added cost. While tempered glass can be made cheaply through the fabrication process, chemically strengthened glass has a more expensive route to the market. These costs make the product prohibitive for use in many applications.
Chemically strengthened glass was used for the aircraft canopy of some fighter aircraft.
See also
Architectural glass
Superfest
References
Glass types
Glass coating and surface modification | Chemically strengthened glass | [
"Chemistry"
] | 1,106 | [
"Glass chemistry",
"Coatings",
"Glass coating and surface modification"
] |
11,960,507 | https://en.wikipedia.org/wiki/Iron%20peak | The iron peak is a local maximum in the vicinity of Fe (Cr, Mn, Fe, Co and Ni) on the graph of the abundances of the chemical elements.
For elements lighter than iron on the periodic table, nuclear fusion releases energy. For iron, and for all of the heavier elements, nuclear fusion consumes energy. Chemical elements up to the iron peak are produced in ordinary stellar nucleosynthesis, with the alpha elements being particularly abundant. Some heavier elements are produced by less efficient processes such as the r-process and s-process. Elements with atomic numbers close to iron are produced in large quantities in supernovae due to explosive oxygen and silicon fusion, followed by radioactive decay of nuclei such as Nickel-56. On average, heavier elements are less abundant in the universe, but some of those near iron are comparatively more abundant than would be expected from this trend.
Binding energy
A graph of the nuclear binding energy per nucleon for all the elements shows a sharp increase to a peak near nickel and then a slow decrease to heavier elements. Increasing values of binding energy represent energy released when a collection of nuclei is rearranged into another collection for which the sum of nuclear binding energies is higher. Light elements such as hydrogen release large amounts of energy (a big increase in binding energy) when combined to form heavier nuclei. Conversely, heavy elements such as uranium release energy when converted to lighter nuclei through alpha decay and nuclear fission. Nickel-56 is the most thermodynamically favorable product in the cores of high-mass stars. Although iron-58 and nickel-62 have even higher (per nucleon) binding energy, their synthesis cannot be achieved in large quantities, because the required number of neutrons is typically not available in the stellar nuclear material, and they cannot be produced in the alpha process (their mass numbers are not multiples of 4).
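The shape of that curve can be sketched with the semi-empirical (liquid-drop) mass formula. The coefficients below are one common fit (values differ slightly between fits), and the simple formula only reproduces the broad maximum near the iron peak and the slow decline toward heavy nuclei; it is not accurate enough to resolve the fine ordering of iron-56, iron-58 and nickel-62.

```python
import math

def binding_energy_per_nucleon(Z, A):
    """Liquid-drop (semi-empirical mass formula) binding energy per nucleon, MeV.

    Coefficients are one common fit; other fits differ slightly.
    """
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = aP / math.sqrt(A)       # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -aP / math.sqrt(A)      # odd-odd nuclei are less bound
    else:
        pairing = 0.0
    B = (aV * A - aS * A ** (2 / 3) - aC * Z * (Z - 1) / A ** (1 / 3)
         - aA * (A - 2 * Z) ** 2 / A + pairing)
    return B / A

for name, Z, A in [("O-16", 8, 16), ("Ca-40", 20, 40), ("Fe-56", 26, 56),
                   ("Ni-62", 28, 62), ("U-238", 92, 238)]:
    print(name, round(binding_energy_per_nucleon(Z, A), 2), "MeV per nucleon")
# Output rises to a broad maximum of roughly 8.8-8.9 MeV near Fe-56/Ni-62
# and falls to about 7.6 MeV for U-238.
```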
See also
Abundances of the elements (data page)
References
Astrophysics | Iron peak | [
"Physics",
"Astronomy"
] | 393 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
11,960,797 | https://en.wikipedia.org/wiki/Dibromofluoromethane%20%28data%20page%29 | This page provides supplementary chemical data on dibromofluoromethane.
Material Safety Data Sheet
The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions.
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Dibromofluoromethane (data page) | [
"Chemistry"
] | 87 | [
"Chemical data pages",
"nan"
] |
11,960,901 | https://en.wikipedia.org/wiki/WS-Trust | WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange.
The WS-Trust specification was authored by representatives of a number of companies, and was approved by OASIS as a standard in March 2007.
Using the extensions defined in WS-Trust, applications can engage in secure communication designed to work within the Web services framework.
Overview
WS-Trust defines a number of new elements, concepts and artifacts in support of that goal, including:
the concept of a Security Token Service (STS) - a web service that issues security tokens as defined in the WS-Security specification.
the formats of the messages used to request security tokens and the responses to those messages.
mechanisms for key exchange
WS-Trust is then implemented within Web services libraries, provided by vendors or by open source collaborative efforts. Web services frameworks that implement the WS-Trust protocols for token requests include Microsoft's Windows Communication Foundation (WCF) and Windows Identity Foundation (WIF, integrated into the .NET Framework as of version 4.5), Sun's WSIT framework, Apache's Rampart (part of Axis2), and others. In addition, vendors or other groups may deliver products that act as a Security Token Service, or STS. Microsoft's Access Control Services is one such service, available online today. PingIdentity Corporation also markets an STS. Microsoft's ADFS also provides an implementation of an STS.
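As a rough illustration of the message formats mentioned above, the sketch below prints the core of a token-issue request (RequestSecurityToken). The element names and namespace URIs follow WS-Trust 1.3 and WS-Addressing conventions, but the token type and endpoint address are placeholders, and a real request travels inside a SOAP envelope protected with WS-Security; consult the specification for the exact profile.

```python
# Sketch of the body of a WS-Trust "Issue" request (RequestSecurityToken).
# The token type URI and endpoint address below are placeholders.
WST = "http://docs.oasis-open.org/ws-sx/ws-trust/200512"
WSP = "http://schemas.xmlsoap.org/ws/2004/09/policy"
WSA = "http://www.w3.org/2005/08/addressing"

rst = f"""
<wst:RequestSecurityToken xmlns:wst="{WST}">
  <wst:RequestType>{WST}/Issue</wst:RequestType>
  <wst:TokenType>urn:example:desired-token-type</wst:TokenType>
  <wsp:AppliesTo xmlns:wsp="{WSP}">
    <wsa:EndpointReference xmlns:wsa="{WSA}">
      <wsa:Address>https://service.example.com/orders</wsa:Address>
    </wsa:EndpointReference>
  </wsp:AppliesTo>
</wst:RequestSecurityToken>
""".strip()

print(rst)  # the STS answers with a RequestSecurityTokenResponse (RSTR)
```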
Authors
The companies involved in defining WS-Trust were:
Actional Corporation, BEA Systems, Inc.
Computer Associates International, Inc.
International Business Machines Corporation
Layer 7 Technologies
Microsoft Corporation
Oblix Inc.
OpenNetwork Technologies Inc.
Ping Identity Corporation
Reactivity Inc.
RSA Security Inc.
VeriSign Inc
References
External links
WS-Trust specification document, v1.4
WS-Trust specification document, v1.3
OASIS' Web Services Secure Exchange (WS-SX) Technical Committee
IBM's page on Web Services Trust Language
See also
WS-Security
WS-* Web Service Specifications
Web Services
OASIS (organization)
Security Tokens
Security Token Service (STS)
Identity management
Web service specifications
Security technology
Computer access control
Federated identity
Identity management
Identity management systems | WS-Trust | [
"Technology",
"Engineering"
] | 507 | [
"Computing stubs",
"World Wide Web stubs",
"Cybersecurity engineering",
"Computer access control"
] |
11,961,883 | https://en.wikipedia.org/wiki/Aerobic%20digestion | Aerobic digestion is a process in sewage treatment designed to reduce the volume of sewage sludge and make it suitable for subsequent use. More recently, technology has been developed that allows the treatment and reduction of other organic waste, such as food, cardboard and horticultural waste.
It is a bacterial process occurring in the presence of oxygen. Bacteria rapidly consume organic matter and convert it into carbon dioxide, water and a range of lower molecular weight organic compounds. As there is no new supply of organic material from sewage, the activated sludge biota begin to die and are used as food by saprotrophic bacteria. This stage of the process is known as endogenous respiration, and it is this process that reduces the solids concentration in the sludge.
Process
Aerobic digestion is typically used in an activated sludge treatment plant. Waste activated sludge and primary sludge are combined, where appropriate, and passed to a thickener where the solids content is increased. This substantially reduces the volume that is required to be treated in the digester. The process is usually run as a batch process with more than one digester tank in operation at any one time.
Air is pumped through the tank and the contents are stirred to keep the contents fully mixed. Carbon dioxide, waste air and small quantities of other gases including hydrogen sulfide are given off. These waste gases require treatment to reduce odours in works close to housing or capable of generating public nuisance. The digestion is continued until the percentage of degradable solids is reduced to between 20% and 10% depending on local conditions.
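Digester sizing calculations often treat this destruction of degradable volatile solids as a first-order decay. The sketch below applies that model to the 20% and 10% endpoints mentioned above; the decay constant is an assumed, temperature-dependent illustration rather than a value from the article.

```python
import math

def days_to_reach(remaining_fraction, kd_per_day):
    """Time for first-order destruction of degradable volatile solids.

    VS(t) = VS0 * exp(-kd * t)  =>  t = -ln(remaining_fraction) / kd
    """
    return -math.log(remaining_fraction) / kd_per_day

kd = 0.05   # assumed decay constant, 1/day (strongly temperature-dependent)
for frac in (0.20, 0.10):
    print(f"reduce degradable solids to {frac:.0%}: about {days_to_reach(frac, kd):.0f} days")
# With this assumed kd: ~32 days to reach 20% and ~46 days to reach 10%.
```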
Where non-sewage waste is being processed, organic waste such as food, cardboard and horticultural waste can be significantly reduced in volume leaving an output that can be used as soil improver or biomass fuel.
Advantages
Aerobic digestion occurs much faster than anaerobic digestion. The process is usually run at ambient temperature and the process is much less complex and easier to manage than anaerobic digestion.
Disadvantages
The operating costs are typically much greater for aerobic digestion than for anaerobic digestion because of energy used by the blowers, pumps and motors needed to add oxygen to the process. However, recent technological advances include non-electrically aerated filter systems that use natural air currents for the aeration instead of electrically operated machinery.
The digested sludge is relatively low in residual energy and although it can be dried and incinerated to produce heat, the energy yield is very much lower than that produced by anaerobic digestion.
Autothermal thermophilic aerobic digestion
Autothermal thermophilic aerobic digestion is a faecal sludge treatment design concept that uses the nutrients in the sludge and the metabolic heat of the bacteria to create high temperatures in the aerobic digester. This gradually shifts the microbial community towards thermophilic at temperatures typically at 55-degree Celsius or above. While the higher aeration requirements of autothermal thermophilic aerobic digestion further increases energy use and potential smell nuisance, the increased temperature makes the resulting biosolids much safer for re-use.
References
External links
WHY AEROBIC DIGESTION IS SET TO MODERNISE FOOD WASTE MANAGEMENT blue-castle.co.uk
Biodegradable waste management
Environmental engineering
Mechanical biological treatment
Sewerage
Water technology | Aerobic digestion | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 677 | [
"Biodegradable waste management",
"Chemical engineering",
"Water pollution",
"Sewerage",
"Biodegradation",
"Civil engineering",
"Environmental engineering",
"Water technology"
] |
11,962,452 | https://en.wikipedia.org/wiki/Oxazolam | Oxazolam is a drug that is a benzodiazepine derivative. It has anxiolytic, anticonvulsant, sedative, and skeletal muscle relaxant properties. It is a prodrug for desmethyldiazepam.
See also
Benzodiazepine
References
Chloroarenes
GABAA receptor positive allosteric modulators
Lactams
Oxazolobenzodiazepines
Prodrugs | Oxazolam | [
"Chemistry"
] | 98 | [
"Chemicals in medicine",
"Prodrugs"
] |
11,962,478 | https://en.wikipedia.org/wiki/Parking%20orbit | A parking orbit is a temporary orbit used during the launch of a spacecraft. A launch vehicle follows a trajectory to the parking orbit, then coasts for a while, then engines fire again to enter the final desired trajectory.
An alternative trajectory that is used on some missions is direct injection, where the rocket fires continuously (except during staging) until its fuel is exhausted, ending with the payload on the final trajectory. This technique was first used by the Soviet Venera 1 mission to Venus in 1961.
Reasons for use
Geostationary spacecraft
Geostationary spacecraft require an orbit in the plane of the equator. Getting there requires a geostationary transfer orbit with an apogee directly above the equator. Unless the launch site itself is quite close to the equator, it requires an impractically large amount of fuel to launch a spacecraft directly into such an orbit. Instead, the craft is placed with an upper stage in an inclined parking orbit. When the craft crosses the equator, the upper stage is fired to raise the spacecraft's apogee to geostationary altitude (and often reduce the inclination of the transfer orbit, as well). Finally, a circularization burn is required to raise the perigee to the same altitude and remove any remaining inclination.
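The penalty for an inclined parking orbit can be sketched with the vis-viva equation and a single combined circularization and plane-change burn at apogee. The radii, altitudes and inclinations below are rounded illustrations, and real missions split the burn and fine-tune the geometry.

```python
import math

MU = 3.986e14              # Earth's gravitational parameter, m^3/s^2
R_GEO = 42_164e3           # geostationary orbit radius, m
R_PARK = 6_378e3 + 185e3   # radius of a ~185 km parking orbit, m

def visviva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

a_gto = (R_PARK + R_GEO) / 2.0
v_apogee = visviva(R_GEO, a_gto)    # speed at GTO apogee, ~1.6 km/s
v_geo = visviva(R_GEO, R_GEO)       # circular GEO speed, ~3.07 km/s

def apogee_burn(inclination_deg):
    """Delta-v to circularize and remove inclination in one combined burn."""
    i = math.radians(inclination_deg)
    return math.sqrt(v_apogee**2 + v_geo**2 - 2.0 * v_apogee * v_geo * math.cos(i))

print(round(apogee_burn(28.5)))   # ~1840 m/s from a 28.5-degree transfer orbit
print(round(apogee_burn(0.0)))    # ~1480 m/s from an equatorial transfer orbit
```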
Translunar or interplanetary spacecraft
In order to reach the Moon or a planet at a desired time, the spacecraft must be launched within a limited range of times known as a launch window. Using a preliminary parking orbit before final injection can widen this window from seconds or minutes, to several hours. For the Apollo program's crewed lunar missions, a parking orbit allowed time for spacecraft checkout while still close to home, before committing to the lunar trip.
Design challenges
The use of a parking orbit can lead to a number of technical challenges. For example, during the development of the Centaur upper stage, the following problems were noted and had to be addressed:
The injection burn occurs under zero g conditions.
If the same upper stage which performs the parking orbit injection is used for the final injection burn, a restartable liquid-propellant rocket engine is required.
During the parking orbit coast, the propellants will drift away from the bottom of the tank and the pump inlets. This must be dealt with through the use of tank diaphragms, or ullage rockets to settle the propellant back to the bottom of the tank.
A reaction control system is needed to orient the stage properly for the final burn, and perhaps to establish a suitable thermal orientation during coast.
Cryogenic propellants must be stored in well-insulated tanks, to prevent excessive boiloff during coast.
Battery life and other consumables must be sufficient for the duration of the parking coast and final injection.
The Centaur and Agena families of upper stages were designed for restarts and have often been used in missions using parking orbits. The last Agena flew in 1987, but Centaur is still in production. The Briz-M is also capable of coasts and restarts, and often performs the same role for Russian rockets.
Examples
The Apollo program used parking orbits, for all the reasons mentioned above except those that pertain to geostationary orbits.
When the Space Shuttle orbiter launched interplanetary probes such as Galileo, it used a parking orbit to deliver the probe to the right injection spot.
The Ariane 5 does not usually use parking orbits. This simplifies the launcher since multiple restart is not needed, and the penalty is small for their typical GTO mission, as their launch site is close to the equator. A less commonly used second stage, the Ariane-5ES has multiple restart capability, and has been used for missions such as the Automated Transfer Vehicle (ATV) that use parking orbits. The Ariane 6 upper stage supports multiple restarts and can be used with missions that require parking orbits.
In a literal example of a parking orbit, the Automated Transfer Vehicle could park for several months in orbit while waiting to rendezvous with the International Space Station. For safety reasons, the ATV could not approach the station while a Space Shuttle was docked or when a Soyuz or Progress was maneuvering to dock or depart.
References
Astrodynamics
Spacecraft propulsion
Orbits | Parking orbit | [
"Engineering"
] | 851 | [
"Astrodynamics",
"Aerospace engineering"
] |
11,962,567 | https://en.wikipedia.org/wiki/Caristi%20fixed-point%20theorem | In mathematics, the Caristi fixed-point theorem (also known as the Caristi–Kirk fixed-point theorem) generalizes the Banach fixed-point theorem for maps of a complete metric space into itself. Caristi's fixed-point theorem modifies the -variational principle of Ekeland (1974, 1979). The conclusion of Caristi's theorem is equivalent to metric completeness, as proved by Weston (1977).
The original result is due to the mathematicians James Caristi and William Arthur Kirk.
Caristi fixed-point theorem can be applied to derive other classical fixed-point results, and also to prove the existence of bounded solutions of a functional equation.
Statement of the theorem
Let (X, d) be a complete metric space. Let T : X → X and let f : X → [0, ∞) be a lower semicontinuous function from X into the non-negative real numbers. Suppose that, for all points x in X, d(x, T(x)) ≤ f(x) − f(T(x)).
Then T has a fixed point in X; that is, a point x₀ such that T(x₀) = x₀. The proof of this result utilizes Zorn's lemma to guarantee the existence of a minimal element which turns out to be a desired fixed point.
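As noted above, Caristi's theorem can be used to recover other classical fixed-point results. A standard illustration, sketched here rather than taken from the article, is the derivation of the Banach fixed-point theorem:

```latex
% Sketch: Banach's fixed-point theorem via Caristi's theorem.
% Let T be a contraction with constant 0 \le k < 1 on a complete metric space (X, d).
% The potential
\[
  f(x) = \frac{d(x, Tx)}{1 - k}
\]
% is continuous, hence lower semicontinuous. Since d(Tx, T^2 x) \le k\, d(x, Tx),
\[
  f(x) - f(Tx) = \frac{d(x, Tx) - d(Tx, T^{2}x)}{1 - k}
  \;\ge\; \frac{d(x, Tx) - k\, d(x, Tx)}{1 - k} = d(x, Tx),
\]
% so the Caristi condition d(x, Tx) \le f(x) - f(Tx) holds and T has a fixed point;
% uniqueness then follows from the contraction property.
```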
References
Fixed-point theorems
Metric geometry
Theorems in real analysis | Caristi fixed-point theorem | [
"Mathematics"
] | 239 | [
"Theorems in mathematical analysis",
"Theorems in real analysis",
"Fixed-point theorems",
"Theorems in topology"
] |
11,962,589 | https://en.wikipedia.org/wiki/Parachute%20%28drugs%29 | Parachuting or bombing is a method of swallowing drugs by rolling or folding powdered or crushed drugs in a piece of edible paper to ingest while avoiding the unpleasant taste of the chemical. It is sometimes called a "snow bomb", especially if using cocaine.
This method is used with many pharmaceuticals that are commonly crushed for recreational use. When toilet paper is used it must be single-ply, or the layers of double-ply paper must be separated. Tissues are another common choice for this method of ingestion, as is rolling paper intended for smoking herbal substances or tobacco. Rice or starch paper, known as oblaat in Japan, is becoming more popular. Opioids, amphetamines, benzodiazepines and other narcotics are commonly parachuted. The method is used recreationally because the drug is absorbed all at once when the paper unravels in the stomach.
References
Drug culture
Drug paraphernalia
Routes of administration | Parachute (drugs) | [
"Chemistry"
] | 204 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs",
"Routes of administration"
] |
11,962,687 | https://en.wikipedia.org/wiki/Grid-oriented%20storage | Grid-oriented Storage (GOS) was a term used for data storage by a university project during the era when the term grid computing was popular.
Description
GOS was a successor of the term network-attached storage (NAS). GOS systems contained hard disks, often RAIDs (redundant arrays of independent disks), like traditional file servers.
GOS was designed to deal with long-distance, cross-domain and single-image file operations, which are typical in Grid environments. GOS appears as a file server, via the file-based GOS-FS protocol, to any entity on the grid. Similar to GridFTP, GOS-FS integrates a parallel stream engine and the Grid Security Infrastructure (GSI).
Conforming to the universal VFS (Virtual Filesystem Switch), GOS-FS can be pervasively used as an underlying platform to best utilize the increased transfer bandwidth and accelerate the NFS/CIFS-based applications. GOS can also run over SCSI, Fibre Channel or iSCSI, which does not affect the acceleration performance, offering both file level protocols and block level protocols for storage area network (SAN) from the same system.
In a grid infrastructure, resources may be geographically distant from each other, produced by differing manufacturers, and have differing access control policies. This makes access to grid resources dynamic and conditional upon local constraints. Centralized management techniques for these resources are limited in their scalability both in terms of execution efficiency and fault tolerance. Provision of services across such platforms requires a distributed resource management mechanism and the peer-to-peer clustered GOS appliances allow a single storage image to continue to expand, even if a single GOS appliance reaches its capacity limitations. The cluster shares a common, aggregate presentation of the data stored on all participating GOS appliances. Each GOS appliance manages its own internal storage space. The major benefit of this aggregation is that clustered GOS storage can be accessed by users as a single mount point.
GOS products fit the thin-server categorization. Compared with traditional “fat server”-based storage architectures, thin-server GOS appliances deliver numerous advantages, such as the alleviation of potential network/grid bottle-necks, CPU and OS optimized for I/O only, ease of installation, remote management and minimal maintenance, low cost and Plug and Play, etc. Examples of similar innovations include NAS, printers, fax machines, routers and switches.
An Apache server has been installed in the GOS operating system, ensuring an HTTPS-based communication between the GOS server and an administrator via a Web browser. Remote management and monitoring makes it easy to set up, manage, and monitor GOS systems.
History
Frank Zhigang Wang and Na Helian submitted a funding proposal to the UK government titled “Grid-Oriented Storage (GOS): Next Generation Data Storage System Architecture for the Grid Computing Era” in 2003. The proposal was approved and granted one million pounds in 2004. The first prototype was constructed in 2005 at the Centre for Grid Computing, Cambridge-Cranfield High Performance Computing Facility. The first conference presentation was at the IEEE Symposium on Cluster Computing and Grid (CCGrid), 9–12 May 2005, Cardiff, UK. As one of the five best works-in-progress, it was included in IEEE Distributed Systems Online. In 2006, the GOS architecture and its implementations were published in IEEE Transactions on Computers under the title “Grid-oriented Storage: A Single-Image, Cross-Domain, High-Bandwidth Architecture”.
Starting in January 2007, demonstrations were presented at Princeton University, Cambridge University Computer Lab and others.
By 2013, the Cranfield Centre still used future tense for the project.
Peer-to-peer file sharing systems use similar techniques.
Notes
Further reading
Frank Wang, Na Helian, Sining Wu, Yuhui Deng, Yike Guo, Steve Thompson, Ian Johnson, Dave Milward & Robert Maddock, Grid-Oriented Storage, IEEE Distributed Systems Online, Volume 6, Issue 9, Sept. 2005.
Frank Wang, Sining Wu, Na Helian, Andy Parker, Yike Guo, Yuhui Deng, Vineet Khare, Grid-oriented Storage: A Single-Image, Cross-Domain, High-Bandwidth Architecture, IEEE Transactions on Computers, Vol.56, No.4, pp. 474–487, 2007.
Frank Zhigang Wang, Sining Wu, Na Helian, An Underlying Data-Transporting Protocol for Accelerating Web Communications, International Journal of Computer Networks, Elsevier, 2007.
Frank Zhigang Wang, Sining Wu, Na Helian, Yuhui Deng, Vineet Khare, Chris Thompson and Michael Parker, Grid-based Data Access to Nucleotide Sequence Database with 6x Improvement in Response Times, New Generation Computing, No.2, Vol.25, 2007.
Frank Wang, Yuhui Deng, Na Helian, Evolutionary Storage: Speeding up a Magnetic Disk by Clustering Frequent Data, IEEE Transactions on Magnetics, Issue.6, Vol.43, 2007.
Frank Zhigang Wang, Na Helian, Sining Wu, Yuhui Deng, Vineet Khare, Chris Thompson and Michael Parker, Grid-based Storage Architecture for Accelerating Bioinformatics Computing, Journal of VLSI Signal Processing Systems, No.1, Vol.48, 2007.
Yuhui Deng and Frank Wang, A Heterogeneous Storage Grid Enabled by Grid Service, ACM Operating System Review, No.1, Vol.41, 2007.
Yuhui Deng & Frank Wang, Optimal Clustering Size of Small File Access in Network Attached Storage Device, Parallel Processing Letters, No.1, Vol.17, 2007.
Data management | Grid-oriented storage | [
"Technology"
] | 1,175 | [
"Data management",
"Data"
] |
11,963,095 | https://en.wikipedia.org/wiki/Stanley%20Skewes | Stanley Skewes (; 1899–1988) was a South African mathematician, best known for his discovery of the Skewes's number in 1933. He was one of John Edensor Littlewood's students at Cambridge University. Skewes's numbers contributed to the refinement of the theory of prime numbers.
Academic career
Skewes obtained a degree in civil engineering from the University of Cape Town before emigrating to England. He studied mathematics at Cambridge University and obtained a PhD in mathematics in 1938.
He discovered the first Skewes's number in 1933. It is sometimes referred to as the Riemann-true Skewes's number because its derivation assumes that the Riemann hypothesis holds. He discovered the second Skewes's number in 1955; this number applies if the Riemann hypothesis is false. Since his original discoveries the bounds have been further refined.
Publications
Personal life
Stanley Skewes was born in Germiston, South Africa in 1899. His parents were Henry (Harry) Skewes, a tin miner and assayer from Cury, Cornwall, England and Emily Moyle, who was American by birth. His parents moved from Redruth, Cornwall in 1894 to the Transvaal, South Africa.
He married Ena Allen. She was the daughter of the head chef at King's College, Cambridge, and a talented opera singer. Among his contemporaries at Cambridge was Alan Turing. They rowed together at Cambridge. Although Skewes returned to South Africa, he revisited Cambridge and Cornwall. He was also a keen rugby player in his youth.
Skewes and his number are discussed by Isaac Asimov in his book Of Matters Great and Small and in the 20th edition of the Guinness Book of Records.
A memorandum written by Skewes on his retirement was kept in a glass case in the department of mathematics at the University of Cape Town. The memorandum discusses Skewes's number and its further development.
He died in 1988 in Cape Town, South Africa.
References
Number theorists
South African people of Cornish descent
Alumni of the University of Cambridge
University of Cape Town alumni
1899 births
1988 deaths
20th-century South African mathematicians | Stanley Skewes | [
"Mathematics"
] | 448 | [
"Number theorists",
"Number theory"
] |
11,963,397 | https://en.wikipedia.org/wiki/Norwegian%20Safety%20Investigation%20Authority | The Norwegian Safety Investigation Authority (NSIA; , SHK) is the government agency responsible for investigating transport-related accidents within Norway. Specifically, it investigates aviation accidents and incidents, rail accidents, maritime accidents, select traffic accidents, and serious incidents in the defence sector.
All investigations aim to find underlying causes and to improve safety; criminal investigation is not part of the agency's mandate. Subordinate to the Ministry of Transport, the agency is located on the premises of Kjeller Airport in Skedsmo.
Traditionally marine accidents were investigated by the Institute of Maritime Enquiry, which mixed safety investigation, criminal liability and civil liability into a combined inquiry. Aviation accidents and major rail accidents were investigated by ad hoc commissions. The Accident Investigation Board for Civil Aviation was established as a permanent organization on 1 January 1989, originally based at Oslo Airport, Fornebu. From 2002 it also took over the investigation of rail accidents; road accidents were included in 2005, marine accidents in 2008 and finally defence sector accidents in 2020.
History
Former commissions
Traditionally, marine accident investigation was carried out by the Institute of Maritime Enquiry () and the Permanent Investigation Board for Special Accidents in the Fisheries Fleet. This system centered around mandatory inquiries carried out by a district court. In exceptional cases the Norwegian government had the jurisdiction to appoint an ad hoc investigation board.
At the time of Norway's first major civilian aviation accident, the Havørn accident on 16 June 1936, no particular routine existed for investigating aviation accidents. An ad hoc commission was established at the scene to investigate it, consisting of Chief of Police Alf Reksten, Sheriff Kaare Bredvik, the Norwegian Air Lines' technical director Bernt Balchen, Captain Eckhoff of the aviation authorities, and Gjermundson from the insurance company.
A similar arrangement was in place from 1945 to 1956, whereby the government appointed an accident investigation commission for each accident and incident. These commissions had no permanent organization or members and were appointed for each accident on an ad hoc basis. Their members normally consisted of staff from the Norwegian Air Traffic and Airport Management and the Royal Norwegian Air Force. In addition, they had representatives from the Norwegian Police Service and the Norwegian Prosecuting Authority.
From 1956, a permanent secretariat was appointed, the Aviation Accident Commission (), but the various commission members were only tied to the commission during the period of the investigation. By the 1980s, the secretariat had grown to two full-time technicians and a clerk. The accident commissions of this period had a significantly different mission from their successors. Firstly, they only investigated actual accidents of a certain size; general-aviation accidents and near accidents were not investigated. Secondly, the commissions were tasked both with uncovering the cause of the accident from a safety point of view and with uncovering any criminal conduct. This was the reason for including police and prosecution officials in the commissions.
Within the railway sector, accident investigation had been carried out by the Norwegian State Railways and its successors, the Norwegian National Rail Administration and the Norwegian State Railways. Major accidents were thereby investigated by in-house commissions with the potential for conflicts of interest, or through ad hoc committees appointed by the government. As with marine accidents, it was ultimately a subjective decision by the Ministry of Transport and Communications whether a committee was needed.
Establishment
During the 1980s, a shift occurred in the view of aviation accident commissions, and by 1988, the Ministry of Transport and Communications launched a specific proposal to create a permanent agency responsible for aviation accident and incident investigation. This followed changes to international law according to regulations set down by the International Civil Aviation Organization. The ministry primarily stressed that the mix of criminal and safety-preventative investigation was contrary to international law, and that near-accidents were being investigated by NATAM. Secondary concerns were that, since the commission members were part-time employees, investigations would drag on unnecessarily, as freeing up the members from their regular jobs was often difficult. The investigators also often arrived late at the accident scene because of these conditions. One alternative proposal was to make the commission part of the Civil Aviation Authority of Norway, although this never materialized as the latter was not created until later.
The legal framework took effect on 1 January 1989, and the same day, the Accident Investigation Board for Civil Aviation () was established. Initially it was organized as an office within the Ministry of Transport and Communications. Its first director was Ragnar Rygnestad, who had been the former commission's secretary for ten years. The board soon received five employees. The legal changes meant that near-accidents were also investigated, significantly increasing the number of cases to be handled. By June, the agency still did not have sufficient staff to handle all cases, and in particular had not yet brought in expertise in human behavior and psychology. It was initially based at Villa Hareløkka on the premises of Oslo Airport, Fornebu in Bærum. In addition, it used a military hangar at Kjeller Airport to store and reconstruct aircraft parts.
The board was reorganized from 1 July 1999, when it was split out of the ministry and became an independent government agency. By then, the agency had 15 employees. Oslo Airport, Fornebu was closed down in 1998, so the board was forced to move from its premises. The board subsequently relocated to a temporary site on the premises of Kjeller Airport in Skedsmo. A new, tailor-made structure was opened within the military perimeter of the airport in May 2001. Designed by Knut Longva, it features both offices and a hangar measuring .
Rail and road
Meanwhile, the government started looking at expanding the agency's role. While these commissions had technical competence, their transient nature meant they were not sufficiently methodical in their investigations. Creating a permanent staff and a larger specialist environment was seen as a way to allow for better investigation and reporting. Two accidents in particular were pivotal to this move: the sinking of MS Sleipner and the Åsta accident, which killed 16 and 19 people, respectively. Although the government wanted to include all modes of transport, legal and practical reasons caused the railway sector to be the first to be included.
The agency took over responsibility for investigating railway accidents from 1 January 2002. It simultaneously took the name Accident Investigation Board for Civil Aviation and Railways (, HSLB). At the time, the agency investigated about 100 to 150 aviation accidents and incidents per year and about 60 railway accidents and incidents per year.
The next expansion involved road traffic accidents, taking effect on 1 September 2005. Unlike in aviation and railway accidents, only a select few road accidents were to be investigated. These were selected based on their ability to provide useful information to improve road safety. In particular bus and truck accidents were prioritized, along with tunnel accidents and ones involving dangerous goods. The agency initially hired four investigators, who were expected to investigate twenty to twenty-five accidents per year. This comprised 3.4 million Norwegian krone of the agency's 31.8 million budget. One advantage of the agency was that it could allow for protected testimonies, without these having to be subject to criminal investigation by the Norwegian Police Service. The agency thereby took its current name. Meanwhile, the Norwegian Public Roads Administration established a group of regional offices to investigate other accidents and aggregate information from them.
During this period, discussions also arose as to whether the board should investigate cases related to pipelines and accidents on oil platforms. This discussion came from the mandate of the United States' National Transportation Safety Board, which had such authority.
Marine expansion
In particular, the investigations of MS Scandinavian Star, MS Estonia, and MS Jan Heweliusz during the 1990s led to questioning of the quality of marine accident investigations, and the other Nordic countries established marine investigation boards during that decade. Work on reforming the marine system started in 1998 and resulted in a committee recommending that the system be scrapped in favor of an accident investigation board. The prime reason was that the Institute of Maritime Enquiry was regarded as having insufficient competence to investigate major accidents. There were also concerns that the system mixed criminal prosecution with safety investigation, which could hinder proper learning from an accident. The system used court interviews with witnesses, while owners, insurance companies, and the press were present. Facing both legal and economic consequences, witnesses would often give testimonies of reduced accuracy, hindering proper investigation from a safety point of view.
Initially, the committee proposed an independent investigation board for the maritime sector, either as part of the Norwegian Maritime Authority or as an independent agency subordinate to the Ministry of Justice. During the political discussions a joint board was favored instead. Parliament approved the new jurisdiction in 2004. However, it took four years to implement the decision, and nine years from when the committee's conclusions were presented. The delays were caused by the legal implications and complexity of the investigations. Because the board was only to investigate from a safety point of view, a new legal and administrative framework had to be implemented to ensure that the Norwegian Police Service could take over the responsibility for criminal investigation of the marine accidents.
The changes took effect on 1 July 2008. In addition to the investigation aspect, which was issued to the board, the Maritime Authority established a division to work with strategic safety. Criminal investigation of marine accidents became the responsibility of an office at Rogaland Police District.
Defense sector expansion
In July 2019, the Norwegian government announced that the AIBN would merge with the Defence Accident Investigation Board Norway (DAIBN) in 2020. The AIBN would take over the work of the DAIBN on 1 July 2020 under the new name Norwegian Safety Investigation Authority (NSIA).
Mandate
The Accident Investigation Board Norway is a government agency subordinate to the Ministry of Transport and Communications. In questions related to maritime safety, it reports to the Ministry of Trade, Industry, and Fisheries. Neither ministry can instruct the board in professional matters. The agency is mandated to investigate transport-related accidents and incidents within the scope of aviation, maritime, rail, and road transport. The board's responsibility is to determine which accidents and incidents are to be investigated, and the scope and scale of any investigations. This is a trade-off between use of resources and the perceived safety benefits from further inquiries.
AIBN's goal is exclusively to look into the safety aspects of accidents, with the overall goal to uncover causes and the line of events so as to learn, improve safety, and hinder similar accidents from happening again. The board is not involved in any assessment of blame or liability, whether under criminal or civil law. Criminal investigation is carried out by the Norwegian Police Service and prosecution by the Norwegian Prosecuting Authority. In particular, the board can accept testimonies, which can remain anonymous and will under no circumstances be handed over to the police or prosecuting authorities. The board's responsibilities are delineated towards those of the Police Service and the Prosecution Authority, as well as those of the Civil Aviation Authority of Norway, the Norwegian Maritime Authority, the Norwegian Public Roads Administration, and the Norwegian Railway Authority.
Aviation accidents are mandated through the Aviation Act of 11 June 1993, which again references Council Directive 94/56/EC of 21 November 1994. This includes all aviation accidents, as well as serious incidents.
Marine accident and incident investigation is based on the Norwegian Maritime Code of 24 June 1994. This is again based on obligations and requirements stipulated under the International Convention for the Safety of Life at Sea. This includes all accidents with passenger ships and other large Norwegian vessels in which people have or are assumed to have lost their lives or been substantially injured. AIBN may also investigate foreign ships in cases where Norwegian jurisdiction can be applied under international law. AIBN may also investigate accidents with recreational boats if such an inquiry is presumed to improve safety at sea.
Rail accident and incident investigation has its legal basis in the Railway Investigation Act of 3 June 2005. This is again a national incorporation of the European Union's Railway Safety Directive 2004/49/EC. The responsibility includes mainline railways, tramways, and rapid transit, but not funiculars.
Road accident investigation is based on the Road Traffic Act of 18 June 1965. AIBN has no legal obligation to investigate any specific road accidents, although it is to be informed of any accidents involving buses and heavy trucks, and accidents in tunnels and those involving dangerous goods. AIBN then decides whether to investigate the matter, based on an assessment of whether an investigation can further road traffic safety.
References
Safety Investigation Authority
Norway
Organizations investigating aviation accidents and incidents
Aviation organisations based in Norway
Organisations based in Lillestrøm
Government railway authorities of Norway
Road safety organizations
Government agencies established in 1989
1989 establishments in Norway
Transport authorities of Norway
Ministry of Transport (Norway)
Transport safety organizations | Norwegian Safety Investigation Authority | [
"Technology"
] | 2,573 | [
"Railway accidents and incidents",
"Rail accident investigators"
] |
11,963,455 | https://en.wikipedia.org/wiki/HD%20192699 | HD 192699 is a yellow subgiant star located approximately 214 light-years away in the constellation of Aquila. It has an apparent magnitude of 6.45. Based on its mass of 1.68 solar masses, it was an A-type star when it was on the main sequence. In April 2007, a planet was announced orbiting the star, together with HD 175541 b and HD 210702 b.
The star HD 192699 is named Chechia. The name was selected in the NameExoWorlds campaign by Tunisia, during the 100th anniversary of the IAU. Chechia is a flat-surfaced, traditional red wool hat.
See also
HD 175541
HD 210702
List of extrasolar planets
References
External links
G-type subgiants
Planetary systems with one confirmed planet
Aquila (constellation)
Durchmusterung objects
192699
099894 | HD 192699 | [
"Astronomy"
] | 186 | [
"Aquila (constellation)",
"Constellations"
] |
11,963,550 | https://en.wikipedia.org/wiki/Ultra%20low%20expansion%20glass | Ultra low expansion glass (ULE) is a registered trademark of Corning Incorporated. ULE has a very low coefficient of thermal expansion; its components are silica and less than 10% titanium dioxide. This low thermal expansion makes it very resistant to thermal shock at high temperatures. ULE has been made by Corning since the 1960s and remains important in current applications.
Applications
There are many applications for ULE, but by far the most common is for mirrors and lenses for telescopes in both space and terrestrial settings. One of the best-known examples of the use of ULE is in the Hubble Space Telescope's mirror. Another good example of its application is in the Gemini telescope's mirror blanks. This type of material is needed because telescope mirrors, especially very large, high-precision units, cannot bend or lose their shape even slightly; if they did, the telescope would be out of focus. Some other examples and uses of ULE are:
Ultra-low expansion substrates for mirrors and other optics
Length standards
Lightweight honeycomb mirror mounts
Astronomical telescopes
Precision measurement technology
Laser cavities
Another, newer use for this material that is showing promise is in the semiconductor industry, again because of the purity and extremely low expansion of ULE glass.
Structure
The structure of ULE is completely amorphous; because of this it is a glass, not a ceramic. The amorphous structure arises because there are no crystalline phases within the material, so there is no long-range order.
Processing
The way that ULE is made is very different from the standard way that glass is made. Instead of mixing powdered materials into a batch, melting that batch, and pouring out sheets of glass, ULE, being such a high-temperature glass, has to be made by a flame hydrolysis process. In this process, high-purity precursors are injected into the flames, which causes them to react and form TiO2 and SiO2. The TiO2 and SiO2 are then deposited onto the growing glass.
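The article does not name the precursors; a commonly cited flame-hydrolysis route (an assumption used here for illustration, not taken from the text) feeds the chlorides into the flame, where they react with water formed by combustion:
SiCl4 + 2 H2O → SiO2 + 4 HCl
TiCl4 + 2 H2O → TiO2 + 4 HCl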
Properties
Thermal
Ultra low expansion glass has a coefficient of thermal expansion of about 10−8/K at 5–35 °C. It has a thermal conductivity of 1.31 W/(m·°C), a thermal diffusivity of 0.0079 cm2/s, a mean specific heat of 767 J/(kg·°C), a strain point of 890 °C [1634 °F], an annealing point of 1000 °C [1832 °F], and an estimated softening point of 1490 °C [2714 °F].
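To put the coefficient in perspective, a simple linear-expansion estimate (the 1 m length and 10 K temperature swing are illustrative assumptions, not figures from the article) gives
$\Delta L = \alpha L \Delta T \approx (1 \times 10^{-8}\,\mathrm{K}^{-1})(1\,\mathrm{m})(10\,\mathrm{K}) = 0.1\,\mu\mathrm{m}$,
i.e. a one-metre blank changes length by only about a tenth of a micrometre over a 10 K swing.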
Mechanical
Ultra low expansion glass has an ultimate tensile strength of , a Poisson’s ratio 0.17, a density of (), a specific stiffness of (), a shear modulus of (), a bulk modulus of (), and an elastic modulus of ().
Optical
ULE has a Stress Optical Coefficient of 4.15 (nm/cm)/(kg/cm3) [0.292(nm/cm) psi], and an Abbe number of 53.1.
Other
ULE has a high resistance to weathering because of its hardness, and is also unaffected by nearly all chemical agents. ULE also shows no changes when quickly cooled from 350 °C.
References
Glass compositions
Low thermal expansion materials | Ultra low expansion glass | [
"Physics",
"Chemistry"
] | 687 | [
"Glass chemistry",
"Glass compositions",
"Low thermal expansion materials",
"Materials",
"Matter"
] |
11,963,553 | https://en.wikipedia.org/wiki/HD%20210702 | HD 210702 is a star with an orbiting exoplanet in the northern constellation of Pegasus. It has an apparent visual magnitude of 5.93, which is bright enough that the star is dimly visible to the naked eye. The distance to HD 210702 is 177 light years based on parallax measurements, and it is drifting further away with a radial velocity of 18.5 km/s. It is a probable member of the Ursa Major moving group, an association of co-moving stars.
Although a stellar classification of K1 III suggests this is an evolved giant star, it is more likely to be a subgiant star currently at the base of the red giant branch. Currently 3 billion years old, HD 210702 spent its main-sequence life as an A-type star. Consistent with its evolutionary status, it has little or no magnetic activity in its chromosphere. The star has 1.5 times the mass of the Sun and has expanded to 4.9 times the Sun's radius. It is radiating 12.9 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,946 K.
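As a rough consistency check (the solar effective temperature of about 5772 K is an assumed reference value, not given in the article), the Stefan–Boltzmann relation ties the quoted radius, temperature, and luminosity together:
$L/L_\odot = (R/R_\odot)^2\,(T_\mathrm{eff}/T_\odot)^4 \approx 4.9^2 \times (4946/5772)^4 \approx 13$,
in line with the stated 12.9 times the Sun's luminosity.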
Planetary system
The star shows variability in its radial velocity consistent with an exoplanetary companion in a Keplerian orbit, and one was duly discovered in April 2007, from observations at Lick and Keck Observatories in Mount Hamilton (California) and Mauna Kea (Hawai'i), United States. As the inclination of the orbital plane is unknown, only a lower bound on the mass of this object can be estimated. It has at least 1.8 times the mass of Jupiter.
See also
HD 175541
HD 192699
List of extrasolar planets
References
Further reading
K-type subgiants
Planetary systems with one confirmed planet
Pegasus (constellation)
8461
Durchmusterung objects
210702
109577 | HD 210702 | [
"Astronomy"
] | 392 | [
"Pegasus (constellation)",
"Constellations"
] |
11,964,243 | https://en.wikipedia.org/wiki/Hoan%20Kiem%20turtle | The Hoàn Kiếm turtle, also Rafetus leloii, is an obsolete or controversial taxon of turtle from Southeast Asia, based on specimens from Hoàn Kiếm Lake in Hanoi, Vietnam. Most experts classify this turtle as synonymous with the rare Yangtze giant softshell turtle (Rafetus swinhoei), although some Vietnamese biologists asserted that R. leloii is a distinct species. If the two taxa are to be considered distinct, R. leloii may be considered extinct.
The last known turtle, affectionately known to locals as "Cụ Rùa", meaning "great grandfather turtle" in Vietnamese, was reported dead on 19 January 2016. A local man saw the body of the turtle floating in the water and reported it to the authorities. The last time the turtle was spotted alive was on 21 December 2015.
Classification
Most authorities classify leloii as a junior synonym of the Yangtze giant softshell turtle, based on a study by Farkas et al. However, some Vietnamese biologists, such as Hà Đình Đức, who first described leloii, and Le Tran Binh, insist that the two turtles are not the same species. Le points out genetic differences as well as differences in morphology, re-describing the Hoan Kiem turtle as Rafetus vietnamensis. However, Farkas et al. repeated their 2003 conclusion in 2011, stating that differences between specimens may be due to age. They also pointed out that Le et al. did not adequately describe their methods for DNA sequencing, and that the genetic sequences used were never sent to GenBank. They also criticized the fact that Le et al. violated the ICZN Code by renaming the species from leloii to vietnamensis on the grounds of “appropriateness”. Another genetic analysis was purportedly carried out when the turtle was rescued and cleaned, which allegedly showed it to be female and distinct from the R. swinhoei of China and Đồng Mô, Vietnam. However, the results were not formally announced or ever published in a peer-reviewed research article, and some skepticism has been cast on the results.
Đức has also hypothesized that Emperor Thái Tổ of the Lê dynasty brought the turtles from Thanh Hóa Province and released them in Hoàn Kiếm Lake. Recently, Đức and some researchers found skeletons of giant turtles in Yên Bái, Phú Thọ and Hòa Bình provinces.
Mythology
Stories of the Hoàn Kiếm turtle began in the fifteenth century with Lê Lợi, who became an emperor of Vietnam and founder of the Lê dynasty. According to legend, Lê Lợi had the sword named Heaven's Will given to him by the Golden Turtle God, Kim Quy.
One day, not long after the war and the Chinese had accepted Vietnam's independence, Lê Lợi was out boating on the lake. Suddenly the Golden Turtle God surfaced, prompting Lê Lợi to return Heaven's Will and thank the divine turtle for its help. The Golden Turtle God took back the sword and disappeared into depths of the lake. Lê Lợi then renamed the lake Hoàn Kiếm Lake (or Hồ Gươm), meaning “The Lake of the Returned Sword.”
Rediscovery
Near the northern shore of Hoàn Kiếm Lake lies Jade Island, on which the Temple of the Jade Mountain is located. On June 2, 1967, a Hoàn Kiếm turtle died from injuries caused by an abusive fisherman who was ordered to net the turtle and carry it, but instead hit the turtle with a crowbar. The turtle's body was preserved and placed on display in the temple. That particular specimen weighed 200 kg (440 lbs) and measured 1.9 metres long (6 ft 3 in). Until that time, no one was sure if the species still lived.
On March 24, 1998, an amateur cameraman caught the creature on video, conclusively proving the elusive creatures still survived in the lake. Prior to its recent rediscovery, the turtles were thought to be only a legend and were classified as cryptozoological.
In 2000, professor Hà Đình Đức gave the Hoàn Kiếm turtle the scientific name Rafetus leloii. This nomenclature has been rejected by other herpetologists who assert leloii is synonymous with swinhoei.
By the spring of 2011, concerned with the Hoàn Kiếm specimen's more frequent than usual surfacing, and apparent lesions on its body, the city authorities started attempts to capture the giant reptile of Hoàn Kiếm Lake, and take it for medical treatment. On February 9, a local turtle farm operator, KAT Group, was chosen to prepare a suitable net to capture the sacred animal.
The first attempt, on March 8, 2011, failed, as the turtle made a hole in the net with which the workers tried to capture it, and escaped. An expert commented, “It’s hard to catch a large, very large soft-shell turtle.” On March 31, in an unusual act, the turtle went to the shore to bask in the sun.
Finally, on April 3, 2011, the giant turtle was netted in an operation that involved members of the Vietnamese military. The captured creature was put into an enclosure constructed on an island in the middle of the lake, for study and treatment. According to the scientists involved, the turtle was determined to be female, and genetic research suggested it was distinct from the R. swinhoei turtles in China, and Đồng Mô in Vietnam.
Some witnesses believe there are at least two or three turtles living in Hoàn Kiếm Lake and that the “smaller” one appears more regularly. Đức rejected these reports.
The last known Hoan Kiem turtle was found dead on 19 January 2016.
Conservation concerns
Despite eyewitness sightings of two or more turtles, Đức believed that there was only one specimen left in Hoàn Kiếm Lake. Peter Pritchard, a turtle biologist, believed that there were no more than five specimens left in 2008.
The lake itself is both small and shallow, measuring 200 metres wide, 600 metres long, and only two metres deep. It is also badly polluted, although the turtles could conceivably live underwater indefinitely, coming to the surface only for an occasional gulp of air or a bit of sun. According to Pritchard, the turtles are threatened by municipal “improvements” around the lake. The banks have been almost entirely cemented over, leaving only a few yards of rocky beach where a turtle might find a place to bury her clutches of 100 or more eggs.
Plans were underway to clean the lake of pollution, and the construction of an artificial beach had been proposed to facilitate breeding. Dredging the lake, to clean up its bottom, was carried out in February and March 2011.
Đức urged people to protect this animal and is quoted as saying, “We hope that we will find a partner for the turtle in Hồ Gươm, so that our legendary animal could avoid extinction.” Believing the turtle to be different from R. swinhoei, he is against the idea of crossbreeding turtles of the two kinds.
The turtle died in 2016.
References
External links
Reptiles of Vietnam
Rafetus
Controversial taxa
Tourist attractions in Hanoi
Cụ Rùa | Hoan Kiem turtle | [
"Biology"
] | 1,476 | [
"Biological hypotheses",
"Controversial taxa"
] |
11,965,281 | https://en.wikipedia.org/wiki/Lau%20event | The Lau event was the last of three relatively minor mass extinctions (the Ireviken, Mulde, and Lau events) during the Silurian period. It had a major effect on the conodont fauna but barely scathed the graptolites, though they suffered an extinction very shortly thereafter, termed the Kozlowskii event, which some authors have suggested was coeval with the Lau event and only appears asynchronous for taphonomic reasons. It coincided with a global low point in sea level caused by glacioeustasy and was closely followed by an excursion in geochemical isotopes in the ensuing late Ludfordian faunal stage and a change in depositional regime.
Biological impact
The Lau event started at the beginning of the late Ludfordian, a subdivision of the Ludlow stage, about . Its strata are best exposed on Gotland, Sweden; the event takes its name from the parish of Lau. Its base is set at the first extinction datum, in the Eke beds, and despite a scarcity of data, it is apparent that most major groups suffered an increase in extinction rate during the event; major changes are observed worldwide in correlated rocks, with a "crisis" observed in populations of conodonts and graptolites. More precisely, conodonts suffered in the Lau event, and graptolites in the subsequent isotopic excursion. Local extinctions may have played a role in many places, especially the increasingly enclosed Welsh basin; the event's relatively high severity rating of 6.2 does not change the fact that many life-forms became re-established shortly after the event, presumably surviving in refugia or in environments that have not been preserved in the geological record. Based on its timing, it's possible that this event finished off the palaeoscolecids. Although life persisted after the event, community structures were permanently altered and many lifeforms failed to regain the niches they had occupied before the event.
Isotopic effects
A peak in δ13C, accompanied by fluctuations in other isotope concentrations, is often associated with mass extinctions. Some workers have attempted to explain this event in terms of climate or sea level change – perhaps arising due to a build-up of glaciers; however, such factors alone do not appear to be sufficient to explain the events. An alternative hypothesis is that changes in ocean mixing were responsible. An increase in density is required to make water downwell; the cause of this densification may have changed from hypersalinity (due to ice formation and evaporation) to temperature (due to water cooling). A different hypothesis attributes the carbon isotope fluctuations to methanogenesis caused by the increased influx of iron-bearing dust and consequent disruption of limiting nutrient ratios. Loydell suggests many causes of the isotopic excursion, including increased carbon burial, increased carbonate weathering, changes in atmospheric and oceanic interactions, changes in primary production, and changes in humidity or aridity. He uses a correlation between the events and glacially induced global sea level change to suggest that carbonate weathering is the major player, with other factors playing a less significant role.
The curve slightly lags conodont extinctions, hence the two events may not represent the same thing. Therefore, the term Lau event is used only for the extinction, not the following isotopic activity, which is named after the time period in which it occurred.
A positive excursion of δ34S in pyrite coincides with the positive carbon isotope excursion following the Lau event, likely related to the expansion of euxinic conditions and enhanced pyrite burial.
Sedimentological impact
Profound sedimentary changes occurred at the beginning of the Lau event; these are probably associated with the onset of sea level rise, which continued through the event, reaching a high point at the time of deposition of the Burgsvik beds, after the event.
These changes appear to display anachronism, marked by an increase in erosional surfaces and the return of flat-pebbled conglomerates in the Eke beds. This is further evidence of a major blow to ecosystems of the time – such deposits can only form in conditions similar to those of the early Cambrian period, when life as we know it was only just becoming established. Indeed, stromatolites, which rarely form in the presence of abundant higher life forms, are observed during the Lau event and, occasionally, in the overlying Burgsvik beds; microbial colonies of Rothpletzella and Wetheredella become abundant. This suite of characteristics is common to the larger end-Ordovician and end-Permian extinctions.
See also
Anoxic event
Further reading
Linking the progressive expansion of reducing conditions to a stepwise mass extinction event in the late Silurian oceans
References
Extinction events
Silurian events
Isotope excursions | Lau event | [
"Chemistry",
"Biology"
] | 988 | [
"Evolution of the biosphere",
"Isotope excursions",
"Extinction events",
"Isotopes"
] |
16,204,398 | https://en.wikipedia.org/wiki/Esscher%20principle | The Esscher principle is an insurance premium principle. It is given by $\pi(X) = \operatorname{E}[X e^{hX}] / \operatorname{E}[e^{hX}]$, where $h$ is a strictly positive parameter. This premium is the net premium for the risk $Y = X e^{hX} / M_X(h)$, where $M_X(h) = \operatorname{E}[e^{hX}]$ denotes the moment generating function of $X$.
The Esscher principle is a risk measure used in actuarial science that derives from the Esscher transform. This risk measure does not respect the positive homogeneity property of a coherent risk measure for $h > 0$.
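As a numerical illustration (not part of the original article; the exponential distribution, rate 1, and h = 0.2 are assumptions chosen for the example), the Esscher premium can be estimated by Monte Carlo in a few lines of Python. For an exponential risk with rate 1 the closed form is 1/(1 − h), so h = 0.2 should give about 1.25, above the net premium E[X] = 1:

# Monte Carlo sketch of the Esscher premium pi_h(X) = E[X e^{hX}] / E[e^{hX}]
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # risk X ~ Exponential(rate 1), so E[X] = 1
h = 0.2                                         # loading parameter, must satisfy 0 < h < rate

premium = np.mean(x * np.exp(h * x)) / np.mean(np.exp(h * x))
print(premium)  # approximately 1/(1 - 0.2) = 1.25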
References
Actuarial science | Esscher principle | [
"Mathematics"
] | 90 | [
"Applied mathematics",
"Actuarial science"
] |