| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
56,095,737 | https://en.wikipedia.org/wiki/Claire%20Vallance | Claire Vallance is a professor of Physical Chemistry at the University of Oxford, Tutorial Fellow in Physical Chemistry at Hertford College, and past President of the Faraday Division of the Royal Society of Chemistry. In collaboration with professor Mark Brouard and others, she created the PImMS (Pixel Imaging Mass Spectrometry) sensor, used for time-of-flight particle imaging and recently featured in the Royal Society of Chemistry's Research Frontiers report. She is co-founder of the spin-out company Oxford HighQ, which is developing next-generation chemical and nanoparticle sensors based on optical microcavity technology. Vallance's research spans chemical reaction dynamics, optical microcavity spectroscopy, and applications of spectroscopy and imaging in medical diagnostics. She is also an accomplished musician and triathlete.
Education
Claire Vallance attended Marlborough Girls' College in Blenheim, New Zealand. She then studied Chemistry, Physics, Mathematics, and Music at the University of Canterbury, where she completed a B.Sc. (Hons) degree in 1995, graduating first in her year. She studied for a Ph.D. under the supervision of Peter Harland, working in gas-phase molecular dynamics, and graduated in early 1999. Upon completion of her studies, she moved to Oxford to take up a Violette and Samuel Glasstone Fellowship in the Physical and Theoretical Chemistry Laboratory and a Junior Research Fellowship at St. Catherine's College.
Honours and awards
Fellow of the Royal Society of Chemistry, 2016
Books
Tutorials in Molecular Reaction Dynamics. RSC Press, 2010. (Joint editor with Mark Brouard)
Astrochemistry: from the Big Bang to the Present Day, World Scientific Press, 2017.
An Introduction to Chemical Kinetics, Morgan-Claypool Publishing, 2017
An Introduction to the Gas Phase, Morgan-Claypool Publishing, 2018
References
External links
Living people
Year of birth missing (living people)
Fellows of Hertford College, Oxford
Physical chemists
New Zealand women chemists
New Zealand pianists
New Zealand women pianists
New Zealand chemists
New Zealand violinists
New Zealand women violinists
21st-century pianists
21st-century violinists
21st-century women pianists
University of Canterbury alumni | Claire Vallance | Chemistry | 448 |
2,518,272 | https://en.wikipedia.org/wiki/Aeroacoustics | Aeroacoustics is a branch of acoustics that studies noise generation via either turbulent fluid motion or aerodynamic forces interacting with surfaces. Noise generation can also be associated with periodically varying flows. A notable example of this phenomenon is the Aeolian tones produced by wind blowing over fixed objects.
Although no complete scientific theory of the generation of noise by aerodynamic flows has been established, most practical aeroacoustic analysis relies upon the so-called aeroacoustic analogy, proposed by Sir James Lighthill in the 1950s while at the University of Manchester, whereby the governing equations of motion of the fluid are coerced into a form reminiscent of the wave equation of "classical" (i.e. linear) acoustics on the left-hand side, with the remaining terms as sources on the right-hand side.
History
The modern discipline of aeroacoustics can be said to have originated with the first publication of Lighthill in the early 1950s, when noise generation associated with the jet engine was beginning to be placed under scientific scrutiny.
Lighthill's equation
Lighthill rearranged the Navier–Stokes equations, which govern the flow of a compressible viscous fluid, into an inhomogeneous wave equation, thereby making a connection between fluid mechanics and acoustics. This is often called "Lighthill's analogy" because it presents a model for the acoustic field that is not, strictly speaking, based on the physics of flow-induced/generated noise, but rather on the analogy of how they might be represented through the governing equations of a compressible fluid.
The continuity and the momentum equations are given by

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,$$

$$\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v} \otimes \mathbf{v}) = -\nabla p + \nabla \cdot \sigma,$$

where $\rho$ is the fluid density, $\mathbf{v}$ is the velocity field, $p$ is the fluid pressure and $\sigma$ is the viscous stress tensor. Note that $\rho \mathbf{v} \otimes \mathbf{v}$ is a tensor (see also tensor product). Differentiating the conservation of mass equation with respect to time, taking the divergence of the momentum equation and subtracting the latter from the former, we arrive at

$$\frac{\partial^2 \rho}{\partial t^2} - \nabla^2 p = (\nabla \nabla) : (\rho \mathbf{v} \otimes \mathbf{v} - \sigma).$$

Subtracting $c_0^2 \nabla^2 \rho$, where $c_0$ is the speed of sound in the medium in its equilibrium (or quiescent) state, from both sides of the last equation results in the celebrated Lighthill equation of aeroacoustics,

$$\frac{\partial^2 \rho}{\partial t^2} - c_0^2 \nabla^2 \rho = (\nabla \nabla) : \mathbf{T}, \qquad \mathbf{T} = \rho \mathbf{v} \otimes \mathbf{v} + \left(p - c_0^2 \rho\right)\mathbb{I} - \sigma,$$

where $\nabla \nabla$ is the Hessian and $\mathbf{T}$ is the so-called Lighthill turbulence stress tensor for the acoustic field. The Lighthill equation is an inhomogeneous wave equation. Using Einstein notation, Lighthill's equation can be written as

$$\frac{\partial^2 \rho}{\partial t^2} - c_0^2 \frac{\partial^2 \rho}{\partial x_i \partial x_i} = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j}, \qquad T_{ij} = \rho v_i v_j + \left(p - c_0^2 \rho\right)\delta_{ij} - \sigma_{ij}.$$
Each of the acoustic source terms, i.e. the terms in $T_{ij}$, may play a significant role in the generation of noise depending upon the flow conditions considered. The first term describes the inertial effect of the flow (or Reynolds stress, developed by Osborne Reynolds), whereas the second term describes non-linear acoustic generation processes, and finally the last term corresponds to sound generation/attenuation due to viscous forces.
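Written out term by term, with the labels used in this paragraph (restating the tensor given above):

$$T_{ij} \;=\; \underbrace{\rho v_i v_j}_{\text{inertial (Reynolds) stress}} \;+\; \underbrace{\left(p - c_0^2 \rho\right)\delta_{ij}}_{\text{non-linear acoustic generation}} \;-\; \underbrace{\sigma_{ij}}_{\text{viscous generation/attenuation}}.$$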
In practice, it is customary to neglect the effects of viscosity on the fluid, as its effects are small in turbulent noise generation problems such as jet noise. Lighthill provides an in-depth discussion of this matter.
In aeroacoustic studies, both theoretical and computational efforts are made to solve for the acoustic source terms in Lighthill's equation in order to make statements regarding the relevant aerodynamic noise generation mechanisms present. Finally, it is important to realize that Lighthill's equation is exact in the sense that no approximations of any kind have been made in its derivation.
Landau–Lifshitz aeroacoustic equation
In their classical text on fluid mechanics, Landau and Lifshitz derive an aeroacoustic equation analogous to Lighthill's (i.e., an equation for sound generated by "turbulent" fluid motion), but for the incompressible flow of an inviscid fluid. The inhomogeneous wave equation that they obtain is for the pressure rather than for the density of the fluid. Furthermore, unlike Lighthill's equation, Landau and Lifshitz's equation is not exact; it is an approximation.
If one is to allow for approximations to be made, a simpler way (without necessarily assuming the fluid is incompressible) to obtain an approximation to Lighthill's equation is to assume that $p - p_0 = c_0^2 (\rho - \rho_0)$, where $\rho_0$ and $p_0$ are the (characteristic) density and pressure of the fluid in its equilibrium state. Then, upon substituting the assumed relation between pressure and density into Lighthill's equation, we obtain the equation (for an inviscid fluid, $\sigma = 0$)

$$\frac{1}{c_0^2} \frac{\partial^2 p}{\partial t^2} - \nabla^2 p = (\nabla \nabla) : (\rho \mathbf{v} \otimes \mathbf{v}).$$
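Indeed, under the assumed relation the middle term of the Lighthill stress tensor is spatially uniform, so its double divergence vanishes and only the momentum-flux term survives on the right-hand side:

$$p - c_0^2 \rho = p_0 - c_0^2 \rho_0 = \text{const} \quad\Longrightarrow\quad (\nabla \nabla) : \left[\left(p - c_0^2 \rho\right)\mathbb{I}\right] = 0,$$

while on the left-hand side $\partial^2 \rho / \partial t^2 = c_0^{-2}\, \partial^2 p / \partial t^2$ and $c_0^2 \nabla^2 \rho = \nabla^2 p$.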
And for the case when the fluid is indeed incompressible, i.e. $\rho = \rho_0$ (for some positive constant $\rho_0$) everywhere, then we obtain exactly the equation given in Landau and Lifshitz, namely

$$\frac{1}{c_0^2} \frac{\partial^2 p}{\partial t^2} - \nabla^2 p = \rho_0 \, (\nabla \nabla) : (\mathbf{v} \otimes \mathbf{v}).$$
A similar approximation [in the context of Lighthill's equation], namely $\mathbf{T} \approx \rho_0 \, \mathbf{v} \otimes \mathbf{v}$, is suggested by Lighthill [see Eq. (7) in the latter paper].
Of course, one might wonder whether we are justified in assuming that $p - p_0 = c_0^2 (\rho - \rho_0)$. The answer is affirmative, if the flow satisfies certain basic assumptions. In particular, if the perturbations of the density and pressure about their equilibrium values are small, then the assumed relation follows directly from the linear theory of sound waves (see, e.g., the linearized Euler equations and the acoustic wave equation). In fact, the approximate relation between $p$ and $\rho$ that we assumed is just a linear approximation to the generic barotropic equation of state of the fluid.
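Concretely, expanding a barotropic equation of state $p = p(\rho)$ to first order about the equilibrium state gives

$$p \;\approx\; p(\rho_0) + \left(\frac{\partial p}{\partial \rho}\right)_{\rho_0} (\rho - \rho_0) \;=\; p_0 + c_0^2 (\rho - \rho_0), \qquad c_0^2 \equiv \left(\frac{\partial p}{\partial \rho}\right)_{\rho_0},$$

which is precisely the relation assumed above.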
However, even after the above deliberations, it is still not clear whether one is justified in using an inherently linear relation to simplify a nonlinear wave equation. Nevertheless, it is a very common practice in nonlinear acoustics as the textbooks on the subject show: e.g., Naugolnykh and Ostrovsky and Hamilton and Morfey.
See also
Acoustic theory
Aeolian harp
Computational aeroacoustics
References
External links
M. J. Lighthill, "On Sound Generated Aerodynamically. I. General Theory," Proc. R. Soc. Lond. A 211 (1952) pp. 564–587. This article on JSTOR.
M. J. Lighthill, "On Sound Generated Aerodynamically. II. Turbulence as a Source of Sound," Proc. R. Soc. Lond. A 222 (1954) pp. 1–32. This article on JSTOR.
L. D. Landau and E. M. Lifshitz, Fluid Mechanics, 2nd ed., Course of Theoretical Physics vol. 6, Butterworth-Heinemann (1987) §75. Preview from Amazon.
K. Naugolnykh and L. Ostrovsky, Nonlinear Wave Processes in Acoustics, Cambridge Texts in Applied Mathematics vol. 9, Cambridge University Press (1998) chap. 1. Preview from Google.
M. F. Hamilton and C. L. Morfey, "Model Equations," Nonlinear Acoustics, eds. M. F. Hamilton and D. T. Blackstock, Academic Press (1998) chap. 3. Preview from Google.
Aeroacoustics at the University of Mississippi
Aeroacoustics at the University of Leuven
International Journal of Aeroacoustics
Examples in Aeroacoustics from NASA
Aeroacoustics.info
Acoustics
Aerodynamics
Fluid dynamics
Sound | Aeroacoustics | Physics,Chemistry,Engineering | 1,453 |
3,120,850 | https://en.wikipedia.org/wiki/Chronic%20wound | A chronic wound is a wound that does not progress through the normal stages of wound healing—haemostasis, inflammation, proliferation, and remodeling—in a predictable and timely manner. Typically, wounds that do not heal within three months are classified as chronic. Chronic wounds may remain in the inflammatory phase due to factors like infection or bacterial burden, ischaemia, presence of necrotic tissue, improper moisture balance of wound site, or underlying diseases such as diabetes mellitus.
In acute wounds, a regulated balance of pro-inflammatory cytokines (signalling molecules) and proteases (enzymes) prevent the degradation of the extracellular matrix (ECM) and collagen to ensure proper wound healing.
In chronic wounds, there are excessive levels of inflammatory cytokines and proteases, leading to excessive degradation of the ECM and collagen. This disrupts tissue repair and impedes recovery, keeping the wound in a non-healing state.
Chronic wounds may take years to heal or, in some cases, may never heal, causing significant physical and emotional stress for patients and placing a financial burden on healthcare systems. Acute and chronic wounds are part of a spectrum, with chronic wounds requiring prolonged and complex care compared to acute wounds.
Signs and symptoms
Chronic wound patients often report pain as dominant in their lives.
It is recommended that healthcare providers handle the pain related to chronic wounds as one of the main priorities in chronic wound management (together with addressing the cause). Six out of ten venous leg ulcer patients experience pain with their ulcer, and similar trends are observed for other chronic wounds.
Persistent pain (at night, at rest, and with activity) is the main problem for patients with chronic ulcers. Frustrations regarding ineffective analgesics and plans of care that they were unable to adhere to were also identified.
Cause
In addition to poor circulation, neuropathy, and difficulty moving, factors that contribute to chronic wounds include systemic illnesses, age, and repeated trauma. The genetic skin disorders collectively known as epidermolysis bullosa display skin fragility and a tendency to develop chronic, non-healing wounds. Comorbid ailments that may contribute to the formation of chronic wounds include vasculitis (an inflammation of blood vessels), immune suppression, pyoderma gangrenosum, and diseases that cause ischemia. Immune suppression can be caused by illnesses or medical drugs used over a long period, like steroids. Emotional stress can also negatively affect the healing of a wound, possibly by raising blood pressure and levels of cortisol, which lowers immunity.
What appears to be a chronic wound may also be a malignancy; for example, cancerous tissue can grow until blood cannot reach the cells and the tissue becomes an ulcer. Cancer, especially squamous cell carcinoma, may also form as the result of chronic wounds, probably due to repetitive tissue damage that stimulates rapid cell proliferation.
Another factor that may contribute to chronic wounds is old age. The skin of older people is more easily damaged, and older cells do not proliferate as fast and may not have an adequate response to stress in terms of gene upregulation of stress-related proteins. In older cells, stress response genes are overexpressed when the cell is not stressed, but when it is, the expression of these proteins is not upregulated by as much as in younger cells.
Comorbid factors that can lead to ischemia are especially likely to contribute to chronic wounds. Such factors include chronic fibrosis, edema, sickle cell disease, and peripheral artery disease such as by atherosclerosis.
Repeated physical trauma plays a role in chronic wound formation by continually initiating the inflammatory cascade. The trauma may occur by accident, for example when a leg is repeatedly bumped against a wheelchair rest, or it may be due to intentional acts. Heroin users who lose venous access may resort to 'skin popping', or injecting the drug subcutaneously, which is highly damaging to tissue and frequently leads to chronic ulcers. Children who are repeatedly seen for a wound that does not heal are sometimes found to be victims of a parent with Munchausen syndrome by proxy, a disease in which the abuser may repeatedly inflict harm on the child in order to receive attention.
Periwound skin damage caused by excessive amounts of exudate and other bodily fluids can perpetuate the non-healing status of chronic wounds. Maceration, excoriation, dry (fragile) skin, hyperkeratosis, callus and eczema are frequent problems that interfere with the integrity of periwound skin. They can create a gateway for infection as well as cause wound edge deterioration preventing wound closure.
Pathophysiology
Chronic wounds may affect only the epidermis and dermis, or they may affect tissues all the way to the fascia. They may be formed originally by the same things that cause acute ones, such as surgery or accidental trauma, or they may form as the result of systemic infection, vascular, immune, or nerve insufficiency, or comorbidities such as neoplasias or metabolic disorders. The reason a wound becomes chronic is that the body's ability to deal with the damage is overwhelmed by factors such as repeated trauma, continued pressure, ischemia, or illness.
Though much progress has been accomplished in the study of chronic wounds lately, advances in the study of their healing have lagged behind expectations. This is partly because animal studies are difficult because animals do not get chronic wounds, since they usually have loose skin that quickly contracts, and they normally do not get old enough or have contributing diseases such as neuropathy or chronic debilitating illnesses. Nonetheless, current researchers now understand some of the major factors that lead to chronic wounds, among which are ischemia, reperfusion injury, and bacterial colonization.
Ischemia
Ischemia is an important factor in the formation and persistence of wounds, especially when it occurs repetitively (as it usually does) or when combined with a patient's old age. Ischemia causes tissue to become inflamed and cells to release factors that attract neutrophils such as interleukins, chemokines, leukotrienes, and complement factors.
While they fight pathogens, neutrophils also release inflammatory cytokines and enzymes that damage cells. One of their important jobs is to produce Reactive Oxygen Species (ROS) to kill bacteria, for which they use an enzyme called myeloperoxidase. The enzymes and ROS produced by neutrophils and other leukocytes damage cells and prevent cell proliferation and wound closure by damaging DNA, lipids, proteins, the extracellular matrix (ECM), and cytokines that speed healing. Neutrophils remain in chronic wounds for longer than they do in acute wounds, and contribute to the fact that chronic wounds have higher levels of inflammatory cytokines and ROS. Since wound fluid from chronic wounds has an excess of proteases and ROS, the fluid itself can inhibit healing by inhibiting cell growth and breaking down growth factors and proteins in the ECM. This impaired healing response is considered uncoordinated. However, soluble mediators of the immune system (growth factors), cell-based therapies and therapeutic chemicals can propagate coordinated healing.
It has been suggested that the three fundamental factors underlying chronic wound pathogenesis are cellular and systemic changes of aging, repeated bouts of ischemia-reperfusion injury, and bacterial colonization with resulting inflammatory host response.
Bacterial colonization
Since more oxygen in the wound environment allows white blood cells to produce ROS to kill bacteria, patients with inadequate tissue oxygenation, for example those who developed hypothermia during surgery, are at higher risk for infection. The host's immune response to the presence of bacteria prolongs inflammation, delays healing, and damages tissue. Infection can lead not only to chronic wounds but also to gangrene, loss of the infected limb, and death of the patient. More recently, an interplay between bacterial colonization and increases in reactive oxygen species leading to formation and production of biofilms has been shown to generate chronic wounds.
Like ischemia, bacterial colonization and infection damage tissue by causing a greater number of neutrophils to enter the wound site. In patients with chronic wounds, bacteria with resistances to antibiotics may have time to develop. In addition, patients that carry drug resistant bacterial strains such as methicillin-resistant Staphylococcus aureus (MRSA) have more chronic wounds.
Growth factors and proteolytic enzymes
Chronic wounds also differ in makeup from acute wounds in that their levels of proteolytic enzymes such as elastase and matrix metalloproteinases (MMPs) are higher, while their concentrations of growth factors such as platelet-derived growth factor and keratinocyte growth factor are lower.
Since growth factors (GFs) are imperative in timely wound healing, inadequate GF levels may be an important factor in chronic wound formation. In chronic wounds, the formation and release of growth factors may be prevented, the factors may be sequestered and unable to perform their metabolic roles, or degraded in excess by cellular or bacterial proteases.
Chronic wounds such as diabetic and venous ulcers are also caused by a failure of fibroblasts to produce adequate ECM proteins and by keratinocytes to epithelialize the wound. Fibroblast gene expression is different in chronic wounds than in acute wounds.
Though all wounds require a certain level of elastase and proteases for proper healing, too high a concentration is damaging. Leukocytes in the wound area release elastase, which increases inflammation, destroys tissue, proteoglycans, and collagen, and damages growth factors, fibronectin, and factors that inhibit proteases. The activity of elastase is increased by human serum albumin, which is the most abundant protein found in chronic wounds. However, chronic wounds with inadequate albumin are especially unlikely to heal, so regulating the wound's levels of that protein may in the future prove helpful in healing chronic wounds.
Excess matrix metalloproteinases, which are released by leukocytes, may also cause wounds to become chronic. MMPs break down ECM molecules, growth factors, and protease inhibitors, and thus increase degradation while reducing construction, throwing the delicate compromise between production and degradation out of balance.
Diagnosis
Infection
If a chronic wound becomes more painful, this is a good indication that it is infected. A lack of pain, however, does not mean that it is not infected. Other methods of determination are less effective.
Classification
The vast majority of chronic wounds can be classified into three categories: venous ulcers, diabetic ulcers, and pressure ulcers. A small number of wounds that do not fall into these categories may be due to causes such as radiation poisoning or ischemia.
Venous and arterial ulcers
Venous ulcers, which usually occur in the legs, account for about 70% to 90% of chronic wounds and mostly affect the elderly. They are thought to be due to venous hypertension caused by improper function of valves that exist in the veins to prevent blood from flowing backward. Ischemia results from the dysfunction and, combined with reperfusion injury, causes the tissue damage that leads to the wounds.
Diabetic ulcers
Another major cause of chronic wounds, diabetes, is increasing in prevalence. Diabetics have a 15% higher risk for amputation than the general population due to chronic ulcers. Diabetes causes neuropathy, which inhibits nociception and the perception of pain. Thus patients may not initially notice small wounds to legs and feet, and may therefore fail to prevent infection or repeated injury. Further, diabetes causes immune compromise and damage to small blood vessels, preventing adequate oxygenation of tissue, which can cause chronic wounds. Pressure also plays a role in the formation of diabetic ulcers.
Pressure ulcers
Another leading type of chronic wounds is pressure ulcers, which usually occur in people with conditions such as paralysis that inhibit movement of body parts that are commonly subjected to pressure such as the heels, shoulder blades, and sacrum. Pressure ulcers are caused by ischemia that occurs when pressure on the tissue is greater than the pressure in capillaries, and thus restricts blood flow into the area. Muscle tissue, which needs more oxygen and nutrients than skin does, shows the worst effects from prolonged pressure. As in other chronic ulcers, reperfusion injury damages tissue.
Treatment
Though treatment of the different chronic wound types varies slightly, appropriate treatment seeks to address the problems at the root of chronic wounds, including ischemia, bacterial load, and imbalance of proteases. Periwound skin issues should be assessed and their abatement included in a proposed treatment plan.
Various methods exist to ameliorate these problems, including antibiotic and antibacterial use, debridement, irrigation, vacuum-assisted closure, warming, oxygenation, moist wound healing (the term pioneered by George D. Winter), removing mechanical stress, and adding cells or other materials to secrete or enhance levels of healing factors.
It is uncertain whether intravenous metronidazole is useful in reducing foul smell from malignant wounds. There is insufficient evidence to support the use of silver-containing dressings or topical agents for the treatment of infected or contaminated chronic wounds. For infected wounds, the following antibiotics are often used (if organisms are susceptible) as oral therapy due to their high bioavailability and good penetration into soft tissues: ciprofloxacin, clindamycin, minocycline, linezolid, moxifloxacin, and trimethoprim-sulfamethoxazole.
The challenge of any treatment is to address as many adverse factors as possible simultaneously, so each of them receives equal attention and does not continue to impede healing as the treatment progresses.
Preventing and treating infection
To lower the bacterial count in wounds, therapists may use topical antibiotics, which kill bacteria and can also help by keeping the wound environment moist,
which is important for speeding the healing of chronic wounds. Some researchers have experimented with the use of tea tree oil, an antibacterial agent which also has anti-inflammatory effects. Disinfectants are contraindicated because they damage tissues and delay wound contraction. Further, they are rendered ineffective by organic matter in wounds like blood and exudate and are thus not useful in open wounds.
A greater amount of exudate and necrotic tissue in a wound increases likelihood of infection by serving as a medium for bacterial growth away from the host's defenses. Since bacteria thrive on dead tissue, wounds are often surgically debrided to remove the devitalized tissue. Debridement and drainage of wound fluid are an especially important part of the treatment for diabetic ulcers, which may create the need for amputation if infection gets out of control. Mechanical removal of bacteria and devitalized tissue is also the idea behind wound irrigation, which is accomplished using pulsed lavage.
Removing necrotic or devitalized tissue is also the aim of maggot therapy, the intentional introduction by a health care practitioner of live, disinfected maggots into non-healing wounds. Maggots dissolve only necrotic, infected tissue; disinfect the wound by killing bacteria; and stimulate wound healing. Maggot therapy has been shown to accelerate debridement of necrotic wounds and reduce the bacterial load of the wound, leading to earlier healing, reduced wound odor and less pain. The combination and interactions of these actions make maggots an extremely potent tool in chronic wound care.
Negative pressure wound therapy (NPWT) is a treatment that improves ischemic tissues and removes wound fluid used by bacteria. This therapy, also known as vacuum-assisted closure, reduces swelling in tissues, which brings more blood and nutrients to the area, as does the negative pressure itself. The treatment also decompresses tissues and alters the shape of cells, causing them to express different mRNAs and to proliferate and produce ECM molecules.
Recent technological advancements produced novel approaches such as self-adaptive wound dressings that rely on properties of smart polymers sensitive to changes in humidity levels. The dressing delivers absorption or hydration as needed over each independent wound area and aids in the natural process of autolytic debridement. It effectively removes liquefied slough and necrotic tissue, disintegrated bacterial biofilm as well as harmful exudate components, known to slow the healing process. The treatment also reduces bacterial load by effective evacuation and immobilization of microorganisms from the wound bed, and subsequent chemical binding of available water that is necessary for their replication. Self-adaptive dressings protect periwound skin from extrinsic factors and infection while regulating moisture balance over vulnerable skin around the wound.
Treating trauma and painful wounds
Persistent chronic pain associated with non-healing wounds is caused by tissue (nociceptive) or nerve (neuropathic) damage and is influenced by dressing changes and chronic inflammation. Chronic wounds take a long time to heal and patients can experience chronic wounds for many years. Chronic wound healing may be compromised by coexisting underlying conditions, such as venous valve backflow, peripheral vascular disease, uncontrolled edema and diabetes mellitus.
If wound pain is not assessed and documented it may be ignored and/or not addressed properly. It is important to remember that increased wound pain may be an indicator of wound complications that need treatment, and therefore practitioners must constantly reassess the wound as well as the associated pain.
Optimal management of wounds requires holistic assessment. Documentation of the patient's pain experience is critical and may range from the use of a patient diary, (which should be patient driven), to recording pain entirely by the healthcare professional or caregiver. Effective communication between the patient and the healthcare team is fundamental to this holistic approach. The more frequently healthcare professionals measure pain, the greater the likelihood of introducing or changing pain management practices.
At present there are few local options for treating persistent pain that also manage the exudate levels present in many chronic wounds. Important properties of such local options are that they provide an optimal wound healing environment while delivering a constant local low-dose release of ibuprofen during wear.
If local treatment does not provide adequate pain reduction, it may be necessary for patients with chronic painful wounds to be prescribed additional systemic treatment for the physical component of their pain. Clinicians should consult with their prescribing colleagues referring to the WHO pain relief ladder of systemic treatment options for guidance. For every pharmacological intervention there are possible benefits and adverse events that the prescribing clinician will need to consider in conjunction with the wound care treatment team.
Ischemia and hypoxia
Blood vessels constrict in tissue that becomes cold and dilate in warm tissue, altering blood flow to the area. Thus keeping the tissues warm is probably necessary to fight both infection and ischemia. Some healthcare professionals use 'radiant bandages' to keep the area warm, and care must be taken during surgery to prevent hypothermia, which increases rates of post-surgical infection.
Underlying ischemia may also be treated surgically by arterial revascularization, for example in diabetic ulcers, and patients with venous ulcers may undergo surgery to correct vein dysfunction.
Diabetics who are not candidates for surgery (and others) may also have their tissue oxygenation increased by Hyperbaric Oxygen Therapy, or HBOT, which may provide a short-term improvement in healing by improving the oxygenated blood supply to the wound. In addition to killing bacteria, higher oxygen content in tissues speeds growth factor production, fibroblast growth, and angiogenesis. However, increased oxygen levels also mean increased production of ROS. Antioxidants, molecules that can lose an electron to free radicals without themselves becoming radicals, can lower levels of oxidants in the body and have been used with some success in wound healing.
Low level laser therapy has been repeatedly shown to significantly reduce the size and severity of diabetic ulcers as well as other pressure ulcers.
Pressure wounds are often the result of local ischemia from increased pressure. Increased pressure also plays a role in many diabetic foot ulcerations, as changes due to the disease cause the foot to have limited joint mobility and create pressure points on the bottom of the foot. Effective measures to treat this include a surgical procedure called the gastrocnemius recession, in which the calf muscle is lengthened to decrease the fulcrum created by this muscle, resulting in a decrease in plantar forefoot pressure.
Growth factors and hormones
Since chronic wounds underexpress growth factors necessary for healing tissue, chronic wound healing may be speeded by replacing or stimulating those factors and by preventing the excessive formation of proteases like elastase that break them down.
One way to increase growth factor concentrations in wounds is to apply the growth factors directly. This generally takes many repetitions and requires large amounts of the factors, although biomaterials are being developed that control the delivery of growth factors over time. Another way is to spread onto the wound a gel of the patient's own blood platelets, which then secrete growth factors such as vascular endothelial growth factor (VEGF), insulin-like growth factor 1–2 (IGF), PDGF, transforming growth factor-β (TGF-β), and epidermal growth factor (EGF). Other treatments include implanting cultured keratinocytes into the wound to reepithelialize it and culturing and implanting fibroblasts into wounds. Some patients are treated with artificial skin substitutes that have fibroblasts and keratinocytes in a matrix of collagen to replicate skin and release growth factors.
In other cases, skin from cadavers is grafted onto wounds, providing a cover to keep out bacteria and preventing the buildup of too much granulation tissue, which can lead to excessive scarring. Though the allograft (skin transplanted from a member of the same species) is replaced by granulation tissue and is not actually incorporated into the healing wound, it encourages cellular proliferation and provides a structure for epithelial cells to crawl across. On the most difficult chronic wounds, allografts may not work, requiring skin grafts from elsewhere on the patient, which can cause pain and further stress on the patient's system.
Collagen dressings are another way to provide the matrix for cellular proliferation and migration, while also keeping the wound moist and absorbing exudate. Additionally, collagen has been shown to be chemotactic to human blood monocytes, which can enter the wound site and transform into beneficial wound-healing cells.
Since levels of protease inhibitors are lowered in chronic wounds, some researchers are seeking ways to heal tissues by replacing these inhibitors in them. Secretory leukocyte protease inhibitor (SLPI), which inhibits not only proteases but also inflammation and microorganisms like viruses, bacteria, and fungi, may prove to be an effective treatment.
Research into hormones and wound healing has shown estrogen to speed wound healing in elderly humans and in animals that have had their ovaries removed, possibly by preventing excess neutrophils from entering the wound and releasing elastase. Thus the use of estrogen is a future possibility for treating chronic wounds.
Epidemiology
Chronic wounds mostly affect people over the age of 60.
The incidence is 0.78% of the population and the prevalence ranges from 0.18 to 0.32%. As the population ages, the number of chronic wounds is expected to rise. Ulcers that heal within 12 weeks are usually classified as acute, and longer-lasting ones as chronic.
References
Further reading
Skin conditions resulting from physical factors
Necrosis | Chronic wound | Biology | 4,987 |
38,331,380 | https://en.wikipedia.org/wiki/Bulletin%20of%20Earthquake%20Engineering | The Bulletin of Earthquake Engineering is a bimonthly peer-reviewed scientific journal published by Springer Science+Business Media on behalf of the European Association for Earthquake Engineering. It covers all aspects of earthquake engineering. It was established in 2003 and the editor-in-chief is Atilla Ansal (Ozyegin University).
Abstracting and indexing
This journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.827.
References
External links
Earthquake engineering
Springer Science+Business Media academic journals
Quarterly journals
English-language journals
Engineering journals
Academic journals established in 2003 | Bulletin of Earthquake Engineering | Engineering | 125 |
63,396,022 | https://en.wikipedia.org/wiki/Slicing%20the%20Truth | Slicing the Truth: On the Computability Theoretic and Reverse Mathematical Analysis of Combinatorial Principles is a book on reverse mathematics in combinatorics, the study of the axioms needed to prove combinatorial theorems. It was written by Denis R. Hirschfeldt, based on a course given by Hirschfeldt at the National University of Singapore in 2010, and published in 2014 by World Scientific, as volume 28 of the Lecture Notes Series of the Institute for Mathematical Sciences, National University of Singapore.
Topics
The book begins with five chapters that discuss the field of reverse mathematics, which has the goal of classifying mathematical theorems by the axiom schemes needed to prove them, and the big five subsystems of second-order arithmetic into which many theorems of mathematics have been classified. These chapters also review some of the tools needed in this study, including computability theory, forcing, and the low basis theorem.
Chapter six, "the real heart of the book", applies this method to an infinitary form of Ramsey's theorem: every edge coloring of a countably infinite complete graph or complete uniform hypergraph, using finitely many colors, contains a monochromatic infinite induced subgraph. The standard proof of this theorem uses the arithmetical comprehension axiom, falling into one of the big five subsystems, ACA0. However, as David Seetapun originally proved, the version of the theorem for graphs is weaker than ACA0, and it turns out to be inequivalent to any one of the big five subsystems. The version for uniform hypergraphs of fixed order greater than two is equivalent to ACA0, and the version of the theorem stated for all numbers of colors and all orders of hypergraphs simultaneously is stronger than ACA0.
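In the notation standard in this area (not the book's own numbering), the version of the principle for $n$-element subsets and $k$ colors, usually written $\mathsf{RT}^n_k$, states that for every coloring $f \colon [\mathbb{N}]^n \to \{1, \dots, k\}$ there is an infinite set $H \subseteq \mathbb{N}$ such that $f$ is constant on $[H]^n$. The graph case discussed above corresponds to $n = 2$, and the hypergraph cases to $n \ge 3$.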
Chapter seven discusses conservative extensions of theories, in which the statements of a powerful theory (such as one of the forms of second-order arithmetic) that are both provable in that theory and expressible in a weaker theory (such as Peano arithmetic) are only the ones that are already provable in the weaker theory. Chapter eight summarizes the results so far in diagrammatic form. Chapter nine discusses ways to weaken Ramsey's theorem, and the final chapter discusses stronger theorems in combinatorics including the Dushnik–Miller theorem on self-embedding of infinite linear orderings, Kruskal's tree theorem, Laver's theorem on order embedding of countable linear orders, and Hindman's theorem on IP sets. An appendix provides a proof of a theorem of Jiayi Liu, part of the collection of results showing that the graph Ramsey theorem does not fall into the big five subsystems.
Audience and reception
This is a technical monograph, requiring its readers to have some familiarity with computability theory and Ramsey theory. Prior knowledge of reverse mathematics is not required. It is written in a somewhat informal style, and includes many exercises, making it usable as a graduate textbook or beginning work in reverse mathematics; reviewer François Dorais writes that it is an "excellent introduction to reverse mathematics and the computability theory of combinatorial principles" as well as a case study in the methods available for proving results in reverse mathematics.
Reviewer William Gasarch complains about two missing topics, the work of Joe Mileti on the reverse mathematics of canonical versions of Ramsey's theorem, and the work of James Schmerl on the reverse mathematics of graph coloring. Nevertheless he recommends this book to anyone interested in reverse mathematics and Ramsey theory. And reviewer Benedict Eastaugh calls it "a welcome addition ... providing a fresh and accessible look at a central aspect of contemporary reverse mathematical research."
Related reading
A "classic reference" in reverse mathematics is the book Subsystems of Second Order Arithmetic (2009) by Stephen Simpson; it is centered around the big five subsystems and contains many more examples of results equivalent in strength to one of these five. Dorais suggests using the two books together as companion volumes.
Reviewer Jeffry Hirst suggests Computability Theory by Rebecca Weber as a good source for the background needed to read this book.
References
External links
Slicing the Truth, Hirschfeldt's web site, including a preprint version of the book.
Mathematical logic
Proof theory
Computability theory
Ramsey theory
Mathematics books
2014 non-fiction books | Slicing the Truth | Mathematics | 892 |
14,810,012 | https://en.wikipedia.org/wiki/GTF2H2 | General transcription factor IIH subunit 2 is a protein that in humans is encoded by the GTF2H2 gene.
Function
This gene is part of a 500 kb inverted duplication on chromosome 5q13. This duplicated region contains at least four genes and repetitive elements which make it prone to rearrangements and deletions. The repetitiveness and complexity of the sequence have also caused difficulty in determining the organization of this genomic region. This gene is within the telomeric copy of the duplication. Deletion of this gene sometimes accompanies deletion of the neighboring SMN1 gene in spinal muscular atrophy (SMA) patients but it is unclear if deletion of this gene contributes to the SMA phenotype. This gene encodes the 44 kDa subunit of RNA polymerase II transcription initiation factor IIH which is involved in basal transcription and nucleotide excision repair. Transcript variants for this gene have been described, but their full length nature has not been determined. A second copy of this gene within the centromeric copy of the duplication has been described in the literature. It is reported to be different by either two or four base pairs; however, no sequence data is currently available for the centromeric copy of the gene.
Interactions
GTF2H2 has been shown to interact with GTF2H5, XPB and ERCC2.
See also
Transcription Factor II H
References
Further reading
External links
Transcription factors | GTF2H2 | Chemistry,Biology | 300 |
3,144,661 | https://en.wikipedia.org/wiki/Yakov%20Frenkel |
Yakov Il'ich Frenkel (10 February 1894 – 23 January 1952) was a Soviet physicist renowned for his works in the field of condensed-matter physics. He is also known as Jacov Frenkel, frequently using the name J. Frenkel in publications in English.
Early years
He was born to a Jewish family in Rostov-on-Don, in the Don Host Oblast of the Russian Empire, on 10 February 1894. His father was involved in revolutionary activities and spent some time in internal exile in Siberia; after the danger of pogroms started looming in 1905, the family spent some time in Switzerland, where Yakov Frenkel began his education. In 1912, while studying in the Karl May Gymnasium in St. Petersburg, he completed his first physics work on the Earth's magnetic field and atmospheric electricity. This work attracted Abram Ioffe's attention and later led to collaboration with him. He considered moving to the USA (which he visited in the summer of 1913, supported by money hard-earned by tutoring) but was nevertheless admitted to St. Petersburg University in the winter semester of 1913, at which point any emigration plans ended. Frenkel graduated from the university in three years and remained there to prepare for a professorship (his oral exam for the master's degree was delayed due to the events of the October Revolution). His first scientific paper came to light in 1917.
Early scientific career
In the last years of the Great War and until 1921, Frenkel was involved (along with Igor Tamm) in the foundation of the University in Crimea (his family moved to Crimea due to the deteriorating health of his mother). From 1921 till the end of his life, Frenkel worked at the Physico-Technical Institute. Beginning in 1922, Frenkel published a book virtually every year. In 1924, he published 16 papers (of which 5 were basically German translations of his other publications in Russian), three books, and edited multiple translations. He was the author of the first course of theoretical physics in the Soviet Union. For his distinguished scientific service, he was elected a corresponding member of the USSR Academy of Sciences in 1929.
He married Sara Isakovna Gordin in 1920. They had two sons, Sergei and Viktor (Victor). He served as a visiting professor at the University of Minnesota in the United States for a short period of time around 1930.
Early works of Yakov Frenkel focused on electrodynamics, statistical mechanics and relativity, though he soon switched to the quantum theory. Paul Ehrenfest, whom he met at a conference in Leningrad, encouraged him to go abroad for collaborations, which he did in 1925–1926, mainly in Hamburg and Göttingen, and met with Albert Einstein in Berlin. It was during this period that Schrödinger published his groundbreaking papers on wave mechanics; Heisenberg's had appeared shortly before. Frenkel enthusiastically entered the field through discussions (he reportedly discovered what is now called the Klein–Gordon equation simultaneously with Oskar Klein), but his first scientific paper on the matter (considering electrodynamics in metals) was published in 1927.
In 1927–1930, he discovered the reason for the existence of domains in ferromagnetics; worked on the theory of resonance broadening and collision broadening of the spectral lines; developed a theory of electric resistance on the boundary of two metals and of a metal and a semiconductor.
Celebrated discoveries
In conducting research on the molecular theory of the condensed state (1926), he introduced the notion of the hole in a crystal, three years before Paul Dirac introduced his eponymous sea. The Frenkel defect became firmly established in the physics of solids and liquids. In the 1930s, his research was supplemented with works on the theory of plastic deformation. His theory, now known as the Frenkel–Kontorova model, is important in the study of dislocations. Tatyana Kontorova was then a PhD candidate working with Frenkel.
From 1930 to 1931, Frenkel showed that neutral excitation of a crystal by light is possible, with an electron remaining bound to a hole created at a lattice site; this bound state is identified as a quasiparticle, the exciton. Mention should be made of Frenkel's works on the theory of metals, nuclear physics (the liquid drop model of the nucleus, in 1936), and semiconductors.
In 1930, his son Viktor Frenkel was born. Viktor became a prominent historian of science, writing a number of biographies of prominent physicists including an enlarged version of Yakov Ilich Frenkel, published in 1996.
In 1934, Frenkel outlined the formalism for the multi-configuration self-consistent field method, later rediscovered and developed by Douglas Hartree.
He contributed to semiconductor and insulator physics by proposing a theory, which is now commonly known as the Poole–Frenkel effect, in 1938. "Poole" refers to H. H. Poole (Horace Hewitt Poole, 1886–1962), Ireland. Poole reported experimental results on the conduction in insulators and found an empirical relationship between conductivity and electrical field. Frenkel later developed a microscopic model, similar to the Schottky effect, to explain Poole's results more accurately. In this paper published in USA, Frenkel only very briefly mentioned an empirical relationship as Poole's law. Frenkel cited Poole's paper when he wrote a longer article in a Soviet journal.
During the 1930s, Frenkel and Ioffe opposed dangerous tendencies in Soviet physics, tying science to the materialist ideology, with remarkable courage. Soviet physics, as a result of these actions, never descended to the depths biology did. Still, he subsequently had to forgo publishing several papers, fearing that might have unfortunate consequences.
Yakov Frenkel was involved in the studies of the liquid phase, too, since the mid-1930s (he undertook some research in colloids) and during the World War II, when the institute was evacuated to Kazan. The results of his more than twenty years of study of the theory of liquid state were generalized in the classic monograph "Kinetic theory of liquids".
Later years
During the wartime, he worked on contemporary practical problems to help his country in sustaining the harsh fight. After the war, Frenkel focussed on seismoelectrics, also proposing that sound waves in metals might affect electric phenomena. He subsequently worked mainly in the field of atmospheric effects, but did not abandon his other interests, publishing several papers in nuclear physics.
Frenkel died in Leningrad in 1952. His son, Victor Frenkel, wrote a biography of his father, Yakov Ilich Frenkel: His work, life and letters. This book, originally written in Russian, has also been translated and published in English.
See also
Chandrasekhar limit
Poromechanics
Solid state ionics
References
English translations of books by Frenkel
, 2nd edition ( Dover Publications, 1950),
Literature
Victor Yakovlevich Frenkel: Yakov Ilich Frenkel. His work, life and letters. (original: (ru) Яков Ильич Френкель, translated by Alexander S. Silbergleit), Birkhäuser, Basel / Boston / Berlin 2001 (English).
Online
External links
Biography of Jacov Il'ich Frenkel
1894 births
1952 deaths
Scientists from Rostov-on-Don
People from Don Host Oblast
Russian materials scientists
Jewish Russian physicists
Soviet physicists
Soviet nuclear physicists
Corresponding Members of the USSR Academy of Sciences
Condensed matter physicists
Russian scientists | Yakov Frenkel | Physics,Materials_science | 1,568 |
8,878,119 | https://en.wikipedia.org/wiki/Transit%20map | A transit map is a topological map in the form of a schematic diagram used to illustrate the routes and stations within a public transport system—whether this be bus, tram, rapid transit, commuter rail or ferry routes. Metro maps, subway maps, or tube maps of metropolitan railways are some common examples.
The primary function of a transit map is to facilitate passengers' orientation and navigation, helping them to use the public transport system efficiently and to identify which stations function as interchanges between lines.
Unlike conventional maps, transit maps are usually not designed to be geographically accurate. Instead, to increase legibility, simplicity and visual aesthetic quality, designers simplify complex routes by using abstract geometry: straight lines, fixed angles and often a fixed distance between stations, compressing those in the outer area of the system and expanding those close to the center. This transformation of a topographical map into a schematic diagram is known as schematization. Although they prioritize clarity over strict geographic accuracy, the relative positions and connections between stations and routes are still accurately depicted for effective navigation. Transit map design places a strong emphasis on user needs, ensuring that layouts and visual elements are optimized to empower passengers with intuitive navigation tools, facilitating seamless decision-making and enhancing overall travel experience.
The main components of a transit map include symbols or named icons representing stations, stops, and interchanges, color-coded lines indicating available routes and transportation services, capturing not only the essential structure of transport networks, but also the city's iconic landscape itself. Its layout, such as geographic, multilinear, radial, concentric circular, grid, or hybrid, is chosen based on geographical intricacies, network complexity, and user preference. Careful consideration is given to icon choice to distinguish different kinds of stations (regular, interchange or terminal), line styles, colors, typography, and their consistent application for clear, effective and intuitive communication.
Transit maps can be found in the transit vehicles, at the platforms or in printed timetables. They are also accessible through digital platforms like mobile apps and websites, ensuring widespread availability and convenience for passengers.
History
The mapping of transit systems was at first generally geographically accurate, but abstract route-maps of individual lines (usually displayed inside the carriages) can be traced back as early as 1908 (London's District line), and certainly there are examples from European and American railroad cartography as early as the 1890s where geographical features have been removed and the routes of lines have been artificially straightened out. But it was George Dow of the London and North Eastern Railway who was the first to launch a diagrammatic representation of an entire rail transport network (in 1929); his work is seen by historians of the subject as being part of the inspiration for Harry Beck when he launched his iconic London Underground map in 1933.
After this pioneering work, many transit authorities worldwide imitated the diagrammatic look for their own networks, some while continuing to also publish hybrid versions that were geographically accurate.
Early maps of the Berlin U-Bahn, Berlin S-Bahn, Boston T, Paris Métro, and New York City Subway also exhibited some elements of the diagrammatic form.
The new Madrid Metro map (of 2007), designed by the RaRo Agency, took the idea of a simple diagram one step further by becoming one of the first produced for a major network to remove diagonal lines altogether; it consists only of horizontal and vertical lines at right angles to each other. After many complaints over its disadvantages, the company reverted to the previous map in 2013.
Transit maps are now increasingly digitized and can be shown in many forms online.
Elements
The primary purpose of a transit map is to help passengers—especially those unfamiliar with the system—to take the correct routes to travel between two points; this may include having to change vehicle or mode in the course of the trip. The map uses symbols to illustrate the lines, stations and transfer points, as well as a system of geographic identification. At the same time the map must remain simple to allow overview, and be usable by those unfamiliar with the geography of the area.
Stations are marked with symbols that break the line, along with their names, so they may be referred to on other maps or travel itineraries. Further help may be granted through the inclusion of important tourist attractions and other locations such as the city center; these may be identified through symbols or wording.
Color coding allows the map to specify each route in an easy way, allowing users to quickly identify where each specific route goes; if it does not go to the desired destination, the colors and symbols allow the user to identify a feasible point of transfer between lines. Symbols such as aircraft may be used to illustrate airports, and symbols of trains may be used to identify stations that allow transfer to other modes, such as commuter or intercity train services. With the widespread use of zone pricing for fare calculation, systems that span more than one zone need a way to inform the user which zone a particular station is located in. Common ways include varying the tone of the background color, or running a weak line along the zone boundaries.
Many transit authorities publish multiple maps of their systems; this can be done by isolating one mode of transport, for instance only rapid transit or only bus, onto a single map, or instead the authorities publish maps covering only a limited area, but with greater detail. Another modification is to produce geographically accurate maps of the system, to allow users to better understand the routes. Even if official geographical accurate maps are not available, these can often be obtained from unofficial sources since the information is available from other sources.
Iconic status
There are a growing number of books, websites and works of art on the subject of urban rail and metro map design and use. There are now hundreds of examples of diagrams in an urban rail or metro map style that are used to represent everything from other transit networks like buses and national rail services to sewerage systems and Derbyshire public houses.
One of the most well-known adaptations of an urban rail map was The Great Bear, a lithograph by Simon Patterson. First shown in 1992 and nominated for the Turner Prize, The Great Bear replaces station names on the London Underground map with those of explorers, saints, film stars, philosophers and comedians. Other artists such as Scott Rosenbaum, and Ralph Gray have also taken the iconic style of the urban rail map and made new artistic creations ranging from the abstract to the Solar System. Following the success of these the idea of adapting other urban rail and metro maps has spread so that now almost every major subway or rapid transit system with a map has been doctored with different names, often anagrams of the original station name.
Some maps including those for the rapid transit systems of New York City, Washington D.C., Boston, Montreal, and Denver, have been recreated to include the names of local bars that have a good selection of craft beers.
See also
Bicycle map
Isochrone map
Metrominuto
Road map
Notes
References
Further reading
Mr Beck's Underground Map, Ken Garland, Capital Transport, London, 1994.
No Need To Ask, David Leboff and Tim Demuth, Capital Transport, London, 1999.
Metro Maps of the World, Mark Ovenden, Capital Transport, London, 2003.
Das Berliner U- und S-Bahnnetz, Alfred B. Gottwaldt, TransPress, Stuttgart, 2004.
Telling the passenger where to get off, Andrew Dow, Capital Transport, London, 2005.
Underground Maps After Beck, Maxwell J. Roberts, Capital Transport, London, 2005.
Transit Maps of the world, Mark Ovenden, Penguin books, New York, 2007.
External links
Subways Transport, an extensive site with archive maps on virtually every urban rail system in the world.
Urban Rail
Infographics
Pictograms
Public transport | Transit map | Mathematics | 1,587 |
9,384,649 | https://en.wikipedia.org/wiki/Europe%20PubMed%20Central | Europe PubMed Central (Europe PMC) is an open-access repository that contains millions of biomedical research works. It was known as UK PubMed Central until 1 November 2012.
Service
Europe PMC provides free access to more than 9.3 million full-text biomedical and life sciences research articles and over 43.3 million citations. Europe PMC also holds citation information and text-mined, marked-up article text that links out to external molecular and medical datasets.
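As an illustration of how the collection described above can be queried programmatically, the sketch below uses Europe PMC's public RESTful search web service. The article itself does not describe this interface, so the endpoint, parameter names and response fields shown here are assumptions that should be checked against the current service documentation.

```python
# Minimal sketch of a Europe PMC search (assumed endpoint and response layout).
import requests

BASE = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def search_europe_pmc(query, page_size=5):
    """Return (title, year, source, id) tuples for a free-text query."""
    params = {"query": query, "format": "json", "pageSize": page_size}
    response = requests.get(BASE, params=params, timeout=30)
    response.raise_for_status()
    results = response.json().get("resultList", {}).get("result", [])
    return [(r.get("title"), r.get("pubYear"), r.get("source"), r.get("id"))
            for r in results]

if __name__ == "__main__":
    for title, year, source, rec_id in search_europe_pmc("open access AND malaria"):
        print(year, source, rec_id, "-", title)
```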
The Europe PMC funders group requires that articles describing the results of biomedical and life sciences research they have supported be made freely available in Europe PMC within 6 months of publication to maximise the impact of the work that they fund.
The Grant Lookup facility allows users to search, in a wide variety of ways, for information on over 101,900 grants awarded by the Europe PMC funders.
Most content is mirrored from PubMed Central, which manages the deposit of entire books and journals.
Additionally, Europe PMC offers a manuscript submission system, Europe PMC plus, which allows scientists to self-deposit their peer-reviewed research articles for inclusion in the Europe PMC collection.
Organisation
The Europe PMC project was originally launched in 2007 as the first 'mirror' site to PMC, which aims to provide international preservation of the open and free-access biomedical and life sciences literature. It forms part of a network of PMC International (PMCI) repositories that includes PubMed Central Canada. Europe PMC is not an exact "mirror" of the PMC database but has developed some different features. On 15 February 2013 CiteXplore was subsumed under Europe PubMed Central.
The resource is managed and developed by the European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI), on behalf of an alliance of 27 biomedical and life sciences research funders, led by the Wellcome Trust.
Europe PMC is supported by 27 organisations: Academy of Medical Sciences, Action on Hearing Loss, Alzheimer's Society, Arthritis Research UK, Austrian Science Fund (FWF), the Biotechnology and Biological Sciences Research Council, Blood Cancer UK, Breast Cancer Now, the British Heart Foundation, Cancer Research UK, the Chief Scientist Office of the Scottish Executive Health Department, Diabetes UK, the Department of Health, the Dunhill Medical Trust, the European Research Council, Marie Curie, the Medical Research Council, the Motor Neurone Disease Association, the Multiple Sclerosis Society, the Myrovlytis Trust, the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), Parkinson's UK, Prostate Cancer UK, Telethon Italy, the Wellcome Trust, the World Health Organization and Worldwide Cancer Research (formerly Association for International Cancer Research).
See also
List of academic databases and search engines
MEDLINE
PubMed Central
Hyper Articles en Ligne
Isidore (platform)
References
External links
Fact-sheet
Internet properties established in 2007
Bibliographic databases and indexes
Biological databases
Databases in Europe
Full-text scholarly online databases
Information technology organisations based in the United Kingdom
Medical databases
Medical research organizations
Medical search engines
Open-access archives
Science and technology in Cambridgeshire
South Cambridgeshire District | Europe PubMed Central | Biology | 653 |
76,847,280 | https://en.wikipedia.org/wiki/64%20Ceti | 64 Ceti is a star located located in the constellation Cetus. Based on its spectral type of G0IV, it is a G-type star that has left the main sequence and evolved into a subgiant. It is located away, based on a parallax measured by Gaia DR3, and it is moving towards Earth at a velocity of 19km/s. The apparent magnitude of 64 Ceti is 5.62, which makes it visible to the naked eye only in dark skies, far away from light pollution.
Characteristics
64 Ceti is a G-type star that has left the main sequence and now evolved into a subgiant, based on its spectral type of G0IV. It has about 1.53 times the Sun's mass and has expanded to 2.53 times the Sun's radius. It is emitting 8.13 times the solar luminosity from its photosphere at an effective temperature of 6,066 K. The age of 64 Ceti is estimated at 2.63 billion years, about 58% of the Solar System's age, and it rotates on its axis at a speed of 8.96 km/s, translating into a rotation period of 15 days. The B-V index of the star is 0.52, corresponding to a yellow-white hue of a late G/early F star.
It is located in the constellation Cetus, based on its celestial coordinates. Gaia DR3 measured a parallax of 23.8 milliarcseconds for this star, translating into a distance of . The apparent magnitude of 64 Ceti is 5.62, which means that it is a faint star, visible to the naked eye only from locations with dark skies. The absolute magnitude, i.e. its brightness if it were seen from a distance of 10 parsecs, is 2.49. The star is moving towards Earth at a velocity of 19 km/s. It has a high proper motion across the sky and belongs to the thin disk population, being located above the galactic plane.
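The quoted figures can be cross-checked with standard textbook relations. The short sketch below (not part of the article) derives the distance from the Gaia parallax, the absolute magnitude from the apparent magnitude and distance, and the rotation period from the radius and rotation speed; it assumes the 2.53-solar figure is a radius.

```python
# Back-of-the-envelope consistency check of the 64 Ceti figures quoted above.
import math

parallax_mas = 23.8      # Gaia DR3 parallax quoted in the article
apparent_mag = 5.62      # apparent magnitude quoted in the article
radius_rsun = 2.53       # stellar radius in solar radii (assumed to be a radius)
vrot_km_s = 8.96         # rotation speed quoted in the article

R_SUN_KM = 695_700.0
PC_TO_LY = 3.2616

# Distance from parallax: d [pc] = 1000 / parallax [mas]
distance_pc = 1000.0 / parallax_mas
distance_ly = distance_pc * PC_TO_LY

# Absolute magnitude: M = m - 5 * log10(d / 10 pc)
abs_mag = apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# Rotation period from equatorial speed: P = 2 * pi * R / v
period_days = (2.0 * math.pi * radius_rsun * R_SUN_KM / vrot_km_s) / 86_400.0

print(f"distance  ≈ {distance_pc:.1f} pc ≈ {distance_ly:.0f} light-years")
print(f"abs. mag. ≈ {abs_mag:.2f}  (article: 2.49)")
print(f"rotation  ≈ {period_days:.1f} days (article: ~15 days)")
```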
Notes
References
Cetus
Ceti, 64
G-type subgiants
013421
010212
0635
WISE objects | 64 Ceti | Astronomy | 448 |
287,781 | https://en.wikipedia.org/wiki/Minitel | The Minitel, officially known as TELETEL, was an interactive videotex online service accessible through telephone lines. It was the world's first and most successful mass-market online service prior to the World Wide Web. It was developed in Cesson-Sévigné, Brittany, by government-owned France Télécom.
The service was initially launched on an experimental basis on 15 July 1980 in Saint-Malo and extended to other regions in autumn 1980. It was commercially introduced throughout France in 1982 by the PTT (Postes, Télégraphes et Téléphones; since 1991, divided into France Télécom and La Poste). From its inception, users were able to make online purchases, book train tickets, access business information services, search the telephone directory, maintain a mailbox, and utilize chat functionalities similar to those now supported by the World Wide Web.
In February 2009, France Télécom reported that the Minitel network still maintained 10 million monthly connections. The service was discontinued by France Télécom on 30 June 2012.
Name
Officially known as TELETEL, the name Minitel is derived from the French title Médium interactif par numérisation d'information téléphonique (Interactive medium for digitized information by telephone).
History
Videotex was a crucial element in the telecommunications sector of many industrialized countries, with numerous national post, telephone, and telegraph companies and commercial ventures launching pilot projects. It was viewed as a major force in advancing towards an information society.
In 1978, Postes, Télégraphes et Téléphones (PTT) initiated the design of the Minitel network. By distributing terminals capable of accessing a nationwide electronic directory of telephone and address information, the PTT aimed to increase the utilization of the country’s 23 million phone lines and reduce the costs associated with printing phone books and employing directory assistance personnel. Millions of terminals were given for free (officially loans, and property of the PTT) to telephone subscribers.
The telephone company prioritized ease of use. The French government's decision to provide free Minitel terminals to every household was a key factor in the widespread adoption and success of Minitel. By providing a popular service on simple, free equipment, Minitel achieved high market penetration and avoided the chicken and egg problem that hindered the widespread adoption of similar systems in the United States. In exchange for the terminal, Minitel users received only the yellow pages (classified as commercial listings with advertisements), while the white pages were freely accessible on Minitel and could be searched faster than through a paper directory.
In the early 1980s, parallel to the US development of the ARPANET, France launched the Minitel project to bring data networking into homes. According to the PTT, during the first eight years of nationwide operation, 8 billion francs were spent on purchasing terminals, resulting in a profit of after deducting payments to information providers such as newspapers. Additionally, an average of 500 million francs was saved annually by printing fewer phone books.
A trial involving 55 residential and business telephone customers using experimental terminals commenced in Saint-Malo on 15 July 1980, two days after a presentation to President Valéry Giscard d'Estaing on 13 July. This trial expanded to 2,500 customers in other regions in autumn 1980. Beginning in May 1981, 4,000 experimental terminals with a different design were distributed in Ille-et-Vilaine, and commercial service using Minitel terminals was launched in 1982. By the end of 1983, there were 120,000 Minitel terminals in France. It became highly successful in 1984 when the French government distributed free Minitel terminals to households.
Minitel became a financial success for the PTT, as using the service cost the 2022 equivalent of 30 euro cents per minute. The telephone company primarily provided the white pages, while establishing infrastructure for other entities to offer services. Minitel facilitated access to various categories including the phone directory (free), mail-order retail companies, airline or train ticket purchases, information services, databases, message boards, online dating services, and computer games.
By 1985, games and electronic messaging accounted for 42 percent of Minitel traffic, with messaging alone representing 17 percent of traffic by 1988. The platform became particularly popular among young people, who would engage in late-night sessions playing text-based online video games.
By early 1986, 1.4 million terminals were connected to the Minitel network, with plans to distribute an additional million by the end of the year. This expansion faced opposition from newspapers concerned about competition from an electronic network. In 1980, Ouest-France expressed the concern that Minitel would "separate people from each other and endanger social relationships." To mitigate opposition from the newspapers, they were permitted to establish the first consumer services on Minitel. Libération offered 24-hour online news, including results from events at the 1984 Summer Olympics in Los Angeles that occurred overnight in France. Providers promoted their services in their own publications, which helped to market the Minitel network. Newspapers were founded specifically to create Minitel services.
By 1988, three million terminals were installed, with 100,000 new units being added monthly. The telephone directory received 23 million calls per month, with 40,000 updates daily. Approximately six thousand other services were available, with around 250 new ones being added each month.
The emergence of Minitel led to the proliferation of numerous start-up ventures, similar to the later dot-com bubble of World Wide Web-related companies. Many of these small enterprises encountered challenges due to an oversaturated market or poor business practices, such as inadequate infrastructure for online retailers. By the late 1980s, the Minitel system had become widespread in France, with numerous products displaying their Minitel numbers as a direct marketing tool.
Despite initial expectations, messageries roses ("pink messages"), adult chat services facilitated by operators posing as receptive women, gained significant traction, causing some discomfort among government officials who preferred to focus on the growing business applications of messaging. Extensive street advertising promoted services such as "3615 Sextel," "Jane," "kiss," "3615 penthouse," and "men." These and other pornographic sites faced criticism for their potential accessibility to minors. While the government opted against implementing coercive measures, it underscored the responsibility of parents, rather than governmental intervention, in regulating children's online activities. The government did impose a tax on pornographic online services.
Numerous services were covertly operated by conservative newspapers, which publicly expressed disapproval of the sex industry. The majority of operators were not the scantily-clad women depicted in the advertisements, but men engaged in their regular occupations.
By the mid-1990s, it provided over 20,000 services, including home banking and specialized databases. Minitel was widespread in French homes a decade before the Internet became known in the US. France Télécom maintained steady income from Minitel and cautiously approached the Internet to protect its business model. This slow adaptation paralleled the hesitant adoption of high-definition TV in the U.S., where companies resisted new technologies to safeguard profits. France's struggle with Internet adoption reflected typical free-market issues, rather than those associated with centralized economies.
In 1997, recognizing the emerging global Internet society, the French government partially privatized France Télécom, ending its telephone monopoly and introducing competition in the telecommunications sector. This led to reduced prices for telephone communications, allowing more affordable dial-up Internet access by the late 1990s. Minitel became quickly outpaced by the development of the Internet. France Télécom estimated that by the end of 1999, almost 9 million terminals, including web-enabled personal computers (Windows, Mac OS, and Linux), had access to the network, which was used by 25 million people out of a total population of 60 million. Developed by 10,000 companies, nearly 26,000 different services were available by 1996.
Finances
Payment methods included credit cards for purchases and telephone bills, with rates contingent upon the websites visited. Initially, users subscribed to individual services, but adoption surged following the introduction of a "kiosk" model by the telephone company, named after newsagent shops. Charges for Minitel usage and voice calls were amalgamated on the monthly telephone bill without itemized breakdown. Service providers typically received two-thirds of the $10 per hour fee paid by customers as of 1988.
Since the telephone company managed bill collection and users who failed to settle bills risked losing telephone service, the customer acquisition cost for service providers remained low. The consolidated billing system fostered impulse shopping, as users, while browsing, often discovered and utilized additional services beyond their original intention. Given the anonymity of users and services, Minitel usage was prevalent in workplaces where companies covered telephone expenses.
In 1985, France Télécom generated 620 million francs (approximately ) in revenue from Minitel. Throughout the year, 2,000 private companies collectively earned 289 million francs (about ), while Libération, a prominent newspaper, garnered 2.5 million francs (about US$300,000) from the service in September. Despite the increasing prevalence of the World Wide Web, Minitel connections remained stable in the late 1990s, with a consistent monthly volume of 100 million connections alongside 150 million online directory inquiries.
In response to the rising incidence of cybercrime, France Télécom developed a new contract specifying that all Minitel service operators must identify themselves by providing their name and address. This measure aimed to enhance security and accountability within the network.
In 1998, Minitel returned () in revenue, with allocated by France Télécom to service providers. Notably, Minitel sales in the late 1990s constituted nearly 15 percent of total sales for La Redoute and 3 Suisses, prominent mail order companies in France. By 2005, the most popular Minitel application was Teleroute, an online real-time freight exchange, which accounted for nearly 8 percent of Minitel usage.
In December 1985 Minitel users made more than 22 million calls, up 400 percent in one year. In 1994 they made 1,913 million Minitel calls, used the system for 110 million hours, and spent 6.6 billion francs. In 2005, there were 351 million calls for 18.5 million hours of connection, generating of revenue, of which were redistributed to 2,000 service providers (these numbers were declining at around 30 percent per year). There were still six million terminals owned by France Télécom, which had been left with their users in order to avoid recycling problems.
Key uses of Minitel included banking and financial services, which leveraged Minitel's security features, as well as access to professional databases. France Télécom cited 12 million updates to personal carte vitale healthcare cards as having been made through Minitel.
By 2007, revenue exceeded . This trend persisted into 2010, with revenues reaching , of which 85 percent was allocated to service providers.
Phonebook
The most popular service of the Minitel was the Annuaire Electronique. It garnered significant popularity, with approximately half of the network's calls directed to it in 1985. In May of that same year, a nationwide white pages directory covering all 24 million telephone subscribers was introduced, accessible through the phone number 11. Following the adoption of the new French numbering system on 18 October 1996, access to the phone directory transitioned to 3611. Companies had the option to include up to three lines of supplementary information and a rudimentary website. Advertisement space within the Minitel phone directory was managed by the Office d'Annonces (ODA), today known as Solocal / Pages Jaunes Groupe based in Sèvres, France. In 1991, the "Minitel Website" for the Paris Sony Stores already contained over 100 pages. Today the 3611 Minitel Directory is replaced by the online white or yellow pages.
On 11 February 2009, France Télécom and PagesJaunes jointly announced the cancellation of plans to discontinue the Minitel service in March 2009, citing the continued high usage of its directory assistance service, which was still accessed over a million times monthly. France Télécom retired the service on 30 June 2012, attributing the decision to operational costs and declining customer interest.
Technology
Minitel utilized computer terminals featuring a text-only monochrome screen, a keyboard, and a modem, all integrated into a single tabletop unit. These terminals had the capability to display basic graphics using a predefined set of block graphics characters. Color units were eventually offered for an additional fee, but they saw limited adoption. Aftermarket printers were also available to users.
Operating over the existing Transpac network, Minitel terminals connected to the system by dialing a short code number, initiating a connection to a PAVI (Point d'Accès VIdéotexte, meaning "videotext access point") via the subscriber's analog telephone line. The PAVI was then digitally linked to the destination servers of the relevant company or administration through Transpac. The surge in popularity of the service led to a temporary disruption in June 1985, lasting two weeks, when an increase in connection attempts per second revealed a dormant software bug.
In France, the widely recognised dial number for accessing Minitel services was 3615, with 3617 reserved for premium services. Minitel service names typically incorporated these numbers as prefixes to signify their association with the system. Billboard advertisements during this period often featured minimal content, comprising an image, company name, and a "3615" number. The inclusion of the "3615" number implied the promotion of a Minitel service. A notable instance of this cultural reference can be observed in the title of the film 3615 code Père Noël, where a child endeavors to contact Santa Claus using Minitel, only to inadvertently connect with a local criminal.
Minitel used a full-duplex data transmission method facilitated by its modem. The downlink operated at a speed of 1200 bit/s (equivalent to 9 KB/min), while the uplink operated at 75 bit/s (equivalent to 0.6 KB/min). This configuration enabled relatively swift downloads by the standards of its time. Referred to colloquially as "1275", the system was more accurately designated as V.23. Originally designed for general-purpose data communications, it found widespread application in Minitel and analogous services worldwide.
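The figures above are straightforward to reproduce. The following sketch (an illustration, not from the article) converts the V.23 line rates into the kilobytes-per-minute values quoted, ignoring start/stop-bit framing and protocol overhead; the 1000-byte screen size is an assumed round figure for a 40x25-character page.

```python
# Arithmetic sketch of the V.23 line rates used by Minitel (framing overhead ignored).
down_bps = 1200   # downlink, bits per second
up_bps = 75       # uplink, bits per second

def kb_per_minute(bits_per_second):
    return bits_per_second / 8 * 60 / 1000   # bits/s -> bytes/s -> bytes/min -> kB/min

print(f"downlink ≈ {kb_per_minute(down_bps):.1f} kB/min")   # ≈ 9.0 kB/min
print(f"uplink   ≈ {kb_per_minute(up_bps):.2f} kB/min")     # ≈ 0.56 kB/min

# A full 40x25-character Videotex screen is roughly 1000 bytes of text, so filling
# the screen took on the order of 1000 * 8 / 1200 ≈ 6.7 seconds on the downlink.
print(f"one 1000-byte screen ≈ {1000 * 8 / down_bps:.1f} s to receive")
```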
Technically, Minitel refers to the terminals, while the network is known as Télétel.
Minitel terminals were equipped with an AZERTY keyboard, reflective of the standard keyboard layout in France, as opposed to the QWERTY layout more commonly used in English-speaking regions. Some early models deviated from this convention, featuring an ABCDEF keyboard arrangement.
Minitel and the Internet
The impact of Minitel on the development of the Internet in France remains a topic of significant debate, partly because Minitel offered over a thousand services, many of which are now available on the Internet. In 1986, French university students effectively organized a national strike utilizing Minitel, showcasing an early instance of digital communication tools being employed for political objectives. The French government's allegiance to the domestically developed Minitel impeded the uptake of the Internet in France. Despite reaching a peak of nine million terminals in the 1990s, there remained 810,000 terminals in the country as late as 2012. Resources within France Télécom were directed towards Minitel development, diverting attention from Internet-related initiatives. The sustained emphasis on Minitel by France Télécom did not significantly impact the adoption or advancement of internet-based companies in France. By 2018, the country was comparable with other Western countries in terms of high-speed internet penetration in households.
Minitel in other countries
Belgium: Minitel was launched by Belgacom and delivered services led by Teleroute. Although it was used by businesses, it was rarely used by the public. The main reason was that the terminals were not offered for free as in France and that usage of the service was expensive (50 Euro cents a minute). Moreover, there was never much promotion thereof by Belgacom.
Brazil: Telebrás had a videotex service called "Videotexto" or "VTX" during the 1980s and 1990s with services provided by local telephone companies such as Telesp (now part of Telefônica Vivo). Services included chats, games, telephone list search, and electronic banking, among others. The Minitel protocol is still used by some cable TV companies to provide general information to their customers.
Canada: Bell Canada experimented with a Minitel-like system known as Alex with terminals called AlexTel. The system was conceptually similar to Minitel, but used the Canadian NAPLPS protocols and North American Bell System RJ-11 standard telephone connectors. Originally launched experimentally in the Montreal area, Alex was then launched in most areas served by Bell Canada (primarily Ontario and Quebec) with offers of a free trial period and terminal. The principal information offering was the telephone directory. Although branded as a "bilingual" (English and French Canadian) service, the majority of other services offered were the experimental ones originally offered in Quebec and completely Francophone. Retention rates were reportedly close to zero. The service closed down shortly after exiting the experimental stage. Telidon was an earlier Canadian text and graphics service using the same technological underpinnings.
Finland: In 1986, PTL-Tele, then Sonera (now part of Telia Company) launched the on-line service called TeleSampo. TeleSampo included not only videotex services, but also many other Ascii-based Value Added Services (VAS). Roughly at the same time, HPY HTF (now Elisa) launched a videotex service called Infotel (fi). TeleSampo service was switched off in 2004.
Germany: "Bildschirmtext" (BTX) that existed between 1983 and 2001 is almost as old as Minitel and technically very similar, but it was largely unsuccessful because consumers had to buy expensive decoders to use it. The German postal service held a monopoly on the decoders that prevented competition and lower prices. Few people bought the boxes, so there was little incentive for companies to post content, which in turn did nothing to further box sales. When the monopoly was loosened, it was too late because PC-based online services had started to appear. Some post offices in Germany offered BTX boxes for public use, allowing access to BTX without owning a box.
Ireland: Minitel was introduced to Ireland by Eir (then called Telecom Éireann) in 1988. The system was based on the French model and Irish services were even accessible from France via the code "36 19 Irlande". A number of major Irish businesses came together to offer a range of online services, including directory information, shopping, banking, hotel reservations, airline reservations, news, weather and information services. The system was also the first platform in Ireland to offer users access to e-mail outside of a corporate setting. Despite being cutting edge for its time, the system failed to capture a large market and was ultimately withdrawn due to lack of commercial interest. The rise of the internet and other global online services in the early to mid-1990s played a major factor in the death of Irish Minitel. Minitel Ireland's terminals were technically identical to their French counterparts, except that they had a Qwerty keyboard and an RJ-11 telephone jack which is the standard telephone connector in Ireland. Terminals could be rented for 5.00 Irish pounds (6.35 euros) per month or purchased for 250.00 Irish pounds (317.43 euros) in 1992.
Italy: In 1985, the national telephone operator SIP – Società italiana per l'esercizio telefonico (now known as Telecom Italia) launched the Videotel (it) service. The system use was charged on a per-page basis. Due to the excessive cost of the hardware and the expensive services, diffusion was very low, leading to the diffusion of a FidoNet-oriented movement. The service was shut down in 1994.
Netherlands: The then state-owned phone company PTT (now KPN) operated two platforms: Viditel (nl) and Videotex Nederland (nl). The main difference was that Viditel used one big central host where Videotex NL used a central access system responsible for realizing the correct connection to the required host: owned and managed by others. Viditel was introduced on 7 August 1980, and required a Vidimodem as well as a compatible home computer (one such example was the Philips P2000T which had a built-in Teletext chip) or a television set which could support Teletext; the required equipment itself would cost anywhere between 3,000 and 5,000 Dutch guilders overall. Viditel was shut down in September 1989 due to high operating costs and was succeeded by the cheaper and more widely used Videotex Nederland. The Videotex NL services offered access via several premium rate numbers and the information/service provider could choose the costs for accessing his service. Depending on the number used, the tariff could vary from 0–1 guilders (0.00–0.45 euro) per minute. Some private networks such as Travelnet (for travel-agencies) and RDWNet for automotive industry, used the same platform as Videotex NL but used dedicated dial-in phone numbers, dedicated access-hardware and also used authentication. Although the protocol used in France for Minitel was slightly different from the international standard one could use the "international" terminal (or PC's with the correct terminal-emulation software) to access the French services. It was possible to connect to most French Minitel services via the Dutch Videotex NL network, but the price per minute was considerably higher: most French Minitel services were reachable via the dial-in number 06-7900, which had a tariff of 1 guilder/minute (approx. €0,45/minute). Videotex Nederland was eventually shut down in 1997, and the parent company behind Videotex Nederland was subsequently renamed as Planet Media Group.
Singapore: Singapore Teleview was first trialled by the Telecom Authority of Singapore (now Singtel) beginning in 1987, and was formally launched in 1991. The Teleview system, while similar in concept to the Minitel and Prestel, was unique in that it was able to display photographic images instead of graphical images used by Minitel and Prestel. Teleview was eventually rendered obsolete by SLIP/PPP-based modem Internet connections in the late 1990s.
South Africa: Videotex was introduced by Telkom in 1986 and named Beltel. The Minitel was introduced later to popularise the service.
Spain: Videotex was introduced by Telefónica in 1990 and named Ibertex. The Ibertex was based on the French model but used the German Bildschirmtext CEPT-1 profile.
Sweden: Swedish state-owned telephone company Televerket (now Telia Company) introduced a similar service, called Teleguide (sv), in 1991. Teleguide was shut down in 1993 due to a contract dispute between Televerket and the vendors IBM and Esselte.
United Kingdom: The Prestel system was similar in concept to Minitel, using dedicated terminals or software on personal computers to access the network. The number of Prestel subscribers only reached 90 thousand.
United States: In 1991, France Télécom launched a Minitel service called "101 Online" in San Francisco; this venture was not successful. In the early 1990s, US West (subsequently Qwest and now Lumen Technologies) launched a Minitel service in the Minneapolis and Omaha markets called "CommunityLink". This joint venture of US West and France Télécom provided Minitel content to IBM PC, Commodore 64, and Apple II owners using a Minitel-emulating software application over a dialup modem. Many of the individual services were the same as or similar to those offered by France Télécom to the French market; in fact, some chat services linked up with France Télécom's network in France. The service was fairly short-lived as competing offerings from providers like AOL, Prodigy, and CompuServe as well as independent bulletin board systems and internet service providers offered more services targeted at American users for a lower price. Many of US West's Minitel offerings were charged à la carte or hourly while competitors offered monthly all-inclusive pricing and many smaller BBSes were completely free of charge as long as users called a local number. Minitel also offered services directly in the US with a DOS based client that was sent out to customers for use with an IBM PC compatible. In 1983, the publishing company Knight Ridder and AT&T offered a competing service called Viewtron. The service offered news, aviation schedules and educative content, but no way of mail communication, as the publishing company thought communication should be one-way only.
See also
History of the World Wide Web
Internet in France
Singapore Teleview
References
External links
The official website
Minitel.org – Memories of Minitel and X.25 networks
Computer Chronicles: High Tech France, video circa 1990
The French Minitel: Is There Digital Life Outside of the "US ASCII" Internet? A Challenge or Convergence?, D-Lib Magazine, December 1995
Wired News: Minitel – The Old New Thing, April 2001
CNN Tech: Minitel – the Beta Internet Breaks Out, April 2001
BBC News: France's Minitel: 20 years young, 14 May 2003
Forbes.com: The French Minitel Goes Online, 14 July 2003
New York Times: On the Farms of France, the Death of a Pixelated Workhorse, 27 June 2012
The Atlantic: Minitel, the Open Network Before the Internet. A state-run French computer service from the 1980s offers a cautionary tale about too much reliance on today’s private internet providers., 16 June 2017
Minitel Research Lab, USA
Communications in France
Orange S.A.
Legacy systems
Videotex
Information appliances
Pre–World Wide Web online services
1978 establishments in France
2012 disestablishments in France
E-commerce in France
French inventions | Minitel | Technology | 5,362 |
46,533,388 | https://en.wikipedia.org/wiki/C1orf131 | Uncharacterized protein C1orf131 is a protein that in humans is encoded by the gene C1orf131. The first ortholog of this protein was discovered in humans. Subsequently, through the use of algorithms and bioinformatics, homologs of C1orf131 have been discovered in numerous species, and as a result, the name of the majority of the proteins in this protein family is Uncharacterized protein C1orf131 homolog.
Gene
In humans C1orf131 is located on the minus strand of chromosome 1 and on the cytogenetic band 1q42.2 along with 193 other genes. Notably, the gene upstream of C1orf131 is GNPAT, and the gene downstream of C1orf131 is TRIM67. When this gene is transcribed in humans, C1orf131 most often forms an mRNA 1458 base pairs long which is composed of seven exons. There are at least nine other alternative splice forms in humans that produce proteins. They range in size from 129 base pairs (2 exons) to 1458 base pairs (7 exons).
Protein
In the C1orf131 protein family, the proteins are between 93 and 450 amino acids long; however, the majority tend to be between 160 and 295 amino acids long. They have a molecular weight between 10.6 and 49.0 kDa, with the majority between 18.6 and 32.7 kDa, and an isoelectric point between 9.6 and 11.2. Over 30 orthologs from mammals, birds and lizards have been identified as having a poly(A) RNA binding site. All orthologs in this protein family have a domain of unknown function, DUF4602. The human protein has been shown to be both phosphorylated and acetylated. These proteins are rich in lysine, in charged amino acids (DEHKR), and in basic charged amino acids (HKR). The secondary structure of these proteins primarily consists of alpha helices and coils with a small percentage of beta strands. C1orf131 has been shown to interact with ubiquitin through affinity capture followed by mass spectrometry and with APP (amyloid beta (A4) precursor protein) through reconstituted complex.
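Bulk properties of this kind (molecular weight, isoelectric point, residue composition) are routinely estimated directly from a protein sequence. The sketch below is purely illustrative: it uses Biopython's ProtParam module on a made-up placeholder sequence, not the actual C1orf131 protein, so the printed values have no biological meaning.

```python
# Illustrative estimation of bulk protein properties from a sequence (Biopython).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MKKLKERSKHKDEKKARHKEKLKSDEEHKRKAKELKHRKDESKKHRKE"  # hypothetical

analysis = ProteinAnalysis(placeholder_seq)
print("length            :", len(placeholder_seq))
print("molecular weight  : %.1f Da" % analysis.molecular_weight())
print("isoelectric point : %.2f" % analysis.isoelectric_point())

# Fraction of charged (DEHKR) and basic (HKR) residues, as discussed above
composition = analysis.get_amino_acids_percent()
charged = sum(composition.get(aa, 0.0) for aa in "DEHKR")
basic = sum(composition.get(aa, 0.0) for aa in "HKR")
print("charged (DEHKR)   : %.0f%%" % (100 * charged))
print("basic (HKR)       : %.0f%%" % (100 * basic))
```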
DUF4602
DUF4602 (PF15375) is generally 120+ amino acids long. There is typically only one gene per species that contains this DUF domain; however, the domain has been identified in two different proteins in several species. In Trichuris suis DUF4602 is found in both hypothetical protein M5114_09117 and tRNA pseudouridine synthase D, and in Echinococcus granulosus DUF4602 has been found in hypothetical protein EGR 05135 and an expressed conserved protein. DUF4602 has been found primarily in eukaryotes; however, DUF4602 has been identified in the virus DRHN1, Bacillus sp. UNC41MFS5, Enterococcus faecalis, and Enterococcus faecalis 13-SD-W-01. In the C1orf131 orthologs the DUF domain is typically located in the middle of the gene toward the C-terminus side in larger proteins (250+ residues), while in smaller orthologs (160-250 residues) the DUF domain is located near the N-terminus. Also, in larger orthologs there are regions of low complexity, which could indicate that these proteins are intrinsically disordered proteins.
Evolutionary history
This gene family exists only in eukaryotes. There are no paralogs of this gene; however, there are a few pseudogenes of C1orf131. Thus far they have only been found in orangutans, mouse lemurs, and sloths. When this gene family is compared to cytochrome C, a slowly evolving gene, and the fibrinogen gamma chain, a fast-evolving gene, it is shown to evolve at a faster rate than fibrinogen.
References
Genes on human chromosome 1
Uncharacterized proteins | C1orf131 | Biology | 867 |
603,940 | https://en.wikipedia.org/wiki/Mesosome | Mesosomes or chondrioids are folded invaginations in the plasma membrane of bacteria that are produced by the chemical fixation techniques used to prepare samples for electron microscopy. Although several functions were proposed for these structures in the 1960s, they were recognized as artifacts by the late 1970s and are no longer considered to be part of the normal structure of bacterial cells. These extensions are in the form of vesicles, tubules and lamellae.
Initial observations
These structures are invaginations of the plasma membrane observed in gram-positive bacteria that have been chemically fixed to prepare them for electron microscopy. They were first observed in 1953 by George B. Chapman and James Hillier, who referred to them as "peripheral bodies." They were termed "mesosomes" by Fitz-James in 1960.
Initially, it was thought that mesosomes might play a role in several cellular processes, such as cell wall formation during cell division, chromosome replication, or as a site for oxidative phosphorylation. The mesosome was thought to increase the surface area of the cell, aiding the cell in cellular respiration. This is analogous to cristae in the mitochondrion in eukaryotic cells, which are finger-like projections and help eukaryotic cells undergo cellular respiration. Mesosomes were also hypothesized to aid in photosynthesis, cell division, DNA replication, and cell compartmentalisation.
Disproof of hypothesis
These models were called into question during the late 1970s when data accumulated suggesting that mesosomes are artifacts formed through damage to the membrane during the process of chemical fixation, and do not occur in cells that have not been chemically fixed. By the mid to late 1980s, with advances in cryofixation and freeze substitution methods for electron microscopy, it was generally concluded that mesosomes do not exist in living cells. However, a few researchers continue to argue that the evidence remains inconclusive, and that mesosomes might not be artifacts in all cases.
Recently, similar folds in the membrane have been observed in bacteria that have been exposed to some classes of antibiotics, and antibacterial peptides (defensins). The appearance of these mesosome-like structures may be the result of these chemicals damaging the plasma membrane and/or cell wall.
The case of the proposal and then disproof of the mesosome hypothesis has been discussed from the viewpoint of the philosophy of science as an example of how a scientific idea can be falsified and the hypothesis then rejected, and analyzed to explore how the scientific community carries out this testing process.
See also
Cell membrane
Organelle
Lysosome
References
Further reading
Membrane biology
Organelles
Prokaryotic cell anatomy | Mesosome | Chemistry | 574 |
5,004,430 | https://en.wikipedia.org/wiki/John%20McWhirter%20%28mathematician%29 | See John McWhirter (disambiguation) for other people of the same name.
John Graham McWhirter (born 28 March 1949) is a British mathematician and engineer in the field of signal processing.
John McWhirter attended Newry High School. He graduated in mathematics from Queen's University Belfast in 1970, and did his PhD there in 1973 on "The Virial Theorem in Collision Theory" under Benjamin Moiseiwitsch. He started working in the Signal Processing Group at the Royal Signals and Radar Establishment, Great Malvern, in the late 1970s, and continued to work there under RSRE's successor organisations, latterly QinetiQ. McWhirter left QinetiQ on 31 August 2007 to take up his current post as Distinguished Research Professor in Engineering at Cardiff University.
His work has mainly been in military areas including radar, sonar and communications, recently branching into civil applications. A particular interest is "blind" signal detection in which one does not know whether a signal is present, or its nature.
Awards and honours
1986 honorary visiting professor at Queen's University Belfast
1988 visiting professor at Cardiff University
1996 Elected as a Fellow of the Royal Academy of Engineering (FREng)
1999 Elected as a Fellow of the Royal Society.
2000 Honorary Doctorate from the Queen's University Belfast
2002 Honorary Doctorate from the University of Edinburgh
2003 EURASIP European Group Technical Achievement Award
He is also a Fellow of the Institute of Physics. He is a Fellow of the Institute of Mathematics and its Applications (IMA) and in 2002/3 its president. He is also a Founding Fellow of the Learned Society of Wales.
Selected papers
On the numerical inversion of the Laplace transform and similar Fredholm integral equations of the first kind, J G McWhirter and E R Pike, J. Phys. A: Math. Gen. 11 1729–1745 (1978)
Some systolic array developments in the United Kingdom, John V. McCanny and John G. McWhirter, Computer Volume 20, Issue 7 p. 51 (1987)
References
External links
Cardiff University page
Year of birth missing (living people)
Living people
People from Malvern, Worcestershire
Fellows of the Royal Society
Fellows of the Institution of Engineering and Technology
Fellows of the Royal Academy of Engineering
20th-century British mathematicians
21st-century British mathematicians
Alumni of Queen's University Belfast
Academics of Cardiff University
Qinetiq
Fellows of the Institute of Physics
Fellows of the Learned Society of Wales
People from Newry
1949 births | John McWhirter (mathematician) | Engineering | 503 |
21,660,626 | https://en.wikipedia.org/wiki/Cisgenesis | Cisgenesis is a product designation for a category of genetically engineered plants. A variety of classification schemes have been proposed that order genetically modified organisms based on the nature of introduced genotypical changes, rather than the process of genetic engineering.
Cisgenesis (etymology: cis = same side; and genesis = origin) is one term for organisms that have been engineered using a process in which genes are artificially transferred between organisms that could otherwise be conventionally bred. Genes are only transferred between closely related organisms. Nucleic acid sequences must be isolated and introduced using the same technologies that are used to produce transgenic organisms, making cisgenesis similar in nature to transgenesis. The term was first introduced in 2000 by Henk J. Schouten and Henk Jochemsen, and was used in a 2004 PhD thesis by Jan Schaart of Wageningen University discussing ways of making strawberries less susceptible to Botrytis cinerea.
In Europe, currently, this process is governed by the same laws as transgenesis. While researchers at Wageningen University in the Netherlands feel that this should be changed and regulated in the same way as conventionally bred plants, other scientists, writing in Nature Biotechnology, have disagreed. In 2012 the European Food Safety Authority (EFSA) issued a report with their risk assessment of cisgenic and intragenic plants. They compared the hazards associated with plants produced by cisgenesis and intragenesis with those obtained either by conventional plant breeding techniques or transgenesis. The EFSA concluded that "similar hazards can be associated with cisgenic and conventionally bred plants, while novel hazards can be associated with intragenic and transgenic plants."
Cisgenesis has been applied to transfer of natural resistance genes to the devastating disease Phytophthora infestans in potato and scab (Venturia inaequalis) in apple.
Cisgenesis and transgenesis use artificial gene transfer, which results in less extensive change to an organism's genome than mutagenesis, which was widely used before genetic engineering was developed.
Some people believe that cisgenesis should not face as much regulatory oversight as genetic modification created through transgenesis, as it is possible, if not always practical, to transfer alleles among closely related species even by traditional crossing. The primary biological advantage of cisgenesis is that it does not disrupt favorable heterozygous states, particularly in asexually propagated crops such as potato, which do not breed true to seed. One application of cisgenesis is to create blight-resistant potato plants by transferring known resistance loci from wild genotypes into modern, high-yielding varieties.
The Dutch government has proposed to exclude cisgenic plants from the European GMO Regulation, in view of the safety of cisgenic plants compared to classically bred plants, and their contribution to durable food production.
Related classification scheme
A related classification scheme has been proposed by Kaare Nielsen.
References
Genetic engineering | Cisgenesis | Chemistry,Engineering,Biology | 588 |
12,170,296 | https://en.wikipedia.org/wiki/Glaser%20coupling | The Glaser coupling is a type of coupling reaction. It is by far one of the oldest coupling reactions and is based on copper compounds like copper(I) chloride or copper(I) bromide and an additional oxidant like air. The base used in the original research paper is ammonia and the solvent is water or an alcohol.
The reaction was first reported by Carl Glaser in 1869. He suggested the following process on his way to diphenylbutadiyne:
CuCl + PhC2H + NH3 → PhC2Cu + NH4Cl
4PhC2Cu + O2 → 2PhC2C2Ph + 2Cu2O
Modifications
Eglinton reaction
In the related Eglinton reaction two terminal alkynes are coupled by a copper(II) salt such as cupric acetate.
2 RC≡CH → RC≡C−C≡CR (Cu(OAc)2, pyridine)
The oxidative coupling of alkynes has been used to synthesize a number of natural products. The stoichiometry is represented by the highly simplified scheme shown above.
Such reactions proceed via copper(I)-alkyne complexes.
This methodology was used in the synthesis of cyclooctadecanonaene. Another example is the synthesis of diphenylbutadiyne from phenylacetylene.
Hay coupling
The Hay coupling is a variant of the Glaser coupling. It relies on the TMEDA complex of copper(I) chloride to activate the terminal alkyne. Oxygen (air) is used in the Hay variant to oxidize catalytic amounts of Cu(I) to Cu(II) throughout the reaction, as opposed to the stoichiometric amount of Cu(II) used in the Eglinton variant. The Hay coupling of trimethylsilylacetylene gives the corresponding butadiyne derivative.
Scope
In 1882 Adolf von Baeyer used the method to prepare 1,4-bis(2-nitrophenyl)butadiyne, en route to indigo dye.
Shortly afterwards, Baeyer reported a different route to indigo, now known as the Baeyer–Drewson indigo synthesis.
See also
Cadiot–Chodkiewicz coupling - Another alkyne coupling reaction catalysed by copper(I).
Sonogashira coupling - Pd/Cu catalysed coupling of an alkyne and an aryl or vinyl halide
Castro–Stephens coupling - A cross-coupling reaction between a copper(I) acetylide and an aryl halide
Fritsch–Buttenberg–Wiechell rearrangement - can also form diynes
References
Carbon-carbon bond forming reactions
Name reactions | Glaser coupling | Chemistry | 593 |
18,675,102 | https://en.wikipedia.org/wiki/Reference%20tone | A reference tone is a pure tone corresponding to a known frequency, and produced at a stable sound pressure level (volume), usually by specialized equipment.
In media
The most common reference tone in audio engineering is a 1 kHz tone at −20 dB. It is meant to be used by audio engineers in order to adjust the playback equipment so that the accompanying media is at a comfortable volume for the audience. In video production, this tone is usually accompanied by a test card so the video programming may be calibrated as well. It is sometimes played in sequence between a 100 Hz and a 10 kHz tone to ensure an accurate response from the equipment at varying audio frequencies. This is also the "bleep" tone commonly used to censor obscene or sensitive audio content.
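Generating such a line-up tone digitally is a matter of scaling a sine wave so that its level sits the chosen number of decibels below full scale. The sketch below is a minimal illustration, not a broadcast specification; it assumes a peak-referenced dBFS convention (some standards reference RMS instead), and the sample rate and duration are arbitrary choices.

```python
# Minimal sketch: a sine tone whose peak level sits a given number of dB below full scale.
import numpy as np

def reference_tone(freq_hz=1000.0, level_dbfs=-20.0, duration_s=1.0, sample_rate=48000):
    """Return float samples in [-1, 1] for a sine tone at the given peak dBFS level."""
    amplitude = 10.0 ** (level_dbfs / 20.0)          # -20 dBFS (peak) -> 0.1 of full scale
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2.0 * np.pi * freq_hz * t)

tone = reference_tone()
print("peak level: %.1f dBFS" % (20.0 * np.log10(np.max(np.abs(tone)))))
```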
In music
Many electronic tuners used by musicians emit a tone of 440 Hz, corresponding to a pitch of A above Middle C (A4). More sophisticated tuners offer a choice of other reference pitches to account for differences in tuning. Some specialized tuners offer pitches used commonly on a particular instrument (standard guitar tuning, fifth intervals for string instruments, the open tones for various brass instruments).
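In twelve-tone equal temperament, every other pitch follows from the 440 Hz reference by multiplying or dividing by the twelfth root of two per semitone. The short sketch below (an illustration, not from the article) shows the calculation, including how an alternative reference pitch such as 442 Hz shifts every derived frequency.

```python
# Equal-temperament pitch calculation from an A4 reference.
A4_HZ = 440.0

def pitch_hz(semitones_from_a4, reference_hz=A4_HZ):
    """Frequency of a note a given number of semitones above (or below) A4."""
    return reference_hz * 2.0 ** (semitones_from_a4 / 12.0)

print("A4        : %.2f Hz" % pitch_hz(0))
print("middle C  : %.2f Hz" % pitch_hz(-9))   # C4 is nine semitones below A4, ≈ 261.63 Hz
print("A4 at 442 : %.2f Hz" % pitch_hz(0, reference_hz=442.0))
```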
In telecommunications
In telecommunication, a standard test tone is a pure tone with a standardized level generally used for level alignment of single links and of links in tandem.
For standardized test signal levels and frequencies, see MIL-STD-188-100 for United States Department of Defense (DOD) use, and the Code of Federal Regulations Title 47, part 68 for other Government agencies.
References
External links
Downloadable reference tones, from The Freesound Project
Audio engineering
Music production
Telecommunications standards | Reference tone | Engineering | 328 |
31,345,089 | https://en.wikipedia.org/wiki/BS%208888 | BS 8888 is the British standard developed by the BSI Group for technical product documentation, geometric product specification, geometric tolerance specification and engineering drawings.
History
2008 update
A significant change in the 2008 revision is that there is no longer a requirement
1984
This updated version of the standard has been restructured to be more aligned to the workflow of designers and engineers to assist throughout the design process. The standard now references 3D geometry, not only as drawings but also allowing a 3D surface to be used as a datum feature.
Purpose
BS 8888 performs three fundamental tasks:
Unifying all the ISO standards applicable to technical specification;
Giving an index of ISO standards involved with different principles of technical product specification (TPS);
Providing BSI with a platform for further explanatory commentary where necessary.
References
http://www.g-tol.co.uk/in1.htm (Iain McLeod Associates)
http://www.roymech.co.uk/Useful_Tables/Drawing/Drawing.html
08888
Technical drawing | BS 8888 | Engineering | 217 |
49,584,006 | https://en.wikipedia.org/wiki/Dibutyrylmorphine | Dibutyrylmorphine (also known as dibutanoylmorphine) is the 3,6-dibutyryl ester of morphine, first synthesized by the CR Alders Wright organization in the United Kingdom in 1875.
In animal studies its potency as an analgesic is higher compared to morphine, but lower than that of heroin.
Its structure is similar to that of other morphine esters such as heroin and nicomorphine. In many countries it is controlled as an ester of a controlled substance.
Esters of morphine were first produced by boiling morphine in acids or acid anhydrides including acetic, formic, propanoic, benzoic, butyric, and others, forming numerous mono-, di-, and tetraesters. Some of these were later researched further by others and some were eventually marketed. They included heroin; the first designer drugs, which were produced in the late 1920s to replace heroin when it was outlawed by the League of Nations; and medicinal drugs such as nicomorphine. Some of the corresponding esters of codeine, dihydrocodeine, dihydromorphine, and isocodeine were also developed, such as the cough suppressant nicocodeine. The 3,6-diesters of morphine are drugs with more rapid and complete central nervous system penetration due to increased lipid solubility and other structural considerations.
References
4,5-Epoxymorphinans
Euphoriants
Morphine
Mu-opioid receptor agonists
Opioids
Catechol ethers
Prodrugs
Semisynthetic opioids | Dibutyrylmorphine | Chemistry | 343 |
18,298 | https://en.wikipedia.org/wiki/Lunar%20eclipse | A lunar eclipse is an astronomical event that occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. Such an alignment occurs during an eclipse season, approximately every six months, during the full moon phase, when the Moon's orbital plane is closest to the plane of the Earth's orbit.
This can occur only when the Sun, Earth, and Moon are exactly or very closely aligned (in syzygy) with Earth between the other two, which can happen only on the night of a full moon when the Moon is near either lunar node. The type and length of a lunar eclipse depend on the Moon's proximity to the lunar node.
When the Moon is totally eclipsed by the Earth (a "deep eclipse"), it takes on a reddish color, because the Earth completely blocks direct sunlight from reaching the Moon's surface and the only light reflected from the lunar surface is what has been refracted by the Earth's atmosphere. This light appears reddish due to the Rayleigh scattering of blue light, the same reason sunrises and sunsets are more orange than during the day.
Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly two hours, while a total solar eclipse lasts only a few minutes at any given place because the Moon's shadow is smaller. Also unlike solar eclipses, lunar eclipses are safe to view without any eye protection or special precautions.
The symbol for a lunar eclipse (or any body in the shadow of another) is (U+1F776 🝶).
Types of lunar eclipse
Earth's shadow can be divided into two distinctive parts: the umbra and penumbra. Earth totally occludes direct solar radiation within the umbra, the central region of the shadow. However, since the Sun's diameter appears to be about one-quarter of Earth's in the lunar sky, the planet only partially blocks direct sunlight within the penumbra, the outer portion of the shadow.
Penumbral lunar eclipse
A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. No part of the moon is in the Earth's umbra during this event, meaning that on all or a part of the Moon's surface facing Earth, the sun is partially blocked. The penumbra causes a subtle dimming of the lunar surface, which is only visible to the naked eye when the majority of the Moon's diameter has immersed into Earth's penumbra. A special type of penumbral eclipse is a total penumbral lunar eclipse, during which the entire Moon lies exclusively within Earth's penumbra. Total penumbral eclipses are rare, and when these occur, the portion of the Moon closest to the umbra may appear slightly darker than the rest of the lunar disk.
Partial lunar eclipse
When the Moon's near side penetrates partially into the Earth's umbra, it is known as a partial lunar eclipse, while a total lunar eclipse occurs when the entire Moon enters the Earth's umbra. During this event, one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. The Moon's average orbital speed is about , or a little more than its diameter per hour, so totality may last up to nearly 107 minutes. Nevertheless, the total time between the first and last contacts of the Moon's limb with Earth's shadow is much longer and could last up to 236 minutes.
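A rough sense of why totality can last so long comes from comparing the width of Earth's umbra at the Moon's distance with the Moon itself. The sketch below is a back-of-the-envelope illustration, not part of the article; all of the mean values it uses (radii, distances, orbital speed) are assumptions, it ignores the shadow's own motion, and the result is only an order-of-magnitude check against the quoted 107-minute maximum, which occurs for a central passage near apogee.

```python
# Back-of-the-envelope umbra geometry and totality duration (approximate mean values).
R_SUN, R_EARTH, R_MOON = 696_000.0, 6_371.0, 1_737.0   # km
D_SUN, D_MOON = 149_600_000.0, 384_400.0                # km
V_MOON = 1.02                                            # km/s, mean orbital speed

# Length of the umbral cone and its radius at the Moon's mean distance
umbra_length = D_SUN * R_EARTH / (R_SUN - R_EARTH)
umbra_radius = R_EARTH * (1.0 - D_MOON / umbra_length)

# Crude upper bound on totality for a central passage, ignoring the shadow's own motion
totality_s = (2.0 * umbra_radius - 2.0 * R_MOON) / V_MOON

print(f"umbra length ≈ {umbra_length / 1e6:.2f} million km")
print(f"umbra radius at the Moon ≈ {umbra_radius:.0f} km vs Moon radius {R_MOON:.0f} km")
print(f"estimated maximum totality ≈ {totality_s / 60:.0f} minutes")  # ~90+ min
```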
Total lunar eclipse
When the Moon's near side entirely passes into the Earth's umbral shadow, a total lunar eclipse occurs. Just prior to complete entry, the brightness of the lunar limb—the curved edge of the Moon still being hit by direct sunlight—will cause the rest of the Moon to appear comparatively dim. The moment the Moon enters a complete eclipse, the entire surface will become more or less uniformly bright, being able to reveal stars surrounding it. Later, as the Moon's opposite limb is struck by sunlight, the overall disk will again become obscured. This is because, as viewed from the Earth, the brightness of a lunar limb is generally greater than that of the rest of the surface due to reflections from the many surface irregularities within the limb: sunlight striking these irregularities is always reflected back in greater quantities than that striking more central parts, which is why the edges of full moons generally appear brighter than the rest of the lunar surface. This is similar to the effect of velvet fabric over a convex curved surface, which, to an observer, will appear darkest at the center of the curve. It will be true of any planetary body with little or no atmosphere and an irregular cratered surface (e.g., Mercury) when viewed opposite the Sun.
Central lunar eclipse
Central lunar eclipse is a total lunar eclipse during which the Moon passes near and through the centre of Earth's shadow, contacting the antisolar point. This type of lunar eclipse is relatively rare.
The relative distance of the Moon from Earth at the time of an eclipse can affect the eclipse's duration. In particular, when the Moon is near apogee, the farthest point from Earth in its orbit, its orbital speed is the slowest. The diameter of Earth's umbra does not decrease appreciably over the range of the Moon's orbital distances. Thus, the concurrence of a totally eclipsed Moon near apogee will lengthen the duration of totality.
Selenelion
A selenelion or selenehelion, also called a horizontal eclipse, occurs where and when both the Sun and an eclipsed Moon can be observed at the same time. The event can only be observed just before sunset or just after sunrise, when both bodies will appear just above opposite horizons at nearly opposite points in the sky. A selenelion occurs during every total lunar eclipse—it is an experience of the observer, not a planetary event separate from the lunar eclipse itself. Typically, observers on Earth located on high mountain ridges undergoing false sunrise or false sunset at the same moment of a total lunar eclipse will be able to experience it. Although during selenelion the Moon is completely within the Earth's umbra, both it and the Sun can be observed in the sky because atmospheric refraction causes each body to appear higher (i.e., more central) in the sky than its true geometric planetary position.
Timing
The timing of total lunar eclipses is determined by what are known as its "contacts" (moments of contact with Earth's shadow); a short example computing phase durations from these contact times is sketched after the list:
P1 (First contact): Beginning of the penumbral eclipse. Earth's penumbra touches the Moon's outer limb.
U1 (Second contact): Beginning of the partial eclipse. Earth's umbra touches the Moon's outer limb.
U2 (Third contact): Beginning of the total eclipse. The Moon's surface is entirely within Earth's umbra.
Greatest eclipse: The peak stage of the total eclipse. The Moon is at its closest to the center of Earth's umbra.
U3 (Fourth contact): End of the total eclipse. The Moon's outer limb exits Earth's umbra.
U4 (Fifth contact): End of the partial eclipse. Earth's umbra leaves the Moon's surface.
P4 (Sixth contact): End of the penumbral eclipse. Earth's penumbra no longer makes contact with the Moon.
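As a worked illustration of how these contacts partition an eclipse, the sketch below computes phase durations from a set of purely illustrative contact times; the date, the times, and the eclipse itself are assumptions made for the example and are not taken from an ephemeris.

```python
from datetime import datetime

# Hypothetical contact times for a total lunar eclipse (UTC); real values
# would come from an ephemeris or published eclipse tables.
contacts = {
    "P1": datetime(2025, 3, 14, 3, 57),   # penumbral eclipse begins
    "U1": datetime(2025, 3, 14, 5, 9),    # partial eclipse begins
    "U2": datetime(2025, 3, 14, 6, 26),   # total eclipse begins
    "U3": datetime(2025, 3, 14, 7, 31),   # total eclipse ends
    "U4": datetime(2025, 3, 14, 8, 47),   # partial eclipse ends
    "P4": datetime(2025, 3, 14, 10, 0),   # penumbral eclipse ends
}

def minutes(start: datetime, end: datetime) -> float:
    """Duration between two contacts in minutes."""
    return (end - start).total_seconds() / 60

print("Totality (U2-U3):       ", minutes(contacts["U2"], contacts["U3"]), "min")
print("Umbral eclipse (U1-U4): ", minutes(contacts["U1"], contacts["U4"]), "min")
print("Whole eclipse (P1-P4):  ", minutes(contacts["P1"], contacts["P4"]), "min")
```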
Danjon scale
The following scale (the Danjon scale) was devised by André Danjon for rating the overall darkness of lunar eclipses:
L = 0: Very dark eclipse. Moon almost invisible, especially at mid-totality.
L = 1: Dark eclipse, gray or brownish in coloration. Details distinguishable only with difficulty.
L = 2: Deep red or rust-colored eclipse. Very dark central shadow, while outer edge of umbra is relatively bright.
L = 3: Brick-red eclipse. Umbral shadow usually has a bright or yellow rim.
L = 4: Very bright copper-red or orange eclipse. Umbral shadow is bluish and has a very bright rim.
Lunar versus solar eclipse
There is often confusion between a solar eclipse and a lunar eclipse. While both involve interactions between the Sun, Earth, and the Moon, they are very different events: in a solar eclipse the Moon passes between the Sun and Earth and casts its shadow on Earth, whereas in a lunar eclipse the Earth passes between the Sun and the Moon and casts its shadow on the Moon.
The Moon does not completely darken as it passes through the umbra because of the refraction of sunlight by Earth's atmosphere into the shadow cone; if Earth had no atmosphere, the Moon would be completely dark during the eclipse. The reddish coloration arises because sunlight reaching the Moon must pass through a long and dense layer of Earth's atmosphere, where it is scattered. Shorter wavelengths are more likely to be scattered by the air molecules and small particles; thus, the longer wavelengths predominate by the time the light rays have penetrated the atmosphere. Human vision perceives this resulting light as red. This is the same effect that causes sunsets and sunrises to turn the sky a reddish color. An alternative way of conceiving this scenario is to realize that, as viewed from the Moon, the Sun would appear to be setting (or rising) behind Earth.
The amount of refracted light depends on the amount of dust or clouds in the atmosphere; this also controls how much light is scattered. In general, the dustier the atmosphere, the more that other wavelengths of light will be removed (compared to red light), leaving the resulting light a deeper red color. This causes the resulting coppery-red hue of the Moon to vary from one eclipse to the next. Volcanoes are notable for expelling large quantities of dust into the atmosphere, and a large eruption shortly before an eclipse can have a large effect on the resulting color.
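The wavelength dependence described above is not quantified in the text; a minimal sketch, assuming the standard Rayleigh-scattering law (scattering efficiency roughly proportional to 1/wavelength^4 for clean air, with aerosols and ozone absorption ignored), compares how strongly blue and red light are removed from the beam:

```python
# Relative Rayleigh scattering strength ~ 1 / wavelength^4 (assumed standard law;
# real eclipse colours also depend on aerosols, clouds, and ozone absorption).
blue_nm, red_nm = 450.0, 700.0
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered roughly {ratio:.1f}x more strongly than red light")  # ~5.9x
```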
In culture
Several cultures have myths related to lunar eclipses or allude to the lunar eclipse as being a good or bad omen. The Egyptians saw the eclipse as a sow swallowing the Moon for a short time; other cultures view the eclipse as the Moon being swallowed by other animals, such as a jaguar in Mayan tradition, or a mythical three-legged toad known as Chan Chu in China. Some societies thought it was a demon swallowing the Moon, and that they could chase it away by throwing stones and curses at it. The Ancient Greeks correctly believed the Earth was round and used the shadow from the lunar eclipse as evidence. Some Hindus believe in the importance of bathing in the Ganges River following an eclipse because it will help to achieve salvation.
Inca
Similarly to the Mayans, the Incans believed that lunar eclipses occurred when a jaguar ate the Moon, which is why a blood moon looks red. The Incans also believed that once the jaguar finished eating the Moon, it could come down and devour all the animals on Earth, so they would take spears and shout at the Moon to keep it away.
Mesopotamians
The ancient Mesopotamians believed that a lunar eclipse was when the Moon was being attacked by seven demons. This attack was seen as more than an assault on the Moon alone: the Mesopotamians linked what happened in the sky with what happened on the land, and because the king of Mesopotamia represented the land, the seven demons were thought to be attacking the king as well. In order to prevent this attack on the king, the Mesopotamians made someone pretend to be the king so they would be attacked instead of the true king. After the lunar eclipse was over, the substitute king was made to disappear (possibly by poisoning).
Chinese
In some Chinese cultures, people would ring bells to prevent a dragon or other wild animals from biting the Moon. In the 19th century, during a lunar eclipse, the Chinese navy fired its artillery because of this belief. During the Zhou Dynasty (c. 1046–256 BC), the sight of a Red Moon engulfed in darkness, described in the Book of Songs, was believed to foreshadow famine or disease.
Blood moon
Certain lunar eclipses have been referred to as "blood moons" in popular articles but this is not a scientifically recognized term. This term has been given two separate, but overlapping, meanings.
The first meaning relates to the reddish color a totally eclipsed Moon takes on to observers on Earth. As sunlight penetrates the atmosphere of Earth, the gaseous layer filters and refracts the rays in such a way that the green to violet wavelengths on the visible spectrum scatter more strongly than the red, thus giving the Moon a reddish cast. This is possible because refraction in Earth's atmosphere bends some of the Sun's rays around the Earth so that they still reach and reflect off the Moon.
Occurrence
At least two lunar eclipses and as many as five occur every year, although total lunar eclipses are significantly less common than partial lunar eclipses. If the date and time of an eclipse is known, the occurrences of upcoming eclipses are predictable using an eclipse cycle, like the saros. Eclipses occur only during an eclipse season, when the Sun appears to pass near either node of the Moon's orbit.
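As a rough illustration of using an eclipse cycle, the sketch below advances a starting eclipse instant by one saros (about 6,585 days and 8 hours, after which eclipses of similar geometry recur); the starting date is purely illustrative, and real predictions require full ephemeris calculations rather than this single-period shift:

```python
from datetime import datetime, timedelta

SAROS = timedelta(days=6585, hours=7, minutes=43)  # one saros, ~18 years 11 days 8 hours

# Illustrative starting instant for some lunar eclipse (assumed for the example).
known_eclipse = datetime(2022, 5, 16, 4, 11)

# The next eclipse of the same saros series occurs roughly one saros later,
# shifted about a third of a day (~120 degrees of longitude) westward.
print(known_eclipse + SAROS)
```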
View from the Moon
As seen from the Moon, a lunar eclipse is a solar eclipse: the Sun is hidden behind the Earth. The occurrence makes Earth's atmosphere appear as a red ring around the dark Earth. During a full moon, the phase during which lunar eclipses take place, the night side of the Earth is illuminated by moonlight.
See also
Lists of lunar eclipses and List of 21st-century lunar eclipses
Lunar occultation
Moon illusion
Orbit of the Moon
Solar eclipse
Eclipses in history and culture
References
Works cited
Further reading
Bao-Lin Liu, Canon of Lunar Eclipses 1500 B.C.-A.D. 3000. Willmann-Bell, Richmond VA, 1992
Jean Meeus and Hermann Mucke Canon of Lunar Eclipses -2002 to +2526 (3rd edition). Astronomisches Büro, Vienna, 1992
Espenak, F., Fifty Year Canon of Lunar Eclipses: 1986–2035. NASA Reference Publication 1216, 1989
Espenak, F. Thousand Year Canon of Lunar Eclipses 1501 to 2500, Astropixels Publishing, Portal AZ, 2014
External links
Lunar Eclipse Essentials: video from NASA
Animated explanation of the mechanics of a lunar eclipse, University of South Wales
U.S. Navy Lunar Eclipse Computer
NASA Lunar Eclipse Page
Search among the 12,064 lunar eclipses over five millennia and display interactive maps
Lunar Eclipses for Beginners
Tips on photographing the lunar eclipse from New York Institute of Photography
Astronomical events
Eclipses
Lunar observation | Lunar eclipse | Astronomy | 2,999 |
380,683 | https://en.wikipedia.org/wiki/Endowment%20%28Mormonism%29 | In Mormonism, the endowment is a two-part ordinance designed for participants to become kings, queens, priests, and priestesses in the afterlife. As part of the first ceremony, participants take part in a scripted reenactment of the Biblical creation and fall of Adam and Eve. The ceremony includes a symbolic washing and anointing, and receipt of a "new name" which they are not to reveal to others except at a certain part in the ceremony, and the receipt of the temple garment, which Mormons then are expected to wear under their clothing day and night throughout their life. Participants are taught symbolic gestures and passwords considered necessary to pass by angels guarding the way to heaven, and are instructed not to reveal them to others. As practiced today in the Church of Jesus Christ of Latter-day Saints (LDS Church), the endowment also consists of a series of covenants (promises to God) that participants make, such as a covenant of consecration to the LDS Church. All LDS Church members who choose to serve as missionaries or participate in a celestial marriage in a temple must first complete the first endowment ceremony.
The second part of the endowment, called the second anointing, is the pinnacle ordinance of the temple, jointly given to a husband and wife couple to ensure salvation, guarantee exaltation, and confer godhood. Participants are anointed kings, queens, priests, and priestesses, whereas in the first endowment they are only anointed to become those contingent to following specified covenants. The second anointing is only given to a select group, and its existence is not widely known among the general membership.
The endowment as practiced today was instituted by founder Joseph Smith in the 1840s with further contributions by Brigham Young and his successors. The ceremony is performed in Latter Day Saint temples, which are dedicated specifically for the endowment and certain other ordinances sacred to Mormons, and are open only to Mormons who meet certain requirements. There was a brief period during the construction of the Salt Lake Temple where a small building referred to as the Endowment House was used to administer the endowment ordinance. The endowment is currently practiced by the LDS Church, several denominations of Mormon fundamentalism, and a few other Mormon denominations. The LDS Church has altered the ceremony throughout its history.
A distinct endowment ceremony was also performed in the 1830s in the Kirtland Temple, the first temple of the broader Latter Day Saint movement, which includes other smaller churches such as the Community of Christ. The term "endowment" thus has various meanings historically, and within the other branches of the Latter Day Saint movement.
About two-thirds of US members reported having current authorization from their local leadership to participate in temple ordinances in a 2012 survey. Estimates show that fewer than half of converts to the LDS Church ultimately undergo the first endowment ceremony, and young people preparing for missions account for about one-third of "live" endowments (as contrasted with proxy endowments for the deceased). The less common second endowment ceremony had been given 15,000 times by 1941, but has become less frequent in modern times.
Previous Latter Day Saint endowments
The meaning and scope of the term endowment evolved during the early Latter Day Saint movement, of which Mormonism is a part. The term derives from the Authorized King James Version, referring to the spiritual gifts given to the disciples of Jesus on the day of Pentecost, in which they were "endowed with power from on high." Christians generally understand this endowment to refer to the gift of the Holy Spirit, which the Latter Day Saints believe is given at the Confirmation ceremony. In 1831, however, Smith began teaching that the elders of the church needed to be further "endowed with power from on high" in order to be effective proselytizers. He therefore gathered the elders together at a general conference in June 1831 and "endowed" them with this power by ordaining them to the High Priesthood.
By the mid-1830s, Smith was teaching that a further endowment was necessary, this time requiring the completion of the Kirtland Temple as a house of God where God could pour out his Holy Spirit. Upon the completion of the Kirtland Temple after three years of construction (1833–1836), the elders of the church gathered for this second promised endowment in early 1836. The Kirtland endowment included a ritual ceremony involving preparatory washings and anointings with oil, followed by a gathering in the temple in which many reported spiritual gifts such as speaking in tongues and visions.
The Nauvoo endowment
Overview
The Nauvoo endowment consists of two phases: (1) an initiation, and (2) an instructional and testing phase. The initiation consists of a washing and anointing, culminating in the clothing of the patron in a "Garment of the Holy Priesthood", which is thereafter worn as an undergarment.
The instructional and testing phase of the endowment consists of a scripted reenactment of Adam and Eve's experience in the Garden of Eden (performed by live actors—called officiators; in the mid-20th century certain portions were adapted to a film presentation). The instruction is punctuated with oaths, symbolic gestures, and a prayer around an altar, and at the end of instruction, the initiate's knowledge of symbolic gestures and key-words is tested at a "veil."
Introduction
On May 3, 1842, Joseph Smith prepared the second floor of his Red Brick Store, in Nauvoo, Illinois, to represent "the interior of a temple as circumstances would permit". The next day, May 4, he introduced the Nauvoo endowment ceremony to nine associates.
Throughout 1843 and 1844 Smith continued to initiate other men, as well as women, into the endowment ceremony. By the time of his death on June 27, 1844, more than 50 persons had been admitted into the Anointed Quorum, the name by which this group called themselves.
The Nauvoo endowment and Freemasonry
There are many similarities between Smith's endowment ceremony and certain rituals of Freemasonry, particularly the Royal Arch degree (although there is no evidence that he ever saw the Royal Arch degree or joined a Royal Arch Chapter). These similarities included instruction in various signs, tokens, and passwords, and the imposition of various forms of penalties for revealing them. The original wording of the penalties, for example, closely followed the graphic wording of the Masonic penalties.
According to the predominant view by historians, Smith used and adapted material from the Masonic rituals in creating the endowment ceremony. All of those first initiated by Smith on May 4, 1842, were longstanding or recent Masons: James Adams was the Deputy Grand Master of the Masonic Grand Lodge of Illinois; Newell K. Whitney, George Miller and Heber C. Kimball had previously been Lodge Masters; Smith's brother, Hyrum, had been a Mason since 1827; and the remaining five participants (Law, Marks, Young, Richards, and Smith himself) had been initiated as Freemasons just weeks before the meeting. However, none of these Masons ever charged Smith with breaking any of Masonry's oaths or revealing its secrets. In contrast to those that believe Smith simply copied these rituals to advance his own religion, one Mormon historian has noted that these Masonic parallels confirmed to these men "the breath of the restoration impulse and was evidence of Smith's divine calling".
The LDS Church has never commented officially on these similarities, although certain features of the two rituals have been called "analogous" by one official Church Historian and the apostle Jeffrey R. Holland stated in a BBC interview that endowment ordinance vows to secrecy are "similar to a Masonic relationship." The LDS Church apostle John A. Widtsoe downplayed the similarities, arguing that they "do not deal with the basic matters [the endowment] but rather with the mechanism of the ritual." One LDS Church educator, however, was censured in the 1970s by the Church Educational System for arguing that the endowment ceremony had a dependent relationship with the rituals of freemasonry.
Some within the LDS Church, particularly Smith's contemporaries, have expressed the view that the endowment was given anciently by God in its original form at the Temple of Solomon, but that the form of the ritual degenerated into the form used by Freemasons. Heber C. Kimball clearly supported this position: "We have the true Masonry. The Masonry of today is received from the apostasy which took place in the days of Solomon and David. They have now and then a thing that is correct, but we have the real thing."
Later modifications by the LDS Church
After Smith officiated in Brigham Young's endowment in 1842 Smith told him, "Brother Brigham, this is not arranged perfectly; however we have done the best we could under the circumstances in which we are placed. I wish you to take this matter in hand: organize and systematize all these ceremonies". Young did as Smith directed, and under Young's direction the Nauvoo endowment ceremony was introduced to the church at large in the Nauvoo Temple during the winter of 1845–1846. A spacious hall in the temple's attic was arranged into appropriate ordinance "rooms" using canvas partitions. Potted plants were used in areas representing the Garden of Eden, and other areas were furnished appropriately, including a room representing the celestial kingdom. Over 5,500 persons received their endowments in this temple.
Young introduced the same ceremony in the Utah Territory in the 1850s, first in the Endowment House and then in the St. George Temple. During this period the ceremony had never been written down, but was passed orally from temple worker to worker. Shortly after the dedication of the St. George Temple, and before his death in 1877, Young became concerned about the possibility of variations in the ceremony within the church's temples and so directed the majority of the text of the endowment to be written down. This document became the standard for the ceremony thereafter. Also in 1877, the first endowments for the dead were performed in the St. George Temple.
In 1893, minor alterations in the text were made in an attempt to bring uniformity to the ceremony as administered in the temples. Between 1904 and 1906, the temple ceremony received very public scrutiny during the 1904 Senate investigation of LDS Apostle and U.S. Senator, Reed Smoot. Of particular concern to senators was the ceremony's "law of vengeance": during the hearings it was revealed that participants took an oath of vengeance to pray that God would "avenge the blood of the prophets on this nation". The "prophets" were Joseph and Hyrum Smith, and "this nation" was the United States.
Beginning in 1919, church president Heber J. Grant appointed a committee charged with revising the ceremony, which was done under the direction of Apostle George F. Richards from 1921 to 1927. Richards received permission to write down the previously unwritten portions of the ceremony. Among his revisions was the elimination of the "law of vengeance". Previous versions of the ceremony into the 1880s also had the representative of the Lord cut the symbols in the garments with a knife through the veil, with one source suggesting an early version cut into the knee of the participant to create a scar. The committee also removed the violent language from the penalty portions of the ceremony. Prior to 1927, participants made an oath that if they ever revealed the secret gestures of the ceremony, they would be subject to the following:
Each temple president received a "President's Book" with the revised ceremony ensuring uniformity throughout the church's temples.
The first filmed versions of the endowment were introduced in the 1950s by a committee headed by Gordon B. Hinckley. That change was initiated by church president David O. McKay as a way of providing the instruction simultaneously in different languages, an innovation made necessary by the construction of the Bern Switzerland Temple, the church's first temple in Europe. Ceremonies in all but two (the Salt Lake Temple and Manti Temple) of the church's 128 operating temples are presented using the filmed version.
In 1990, further changes included the elimination of all blood oaths and penalties. These penalties, representing what the member would rather suffer than reveal the sacred signs given them in the ceremony, were symbolized by gestures for having the throat cut, the breast cut open, and the bowels torn out. Changes also included the elimination of the five points of fellowship, the role of the preacher, and all reference to Lucifer's "popes and priests" were dropped.
The ceremony was also changed to lessen the differences in treatment between men and women. Women no longer are required to covenant to obey their husbands, but instead must covenant only to follow their husbands as their husbands follow God. Also, Eve is no longer explicitly blamed for the Fall or told that Adam "shall rule over thee", and several references to Adam were replaced with references to Adam and Eve. The lecture at the veil was also cut, and some repetition was eliminated.
In the temple endowment, women were previously urged to be a priestess "unto her husband," while men were promised they will be priests to God. In January 2019, that topic was removed from the endowment process, in accordance with other changes that included more lines for Eve in their ritual performance of the Book of Genesis. Also in 2019, a letter from the church's First Presidency stated that "Veiling an endowed woman's face prior to burial is optional." It had previously been required. The letter went on to say that such veiling, "may be done if the sister expressed such a desire while she was living. In cases where the wishes of the deceased sister on this matter are not known, her family should be consulted."
The Church announced in 1988 that 100 million vicarious endowments had been performed on behalf of deceased persons.
Modern endowment as practiced by the LDS Church
The most well-known Mormon endowment ceremony is that performed by the LDS Church in its temples. This ceremony is open only to members of the church deemed worthy and given a "temple recommend" by their priesthood leaders after one or more personal interviews. It comprises four parts:
An initiatory composed of the preparatory ordinances of washing and anointing
An instructional portion with lectures and representations
The making of covenants (i.e. oaths)
A testing of knowledge
The initiatory
The "initiatory" is a prelude to the endowment proper, similar to Chrismation, and consists of:
Instruction
Symbolic washing and anointing ordinances
Being clothed in the temple garment
Receiving a "new name" in preparation for the endowment.
Preceded only by sealings in 1831, washing and anointing ceremonies are perhaps the earliest practiced temple ordinances for the living since the organization of the LDS Church. There is evidence that these ordinances have been performed since 1832 when they were first practiced in the Whitney Store as part of the School of the Prophets, and were subsequently implemented in the Kirtland endowment.
As part of the endowment ceremony, the ordinance of washing and anointing symbolizes the ritual cleansing of priests that took place at Israel's Tabernacle, Solomon's Temple, and the Second Temple, later known as Herod's Temple. The washing symbolizes being "cleansed from the blood of this generation," and being anointed to become "clean from the blood and sins of this generation."
After the washing and anointing, the patron is given the temple garment, formally called the "Garment of the Holy Priesthood". This garment represents the "coats of skins" given to Adam and Eve in the Garden of Eden.
Similar ordinances are performed for the living and the dead in LDS temples, where men are:
Ordained to the priesthood (for the dead only, since a man coming to the temple for his own endowment would have previously received his Melchizedek priesthood ordination)
Washed with water (which only involves a cursory sprinkling of water)
Blessed to have the washing sealed
Anointed with oil
Blessed to have the anointing sealed
Clothed in holy garments
Women receive the same ordinances, except for the ordination.
As the final part of the initiatory, the patron is given a new name, which is a key word used during the ceremony. In general, this name is only known to the person to whom it is given; however, an endowed LDS woman reveals her name to her endowed husband (but not vice versa).
The instructional portion
The endowment focuses on LDS belief in a plan of salvation and changes to the ceremony in 2023 included more discussion of Jesus. Parts of the doctrine of the plan of salvation explained include:
The eternal Nature of God, of Jesus Christ, and their divinity
The pre-mortal existence and eternal nature of man (mankind lived with God before mortal life)
The reality of Satan, who is Jesus' and Adam's rebellious spirit brother
The fall of Adam and the reasons for mortality, trials, and blessings
The Atonement of Jesus Christ, and the need for the Atonement
The relationship of grace, faith, and works
Death, the literal resurrection, and qualifying for one of the three kingdoms of glory (or Outer Darkness)
The need for personal righteousness, covenant keeping, and love of God and fellow man
That Heavenly Father loves humanity as his children and wants people to become like he is, to receive joy
The sanctity and eternal nature of the family
The endowment is often thought of as a series of lectures where Latter-day Saints are taught about the creation of the world, the events in the Garden of Eden, what happened after Adam and Eve were cast out of the Garden into the "telestial world", and the progression of righteous individuals through "terrestrial" laws to one of the kingdoms of glory and exaltation.
During the ceremony, Latter-day Saints are dressed in temple clothes or temple robes, are taught in ordinance rooms about various gospel laws (including obedience, chastity, sacrifice and consecration) and make covenants to obey these laws. The early Mormon leader Brigham Young taught that participants are given "signs and tokens" that "enable you to walk back to the presence of the Father, passing the angels who stand as sentinels" and gain eternal exaltation. At the end of the ceremony, the participant is "tested" at the veil on their knowledge of what they were taught and covenanted to do, and then admitted into the celestial room, where they may meditate and pray, but are discouraged from lingering.
Covenant portion
The LDS Church defines a covenant as a sacred promise one makes to God.
The temple ceremony involves entering into five covenants:
Law of Obedience, which includes striving to keep God's commandments.
Law of Sacrifice, which means doing all that is possible to support the Lord's work and repenting with a broken heart and contrite spirit.
Law of the Gospel, which refers to the higher law that Jesus Christ taught, including baptism, repentance, and being sanctified by the Holy Ghost.
Law of Chastity, which means having sexual relations only with the person to whom an individual is legally and lawfully married, according to God's law.
Law of Consecration, which means dedicating time, talents, and everything the Lord has blessed an individual with to build up the church.
The promise given in the ceremony is that those who remain faithful will be endowed "with power from on high."
Testing portion
At the end of the endowment ceremony the participant is tested at a physical veil by a man representing the Lord on the signs and tokens just learned. Before 1990, the participant at the veil also put their arm around the person on the other side of the veil and pressed their cheek, shoulders, knees, and feet against that person through the veil, in what was called "the five points of fellowship."
Requirements for participation
The endowment is open only to Mormons who have a valid "temple recommend." To be eligible to receive a temple recommend, one must be deemed worthy by church leadership and have been a member of the LDS Church for at least one year. A male member of the church must hold the Melchizedek priesthood to participate in the endowment. A temple recommend is signed by the person receiving the recommend, a member of the person's bishopric and a member of the stake presidency, who each perform a personal, one-on-one "worthiness interview." Persons seeking a recommend to attend the temple for the first time and receive their endowment will generally meet with their bishop and stake president.
These interviews cover what the church believes to be the most important factors of personal morality and worthiness, including whether the person has a basic belief in key church doctrines such as the divinity of Jesus and the restoration; whether the person attends church meetings and supports the leadership of the LDS Church; whether the person affiliates with Mormon fundamentalists or other people considered by the church to be apostate; whether the person is honest and lives the law of chastity and the Word of Wisdom; whether the person abuses family members; whether the person pays tithing and any applicable spousal or child support; and whether the person has confessed to serious past sins.
Prior to participating in the endowment, members of the LDS Church frequently participate in a six-part temple preparation class which discusses temple-related topics but does not directly discuss the details of the ceremony.
Ineligible groups of members
Some members of the church were historically or are currently ineligible for the temple endowment. For about 130 years (between 1847 and 1978) all LDS endowment-related temple ordinances were denied to all Black women and men in a controversial temple racial restriction. As of 2023, all temple ordinances including the endowment continue to be denied for any lesbian, gay, or bisexual person who is in a same-sex marriage or homosexual sexual relationship. Transgender individuals who gender transition (even if just by changing their name, pronouns and gender presentation by clothing and hairstyle) are also barred from temple ordinances as of 2020. These restrictions have received criticism from both outside, and inside the LDS church.
Held sacredness and perceived secrecy
In the modern endowment ceremony, recipients explicitly agree to a "covenant of non-disclosure" to keep some content such as the ceremony's signs and tokens (and formerly penalties) confidential. The remainder of the ceremony carries with it no covenants of secrecy. Most Latter-day Saints are generally unwilling to discuss specific details of the ceremony, and have been instructed by top church leaders that the only place where the temple ceremonies should be discussed, even amongst faithful members, is within the temple. Many Mormons hold the making of these covenants to be highly sacred, and believe that details of the ceremony should be kept from those who are not properly prepared.
Penalties
Prior to revisions in 1990, the LDS Church's version of the endowment included penalties which were specified punishment for breaking an oath of secrecy after receiving the Nauvoo endowment ceremony. Adherents promised they would submit to execution in specific ways should they reveal certain contents of the ceremony. In the ceremony participants each symbolically enacted three of the methods of their execution. In 1990 the LDS Church removed the "penalty" portions of the ceremony. Aspects of the ceremony held confidential have been published in various sources, unauthorized by the LDS Church.
Historical organizational statements on confidentiality
Official church publications have consistently stated that temple ceremonies are confidential and not to be discussed outside the temple, but the degree and breadth of information shared has shifted over time. The non-public nature of the endowment is implied early on by a reference in facsimile no. 2 in the Book of Abraham (part of the LDS Church standard works dated to 1835) when it states that there are things that "cannot be revealed unto the world; but is to be had in the Holy Temple of God."
In 1904, B. H. Roberts declared in testimony to the United States Senate that certain aspects of the endowment ceremony were intended to be "secret from the world". This information includes, in the initiation and instructional/testing phases of the endowment ceremony, certain names and symbolic gestures called tokens and signs.
In 2021, the specific covenants made during the endowment were enumerated in the online version of the General Handbook. This was the only previously undisclosed item about the endowment that was added. Since that publication, the covenants made and their doctrinal implications have been discussed in more public forums, including the publication of an article listing the covenants made and explaining their significance.
Perceived implications of confidentiality policy
Some Mormons have suggested that the reluctance to discuss the endowment encourages attacks and unauthorized exposés by evangelical Christians and others, and therefore advocate a more transparent attitude toward the ceremony. Transparency has increased somewhat since such criticisms were levied.
Latter-day Saint scholars' statements comparing modern endowment to ancient practices
The Latter-day Saint viewpoint is that the endowment is of ancient origin, revealed from the earliest time to the biblical Adam. Much research has been done by Latter Day Saints finding parallels between the endowment and ancient traditions. The LDS Church temple is referred to as a "house of learning" since it is a "kind of educational environment teaching by action and educating through ritual." The endowment ordinance, as presented in Latter-day Saint temples, has been referred to as a "ritual drama" that commemorates episodes of sacred history due to its "theatrical setting." When viewed as a restoration of ancient rites, the ritual drama and aesthetic environment in which the endowment is presented are both rich in Judeo-Christian symbolism. Comparative studies of the art, architecture, and rituals found in Mormonism, such as the endowment, reveal parallels to early Catholic and Jewish traditions.
The Testament of Levi discusses ceremonies and clothing that LDS scholar Blake Ostler relates to the modern LDS endowment. Some scholars have suggested that Jewish temple initiation was later merged with early Christian baptismal initiation sometime after the destruction of the Second Temple. By the fourth century CE, Christian baptism had adopted a much more dramatic and complex set of rituals accompanying it, including washing ceremonies, physical anointing with oil, being signed with a cross on the forehead, and receiving white garments and a new name, all which paralleled the Jewish initiation for priests and kings. St. Cyril of Jerusalem, in his Catechetical Lectures, related the anointing with oil at baptism with the anointing of a priest and king in the Old Testament, suggesting that the initiate actually became a priest and king in Christ.
The general theme of ascension through multiple gates or veils of heaven is found all throughout early Jewish, Christian, Muslim, and other Near Eastern religious writings, as well as in the Bible. Early works often describe angels and other sentinels which are set at these points, and several of these state that the ascending individual would be required to give specific signs and names to the sentinels in order to pass through the veil. The descriptions of key words, signs, and tokens being presented to the sentinels of the veils of heaven are particularly prevalent in old Gnostic Christian and Mandaean writings, and in Jewish lore. In one of the Nag Hammadi texts, Jesus promises that those who accept him would pass by each of the gates of heaven without fear and would be perfected in the third heaven. The Coptic Book of 1 Jeu describes Jesus instructing the apostles in the hand-signs, names, and seals that they must use before the guardians of heaven would remove the veils of heaven to allow them passage. In Hekhalot Rabbati 17:1–20:3, an old Jewish esoteric text, the faithful pass through seven doors in order to enter the presence of God, passing by angels whose names they must give, while presenting a seal. 3 Enoch also describes the names and seals given to the angels.
The Latter Day Saint temple garment is usually identified by Mormon scholars with the sacred "linen breeches" (michnasayim/mikhnesei bahd) and the "coat of linen" (kuttoneth) that ancient Israelite priests were commanded to wear. According to the Talmud, worn-out undergarments and priestly sashes were burned, being used as torch wicks in the temple. The temple garment has been compared to the modern tallit katan, a sacred undershirt of Orthodox and ultra-Orthodox Judaism. Both the temple garments in Mormonism and the tallit katan are meant to be worn all day under regular clothing as a constant reminder of the covenants, promises, and obligations the wearer is under. Latter-day Saint scholars interpret a biblical scripture in Luke as instructing the apostles to wait for both the pouring out of the Spirit on the day of Pentecost and the endowment ceremony before going out to evangelize.
See also
Notes
Sources
References
1842 introductions
1842 in Christianity
Mormonism and Freemasonry
Creation myths
Latter Day Saint temple practices
Latter Day Saint terms
Adam and Eve in Mormonism | Endowment (Mormonism) | Astronomy | 5,908 |
29,491,519 | https://en.wikipedia.org/wiki/Hypercycle%20%28chemistry%29 | In chemistry, a hypercycle is an abstract model of organization of self-replicating molecules connected in a cyclic, autocatalytic manner. It was introduced in an ordinary differential equation (ODE) form by the Nobel Prize in Chemistry winner Manfred Eigen in 1971 and subsequently further extended in collaboration with Peter Schuster. It was proposed as a solution to the error threshold problem encountered during modelling of replicative molecules that hypothetically existed on the primordial Earth (see: abiogenesis). As such, it explained how life on Earth could have begun using only relatively short genetic sequences, which in theory were too short to store all essential information. The hypercycle is a special case of the replicator equation. The most important properties of hypercycles are autocatalytic growth, competition between cycles, once-for-ever selective behaviour, utilization of small selective advantage, rapid evolvability, increased information capacity, and selection against parasitic branches.
Central ideas
The hypercycle is a cycle of connected, self-replicating macromolecules. In the hypercycle, all molecules are linked such that each of them catalyses the creation of its successor, with the last molecule catalysing the first one. In such a manner, the cycle reinforces itself. Furthermore, each molecule is additionally a subject for self-replication. The resultant system is a new level of self-organization that incorporates both cooperation and selfishness. The coexistence of many genetically non-identical molecules makes it possible to maintain a high genetic diversity of the population. This can be a solution to the error threshold problem, which states that, in a system without ideal replication, an excess of mutation events would destroy the ability to carry information and prevent the creation of larger and fitter macromolecules. Moreover, it has been shown that hypercycles could originate naturally and that incorporating new molecules can extend them. Hypercycles are also subject to evolution and, as such, can undergo a selection process. As a result, not only does the system gain information, but its information content can be improved. From an evolutionary point of view, the hypercycle is an intermediate state of self-organization, but not the final solution.
Over the years, the hypercycle theory has experienced many reformulations and methodological approaches. Among them, the most notable are applications of partial differential equations, cellular automata, and stochastic formulations of Eigen's problem. Despite many advantages that the concept of hypercycles presents, there were also some problems regarding the traditional model formulation using ODEs: a vulnerability to parasites and a limited size of stable hypercycles. In 2012, the first experimental proof for the emergence of a cooperative network among fragments of self-assembling ribozymes was published, demonstrating their advantages over self-replicating cycles. However, even though this experiment proves the existence of cooperation among the recombinase ribozyme subnetworks, this cooperative network does not form a hypercycle per se, so an experimental demonstration of a true hypercycle is still lacking.
Model formulation
Model evolution
Error threshold problem
When a model of replicating molecules was created, it was found that, for effective storage of information, macromolecules on prebiotic Earth could not exceed a certain threshold length. This problem is known as the error threshold problem. It arises because replication is an imperfect process, and during each replication event, there is a risk of incorporating errors into a new sequence, leading to the creation of a quasispecies. In a system that is deprived of high-fidelity replicases and error-correction mechanisms, mutations occur with a high probability. As a consequence, the information stored in a sequence can be lost due to the rapid accumulation of errors, a so-called error catastrophe. Moreover, it was shown that the maximum genome size that can be reliably maintained is roughly equal to the inverse of the mutation rate per site per replication. Therefore, a high mutation rate imposes a serious limitation on the length of the genome. To overcome this problem, a more specialized replication machinery that is able to copy genetic information with higher fidelity is needed. Manfred Eigen suggested that proteins are necessary to accomplish this task. However, to encode a system as complex as a protein, longer nucleotide sequences are needed, which increases the probability of a mutation even more and requires even more complex replication machinery. John Maynard Smith and Eörs Szathmáry named this vicious circle Eigen's Paradox.
According to current estimations, the maximum length of a replicated chain that can be correctly reproduced and maintained in enzyme-free systems is about 100 bases, which is assumed to be insufficient to encode replication machinery. This observation was the motivation for the formulation of the hypercycle theory.
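As a rough illustration of where the ~100-base figure comes from, here is a minimal sketch assuming Eigen's standard error-threshold relation, in which the maximum maintainable chain length is about ln(σ)/(1 − q), with q the per-site copying fidelity and σ the selective superiority of the master sequence; the numerical values used are illustrative assumptions, not data from the source:

```python
import math

def max_chain_length(per_site_error_rate: float, superiority: float) -> float:
    """Eigen error-threshold estimate: nu_max ~ ln(sigma) / (1 - q),
    where 1 - q is the per-site error rate and sigma is the selective
    superiority of the master sequence (assumed relation, see lead-in)."""
    return math.log(superiority) / per_site_error_rate

# Illustrative numbers: enzyme-free template copying with ~1% error per base
# and a modest superiority around e gives a limit of roughly 100 bases,
# consistent with the figure quoted above.
print(max_chain_length(per_site_error_rate=0.01, superiority=math.e))  # ~100
```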
Models
It was suggested that the problem with building and maintaining larger, more complex, and more accurately replicated molecules can be circumvented if several information carriers, each of them storing a small piece of information, are connected such that they only control their own concentration. Studies of the mathematical model describing replicating molecules revealed that to observe a cooperative behaviour among self-replicating molecules, they have to be connected by a positive feedback loop of catalytic actions. This kind of closed network consisting of self-replicating entities connected by a catalytic positive-feedback loop was named an elementary hypercycle. Such a concept, apart from an increased information capacity, has another advantage. Linking self-replication with mutual catalysis can produce nonlinear growth of the system. This, first, makes the system resistant to so-called parasitic branches. Parasitic branches are species coupled to a cycle that do not provide any advantage to the reproduction of a cycle, which, in turn, makes them useless and decreases the selective value of the system. Secondly, it reinforces the self-organization of molecules into the hypercycle, allowing the system to evolve without losing information, which solves the error threshold problem.
Analysis of potential molecules that could form the first hypercycles in nature prompted the idea of coupling an information carrier function with enzymatic properties. At the time of the hypercycle theory formulation, enzymatic properties were attributed only to proteins, while nucleic acids were recognized only as carriers of information. This led to the formulation of a more complex model of a hypercycle with translation. The proposed model consists of a number of nucleotide sequences I (I stands for intermediate) and the same number of polypeptide chains E (E stands for enzyme). Sequences I have a limited chain length and carry the information necessary to build catalytic chains E. The sequence Ii provides the matrix to reproduce itself and a matrix to build the protein Ei. The protein Ei gives the catalytic support to build the next sequence in the cycle, Ii+1. The self-replicating sequences I form a cycle consisting of positive and negative strands that periodically reproduce themselves. Therefore, many cycles of the +/− nucleotide collectives are linked together by the second-order cycle of enzymatic properties of E, forming a catalytic hypercycle. Without the secondary loop provided by catalysis, I chains would compete and select against each other instead of cooperating. The reproduction is possible thanks to translation and polymerization functions encoded in I chains. In his principal work, Manfred Eigen stated that the E coded by the I chain can be a specific polymerase or an enhancer (or a silencer) of a more general polymerase acting in favour of formation of the successor of nucleotide chain I. Later, he indicated that a general polymerase leads to the death of the system. Moreover, the whole cycle must be closed, so that En must catalyse I1 formation for some integer n > 1.
Alternative concepts
During their research, Eigen and Schuster also considered types of protein and nucleotide coupling other than hypercycles. One such alternative was a model with one replicase that performed polymerase functionality and that was a translational product of one of the RNA matrices existing among the quasispecies. This RNA-dependent RNA polymerase catalysed the replication of sequences that had specific motifs recognized by this replicase. The other RNA matrices, or just one of their strands, provided translational products which had specific anticodons and were responsible for unique assignment and transportation of amino acids.
Another concept devised by Eigen and Schuster was a model in which each RNA template's replication was catalysed by its own translational product; at the same time, this RNA template performed a transport function for one amino acid type. Existence of more than one such RNA template could make translation possible.
Nevertheless, in both alternative concepts, the system will not survive due to the internal competition among its constituents. Even if none of the constituents of such a system is selectively favoured, which potentially allows coexistence of all of the coupled molecules, they are not able to coevolve and optimize their properties. In consequence, the system loses its internal stability and cannot live on. The reason for inability to survive is the lack of mutual control of constituent abundances.
Mathematical model
Elementary hypercycle
The dynamics of the elementary hypercycle can be modelled using the following differential equation:

\dot{x}_i = x_i \left( k_i + \sum_{j=1}^{n} k_{i,j} x_j \right) - \frac{x_i}{x}\,\phi, \qquad i = 1, \ldots, n,

where

\phi = \sum_{j=1}^{n} x_j \left( k_j + \sum_{l=1}^{n} k_{j,l} x_l \right).
In the equation above, xi is the concentration of template Ii; x is the total concentration of all templates; ki is the excess production rate of template Ii, which is a difference between formation fi by self-replication of the template and its degradation di, usually by hydrolysis; ki,j is the production rate of template Ii catalysed by Ij; and φ is a dilution flux, which guarantees that the total concentration is constant. Production and degradation rates are expressed in numbers of molecules per time unit at unit concentration (xi = 1). Assuming that at high concentration x the term ki can be neglected, and, moreover, in the hypercycle, a template can be replicated only by itself and the previous member of the cycle, the equation can be simplified to:

\dot{x}_i = k_{i,i-1}\, x_i x_{i-1} - \frac{x_i}{x}\,\phi, \qquad \phi = \sum_{j=1}^{n} k_{j,j-1}\, x_j x_{j-1},

where according to the cyclic properties, it can be assumed that x_0 \equiv x_n (indices are taken modulo n).
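To make the behaviour of this system concrete, here is a minimal numerical sketch of the simplified elementary-hypercycle equations above; the number of members, the rate constants, the initial concentrations, and the crude forward-Euler integrator are all illustrative assumptions rather than values from the source:

```python
import numpy as np

def hypercycle_rhs(x, k):
    """Simplified elementary hypercycle:
    dx_i/dt = k_i * x_i * x_{i-1} - x_i * phi / sum(x), with cyclic indexing
    (x_0 identified with x_n) and dilution flux phi = sum_j k_j * x_j * x_{j-1}."""
    growth = k * x * np.roll(x, 1)   # k_i * x_i * x_{i-1}, wrapping around the cycle
    phi = growth.sum()               # total excess production (dilution flux)
    return growth - x * phi / x.sum()

# Illustrative parameters (assumed): four members, equal coupling constants,
# unequal initial concentrations normalised to a total of 1.
k = np.ones(4)
x = np.array([0.1, 0.2, 0.3, 0.4])
dt = 0.01
for _ in range(20000):               # crude forward-Euler integration
    x = x + dt * hypercycle_rhs(x, k)

print(x, x.sum())  # all members persist and the total concentration stays ~1
```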
Hypercycle with translation
A hypercycle with translation consists of polynucleotides Ii (with concentration xi) and polypeptides Ei (with concentration yi). It is assumed that the kinetics of nucleotide synthesis follows a Michaelis–Menten-type reaction scheme in which the concentration of complexes cannot be neglected. During replication, molecules form complexes IiEi-1 (occurring with concentration zi). Thus, the total concentration of molecules (xi0 and yi0) will be the sum of free molecules and molecules involved in a complex:

x_i^0 = x_i + z_i, \qquad y_i^0 = y_i + z_{i+1}.
The dynamics of the hypercycle with translation can be described using a system of differential equations modelling the total number of molecules:

\dot{x}_i^0 = f_i z_i - \frac{x_i^0}{c_I}\,\phi_x, \qquad \dot{y}_i^0 = k_i x_i - \frac{y_i^0}{c_E}\,\phi_y,

where

\phi_x = \sum_{j=1}^{n} f_j z_j, \qquad \phi_y = \sum_{j=1}^{n} k_j x_j.
In the above equations, cE and cI are total concentrations of all polypeptides and all polynucleotides, φx and φy are dilution fluxes, ki is the production rate of polypeptide Ei translated from the polynucleotide Ii, and fi is the production rate of polynucleotide Ii synthesised by the complex IiEi-1 (through replication and polymerization).
Coupling nucleic acids with proteins in such a model of hypercycle with translation demanded the proper model for the origin of translation code as a necessary condition for the origin of hypercycle organization. At the time of hypercycle theory formulation, two models for the origin of translation code were proposed by Crick and his collaborators. These were models stating that the first codons were constructed according to either an RRY or an RNY scheme, in which R stands for the purine base, Y for pyrimidine, and N for any base, with the latter assumed to be more reliable. Nowadays, it is assumed that the hypercycle model could be realized by utilization of ribozymes without the need for a hypercycle with translation, and there are many more theories about the origin of the genetic code.
Evolution
Formation of the first hypercycles
Eigen made several assumptions about conditions that led to the formation of the first hypercycles. Some of them were the consequence of the lack of knowledge about ribozymes, which were discovered a few years after the introduction of the hypercycle concept and negated Eigen's assumptions in the strict sense. The primary assumption was that the formation of hypercycles had required the availability of both types of chains: nucleic acids forming a quasispecies population and proteins with enzymatic functions. Nowadays, taking into account the knowledge about ribozymes, it may be possible that a hypercycle's members were selected from the quasispecies population and the enzymatic function was performed by RNA. According to the hypercycle theory, the first primitive polymerase emerged precisely from this population. As a consequence, the catalysed replication could exceed the uncatalysed reactions, and the system could grow faster. However, this rapid growth was a threat to the emerging system, as the whole system could lose control over the relative amount of the RNAs with enzymatic function. The system required more reliable control of its constituents—for example, by incorporating the coupling of essential RNAs into a positive feedback loop. Without this feedback loop, the replicating system would be lost. These positive feedback loops formed the first hypercycles.
In the process described above, the fact that the first hypercycles originated from the quasispecies population (a population of similar sequences) created a significant advantage. One possibility of linking different chains I—which is relatively easy to achieve taking into account the quasispecies properties—is that one chain I improves the synthesis of the similar chain I’. In this way, the existence of similar sequences I originating from the same quasispecies population promotes the creation of the linkage between molecules I and I’.
Evolutionary dynamics
After formation, a hypercycle reaches either an internal equilibrium or a state with oscillating concentrations of each type of chain I, but with the total concentration of all chains remaining constant. In this way, the system consisting of all chains can be expressed as a single, integrated entity. During the formation of hypercycles, several of them could be present in comparable concentrations, but very soon, a selection of the hypercycle with the highest fitness value will take place. Here, the fitness value expresses the adaptation of the hypercycle to the environment, and the selection based on it is very sharp. After one hypercycle wins the competition, it is very unlikely that another one could take its place, even if the new hypercycle were more efficient than the winner. Usually, even large fluctuations in the numbers of internal species cannot weaken the hypercycle enough to destroy it. In the case of a hypercycle, we can speak of once-for-ever selection, which is responsible for the existence of a unique translation code and a particular chirality.
The above-described idea of a hypercycle's robustness results from an exponential growth of its constituents caused by the catalytic support. However, Eörs Szathmáry and Irina Gladkih showed that an unconditional coexistence can be obtained even in the case of a non-enzymatic template replication that leads to a subexponential or a parabolic growth. This could be observed during the stages preceding a catalytic replication that are necessary for the formation of hypercycles. The coexistence of various non-enzymatically replicating sequences could help to maintain a sufficient diversity of RNA modules used later to build molecules with catalytic functions.
From the mathematical point of view, it is possible to find conditions required for cooperation of several hypercycles. However, in reality, the cooperation of hypercycles would be extremely difficult, because it requires the existence of a complicated multi-step biochemical mechanism or an incorporation of more than two types of molecules. Both conditions seem very improbable; therefore, the existence of coupled hypercycles is assumed impossible in practice.
Evolution of a hypercycle ensues from the creation of new components by the mutation of its internal species. Mutations can be incorporated into the hypercycle, enlarging it if, and only if, two requirements are satisfied. First, a new information carrier Inew created by the mutation must be better recognized by one of the hypercycle's members Ii than the chain Ii+1 that was previously recognized by it. Secondly, the new member Inew of the cycle has to better catalyse the formation of the polynucleotide Ii+1 that was previously catalysed by the product of its predecessor Ii. In theory, it is possible to incorporate into the hypercycle mutations that do not satisfy the second condition. They would form parasitic branches that use the system for their own replication but do not contribute to the system as a whole. However, it was noticed that such mutants do not pose a threat to the hypercycle, because other constituents of the hypercycle grow nonlinearly, which prevents the parasitic branches from growing.
Evolutionary dynamics: a mathematical model
According to the definition of a hypercycle, it is a nonlinear, dynamic system, and, in the simplest case, it can be assumed that it grows at a rate determined by a system of quadratic differential equations. Then, the competition between evolving hypercycles can be modelled using the differential equation:

\dot{C}_l = q_l C_l^2 - \frac{C_l}{C}\,\phi,

where

\phi = \sum_{l} q_l C_l^2.
Here, Cl is the total concentration of all polynucleotide chains belonging to a hypercycle Hl, C is the total concentration of polynucleotide chains belonging to all hypercycles, ql is the rate of growth, and φ is a dilution flux that guarantees that the total concentration is constant. According to the above model, in the initial phase, when several hypercycles exist, the selection of the hypercycle with the largest ql value takes place. When one hypercycle wins the selection and dominates the population, it is very difficult to replace it, even with a hypercycle with a much higher growth rate q.
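A minimal sketch of this competition, using the quadratic-growth equation above with illustrative (assumed) growth rates and initial concentrations, shows the once-for-ever character of the selection: a hypercycle that appears later cannot displace an established one even if its growth rate q is higher.

```python
import numpy as np

def competition_rhs(C, q):
    """dC_l/dt = q_l * C_l^2 - C_l * phi / sum(C), with phi = sum_l q_l * C_l^2,
    so the total concentration sum(C_l) stays constant."""
    growth = q * C**2
    phi = growth.sum()
    return growth - C * phi / C.sum()

# Illustrative scenario (assumed): hypercycle 0 is already established (large C_0),
# hypercycle 1 appears later with a higher growth rate q_1 but a tiny initial
# concentration -- it still cannot take over.
q = np.array([1.0, 2.0])
C = np.array([0.99, 0.01])
dt = 0.001
for _ in range(200000):              # crude forward-Euler integration
    C = C + dt * competition_rhs(C, q)

print(C)  # the established hypercycle keeps dominating despite q_1 > q_0
```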
Compartmentalization and genome integration
Hypercycle theory proposed that hypercycles are not the final state of organization, and further development of more complicated systems is possible by enveloping the hypercycle in some kind of membrane. After evolution of compartments, a genome integration of the hypercycle can proceed by linking its members into a single chain, which forms a precursor of a genome. After that, the whole individualized and compartmentalized hypercycle can behave like a simple self-replicating entity. Compartmentalization provides some advantages for a system that has already established a linkage between units. Without compartments, genome integration would boost competition by limiting space and resources. Moreover, adaptive evolution requires the package of transmissible information for advantageous mutations in order not to aid less-efficient copies of the gene. The first advantage is that it maintains a high local concentration of molecules, which helps to locally increase the rate of synthesis. Secondly, it keeps the effect of mutations local, while at the same time affecting the whole compartment. This favours preservation of beneficial mutations, because it prevents them from spreading away. At the same time, harmful mutations cannot pollute the entire system if they are enclosed by the membrane. Instead, only the contaminated compartment is destroyed, without affecting other compartments. In that way, compartmentalization allows for selection for genotypic mutations. Thirdly, membranes protect against environmental factors because they constitute a barrier for high-weight molecules or UV irradiation. Finally, the membrane surface can work as a catalyst.
Despite the above-mentioned advantages, there are also potential problems connected to compartmentalized hypercycles. These problems include difficulty in the transport of ingredients in and out, synchronizing the synthesis of new copies of the hypercycle constituents, and division of the growing compartment linked to a packing problem.
In the initial works, the compartmentalization was stated as an evolutionary consequence of the hypercyclic organization. Carsten Bresch and coworkers raised an objection that hypercyclic organization is not necessary if compartments are taken into account. They proposed the so-called package model in which one type of a polymerase is sufficient and copies all polynucleotide chains that contain a special recognition motif. However, as pointed out by the authors, such packages are—contrary to hypercycles—vulnerable to deleterious mutations as well as a fluctuation abyss, resulting in packages that lack one of the essential RNA molecules. Eigen and colleagues argued that simple package of genes cannot solve the information integration problem and hypercycles cannot be simply replaced by compartments, but compartments may assist hypercycles. This problem, however, raised more objections, and Eörs Szathmáry and László Demeter reconsidered whether packing hypercycles into compartments is a necessary intermediate stage of the evolution. They invented a stochastic corrector model that assumed that replicative templates compete within compartments, and selective values of these compartments depend on the internal composition of templates. Numerical simulations showed that when stochastic effects are taken into account, compartmentalization is sufficient to integrate information dispersed in competitive replicators without the need for hypercycle organization. Moreover, it was shown that compartmentalized hypercycles are more sensitive to the input of deleterious mutations than a simple package of competing genes. Nevertheless, package models do not solve the error threshold problem that originally motivated the hypercycle.
Ribozymes
At the time of the hypercycle theory formulation, ribozymes were not known. After the breakthrough of discovering RNA's catalytic properties in 1982, it was realized that RNA had the ability to integrate protein and nucleotide-chain properties into one entity. Ribozymes potentially serving as templates and catalysts of replication can be considered components of quasispecies that can self-organize into a hypercycle without the need to invent a translation process. In 2001, a partial RNA polymerase ribozyme was designed via directed evolution. Nevertheless, even though it was 200 nucleotides long, it was able to catalyse polymerization only of chains about 14 nucleotides in length. The most up-to-date version of this polymerase was shown in 2013. While it has the ability to catalyse polymerization of longer sequences, even of its own length, it cannot replicate itself due to a lack of sequence generality and its inability to traverse secondary structures of long RNA templates. However, it was recently shown that those limitations could in principle be overcome by the assembly of active polymerase ribozymes from several short RNA strands. In 2014, a cross-chiral RNA polymerase ribozyme was demonstrated. It was hypothesized to offer a new mode of recognition between an enzyme and its substrates, based on the shape of the substrate, which avoids Watson-Crick pairing and may therefore provide greater sequence generality. Various other experiments have shown that, besides bearing polymerase properties, ribozymes could have developed other kinds of evolutionarily useful catalytic activity, such as synthase, ligase, or aminoacylase activities. Ribozymal aminoacylators and ribozymes with the ability to form peptide bonds might have been crucial to inventing translation. An RNA ligase, in turn, could link various components of quasispecies into one chain, beginning the process of genome integration. An RNA with synthase or synthetase activity could be critical for building compartments and providing building blocks for growing RNA and protein chains as well as other types of molecules. Many examples of this kind of ribozyme are currently known, including a peptidyl transferase ribozyme, a ligase, and a nucleotide synthetase. A transaminoacylator described in 2013 has five nucleotides, which is sufficient for a trans-aminoacylation reaction and makes it the smallest ribozyme that has been discovered. It supports a peptidyl-RNA synthesis that could be a precursor for the contemporary process of linking amino acids to tRNA molecules. An RNA ligase's catalytic domain, consisting of 93 nucleotides, proved to be sufficient to catalyse a linking reaction between two RNA chains. Similarly, an acyltransferase ribozyme 82 nucleotides long was sufficient to perform an acyltransfer reaction. Altogether, the results concerning the RNA ligase's catalytic domain and the acyltransferase ribozyme are in agreement with the estimated upper limit of 100 nucleotides set by the error threshold problem. However, it was hypothesized that even if the putative first RNA-dependent RNA polymerases are estimated to be longer (the smallest RNA-dependent polymerase ribozyme reported to date is 165 nucleotides long), they did not have to arise in one step. It is more plausible that ligation of smaller RNA chains performed by the first RNA ligases resulted in a longer chain with the desired catalytically active polymerase domain.
Forty years after the publication of Manfred Eigen's primary work dedicated to hypercycles, Nilesh Vaidya and colleagues showed experimentally that ribozymes can form catalytic cycles and networks capable of expanding their sizes by incorporating new members. However, this is not a demonstration of a hypercycle in accordance with its definition, but an example of a collectively autocatalytic set. Earlier computer simulations showed that molecular networks can arise, evolve and be resistant to parasitic RNA branches. In their experiments, Vaidya et al. used an Azoarcus group I intron ribozyme that, when fragmented, has an ability to self-assemble by catalysing recombination reactions in an autocatalytic manner. They mutated the three-nucleotide-long sequences responsible for recognition of target sequences on the opposite end of the ribozyme (namely, Internal Guide Sequences or IGSs) as well as these target sequences. Some genotypes could introduce cooperation by recognizing target sequences of the other ribozymes, promoting their covalent binding, while other selfish genotypes were only able to self-assemble. In separation, the selfish subsystem grew faster than the cooperative one. After mixing selfish ribozymes with cooperative ones, the emergence of cooperative behaviour in a merged population was observed, outperforming the self-assembling subsystems. Moreover, the selfish ribozymes were integrated into the network of reactions, supporting its growth. These results were also explained analytically by the ODE model and its analysis. They differ substantially from results obtained in evolutionary dynamics. According to evolutionary dynamics theory, selfish molecules should dominate the system even if the growth rate of the selfish subsystem in isolation is lower than the growth rate of the cooperative system. Moreover, Vaidya et al. proved that, when fragmented into more pieces, ribozymes that are capable of self-assembly can not only still form catalytic cycles but, indeed, favour them. Results obtained from experiments by Vaidya et al. gave a glimpse on how inefficient prebiotic polymerases, capable of synthesizing only short oligomers, could be sufficient at the pre-life stage to spark off life. This could happen because coupling the synthesis of short RNA fragments by the first ribozymal polymerases to a system capable of self-assembly not only enables building longer sequences but also allows exploiting the fitness space more efficiently with the use of the recombination process. Another experiment performed by Hannes Mutschler et al. showed that the RNA polymerase ribozyme, which they described, can be synthesized in situ from the ligation of four smaller fragments, akin to a recombination of Azoarcus ribozyme from four inactive oligonucleotide fragments described earlier. Apart from a substantial contribution of the above experiments to the research on the origin of life, they have not proven the existence of hypercycles experimentally.
Related problems and reformulations
The hypercycle concept has been continuously studied since its origin. Shortly after Eigen and Schuster published their main work regarding hypercycles, John Maynard Smith raised the objection that the catalytic support for replication given to other molecules is altruistic; therefore, it cannot be selected for and maintained in a system. He also underlined the hypercycle's vulnerability to parasites, as they are favoured by selection. Later on, Josef Hofbauer and Karl Sigmund indicated that, in reality, a hypercycle can remain stable only if it has fewer than five members. In agreement with Eigen and Schuster's principal analysis, they argued that systems with five or more species exhibit limited and unstable cyclic behaviour, because some species can die out due to stochastic events and break the positive feedback loop that sustains the hypercycle. The extinction of the hypercycle then follows. It was also emphasized that a hypercycle size of up to four is too small to maintain the amount of information sufficient to cross the information threshold.
Several researchers proposed a solution to these problems by introducing space into the initial model either explicitly or in the form of a spatial segregation within compartments. Bresch et al. proposed a package model as a solution for the parasite problem. Later on, Szathmáry and Demeter proposed a stochastic corrector machine model. Both compartmentalized systems proved to be robust against parasites. However, package models do not solve the error threshold problem that originally motivated the idea of the hypercycle. A few years later, Maarten Boerlijst and Paulien Hogeweg, and later Nobuto Takeuchi, studied the replicator equations with the use of partial differential equations and cellular automata models, methods that already proved to be successful in other applications. They demonstrated that spatial self-structuring of the system completely solves the problem of global extinction for large systems and, partially, the problem of parasites. The latter was also analysed by Robert May, who noticed that an emergent rotating spiral wave pattern, which was observed during computational simulations performed on cellular automata, proved to be stable and able to survive the invasion of parasites if they appear at some distance from the wave core. Unfortunately, in this case, rotation decelerates as the number of hypercycle members increases, meaning that selection tends toward decreasing the amount of information stored in the hypercycle. Moreover, there is also a problem with adding new information into the system. In order to be preserved, the new information has to appear near to the core of the spiral wave. However, this would make the system vulnerable to parasites, and, as a consequence, the hypercycle would not be stable. Therefore, stable spiral waves are characterized by once-for-ever selection, which creates the restrictions that, on the one hand, once the information is added to the system, it cannot be easily abandoned; and on the other hand, new information cannot be added.
Another model based on cellular automata, taking into account a simpler replicating network of continuously mutating parasites and their interactions with one replicase species, was proposed by Takeuchi and Hogeweg and exhibited an emergent travelling wave pattern. Surprisingly, travelling waves not only proved to be stable against moderately strong parasites, if the parasites' mutation rate is not too high, but the emergent pattern itself was generated as a result of interactions between parasites and replicase species. The same technique was used to model systems that include formation of complexes. Finally, hypercycle simulation extending to three dimensions showed the emergence of the three-dimensional analogue of a spiral wave, namely, the scroll wave.
Comparison with other theories of life
The hypercycle is just one of several current theories of life, including the chemoton of Tibor Gánti, the (M,R) systems of Robert Rosen, autopoiesis (or self-building) of Humberto Maturana and Francisco Varela, and the autocatalytic sets of Stuart Kauffman, similar to an earlier proposal by Freeman Dyson.
All of these (including the hypercycle) found their original inspiration in Erwin Schrödinger's book What is Life? but at first they appear to have little in common with one another, largely because the authors did not communicate with one another, and none of them made any reference in their principal publications to any of the other theories. Nonetheless, there are more similarities than may be obvious at first sight, for example between Gánti and Rosen. Until recently there have been almost no attempts to compare the different theories and discuss them together.
Last Universal Common Ancestor (LUCA)
Some authors equate models of the origin of life with LUCA, the Last Universal Common Ancestor of all extant life. This is a serious error resulting from failure to recognize that L refers to the last common ancestor, not to the first ancestor, which is much older: a large amount of evolution occurred before the appearance of LUCA.
Gill and Forterre expressed the essential point as follows:
LUCA should not be confused with the first cell, but was the product of a long period of evolution. Being the "last" means that LUCA was preceded by a long succession of older "ancestors."
References
External links
J. Padgett's Hypercycle model implemented in repast
Origin of life
Self-organization | Hypercycle (chemistry) | Mathematics,Biology | 6,823 |
39,082,112 | https://en.wikipedia.org/wiki/Tubercle%20effect | The tubercle effect is a phenomenon where tubercles, or large 'bumps', on the leading edge of an airfoil can improve its aerodynamics. The effect, while already known, was analyzed extensively by Frank E. Fish and colleagues from the early 2000s onwards. The tubercle effect works by channeling flow over the airfoil into narrower streams, creating higher velocities. A side effect of these channels is the reduction of flow moving over the wingtip, resulting in less parasitic drag due to wingtip vortices. Using computational modeling, it was determined that the presence of tubercles delays stall to a higher angle of attack, thereby increasing maximum lift and decreasing drag. Fish first discovered this effect when looking at the fins of humpback whales. These whales are the only known organisms to take advantage of the tubercle effect; it is believed that the tubercles on their fins make them much more manoeuvrable in the water, allowing them to perform the aquatic maneuvers needed to capture prey.
The tiny hooklets on the fore edge of an owl's wing have a similar effect that contributes to its aerodynamic manoeuvrability and stealth.
Science behind the effect
The tubercle effect is a phenomenon in which tubercles, or large raised bumps on the leading edge of a wing, blade, or sail increase its aerodynamic or hydrodynamic performance. Research on this topic was inspired by the work of marine biologists on the behavior of humpback whales. Despite their large size, these whales are agile and are able to perform rolls and loops underwater. Research on humpback whales indicated that the presence of these tubercles on the leading edge of whale fins reduced stall and increased lift, while reducing noise in the post-stall regime. Researchers were motivated by these positive results to apply these concepts to aircraft wings as well as industrial and wind turbines.
Early research on this topic was performed by Watts & Fish, followed by further experiments in both water and wind tunnels. Watts & Fish determined that the presence of tubercles on the leading edge of an airfoil increased lift by 4.8%. Further numerical computations confirmed this result and indicated that the presence of tubercles can decrease the effects of drag by 40%. Leading-edge tubercles have been found to reduce the point of maximum lift and increase the region of post-stall lift. In the post-stall regime, foils with tubercles experienced a gradual loss of lift, as opposed to foils without tubercles, which experienced a sudden loss of lift. An example of a wing without protuberances compared to a wing with protuberances is shown.
The geometry of tubercles must also be considered, as the amplitude and wavelength of tubercles have an effect on flow control. Tubercles can be thought of as small delta wings with a curved apex, since they create a vortex on the upper edge of the tubercle. These vortical structures impose a downward deflection of the airflow (downwash) over the crests of tubercles. This downward deflection delays stall on the airfoil. On the contrary, in the troughs of these structures, there is a net upward deflection of airflow (upwash). Localized upwash is associated with higher angles of attack, which relates to increased lift, as the flow separation occurs in the troughs and stays there. The vortex created by the tubercle delays flow separation toward the trailing edge of the wing, thus reducing the effects of drag. However, in water, due to the crest/trough structure, cavitation is possible, and is undesirable. Cavitation occurs in areas of high flow velocity and low pressure, such as the trough of a tubercled structure. In water, air bubbles or pockets form on the upper side of the tubercle. These bubbles reduce lift and increase drag, while increasing noise in the flow when the bubbles collapse. However, tubercles can be modified to manipulate the location of cavitation.
The effect of amplitude of tubercles has a more significant impact on post-stall performance than wavelength. Higher amplitude of tubercles has been linked to more gradual stall and higher post-stall lift, as well as lower pre-stall lift slope. The wavelength and amplitude can both be optimized to increase the post-stall performance.
Experiments on the effects of leading-edge tubercles have primarily focused on rigid bodies, and more research is needed in order to apply the knowledge of the tubercle effect to industrial, aircraft, or energy applications.
Biological occurrences of tubercles
Tubercles are a morphological feature that occurs in multiple organisms. These organisms include the humpback whale, hammerhead sharks, scallops, and some extinct chondrichthyans (cartilaginous fishes).
One organism in which tubercles are notable is the humpback whale. The tubercles on humpback whales are located on the leading edge of the flippers. The tubercles allow the very large whales to execute tight turns underwater and swim efficiently, tasks imperative for the humpback whale's feeding. The tubercles on the flippers help to maintain lift, prevent stall, and decrease the drag coefficient during turning maneuvers. Tubercles on the humpback whale are considered passive flow control because they are structural.
Tubercles develop in the fetus of the humpback whale. Typically 9-11 tubercles are present on each flipper, decreasing in size as they near the tip of the flipper. The largest tubercles are the first and fourth tubercles from the shoulder of the whale. Similar structures are common on the pectoral fins of large, primarily predatory, fish species.
Modern applications in industry
Leading-edge tubercles are increasingly being adopted in manufacturing. Wind turbine performance relies on blade aerodynamics, where similar flow characteristics are observed. Modern turbines have twisted blades to account for the angle of attack at specific design conditions. However, in practical application, turbines often operate at off-design conditions where stall occurs, causing a decrease in performance and efficiency. To improve the energy efficiency of turbines, the influence of leading-edge tubercles must be investigated in more depth.
Tubercles provide a bio-inspired design that offers commercial viability in the design of watercraft, aircraft, ventilation fans, and windmills. Passive flow control through tubercle designs has the advantage of eliminating complex, costly, high-maintenance heavy control mechanisms while improving the performance of lifting bodies in air and water. One issue that remains today is the difference in the scale of structure and operation that each of these bio-inspired technologies uses. New techniques are being implemented in order to develop methods of delaying stall in flow applications. For example, jet aircraft with leading edge defects can carry greater payloads at faster speeds and higher altitudes, allowing for greater economic efficiency in the aeronautical field. While these effects are found in many aquatic animals and birds, scaling these designs up to industrial application brings forward another set of issues regarding the high stresses associated with machinery. In airplanes, for example, designs are much more limited than the complex kinematics and structures of the joints in birds' wings, which produce agile turning maneuvers. This problem can be addressed by further research into the overlap in size and performance between biological structures and engineering applications. It was also observed in turbine design that leading-edge effects have the ability to improve power generation by up to 20%.
In the aeronautical engineering field, leading-edge tubercles placed on turbine blades can increase energy generation. Blades with tubercles were also found to be effective at generating power at both high and low wind speeds; compared with blades with smooth leading edges, blades with leading-edge tubercles demonstrated enhanced performance. The utility of tubercles in improving the performance of engineering systems comes directly from the examination of biological structures. Designs with bio-inspired properties are versatile and show promise in many flow design applications. As these designs become more advanced, the application of biomimetic technologies becomes crucial to the development of the next generation of high-performance machinery and equipment.
See also
Biomimicry
References
External links
Other examples of biomimicry
Aerodynamics | Tubercle effect | Chemistry,Engineering | 1,732 |
6,139,571 | https://en.wikipedia.org/wiki/Nexus%20file | The extensible NEXUS file format is widely used in bioinformatics. It stores information about taxa, morphological and molecular characters, distances, genetic codes, assumptions, sets, trees, etc. Several popular phylogenetic programs such as PAUP*, MrBayes, Mesquite, MacClade and SplitsTree use this format.
Syntax
A NEXUS file is made out of a fixed header #NEXUS followed by multiple blocks. Each block starts with BEGIN block_name; and ends with END;. The keywords are case-insensitive. Comments are enclosed inside square brackets: [a comment looks like this].
There are a few pre-defined block names for common types of data. Examples include:
TAXA block The TAXA block contains information about taxa.
DATA block The DATA block contains the data matrix (e.g. sequence alignment).
TREES block The TREES block contains phylogenetic trees described using the Newick format, e.g. ((A,B),C);
The following example uses the three block types above:
#NEXUS
Begin TAXA;
Dimensions ntax=4;
TaxLabels SpaceDog SpaceCat SpaceOrc SpaceElf;
End;
Begin data;
Dimensions nchar=15;
Format datatype=dna missing=? gap=- matchchar=.;
Matrix
[ When a position is a "matchchar", it means that it is the same as the first entry at the same position. ]
SpaceDog   atgctagctagctcg
SpaceCat   ......??...-.g.
SpaceOrc   ...t.......-.g. [ same as atgttagctag-tgg ]
SpaceElf   ...t.......-.g.
;
End;
BEGIN TREES;
Tree tree1 = (((SpaceDog,SpaceCat),SpaceOrc,SpaceElf));
END;
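Because the block structure is simple, a small NEXUS file like the one above can be inspected without a dedicated library. The following Python sketch is not a complete NEXUS parser (it ignores quoting rules and nested comments) and assumes the example has been saved under the hypothetical file name example.nex; for real data, full-featured parsers such as those in Biopython or DendroPy are preferable.

import re

def read_nexus_blocks(text):
    # Return a dict mapping upper-cased block names to their raw contents.
    if not text.lstrip().startswith("#NEXUS"):
        raise ValueError("not a NEXUS file")
    text = re.sub(r"\[[^\]]*\]", "", text)        # strip [comments]
    blocks = {}
    for name, body in re.findall(r"begin\s+(\w+)\s*;(.*?)end\s*;",
                                 text, re.IGNORECASE | re.DOTALL):
        blocks[name.upper()] = body
    return blocks

with open("example.nex") as fh:                    # hypothetical file holding the example above
    blocks = read_nexus_blocks(fh.read())

taxa = re.search(r"taxlabels\s+(.*?);", blocks["TAXA"],
                 re.IGNORECASE | re.DOTALL).group(1).split()
print(taxa)                                        # ['SpaceDog', 'SpaceCat', 'SpaceOrc', 'SpaceElf']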
See also
Newick format
NeXML format
phyloXML
References
External links
NEXUS file format — detailed explanation with many examples
NEXUS format — a good description of the format and its uses in the field
Nexus to phyloXML converter
NeXML
Nexus to Fasta converter
Bioinformatics
Biological sequence format
Phylogenetics | Nexus file | Engineering,Biology | 410 |
2,154,572 | https://en.wikipedia.org/wiki/Nanobiotechnology | Nanobiotechnology, bionanotechnology, and nanobiology are terms that refer to the intersection of nanotechnology and biology. Given that the subject is one that has only emerged very recently, bionanotechnology and nanobiotechnology serve as blanket terms for various related technologies.
This discipline helps to indicate the merger of biological research with various fields of nanotechnology. Concepts that are enhanced through nanobiology include: nanodevices (such as biological machines), nanoparticles, and nanoscale phenomena that occur within the discipline of nanotechnology. This technical approach to biology allows scientists to imagine and create systems that can be used for biological research. Biologically inspired nanotechnology uses biological systems as the inspiration for technologies not yet created. However, as with nanotechnology and biotechnology, bionanotechnology does have many potential ethical issues associated with it.
The most important objectives that are frequently found in nanobiology involve applying nanotools to relevant medical/biological problems and refining these applications. Developing new tools, such as peptoid nanosheets, for medical and biological purposes is another primary objective in nanotechnology. New nanotools are often made by refining the applications of the nanotools that are already being used. The imaging of native biomolecules, biological membranes, and tissues is also a major topic for nanobiology researchers. Other topics concerning nanobiology include the use of cantilever array sensors and the application of nanophotonics for manipulating molecular processes in living cells.
Recently, the use of microorganisms to synthesize functional nanoparticles has been of great interest. Microorganisms can change the oxidation state of metals. These microbial processes have opened up new opportunities for us to explore novel applications, for example, the biosynthesis of metal nanomaterials. In contrast to chemical and physical methods, microbial processes for synthesizing nanomaterials can be achieved in aqueous phase under gentle and environmentally benign conditions. This approach has become an attractive focus in current green bionanotechnology research towards sustainable development.
Terminology
The terms are often used interchangeably. When a distinction is intended, though, it is based on whether the focus is on applying biological ideas or on studying biology with nanotechnology. Bionanotechnology generally refers to the study of how the goals of nanotechnology can be guided by studying how biological "machines" work and adapting these biological motifs into improving existing nanotechnologies or creating new ones. Nanobiotechnology, on the other hand, refers to the ways that nanotechnology is used to create devices to study biological systems.
In other words, nanobiotechnology is essentially miniaturized biotechnology, whereas bionanotechnology is a specific application of nanotechnology. For example, DNA nanotechnology or cellular engineering would be classified as bionanotechnology because they involve working with biomolecules on the nanoscale. Conversely, many new medical technologies involving nanoparticles as delivery systems or as sensors would be examples of nanobiotechnology since they involve using nanotechnology to advance the goals of biology.
The definitions enumerated above will be utilized whenever a distinction between nanobio and bionano is made in this article. However, given the overlapping usage of the terms in modern parlance, individual technologies may need to be evaluated to determine which term is more fitting. As such, they are best discussed in parallel.
Concepts
Most of the scientific concepts in bionanotechnology are derived from other fields. Biochemical principles that are used to understand the material properties of biological systems are central in bionanotechnology because those same principles are to be used to create new technologies. Material properties and applications studied in bionanoscience include mechanical properties (e.g. deformation, adhesion, failure), electrical/electronic (e.g. electromechanical stimulation, capacitors, energy storage/batteries), optical (e.g. absorption, luminescence, photochemistry), thermal (e.g. thermomutability, thermal management), biological (e.g. how cells interact with nanomaterials, molecular flaws/defects, biosensing, biological mechanisms such as mechanosensation), nanoscience of disease (e.g. genetic disease, cancer, organ/tissue failure), as well as biological computing (e.g. DNA computing) and agriculture (targeted delivery of pesticides, hormones, and fertilizers).
The impact of bionanoscience, achieved through structural and mechanistic analyses of biological processes at nanoscale, is their translation into synthetic and technological applications through nanotechnology.
Nanobiotechnology takes most of its fundamentals from nanotechnology. Most of the devices designed for nano-biotechnological use are directly based on other existing nanotechnologies. Nanobiotechnology is often used to describe the overlapping multidisciplinary activities associated with biosensors, particularly where photonics, chemistry, biology, biophysics, nanomedicine, and engineering converge. Measurement in biology using wave guide techniques, such as dual-polarization interferometry, is another example.
Applications
Applications of bionanotechnology are extremely widespread. Insofar as the distinction holds, nanobiotechnology is much more commonplace in that it simply provides more tools for the study of biology. Bionanotechnology, on the other hand, promises to recreate biological mechanisms and pathways in a form that is useful in other ways.
Nanomedicine
Nanomedicine is a field of medical science whose applications are increasing.
Nanobots
The field includes nanorobots and biological machines, which constitute a very useful tool for developing this area of knowledge. In recent years, researchers have made many improvements in the devices and systems required to develop functional nanorobots, such as motion and magnetic guidance. This suggests a new way of treating diseases such as cancer: thanks to nanorobots, the side effects of chemotherapy could be controlled, reduced, or even eliminated, so that in some years cancer patients could be offered an alternative to chemotherapy, which kills not only cancerous cells but also healthy ones and causes secondary effects such as hair loss, fatigue, or nausea. Nanobots could be used for various therapies, surgery, diagnosis, and medical imaging – such as via targeted drug delivery to the brain (similar to nanoparticles) and other sites. Programmability for combinations of features such as "tissue penetration, site-targeting, stimuli responsiveness, and cargo-loading" makes such nanobots promising candidates for "precision medicine".
At a clinical level, cancer treatment with nanomedicine would consist of the supply of nanorobots to the patient through an injection that will search for cancerous cells while leaving the healthy ones untouched. Patients that are treated through nanomedicine would thereby not notice the presence of these nanomachines inside them; the only thing that would be noticeable is the progressive improvement of their health. Nanobiotechnology may be useful for medicine formulation.
"Precision antibiotics" has been proposed to make use of bacteriocin-mechanisms for targeted antibiotics.
Nanoparticles
Nanoparticles are already widely used in medicine. Their applications overlap with those of nanobots, and in some cases it may be difficult to distinguish between them. They can be used for diagnosis and targeted drug delivery, encapsulating medicines. Some can be manipulated using magnetic fields; for example, remote-controlled hormone release has been achieved experimentally this way.
One example of an advanced application under development is "Trojan horse" designer nanoparticles that make blood cells eat away – from the inside out – portions of the atherosclerotic plaques that cause heart attacks, currently the most common cause of death globally.
Artificial cells
Artificial cells, such as synthetic red blood cells that have many of the natural cells' known properties and abilities, could be used to load functional cargos such as hemoglobin, drugs, magnetic nanoparticles, and ATP biosensors, which may enable additional non-native functionalities.
Other
Nanofibers that mimic the matrix around cells and contain molecules that were engineered to wiggle were shown to be a potential therapy for spinal cord injury in mice.
Technically, gene therapy can also be considered a form of nanobiotechnology, or a move towards it. An example of a genome-editing-related development that is more clearly nanobiotechnology than more conventional gene therapies is the synthetic fabrication of functional materials in tissues. Researchers made C. elegans worms synthesize, fabricate, and assemble bioelectronic materials in their brain cells. This enabled modulation of membrane properties in specific neuron populations and manipulation of behavior in the living animals, which might be useful in the study and treatment of diseases such as multiple sclerosis, and demonstrates the viability of such synthetic in vivo fabrication. Moreover, such genetically modified neurons may enable connecting external components – such as prosthetic limbs – to nerves.
Nanosensors based on e.g. nanotubes, nanowires, cantilevers, or atomic force microscopy could be applied to diagnostic devices/sensors
Nanobiotechnology
Nanobiotechnology (sometimes referred to as nanobiology) in medicine may be best described as helping modern medicine progress from treating symptoms to generating cures and regenerating biological tissues.
Three American patients have received whole cultured bladders with the help of doctors who use nanobiology techniques in their practice. Also, it has been demonstrated in animal studies that a uterus can be grown outside the body and then placed in the body in order to produce a baby. Stem cell treatments have been used to treat diseases of the human heart and are in clinical trials in the United States. There is also funding for research into allowing people to have new limbs without having to resort to prostheses. Artificial proteins might also become manufacturable without the need for harsh chemicals and expensive machines. It has even been surmised that by the year 2055, computers may be made out of biochemicals and organic salts.
In vivo biosensors
Another example of current nanobiotechnological research involves nanospheres coated with fluorescent polymers. Researchers are seeking to design polymers whose fluorescence is quenched when they encounter specific molecules. Different polymers would detect different metabolites. The polymer-coated spheres could become part of new biological assays, and the technology might someday lead to particles which could be introduced into the human body to track down metabolites associated with tumors and other health problems. Another example, from a different perspective, would be evaluation and therapy at the nanoscopic level, i.e. the treatment of nanobacteria (25-200 nm sized) as is done by NanoBiotech Pharma.
In vitro biosensors
"Nanoantennas" made out of DNA – a novel type of nano-scale optical antenna – can be attached to proteins and produce a signal via fluorescence when these perform their biological functions, in particular for their distinct conformational changes. This could be used for further nanobiotechnology such as various types of nanomachines, to develop new drugs, for bioresearch and for new avenues in biochemistry.
Energy
It may also be useful in sustainable energy: in 2022, researchers reported 3D-printed nano-"skyscraper" electrodes – albeit micro-scale, the pillars had nano-features of porosity due to printed metal nanoparticle inks – (nanotechnology) that house cyanobacteria for extracting substantially more sustainable bioenergy from their photosynthesis (biotechnology) than in earlier studies.
Nanobiology
While nanobiology is in its infancy, there are a lot of promising methods that may rely on nanobiology in the future. Biological systems are inherently nano in scale; nanoscience must merge with biology in order to deliver biomacromolecules and molecular machines that are similar to nature. Controlling and mimicking the devices and processes that are constructed from molecules is a tremendous challenge for the converging disciplines of nanobiotechnology. All living things, including humans, can be considered to be nanofoundries. Natural evolution has optimized the "natural" form of nanobiology over millions of years. In the 21st century, humans have developed the technology to artificially tap into nanobiology. This process is best described as "organic merging with synthetic". Colonies of live neurons can live together on a biochip device, according to research from Gunther Gross at the University of North Texas. Self-assembling nanotubes have the ability to be used as a structural system. They would be composed together with rhodopsins, which would facilitate the optical computing process and help with the storage of biological materials. DNA (as the software for all living things) can be used as a structural proteomic system – a logical component for molecular computing. Ned Seeman – a researcher at New York University – and other researchers are currently investigating similar concepts.
Bionanotechnology
Distinction from nanobiotechnology
Broadly, bionanotechnology can be distinguished from nanobiotechnology in that it refers to nanotechnology that makes use of biological materials or components (although it could in principle, or does in practice, also use abiotic components). It plays a smaller role in medicine (which is concerned with biological organisms). It makes use of natural or biomimetic systems or elements to build unique nanoscale structures and various applications that may not be directly associated with biology, rather than mostly biological applications. In contrast, nanobiotechnology uses biotechnology miniaturized to nanometer size or incorporates nanomolecules into biological systems. In some future applications, both fields could be merged.
DNA
DNA nanotechnology is one important example of bionanotechnology. The utilization of the inherent properties of nucleic acids like DNA to create useful materials or devices – such as biosensors – is a promising area of modern research.
DNA digital data storage refers mostly to the use of synthesized but otherwise conventional strands of DNA to store digital data, which could be useful for e.g. high-density long-term data storage that isn't accessed and written to frequently as an alternative to 5D optical data storage or for use in combination with other nanobiotechnology.
Membrane materials
Another important area of research involves taking advantage of membrane properties to generate synthetic membranes. Proteins that self-assemble to generate functional materials could be used as a novel approach for the large-scale production of programmable nanomaterials. One example is the development of amyloids found in bacterial biofilms as engineered nanomaterials that can be programmed genetically to have different properties.
Lipid nanotechnology
Lipid nanotechnology is another major area of research in bionanotechnology, where physico-chemical properties of lipids, such as their antifouling and self-assembly, are exploited to build nanodevices with applications in medicine and engineering. Lipid nanotechnology approaches can also be used to develop next-generation emulsion methods to maximize both the absorption of fat-soluble nutrients and the ability to incorporate them into popular beverages.
Computing
"Memristors" fabricated from protein nanowires of the bacterium Geobacter sulfurreducens which function at substantially lower voltages than previously described ones may allow the construction of artificial neurons which function at voltages of biological action potentials. The nanowires have a range of advantages over silicon nanowires and the memristors may be used to directly process biosensing signals, for neuromorphic computing (see also: wetware computer) and/or direct communication with biological neurons.
Other
Protein folding studies provide a third important avenue of research, but one that has been largely inhibited by our inability to predict protein folding with a sufficiently high degree of accuracy. Given the myriad uses that biological systems have for proteins, though, research into understanding protein folding is of high importance and could prove fruitful for bionanotechnology in the future.
Agriculture
In the agriculture industry, engineered nanoparticles have been serving as nano carriers, containing herbicides, chemicals, or genes, which target particular plant parts to release their content.
Nanocapsules containing herbicides have previously been reported to penetrate effectively through cuticles and tissues, allowing the slow and constant release of the active substances. Likewise, other literature describes that nano-encapsulated slow release of fertilizers has become a trend for reducing fertilizer consumption and minimizing environmental pollution through precision farming. These are only a few examples from numerous research works which might open up exciting opportunities for nanobiotechnology applications in agriculture. However, the compatibility of such engineered nanoparticles with plants should be assessed before they are employed in agricultural practices. Based on a thorough literature survey, only limited reliable information is available to explain the biological consequences of engineered nanoparticles on treated plants. Certain reports underline the phytotoxicity of engineered nanoparticles of various origins to plants, depending on their concentrations and sizes. At the same time, however, an equal number of studies reported positive outcomes, with nanoparticles promoting the growth of the treated plants. In particular, compared to other nanoparticles, silver- and gold-nanoparticle-based applications elicited beneficial results in various plant species with little or no toxicity. Leaves of asparagus treated with silver nanoparticles (AgNPs) showed increased content of ascorbate and chlorophyll. Similarly, AgNP-treated common bean and corn showed increased shoot and root length, leaf surface area, and chlorophyll, carbohydrate, and protein contents, as reported earlier. Gold nanoparticles have been used to induce growth and seed yield in Brassica juncea.
Nanobiotechnology is used in tissue cultures. The administration of micronutrients at the level of individual atoms and molecules allows for the stimulation of various stages of development, initiation of cell division, and differentiation in the production of plant material, which must be qualitatively uniform and genetically homogeneous. The use of nanoparticles of zinc (ZnO NPs) and silver (Ag NPs) compounds gives very good results in the micropropagation of chrysanthemums using the method of single-node shoot fragments.
Tools
This field relies on a variety of research methods, including experimental tools (e.g. imaging, characterization via AFM/optical tweezers etc.), x-ray diffraction based tools, synthesis via self-assembly, characterization of self-assembly (using e.g. MP-SPR, DPI, recombinant DNA methods, etc.), theory (e.g. statistical mechanics, nanomechanics, etc.), as well as computational approaches (bottom-up multi-scale simulation, supercomputing).
Risk management
As of 2009, the risks of nanobiotechnologies are poorly understood, and in the U.S. there is no solid national consensus on what kind of regulatory policy principles should be followed. For example, nanobiotechnologies may have hard-to-control effects on the environment or ecosystems and human health. The metal-based nanoparticles used for biomedical purposes are extremely enticing in various applications due to their distinctive physicochemical characteristics, allowing them to influence cellular processes at the biological level. The fact that metal-based nanoparticles have high surface-to-volume ratios makes them reactive or catalytic. Due to their small size, they are more likely to be able to penetrate biological barriers such as cell membranes and cause cellular dysfunction in living organisms. Indeed, the high toxicity of some transition metals can make it challenging to use mixed oxide NPs in biomedical applications. They can trigger adverse effects in organisms, causing oxidative stress, stimulating the formation of reactive oxygen species (ROS), mitochondrial perturbation, and the modulation of cellular functions, with fatal results in some cases.
Bonin notes that "Nanotechnology is not a specific determinate homogenous entity, but a collection of diverse capabilities and applications" and that nanobiotechnology research and development is – as one of many fields – affected by dual-use problems.
See also
Biomimicry
Colloidal gold
Genome editing (bacteria, (micro-borgs))
Gold nanoparticle
Nanomedicine
Nanobiomechanics
Nanoparticle–biomolecule conjugate
Nanosubmarine
Nanozymes
References
External links
What is Bionanotechnology?—a video introduction to the field
Nanobiotechnology in Orthopaedic
Nanotechnology
Biotechnology
Nanomedicine | Nanobiotechnology | Materials_science,Engineering,Biology | 4,320 |
4,216,648 | https://en.wikipedia.org/wiki/Gulose | Gulose is an aldohexose sugar. It is a monosaccharide that is very rare in nature, but has been found in archaea, bacteria and eukaryotes. It also exists as a syrup with a sweet taste. It is soluble in water and slightly soluble in methanol. Neither the D- nor the L-form is fermentable by yeast.
D-Gulose is a C-3 epimer of D-galactose and a C-5 epimer of L-mannose.
References
Aldohexoses
Pyranoses | Gulose | Chemistry | 123 |
35,333,512 | https://en.wikipedia.org/wiki/Slow%20strain%20rate%20testing | Slow strain rate testing (SSRT), also called constant extension rate tensile testing (CERT), is a popular test used by research scientists to study stress corrosion cracking. It involves a slow (compared to conventional tensile tests) dynamic strain applied at a constant extension rate in the environment of interest. The results are compared to those of similar tests in an environment known to be inert. A 50-year history of the SSRT has recently been published by its creator. The test has also been standardized, and two ASTM symposia have been devoted to it.
Effect of strain rate
The important characteristic of these tests is that the strain rate is low, with extension rates typically selected in the range from 10⁻⁸ to 10⁻³ s⁻¹. The selection of the strain rate is very important because the susceptibility to cracking may not be evident from the results of tests at too low or too high a strain rate. For numerous material-environment systems, strain rates in the range of 10⁻⁵–10⁻⁶ s⁻¹ are used; however, the observed absence of cracking at a given strain rate should not be taken as proof of immunity to cracking. There are known cases wherein the susceptibility to stress-corrosion cracking only became evident at strain rates as low as 10⁻⁸ or 10⁻⁹ s⁻¹. Nevertheless, the method is very suitable for mechanistic studies, as well as for relative ranking of the susceptibility to cracking of different alloys, or of the aggressiveness of environments and the effects of temperature, pH, metallurgical condition, etc.
The fastest strain rate that will still promote SCC for a given environment-material system is sometimes called the "critical strain rate"; some values are given in the table:
The importance of other test parameters
Electrode potential and other environmental factors, such as temperature, pH and degree of aeration, can greatly impact the results of this accelerated stress corrosion cracking test, as can the specimen surface finish and metallurgical condition.
The evaluation of the results
The evaluated parameters are:
time to specimen failure (e.g., breakage, or from other "failure" criteria)
ductility (by elongation to fracture or the reduction of the area)
ultimate tensile strength (from the maximum load)
area under the elongation - load curve (which represents the fracture energy)
percent of ductile/brittle fracture on the fracture surface
threshold stress for cracking
The results of the SSRT tests are evaluated using the ratio of the value of each parameter measured in the test environment to the value of the same parameter measured in the inert environment:
ratio = (value in the test environment) / (value in the inert environment)
The departure of the ratio below unity quantifies the increased susceptibility to cracking.
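As an illustration, the ratios can be computed directly from the two sets of measurements. In the Python sketch below, the parameter names and numerical values are hypothetical, chosen only to show the calculation.

def ssrt_ratios(test_env, inert_env):
    # Ratio of each parameter measured in the test environment to the same
    # parameter measured in the inert (control) environment; values well
    # below 1 indicate susceptibility to stress corrosion cracking.
    return {name: test_env[name] / inert_env[name] for name in test_env}

inert = {"time_to_failure_h": 120, "elongation_pct": 28, "reduction_in_area_pct": 60}
test = {"time_to_failure_h": 65, "elongation_pct": 11, "reduction_in_area_pct": 22}
print(ssrt_ratios(test, inert))   # e.g. reduction-in-area ratio of about 0.37 suggests susceptibility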
The test is best used in combination with electrochemical measurements and other stress corrosion cracking tests.
References
Corrosion
Fracture mechanics
Materials testing | Slow strain rate testing | Chemistry,Materials_science,Engineering | 550 |
8,337,647 | https://en.wikipedia.org/wiki/Average%20path%20length | Average path length, or average shortest path length is a concept in network topology that is defined as the average number of steps along the shortest paths for all possible pairs of network nodes. It is a measure of the efficiency of information or mass transport on a network.
Concept
Average path length is one of the three most robust measures of network topology, along with its clustering coefficient and its degree distribution. Some examples are the average number of clicks that will lead you from one website to another, or the number of people you will have to communicate through, on average, to contact a complete stranger. It should not be confused with the diameter of the network, which is defined as the longest geodesic, i.e., the longest shortest path between any two nodes in the network (see Distance (graph theory)).
The average path length distinguishes an easily negotiable network from one that is complicated and inefficient, with a shorter average path length being more desirable. However, the average path length is simply what the path length will most likely be. The network itself might have some very remotely connected nodes and many nodes that are neighbors of each other.
Definition
Consider an unweighted directed graph G with the set of vertices V. Let d(v1, v2), where v1, v2 ∈ V, denote the shortest distance between v1 and v2. Assume that d(v1, v2) = 0 if v2 cannot be reached from v1. Then, the average path length lG is:
lG = (1 / (n·(n − 1))) · Σ d(vi, vj), summed over all ordered pairs of distinct vertices vi and vj,
where n is the number of vertices in G.
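A minimal Python illustration of this definition uses breadth-first search on an adjacency-list representation of an unweighted directed graph; unreachable pairs contribute a distance of 0, as assumed above.

from collections import deque

def average_path_length(adj):
    # `adj` maps every vertex to an iterable of its out-neighbours.
    nodes = list(adj)
    n = len(nodes)
    total = 0
    for source in nodes:
        dist = {source: 0}
        queue = deque([source])
        while queue:                       # breadth-first search from `source`
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != source)
    return total / (n * (n - 1))           # unreachable pairs simply add 0

# Example: a directed 4-cycle a -> b -> c -> d -> a has average path length 2.
print(average_path_length({"a": ["b"], "b": ["c"], "c": ["d"], "d": ["a"]}))   # 2.0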
Applications
In a real network like the Internet, a short average path length facilitates the quick transfer of information and reduces costs. The efficiency of mass transfer in a metabolic network can be judged by studying its average path length. A power grid network will have fewer losses if its average path length is minimized.
Most real networks have a very short average path length leading to the concept of a small world where everyone is connected to everyone else through a very short path.
As a result, most models of real networks are created with this condition in mind. One of the first models which tried to explain real networks was the random network model. It was later followed by the Watts and Strogatz model, and even later there were the scale-free networks starting with the BA model. All these models had one thing in common: they all predicted very short average path length.
The average path length depends on the system size but does not change drastically with it. Small world network theory predicts that the average path length changes proportionally to log n, where n is the number of nodes in the network.
References
Network theory
Graph invariants | Average path length | Mathematics | 515 |
20,770,146 | https://en.wikipedia.org/wiki/Robot%20Award | Robot Award is an annual awards event set up by Japan's Ministry of Economy, Trade and Industry (METI) in 2006. It aims to promote research and development for the commercialization of robots and the use of robotics. In 2008, eight winners were selected from 65 applications.
Robots that have provided outstanding services in the past year are eligible for the award. The program selects and recognizes robots that have made, or are highly likely to make, significant contribution to future market development.
In this program, a robot is defined as an intelligent mechanical system that incorporates three technological elements: sensing, intelligence and control, and drive. Robots (and their parts and software) that fall within the following categories are eligible.
Service Robot - designed for service in offices, homes and public facilities.
Industrial Robot - designed to take part in manufacturing processes on the shop floor.
Public and Frontier Robot - designed to work for special purposes, such as survivor search and recovery operations at disaster sites and space and deep-sea exploration.
Parts and Software - parts and software used in robot manufacture.
The selection process consists of experts evaluating entries based on their contribution to, and potential for, future market development. The selection criteria include: (a) social needs, (b) value from the user's point of view, and (c) technological innovativeness.
See also
List of mechanical engineering awards
References
Mechanical engineering awards
Robotics events | Robot Award | Engineering | 279 |
78,259,344 | https://en.wikipedia.org/wiki/Luxdegalutamide | Luxdegalutamide, also known as ARV-766, is an investigational oral androgen receptor (AR) degrader being developed by Arvinas for the treatment of metastatic castration-resistant prostate cancer (mCRPC). It belongs to a class of drugs called proteolysis targeting chimeras (PROTACs), which are designed to selectively degrade specific proteins by hijacking the ubiquitin-proteasome system. Luxdegalutamide is a second-generation PROTAC AR degrader that has demonstrated a broader efficacy profile and better tolerability compared to its predecessor, ARV-110, in clinical settings. It has shown promise in overcoming resistance associated with certain AR mutations, including the L702H mutation, which is prevalent in up to 24% of treated mCRPC patients. As of 2024, luxdegalutamide is being evaluated in phase I/II clinical trials for prostate cancer.
References
Benzamides
Benzonitriles
Cyclobutanes
Fluorobenzenes
Piperazines
Piperidines
Resorcinols | Luxdegalutamide | Chemistry | 233 |
49,253,142 | https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20112 | Zinc finger protein 112 is a protein that in humans is encoded by the ZNF112 gene.
References
Further reading
Proteins | Zinc finger protein 112 | Chemistry | 27 |
155,835 | https://en.wikipedia.org/wiki/Becquerel | The becquerel (; symbol: Bq) is the unit of radioactivity in the International System of Units (SI). One becquerel is defined as an activity of one per second, on average, for aperiodic activity events referred to a radionuclide. For applications relating to human health this is a small quantity, and SI multiples of the unit are commonly used.
The becquerel is named after Henri Becquerel, who shared a Nobel Prize in Physics with Pierre and Marie Curie in 1903 for their work in discovering radioactivity.
Definition
1 Bq = 1 s⁻¹
A special name was introduced for the reciprocal second (s⁻¹) to represent radioactivity to avoid potentially dangerous mistakes with prefixes. For example, 1 μs⁻¹ would mean 10⁶ disintegrations per second (1 μs⁻¹ = (10⁻⁶ s)⁻¹ = 10⁶ s⁻¹), whereas 1 μBq would mean 1 disintegration per 1 million seconds. Other names considered were hertz (Hz), a special name already in use for the reciprocal second (for periodic events of any kind), and fourier (Fr; after Joseph Fourier). The hertz is now only used for periodic phenomena. While 1 Hz replaces the deprecated term cycle per second, 1 Bq refers to one event per second on average for aperiodic radioactive decays.
The gray (Gy) and the becquerel (Bq) were introduced in 1975. Between 1953 and 1975, absorbed dose was often measured with the rad. Decay activity was given with the curie before 1946 and often with the rutherford between 1946 and 1975.
Unit capitalization and prefixes
As with every International System of Units (SI) unit named after a person, the first letter of its symbol is uppercase (Bq). However, when an SI unit is spelled out in English, it should always begin with a lowercase letter (becquerel)—except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case.
Like any SI unit, Bq can be prefixed; commonly used multiples are kBq (kilobecquerel, ), MBq (megabecquerel, , equivalent to 1 rutherford), GBq (gigabecquerel, ), TBq (terabecquerel, ), and PBq (petabecquerel, ). Large prefixes are common for practical uses of the unit.
Examples
For practical applications, 1 Bq is a small unit. For example, there is roughly 0.017 g of potassium-40 in a typical human body, producing about 4,400 decays per second (Bq).
The activity of radioactive americium in a home smoke detector is about 37 kBq (1 μCi).
The global inventory of carbon-14 is estimated to be 8.5×10¹⁸ Bq (8.5 EBq, 8.5 exabecquerel).
These examples are useful for comparing the amount of activity of these radioactive materials, but should not be confused with the amount of exposure to ionizing radiation that these materials represent. The level of exposure and thus the absorbed dose received are what should be considered when assessing the effects of ionizing radiation on humans.
Relation to the curie
The becquerel succeeded the curie (Ci), an older, non-SI unit of radioactivity based on the activity of 1 gram of radium-226. The curie is defined as 3.7×10¹⁰ decays per second, or 37 GBq.
Conversion factors:
1 Ci = 3.7×10¹⁰ Bq = 37 GBq
1 μCi = 37,000 Bq = 37 kBq
1 Bq ≈ 2.7×10⁻¹¹ Ci ≈ 27 pCi
1 MBq = 0.027 mCi
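A minimal sketch of these conversions (illustrative helper names; the constant 3.7×10¹⁰ Bq per curie is exact by definition):
```python
CI_TO_BQ = 3.7e10  # 1 Ci = 3.7e10 Bq = 37 GBq, exact by definition

def curie_to_becquerel(ci: float) -> float:
    return ci * CI_TO_BQ

def becquerel_to_curie(bq: float) -> float:
    return bq / CI_TO_BQ

print(curie_to_becquerel(1e-6))  # 1 uCi -> 37,000 Bq (37 kBq)
print(becquerel_to_curie(1e6))   # 1 MBq -> 2.7e-5 Ci (0.027 mCi)
```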
Relation to other radiation-related quantities
The following table shows radiation quantities in SI and non-SI units. W (formerly 'Q' factor) is a factor that scales the biological effect for different types of radiation, relative to x-rays (e.g. 1 for beta radiation, 20 for alpha radiation, and a complicated function of energy for neutrons). In general, conversion between rates of emission, the density of radiation, the fraction absorbed, and the biological effects, requires knowledge of the geometry between source and target, the energy and the type of the radiation emitted, among other factors.
See also
Background radiation
Banana equivalent dose
Counts per minute
Ionizing radiation
Orders of magnitude (radiation)
Radiation poisoning
Relative biological effectiveness
References
External links
Derived units on the International Bureau of Weights and Measures (BIPM) web site
SI derived units
Units of radioactivity
Units of frequency | Becquerel | Chemistry,Mathematics | 939 |
62,398,551 | https://en.wikipedia.org/wiki/International%20Society%20for%20Sexual%20Medicine | The International Society for Sexual Medicine (ISSM) is a medical society devoted to the study of the medicine of human sexuality. It publishes two journals, The Journal of Sexual Medicine, and the open-access Sexual Medicine Reviews. It was founded in 1978 and was formerly known as the ISIR/ISSIR.
References
Medical associations
Sexuality
Medical and health organisations based in the Netherlands | International Society for Sexual Medicine | Biology | 76 |
1,448,702 | https://en.wikipedia.org/wiki/Iterated%20function | In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated.
Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics.
Definition
The formal definition of an iterated function on a set X follows.
Let X be a set and f : X → X be a function.
Define f^n as the n-th iterate of f, where n is a non-negative integer, by:
f^0 = id_X and f^{n+1} = f ∘ f^n,
where id_X is the identity function on X and f ∘ g denotes function composition. This notation has been traced to Hans Heinrich Bürmann and John Frederick William Herschel in 1813. Herschel credited Hans Heinrich Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered.
Because the notation f^n may refer to both iteration (composition) of the function f or exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians choose to use ∘ to denote the compositional meaning, writing f^{∘n}(x) for the n-th iterate of the function f(x), as in, for example, f^{∘3}(x) meaning f(f(f(x))). For the same purpose, f^{[n]}(x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested ⁿf(x) instead.
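As an illustration of the definition, a small Python sketch (helper names are illustrative) that builds the n-th iterate by repeated composition, with the 0-th iterate acting as the identity:
```python
from typing import Callable, TypeVar

T = TypeVar("T")

def iterate(f: Callable[[T], T], n: int) -> Callable[[T], T]:
    """Return the n-th iterate f^n; f^0 is the identity function."""
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    def f_n(x: T) -> T:
        for _ in range(n):
            x = f(x)
        return x
    return f_n

double = lambda x: 2 * x
print(iterate(double, 5)(3))  # 3 * 2**5 = 96
print(iterate(double, 0)(3))  # identity: 3
```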
Abelian property and iteration sequences
In general, the following identity holds for all non-negative integers m and n:
f^m ∘ f^n = f^n ∘ f^m = f^{m+n}.
This is structurally identical to the property of exponentiation that a^m a^n = a^{m+n}.
In general, for arbitrary general (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, T_m(T_n(x)) = T_{mn}(x), since T_n(x) = cos(n arccos x).
The relation (f^m)^n(x) = (f^n)^m(x) = f^{mn}(x) also holds, analogous to the property of exponentiation that (a^m)^n = (a^n)^m = a^{mn}.
The sequence of functions is called a Picard sequence, named after Charles Émile Picard.
For a given x in X, the sequence of values f^n(x) is called the orbit of x.
If f^n(x) = f^{n+m}(x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit.
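A sketch of orbit computation together with Floyd's tortoise-and-hare algorithm for the cycle detection problem mentioned above (the sample map and starting point are arbitrary illustrations):
```python
def orbit(f, x0, n):
    """Return the first n+1 orbit points x0, f(x0), ..., f^n(x0)."""
    points = [x0]
    for _ in range(n):
        points.append(f(points[-1]))
    return points

def find_cycle(f, x0):
    """Floyd's cycle detection: return (mu, lam), the index of the first
    periodic point in the orbit of x0 and the period of the orbit."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare, mu = f(tortoise), f(hare), mu + 1
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare, lam = f(hare), lam + 1
    return mu, lam

f = lambda x: (x * x + 1) % 255   # any map on a finite set eventually cycles
print(orbit(f, 3, 8))
print(find_cycle(f, 3))
```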
Fixed points
If x = f(x) for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem.
There are several techniques for convergence acceleration of the sequences produced by fixed point iteration. For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence.
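A minimal sketch of plain fixed-point iteration and of the Aitken-accelerated variant (Steffensen's method); the tolerances and iteration caps are arbitrary illustrative choices:
```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=200):
    """Plain fixed-point iteration x_{k+1} = f(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Fixed-point iteration accelerated with Aitken's delta-squared process."""
    x = x0
    for _ in range(max_iter):
        x1, x2 = f(x), f(f(x))
        denom = x2 - 2 * x1 + x
        if denom == 0:
            return x2
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(fixed_point(math.cos, 1.0))  # attractive fixed point of cos, ~0.739085
print(steffensen(math.cos, 1.0))   # same fixed point, reached in fewer steps
```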
Limiting behaviour
Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point.
When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set.
The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions.
Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started.
Invariant measure
If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states.
In general, because repeated iteration corresponds to a shift, the transfer operator, and its adjoint, the Koopman operator can both be interpreted as shift operators action on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos.
Fractional iterates and flows, and negative iterates
The notion must be used with care when the equation has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for and , both and are solutions; so the expression does not denote a unique function, just as numbers have multiple algebraic roots. A trivial root of f can always be obtained if f's domain can be extended sufficiently. The roots chosen are normally the ones belonging to the orbit under study.
Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x). This function g(x) can be written using the index notation as f^{1/2}(x). Similarly, f^{1/3}(x) is the function defined such that f^{1/3}(f^{1/3}(f^{1/3}(x))) = f(x), while f^{2/3}(x) may be defined as equal to f^{1/3}(f^{1/3}(x)), and so forth, all based on the principle, mentioned earlier, that f^m ∘ f^n = f^{m+n}. This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit.
In such cases, one refers to the system as a flow (cf. section on conjugacy below.)
If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, f^{−1}(x) is the normal inverse of f, while f^{−2}(x) is the inverse composed with itself, i.e. f^{−2}(x) = f^{−1}(f^{−1}(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f^{−1/2}(x) is defined such that f^{−1/2}(f^{−1/2}(x)) = f^{−1}(x), or, equivalently, such that f^{−1/2}(f^{1/2}(x)) = f^0(x) = x.
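As a concrete illustration, for the affine map f(x) = Cx + D (with C > 0 and C ≠ 1) one candidate half iterate is g(x) = √C·(x − a) + a, built from the fixed point a = D/(1 − C); the sketch below (with arbitrary illustrative constants) simply checks g(g(x)) = f(x) numerically:
```python
import math

C, D = 3.0, 1.0
a = D / (1 - C)                            # fixed point of f

f = lambda x: C * x + D
g = lambda x: math.sqrt(C) * (x - a) + a   # candidate half iterate f^{1/2}

x = 2.7
print(f(x), g(g(x)))   # both equal 9.1 (up to floating-point rounding)
```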
Some formulas for fractional iteration
One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows.
First determine a fixed point a for the function such that f(a) = a.
Define f^n(a) = a for all n belonging to the reals. This, in some ways, is the most natural extra condition to place upon the fractional iterates.
Expand around the fixed point a as a Taylor series,
Expand out
Substitute in for , for any k,
Make use of the geometric progression to simplify terms. There is a special case when f′(a) = 1,
This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy.
Example 1
For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just
f^n(x) = C^n (x − a) + a = C^n x + ((1 − C^n)/(1 − C)) D, which is trivial to check.
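A quick numerical check of this closed form against direct iteration (constants chosen arbitrarily for illustration):
```python
C, D, x, n = 1.5, 2.0, 0.3, 7

# direct iteration of f(x) = C*x + D
y = x
for _ in range(n):
    y = C * y + D

# closed form built from the fixed point a = D / (1 - C)
a = D / (1 - C)
closed = C**n * (x - a) + a
print(y, closed)   # the two values agree
```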
Example 2
Find the value of √2^(√2^(√2^⋯)) where this is done n times (and possibly the interpolated values when n is not an integer). We have f(x) = √2^x. A fixed point is a = f(2) = 2.
So set x = 1, and f^n(1) expanded around the fixed point value of 2 is then an infinite series,
which, taking just the first three terms, is correct to the first decimal place when n is positive. Also see Tetration: f^n(1) is the n-th tetration of √2. Using the other fixed point, a = f(4) = 4, causes the series to diverge.
For n = −1, the series computes the inverse function, 2 ln x / ln 2.
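Numerically, the integer iterates starting from 1 are easily seen to approach the attracting fixed point 2 (an illustrative sketch; the other fixed point, 4, is repelling and is not reached this way):
```python
import math

f = lambda x: math.sqrt(2) ** x   # equivalently 2 ** (x / 2)

x = 1.0
for _ in range(20):
    x = f(x)
print(x)   # close to the fixed point 2
```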
Example 3
With the function f(x) = x^b, expand around the fixed point 1 to get the series
which is simply the Taylor series of x^(b^n) expanded around 1.
Conjugacy
If f and g are two iterated functions, and there exists a homeomorphism h such that g = h^{−1} ∘ f ∘ h, then f and g are said to be topologically conjugate.
Clearly, topological conjugacy is preserved under iteration, as g^n = h^{−1} ∘ f^n ∘ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking f(x) = x + 1, one has the iteration of g = h^{−1} ∘ f ∘ h as
g^n(x) = h^{−1}(h(x) + n), for any function h.
Making the substitution x = h^{−1}(y) = φ(y) yields
g(φ(y)) = φ(y + 1), a form known as the Abel equation.
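The tent map–logistic map conjugacy mentioned above can be checked numerically with the standard conjugating homeomorphism h(x) = sin²(πx/2) (a sketch; the sample points are arbitrary):
```python
import math

T = lambda x: 2 * x if x < 0.5 else 2 * (1 - x)   # tent map on [0, 1]
L = lambda y: 4 * y * (1 - y)                     # logistic map with r = 4
h = lambda x: math.sin(math.pi * x / 2) ** 2      # conjugating homeomorphism

for x in (0.1, 0.37, 0.5, 0.82):
    print(h(T(x)), L(h(x)))   # h(T(x)) = L(h(x)) up to rounding
```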
Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f′(0)x, that is
f(x) = Ψ^{−1}(f′(0) Ψ(x)).
Thus, its iteration orbit, or flow, under suitable provisions (e.g., f′(0) ≠ 1), amounts to the conjugate of the orbit of the monomial,
Ψ^{−1}(f′(0)^n Ψ(x)),
where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer needs to be integer or positive, and is a continuous "time" of evolution for the full orbit: the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group.
This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic.
Markov chains
If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain.
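A minimal sketch: repeatedly applying a row-stochastic matrix to an initial distribution, which here converges to the stationary distribution (the matrix entries are illustrative):
```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])      # rows sum to one (row-stochastic)
p = np.array([1.0, 0.0])        # start with all probability in state 0

for _ in range(50):
    p = p @ P                   # one iteration step of the chain

print(p)                        # approaches the stationary distribution [5/6, 1/6]
```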
Examples
There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems.
Ernst Schröder, in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that Ψ(x) = arcsin²(√x), hence f^n(x) = sin²(2^n arcsin(√x)).
A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded Ψ(x) = −½ ln(1 − 2x), and hence f^n(x) = −½((1 − 2x)^(2^n) − 1).
If is the action of a group element on a set, then the iterated function corresponds to a free group.
Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n.
Note: these two special cases of ax² + bx + c are the only cases that have a closed-form solution. Choosing b = 2 = –a and b = 4 = –a, respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table.
Some of these examples are related among themselves by simple conjugacies.
Means of study
Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators.
In computer science
In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs.
Definitions in terms of iterated functions
Two important functionals can be defined in terms of iterated functions. These are summation:
and the equivalent product:
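The displayed formulas are not reproduced here, but the underlying idea (realizing a sum or a product as repeated application of a single state-updating function) can be sketched as follows; this is an illustrative construction, not necessarily the article's exact formula:
```python
def iterate(step, state, n):
    """Apply the map `step` to `state` n times."""
    for _ in range(n):
        state = step(state)
    return state

def summation(g, a, b):
    """Sum of g(i) for i = a..b, computed by iterating a pair-updating map."""
    step = lambda s: (s[0] + 1, s[1] + g(s[0]))
    return iterate(step, (a, 0), b - a + 1)[1]

def product(g, a, b):
    """Product of g(i) for i = a..b, computed the same way."""
    step = lambda s: (s[0] + 1, s[1] * g(s[0]))
    return iterate(step, (a, 1), b - a + 1)[1]

print(summation(lambda i: i, 1, 100))  # 5050
print(product(lambda i: i, 1, 5))      # 120
```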
Functional derivative
The functional derivative of an iterated function is given by the recursive formula:
Lie's data transport equation
Iterated functions crop up in the series expansion of combined functions, such as .
Given the iteration velocity, or beta function (physics),
for the th iterate of the function , we have
For example, for rigid advection, if , then . Consequently, , action by a plain shift operator.
Conversely, one may specify given an arbitrary , through the generic Abel equation discussed above,
where
This is evident by noting that
For continuous iteration index , then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group,
The initial flow velocity suffices to determine the entire flow, given this exponential realization which automatically provides the general solution to the translation functional equation,
See also
Irrational rotation
Iterated function system
Iterative method
Rotation number
Sarkovskii's theorem
Fractional calculus
Recurrence relation
Schröder's equation
Functional square root
Abel function
Böttcher's equation
Infinite compositions of analytic functions
Flow (mathematics)
Tetration
Functional equation
Notes
References
External links
Dynamical systems
Fractals
Sequences and series
Fixed points (mathematics)
Functions and mappings
Functional equations | Iterated function | Physics,Mathematics | 2,564 |
60,914,866 | https://en.wikipedia.org/wiki/Vadim%20Utkin | Vadim Ivanovich Utkin (; 30 October 1937 – 18 September 2022) was a Russian-American control theorist, electrical engineer and a professor of Electrical Engineering and Mechanical Engineering at the Ohio State University. He is best known for being one of the originators of Sliding Mode Control and Variable Structure Systems, which have become fundamental concepts in the field of nonlinear control (e.g. robust control).
Biography
Utkin was born in Moscow, Soviet Union. He was with the Institute of Control Sciences from 1960 to 1994, where he served as its Head of Discontinuous Control Systems Laboratory from 1973 to 1994. He joined the Ohio State University in 1994 as the Ford Chair of Electromechanical Systems, and was the first professor to hold this distinction until 2002.
He was an IEEE Fellow, and he was the recipient of awards such as the Lenin Prize and the Humboldt Prize. He also held honorary doctorates from the University of Sarajevo and Rovira i Virgili University.
Selected works
Sliding Modes and their Applications in Variable Structure Systems. Mir, Moscow, 1978.
Sliding Modes in Control and Optimization, Springer Verlag, 1992.
Sliding Mode Control in Electro-Mechanical Systems, Taylor & Francis. 1st edition 1999, 2nd edition 2009.
References
1937 births
2022 deaths
Control theorists
Fellows of the IEEE
Ohio State University faculty
Russian emigrants to the United States
Scientists from Moscow | Vadim Utkin | Engineering | 281 |
39,086,408 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Mega | The Samsung Galaxy Mega is an Android-based phablet that was manufactured and released by Samsung. It was announced on April 11, 2013. The original model featured a 6.3-inch screen, though a revised version was released with a 5.8-inch screen. It has a 1,280×720 screen, a dual-core 1.7 GHz processor and an 8-megapixel camera. The phone runs Android 4.2.2 "Jelly Bean" software, and internal storage is 8 or 16 GB (usable 5.34 or 12 GB respectively).
The Galaxy Mega has received the Android 4.4.2 "KitKat" update. Also available is the unofficial update Cyanogenmod 11 Android 4.4.2 update, for the Mega 6.3 (GT-I9200/I9205). The device's successor is the Samsung Galaxy Mega 2.
Features
Multi-window – Split screen capability
Home Screen is available in landscape mode
Group Play – Link with other Galaxy devices to share photos or create surround sound using each device's speaker
Air View – Hover over areas in supported apps to view on-screen previews. (6.3 model in portrait mode only)
S Memo – Samsung's note app. Create hand-written notes with your finger, text using the keyboard, and embed audio or images.
S Voice – Personal assistant and knowledge navigator.
S Translator – Translator app, with support for nine languages.
Smart Stay – Uses the front-facing camera to track the user's eyes, and only powers off the display if the user is not looking.
IR blaster (6.3 model)
Smart dual SIM (5.8 model)
Hardware
The Galaxy Mega largely resembles the Galaxy S4, and the two share similar features. Users can customize the lock screen and quickly access settings from the drop-down notification bar. Other features include Air View, which allows users to preview emails and photos by hovering a finger an inch above the screen, and WatchON, which lets users control a television with the smartphone. It also includes Multi Window, which allows users to use multiple apps on the same screen, a feature that is enhanced with the phone's 6.3-inch LCD (720 x 1280) display. S Translator provides quick and easy translations, and ChatON lets users share their screens with others.
The rear-facing 8-megapixel camera comes with numerous shooting modes such as Panorama and Sound & Shot. The Story Album feature lends itself to quick photo album creation on the go. The Mega runs Android 4.2.2 Jelly Bean. Users can store additional music, photos and videos with up to 64 GB of expandable storage with an external microSD card. A 3,200-mAh removable battery should allow the phone to run throughout the day on a single charge. The Galaxy Mega is powered by a dual-core 1.7-GHz Qualcomm MSM8930 Snapdragon 400 processor with 1.5GB of RAM.
Design
The Galaxy Mega measures 6.6 x 3.46 x 0.31 inches, and is larger than the Samsung Galaxy Note II (5.9 x 3.2 x 0.37 inches), HTC One (2013) (5.1 ounces, 5.31 x 2.63 x 0.28 inches) and Motorola Moto X (1st generation) (4.8 ounces, 5.1 x 2.6 x 0.22-0.4 inches). It weighs 7.1 ounces.
Software
The Samsung Galaxy Mega runs Android Jelly Bean OS 4.2.2 skinned with Samsung's TouchWiz interface. Samsung's Multi Window Mode is front and center on the device. Similar to other Galaxy phones, you can customize the Mega's lock screen with widgets and shortcuts. Seven customizable home screens are available to the user. 16 quick settings buttons in the notification drawer enable users to toggle features including Wi-Fi connectivity and the proprietary Smart Stay. These buttons can be rearranged by clicking on a tile button in the top right corner of the notification drawer.
The home screen can be viewed horizontally.
Variants
GT-I9150 - 5.8-inch screen, 1.4 GHz CPU, 1.5 GB RAM, 8 GB built-in storage - no LTE support - no dual SIM support
GT-I9152 - 5.8-inch screen, 1.4 GHz CPU, 1.5 GB RAM, 8 GB built-in storage - no LTE support - dual SIM support
GT-I9200 - 6.3-inch screen, 1.7 GHz CPU, 1.5 GB RAM, 8 GB or 16 GB built-in storage - no LTE support - no dual SIM support
GT-I9205 - 6.3-inch screen, 1.7 GHz CPU, 1.5 GB RAM, 8 GB or 16 GB built-in storage - LTE support - no dual SIM support
Only the GT-I9152 has dual SIM support.
Only the GT-I9205 has LTE support (i.e. 4G-LTE support) and no FM Radio.
The AT&T version of the GT-I9205 is known as the SGH-i527.
Gallery
See also
Samsung Galaxy S4
Samsung Galaxy Note II
Samsung Galaxy Note 3
Samsung Galaxy Note 8.0
Samsung Galaxy Tab 3 8.0
References
External links
Samsung press release
Video review by GSM Arena
Samsung Galaxy Mega Phablet Debuts Information Week
Android (operating system) devices
Samsung mobile phones
Samsung Galaxy
Mobile phones introduced in 2013
Phablets
Mobile phones with infrared transmitter | Samsung Galaxy Mega | Technology | 1,170 |
30,083,828 | https://en.wikipedia.org/wiki/Tricholosporum%20longicystidiosum | Tricholosporum longicystidiosum is a species of fungus in the family Tricholomataceae. Found in Mexico, it was described as new to science in 1990.
References
External links
longicystidiosum
Fungi of North America
Fungi described in 1990
Fungus species | Tricholosporum longicystidiosum | Biology | 60 |
23,949,345 | https://en.wikipedia.org/wiki/Instituto%20Nacional%20de%20Sismolog%C3%ADa%2C%20Vulcanolog%C3%ADa%2C%20Meteorolog%C3%ADa%20e%20Hidrolog%C3%ADa | The National Institute for Seismology, Vulcanology, Meteorology and Hydrology of Guatemala (in Spanish: Instituto Nacional de Sismología, Vulcanología, Meteorología e Hidrología (INSIVUMEH)) is a scientific agency of the Guatemalan government. The agency was created to study and monitor atmospheric, geophysical and hydrological phenomena and events, their hazards to Guatemalan society, and to provide recommendations to the government and the private sector in the occurrence of natural disasters. The agency has four major scientific disciplines, concerning Seismology, Vulcanology, Meteorology and Hydrology.
The INSIVUMEH was created in March 1976, shortly after the major 1976 Guatemala earthquake and is part of the Ministry of Communications, Infrastructure and Housing.
References
Government agencies of Guatemala
Seismological observatories, organisations and projects
Volcanology
Governmental meteorological agencies in North America
Hydrology organizations
Research institutes in Guatemala | Instituto Nacional de Sismología, Vulcanología, Meteorología e Hidrología | Environmental_science | 189 |
15,143,012 | https://en.wikipedia.org/wiki/Overlapping%20distribution%20method | The Overlapping distribution method was introduced by Charles H. Bennett for estimating chemical potential.
Theory
For two N-particle systems 0 and 1 with partition functions Q_0 and Q_1,
from F_i = −k_B T ln Q_i
we get that the thermodynamic free energy difference is ΔF = F_1 − F_0 = −k_B T ln(Q_1/Q_0). Suppose we sample the configurations of system 1 (and, separately, of system 0).
For every configuration visited during this sampling of system 1 we can compute the potential energy U as a function of the configuration space, and the potential energy difference is ΔU = U_1 − U_0.
Now construct a probability density of the potential energy difference from the above sampling: p_1(ΔU) = (1/Z_1) ∫ ds^N exp(−βU_1) δ(U_1 − U_0 − ΔU),
where Z_i is the configurational part of the partition function of system i.
Since U_1 = U_0 + ΔU inside the delta function, it follows that p_1(ΔU) = (Z_0/Z_1) exp(−βΔU) p_0(ΔU).
Now define two functions: f_0(ΔU) = ln p_0(ΔU) − βΔU/2 and f_1(ΔU) = ln p_1(ΔU) + βΔU/2,
so that f_1(ΔU) = f_0(ΔU) + βΔF, where β = 1/(k_B T),
and ΔF can be obtained by fitting f_0 and f_1 in the region where the two histograms overlap.
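A hedged numerical sketch of the procedure: histogram the sampled energy differences from both systems, form the two shifted log-densities, and read off βΔF from their constant offset in the overlap region. The synthetic Gaussian samples below are an assumption for illustration, chosen so that the exact answer is βΔF = 1.5; in an actual calculation ΔU would come from simulations of systems 0 and 1.
```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
dU0 = rng.normal(2.0, 1.0, 500_000)   # Delta U sampled while simulating system 0
dU1 = rng.normal(1.0, 1.0, 500_000)   # Delta U sampled while simulating system 1

bins = np.linspace(-2.0, 5.0, 71)
centers = 0.5 * (bins[:-1] + bins[1:])
c0, _ = np.histogram(dU0, bins=bins)
c1, _ = np.histogram(dU1, bins=bins)
p0 = c0 / (c0.sum() * np.diff(bins))  # normalized density p0(Delta U)
p1 = c1 / (c1.sum() * np.diff(bins))  # normalized density p1(Delta U)

mask = (c0 > 100) & (c1 > 100)        # keep only the well-sampled overlap region
f0 = np.log(p0[mask]) - 0.5 * beta * centers[mask]
f1 = np.log(p1[mask]) + 0.5 * beta * centers[mask]
print(np.mean(f1 - f0))               # estimate of beta * Delta F, close to 1.5 here
```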
References
Potentials
Chemical thermodynamics | Overlapping distribution method | Chemistry | 125 |
5,976,324 | https://en.wikipedia.org/wiki/Adder%20stone | An adder stone is a type of stone, usually glassy, with a naturally occurring hole through it. Such stones, which usually consist of flint, have been discovered by archaeologists in both Britain and Egypt. Commonly, they are found in Northern Germany at the coasts of the North and Baltic Seas.
In Britain they are also called hag stones, witch stones, fairy stones, serpent's eggs, snake's eggs, or Glain Neidr in Wales, milpreve in Cornwall, adderstanes in the south of Scotland and Gloine nan Druidh ("Druids' glass" in Scottish Gaelic) in the north. In Germany they are called Hühnergötter ("chicken gods").
Various traditions exist as to the origins of adder stones. One holds that the stones are the hardened saliva of large numbers of serpents massing together, the perforations being caused by their tongues. There are other claims that an adder stone comes from the head of a serpent or is made by the sting of an adder. The more modern and perhaps easier to attain artefact would be any rock with a hole bored through the middle by water. Human intervention (i.e., direction of water or placement of the stone) is not allowed.
In Pliny's Natural History
According to Ancient Roman natural philosopher Pliny’s Natural History, book XXIX, adder stone was held in high esteem amongst the Druids. Pliny described rituals the druids allegedly conducted to acquire the stone, and the magical properties they ascribed to it. He wrote:
There is a sort of egg in great repute among the Gauls, of which the Greek writers have made no mention. A vast number of serpents are twisted together in summer, and coiled up in an artificial knot by their saliva and slime; and this is called "the serpent's egg". The druids say that it is tossed in the air with hissings and must be caught in a cloak before it touches the earth. The person who thus intercepts it, flies on horseback; for the serpents will pursue him until prevented by intervening water. This egg, though bound in gold will swim against the stream. And the magi are cunning to conceal their frauds, they give out that this egg must be obtained at a certain age of the moon. I have seen that egg as large and as round as a common sized apple, in a chequered cartilaginous cover, and worn by the Druids. It is wonderfully extolled for gaining lawsuits, and access to kings. It is a badge which is worn with such ostentation, that I knew a Roman knight, a Vocontian, who was slain by the stupid emperor Claudius, merely because he wore it in his breast when a lawsuit was pending.
In Welsh mythology
The Glain Neidr or Maen Magi of Welsh folklore is also closely connected to Druidism. The Glain Neidr of Wales are believed to be created by a congress of snakes, normally occurring in spring, but most auspicious on May Eve.
Although not named as Glain Neidr, magic stones with the properties of adder stones appear frequently in Welsh mythology and folklore. The Mabinogion, translated into English in the mid-nineteenth century by Lady Charlotte Guest, mentions such stones on two occasions. In the story of Peredur son of Efrawg (Percival of the Arthurian cycle), in a departure from Chrétien de Troyes' Perceval, the Story of the Grail, Peredur is given a magical stone that allows him to see and kill an invisible creature called the Addanc. In another tale, Owain, or the Lady of the Fountain (Ywain of Arthurian legend), the hero Owain mab Urien is trapped in the gatehouse of a castle. He is given a stone by a maiden, which turns Owain invisible, allowing him to escape capture.
In Russian mythology
In Russian folklore, adder stones were believed to be the abodes of spirits called Kurinyi Bog ("The Chicken God"). Kurinyi Bog were the guardians of chickens, and their stones were placed into farmyards to counteract the possible evil effects of the Kikimora (The wives of the Domovoi, the house spirits.) Kikimora, who also guarded and took care of chickens, could often unleash misery upon hens they did not like by plucking out their feathers.
In English folklore
In the seaside town Hastings there is a local legend that the town is under an enchantment known as Crowley's Curse, said to have been conjured by Aleister Crowley who lived in Hastings at the end of his life. The curse compels anyone who has lived in Hastings to always return, no matter how far away they move, or for how long. The curse can only be broken by taking a stone with a hole running through it from Hastings beach.
See also
Creirwy
Druid
List of mythological objects
Omarolluk and Pholad borings, other rocks with curious but naturally created holes.
Toadstone
References
Henkin, Leo J. (January 1943). "The Carbuncle in the Adder's Head". Modern Language Notes, Vol. 58, No. 1, pp. 34–39.
ghostvillage.com: Dictionary of Superstitions A-Z
Witchcraft & second sight in the Highlands & islands of Scotland. John Gregorson Campbell, pg 84.
(Gloine)
External links
Druidry
Egyptian mythology
Magic items
Mythological objects
Scottish folklore
Welsh folklore | Adder stone | Physics | 1,171 |
2,567,156 | https://en.wikipedia.org/wiki/Inter-working%20function | The inter-working function (IWF) is a method for interfacing a wireless telecommunication network with the public switched telephone network (PSTN). The IWF converts the data transmitted over the air interface into a format suitable for the PSTN.
IWF contains both the hardware and software elements that provide the rate adaptation and protocol conversion between PSTN and the wireless network.
Some systems require more IWF capability than others, depending on the network which is being connected. The IWF also incorporates a "modem bank", which may be used when, for example, the GSM data terminal equipment (DTE) exchanges data with a land DTE connected via an analogue modem.
The IWF provides the function to enable the GSM system to interface with the various forms of public and private data networks currently available.
The basic features of the IWF are:
Data rate adaption
Protocol conversion
References
Wireless networking
Telephony | Inter-working function | Technology,Engineering | 198 |
59,159,271 | https://en.wikipedia.org/wiki/From%20Fauna | From Fauna, formerly known as the Cellular Agriculture Society, is an international 501(c)(3) organization that has been involved in research, funding, advancement of, and most recently education in, cellular agriculture. It is based in San Francisco, and was founded by Kris Spiros in the early 2010s.
In 2023, the Cellular Agriculture Society released Modern Meat, a freely available 600-page textbook which was the first on the subject of cultured meat. They have also created children's books, educational simulations, social science research, and designed the theoretical workings and architecture of a cultured meat facility through Project CMF, which envisions what cultured meat production could look like in 2040.
Notes
References
Biotechnology
Cellular agriculture
Sustainable food system
Synthetic biology | From Fauna | Engineering,Biology | 156 |
3,790,249 | https://en.wikipedia.org/wiki/Saltern | A saltern is an area or installation for making salt. Salterns include modern salt-making works (saltworks), as well as hypersaline waters that usually contain high concentrations of halophilic microorganisms, primarily haloarchaea but also other halophiles including algae and bacteria.
Salterns usually begin with seawater as the initial source of brine but may also use natural saltwater springs and streams. The water is evaporated, usually over a series of ponds, to the point where sodium chloride and other salts precipitate out of the saturated brine, allowing pure salts to be harvested. Where complete evaporation in this fashion was not routinely achievable due to weather, salt was produced from the concentrated brine by boiling the brine.
Background
Earliest examples of pans used in the solution mining of salt date back to prehistoric times and the pans were made of ceramics known as briquetage. Later examples were made from lead and then iron. The change from lead to iron coincided with a change from wood to coal for the purpose of heating the brine. Brine would be pumped into the pans, and concentrated by the heat of the fire burning underneath. As crystals of salt formed these would be raked out and more brine added. In warmer climates no additional heat would be supplied, the sun's heat being sufficient to evaporate off the brine.
One of the earliest salterns for the harvesting of salt is argued to have taken place on Xiechi Lake, Shanxi, China by 6000 BC. Strong archaeological evidence of salt making dating to 2000 BC is found in the ruins of Zhongba at Chongqing.
See also
Sodium chloride
Alberger process
Salt evaporation pond
Seawater greenhouse
History of salt
Salt March (India)
Red hill (salt making)
References
External links
This article incorporates text from http://www.dawlish.com/ , a site which allows free use of its content.
Salt Making in the Adur Valley
Archaeology, arable landscapes and drainage in the Fenland of Eastern England – excavations at the Bourne–Morton Canal and the Roman saltern recorded.
A medieval (13th–16th century) saltern mound at Millfields Caravan Park, Bramber, West Sussex.
Definition of Saltern Mound | Saltern | Chemistry | 484 |
52,348,387 | https://en.wikipedia.org/wiki/Australasian%20Tunnelling%20Society | In 1973 the Institution of Engineers Australia (now Engineers Australia) and the Australasian Institute of Mining and Metallurgy (AusIMM) collaborated in the formation of the Australian Tunnelling Association. It is a professional organization of engineers and other skilled professionals committed to the maintenance of high standards and the expansion of technical and scientific knowledge pertaining to tunnel construction.
In 1981 the Australian Tunnelling Association became the Australian Underground Construction and Tunnelling Association (AUCTA), operating as a technical society sponsored by Engineers Australia and AusIMM under status approved by the Councils of both organisations. In 2005 a New Zealand Chapter of the Technical Society was formed, and to better reflect its international membership, AUCTA changed its name to the Australasian Tunnelling Society (ATS).
References
Engineering societies based in Australia
Tunnelling organizations
Tunnels in Australia
1973 establishments in Australia
2005 establishments in New Zealand | Australasian Tunnelling Society | Engineering | 177 |
7,813,346 | https://en.wikipedia.org/wiki/Automatic%20message%20accounting | Automatic message accounting (AMA) provides detailed accounting for telephone calls. When direct distance dialing (DDD) was introduced in the US, message registers no longer sufficed for dialed telephone calls. The need to record the time and phone number of each long-distance call was met by electromechanical data processing equipment.
Centralized AMA
In centralized AMA (CAMA), the originating Class 5 telephone switches used automatic number identification (ANI) and multi-frequency (MF) signaling to send the originating and dialed telephone numbers to the Class 4 toll connecting office. The Class 4 office recorded this information with punched tape machines on long strips of paper, that had approximately the width of a hand. Each day a technician cut the paper tapes and sent them to the accounting center to be read and processed to generate customer telephone bills. Each punch recorder was responsible for 100 trunks, and its associated call identity indexer (CII) identified the trunk for an initial entry when connecting the call, an answer entry when the called party answered, and a disconnect entry when the call was cleared.
In Bell System telephone exchanges, particularly the 5XB switches, information from the marker told the sender that the call required ANI, and stored the calling equipment number in reed relay packs in the sender. The sender used the transverter connector (TVC) to seize a transverter (TV), which was a bay of a few hundred flat spring relays that controlled all AMA functions. The TV looked in the AMA translator (AMAT) that took care of these particular few thousand lines. AMAT was a rack of ferrite ring cores with cross-connect wires passing through holes of 3 × 4 inches or about a decimeter square, one wire per line. The wire was terminated on a wire wrap peg representing that particular line, and passed through a ring that represented the NNX digits of the billing number, then the M, C, D and finally Units of that number. When queried, AMAT sent a high-current pulse through the wire for that particular line, inducing pulses in the appropriate rings which were amplified by a cold cathode tube amplifier and then by a relay, and sent back to the transverter which supplied it to the sender for transmission by ANI to the tandem office.
In case of billing complaints, a test apparatus allowed scanning through all the lines in an office at the rate of about a hundred per minute, to find which ones were translated to a particular billing number.
Local AMA
In local AMA (LAMA) all this equipment was located at the Class 5 office. In this case, it also recorded the completion of local calls, thus obviating message registers. For detail billed calls, the punch recorded both calling and called numbers, as well as time of day. For message rate calls, only the calling number and time of day.
In some electromechanical offices in the 1970s, the paper tape punch recorders were replaced by magnetic tape recorders. Most punches remained in service until the exchange switch itself was replaced by more advanced systems. Stored program control exchanges, having computers anyway, do not need separate AMA equipment. They sent magnetic tapes to the Accounting Center until approximately 1990, when data links took over this job.
Billing automatic message format
Around the same time period, the billing AMA format (BAF) was developed to support the full range of local exchange carrier services. BAF is now the preferred format for all AMA data generated for processing by a LEC Revenue Accounting Office (RAO). BAF supports the complete spectrum of services and technologies, including local and network interconnection services, operator services, toll-free services, Intelligent Network database services, wireline and wireless call recording, IP addressing, and broadband data services.
BAF is administered by Telcordia Technologies, with the Billing AMA Format Advisory Group (BAFAG) playing a central role in the overall approval and administration of BAF records. The BAFAG consists of subject matter experts and representatives from the Telcordia Consulting Services Business Group who review and authorize proposed BAF elements, as well as subject matter experts from AT&T, CenturyLink (formerly Qwest), and Verizon.
The BAFAG uses the GR-1100 (Billing Automatic Message Accounting Format, BAF) specification to record call history. It describes the possible groupings of BAF structures and modules that form BAF records, the connection between service, technology, and call type, how call type and call conditions determine the structure and modules (if any) that are selected for generation of BAF records, and how the characteristics of the calling and called addresses, as well as the services provided, are factors in module generation.
The members of the 3rd Generation Partnership Project (3GPP) have been working toward Abstract Syntax Notation One (ASN.1)-encoded Charging Data Records (CDRs) to be mapped to the AMA records in BAF. In telecommunications mediation, a billing mediation system converts the 3GPP CDRs to BAF when the call and service usage data is processed by a legacy billing system or any other downstream recipient system.
Next Generation Networks (NGN) Accounting Management Generic Requirements adopt the NGN Charging and Accounting Architecture defined by 3GPP and uses industry standard terminology. It includes alignment with 3GPP Charging Principles, the International Telecommunication Union-Telecommunication (ITU-T) NGN Charging and Accounting Framework Architecture, and the Internet Engineering Task Force (IETF) Diameter Accounting protocol. The 3GPP effort also includes adding a conversion guide for 3GPP CDRs to AMA records.
Message register
Electromechanical pulse counters counted message units for message rate service lines in panel switches and similar exchanges installed in the early and middle 20th century. The metering pulses were generated in a junctor circuit, at a rate set by the sender, usually one pulse every few minutes. Every month a worker read and recorded the indicated number of message units, similar to the accounting of a gas meter. In the middle of the 20th century it became customary to photograph the meters, about a hundred per film frame, for examination in comfort.
American message unit counters generally had four digits, which sufficed because they were only used on local calls and most residential lines did not pay for local calls. Despite the arrival of subscriber trunk dialling (automatic long-distance dialing) in Europe, central offices there continued making and using message registers in the 1970s, designing ones that could register more than one click per second on a trunk call and display five or six digits.
See also
Operations support systems
References
External links
History of Bell System AMA
Telephone exchanges
Telecommunications billing systems | Automatic message accounting | Technology | 1,362 |
61,198,684 | https://en.wikipedia.org/wiki/Irina%20Grigorieva%20%28academic%29 | Irina Grigorieva, Lady Geim is a Professor of Physics at the University of Manchester and Director of the Engineering and Physical Sciences Research Council Centre for Doctoral Training in Science and the Applications of Graphene. She was awarded the 2019 David Tabor Medal and Prize of the Institute of Physics and was elected as a Fellow of the Institute.
Early life and education
Grigorieva was born in Russia. She studied physics at the Institute of Solid State Physics in Russia and earned her PhD in 1989.
Research and career
In 1990, Grigorieva moved to Nottingham with her husband Andre Geim. She visited the University of Oxford, University of Cambridge and Imperial College London to deliver seminars on her PhD research. Eventually, she joined the University of Bristol as a postdoctoral researcher.
She moved to Nijmegen where she worked as a laboratory assistant.
Grigorieva suggested to Geim that he use a frog to demonstrate magnetic levitation, for which Geim won the Ig Nobel Prize.
She joined the University of Manchester in 2001, where she works in the Condensed Matter Physics group. When she joined the group, she started studying the adhesive mechanisms of the feet of gecko lizards. In 2003, she created a gecko-like adhesive that is self-cleaning and re-attachable. Grigorieva is a member of the Graphene Council.
Grigorieva is a Professor of Physics at the University of Manchester and Director of the Engineering and Physical Sciences Research Council Centre for Doctoral Training in Science and the Applications of Graphene. She works on the electronic and magnetic properties of two-dimensional materials. She is interested in superconducting materials and the application of graphene in spintronics. In 2013, she was the first to demonstrate that graphene could be magnetic through the use of non-magnetic atoms and vacancies. Defects in graphene carry Spin-½ magnetic moments.
In 2015, she demonstrated that it is possible to switch the magnetism in graphene on and off. She created small bubbles out of graphene and showed that they can withstand pressures of 200 megapascals, which is greater than the pressure in the deep ocean. To measure the pressure inside a graphene bubble, they used atomic force microscopy and a monolayer of boron nitride.
Grigorieva used graphene as a filter to remove subatomic particles, including taking protons from heavy water. This includes removing deuterium for the cleaning of nuclear waste.
Awards and honours
2019 Institute of Physics David Tabor Medal and Prize
Personal life
Grigorieva and husband, physicist Sir Andre Geim, have a daughter. She serves on the Board of Governors of Withington Girls' School.
References
Academics of the University of Manchester
Living people
Russian materials scientists
Russian women scientists
Women materials scientists and engineers
Year of birth missing (living people)
Wives of knights | Irina Grigorieva (academic) | Materials_science,Technology | 582 |
228,569 | https://en.wikipedia.org/wiki/Conversation | Conversation is interactive communication between two or more people. The development of conversational skills and etiquette is an important part of socialization. The development of conversational skills in a new language is a frequent focus of language teaching and learning. Conversation analysis is a branch of sociology which studies the structure and organization of human interaction, with a more specific focus on conversational interaction.
Definition and characterization
No generally accepted definition of conversation exists, beyond the fact that a conversation involves at least two people talking together. Consequently, the term is often defined by what it is not. A ritualized exchange such as a mutual greeting is not a conversation, and an interaction that includes a marked status differential (such as a boss giving orders) is also not a conversation. An interaction with a tightly focused topic or purpose is also generally not considered a conversation. Summarizing these properties, one authority writes that "Conversation is the kind of speech that happens informally, symmetrically, and for the purposes of establishing and maintaining social ties."
From a less technical perspective, a writer on etiquette in the early 20th century defined conversation as the polite give and take of subjects thought of by people talking with each other for company.
Conversations follow rules of etiquette because conversations are social interactions, and therefore depend on social convention. Specific rules for conversation arise from the cooperative principle. Failure to adhere to these rules causes the conversation to deteriorate or eventually to end. Contributions to a conversation are responses to what has previously been said.
Conversations may be the optimal form of communication, depending on the participants' intended ends. Conversations may be ideal when, for example, each party desires a relatively equal exchange of information, or when the parties desire to build social ties. On the other hand, if permanency or the ability to review such information is important, written communication may be ideal. Or if time-efficient communication is most important, a speech may be preferable.
Conversation involves a great deal of nuanced and implied context that lies beneath the words themselves.
Conversation is generally face-to-face person-to-person at the same time (synchronous) – possibly online with video applications such as Skype, but might also include audio-only phone calls. It would not generally include internet written communication which tends to be asynchronous (not same time – can read and respond later if at all) and does not fit the 'con'='with' in 'conversation'. In face-to-face conversation it has been suggested that 85% of the communication is non-verbal/body language – a smile, a frown, a shrug, tone of voice conveying much added meaning to the mere words. Short forms of written communication such as SMS are thus frequently misunderstood.
In English slang, a conversation that is generally found to be uninteresting is referred to as 'boring' and the person at the center of that conversation as a 'bore'.
Classification
Banter
Banter is short witty sentences that bounce back and forth between individuals. Often banter uses clever put-downs and witty insults similar to flyting, misunderstandings (often intentional), zippy wisecracks, zingers, flirtation, and puns. The idea is that each line of banter should "top" the one before it and be, in short, a verbal war of wit.
Films that have used banter as a way of structure in conversations are:
Bringing Up Baby (1938)
His Girl Friday (1940)
The Big Sleep (1946)
Much Ado About Nothing (1993)
Important factors in delivering a banter is the subtext, situation and the rapport with the person. Every line in a banter should be able to evoke both an emotional response and ownership without hurting one's feelings. Following a structure that the involved parties understand is important, even if the subject and structure is absurd, a certain level of progression should be kept in a manner that it connects with the involved parties.
Different methods of story telling could be used in delivering banter, like making an unexpected turn in the flow of structure (interrupting a comfortable structure), taking the conversation towards an expected crude form with evoking questions, doubts, self-conscientiousness (creating intentional misunderstandings), or layering the existing pattern with multiple anchors. It is important to quit the bantering with the sensibility of playground rules, both parties should not obsess on topping each other, continuously after a certain point of interest. It is as Shakespeare said "Brevity is the soul of wit."
Discussion
One element of conversation is discussion: sharing opinions on subjects that are thought of during the conversation. In polite society the subject changes before the discussion becomes a dispute or turns controversial. For example, if theology is being discussed, no one insists that a particular view be accepted.
Subject
Many conversations can be divided into four categories according to their major subject content:
Subjective ideas, which often serve to extend understanding and awareness.
Objective facts, which may serve to consolidate a widely held view.
Other people (usually absent), which may be either critical, competitive, or supportive. This includes gossip.
Oneself, which sometimes indicate attention-seeking behavior or can provide relevant information about oneself to participants in the conversation.
The proportional distribution of any given conversation between the categories can offer useful psychological insights into the mind set of the participants. Practically, however, few conversations fall exclusively into one category. This is the reason that the majority of conversations are difficult to categorize.
Functions
Most conversations may be classified by their goal. Conversational ends may shift over the life of the conversation.
Functional conversation is designed to convey information in order to help achieve an individual or group goal.
Small talk is a type of conversation where the topic is less important than the social purpose of achieving bonding between people or managing personal distance; asking 'how is the weather' is a typical example, conveying no practical information in itself.
Aspects
Differences between men and women
A study completed in July 2007 by Matthias Mehl of the University of Arizona shows that contrary to popular belief, there is little difference in the number of words used by men and women in conversation. The study showed that on average each gender uses about 16,000 words per day.
Between strangers
There are certain situations, typically encountered while traveling, which result in strangers sharing what would ordinarily be an intimate social space such as sitting together on a bus or airplane. In such situations strangers are likely to share intimate personal information they would not ordinarily share with strangers. A special case emerges when one of the travelers is a mental health professional and the other party shares details of their personal life in the apparent hope of receiving help or advice.
Narcissism
Conversational narcissism is a term used by the Marxist sociologist Charles Derber in his book The Pursuit of Attention: Power and Ego in Everyday Life.
Derber argued that the social support system in America is relatively weak, which leads people to compete for attention. In social situations, he believes that people tend to steer the conversation away from others and toward themselves. "Conversational narcissism is the key manifestation of the dominant attention-getting psychology in America," he wrote. "It occurs in informal conversations among friends, family and coworkers. The profusion of popular literature about listening and the etiquette of managing those who talk constantly about themselves suggests its pervasiveness in everyday life." Derber asserts that this "conversational narcissism" often occurs subtly rather than overtly because it is socially prudent to avoid being judged an egotist.
Derber distinguishes the "shift-response" from the "support-response". A "shift-response" takes the focus of attention away from the last speaker and refocuses on the new speaker, as in: "John: I'm feeling really starved. Mary: Oh, I just ate." Whereas, a "support-response" maintains the focus on the last speaker, as in: "John: I'm feeling really starved. Mary: When was the last time you ate?"
Artificial intelligence
The ability to generate conversation that cannot be distinguished from a human participant has been one test of a successful artificial intelligence (the Turing test). A human judge engages in a natural-language conversation with one human and one machine, during which the machine tries to appear human (and the human does not try to appear other than human). If the judge cannot tell the machine from the human, the machine is said to have passed the test. One limitation of this test is that the conversation is by text as opposed to speech, not allowing tone to be shown.
One's self
Also called intrapersonal communication, the act of conversing with oneself can help solve problems or serve therapeutic purposes like avoiding silence.
Literature
Authors who have written extensively on conversation and attempted to analyze its nature include:
Milton Wright wrote The Art of Conversation, a comprehensive treatment of the subject, in 1936. The book deals with conversation both for its own sake, and for political, sales, or religious ends. Milton portrays conversation as an art or creation that people can play with and give life to.
Kerry Patterson, Joseph Grenny, Al Switzler, and Ron McMillan have written two New York Times bestselling books on conversation. The first one, Crucial Conversations: Tools for Talking When Stakes are High, McGraw-Hill, 2002, teaches skills for handling disagreement and high-stakes issues at work and at home. The second book, Crucial Accountability: Tools for Resolving Violated Expectations, Broken Commitments, and Bad Behavior, McGraw-Hill, 2013, teaches important skills for dealing with accountability issues.
Difficult Conversations: How to Discuss What Matters Most (Viking Penguin, 1999), a book by Bruce Patton, Douglas Patterson and Sheila Heen was one of the work products from the Harvard Negotiation Project. This book built on, and extended the approach developed by Roger Fisher and William Ury in Getting To Yes: Negotiating Agreement Without Giving In (Houghton Mifflin, 1981). The book introduced useful concepts such as the Three Conversations (The 'What Happened' Conversation, The Feelings Conversation, and The Identity Conversation), Creating a Learning Conversation, and Collaborative Problem Solving.
Charles Blattberg has written two books defending an approach to politics that emphasizes conversation, in contrast to negotiation, as the preferred means of resolving conflict. His From Pluralist to Patriotic Politics: Putting Practice First, Oxford and New York: Oxford University Press, 2000, , is a work of political philosophy; and his Shall We Dance? A Patriotic Politics for Canada, Montreal and Kingston: McGill-Queen's University Press, 2003, , applies that philosophy to the Canadian case.
Paul Drew & John Heritage – Talk at Work, a study of how conversation changes in social and workplace situations.
Neil Postman – Amusing Ourselves to Death (Conversation is not the book's specific focus, but discourse in general gets good treatment here)
Deborah Tannen
The Argument Culture: Stopping America's War of Words
Conversational Style: Analyzing Talk Among Friends
Gender and Discourse
I Only Say This Because I Love You
Talking from 9 to 5: Women and Men at Work
That's Not What I Meant!
You Just Don't Understand: Women and Men in Conversation
Daniel Menaker – A Good Talk: The Story and Skill of Conversation (published 2010)
In fiction
Conversation in The Cathedral (1969) is one of the main novels by the Peruvian writer Mario Vargas Llosa.
See also
A Complete Collection of Genteel and Ingenious Conversation (book)
Aizuchi
Awkward silence
Bohm Dialogue
Compulsive talking
Dialectic
Conversation theory
Conversational narcissism
Conversational scoreboard
"Conversation" Sharp MP – doyen of the Georgian period conversationalists
Conversazione – a social gathering for conversation and discussion, especially about the arts, literature and science.
Debate
Dialogue
Discourse
King of Clubs – famous Whig conversation club
Online chat
Speech (public address)
References
Works cited
External links
Empathic listening skills How to listen so others will feel heard, or listening first aid (University of California). Download a one-hour seminar on empathic listening and attending skills.
"The art of conversation", Economist, 19 December 2006
Oral communication
Interpersonal communication
Human activities | Conversation | Biology | 2,497 |
4,668,395 | https://en.wikipedia.org/wiki/Sequential%20space | In topology and related fields of mathematics, a sequential space is a topological space whose topology can be completely characterized by its convergent/divergent sequences. They can be thought of as spaces that satisfy a very weak axiom of countability, and all first-countable spaces (notably metric spaces) are sequential.
In any topological space, if a convergent sequence is contained in a closed set C, then the limit of that sequence must be contained in C as well. Sets with this property are known as sequentially closed. Sequential spaces are precisely those topological spaces for which sequentially closed sets are in fact closed. (These definitions can also be rephrased in terms of sequentially open sets; see below.) Said differently, any topology can be described in terms of nets (also known as Moore–Smith sequences), but those sequences may be "too long" (indexed by too large an ordinal) to compress into a sequence. Sequential spaces are those topological spaces for which nets of countable length (i.e., sequences) suffice to describe the topology.
Any topology can be refined (that is, made finer) to a sequential topology, called the sequential coreflection of
The related concepts of Fréchet–Urysohn spaces, T-sequential spaces, and N-sequential spaces are also defined in terms of how a space's topology interacts with sequences, but have subtly different properties.
Sequential spaces and -sequential spaces were introduced by S. P. Franklin.
History
Although spaces satisfying such properties had implicitly been studied for several years, the first formal definition is due to S. P. Franklin in 1965. Franklin wanted to determine "the classes of topological spaces that can be specified completely by the knowledge of their convergent sequences", and began by investigating the first-countable spaces, for which it was already known that sequences sufficed. Franklin then arrived at the modern definition by abstracting the necessary properties of first-countable spaces.
Preliminary definitions
Let X be a set and let (xi) be a sequence in X; that is, a family of elements of X indexed by the natural numbers. A sequence is said to be contained in a subset S if each element of the sequence is an element of S and, if f is a map, then f applied to the sequence means the sequence of images (f(xi)). For any index i, the tail of the sequence starting at i is the sequence (xi, xi+1, xi+2, ...). A sequence is eventually in S if some tail of it is contained in S.
Let τ be a topology on X and let (xi) be a sequence therein. The sequence converges to a point x, written (xi) → x (when context allows, the topology τ may also be indicated), if, for every neighborhood U of x, the sequence is eventually in U; x is then called a limit point of the sequence.
A function f between topological spaces is sequentially continuous if (xi) → x implies f(xi) → f(x).
Sequential closure/interior
Let (X, τ) be a topological space and let A be a subset. The topological closure (resp. topological interior) of A in X is denoted by cl(A) (resp. int(A)), with a subscript indicating the space where needed.
The sequential closure of A in (X, τ) is the set of all limits in X of sequences in A, which defines a map, the sequential closure operator, on the power set of X. If necessary for clarity, this set may also be written scl(A), with a subscript indicating the space when needed. It is always contained in the topological closure cl(A), but the reverse containment may fail.
The sequential interior of A in X, written sint(A), is the set of all points to which no sequence in the complement of A converges (the topological space again indicated with a subscript if necessary).
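In symbols (a sketch using assumed notation, with X the space, A a subset of X, scl and sint the sequential closure and interior, and cl the topological closure; the article's own symbols are not reproduced here), the two definitions read:

\[ \operatorname{scl}(A) = \{\, x \in X : \text{there is a sequence } (a_i) \text{ in } A \text{ with } a_i \to x \,\}, \qquad \operatorname{sint}(A) = X \setminus \operatorname{scl}(X \setminus A). \]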
Sequential closure and interior satisfy many of the nice properties of topological closure and interior: for all subsets
and ;
and ;
;
; and
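The individual identities above are presumably the standard preclosure laws; in the notation introduced above (a reconstruction, not a quotation), for all subsets A and B of X:

\[ \operatorname{scl}(\varnothing) = \varnothing, \qquad A \subseteq \operatorname{scl}(A), \qquad \operatorname{scl}(A \cup B) = \operatorname{scl}(A) \cup \operatorname{scl}(B), \qquad \operatorname{scl}(A) \subseteq \operatorname{scl}(\operatorname{scl}(A)), \]

together with the dual statements for the sequential interior.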
That is, sequential closure is a preclosure operator. Unlike topological closure, sequential closure is not idempotent: the last containment may be strict. Thus sequential closure is not a (Kuratowski) closure operator.
Sequentially closed and open sets
A set A is sequentially closed if A = scl(A); equivalently, for all x in X and all sequences in A converging to x, we must have x in A.
A set is defined to be sequentially open if its complement is sequentially closed. Equivalent conditions include:
or
For all x in S and all sequences converging to x, the sequence is eventually in S (that is, there exists some integer i such that the tail of the sequence starting at i is contained in S).
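Restated in the assumed notation (a sketch, since the article's own symbols were not preserved), the two notions are:

\[ A \text{ is sequentially closed} \iff \big( a_i \in A,\ a_i \to x \implies x \in A \big), \qquad S \text{ is sequentially open} \iff \big( x_i \to x \in S \implies (x_i) \text{ is eventually in } S \big). \]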
A set is a sequential neighborhood of a point if it contains that point in its sequential interior; sequential neighborhoods need not be sequentially open (see below).
It is possible for a subset of a topological space to be sequentially open but not open. Similarly, it is possible for there to exist a sequentially closed subset that is not closed.
Sequential spaces and coreflection
As discussed above, sequential closure is not in general idempotent, and so not the closure operator of a topology. One can obtain an idempotent sequential closure via transfinite iteration: at a successor ordinal one takes (as usual) the sequential closure of the previous stage, and at a limit ordinal one takes the union of all earlier stages. This process gives an ordinal-indexed increasing sequence of sets; as it turns out, that sequence always stabilizes by index ω1 (the first uncountable ordinal). Conversely, the sequential order of the space is the minimal ordinal at which, for any choice of starting set, the above sequence will stabilize.
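In standard notation (writing [A]α for the α-th stage of the iteration applied to a subset A; a reconstruction rather than the article's own display), the recursion just described is:

\[ [A]^{0} = A, \qquad [A]^{\alpha + 1} = \operatorname{scl}\big([A]^{\alpha}\big), \qquad [A]^{\lambda} = \bigcup_{\alpha < \lambda} [A]^{\alpha} \ \text{for limit ordinals } \lambda. \]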
The transfinite sequential closure of A is the terminal set in the above sequence. The resulting operator is idempotent and thus a closure operator. In particular, it defines a topology, the sequential coreflection. In the sequential coreflection, every sequentially-closed set is closed (and every sequentially-open set is open).
Sequential spaces
A topological space is sequential if it satisfies any of the following equivalent conditions:
is its own sequential coreflection.
Every sequentially open subset of is open.
Every sequentially closed subset of is closed.
For any subset S that is not closed in X, there exist a point x in the closure of S but not in S and a sequence in S that converges to x.
(Universal Property) For every topological space Y, a map f from X to Y is continuous if and only if it is sequentially continuous (if (xi) → x then f(xi) → f(x)).
is the quotient of a first-countable space.
is the quotient of a metric space.
By taking Y to be X itself and f to be the identity map on X in the universal property, it follows that the class of sequential spaces consists precisely of those spaces whose topological structure is determined by convergent sequences. If two topologies agree on convergent sequences, then they necessarily have the same sequential coreflection. Moreover, a function from X is sequentially continuous if and only if it is continuous on the sequential coreflection (that is, when pre-composed with the identity map from the sequential coreflection to X).
T- and N-sequential spaces
A T-sequential space is a topological space with sequential order 1, which is equivalent to any of the following conditions: The sequential closure (or interior) of every subset of X is sequentially closed (resp. open).
or are idempotent.
or
Any sequential neighborhood of a point can be shrunk to a sequentially-open set that contains that point; formally, sequentially-open neighborhoods are a neighborhood basis for the sequential neighborhoods.
For any and any sequential neighborhood of there exists a sequential neighborhood of such that, for every the set is a sequential neighborhood of
Being a T-sequential space is incomparable with being a sequential space; there are sequential spaces that are not T-sequential and vice versa. However, a topological space is called N-sequential (or neighborhood-sequential) if it is both sequential and T-sequential. An equivalent condition is that every sequential neighborhood contains an open (classical) neighborhood.
Every first-countable space (and thus every metrizable space) is N-sequential. There exist topological vector spaces that are sequential but not T-sequential (and thus not N-sequential).
Fréchet–Urysohn spaces
A topological space is called Fréchet–Urysohn if it satisfies any of the following equivalent conditions: It is hereditarily sequential; that is, every topological subspace is sequential.
For every subset
For any subset S that is not closed in X and every point x in the closure of S but not in S, there exists a sequence in S that converges to x.
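The second condition in the list above refers to an identity of closure operators; in the assumed notation it is presumably

\[ \operatorname{cl}(A) = \operatorname{scl}(A) \quad \text{for every subset } A \subseteq X, \]

that is, every point of the closure of A is the limit of a sequence in A.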
Fréchet–Urysohn spaces are also sometimes said to be "Fréchet", but they should not be confused with either Fréchet spaces in functional analysis or the T1 condition.
Examples and sufficient conditions
Every CW-complex is sequential, as it can be considered as a quotient of a metric space.
The prime spectrum of a commutative Noetherian ring with the Zariski topology is sequential.
Take the real line and identify the set of integers to a point. As a quotient of a metric space, the result is sequential, but it is not first countable.
Every first-countable space is Fréchet–Urysohn and every Fréchet-Urysohn space is sequential. Thus every metrizable or pseudometrizable space — in particular, every second-countable space, metric space, or discrete space — is sequential.
Let F be a set of maps from Fréchet–Urysohn spaces to a set X. Then the final topology that F induces on X is sequential.
A Hausdorff topological vector space is sequential if and only if there exists no strictly finer topology with the same convergent sequences.
Spaces that are sequential but not Fréchet-Urysohn
Schwartz space and the space of smooth functions, as discussed in the article on distributions, are both widely-used sequential spaces.
More generally, every infinite-dimensional Montel DF-space is sequential but not Fréchet–Urysohn.
Arens' space is sequential, but not Fréchet–Urysohn.
Non-examples (spaces that are not sequential)
The simplest space that is not sequential is the cocountable topology on an uncountable set. Every convergent sequence in such a space is eventually constant: if a sequence converges to a point x, then the set obtained by removing from the space the (at most countably many) terms different from x is a cocountable, hence open, neighborhood of x, so the sequence must eventually equal x. Hence every set is sequentially open. But the cocountable topology is not discrete. (One could call the topology "sequentially discrete".)
Let denote the space of -smooth test functions with its canonical topology and let denote the space of distributions, the strong dual space of ; neither are sequential (nor even an Ascoli space). On the other hand, both and are Montel spaces and, in the dual space of any Montel space, a sequence of continuous linear functionals converges in the strong dual topology if and only if it converges in the weak* topology (that is, converges pointwise).
Consequences
Every sequential space has countable tightness and is compactly generated.
If is a continuous open surjection between two Hausdorff sequential spaces then the set of points with unique preimage is closed. (By continuity, so is its preimage in the set of all points on which is injective.)
If is a surjective map (not necessarily continuous) onto a Hausdorff sequential space and bases for the topology on then is an open map if and only if, for every basic neighborhood of and sequence in there is a subsequence of that is eventually in
Categorical properties
The full subcategory Seq of all sequential spaces is closed in the category Top of topological spaces under quotients, topological sums, and open and closed subspaces; it is not closed under arbitrary subspaces or finite products.
Since they are closed under topological sums and quotients, the sequential spaces form a coreflective subcategory of the category of topological spaces. In fact, they are the coreflective hull of metrizable spaces (that is, the smallest class of topological spaces closed under sums and quotients and containing the metrizable spaces).
The subcategory Seq is a Cartesian closed category with respect to its own product (not that of Top). The exponential objects are equipped with the (convergent sequence)-open topology.
P.I. Booth and A. Tillotson have shown that Seq is the smallest Cartesian closed subcategory of Top containing the underlying topological spaces of all metric spaces, CW-complexes, and differentiable manifolds and that is closed under colimits, quotients, and other "certain reasonable identities" that Norman Steenrod described as "convenient".
Every sequential space is compactly generated, and finite products in Seq coincide with those for compactly generated spaces, since products in the category of compactly generated spaces preserve quotients of metric spaces.
See also
Notes
Citations
References
Arkhangel'skii, A.V. and Pontryagin, L.S., General Topology I, Springer-Verlag, New York (1990) .
Engelking, R., General Topology, Heldermann, Berlin (1989). Revised and completed edition.
Goreham, Anthony, "Sequential Convergence in Topological Spaces", (2016)
General topology
Properties of topological spaces | Sequential space | Mathematics | 2,525 |
3,140,754 | https://en.wikipedia.org/wiki/Intermedia%20%28hypertext%29 | Intermedia was the third notable hypertext project to emerge from Brown University, after HES (1967) and FRESS (1969). Intermedia was started in 1985 by Norman Meyrowitz, who had been associated with earlier hypertext research at Brown. The Intermedia project coincided with the establishment of the Institute for Research in Information and Scholarship (IRIS). Some of the materials that came from Intermedia, authored by Meyrowitz, Nancy Garrett, and Karen Catlin, were used in the development of HTML.
Intermedia ran on A/UX version 1.1. Intermedia was programmed using an object-oriented toolkit and standard DBMS functions. Intermedia supported bi-directional, dual-anchor links for both text and graphics. Small icons are used as anchor markers. Intermedia properties include author, creation date, title, and keywords. Link information is stored by the system apart from the source text. More than one such set of data can be kept, which allows each user to have their own "web" of information. Intermedia has complete multi-user support, with three levels of access rights: read, write, and annotate, which is similar to Unix permissions.
As promising as Intermedia was, it used a lot of resources for its time (it required 4 MB of RAM and 80 MB of hard drive space in 1989). It was also highly tied to A/UX, a less popular Unix-like operating system that ran on Apple Macintosh computers; thus, it was not very portable. In 1991, changes in A/UX and lack of funding ended the Intermedia project.
See also
Xanadu
Microcosm
Hyper-G (or HyperWave)
References
L. Nancy Garrett and Karen E. Smith. "Building a Timeline Editor from Prefab Parts: The Architecture of an Object-Oriented Application". ACM Proceedings of OOPSLA ’86 (September 1986).
L. Nancy Garrett, Norman Meyrowitz, and Karen E. Smith. "Intermedia: Issues, Strategies, and Tactics in the Design of a Hypermedia System". ACM Proceedings of the Conference on Computer-Supported Cooperative Work (December 1986).
Nicole Yankelovich, Karen E. Smith, L. Nancy Garrett and Norman Meyrowitz. "Issues in Designing a Hypermedia Document System: The Intermedia Case Study" in Learning Tomorrow: Journal of the Apple Education Advisory Council, n3 p35-87 Spring 1987.
Karen E. Smith and Stanley B. Zdonik. "Intermedia: A case study of the differences between relational and object-oriented database systems". ACM SIGPLAN Notices, Volume 22, Issue 12 (December 1987) Pages: 452 - 465.
L. Nancy Garrett, Julie Launhardt, and Karen Smith Catlin. "Hypermedia Templates: An Author’s Tool". ACM Proceedings of Hypertext ‘91 (December 1991).
Paul Kahn. "Linking Together Books: Experiments in Adapting Published Materials into Intermedia Documents " in Hypermedia and Literary Studies, Paul Delany and George P. Landow (editors). The MIT Press (March 19, 1994)
The History of Hypertext by Jacob Nielsen (February 1, 1995), https://www.nngroup.com/articles/hypertext-history/ (accessed 1/31/2017)
Brown University Department of Computer Science. (May 23, 2019). A Half-Century of Hypertext at Brown: A Symposium.
Hypertext
Hypermedia
Computer-related introductions in 1985
External links
Video of Norman Meyrowitz demonstrating Intermedia at ACM HUMAN’20 conference, Dec 2020 | Intermedia (hypertext) | Technology | 758 |
5,211,259 | https://en.wikipedia.org/wiki/Ethionamide | Ethionamide is an antibiotic used to treat tuberculosis. Specifically it is used, along with other antituberculosis medications, to treat active multidrug-resistant tuberculosis. It is no longer recommended for leprosy. It is taken by mouth.
Ethionamide has a high rate of side effects. Common side effects include nausea, diarrhea, abdominal pain, and loss of appetite. Serious side effects may include liver inflammation and depression. It should not be used in people with significant liver problems. Use in pregnancy is not recommended as safety is unclear. Ethionamide is in the thioamides family of medications. It is believed to work by interfering with the use of mycolic acid.
Ethionamide was discovered in 1956 and approved for medical use in the United States in 1965. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Ethionamide is used in combination with other antituberculosis agents as part of a second-line regimen for active tuberculosis.
Ethionamide is well absorbed orally with or without food, but is often administered with food to improve tolerance. It crosses the blood brain barrier to achieve concentrations in the cerebral-spinal fluid equivalent to plasma.
The antimicrobial spectrum of ethionamide includes M. tuberculosis, M. bovis and M. smegmatis. It also is used rarely against infections with M. leprae and other nontuberculous mycobacteria such as M. avium and M. kansasii. While working in a similar manner to isoniazid, cross resistance is only seen in 13% of strains, since they are both prodrugs but activated by different pathways. Resistance can emerge from mutations in ethA, which is needed to activate the drug, or ethR, which can be overexpressed to repress ethA. Mutations in inhA or the promoter of inhA can also lead to resistance through changing the binding site or overexpression.
The FDA has placed it in pregnancy category C, because it has caused birth defects in animal studies. It is not known whether ethionamide is excreted into breast milk.
Adverse effects
Ethionamide frequently causes gastrointestinal distress with nausea and vomiting which can lead patients to stop taking it. This can sometimes be improved by taking it with food.
Ethionamide can cause hepatocellular toxicity and is contraindicated in patients with severe liver impairment. Patients on ethionamide should have regular monitoring of their liver function tests. Liver toxicity occurs in up to 5% of patients and follows a pattern similar to isoniazid, usually arising in the first 1 to 3 months of therapy, but can occur even after more than 6 months of therapy. The pattern of liver function test derangement is often a rise in the ALT and AST.
Both central neurological side effects such as psychiatric disturbances and encephalopathy, along with peripheral neuropathy have been reported. Administering pyridoxine along with ethionamide may reduce these effects and is recommended.
Ethionamide is structurally similar to methimazole, which is used to inhibit thyroid hormone synthesis, and has been linked to hypothyroidism in several TB patients. Periodic monitoring of thyroid function while on ethionamide is recommended.
Interactions
Ethionamide may worsen the adverse effects of other antituberculous drugs being taken at the same time. It boosts levels of isoniazid when taken together and can lead to increased rates of peripheral neuropathy and hepatotoxicity. When taken with cycloserine, seizures have been reported. High rates of hepatotoxicity have been reported when taken with rifampicin. The drug's labeling cautions against excessive alcohol ingestion as it may provoke a psychotic reaction.
Mechanism of action
Ethionamide is a prodrug which is activated by the enzyme ethA, a mono-oxygenase in Mycobacterium tuberculosis, and then binds NAD+ to form an adduct which inhibits InhA in the same way as isoniazid. The mechanism of action is thought to be through disruption of mycolic acid.
Expression of the ethA gene is controlled by ethR, a transcriptional repressor. It is thought that improving ethA expression will increase the efficacy of ethionamide and prompting interest by drug developers in EthR inhibitors as a co-drug.
Other names
1314
2-ethylisothionicotinamide
amidazine
thioamide
iridocin
It is sold under the brand name Trecator by Wyeth Pharmaceuticals which was purchased by Pfizer in 2009.
References
External links
Ethionamide (PIM 224)
Pyridines
World Health Organization essential medicines
Thioamides
Wikipedia medicine articles ready to translate
Anti-tuberculosis drugs
Antileprotic drugs | Ethionamide | Chemistry | 1,027 |
63,799 | https://en.wikipedia.org/wiki/Immunoperoxidase | Immunoperoxidase is a type of immunostain used in molecular biology, medical research, and clinical diagnostics. In particular, immunoperoxidase reactions refer to a sub-class of immunohistochemical or immunocytochemical procedures in which the antibodies are visualized via a peroxidase-catalyzed reaction.
Immunohistochemistry and immunocytochemistry are methods used to determine in which cells, or parts of cells, a particular protein or other macromolecule is located. These stains use antibodies to bind to specific antigens, usually of protein or glycoprotein origin. Since antibodies are normally invisible, special strategies must be employed to detect these bound antibodies. In an immunoperoxidase procedure, an enzyme known as a peroxidase is used to catalyze a chemical reaction to produce a coloured product.
Simply, a very thin slice of tissue is fixed onto glass, incubated with antibody or a series of antibodies, the last of which is chemically linked to peroxidase. After developing the stain by adding the chemical substrate, the distribution of the stain can be examined by microscopy.
Types of antibodies
Originally all antibodies produced for immunostaining were polyclonal, i.e. raised by normal antibody reactions in animals such as horses or rabbits. Now, many are monoclonal, i.e. produced in tissue culture. Monoclonal antibodies that consist of only one type of antibody tend to provide greater antigen specificity, and also tend to be more consistent between batches.
Methods for immunoperoxidase staining
The first step in immunoperoxidase staining is the binding of the specific (primary) antibody to the cell or tissue sample. The detection of the primary antibody can be then accomplished directly (example 1) or indirectly (examples 2 & 3).
Example 1. The primary antibody can be directly tagged with the enzyme peroxidase which is then used to catalyse a chemical reaction to generate a coloured product.
Example 2. The primary antibody can be tagged with a small molecule that can be recognised by a peroxidase-conjugated binding molecule with high affinity. The most common example of this is a biotin linked primary antibody that binds to an enzyme-bound streptavidin. This method can be used to amplify the signal.
Example 3. An untagged primary antibody is detected using a general secondary antibody that recognises all antibodies originating from same animal species as the primary. The secondary antibody is tagged with peroxidase.
Optimal staining depends on a number of factors including the antibody dilution, the staining chemicals, the preparation and/or fixation of the cells/tissue, and length of incubation with antibody/staining reagents. These are often determined by trial and error rather than any sort of systematic approach.
Alternatives to peroxidase stains
Other catalytic enzymes such as alkaline phosphatase can be used instead of peroxidases for both direct and indirect staining methods. Alternatively, the primary antibody can be detected using fluorescent label (immunofluorescence), or be attached to colloidal gold particles for electron microscopy.
Uses of immunoperoxidase staining
Immunoperoxidase staining is used in clinical diagnostics and in laboratory research.
In clinical diagnostics, immunostaining can be used on tissue biopsies for more detailed histopathological study. In the case of cancer, it can aid in sub-classifying tumours. Immunostaining can also be used to help diagnose skin conditions, glomerulonephritis and to sub classify amyloid deposits. Related techniques are also useful in sub-typing lymphocytes which all look quite similar on light microscopy.
In laboratory research, antibodies against specific markers of cellular differentiation can be used to label individual cell types. This can enable a better understanding of mechanistic changes to specific cell lineages resulting from a particular experimental intervention.
See also
Indirect immunoperoxidase assay
External links
Immunohistochemistry Protocols, Buffers and Troubleshooting
Laboratory techniques | Immunoperoxidase | Chemistry | 873 |
10,116 | https://en.wikipedia.org/wiki/Endocytosis | Endocytosis is a cellular process in which substances are brought into the cell. The material to be internalized is surrounded by an area of cell membrane, which then buds off inside the cell to form a vesicle containing the ingested materials. Endocytosis includes pinocytosis (cell drinking) and phagocytosis (cell eating). It is a form of active transport.
History
The term was proposed by De Duve in 1963. Phagocytosis was discovered by Élie Metchnikoff in 1882.
Pathways
Endocytosis pathways can be subdivided into four categories: namely, receptor-mediated endocytosis (also known as clathrin-mediated endocytosis), caveolae, pinocytosis, and phagocytosis.
Clathrin-mediated endocytosis is mediated by the production of small (approx. 100 nm in diameter) vesicles that have a morphologically characteristic coat made up of the cytosolic protein clathrin. Clathrin-coated vesicles (CCVs) are found in virtually all cells and form domains of the plasma membrane termed clathrin-coated pits. Coated pits can concentrate large extracellular molecules that have different receptors responsible for the receptor-mediated endocytosis of ligands, e.g. low density lipoprotein, transferrin, growth factors, antibodies and many others.
Study in mammalian cells confirm a reduction in clathrin coat size in an increased tension environment. In addition, it suggests that the two apparently distinct clathrin assembly modes, namely coated pits and coated plaques, observed in experimental investigations might be a consequence of varied tensions in the plasma membrane.
Caveolae are the most commonly reported non-clathrin-coated plasma membrane buds, which exist on the surface of many, but not all cell types. They consist of the cholesterol-binding protein caveolin (Vip21) with a bilayer enriched in cholesterol and glycolipids. Caveolae are small (approx. 50 nm in diameter) flask-shape pits in the membrane that resemble the shape of a cave (hence the name caveolae). They can constitute up to a third of the plasma membrane area of the cells of some tissues, being especially abundant in smooth muscle, type I pneumocytes, fibroblasts, adipocytes, and endothelial cells. Uptake of extracellular molecules is also believed to be specifically mediated via receptors in caveolae.
Potocytosis is a form of receptor-mediated endocytosis that uses caveolae vesicles to bring molecules of various sizes into the cell. Unlike most endocytosis that uses caveolae to deliver contents of vesicles to lysosomes or other organelles, material endocytosed via potocytosis is released into the cytosol.
Pinocytosis, which usually occurs from highly ruffled regions of the plasma membrane, is the invagination of the cell membrane to form a pocket, which then pinches off into the cell to form a vesicle (0.5–5 μm in diameter) filled with a large volume of extracellular fluid and molecules within it (equivalent to ~100 CCVs). The filling of the pocket occurs in a non-specific manner. The vesicle then travels into the cytosol and fuses with other vesicles such as endosomes and lysosomes.
Phagocytosis is the process by which cells bind and internalize particulate matter larger than around 0.75 μm in diameter, such as small-sized dust particles, cell debris, microorganisms and apoptotic cells. These processes involve the uptake of larger membrane areas than clathrin-mediated endocytosis and caveolae pathway.
More recent experiments have suggested that these morphological descriptions of endocytic events may be inadequate, and a more appropriate method of classification may be based upon whether particular pathways are dependent on clathrin and dynamin.
Dynamin-dependent clathrin-independent pathways include FEME, UFE, ADBE, EGFR-NCE and IL2Rβ uptake.
Dynamin-independent clathrin-independent pathways include the CLIC/GEEC pathway (regulated by Graf1), as well as MEND and macropinocytosis.
Clathrin-mediated endocytosis is the only pathway dependent on both clathrin and dynamin.
Principal components
The endocytic pathway of mammalian cells consists of distinct membrane compartments, which internalize molecules from the plasma membrane and recycle them back to the surface (as in early endosomes and recycling endosomes), or sort them to degradation (as in late endosomes and lysosomes). The principal components of the endocytic pathway are:
Early endosomes are the first compartment of the endocytic pathway. Early endosomes are often located in the periphery of the cell, and receive most types of vesicles coming from the cell surface. They have a characteristic tubulo-vesicular structure (vesicles up to 1 μm in diameter with connected tubules of approx. 50 nm diameter) and a mildly acidic pH. They are principally sorting organelles where many endocytosed ligands dissociate from their receptors in the acid pH of the compartment, and from which many of the receptors recycle to the cell surface (via tubules). It is also the site of sorting into transcytotic pathway to later compartments (like late endosomes or lysosomes) via transvesicular compartments (like multivesicular bodies (MVB) or endosomal carrier vesicles (ECVs)).
Late endosomes receive endocytosed material en route to lysosomes, usually from early endosomes in the endocytic pathway, from the trans-Golgi network (TGN) in the biosynthetic pathway, and from phagosomes in the phagocytic pathway. Late endosomes often contain proteins characteristic of lysosomes, including lysosomal membrane glycoproteins and acid hydrolases. They are acidic (approx. pH 5.5), and are part of the trafficking pathway of mannose-6-phosphate receptors. Late endosomes are thought to mediate a final set of sorting events prior to the delivery of material to lysosomes.
Lysosomes are the last compartment of the endocytic pathway. Their chief function is to break down cellular waste products, fats, carbohydrates, proteins, and other macromolecules into simple compounds. These are then returned to the cytoplasm as new cell-building materials. To accomplish this, lysosomes use some 40 different types of hydrolytic enzymes, all of which are manufactured in the endoplasmic reticulum, modified in the Golgi apparatus and function in an acidic environment. The approximate pH of a lysosome is 4.8 and by electron microscopy (EM) usually appear as large vacuoles (1-2 μm in diameter) containing electron dense material. They have a high content of lysosomal membrane proteins and active lysosomal hydrolases, but no mannose-6-phosphate receptor. They are generally regarded as the principal hydrolytic compartment of the cell.
It was recently found that an eisosome serves as a portal of endocytosis in yeast.
Clathrin-mediated
The major route for endocytosis in most cells, and the best-understood, is that mediated by the molecule clathrin. This large protein assists in the formation of a coated pit on the inner surface of the plasma membrane of the cell. This pit then buds into the cell to form a coated vesicle in the cytoplasm of the cell. In so doing, it brings into the cell not only a small area of the surface of the cell but also a small volume of fluid from outside the cell.
Coats function to deform the donor membrane to produce a vesicle, and they also function in the selection of the vesicle cargo. Coat complexes that have been well characterized so far include coat protein-I (COP-I), COP-II, and clathrin. Clathrin coats are involved in two crucial transport steps: (i) receptor-mediated and fluid-phase endocytosis from the plasma membrane to early endosome and (ii) transport from the TGN to endosomes. In endocytosis, the clathrin coat is assembled on the cytoplasmic face of the plasma membrane, forming pits that invaginate to pinch off (scission) and become free CCVs. In cultured cells, the assembly of a CCV takes ~ 1min, and several hundred to a thousand or more can form every minute. The main scaffold component of clathrin coat is the 190-kD protein called clathrin heavy chain (CHC), which is associated with a 25- kD protein called clathrin light chain (CLC), forming three-legged trimers called triskelions.
Vesicles selectively concentrate and exclude certain proteins during formation and are not representative of the membrane as a whole. AP2 adaptors are multisubunit complexes that perform this function at the plasma membrane. The best-understood receptors that are found concentrated in coated vesicles of mammalian cells are the LDL receptor (which removes LDL from circulating blood), the transferrin receptor (which brings ferric ions bound by transferrin into the cell) and certain hormone receptors (such as that for EGF).
At any one moment, about 2% of the plasma membrane of a fibroblast is made up of coated pits. As a coated pit has a life of about a minute before it buds into the cell, a fibroblast takes up the equivalent of its entire surface by this route about once every 50 minutes (roughly 2% per minute). Coated vesicles formed from the plasma membrane have a diameter of about 100 nm and a lifetime measured in a few seconds. Once the coat has been shed, the remaining vesicle fuses with endosomes and proceeds down the endocytic pathway. The actual budding-in process, whereby a pit is converted to a vesicle, is carried out by clathrin, assisted by a set of cytoplasmic proteins which includes dynamin and adaptors such as adaptin.
Coated pits and vesicles were first seen in thin sections of tissue in the electron microscope by Thomas F Roth and Keith R. Porter. The importance of them for the clearance of LDL from blood was discovered by Richard G. Anderson, Michael S. Brown and Joseph L. Goldstein in 1977. Coated vesicles were first purified by Barbara Pearse, who discovered the clathrin coat molecule in 1976.
Processes and components
Caveolin proteins like caveolin-1 (CAV1), caveolin-2 (CAV2), and caveolin-3 (CAV3), play significant roles in the caveolar formation process. More specifically, CAV1 and CAV2 are responsible for caveolae formation in non-muscle cells while CAV3 functions in muscle cells. The process starts with CAV1 being synthesized in the ER where it forms detergent-resistant oligomers. Then, these oligomers travel through the Golgi complex before arriving at the cell surface to aid in caveolar formation. Caveolae formation is also reversible through disassembly under certain conditions such as increased plasma membrane tension. These certain conditions then depend on the type of tissues that are expressing the caveolar function. For example, not all tissues that have caveolar proteins have a caveolar structure i.e. the blood-brain barrier.
Though there are many morphological features conserved among caveolae, the functions of each CAV protein are diverse. One common feature among caveolins is their hydrophobic stretches of potential hairpin structures that are made of α-helices. The insertion of these hairpin-like α-helices forms a caveolae coat which leads to membrane curvature. In addition to insertion, caveolins are also capable of oligomerization which further plays a role in membrane curvature. Recent studies have also discovered that polymerase I, transcript release factor, and serum deprivation protein response also play a role in the assembly of caveolae. Besides caveolae assembly, researchers have also discovered that CAV1 proteins can also influence other endocytic pathways. When CAV1 binds to Cdc42, CAV1 inactivates it and regulates Cdc42 activity during membrane trafficking events.
Mechanisms
The process of cell uptake depends on the tilt and chirality of constituent molecules to induce membrane budding. Since such chiral and tilted lipid molecules are likely to be in a "raft" form, researchers suggest that caveolae formation also follows this mechanism since caveolae are also enriched in raft constituents. When caveolin proteins bind to the inner leaflet via cholesterol, the membrane starts to bend, leading to spontaneous curvature. This effect is due to the force distribution generated when the caveolin oligomer binds to the membrane. The force distribution then alters the tension of the membrane which leads to budding and eventually vesicle formation.
Gallery
See also
References
Further reading
External links
Endocytosis at biologyreference.com
Endocytosis - researching endocytic mechanisms at endocytosis.org
Clathrin-mediated endocytosis ASCB Image & Video Library
Types of Endocytosis (Animation)
Cellular processes
Membrane biology
Cell anatomy | Endocytosis | Chemistry,Biology | 2,865 |
27,230,828 | https://en.wikipedia.org/wiki/Funky%20caching | Funky caching is the generation, display and storage of dynamic content when a requested static web page resource isn't available.
The name is based on the idea of treating the web server, serving static pages, as a cache. However, unlike common reverse caches, the funky cache is part of the web server software, and has the ability to dynamically generate this content.
It assumes that all pages are potentially generatable on-demand. If they are not, the conventional HTTP 404 error is returned, as usual.
The overall advantage is relatively small, compared to a conventional cache. Architecturally it is also a poor design. However it does allow small sites with no separate cache layer to achieve some of the advantages of caching (albeit a little inflexibly). This is why it became popular at one time for small, single-server dynamic web sites, particularly those built within the PHP community, where the technique originated.
A drawback to the technique is that it requires the web server process to have write access to the web content space. For security reasons, this is not usually required or permitted.
Origin
It is also known as the ErrorDocument trick, Smarter Caching and Rasmus' Trick, the latter name in honor of Rasmus Lerdorf, creator of the PHP programming language, who was allegedly the first to present this mechanism (though it is also attributed to Stig Bakken).
One common usage is the replacement of the HTTP 404 ErrorDocument with a dynamic script.
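A minimal sketch of the mechanism in Python (an illustration only: the original trick used PHP behind Apache's ErrorDocument directive, and every name here, such as WEB_ROOT and generate_page, is hypothetical). On a miss, the handler generates the page, writes it into the web root so that later requests can be served as ordinary static files, and returns it for the current request.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

WEB_ROOT = "webroot"  # hypothetical document root acting as the "cache"

def generate_page(path: str) -> str:
    # Stand-in for the dynamic generation step (database lookup, templating, ...).
    return "<html><body><h1>Generated page for %s</h1></body></html>" % path

class FunkyCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Map the URL path to a file inside the web root (no sanitisation; not production code).
        relative = self.path.lstrip("/") or "index.html"
        local = os.path.join(WEB_ROOT, relative)
        if not os.path.exists(local):
            # The "404 handler": build the page dynamically and store it as a
            # static file, so the web server acts as its own cache next time.
            os.makedirs(os.path.dirname(local) or ".", exist_ok=True)
            with open(local, "w", encoding="utf-8") as f:
                f.write(generate_page(self.path))
        with open(local, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    os.makedirs(WEB_ROOT, exist_ok=True)
    HTTPServer(("localhost", 8080), FunkyCacheHandler).serve_forever()

As noted above, this requires the server process to have write access to the web content directory, which is the main drawback of the technique.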
Another way to look at it is as a variation of the cache-aside pattern where, instead of reading the data from the data store, it is generated dynamically, and where the implementation spans an architecture (in this case the Web server and the Web app language) instead of being implemented in a single system.
References
Cache (computing) | Funky caching | Technology | 378 |
78,220,898 | https://en.wikipedia.org/wiki/4-Hydroxytryptamine | 4-Hydroxytryptamine (4-HT, 4-HTA), also known as N,N-didesmethylpsilocin, is a naturally occurring tryptamine alkaloid. It is closely related chemically to the neurotransmitter serotonin, the psychedelic psilocin, and is the active form of the tryptamine alkaloid norbaeocystin.
The compound is a serotonin receptor agonist, including of the serotonin 5-HT2A receptor, but in contrast to certain closely related compounds like psilocin, appears to be non-hallucinogenic.
4-HT may occur naturally in Psilocybe baeocystis and Psilocybe cyanescens. It may serve as an alternative precursor in the biosynthesis of psilocybin (4-PO-DMT) in psilocybin mushrooms.
Pharmacology
4-HT is a potent agonist of the serotonin 5-HT2A receptor similarly to psilocin ( = 38nM and 21nM, respectively). It produces serotonergic peripheral effects in animals, shows similar metabolism and metabolic stability to psilocin, and appears to cross the blood–brain barrier and hence is centrally penetrant.
Surprisingly however, the compound, similarly to baeocystin, norbaeocystin, and norpsilocin, does not produce the head-twitch response, a behavioral proxy of psychedelic effects, in animals, and hence is putatively non-hallucinogenic. In older literature, the psychoactive effects of 4-hydroxylated tryptamines have been said to increase in the series of 4-hydroxytryptamine, 4-hydroxy-N-methyltryptamine (norpsilocin), and 4-hydroxy-N,N-dimethyltryptamine (psilocin).
The reason for the lack of hallucinogenic effects with 4-HT and related compounds is unknown, but may be due to biased agonism of the serotonin 5-HT2A receptor; or, more specifically, biased agonism for the β-arrestin2 signaling pathway.
Norbaeocystin is thought to be a prodrug of 4-HT, analogously to how psilocybin is a prodrug of psilocin and how baeocystin is thought to be a prodrug of norpsilocin.
Chemistry
4-HT, also known as 4-hydroxytryptamine, is a substituted tryptamine derivative. It is a positional isomer of the neurotransmitter serotonin (5-hydroxytryptamine; 5-HT), an analogue of the serotonergic psychedelic psilocin (4-HO-DMT), and the dephosphorylated form of the tryptamine alkaloid norbaeocystin (4-phosphoryloxytryptamine; 4-PO-T).
The predicted log P of 4-HT is 0.65 to 1.1.
History
4-HT was first described in the scientific literature by 1959. Its pharmacology was first thoroughly characterized in 2024.
References
Human drug metabolites
Hydroxyarenes
Non-hallucinogenic 5-HT2A receptor agonists
Tryptamine alkaloids | 4-Hydroxytryptamine | Chemistry | 731 |
70,325,208 | https://en.wikipedia.org/wiki/J.%20Arthur%20Reavell | James Arthur Reavell (10 June 1872—26 August 1973) M.I.Mech.E., M.I.Chem.E., F.Inst.F., F.I.M. was a British chemical engineer, who created a major company and was one of the founders and a president of the Institution of Chemical Engineers.
Life
Reavell was born 10 June 1872 in Alnwick, the son of George and Martha Reavell. He attended Alnwick Grammar School and Silcoates School.
He married Emma Mabel Clowes on 24 May 1898, and they had two sons and one daughter. After her death, he married Winifred E. Haydon on 16 August 1941. One of his sons, Brian Noble Reavell, was also a chemical engineer, and took over as chairman when he retired from his business in 1960.
He died 26 August 1973.
Career
He wanted to be a chemist, but instead served an apprenticeship in electrical engineering and held several positions, rising to be manager for the European operations of an American chemical engineering company, Worthington Pumps, and then manager of Manlove, Alliott & Co. Ltd., dealing with sugar refining, of which a particular aspect is evaporation. In 1907 he set up his own company, Kestner Evaporator and Engineering Co., to deal with the British and Empire market of an improved design of evaporator patented by his friend, the French inventor Paul Kestner.
During the First World War his engineering expertise was applied to solving the shortage of explosives in a team headed by Lord Moulton.
He continued as Chairman of Kestner until 1960, when he stepped down in favour of his son, Brian, but continued as president until 1963. The company made a variety of chemical plants with subsidiaries in Australia and South Africa.
Institutions
As Chairman of the Chemical Engineering Group of the Society of Chemical Industry (SCI) he was one of the small group of enthusiasts who founded the Institution of Chemical Engineers, becoming its President 1929. He continued to be active in the SCI, being vice-president from 1931 to 1934.
He was also a Member of the Institution of Mechanical Engineers; a Fellow of the Institute of Fuel; a Fellow of the Institute of Metals; Chairman of the British Chemical Plant Manufacturers' Association; Chairman of the chemical engineering industry section of the British Standards Institute; and President of the Combustion Appliance Manufacturer's Association, from which he formed the British Coal Utilisation Research Association, and was its vice-president.
References
British chemical engineers
People from Alnwick
Institution of Chemical Engineers
1872 births
1973 deaths | J. Arthur Reavell | Chemistry,Engineering | 535 |
41,746,913 | https://en.wikipedia.org/wiki/Four-spiral%20semigroup | In mathematics, the four-spiral semigroup is a special semigroup generated by four idempotent elements. This special semigroup was first studied by Karl Byleen in a doctoral dissertation submitted to the University of Nebraska in 1977. It has several interesting properties: it is one of the most important examples of bi-simple but not completely-simple semigroups; it is also an important example of a fundamental regular semigroup; it is an indispensable building block of bisimple, idempotent-generated regular semigroups. A certain semigroup, called double four-spiral semigroup, generated by five idempotent elements has also been studied along with the four-spiral semigroup.
Definition
The four-spiral semigroup, denoted by Sp4, is the semigroup generated by four elements a, b, c, and d satisfying the following eleven conditions:
a2 = a, b2 = b, c2 = c, d2 = d.
ab = b, ba = a, bc = b, cb = c, cd = d, dc = c.
da = d.
The first set of conditions imply that the elements a, b, c, d are idempotents. The second set of conditions imply that a R b L c R d where R and L are the Green's relations in a semigroup. The lone condition in the third set can be written as d ωl a, where ωl is a biorder relation defined by Nambooripad. The diagram below summarises the various relations among a, b, c, d:
Elements of the four-spiral semigroup
General elements
Every element of Sp4 can be written uniquely in one of the following forms:
[c] (ac)m [a]
[d] (bd)n [b]
[c] (ac)m ad (bd)n [b]
where m and n are non-negative integers and terms in square brackets may be omitted as long as the remaining product is not empty. The forms of these elements imply that Sp4 has a partition Sp4 = A ∪ B ∪ C ∪ D ∪ E where
A = { a(ca)n, (bd)n+1, a(ca)md(bd)n : m, n non-negative integers }
B = { (ac)n+1, b(db)n, a(ca)m(db) n+1 : m, n non-negative integers }
C = { c(ac)m, (db)n+1, (ca)m+1(db)n+1 : m, n non-negative integers }
D = { d(bd)n, (ca)m+1(db)n+1d : m, n non-negative integers }
E = { (ca)m : m positive integer }
The sets A, B, C, D are bicyclic semigroups, E is an infinite cyclic semigroup and the subsemigroup D ∪ E is a nonregular semigroup.
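As a small worked illustration (not part of the original article), the defining relations reduce a typical word over the generators to one of the normal forms listed above; using ab = b and then bc = b,

\[ abcd = (ab)cd = bcd = (bc)d = bd, \]

which is the second normal form [d](bd)n[b] with n = 1 and both bracketed terms omitted, hence an element of the subsemigroup A.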
Idempotent elements
The set of idempotents of Sp4, is {an, bn, cn, dn : n = 0, 1, 2, ...} where, a0 = a, b0 = b, c0 = c, d0 = d, and for n = 0, 1, 2, ....,
an+1 = a(ca)n(db)nd
bn+1 = a(ca)n(db)n+1
cn+1 = (ca)n+1(db)n+1
dn+1 = (ca)n+1(db)n+1d
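As a quick check (an added verification, not from the original article), the n = 0 case of the first formula gives a1 = a(ca)0(db)0d = ad, and ad is indeed idempotent by the relations da = d and d2 = d:

\[ (ad)(ad) = a(da)d = ad\,d = ad^{2} = ad. \]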
The sets of idempotents in the subsemigroups A, B, C, D (there are no idempotents in the subsemigoup E) are respectively:
EA = { an : n = 0,1,2, ... }
EB = { bn : n = 0,1,2, ... }
EC = { cn : n = 0,1,2, ... }
ED = { dn : n = 0,1,2, ... }
Four-spiral semigroup as a Rees-matrix semigroup
Let S be the set of all quadruples (r, x, y, s) where r, s ∈ { 0, 1 } and x and y are nonnegative integers and define a binary operation in S by
The set S with this operation is a Rees matrix semigroup over the bicyclic semigroup, and the four-spiral semigroup Sp4 is isomorphic to S.
Properties
By definition itself, the four-spiral semigroup is an idempotent generated semigroup (Sp4 is generated by the four idempotents a, b. c, d.)
The four-spiral semigroup is a fundamental semigroup, that is, the only congruence on Sp4 which is contained in the Green's relation H in Sp4 is the equality relation.
Double four-spiral semigroup
The fundamental double four-spiral semigroup, denoted by DSp4, is the semigroup generated by five elements a, b, c, d, e satisfying the following conditions:
a2 = a, b2 = b, c2 = c, d2 = d, e2 = e
ab = b, ba = a, bc = b, cb = c, cd = d, dc = c, de = d, ed = e
ae = e, ea = e
The first set of conditions imply that the elements a, b, c, d, e are idempotents. The second set of conditions state the Green's relations among these idempotents, namely, a R b L c R d L e. The two conditions in the third set imply that e ω a where ω is the biorder relation defined as ω = ωl ∩ ωr.
References
Semigroup theory
4 (number)
Spirals | Four-spiral semigroup | Mathematics | 1,263 |
6,077,451 | https://en.wikipedia.org/wiki/Medication%20package%20insert | A package insert is a document included in the package of a medication that provides information about that drug and its use. For prescription medications, the insert is technical, providing information for medical professionals about how to prescribe the drug. Package inserts for prescription drugs often include a separate document called a "patient package insert" with information written in plain language intended for the end-user—the person who will take the drug or give the drug to another person, such as a minor. Inserts for over-the-counter medications are also written plainly.
In the United States, labelling for the healthcare practitioner is called "Prescribing Information" (PI), and labelling for patients and/or caregivers includes "Medication Guides", "Patient Package Inserts", and "Instructions for Use". In Europe, the technical document is called the "summary of product characteristics" (SmPC), and the document for end-users is called the "patient information leaflet" (PIL) or "package leaflet".
Similar documents attached to the outside of a package are sometimes called outserts.
Responsible agencies
Each country or region has their own regulatory body.
In the European Union, the European Medicines Agency has jurisdiction and the relevant documents are called the "summary of product characteristics" (SPC or SmPC) and the document for end-users is called the "patient information leaflet" or "package leaflet". The SPC is not intended to give general advice about treatment of a condition but does state how the product is to be used for a specific treatment. It forms the basis of information for health professionals to know how to use the specific product safely and effectively. The package leaflet supplied with the product is aimed at end-users.
In the United States, the Food and Drug Administration (FDA) determines the requirements for patient package inserts and will occasionally issue revisions to previously approved package inserts, in much the same way as an auto manufacturer will issue recalls upon discovering a problem with a certain car. The list of 1997 drug labelling changes can be found on the FDA's website. The first patient package insert required by the FDA was in 1968, mandating that isoproterenol inhalation medication must contain a short warning that excessive use could cause breathing difficulties. The second patient package insert required by the FDA was in 1970, mandating that combined oral contraceptive pills must contain information for the patient about specific risks and benefits. The patient package insert issue was revisited in 1980 and in 1995 without conclusive action being taken. Finally, in January 2006, the FDA released a major revision to the patient package insert guidelines, the first in 25 years. The new requirements include a section called Highlights, which summarizes the most important information about benefits and risks; a Table of Contents for easy reference; the date of initial product approval; and a toll-free number and Internet address to encourage more widespread reporting of information regarding suspected adverse events.
Other national or international organizations that regulate medical information include the Japanese Ministry of Health, Labour, and Welfare (MHLW). Other country-specific agencies, especially in the case of EU (European Union) countries and candidates, plus countries of South America and many in Asia and the Far East, rely heavily on the work of these three primary regulators.
Sections of the Prescribing Information
The Prescribing Information follows one of two formats: "physician labeling rule" format or "old" (non-PLR) format. For "old" format labeling a "product title" may be listed first and may include the proprietary name (if any), the nonproprietary name, dosage form(s), and other information about the product. The other sections are as follows:
Description - includes the proprietary name (if any), nonproprietary name, dosage form(s), qualitative and/or quantitative ingredient information, the pharmacologic or therapeutic class of the drug, chemical name and structural formula of the drug, and if appropriate, other important chemical or physical information, such as physical constants, or pH.
Clinical Pharmacology - tells how the medicine works in the body, how it is absorbed and eliminated, and what its effects are likely to be at various concentrations. May also contain results of various clinical trials (studies) and/or explanations of the medication's effect on various populations (e.g. children, women, etc.).
Indications and Usage - uses (indications) for which the drug has been FDA-approved (e.g. migraines, seizures, high blood pressure). Physicians legally can and often do prescribe medicines for purposes not listed in this section (so-called "off-label uses").
Contraindications - lists situations in which the medication should not be used, for example in patients with other medical conditions such as kidney problems or allergies
Warnings - covers possible serious side effects that may occur (e.g., boxed warning)
Precautions - explains how to use the medication safely including physical impairments, food (grapefruits) and drug interactions; for example "Do not drink alcohol while taking this medication" or "Do not take this medication if you are currently taking MAOI inhibitors"
Adverse Reactions - lists all side effects observed in all studies of the drug (as opposed to just the dangerous side effects which are separately listed in "Warnings" section)
Use in specific populations (pregnancy, lactation (breast-feeding), females and males of reproductive potential, pediatric, geriatric)
Drug Abuse and Dependence - provides information regarding whether prolonged use of the medication can cause physical dependence (only included if applicable)
Overdosage - gives the results of an overdose and provides recommended action in such cases
Dosage and Administration - gives recommended dosage(s); may list more than one for different conditions or different patients (e.g., lower dosages for children)
How Supplied - includes the dosage form(s), strength(s), units in which the dosage form(s) are ordinarily available, identifying features of the dosage form(s) such as the National Drug Code (NDC), and special handling and storage conditions (e.g., "Store between 68 and 78°F ")
Other uses and initiatives
In addition to the obvious use of inclusion with medications, Prescribing Information have been used or provided in other forms. In the United States, the Prescribing Information for thousands of prescription drugs are available at the DailyMed website, provided by the National Library of Medicine.
South Africa has taken the initiative of making all package inserts available electronically via the internet, listed by trade name, generic name, and classification, and Canada is working on a similar capability. The UK-based electronic medicines compendium provides freely available online access to both Patient Information Leaflets (intended for consumers) and Summary of Product Characteristics (aimed at healthcare professionals) for products available in the UK.
Patient information is, understandably, usually generated initially in the native language of the country where the product is being developed. This leads to inconsistency in format, terminology, tone, and content. PILLS (Patient Information Language Localisation System) is a one-year effort by the European Commission to produce a prototype tool which will support the creation of various kinds of medical documentation simultaneously in multiple languages, by storing the information in a database and allowing a variety of forms and languages of output.
See also
Drug labelling
Patient education
References
External links
South African Electronic Package Inserts
EMA guidance on preparing SmPC
Electronic Medicines Compendium, which published SmPCs and Package Leaflets in the UK
dailymed.nlm.nih.gov Drug labels at DailyMed website
labels.fda.gov Drug labels at FDA website
Health informatics
drug marketing and sales
Packaging
Labels | Medication package insert | Biology | 1,620 |
985,616 | https://en.wikipedia.org/wiki/Ceric%20ammonium%20nitrate | Ceric ammonium nitrate (CAN) is the inorganic compound with the formula (NH4)2[Ce(NO3)6]. This orange-red, water-soluble cerium salt is a specialised oxidizing agent in organic synthesis and a standard oxidant in quantitative analysis.
Preparation, properties, and structure
The [Ce(NO3)6]2− anion is generated by dissolving a cerium salt in hot and concentrated nitric acid (HNO3).
The salt consists of the hexanitratocerate(IV) anion, [Ce(NO3)6]2−, and a pair of ammonium cations, NH4+. The ammonium ions are not involved in the oxidising reactions of this salt. In the anion, each nitrate group chelates the cerium atom in a bidentate manner.
The [Ce(NO3)6]2− anion has Th (idealized Oh) molecular symmetry. The CeO12 core defines an icosahedron.
CAN is a strong one-electron oxidizing agent. In terms of its redox potential (about +1.61 V vs. N.H.E.) it is an even stronger oxidizing agent than chlorine (+1.36 V). Few shelf-stable reagents are stronger oxidants. In the redox process Ce(IV) is converted to Ce(III), a one-electron change, signaled by the fading of the solution color from orange to a pale yellow (provided that the substrate and product are not strongly colored).
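The electron bookkeeping can be summarized as follows (a generic textbook scheme, added here for clarity; it is not a reaction taken from this article). Each cerium(IV) centre accepts a single electron, so a typical two-electron substrate oxidation consumes two equivalents of CAN:
Ce4+ + e− → Ce3+ (occurring twice)
substrate → oxidized substrate + 2 e−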
Applications in organic chemistry
In organic synthesis, CAN is useful as an oxidant for many functional groups (alcohols, phenols, and ethers) as well as C–H bonds, especially those that are benzylic. Alkenes undergo dinitroxylation, although the outcome is solvent-dependent. Quinones are produced from catechols and hydroquinones and even nitroalkanes are oxidized.
CAN provides an alternative to the Nef reaction, for example in ketomacrolide synthesis, where it avoids complicating side reactions usually encountered with other reagents. Oxidative halogenation can be promoted by CAN as an in situ oxidant for benzylic bromination, and the iodination of ketones and uracil derivatives.
For the synthesis of heterocycles
Catalytic amounts of aqueous CAN allow the efficient synthesis of quinoxaline derivatives. Quinoxalines are known for their applications as dyes, organic semiconductors, and DNA cleaving agents. These derivatives are also components in antibiotics such as echinomycin and actinomycin. The CAN-catalyzed three-component reaction between anilines and alkyl vinyl ethers provides an efficient entry into 2-methyl-1,2,3,4-tetrahydroquinolines and the corresponding quinolines obtained by their aromatization.
As a deprotection reagent
CAN is traditionally used to release organic ligands from metal carbonyls. In the process, the metal is oxidised, CO is evolved, and the organic ligand is released for further manipulation. For example, with the Wulff–Dötz reaction an alkyne, carbon monoxide, and a chromium carbene are combined to form a chromium half-sandwich complex and the phenol ligand can be isolated by mild CAN oxidation.
CAN is used to cleave para-methoxybenzyl and 3,4-dimethoxybenzyl ethers, which are protecting groups for alcohols. Two equivalents of CAN are required for each equivalent of para-methoxybenzyl ether. The alcohol is released, and the para-methoxybenzyl ether converts to para-methoxybenzaldehyde. The balanced equation is as follows:
ArCH2OR + 2 Ce4+ + H2O → ArCHO + ROH + 2 Ce3+ + 2 H+ (Ar = para-methoxyphenyl; ROH = the liberated alcohol)
Other applications
CAN is also a component of chrome etchant, a material that is used in the production of photomasks and liquid crystal displays. It is also an effective nitration reagent, especially for the nitration of aromatic ring systems. In acetonitrile, CAN reacts with anisole to obtain ortho-nitration products.
References
External links
Oxidizing Agents: Cerium Ammonium Nitrate
Ammonium compounds
Cerium(IV) compounds
Nitrates
Coordination complexes
Oxidizing agents | Ceric ammonium nitrate | Chemistry | 863 |
2,821,615 | https://en.wikipedia.org/wiki/Electrochemical%20gradient | An electrochemical gradient is a gradient of electrochemical potential, usually for an ion that can move across a membrane. The gradient consists of two parts:
The chemical gradient, or difference in solute concentration across a membrane.
The electrical gradient, or difference in charge across a membrane.
If there are unequal concentrations of an ion across a permeable membrane, the ion will move across the membrane from the area of higher concentration to the area of lower concentration through simple diffusion. Ions also carry an electric charge that forms an electric potential across a membrane. If there is an unequal distribution of charges across the membrane, then the difference in electric potential generates a force that drives ion diffusion until the charges are balanced on both sides of the membrane.
Electrochemical gradients are essential to the operation of batteries and other electrochemical cells, photosynthesis and cellular respiration, and certain other biological processes.
Overview
Electrochemical energy is one of the many interchangeable forms of potential energy through which energy may be conserved. It appears in electroanalytical chemistry and has industrial applications such as batteries and fuel cells. In biology, electrochemical gradients allow cells to control the direction ions move across membranes. In mitochondria and chloroplasts, proton gradients generate a chemiosmotic potential used to synthesize ATP, and the sodium-potassium gradient helps neural synapses quickly transmit information.
An electrochemical gradient has two components: a differential concentration of electric charge across a membrane and a differential concentration of chemical species across that same membrane. In the former effect, the concentrated charge attracts charges of the opposite sign; in the latter, the concentrated species tends to diffuse across the membrane to equalize concentrations. The combination of these two phenomena determines the thermodynamically-preferred direction for an ion's movement across the membrane.
The combined effect can be quantified as a gradient in the thermodynamic electrochemical potential:
\[ \bar{\mu} = \mu + zF\phi , \]
with
μ, the chemical potential of the ion species
z, the charge per ion of the species
F, the Faraday constant (the electrochemical potential is implicitly measured on a per-mole basis)
φ, the local electric potential.
Sometimes, the term "electrochemical potential" is abused to describe the electric potential generated by an ionic concentration gradient.
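As an illustration of how the two contributions combine (a standard derivation added here for clarity; it is not text from the source article), requiring the electrochemical potential of an ion of charge z to be equal on the two sides of a membrane, and writing μ = μ° + RT ln[X], gives the Nernst (equilibrium) potential for that ion:
\[ \Delta\phi = \phi_{\mathrm{in}} - \phi_{\mathrm{out}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{X}]_{\mathrm{out}}}{[\mathrm{X}]_{\mathrm{in}}} \]
At this potential the chemical and electrical terms exactly cancel and there is no net driving force on the ion.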
An electrochemical gradient is analogous to the water pressure across a hydroelectric dam. Routes unblocked by the membrane (e.g. membrane transport protein or electrodes) correspond to turbines that convert the water's potential energy to other forms of physical or chemical energy, and the ions that pass through the membrane correspond to water traveling into the lower river. Conversely, energy can be used to pump water up into the lake above the dam, and chemical energy can be used to create electrochemical gradients.
Chemistry
The term typically applies in electrochemistry, when electrical energy in the form of an applied voltage is used to modulate the thermodynamic favorability of a chemical reaction. In a battery, an electrochemical potential arising from the movement of ions balances the reaction energy of the electrodes. The maximum voltage that a battery reaction can produce is sometimes called the standard electrochemical potential of that reaction.
Biological context
The generation of a transmembrane electrical potential through ion movement across a cell membrane drives biological processes like nerve conduction, muscle contraction, hormone secretion, and sensation. By convention, physiological voltages are measured relative to the extracellular region; a typical animal cell has an internal electrical potential of (−70)–(−50) mV.
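When several ion species are permeant, the steady-state membrane potential is often estimated with the Goldman–Hodgkin–Katz voltage equation (quoted here for illustration; the permeabilities P and the bracketed concentrations are standard symbols, not values given in the source text):
\[ V_m = \frac{RT}{F}\,\ln\frac{P_{\mathrm{K}}[\mathrm{K^+}]_{\mathrm{out}} + P_{\mathrm{Na}}[\mathrm{Na^+}]_{\mathrm{out}} + P_{\mathrm{Cl}}[\mathrm{Cl^-}]_{\mathrm{in}}}{P_{\mathrm{K}}[\mathrm{K^+}]_{\mathrm{in}} + P_{\mathrm{Na}}[\mathrm{Na^+}]_{\mathrm{in}} + P_{\mathrm{Cl}}[\mathrm{Cl^-}]_{\mathrm{out}}} \]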
An electrochemical gradient is essential to mitochondrial oxidative phosphorylation. The final step of cellular respiration is the electron transport chain, composed of four complexes embedded in the inner mitochondrial membrane. Complexes I, III, and IV pump protons from the matrix to the intermembrane space (IMS); for every electron pair entering the chain, ten protons translocate into the IMS. The result is a substantial electric potential across the inner membrane. The energy resulting from the flux of protons back into the matrix is used by ATP synthase to combine inorganic phosphate and ADP into ATP.
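The driving force on protons re-entering the matrix is conventionally expressed as the proton-motive force, which combines the electrical and pH contributions (the standard expression is shown here for illustration; it is not given in the source text):
\[ \Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \]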
Similar to the electron transport chain, the light-dependent reactions of photosynthesis pump protons into the thylakoid lumen of chloroplasts to drive the synthesis of ATP. The proton gradient can be generated through either noncyclic or cyclic photophosphorylation. Of the proteins that participate in noncyclic photophosphorylation, photosystem II (PSII), plastoquinone, and the cytochrome b6f complex directly contribute to generating the proton gradient. For each four photons absorbed by PSII, eight protons are pumped into the lumen.
Several other transporters and ion channels play a role in generating a proton electrochemical gradient. One is TPK3, a potassium channel that is activated by Ca2+ and conducts K+ from the thylakoid lumen to the stroma, which helps establish the electric field. On the other hand, the electro-neutral K+ efflux antiporter (KEA3) transports K+ into the thylakoid lumen and H+ into the stroma, which helps establish the pH gradient.
Ion gradients
Since the ions are charged, they cannot pass through cellular membranes via simple diffusion. Two different mechanisms can transport the ions across the membrane: active or passive transport.
An example of active transport of ions is the Na+-K+-ATPase (NKA). NKA is powered by the hydrolysis of ATP into ADP and an inorganic phosphate; for every molecule of ATP hydrolyzed, three Na+ are transported outside and two K+ are transported inside the cell. This makes the inside of the cell more negative than the outside and, more specifically, helps generate a negative membrane potential Vmembrane.
An example of passive transport is ion fluxes through Na+, K+, Ca2+, and Cl− channels. Unlike active transport, passive transport is powered by the arithmetic sum of osmosis (a concentration gradient) and an electric field (the transmembrane potential). Formally, the molar Gibbs free energy change associated with successful transport into the cell is
\[ \Delta G = RT \ln\frac{[\mathrm{X}]_{\mathrm{in}}}{[\mathrm{X}]_{\mathrm{out}}} + zFV_{\mathrm{membrane}} , \]
where R represents the gas constant, T represents absolute temperature, z is the charge per ion, and F represents the Faraday constant.
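As a rough worked example of this formula (the concentrations, voltage, and temperature below are typical textbook values assumed for illustration, not figures from the source): for Na+ (z = +1) entering a cell with [Na+]out ≈ 145 mM, [Na+]in ≈ 12 mM, Vmembrane ≈ −70 mV, and T = 310 K,
\[ \Delta G \approx RT\ln\frac{12}{145} + zFV_{\mathrm{membrane}} \approx (-6.4\ \mathrm{kJ/mol}) + (-6.8\ \mathrm{kJ/mol}) \approx -13\ \mathrm{kJ/mol} , \]
so both the concentration term and the electrical term favor Na+ influx, consistent with the description below.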
In the example of Na+, both terms tend to support transport: the negative electric potential inside the cell attracts the positive ion and since Na+ is concentrated outside the cell, osmosis supports diffusion through the Na+ channel into the cell. In the case of K+, the effect of osmosis is reversed: although external ions are attracted by the negative intracellular potential, entropy seeks to diffuse the ions already concentrated inside the cell. The converse phenomenon (osmosis supports transport, electric potential opposes it) can be achieved for Na+ in cells with abnormal transmembrane potentials: at the Na+ reversal potential, the Na+ influx halts; at higher potentials, it becomes an efflux.
Proton gradients
Proton gradients in particular are important in many types of cells as a form of energy storage. The gradient is usually used to drive ATP synthase, flagellar rotation, or metabolite transport. This section will focus on three processes that help establish proton gradients in their respective cells: bacteriorhodopsin, noncyclic photophosphorylation, and oxidative phosphorylation.
Bacteriorhodopsin
The way bacteriorhodopsin generates a proton gradient in Archaea is through a proton pump. The proton pump relies on proton carriers to drive protons from the side of the membrane with a low H+ concentration to the side of the membrane with a high H+ concentration. In bacteriorhodopsin, the proton pump is activated by absorption of photons of 568nm wavelength, which leads to isomerization of the Schiff base (SB) in retinal forming the K state. This moves SB away from Asp85 and Asp212, causing H+ transfer from the SB to Asp85 forming the M1 state. The protein then shifts to the M2 state by separating Glu204 from Glu194 which releases a proton from Glu204 into the external medium. The SB is reprotonated by Asp96 which forms the N state. It is important that the second proton comes from Asp96 since its deprotonated state is unstable and rapidly reprotonated with a proton from the cytosol. The protonation of Asp85 and Asp96 causes re-isomerization of the SB, forming the O state. Finally, bacteriorhodopsin returns to its resting state when Asp85 releases its proton to Glu204.
Photophosphorylation
PSII also relies on light to drive the formation of proton gradients in chloroplasts, however, PSII utilizes vectorial redox chemistry to achieve this goal. Rather than physically transporting protons through the protein, reactions requiring the binding of protons will occur on the extracellular side while reactions requiring the release of protons will occur on the intracellular side. Absorption of photons of 680nm wavelength is used to excite two electrons in P680 to a higher energy level. These higher energy electrons are transferred to protein-bound plastoquinone (PQA) and then to unbound plastoquinone (PQB). This reduces plastoquinone (PQ) to plastoquinol (PQH2) which is released from PSII after gaining two protons from the stroma. The electrons in P680 are replenished by oxidizing water through the oxygen-evolving complex (OEC). This results in release of O2 and H+ into the lumen, for a total reaction of
4 hν + 2 H2O + 2 PQ + 4 H+ (stroma) → O2 + 2 PQH2 + 4 H+ (lumen)
After being released from PSII, PQH2 travels to the cytochrome b6f complex, which then transfers two electrons from PQH2 to plastocyanin in two separate reactions. The process that occurs is similar to the Q-cycle in Complex III of the electron transport chain. In the first reaction, PQH2 binds to the complex on the lumen side and one electron is transferred to the iron-sulfur center which then transfers it to cytochrome f which then transfers it to plastocyanin. The second electron is transferred to heme bL which then transfers it to heme bH which then transfers it to PQ. In the second reaction, a second PQH2 gets oxidized, adding an electron to another plastocyanin and PQ. Both reactions together transfer four protons into the lumen.
Oxidative phosphorylation
In the electron transport chain, complex I (CI) catalyzes the reduction of ubiquinone (UQ) to ubiquinol (UQH2) by the transfer of two electrons from reduced nicotinamide adenine dinucleotide (NADH), which translocates four protons from the mitochondrial matrix to the IMS:
NADH + H+ + UQ + 4 H+ (matrix) → NAD+ + UQH2 + 4 H+ (IMS)
Complex III (CIII) catalyzes the Q-cycle. The first step involves the transfer of two electrons from the UQH2 reduced by CI to two molecules of oxidized cytochrome c at the Qo site. In the second step, two more electrons reduce UQ to UQH2 at the Qi site. The total reaction is:
UQH2 + 2 cytochrome c (oxidized) + 2 H+ (matrix) → UQ + 2 cytochrome c (reduced) + 4 H+ (IMS)
Complex IV (CIV) catalyzes the transfer of two electrons from the cytochrome c reduced by CIII to one half of a full oxygen molecule. Utilizing one full oxygen (O2) in oxidative phosphorylation requires the transfer of four electrons. The oxygen will then consume four protons from the matrix to form water, while another four protons are pumped into the IMS, to give the total reaction:
4 cytochrome c (reduced) + O2 + 8 H+ (matrix) → 4 cytochrome c (oxidized) + 2 H2O + 4 H+ (IMS)
See also
Concentration cell
Transmembrane potential difference
Action potential
Cell potential
Electrodiffusion
Galvanic cell
Electrochemical cell
Proton exchange membrane
Reversal potential
References
Stephen T. Abedon, "Important words and concepts from Chapter 8, Campbell & Reece, 2002 (1/14/2005)", for Biology 113 at the Ohio State University
Cellular respiration
Electrochemical concepts
Electrophysiology
Membrane biology
Physical quantities
Thermodynamics | Electrochemical gradient | Physics,Chemistry,Mathematics,Biology | 2,591 |
7,004,848 | https://en.wikipedia.org/wiki/F%C3%A1inne | Fáinne (plural Fáinní, but often Fáinnes in English) is the name of a pin badge worn to show fluency in, or a willingness to speak, the Irish language.
The three modern versions of the pin as relaunched in 2014 by Conradh na Gaeilge are the Fáinne Óir (gold circle), Seanfháinne (old fáinne/circle) and Fáinne Airgid (silver circle).
In other contexts, fáinne simply means "ring" or "circle" and is also used to give such terms as fáinne pósta (wedding ring), fáinne an lae (daybreak), Tiarna na bhFáinní (The Lord of the Rings), and fáinne cluaise (earring).
An Fáinne Úr
An Fáinne Úr ('úr' meaning 'new') is the modernised rendition of the Fáinne, having been updated in 2014 by Conradh na Gaeilge. There are three versions presently available, none requiring test or certification:
Fáinne Óir (Gold Fáinne) – for fluent speakers;
Fáinne Mór Óir (literally, "Large Gold Fáinne") – traditional larger, old style solid 9ct Gold (Colour), the style worn by Liam Neeson in his film portrayal of Michael Collins;
Fáinne Airgid (Silver Fáinne) – for speakers with a basic working knowledge of the language.
An Fáinne
(The Original Organisation)
Two Irish language organisations, An Fáinne (est. 1916) ("The Ring" or "The Circle" in Irish) and the Society of Gaelic Writers (est. 1911), were founded by Piaras Béaslaí (1881–1965).
They were intended to work together to a certain extent, the former promoting the language and awarding those fluent in its speaking with a Fáinne Óir (Gold Ring) lapel pin, and the latter would promote and create a pool of quality literary works in the language.
All the personnel actively involved in promoting the concept of An Fáinne were associated with Conradh na Gaeilge, and from an early time, An Fáinne used the Dublin postal address of 25 Cearnóg Pharnell / Parnell Square, the then HQ of Conradh na Gaeilge though the organisations were officially separate, at least at first.
The effectiveness of the organisation was acknowledged in the Dáil Éireann on 6 August 1920, when Richard Mulcahy, the Sinn Féin Teachta Dála for Clontarf suggested that a league on the model of the Fáinne for the support of Irish manufactures might be established.
The Fáinne lapel pins were, at first, a limited success. They appealed mainly to Nationalists and Republicans, for whom the language was generally learnt as adults as a second language. The appeal to people for whom Irish was the native tongue was limited. They spoke Irish, as did everyone from their village, so there was no point whatsoever wearing a pin to prove it, even if they could have afforded one, or for that matter, even known they existed.
In the early 1920s, many people who earned their Fáinne did so in prison, the majority of these being anti-treaty Irish Republican Army (IRA) Volunteers during the Irish Civil War.
History
According to Piaras Béaslaí's own article in the magazine Iris An Fháinne in 1922, he states that in the winter of 1915 the language movement was at a low ebb due to lack of funds and a large portion of the best Gaels being so involved in the work of the volunteers that they were forgetting about speaking Irish. He says he wrote an article in The Leader proposing that Gaels establish an association of those who would take a solemn oath to only speak Irish at certain events and to other Gaeilgeoirí and that they should wear a clear symbol.
The article got many letters in favour and against, but two men, Tadhg Ó Scanaill and Colm Ó Murchadha, came to him asking him to organise a meeting towards setting up a council. He says that it was they who set the whole thing up. He says that he went to speak to Cú Uladh (Peadar Mac Fhionnlaíoch 1856–1942), then vice president of Conradh na Gaeilge, and he highly praised the idea.
The meeting was organised for some time in the spring of 1916 in Craobh an Chéitinnigh (the Keating Branch). They went to a 'seanchus' prior to their own meeting in the Ard Chraobh (High Branch) and presented their idea to all those present. They were so taken with the idea that they all came with them to their own meeting in Craobh an Chéitinnigh.
Cú Uladh was there before them and at this meeting and they decided they would (1) form the association and (2) name it "An Fáinne" instead of "An Fáinne Gaedhalach", which was proposed by Colm Ó Murchadha, and three officers were elected to conduct the work of the association.
Piaras supposes that Tadhg Ó Sganaill first thought of the Fáinne (ring) as the symbol. It was an inspired idea, he says, because no one had even thought of this symbol when the name was first proposed.
He states at the end of the article that they had only begun the work of the committee when Easter Week arrived and some of the small number who were involved were snatched away, but, he says, the work continued and the world knows how well they got on since then.
Recognition
The consistently high standard required to qualify for the Fáinne at this time made them quite prestigious, and there are many reports of people being recruited as night-school teachers of Irish based purely on the fact that they wore the pin.
The President of the Executive Council of the Irish Free State, W. T. Cosgrave acknowledged the Fáinne on 8 February 1924 as an indicator of Irish Language proficiency.
Demise
The fact that the underlying reason many Fáinne wearers had studied Irish was political meant that the semi-independence of the Irish Free State, and the later complete independence of the Irish Republic, along with a period of relative peace in the new province of Northern Ireland, meant they had, to some extent, achieved their aim. Twenty years or so later, a Fáinne would be a very rare sight. Due to lack of demand they were no longer manufactured, and the organisation had fizzled out.
'An Fáinne Nua'
Conradh na Gaeilge and other Irish-language bodies attempted a revival, circa 1965, of the Fáinne, which, for a short time at least, became successful: An Fáinne Nua ('The New Fáinne') was marketed with the slogan Is duitse an Fáinne Nua! – meaning "The New Fáinne is for you!."
It came in three varieties:
An Fáinne Nua Óir (The new Gold Fáinne),
An Fáinne Nua Airgid (The new Silver Fáinne),
An Fáinne Nua Daite (The new coloured Fáinne).
The Gold Fáinne was manufactured from 9ct Gold, whilst the other two were sterling silver. The Coloured Fáinne also had an enamel blue ring separating two concentric silver circles. The prices for the Gold, Silver and Coloured varieties in 1968 were twelve shillings and sixpence, four shillings and five shillings respectively.
They were popular in Ireland during the 1960s–1970s, but fell into relative disuse shortly afterwards. Included among reasons commonly given for this were that the change in fashion made it impractical to wear a lapel pin; the resumption of hostilities in Northern Ireland making people either not wanting to show publicly a "love for things Irish" for fear of intimidation; or, for the more radical elements to place "Irishness" second to "freedom".
Non-Fáinne variations
Cúpla Focal badge
Cúpla focal means "a couple of words". The Conradh na Gaeilge website notes that this badge is "Suitable for anyone who has a few words of Irish."
Béal na nGael
The Béal na nGael (Mouth of the Irish) is a different pin badge that shows a face with spiked hair and an open mouth. It was developed by the students of the Gaelcholáiste Reachrann gaelscoil and marketed primarily to youth in the Dublin Area. "The aim of the badge is to let the world know that the user is both willing and able to speak Irish, and the students say that what they are promoting is 'a practical product to stimulate more peer-to-peer communication through Irish.'"
"The badge won't threaten the place of the Fáinne, they say, because their target market is an age group which is not wearing the Fáinne and which, their market research suggests, is in many cases not even aware that the Fáinne exists. They hope this target market will latch on to the badge and wear it as an invitation to others to speak to them in Irish."
References
External links
Official website
Culture of Ireland
Irish words and phrases
Types of jewellery
Symbols
Rings (jewellery) | Fáinne | Mathematics | 1,888 |
29,574,235 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20%28Azerbaijan%29 | The Ministry of Energy of Azerbaijan Republic () is a governmental agency within the Cabinet of Azerbaijan in charge of regulating the activities in the industry of production and energy sector of Azerbaijan Republic. The ministry is headed by Parviz Shahbazov.
History
The ministry was established according to the Presidential Decree No. 458 on April 18, 2001. The functions and obligations of the ministry were stipulated in Presidential Decree No. 575 dated September 6, 2001. The ministry's statute was approved by the Azerbaijani Parliament on May 15, 2006. Later, on 22 October 2013, this ministry was liquidated, and its function was passed to the Ministry of Energy established on the same date according to Presidential Order No.3. Regulations of the Ministry of Energy of Azerbaijan were approved according to Presidential Decree No.149 on 11 April 2014.
Structure
The ministry regulates the activities in the production and energy production complex. These activities include upstream and downstream activities, exploration and development of fields, operations of oil and gas refineries, power and heat generation, supply and distribution through the networks, and so forth. The State Oil Company of Azerbaijan (SOCAR), Azerkimya State Company, Azerigas Company, Azerenerji JSC, Azneftkimyamash JSC are all part of the complex. On January 11, 2018, structural changes were conducted in the Ministry of Energy to optimize the administration. New departments and divisions were established, among them: Oil Chemistry Department, Internal Control Department. Also, several other departments were reorganized.
The Ministry of Industry and Energy of Azerbaijan Republic has agreements and cooperates with European Energy Charter, Organization of the Black Sea Economic Cooperation, Executive Committee of the CIS Energy Council, Organization for Economic Cooperation, US Agency for International Development, European Commission of European Union (INOGATE, TACIS, TRACECA), UN Economic Commission for Europe (UNECE), International Atomic Energy Agency, Coordination Council for the development of oil transportation corridor within the framework of GUAM, World Trade Organization, Work Group for cooperation with NATO, Special Work Group of the UN Economic and Social Council, International Monetary Fund, World Bank, European Bank for Reconstruction and Development, German KFW Bank, Islamic Development Bank, Asia Development Bank, Japanese Bank for International Cooperation.
Cooperation with other countries
Germany
The Ministry of Energy of Azerbaijan held a meeting with the Eastern Committee of the German Economy on 13 February 2018. Parviz Shahbazov mentioned that the history of economic and cultural cooperation between Azerbaijan and Germany started 200 years ago, when Germans from Württemberg moved to Azerbaijan. He also noted that more than 200 Germany-based firms operate in Azerbaijan. The “Uniper” company plays a role in the development of this cooperation too; it is one of the companies that is going to buy gas from Shah Deniz Stage 2. Azerbaijan and Germany also intend to exploit alternative energy to strengthen economic relations.
Saudi Arabia
On 16 January 2018, a delegation from Azerbaijan led by Parviz Shahbazov visited Saudi Arabia. The delegation of Azerbaijan had a meeting with Salman bin Abdulaziz al Saud, during which cooperation in the energy sector was discussed. Khalid Abdulaziz Al Falih, Minister of Energy, Industry and Natural Resources, mentioned that a branch of the “Saudi Aramco” company would operate in Azerbaijan.
Czech Republic
Azerbaijan and the Czech Republic are currently working on a new inter-ministerial energy agreement. “The essence of the document is to underline the strategic importance of supplies of Azerbaijani oil to the Czech Republic, but also to facilitate the cooperation in the development of alternative energy sources (e.g., the hydropower and other green energy sources) and create conditions for the involvement of entrepreneurial subjects in energy projects in both countries,” Thomas Huner, Czech Minister for Industry and Trade said. Azerbaijani oil represents one-third of oil consumption in the Czech Republic.
Southern Gas Corridor
Due to the Southern gas corridor, gas will be transported from the Caspian region to Europe. The main source of gas will be Shah Deniz Stage 2. The Ministry of Energy of Azerbaijan cooperates with BP for the realization of this project in the Caspian Region. First, gas will be supplied to Georgia and Turkey in 2018. Thereafter, in 2020, gas is expected to be delivered to Europe.
See also
Cabinet of Azerbaijan
Petroleum industry in Azerbaijan
Southern Gas Corridor
Further reading
"The Political Economy of Oil in Azerbaijan" Caucasus Analytical Digest No. 16
References
Government ministries of Azerbaijan
Azerbaijan
Ministries established in 2001
Energy in Azerbaijan
2001 establishments in Azerbaijan | Ministry of Energy (Azerbaijan) | Engineering | 922 |
289,419 | https://en.wikipedia.org/wiki/Firestorm | A firestorm is a conflagration which attains such intensity that it creates and sustains its own wind system. It is most commonly a natural phenomenon, created during some of the largest bushfires and wildfires. Although the term has been used to describe certain large fires, the phenomenon's determining characteristic is a fire with its own storm-force winds from every point of the compass towards the storm's center, where the air is heated and then ascends.
The Black Saturday bushfires, the 2021 British Columbia wildfires, and the Great Peshtigo Fire are possible examples of forest fires with some portion of combustion due to a firestorm, as is the Great Hinckley Fire. Firestorms have also occurred in cities, usually due to targeted explosives, such as in the aerial firebombings of London, Hamburg, Dresden, and Tokyo, and the atomic bombing of Hiroshima.
Mechanism
A firestorm is created as a result of the stack effect as the heat of the original fire draws in more and more of the surrounding air. This draft can be quickly increased if a low-level jet stream exists over or near the fire. As the updraft mushrooms, strong inwardly-directed gusty winds develop around the fire, supplying it with additional air. This would seem to prevent the firestorm from spreading on the wind, but the tremendous turbulence created may also cause the strong surface inflow winds to change direction erratically. Firestorms resulting from the bombardment of urban areas in the Second World War were generally confined to the areas initially seeded with incendiary devices, and the firestorm did not appreciably spread outward.
A firestorm may also develop into a mesocyclone and induce true tornadoes/fire whirls. This occurred with the 2002 Durango fire, and probably with the much greater Peshtigo Fire. The greater draft of a firestorm draws in greater quantities of oxygen, which significantly increases combustion, thereby also substantially increasing the production of heat. The intense heat of a firestorm manifests largely as radiated heat (infrared radiation), which may ignite flammable material at a distance ahead of the fire itself. This also serves to expand the area and the intensity of the firestorm. Violent, erratic wind drafts suck movables into the fire and, as is observed with all intense conflagrations, radiated heat from the fire can melt some metals and glass and turn street tarmac into flammable hot liquid. The very high temperatures ignite anything that might possibly burn, until the firestorm runs low on fuel.
A firestorm does not appreciably ignite material at a distance ahead of itself; more accurately, the heat desiccates those materials and makes them more vulnerable to ignition by embers or firebrands, increasing the rate of fire spotting. During the formation of a firestorm many fires merge to form a single convective column of hot gases rising from the burning area and strong, fire-induced, radial (inwardly directed) winds are associated with the convective column. Thus the fire front is essentially stationary and the outward spread of fire is prevented by the in-rushing wind.
Characterization of a firestorm
A firestorm is characterized by strong to gale-force winds blowing toward the fire, everywhere around the fire perimeter, an effect which is caused by the buoyancy of the rising column of hot gases over the intense mass fire, drawing in cool air from the periphery. These winds from the perimeter blow the fire brands into the burning area and tend to cool the unignited fuel outside the fire area so that ignition of material outside the periphery by radiated heat and fire embers is more difficult, thus limiting fire spread. At Hiroshima, this inrushing to feed the fire is said to have prevented the firestorm perimeter from expanding, and thus the firestorm was confined to the area of the city damaged by the blast.
Large wildfire conflagrations are distinct from firestorms if they have moving fire fronts which are driven by the ambient wind and do not develop their own wind system like true firestorms. (This does not mean that a firestorm must be stationary; as with any other convective storm, the circulation may follow surrounding pressure gradients and winds, if those lead it onto fresh fuel sources.) Furthermore, non-firestorm conflagrations can develop from a single ignition, whereas firestorms have only been observed where large numbers of fires are burning simultaneously over a relatively large area, with the important caveat that the density of simultaneously burning fires needs to be above a critical threshold for a firestorm to form (a notable example of large numbers of fires burning simultaneously over a large area without a firestorm developing was the Kuwaiti oil fires of 1991, where the distance between individual fires was too large).
The high temperatures within the firestorm zone ignite most everything that might possibly burn, until a tipping point is reached, that is, upon running low on fuel, which occurs after the firestorm has consumed so much of the available fuel within the firestorm zone that the necessary fuel density required to keep the firestorm's wind system active drops below the threshold level, at which time the firestorm breaks up into isolated conflagrations.
In Australia, the prevalence of eucalyptus trees that have oil in their leaves results in forest fires that are noted for their extremely tall and intense flame front. Hence the bush fires appear more as a firestorm than a simple forest fire. Sometimes, emission of combustible gases from swamps (e.g., methane) has a similar effect. For instance, methane explosions intensified the Peshtigo Fire.
Weather and climate effects
Firestorms will produce hot buoyant smoke clouds of primarily water vapor that will form condensation clouds as they enter the cooler upper atmosphere, generating what are known as pyrocumulus clouds ("fire clouds") or, if large enough, pyrocumulonimbus ("fire storm") clouds. For example, the black rain that began to fall approximately 20 minutes after the atomic bombing of Hiroshima produced in total 5–10 cm of black soot-filled rain in a 1–3 hour period. Moreover, if the conditions are right, a large pyrocumulus can grow into a pyrocumulonimbus and produce lightning, which could potentially set off further fires. Apart from city and forest fires, pyrocumulus clouds can also be produced by volcanic eruptions due to the comparable amounts of hot buoyant material formed.
On a more continental and global extent, away from the direct vicinity of the fire, wildfire firestorms that produce pyrocumulonimbus cloud events have been found to "surprisingly frequently" generate minor "nuclear winter" effects. These are analogous to minor volcanic winters, with each mass addition of volcanic gases additive in increasing the depth of the "winter" cooling, from near-imperceptible to "year without a summer" levels.
Pyro-cumulonimbus and atmospheric effects (in wildfires)
A very important but poorly understood aspect of wildfire behavior are pyrocumulonimbus (pyroCb) firestorm dynamics and their atmospheric impact. These are well illustrated in the Black Saturday case study below. The "pyroCb" is a fire-started or fire-augmented thunderstorm that in its most extreme manifestation injects huge abundances of smoke and other biomass-burning emissions into the lower stratosphere. The observed hemispheric spread of smoke and other biomass-burning emissions has known important climate consequences. Direct attribution of the stratospheric aerosols to pyroCbs only occurred in the last decade.
Such an extreme injection by thunderstorms was previously judged to be unlikely because the extratropical tropopause is considered to be a strong barrier to convection. Two recurring themes have developed as pyroCb research unfolds. First, puzzling stratospheric aerosol-layer observations—and other layers reported as volcanic aerosol can now be explained in terms of pyroconvection. Second, pyroCb events occur surprisingly frequently, and they are likely a relevant aspect of several historic wildfires.
On an intraseasonal level it is established that pyroCbs occur with surprising frequency. In 2002, at least 17 pyroCbs erupted in North America alone. Still to be determined is how often this process occurred in the boreal forests of Asia in 2002. However, it is now established that this most extreme form of pyroconvection, along with more frequent pyrocumulus convection, was widespread and persisted for at least two months. The characteristic injection height of pyroCb emissions is the upper troposphere, and a subset of these storms pollutes the lower stratosphere. Thus, a new appreciation for the role of extreme wildfire behavior and its atmospheric ramifications is now coming into focus.
Black Saturday firestorm (Wildfire case study)
Background
The Black Saturday bushfires are some of Australia's most destructive and deadly fires that fall under the category of a "firestorm" due to the extreme fire behavior and relationship with atmospheric responses that occurred during the fires. This major wildfire event led to a number of distinct electrified pyrocumulonimbus plume clusters reaching roughly 15 km in height. These plumes were shown to be capable of igniting new spot fires ahead of the main fire front. The fires newly ignited by this pyrogenic lightning further highlight the feedback loops of influence between the atmosphere and fire behavior on Black Saturday associated with these pyroconvective processes.
Role that pyroCbs have on fire in case study
The examinations presented here for Black Saturday demonstrate that fires ignited by lightning generated within the fire plume can occur at much larger distances ahead of the main fire front—of up to 100 km. In comparison to fires ignited by burning debris transported by the fire plume, these only go ahead of the fire front up to about 33 km, noting that this also has implications in relation to understanding the maximum rate of spread of a wildfire. This finding is important for the understanding and modeling of future firestorms and the large scale areas that can be affected by this phenomenon.
As the individual spot fires grow together, they will begin to interact. This interaction will increase the burning rates, heat release rates, and flame height until the distance between them reaches a critical level. At the critical separation distance, the flames will begin to merge and burn with the maximum rate and flame height. As these spot fires continue to grow together, the burning and heat release rates will finally start to decrease but remain at a much elevated level compared to the independent spot fire. The flame height is not expected to change significantly. The more spot fires, the bigger the increase in burning rate and flame height.
Importance for continued study of these firestorms
Black Saturday is just one of many varieties of firestorms with these pyroconvective processes and they are still being widely studied and compared. In addition to indicating this strong coupling on Black Saturday between the atmosphere and the fire activity, the lightning observations also suggest considerable differences in pyroCb characteristics between Black Saturday and the Canberra fire event. Differences between pyroCb events, such as for the Black Saturday and Canberra cases, indicate considerable potential for improved understanding of pyroconvection based on combining different data sets as presented in the research of the Black Saturday pyroCb's (including in relation to lightning, radar, precipitation, and satellite observations).
A greater understanding of pyroCb activity is important, given that fire-atmosphere feedback processes can exacerbate the conditions associated with dangerous fire behavior. Additionally, understanding the combined effects of heat, moisture, and aerosols on cloud microphysics is important for a range of weather and climate processes, including in relation to improved modeling and prediction capabilities. It is essential to fully explore events such as these to properly characterize the fire behavior, pyroCb dynamics, and resultant influence on conditions in the upper troposphere and lower stratosphere (UTLS). It is also important to accurately characterize this transport process so that cloud, chemistry, and climate models have a firm basis on which to evaluate the pyrogenic source term, pathway from the boundary layer through cumulus cloud, and exhaust from the convective column.
Since the discovery of smoke in the stratosphere and the pyroCb, only a small number of individual case studies and modeling experiments have been performed. Hence, there is still much to be learned about the pyroCb and its importance. With this work scientists have attempted to reduce the unknowns by revealing several additional occasions when pyroCbs were either a significant or sole cause for the type of stratospheric pollution usually attributed to volcanic injections.
City firestorms
The same underlying combustion physics can also apply to human-made structures such as cities during war or natural disaster.
Firestorms are thought to have been part of the mechanism of large urban fires, such as accompanied the 1755 Lisbon earthquake, the 1906 San Francisco earthquake and the 1923 Great Kantō earthquake. Genuine firestorms are occurring more frequently in California wildfires, such as the 1991 wildfire disaster in Oakland, California, and the October 2017 Tubbs Fire in Santa Rosa, California.
During the July–August 2018 Carr Fire, a deadly fire vortex equivalent in size and strength to an EF-3 tornado spawned during the firestorm in Redding, California and caused tornado-like wind damage. Another wildfire which may be characterized as a firestorm was the Camp Fire, which at one point consumed up to 76 acres per minute, completely destroying the town of Paradise, California within 24 hours on November 8, 2018.
Firestorms were also created by the firebombing raids of World War II in cities like Hamburg and Dresden. Of the two nuclear weapons used in combat, only Hiroshima resulted in a firestorm. In contrast, experts suggest that due to the nature of modern U.S. city design and construction, a firestorm is unlikely after a nuclear detonation.
Firebombing
Firebombing is a technique designed to damage a target, generally an urban area, through the use of fire, caused by incendiary devices, rather than from the blast effect of large bombs. Such raids often employ both incendiary devices and high explosives. The high explosive destroys roofs, making it easier for the incendiary devices to penetrate the structures and cause fires. The high explosives also disrupt the ability of firefighters to douse the fires.
Although incendiary bombs have been used to destroy buildings since the start of gunpowder warfare, World War II saw the first use of strategic bombing from the air to destroy the ability of the enemy to wage war. London, Coventry, and many other British cities were firebombed during the Blitz. Most large German cities were extensively firebombed starting in 1942, and almost all large Japanese cities were firebombed during the last six months of World War II. As Sir Arthur Harris, the officer commanding RAF Bomber Command from 1942 through to the end of the war in Europe, pointed out in his post-war analysis, although many attempts were made to create deliberate human-made firestorms during World War II, few attempts succeeded:
According to physicist David Hafemeister, firestorms occurred after about 5% of all fire-bombing raids during World War II (but he does not explain if this is a percentage based on both Allied and Axis raids, or combined Allied raids, or U.S. raids alone). In 2005, the American National Fire Protection Association stated in a report that three major firestorms resulted from Allied conventional bombing campaigns during World War II: Hamburg, Dresden, and Tokyo. They do not include the comparatively minor firestorms at Kassel, Darmstadt or even Ube into their major firestorm category. Despite later quoting and corroborating Glasstone and Dolan and data collected from these smaller firestorms:
21st-century cities in comparison to World War II cities
Unlike the highly combustible World War II cities that firestormed from conventional and nuclear weapons, a FEMA report suggests that, due to the nature of modern U.S. city design and construction, a firestorm is unlikely to occur even after a nuclear detonation: high-rise buildings do not lend themselves to the formation of firestorms because of the baffle effect of the structures, and firestorms are also unlikely in areas whose modern buildings have totally collapsed. Tokyo and Hiroshima were exceptions because of the densely-packed "flimsy" wooden buildings of their World War II construction.
There is also a sizable difference between the fuel loading of World War II cities that firestormed and that of modern cities, where the quantity of combustibles per square meter in the fire area in the latter is below the necessary requirement for a firestorm to form (40 kg/m2). Therefore, firestorms are not to be expected in modern North American cities after a nuclear detonation, and are expected to be unlikely in modern European cities.
Similarly, one reason for the lack of success in creating a true firestorm in the bombing of Berlin in World War II was that the building density in Berlin was too low to support easy fire spread from building to building. Another reason was that much of the building construction was newer and better than in most of the old German city centers. Modern building practices in the Berlin of World War II led to more effective firewalls and fire-resistant construction. Mass firestorms never proved to be possible in Berlin. No matter how heavy the raid or what kinds of firebombs were dropped, no true firestorm ever developed.
Nuclear weapons in comparison to conventional weapons
The incendiary effects of a nuclear explosion do not present any especially characteristic features. In principle, the same overall result with respect to destruction of life and property can be achieved by the use of conventional incendiary and high-explosive bombs. It has been estimated, for example, that the same fire ferocity and damage produced at Hiroshima by one 16-kiloton nuclear bomb from a single B-29 could have instead been produced by about 1,200 tons/1.2 kilotons of incendiary bombs from 220 B-29s distributed over the city; for Nagasaki, the fire damage produced by the single 21-kiloton nuclear bomb dropped on the city has been estimated to be equivalent to that of 1,200 tons of incendiary bombs from 125 B-29s.
It may seem counterintuitive that the same amount of fire damage caused by a nuclear weapon could have instead been produced by a smaller total yield of thousands of incendiary bombs; however, World War II experience supports this assertion. For example, although not a perfect clone of the city of Hiroshima in 1945, in the conventional bombing of Dresden, the combined Royal Air Force (RAF) and United States Army Air Forces (USAAF) dropped a total of 3441.3 tons (approximately 3.4 kilotons) of ordnance (about half of which was incendiary bombs) on the night of 13–14 February 1945, and this resulted in a large area of the city being destroyed by fire and firestorm effects, with the precise figure differing between authoritative sources.
In total about 4.5 kilotons of conventional ordnance was dropped on the city over a number of months during 1945 and this resulted in a large fraction of the city being destroyed by blast and fire effects. During the Operation Meetinghouse firebombing of Tokyo on 9–10 March 1945, 279 of the 334 B-29s dropped 1,665 tons of incendiary and high-explosive bombs on the city, resulting in the destruction of over 10,000 acres of buildings, a quarter of the city.
In contrast to these raids, when a single 16-kiloton nuclear bomb was dropped on Hiroshima, a large area of the city was destroyed by blast, fire, and firestorm effects. Similarly, Major Cortez F. Enloe, a surgeon in the USAAF who worked with the United States Strategic Bombing Survey (USSBS), said that the 21-kiloton nuclear bomb dropped on Nagasaki did not do as much fire damage as the extended conventional airstrikes on Hamburg.
American historian Gabriel Kolko also echoed this sentiment:
This break from the linear expectation of more fire damage to occur after greater explosive yield is dropped can be easily explained by two major factors. First, the order of blast and thermal events during a nuclear explosion is not ideal for the creation of fires. In an incendiary bombing raid, incendiary weapons followed after high-explosive blast weapons were dropped, in a manner designed to create the greatest probability of fires from a limited quantity of explosive and incendiary weapons. The so-called two-ton "cookies", also known as "blockbusters", were dropped first and were intended to rupture water mains, as well as to blow off roofs, doors, and windows, creating an air flow that would feed the fires caused by the incendiaries that would then follow and be dropped, ideally, into holes created by the prior blast weapons, such as into attic and roof spaces.
On the other hand, nuclear weapons produce effects that are in the reverse order, with thermal effects and "flash" occurring first, which are then followed by the slower blast wave. It is for this reason that conventional incendiary bombing raids are considered to be a great deal more efficient at causing mass fires than nuclear weapons of comparable yield. It is likely this led the nuclear weapon effects experts Franklin D'Olier, Samuel Glasstone and Philip J. Dolan to state that the same fire damage suffered at Hiroshima could have instead been produced by about 1 kiloton/1,000 tons of incendiary bombs.
The second factor explaining the non-intuitive break in the expected results of greater explosive yield producing greater city fire damage is that city fire damage is largely dependent not on the yield of the weapons used, but on the conditions in and around the city itself, with the fuel loading per square meter value of the city being one of the major factors. A few hundred strategically placed incendiary devices would be sufficient to start a firestorm in a city if the conditions for a firestorm, namely high fuel loading, are already inherent to the city (see Bat bomb).
The Great Fire of London in 1666, although not forming a firestorm due to the single point of ignition, serves as an example that, given a densely packed and predominantly wooden and thatch building construction in the urban area, a mass fire is conceivable from the mere incendiary power of no more than a domestic fireplace. On the other hand, the largest nuclear weapon conceivable (more than a gigaton blast yield) will be incapable of igniting a city into a firestorm if the city's properties, namely its fuel density, are not conducive to one developing. It's worth remembering that such a device would still destroy any city in the world today from its shockwave alone, as well as irradiate the ruins to the point of uninhabitability. A device so large could even vaporize the city (and the crust beneath) all at once without such damage qualifying as a "firestorm".
Despite the disadvantage of nuclear weapons when compared to conventional weapons of lower or comparable yield in terms of effectiveness at starting fires, for the reasons discussed above, one undeniable advantage of nuclear weapons over conventional weapons when it comes to creating fires is that nuclear weapons undoubtedly produce all their thermal and explosive effects in a very short period of time. That is, to use Arthur Harris's terminology, they are the epitome of an air raid guaranteed to be concentrated in "point in time".
In contrast, early in World War II, the ability to achieve conventional air raids concentrated in "point of time" depended largely upon the skill of pilots to remain in formation, and their ability to hit the target whilst at times also being under heavy fire from anti-aircraft fire from the cities below. Nuclear weapons largely remove these uncertain variables. Therefore, nuclear weapons reduce the question of whether a city will firestorm or not to a smaller number of variables, to the point of becoming entirely reliant on the intrinsic properties of the city, such as fuel loading, and predictable atmospheric conditions, such as wind speed, in and around the city, and less reliant on the unpredictable possibility of hundreds of bomber crews acting together successfully as a single unit.
See also
Blackout (wartime)
Civilian casualties of strategic bombing
Fire whirl
Wildfire
Wildfire modeling
Potential firestorms
Portions of the following fires are often described as firestorms, but that has not been corroborated by any reliable references:
Great Fire of Rome (64 AD)
Great Fire of London (1666)
Great Chicago Fire (1871)
Peshtigo Fire (1871)
San Francisco earthquake (1906)
Great Kantō earthquake (1923)
Tillamook Burn (1933–1951)
Second Great Fire of London (1940)
Ash Wednesday bushfires (1983)
Yellowstone fires (1988)
Canberra bushfires (2003)
Okanagan Mountain Park Fire (2003)
Black Saturday bushfires (2009)
Fort McMurray wildfire (2016)
Pedrógão Grande wildfire (2017)
Carr Fire (2018)
2021 British Columbia wildfires (2021)
References
Further reading
Fire | Firestorm | Chemistry | 5,246 |
21,922,289 | https://en.wikipedia.org/wiki/R-Gator | The iRobot R-Gator is an unmanned robotic platform from iRobot Corporation and John Deere.
Technical details
The robot is built upon Deere's M-Gator currently in use by the US Military. The R-Gator can operate autonomously, performing perimeter patrol and other missions while keeping personnel out of harm's way. It can operate autonomously by following a map or choosing its own waypoints to reach a pre-determined destination. It can also do "follow the leader" operations to keep up with troops, be tele-operated, or driven manually if needed. In military exercises, the R-Gator has shown an ability to carry gear for soldiers to lighten their loads. It also demonstrated its capacity to carry and drop off heavy explosive ordnance disposal robots. An R-Gator could deploy the smaller machine and provide unmanned perimeter security while the EOD robot dismantles the bomb. The first R-Gator sale to the military was to the U.S. Navy Space and Naval Warfare Systems Command for autonomous perimeter security.
References
External links
John Deere: R-Gator
Gizmag article
Defense Update article
Unmanned ground vehicles
IRobot
John Deere vehicles | R-Gator | Engineering | 252 |
18,865,257 | https://en.wikipedia.org/wiki/Explosion%20crater | An explosion crater is a type of crater formed when material is ejected from the surface of the ground by an explosion at or immediately above or below the surface.
A crater is formed by an explosion through the displacement and ejection of material from the ground. It is typically bowl-shaped. High-pressure gas and shock waves cause three processes responsible for the creation of the crater:
Plastic deformation of the ground.
Projection of material (ejecta) from the ground by the explosion.
Spallation of the ground surface.
Two processes partially fill the crater back in:
Fall-back of ejecta.
Erosion and landslides of the crater lip and wall.
The relative importance of the five processes varies, depending on the height above or depth below the ground surface at which the explosion occurs and on the composition of the ground.
Examples
One of the largest explosion craters in Germany is in the borough of Prüm. It was caused by a huge explosion in 1949, in which an ammunition depot exploded due to unknown causes and large parts of the town were destroyed.
See also
Hydrothermal explosion, caused by rapid heating of groundwater by volcanic sources
Maar, a crater caused by a volcanic explosion
Pseudocrater, a volcanic crater formed by a steam explosion
Subsidence crater, a depression in the ground formed by collapse of a void below the surface
Impact crater, a depression formed by excavation following a hypervelocity impact
Volcanic crater
Gas emission crater
References | Explosion crater | Chemistry | 290 |
43,725,144 | https://en.wikipedia.org/wiki/Seeblatt | Seeblatt (German for 'lake leaf'; East Frisian: Pupkeblad) is the term for the stylized leaf of a water lily, used as a charge in heraldry.
Background
This charge is used in the heraldry of Germany, the Netherlands and Scandinavia, but not so much in France and Britain. Seeblätter feature prominently on the coat of arms of Denmark as well as on Danish coins.
In West Frisian, the term pompeblêd is used. The name is used to indicate the seven red lily leaf-shaped blades on the Frisian flag. The seven red pompeblêden (leaves of the yellow water lily and the European white waterlily) refer to the medieval Frisian 'sea districts': more or less autonomous regions along the Southern North Sea coast from the city of Alkmaar to the Weser River. There never have been exactly seven of these administrative units, the number of seven bears the suggestion of 'a lot'. Late medieval sources identify seven Frisian districts, though with different names. The most important regions were West Friesland, Westergo, Oostergo, Hunsingo, Fivelingo, Reiderland, Emsingo, Brokmerland, Harlingerland and Rüstringen (Jeverland and Butjadingen).
Gallery
See also
Flag of Friesland
Heart symbol
Cardioid, a geometrical curve resembling the outline of a seeblatt
SC Heerenveen, Dutch football club whose home kit features seven lily-shaped blades
References
Symbols
Heraldic charges
Visual motifs | Seeblatt | Mathematics | 328 |
13,532,062 | https://en.wikipedia.org/wiki/Carbarsone | Carbarsone is an organoarsenic compound used as an antiprotozoal drug for treatment of amebiasis and other infections. It was available for amebiasis in the United States as late as 1991. Thereafter, it remained available as a turkey feed additive for increasing weight gain and controlling histomoniasis (blackhead disease).
Carbarsone is one of four arsenical animal drugs approved by the U.S. Food and Drug Administration for use in poultry and/or swine, along with nitarsone, arsanilic acid, and roxarsone. In September 2013, the FDA announced that Zoetis and Fleming Laboratories would voluntarily withdraw current roxarsone, arsanilic acid, and carbarsone approvals, leaving only nitarsone approvals in place. In 2015 FDA withdrew the approval of using nitarsone in animal feeds. The ban came into effect at the end of 2015.
References
Antiprotozoal agents
Arsonic acids
Ureas
Anilines | Carbarsone | Chemistry,Biology | 216 |
211,566 | https://en.wikipedia.org/wiki/Direct%20limit | In mathematics, a direct limit is a way to construct a (typically large) object from many (typically smaller) objects that are put together in a specific way. These objects may be groups, rings, vector spaces or in general objects from any category. The way they are put together is specified by a system of homomorphisms (group homomorphism, ring homomorphism, or in general morphisms in the category) between those smaller objects. The direct limit of the objects $A_i$, where $i$ ranges over some directed set $I$, is denoted by $\varinjlim A_i$. This notation suppresses the system of homomorphisms; however, the limit depends on the system of homomorphisms.
Direct limits are a special case of the concept of colimit in category theory. Direct limits are dual to inverse limits, which are a special case of limits in category theory.
Formal definition
We will first give the definition for algebraic structures like groups and modules, and then the general definition, which can be used in any category.
Direct limits of algebraic objects
In this section objects are understood to consist of underlying sets equipped with a given algebraic structure, such as groups, rings, modules (over a fixed ring), algebras (over a fixed field), etc. With this in mind, homomorphisms are understood in the corresponding setting (group homomorphisms, etc.).
Let $\langle I, \le \rangle$ be a directed set. Let $\{A_i : i \in I\}$ be a family of objects indexed by $I$ and let $f_{ij}\colon A_i \to A_j$ be a homomorphism for all $i \le j$ with the following properties:
$f_{ii}$ is the identity on $A_i$, and
$f_{ik} = f_{jk} \circ f_{ij}$ for all $i \le j \le k$.
Then the pair $\langle A_i, f_{ij} \rangle$ is called a direct system over $I$.
The direct limit of the direct system $\langle A_i, f_{ij} \rangle$ is denoted by $\varinjlim A_i$ and is defined as follows. Its underlying set is the disjoint union of the $A_i$'s modulo a certain equivalence relation $\sim$:
$$\varinjlim A_i = \bigsqcup_i A_i \Big/ \sim .$$
Here, if $x_i \in A_i$ and $x_j \in A_j$, then $x_i \sim x_j$ if and only if there is some $k \in I$ with $i \le k$ and $j \le k$ such that $f_{ik}(x_i) = f_{jk}(x_j)$.
Intuitively, two elements in the disjoint union are equivalent if and only if they "eventually become equal" in the direct system. An equivalent formulation that highlights the duality to the inverse limit is that an element is equivalent to all its images under the maps of the direct system, i.e. $x_i \sim f_{ik}(x_i)$ whenever $i \le k$.
One obtains from this definition canonical functions $\phi_i\colon A_i \to \varinjlim A_i$ sending each element to its equivalence class. The algebraic operations on $\varinjlim A_i$ are defined such that these maps become homomorphisms. Formally, the direct limit of the direct system $\langle A_i, f_{ij} \rangle$ consists of the object $\varinjlim A_i$ together with the canonical homomorphisms $\phi_i\colon A_i \to \varinjlim A_i$.
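As a small worked instance of this construction (an illustration added here, not one of the article's examples below), take every object to be the integers $\mathbb{Z}$ and let each bonding map be multiplication by 2; the direct limit is then the ring of dyadic rationals:

```latex
% Direct system over I = N:  Z --(x2)--> Z --(x2)--> Z --> ...,  with f_{nm}(x) = 2^{m-n} x.
% Two elements x in A_m and y in A_n are identified exactly when they have a common
% image, i.e. 2^{k-m} x = 2^{k-n} y for some k >= m, n.
\varinjlim \left( \mathbb{Z} \xrightarrow{\;\cdot 2\;} \mathbb{Z} \xrightarrow{\;\cdot 2\;} \cdots \right)
  \;\cong\; \mathbb{Z}\!\left[\tfrac{1}{2}\right],
\qquad
\phi_n(x) = \frac{x}{2^{\,n}},
\qquad
\phi_m\bigl(f_{nm}(x)\bigr) = \frac{2^{\,m-n}\,x}{2^{\,m}} = \phi_n(x).
```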
Direct limits in an arbitrary category
The direct limit can be defined in an arbitrary category $\mathcal{C}$ by means of a universal property. Let $\langle X_i, f_{ij} \rangle$ be a direct system of objects and morphisms in $\mathcal{C}$ (as defined above). A target is a pair $\langle X, \phi_i \rangle$ where $X$ is an object in $\mathcal{C}$ and $\phi_i\colon X_i \to X$ are morphisms for each $i \in I$ such that $\phi_i = \phi_j \circ f_{ij}$ whenever $i \le j$. A direct limit of the direct system $\langle X_i, f_{ij} \rangle$ is a universally repelling target $\langle X, \phi_i \rangle$ in the sense that $\langle X, \phi_i \rangle$ is a target and for each target $\langle Y, \psi_i \rangle$, there is a unique morphism $u\colon X \to Y$ such that $u \circ \phi_i = \psi_i$ for each $i$. The following diagram
will then commute for all i, j.
The direct limit is often denoted
$$X = \varinjlim X_i ,$$
with the direct system $\langle X_i, f_{ij} \rangle$ and the canonical morphisms $\phi_i$ (or, more precisely, canonical injections $\iota_i$) being understood.
Unlike for algebraic objects, not every direct system in an arbitrary category has a direct limit. If it does, however, the direct limit is unique in a strong sense: given another direct limit X′ there exists a unique isomorphism X′ → X that commutes with the canonical morphisms.
Examples
A collection of subsets of a set can be partially ordered by inclusion. If the collection is directed, its direct limit is the union . The same is true for a directed collection of subgroups of a given group, or a directed collection of subrings of a given ring, etc.
The weak topology of a CW complex is defined as a direct limit.
Let $I$ be any directed set with a greatest element $m$. The direct limit of any corresponding direct system is isomorphic to $X_m$ and the canonical morphism $\phi_m\colon X_m \to X$ is an isomorphism.
Let K be a field. For a positive integer n, consider the general linear group GL(n;K) consisting of invertible n x n - matrices with entries from K. We have a group homomorphism GL(n;K) → GL(n+1;K) that enlarges matrices by putting a 1 in the lower right corner and zeros elsewhere in the last row and column. The direct limit of this system is the general linear group of K, written as GL(K). An element of GL(K) can be thought of as an infinite invertible matrix that differs from the infinite identity matrix in only finitely many entries. The group GL(K) is of vital importance in algebraic K-theory.
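The bordering map just described is easy to make concrete. The sketch below (illustrative only; it assumes NumPy is available and the helper name is made up for this example) embeds an element of GL(2, R) into GL(3, R) in exactly the way the text describes:

```python
import numpy as np

def embed_gl(matrix: np.ndarray) -> np.ndarray:
    """Embed an invertible n x n matrix into GL(n+1) by adding a last
    row and column of zeros with a 1 in the lower-right corner."""
    n = matrix.shape[0]
    out = np.zeros((n + 1, n + 1), dtype=matrix.dtype)
    out[:n, :n] = matrix
    out[n, n] = 1.0
    return out

a = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # an element of GL(2, R)
b = embed_gl(a)              # the corresponding element of GL(3, R)
print(b)
print(np.linalg.det(a), np.linalg.det(b))  # the embedding preserves the determinant (both -1)
```

Iterating this map gives the chain GL(1) → GL(2) → GL(3) → … whose direct limit is the group GL(K) described above.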
Let $p$ be a prime number. Consider the direct system composed of the factor groups $\mathbb{Z}/p^n\mathbb{Z}$ and the homomorphisms $\mathbb{Z}/p^n\mathbb{Z} \to \mathbb{Z}/p^{n+1}\mathbb{Z}$ induced by multiplication by $p$. The direct limit of this system consists of all the roots of unity of order some power of $p$, and is called the Prüfer group $\mathbb{Z}(p^\infty)$.
There is a (non-obvious) injective ring homomorphism from the ring of symmetric polynomials in variables to the ring of symmetric polynomials in variables. Forming the direct limit of this direct system yields the ring of symmetric functions.
Let F be a C-valued sheaf on a topological space X. Fix a point x in X. The open neighborhoods of x form a directed set ordered by inclusion (U ≤ V if and only if U contains V). The corresponding direct system is (F(U), rU,V) where r is the restriction map. The direct limit of this system is called the stalk of F at x, denoted Fx. For each neighborhood U of x, the canonical morphism F(U) → Fx associates to a section s of F over U an element sx of the stalk Fx called the germ of s at x.
Direct limits in the category of topological spaces are given by placing the final topology on the underlying set-theoretic direct limit.
An ind-scheme is an inductive limit of schemes.
Properties
Direct limits are linked to inverse limits via
$$\operatorname{Hom}(\varinjlim X_i, Y) = \varprojlim \operatorname{Hom}(X_i, Y).$$
An important property is that taking direct limits in the category of modules is an exact functor. This means that if you start with a directed system of short exact sequences $0 \to A_i \to B_i \to C_i \to 0$ and form direct limits, you obtain a short exact sequence $0 \to \varinjlim A_i \to \varinjlim B_i \to \varinjlim C_i \to 0$.
Related constructions and generalizations
We note that a direct system in a category $\mathcal{C}$ admits an alternative description in terms of functors. Any directed set $\langle I, \le \rangle$ can be considered as a small category whose objects are the elements of $I$ and in which there is a morphism $i \to j$ if and only if $i \le j$. A direct system over $I$ is then the same as a covariant functor $I \to \mathcal{C}$. The colimit of this functor is the same as the direct limit of the original direct system.
A notion closely related to direct limits are the filtered colimits. Here we start with a covariant functor from a filtered category to some category and form the colimit of this functor. One can show that a category has all directed limits if and only if it has all filtered colimits, and a functor defined on such a category commutes with all direct limits if and only if it commutes with all filtered colimits.
Given an arbitrary category , there may be direct systems in that don't have a direct limit in (consider for example the category of finite sets, or the category of finitely generated abelian groups). In this case, we can always embed into a category in which all direct limits exist; the objects of are called ind-objects of .
The categorical dual of the direct limit is called the inverse limit. As above, inverse limits can be viewed as limits of certain functors and are closely related to limits over cofiltered categories.
Terminology
In the literature, one finds the terms "directed limit", "direct inductive limit", "directed colimit", "direct colimit" and "inductive limit" for the concept of direct limit defined above. The term "inductive limit" is ambiguous however, as some authors use it for the general concept of colimit.
See also
Direct limits of groups
Notes
References
Limits (category theory)
Abstract algebra | Direct limit | Mathematics | 1,663 |
5,260,835 | https://en.wikipedia.org/wiki/Solitary%20tract | The solitary tract (tractus solitarius or fasciculus solitarius) is a compact fiber bundle that extends longitudinally through the posterolateral region of the medulla oblongata. The solitary tract is surrounded by the solitary nucleus, and descends to the upper cervical segments of the spinal cord. It was first named by Theodor Meynert in 1872.
Composition
The solitary tract is made up of primary sensory fibers and descending fibers of the vagus, glossopharyngeal, and facial nerves.
Function
The solitary tract conveys afferent information from stretch receptors and chemoreceptors in the walls of the cardiovascular, respiratory, and intestinal tracts. Afferent fibers from cranial nerves 7, 9 and 10 convey taste (SVA) in its rostral portion, and general visceral sense (general visceral afferent fibers, GVA) in its caudal part. Taste buds in the mucosa of the tongue can also generate impulses in the rostral regions of the solitary tract. The efferent fibers are distributed to the solitary tract nucleus.
Synonyms
There are numerous synonyms for the solitary tract:
round fasciculus (Latin: fasciculus rotundus)
solitary fasciculus (Latin: fasciculus solitarius)
solitary bundle (Latin: funiculus solitarius)
Gierke respiratory bundle (Named for German anatomist Hans Paul Bernhard Gierke).
Krause respiratory bundle (Named for German anatomist Johann Friedrich Wilhelm Krause).
References
Medulla oblongata
Neurophysiology
Vagus nerve
Glossopharyngeal nerve
Facial nerve
Gustatory system
Human homeostasis
Innervation of the tongue | Solitary tract | Biology | 362 |
29,635,297 | https://en.wikipedia.org/wiki/S-tag | S-tag is the name of an oligopeptide derived from pancreatic ribonuclease A (RNase A).
If RNase A is digested with subtilisin, a single peptide bond is cleaved, but the two resulting products remain weakly bound to each other, and the complex, called ribonuclease S, remains active although each of the two products alone shows no enzymatic activity. The N-terminus of the original RNase A, also called S-peptide, consists of 20 amino acid residues, of which only the first 15 are required for ribonuclease activity. This 15-amino-acid peptide is called S15 or S-tag.
The amino acid sequence of the S-tag is: Lys-Glu-Thr-Ala-Ala-Ala-Lys-Phe-Glu-Arg-Gln-His-Met-Asp-Ser. It is believed that the peptide, with its abundance of charged and polar residues, could improve the solubility of proteins it is attached to. Moreover, the peptide alone is thought not to fold into a distinct structure. At the DNA level, the S-tag can be attached to the N- or C-terminus of any protein. After gene expression, such a tagged protein can be detected by commercially available antibodies.
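As an illustrative check of the claim about charged and polar residues, the short script below (not part of the original article; the residue grouping used is a common textbook classification) tallies the composition of the 15-residue sequence given above:

```python
# Tally charged and polar residues in the S-tag (one-letter amino acid codes).
S_TAG = "KETAAAKFERQHMDS"  # Lys-Glu-Thr-Ala-Ala-Ala-Lys-Phe-Glu-Arg-Gln-His-Met-Asp-Ser

CHARGED = set("KRHDE")           # Lys, Arg, His, Asp, Glu
POLAR_UNCHARGED = set("STNQCY")  # Ser, Thr, Asn, Gln, Cys, Tyr

charged = sum(aa in CHARGED for aa in S_TAG)
polar = sum(aa in POLAR_UNCHARGED for aa in S_TAG)
print(f"{len(S_TAG)} residues: {charged} charged, {polar} polar uncharged")
# -> 15 residues: 7 charged, 3 polar uncharged
```

Ten of the fifteen residues are charged or polar, consistent with the solubility argument made above.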
References
1. R.T. Raines et al., The S-Tag Fusion System for Protein Purification. Methods Enzymol. 326, 362-367 (2000)
Peptides | S-tag | Chemistry | 319 |
39,093,199 | https://en.wikipedia.org/wiki/Mathematical%20Q%20models | Mathematical Q models provide a model of the earth's response to seismic waves. In reflection seismology, the seismic quality factor Q, which is inversely proportional to the anelastic attenuation factor, quantifies the effects of anelastic attenuation on the seismic wavelet caused by fluid movement and grain-boundary friction. When a plane wave propagates through a homogeneous viscoelastic medium, the effects of amplitude attenuation and velocity dispersion may be combined conveniently into the single dimensionless parameter, Q. As a seismic wave propagates through a medium, the elastic energy associated with the wave is gradually absorbed by the medium, eventually ending up as heat energy. This is known as absorption (or anelastic attenuation) and will eventually cause the total disappearance of the seismic wave.
The frequency-dependent attenuation of seismic waves leads to decreased resolution of seismic images with depth. Transmission losses may also occur due to friction or fluid movement, and for a given physical mechanism, they can be conveniently described with an empirical formulation where elastic moduli and propagation velocity are complex functions of frequency. Bjørn Ursin and Tommy Toverud published an article where they compared different Q models.
Basics
In order to compare the different models they considered plane-wave propagation in a homogeneous viscoelastic medium. They used the Kolsky–Futterman model as a reference and studied several other models. These other models were compared with the behavior of the Kolsky–Futterman model.
The Kolsky–Futterman model was first described in the article ‘Dispersive body waves’ by Futterman (1962).
'Seismic inverse Q-filtering' by Yanghua Wang (2008) contains an outline discussing the theory of Futterman, beginning with the wave equation:
where U(r,w) is the plane wave of radial frequency w at travel distance r, k is the wavenumber and i is the imaginary unit. Reflection seismograms record the reflection wave along the propagation path r from the source to reflector and back to the surface.
Equation (1.1) has an analytical solution given by:
where k is the wave number. When the wave propagates in inhomogeneous seismic media the propagation constant k must be a complex value that includes not only an imaginary part, the frequency-dependent attenuation coefficient, but also a real part, the dispersive wave number. We can call this K(w) a propagation constant in line with Futterman.
k(w) can be linked to the phase velocity of the wave with the formula:
Kolsky's attenuation-dispersion model
To obtain a solution that can be applied to seismic k(w) must be connected to a function that represents the way in which U(r,w) propagates in the seismic media. This function can be regarded as a Q-model.
In his outline Wang calls the Kolsky–Futterman model the Kolsky model. The model assumes the attenuation α(w) to be strictly linear with frequency over the range of measurement:
And defines the phase velocity as:
where cr and Qr are the phase velocity and the Q value at a reference frequency wr.
For a large value of Qr >> 1 the solution (1.6) can be approximated to
where
Kolsky’s model was derived from and fit well with experimental observations. The theory for materials satisfying the linear attenuation assumption requires that the reference frequency wr is a finite (arbitrarily small but nonzero) cut-off on the absorption. According to Kolsky, we are free to choose wr following the phenomenological criterion that it be small compared with the lowest measured frequency w in the frequency band. More information regarding this concept can be found in Futterman (1962)
Computations
For each of the Q models Ursin B. and Toverud T. presented in their article they computed the attenuation (1.5) and phase velocity (1.6) in the frequency band 0–300 Hz. Fig.1. presents the graph for the Kolsky model – attenuation (left) and phase velocity (right) with cr = 2000 m/s, Qr = 100 and wr = 2100 Hz.
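The closed forms of equations (1.5) and (1.6) are not reproduced in this text, so the snippet below is only a rough numerical sketch: it evaluates one commonly cited form of the Kolsky attenuation and phase-velocity expressions with the stated parameters (cr = 2000 m/s, Qr = 100); the reference angular frequency here is an assumption, and the exact expressions should be taken from the cited papers rather than from this sketch.

```python
import numpy as np

# Assumed (commonly cited) form of the Kolsky model, not quoted from the article:
#   attenuation:     alpha(w) = w / (2 * c_r * Q_r)
#   phase velocity:  1 / c(w) = (1 / c_r) * (1 - ln(w / w_r) / (pi * Q_r))
c_r, Q_r = 2000.0, 100.0          # reference phase velocity [m/s] and quality factor
w_r = 2.0 * np.pi * 100.0         # assumed reference angular frequency [rad/s]

f = np.linspace(1.0, 300.0, 300)  # frequency band 1-300 Hz (avoiding f = 0)
w = 2.0 * np.pi * f

alpha = w / (2.0 * c_r * Q_r)                                        # attenuation [1/m]
c = 1.0 / ((1.0 / c_r) * (1.0 - np.log(w / w_r) / (np.pi * Q_r)))    # phase velocity [m/s]

print(f"attenuation at 300 Hz: {alpha[-1]:.2e} 1/m")
print(f"phase velocity at 300 Hz: {c[-1]:.1f} m/s")
```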
Q models
Wang listed the different Q models that Ursin B. and Toverud T. applied in their study, classifying the models into two groups. The first group consists of models 1-5 below, the other group including models 6-8. The main difference between these two groups is the behaviour of the phase velocity when the frequency approaches zero. Whereas the first group has a zero-valued phase velocity, the second group has a finite, nonzero phase velocity.
1) the Kolsky model (linear attenuation)
2) the Strick–Azimi model (power-law attenuation)
3) the Kjartansson model (constant Q)
4) Azimi's second and third models (non-linear attenuation)
5) Müller's model (power-law Q)
6) Standard linear solid Q model for attenuation and dispersion the Zener model (the standard linear solid)
7) the Cole–Cole model (a general linear-solid)
8) a new general linear model
Notes
References
External links
Some aspects of seismic inverse Q-filtering theory by Knut Sørsdal
Seismology measurement
Geophysics | Mathematical Q models | Physics | 1,147 |
18,788,210 | https://en.wikipedia.org/wiki/International%20Committee%20for%20the%20History%20of%20Technology | The International Committee for the History of Technology (ICOHTEC) is a UNESCO-based non-profit organization of scholars working on the history of technology. It was founded in Paris in 1968, when the Cold War divided the nations in the Eastern and Western worlds. At that time, ICOHTEC provided a forum for scholars of the history of technology from both sides of the iron curtain. It was constituted as a Scientific Section within the Division of History of Science and Technology of the International Union of History and Philosophy of Science and Technology (IUHPST/DHST). The first President was Eugeniusz Olszewski (Poland), with Vice-Presidents S. V. Schuchardine (Soviet Union) and Melvin Kranzberg (USA), whose role in the foundation of ICOHTEC deserves a special mention. The first Secretary-General was Maurice Daumas (France); following his initiative the French government hosted the first independent ICOHTEC symposium at Pont-a-Mousson (1970).
For the past several decades, ICOHTEC's principal activity has been an annual meeting, where scholars from many countries and from many disciplines gather and share their work. Papers presented at the meetings are usually published in the Committee's annual journal ICON. Nowadays, these symposia are attended by 150–400 participants. They usually take place in Europe, but ICOHTEC has visited nearly all continents. ICOHTECians met in Mexico City in 2001; in Beijing in 2005; in Victoria, British Columbia, in 2009; in Tel Aviv in 2015; and in Rio de Janeiro in 2017. Due to the COVID-19 pandemic, ICOHTEC organized one of the first digital conferences in the humanities: “ICOHTEC digital 2020.” The conference was hosted by Eindhoven University of Technology in the Netherlands. ICOHTEC participates in the International Congress for the History of Science and Technology, ICHST, every four years.
Since its beginning, the society’s aim has been to bring together scholars from different countries, providing a forum for discussing their approaches and promoting new approaches in the history of technology. These are presented at ICOHTEC symposia and developed in thematic sections. One of the early topics of ICOHTEC symposia was “Science–Technology Relationships”. Whereas many historians conducted research on successful innovations, “Failed Innovations” has become a topic of ICOHTEC symposia already in the late 1980s.
“Technology and Music” and “Sound Studies” have been important topics of discussion since the mid-1990s and approaches to “Creativity in Engineering, Music and the Arts” followed in the 2000s. Sessions on the development of gunpowder and the “Social History of Military Technology” opened new perspectives on military history. “Energy, Technology and the Environment” has become a long-time subject, focused on different aspects of the field, which have been important for contemporary research. The cultural influence of “Playing with Technology” was analyzed in several sessions since 2009. “History of Technology for an Age of Crises” was the general theme of ICOHTEC’s first digital conference in 2020. It motivated many scholars. Beside contributions to the general theme, sessions on the “Technology of the Body” and on “Robots and AI” offered new perspectives.
Discussion between Eastern and Western scholars dominated the first decades of ICOHTEC. The society’s main task today is to stimulate and support research in the history of technology on different continents. Results of ICOHTEC symposia have been published in proceedings of many meetings. Annual reports, published in the journal Technology and Culture, inform about past symposia as well.
ICOHTEC’s peer-reviewed journal ICON was founded in 1995. It currently publishes two volumes a year. The journal includes the best papers of the symposia and other important articles on the history of technology and its methodology. Besides organizing symposia and publishing ICON, ICOHTEC promotes scholarship at early career stages. The organization awards prizes for outstanding books and articles of early career scholars in the history of technology: the Turriano ICOHTEC Prize for books or PhD theses and the Maurice Daumas Prize for articles. Summer Schools for PhD students have been organized since 2016. They focus on methodological approaches in the history of technology, linked to the main themes of the symposia.
See also
History of technology
References
External links
International Committee for the History of Technology
International learned societies
History of technology | International Committee for the History of Technology | Technology | 1,070 |
1,714,439 | https://en.wikipedia.org/wiki/Samarium%E2%80%93cobalt%20magnet | A samarium–cobalt (SmCo) magnet, a type of rare-earth magnet, is a strong permanent magnet made of two basic elements: samarium and cobalt.
They were developed in the early 1960s based on work done by Karl Strnat at Wright-Patterson Air Force Base and Alden Ray at the University of Dayton. In particular, Strnat and Ray developed the first formulation of SmCo5.
Samarium–cobalt magnets are generally ranked similarly in strength to neodymium magnets, but have higher temperature ratings and higher coercivity.
Attributes
Some attributes of samarium-cobalts are:
Samarium–cobalt magnets are extremely resistant to demagnetization.
These magnets have good temperature stability (maximum use temperatures between and ); Curie temperatures from to .
They are expensive and subject to price fluctuations (cobalt is market price sensitive).
Samarium–cobalt magnets have strong resistance to corrosion and oxidation, usually do not need to be coated, and can be widely used in high-temperature and harsh working conditions.
They are brittle, and prone to cracking and chipping. Samarium–cobalt magnets have maximum energy products (BHmax) that range from 14 megagauss-oersteds (MG·Oe) to 33 MG·Oe, that is approx. 112 kJ/m3 to 264 kJ/m3; their theoretical limit is 34 MG·Oe, about 272 kJ/m3.
Sintered samarium–cobalt magnets exhibit magnetic anisotropy, meaning they can only be magnetized in the axis of their magnetic orientation. This is done by aligning the crystal structure of the material during the manufacturing process.
Series
Samarium–cobalt magnets are available in two "series", namely SmCo5 magnets and Sm2Co17 magnets.
Series 1:5
These samarium–cobalt magnet alloys (generally written as SmCo5, or SmCo Series 1:5) have one atom of rare-earth samarium per five atoms of cobalt. By weight, this magnet alloy will typically contain 36% samarium with the balance cobalt. The energy products of these samarium–cobalt alloys range from 16 MG·Oe to 25 MG·Oe, that is, approx. 128–200 kJ/m3. These samarium–cobalt magnets generally have a reversible temperature coefficient of -0.05%/°C. Saturation magnetization can be achieved with a moderate magnetizing field. This series of magnets is easier to calibrate to a specific magnetic field than the SmCo 2:17 series magnets.
In the presence of a moderately strong magnetic field, unmagnetized magnets of this series will try to align their orientation axis to the magnetic field, thus becoming slightly magnetized. This can be an issue if postprocessing requires that the magnet be plated or coated. The slight field that the magnet picks up can attract debris during the plating or coating process, causing coating failure or a mechanically out-of-tolerance condition.
Br drifts with temperature and it is one of the important characteristics of magnet performance. Some applications, such as inertial gyroscopes and travelling wave tubes (TWTs), need to have constant field over a wide temperature range. The reversible temperature coefficient (RTC) of Br is defined as
(∆Br/Br) × (1/∆T) × 100%.
To address these requirements, temperature compensated magnets were developed in the late 1970s. For conventional SmCo magnets, Br decreases as temperature increases. Conversely, for GdCo magnets, Br increases as temperature increases within certain temperature ranges. By combining samarium and gadolinium in the alloy, the temperature coefficient can be reduced to nearly zero.
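A small numerical illustration of this definition (the measurement values below are hypothetical, chosen only to show the arithmetic):

```python
def reversible_temperature_coefficient(br_ref, br_hot, t_ref, t_hot):
    """RTC of Br in %/degC, following (dBr/Br) * (1/dT) * 100%."""
    d_br = br_hot - br_ref
    d_t = t_hot - t_ref
    return (d_br / br_ref) * (1.0 / d_t) * 100.0

# Hypothetical example: remanence falling from 1.05 T at 20 degC to 1.00 T at 120 degC.
rtc = reversible_temperature_coefficient(1.05, 1.00, 20.0, 120.0)
print(f"RTC = {rtc:.3f} %/degC")   # about -0.048 %/degC, close to the -0.05 quoted for the 1:5 series
```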
SmCo5 magnets have a very high coercivity (coercive force); that is, they are not easily demagnetized. They are fabricated by packing wide-grain single-domain magnetic powders. All of the magnetic domains are aligned with the easy-axis direction. In this case, all of the domain walls are 180° walls. When there are no impurities, the reversal process of the bulk magnet is equivalent to that of single-domain particles, where coherent rotation is the dominant mechanism. However, due to imperfections in fabrication, impurities may be introduced into the magnets, which form nuclei. In this case, because the impurities may have lower anisotropy or misaligned easy axes, their directions of magnetization are easier to rotate, which breaks the 180° domain-wall configuration. In such materials, the coercivity is controlled by nucleation. To obtain high coercivity, impurity control is critical in the fabrication process.
Series 2:17
These alloys (written as Sm2Co17, or SmCo Series 2:17) are age-hardened with a composition of two atoms of rare-earth samarium per 13–17 atoms of transition metals (TM). The TM content is rich in cobalt, but contains other elements such as iron and copper. Other elements like zirconium, hafnium, and such may be added in small quantities to achieve better heat treatment response. By weight, the alloy will generally contain 25% of samarium. The maximum energy products of these alloys range from 20 to 32 MGOe, which is about 160–260 kJ/m3. These alloys have the best reversible temperature coefficient of all rare-earth alloys, typically being -0.03%/°C. The "second generation" materials can also be used at higher temperatures.
In Sm2Co17 magnets, the coercivity mechanism is based on domain wall pinning. Impurities inside the magnets impede the domain wall motion and thereby resist the magnetization reversal process. To increase the coercivity, impurities are intentionally added during the fabrication process.
Production
Samarium–cobalt alloys are typically machined in the unmagnetized state. Samarium–cobalt should be ground using a wet grinding process (water-based coolants) and a diamond grinding wheel. The same type of process is required when drilling holes or machining other confined features. The grinding waste produced must not be allowed to dry completely, as samarium–cobalt has a low ignition point. A small spark, such as that produced by static electricity, can easily initiate combustion. The resulting fire can be extremely hot and difficult to control.
The reduction/melt method and reduction/diffusion method are used to manufacture samarium–cobalt magnets. The reduction/melt method will be described since it is used for both SmCo5 and Sm2Co17 production. The raw materials are melted in an induction furnace filled with argon gas. The mixture is cast into a mold and cooled with water to form an ingot. The ingot is pulverized and the particles are further milled to further reduce the particle size. The resulting powder is pressed in a die of desired shape, in a magnetic field to orient the magnetic field of the particles. Sintering is applied at a temperature of 1100˚C–1250˚C, followed by solution treatment at 1100˚C–1200˚C and tempering is finally performed on the magnet at about 700˚C–900˚C. It then is ground and further magnetized to increase its magnetic properties. The finished product is tested, inspected and packed.
Samarium can be substituted by a portion of other rare-earth elements including praseodymium, cerium, and gadolinium; the cobalt can be substituted by a portion of other transition metals including iron, copper, and zirconium.
Uses
Fender used one of designer Bill Lawrence's Samarium Cobalt Noiseless series of electric guitar pickups in Fender's Vintage Hot Rod '57 Stratocaster. These pickups were used in American Deluxe Series Guitars and Basses from 2004 until early 2010.
Samarium-cobalt (SmCo) magnets are used in aerospace and defense due to their exceptional magnetic properties. They are utilized in high-performance motors and actuators, precision sensors and gyroscopes, and satellite systems where stability and reliability are essential. They are also used in medical technologies, including MRI machines, pacemakers, and medical pumps.
In the mid-1980s some expensive headphones such as the Ross RE-278 used samarium–cobalt "Super Magnet" transducers.
Other uses include:
High-end electric motors used in the more competitive classes in slotcar racing
Turbomachinery
Traveling-wave tube field magnets
Applications that will require the system to function at cryogenic temperatures or very hot temperatures (over 180 °C)
Applications in which performance is required to be consistent with temperature change
Benchtop NMR spectrometers
Rotary encoders where it performs the function of magnetic actuator
See also
References
Cobalt alloys
Ferromagnetic materials
Loudspeaker technology
Magnetic alloys
Samarium compounds | Samarium–cobalt magnet | Physics,Chemistry,Materials_science,Engineering | 1,830 |
8,582,679 | https://en.wikipedia.org/wiki/PowerColor | PowerColor is a Taiwanese graphics card brand established in 1997 by TUL Corporation (撼訊科技), based in New Taipei, Taiwan. PowerColor maintains office locations in a number of countries, including Taiwan, the Netherlands and the United States. The United States branch is located in City of Industry, California and serves the North and Latin American markets. TUL also has another brand, VTX3D, which serves the European market and some Asian markets.
Products
PowerColor is a licensed producer of AMD Radeon video cards. The majority of PowerColor cards are manufactured by Foxconn.
PowerColor's AMD video cards range from affordable cards appropriate for low-end workstations, to cards for high-end gaming machines, thus catering to a wide range of the market. PowerColor's manufacturing arrangement with Foxconn has given it the ability to change the specifications of cards, allowing them to announce products with higher specifications—overclocked by default—than AMD or its main competitor, Sapphire Technology.
PowerColor products have been widely reviewed and have gained a number of awards at computer hardware review sites.
Support
PowerColor provides a two-year warranty on its products. To return a video card, the end-user must sign in and register their card. The return process is available only to end users in North America, with the customer liable for shipping.
See also
Diamond Multimedia – for North and South American markets
References
1997 establishments in Taiwan
Computer companies of Taiwan
Computer hardware companies
Electronics companies of Taiwan
Graphics hardware companies
Electronics companies established in 1997
Taiwanese brands
Manufacturing companies based in New Taipei | PowerColor | Technology | 327 |
76,053,547 | https://en.wikipedia.org/wiki/JNJ-20788560 | JNJ-20788560 is a potent opioid drug selective for the delta opioid receptor.
Mechanism of action
It works by activating opioid receptors, but it is selective for the δ-opioid receptor. This selectivity allows the drug to have fewer side effects than opioids such as morphine.
Tests have revealed that JNJ-20788560 does not produce hypoventilation, tolerance, or physical dependence.
References
Opioid agonists
Delta-opioid receptor agonists
Diethylamino compounds
Xanthenes
Nitrogen heterocycles
Heterocyclic compounds with 2 rings
Amides | JNJ-20788560 | Chemistry | 140 |
4,918,930 | https://en.wikipedia.org/wiki/Sophia%20Brahe | Sophia (or Sophie) Thott Lange (; 24 August 1559 or 22 September 1556 – 1643), known by her maiden name, was a Danish noblewoman and horticulturalist with knowledge of astronomy, chemistry, and medicine. She worked alongside her brother Tycho Brahe in making astronomical observations.
Life
She was born in Knudstrup Castle, Denmark as the youngest of ten children, to Otte Brahe, the rigsråd, or advisor, to the King of Denmark; and Beate Bille Brahe, leader of the royal household for Queen Sophie. Sophia's oldest brother was astronomer Tycho Brahe. Though he was both more than a decade her senior and raised in a separate household, the pair became quite close by the time Sophia was a teenager. The brother and sister were united by their work in science, and by their family's opposition to science as an appropriate activity for members of the aristocracy. They both desired a life filled with science and knowledge instead of the duties of a noble person.
She married Otto Thott in 1579, when he was 33 and she was at least twenty, though possibly older. They had one child before he died on 23 March 1588. Their son was , born in 1580. Upon her husband's death, Sophie Thott managed his property in Eriksholm (today Trolleholm Castle), running the estate to keep it profitable until her son came of age. During this time, she also became a horticulturalist, in addition to her studies in chemistry and medicine. The gardens she created in Eriksholm were said to be exceptional. Sophie was particularly interested in studying chemistry and medicine according to Paracelsus, in which small doses of poison might serve as strong medicines, and used her skills to treat the local poor. Sophie enjoyed the partial freedom she was allowed in the medical field. However, women were often not entitled to receive a college degree, the lack of which prevented her from practicing medicine as a legitimate physician. Similarly, she was devoted to the study of astrology and helped her brother with producing horoscopes.
On 21 July 1587, King Frederick II of Denmark signed a document transferring to Sophia Brahe the title of Årup farm in what is now Sweden.
Sophia continued to be a frequent visitor at Uranienborg where she met Erik Lange, a nobleman who studied alchemy and a friend of Tycho's. Erik Lange was a nobleman yet had little money to his name. His pursuit of alchemy left him financially unstable. He was especially fixated on producing gold, which led to his monetary problems. In 1590, Sophie took 13 visits to Uranienborg and became engaged to Lange. Since Lange used up most of his fortune with alchemy experiments, their marriage was delayed some years while he avoided his creditors and traveled to Germany to try to find patrons for his work. Tycho Brahe wrote the Latin epic poem "Urania Titani" during the couple's separation, expressed as a letter from his sister Sophia to her fiancé in 1594. Tycho casts Sophia as Urania, muse of astronomy, a further suggestion of his respect for her scientific endeavours.
In 1599, she visited Lange in Hamburg, but they did not marry until 1602 in Eckernförde. They lived in this town for a while in extreme poverty. Sophie wrote a long letter to her sister Margrethe Brahe, describing having to wear stockings with holes in them for her wedding. Lange's wedding clothes had to be returned to the pawn shop after the wedding, because the couple could not afford to keep them. She expressed anger with her family for not accepting her science studies, and for depriving her of money owed to her. By 1608, Erik Lange was living in Prague, and he died there in 1613 (Det Kongelige Bibliotek).
Sophia was often ridiculed and avoided because of her personal life, in particular her marriage to Erik Lange, which was opposed by everyone in her family except her brother Tycho.
Sophie Brahe personally financed the restoration of the local church, Ivetofta Kyrka. She planned to be buried there, and the lid for her unused sarcophagus remains in the church's armory. But, by 1616 she had moved permanently to Zealand and settled in Helsingør. In Zealand, she lived specifically in Elsinore where she worked primarily on horticulture and healing plants. She spent her last years writing up the genealogy of Danish noble families, publishing the first major version in 1626 (there were later additions). Her work is still considered a major source for early history of Danish nobility (Det Kongelige Bibliotek). She died in Helsingør in the year 1643, and was buried in the Torrlösa old church in the village of Torrlösa, east of the town of Landskrona in what was then Denmark but now is southern Sweden. That church housed a burial chapel for the Thott family that remained for some time even after the church itself was pulled down in the mid-19th century (the new Torrlösa church was built nearby). Currently, a stone setting marks the outlines of the Thott chapel, while the tombstone for Sophie Brahe is still standing on the site.
Career and research
Tycho wrote that he had trained Sophia in horticulture and chemistry, but he initially discouraged her from studying astronomy. Instead, Sophia learned astronomy on her own, studying books in German, and having Latin books translated with her own money so that she could read them as well. Later in both of their careers, Tycho began to discourage her from continuing her research into astronomy because he believed it to be too complex for the talents of a woman.
However, much of Tycho's apprehension about Sophia's learning actually did not come from concerns about her ability to perform astronomical observations. Rather, he worried that she would not be able to achieve the level of understanding necessary to work in the field of astrology, which was inextricably linked to astronomy. As astronomers, the Brahes would have been expected to provide horoscopes, which would have been taken very seriously by their customers.
Sophia frequently visited Uranienborg, Tycho's observatory on the then-Danish island of Hveen. There, she assisted him with astronomical observations associated with his publication De nova stella, or On the New Star. Specifically, she assisted with a set of observations on 11 November 1572, which led to the discovery of the supernova that is now called SN 1572, as well as observations of the 8 December 1573 lunar eclipse. The discovery of SN 1572 was especially significant in that it added to the growing body of evidence that seemed to refute the geocentric model of the universe. Sophia's assistance was also instrumental in Tycho's work on orbits, which was foundational to the modern methods used to predict the positions of the planets. Tycho's studies of orbits involved the most precise measurements of the planets' movements made prior to the invention of the telescope, and while Tycho created many of the astronomical devices used to conduct the measurements, Sophia was among the assistants who actually made the measurements. Tycho did have other assistants, however, and while Sophia was present for each of these discoveries, the extent to which she contributed personally is unknown. Tycho did commend Sophia for her efforts, though, referring with admiration to Sophia's animus invictus, or "determined mind."
After her series of contributions in the 1570s, Sophia achieved more autonomy with regards to her astronomical research than before. Despite the serious doubts Tycho had previously expressed about Sophia's ability to comprehend the nuances of horoscopes, when he was frequently away from Uranienborg between 1588 and 1597, Sophia took on much of Tycho's astrological responsibilities with their clients.
Once the Brahes had made some major observations, Tycho requested money from King Frederick II of Denmark to build further observation facilities on Hveen. The king was under the impression that the observatories were for Tycho's personal research; however, it is known that some of them were built for Sophia to carry out her own observations. Much of the data gathered throughout Tycho's life was passed down to his pupil Johannes Kepler rather than to his sister, Sophia Brahe. The work in which Sophia assisted her brother can be said to have helped lay the groundwork for Sir Isaac Newton.
Sophia was more interested in hands-on observation than in experimentation, a preference that became evident during her second marriage. In 1602 she married the alchemist Erik Lange, who, like many alchemists, was striving to transmute base metals into gold; Tycho and Sophie both rejected the idea of creating gold through alchemy. In pursuit of this goal Lange, with his wife's support, spent all of the money the two had saved, and they lived in extreme poverty until his death. Sophia then moved back to Denmark to live with her son, who likely supported her financially, and was able to continue her scientific work and write the genealogy of Danish noble families.
Urania Titani
"Urania Titani" was a six-hundred-line poem written in February 1594 about a fictional love correspondence between Sophia and Erik. The poem was written in 1594 but was published in 1668 by Peder Resen. "Urania Titani" was written in an Ovidian heroid form. In an Ovidian heroid form, the poem reads as a series of letters from a female protagonist to her lover. The form's name, Ovidian heroid, comes from the Roman poet Ovid.
The poem "Urania Titani" has been contested on who wrote it: some say it was Sophia, while others believe it was her brother Tycho. Peder Resen, the publisher of "Urania Titani", thought that Sophia was the author due to her role as the narrator. However, Tycho wrote a letter to Thomas Craig on 26 July 1594, in which Tycho stated he was the author. Likewise, the Ovidian heroid form had never been used in Denmark, where Sophie was from. The first poem in that form from Denmark, which we know of, was written in 1775. That would have been approximately 200 years after "Urania Titani" was written. Therefore, the likelihood that Sophia knew of this type of poem is slim compared to her brother. There is no evidence that would prove Tycho's statement to Thomas Craig as incorrect. Meanwhile, some think that Sophie helped Tycho write "Urania Titani." This is because Tycho wrote the poem in Latin, a language Sophia was not fluent in. Yet the poem was very personal, so some people think that Tycho must have had help in creating "Urania Titani."
"Urania Titani" contains a love story and descriptions of Sophie, Tycho, and Erik's horoscopes, which helps historians narrow down their correct birthdates. In the poem, Tycho represents Sophia as Urania, the Muse of astronomy in Greek mythology, and Erik as a Titan, a son of Uranus (mythology). Sophia is depicted as longing for her husband while he was studying alchemy abroad. The poem contains personal and sensitive information. For example, the poem describes Sophie's desire to have a child with her second husband, Erik Lange. "Urania Titani" established the co-dependence that Sophia and Tycho maintained, including their similar beliefs. Lastly, the poem was a large indicator of Tycho publicizing his bond with his sister, establishing himself as a Renaissance man and unashamed of his work with his sister.
Genealogy
Sophia is known for her work in genealogy. Sophia's first work was completed in 1600. During this time, genealogy was placed in documents called family books. These books contained many aspects of the family's life such as family members, traditions, and different family branches. In Sophia's renditions of her family book she included letters and correspondence with other women concerning their interwoven heritage and possible relatives. Sophia also included anecdotes from her family and rarely placed her own comments within her works. Sophia's work was common among women during her time, as women were valued for their penmanship and ability to maintain their households.
Legacy
Sophie, along with her brother Tycho, have come to represent the flowering of letters and science during the Danish Renaissance. She worked closely with her brother in his scientific endeavors and is thought to have acted as his muse. The two were so close that poet Johan L. Heiberg admonished that "Denmark must never forget the noble woman who, in spirit much more than flesh and blood, was Tycho Brahe's sister; the shining star in our Danish heaven is indeed a double one." In 1626 Sophie had completed a 900-page manuscript on the genealogies of 60 Danish noble families, which is held by Lund University.
See also
Timeline of women in science
Notes
References
Bibliography
Further reading
External links
Sophie Brahe
Sophie Brahe Manuscripts- The Royal Library (Copenhagen )
History of Scientific Women: Sophia BRAHE
1550s births
1643 deaths
16th-century Danish scientists
16th-century Danish historians
17th-century Danish historians
Sophia
Danish women scientists
Women astronomers
16th-century Danish astronomers
16th-century women scientists
17th-century women scientists
16th-century chemists
17th-century chemists
17th-century Danish women writers
17th-century Danish scientists
17th-century Danish writers
16th-century Danish women
Danish women historians
People from Helsingør
17th-century Danish astronomers
Scientists from Denmark–Norway | Sophia Brahe | Astronomy | 2,857 |
1,426,278 | https://en.wikipedia.org/wiki/Bionic%20Tower | The Bionic Tower (Spanish: Torre Biónica; Chinese: 仿生塔) was an imagined vertical city, designed for human habitation by Spanish architects Eloy Celaya, María Rosa Cervera and Javier Gómez. It would have a main tower high, with 300 stories housing approximately 100,000 people. The purpose of the Bionic Tower was to utilize bionics to address the issue of the world's rising population in an eco-friendly manner.
The Bionic Tower would be exactly 400 meters taller than the current tallest building, the Burj Khalifa.
The Bionic Tower is composed of two complexes. The first complex, Bionic Tower, is made up of twelve vertical neighborhoods, each eighty meters in height. The neighborhoods are separated by safety areas, designed to make for easier construction and evacuation in the case of emergency. Each neighborhood has two groups of buildings, one on the interior of the building and one on the exterior. Both groups of buildings are situated around large gardens and pools. The second complex, called the Base Island, is 1,000 meters in diameter, and is made up of many buildings, gardens, pools, and communication infrastructures. Foreseen uses of these complexes include hotels, offices, residential, commerce, cultural, sports and leisure.
In 1997, work on the prototype Bionic Vertical Space began. This was developed by the architects Eloy Celaya, María Rosa Cervera and Javier Gómez through the beginning of 2001. Eloy Celaya, who studied at Columbia University, is developing another project similar to the Bionic Tower.
While in office, then-Shanghai mayor Xu Kuangdi expressed an interest in the concept for his city. Hong Kong also reportedly expressed interest in the project.
Specifications
Authorship: Spanish architects Eloy Celaya, María Rosa Cervera and Javier Gómez
Urban model: Vertical city
Inhabitants: 100,000
Height:
Floors: 300
Elevators: 368 elevators ( or ), with vertical and horizontal movement
Footprint: × at base, expanding to × max
Area:
Artificial base island: diameter
Structure: Micro-structured high strength concrete (2 tons/cm3 or 1,372 MPa)
Maximum sway: lateral displacement
Technology: Bionic vertical space
Cost: USD $16 billion+
Location: Shanghai or Hong Kong
See also
List of tallest buildings in Hong Kong
List of tallest buildings in Shanghai
List of buildings with 100 floors or more
Arcology
Proposed tall buildings and structures
References
External links
torrebionica.com
you.com.au
Planned communities in China
Architecture in Spain
Proposed buildings and structures in China
Proposed skyscrapers in China
Unbuilt skyscrapers
Proposed arcologies | Bionic Tower | Technology | 534 |
3,007,213 | https://en.wikipedia.org/wiki/Electric%20discharge | In electromagnetism, an electric discharge is the release and transmission of electricity in an applied electric field through a medium such as a gas (i.e., an outgoing flow of electric current through a non-metal medium).
Applications
The properties and effects of electric discharges are useful over a wide range of magnitudes. Tiny pulses of current are used to detect ionizing radiation in a Geiger–Müller tube. A low steady current can illustrate the gas spectrum in a gas-filled tube. A neon lamp is an example of a gas-discharge lamp, useful both for illumination and as a voltage regulator. A flashtube generates a short pulse of intense light useful for photography by sending a heavy current through a gas arc discharge. Corona discharges are used in photocopiers.
Electric discharges can convey substantial energy to the electrodes at the ends of the discharge. A spark gap is used in internal combustion engines to ignite the fuel/air mixture on every power stroke. Spark gaps are also used to switch heavy currents in a Marx generator and to protect electrical apparatus. In electric discharge machining, multiple tiny electric arcs erode a conductive workpiece to a finished shape. Arc welding is used to assemble heavy steel structures, where the base metal is heated to melting by the arc's heat. An electric arc furnace sustains arc currents of tens of thousands of amperes and is used for steelmaking and the production of alloys and other products.
Examples
Examples of electric discharge phenomena include:
Brush discharge
Dielectric barrier discharge
Corona discharge
Electric glow discharge
Electric arc
Electrostatic discharge
Electric discharge in gases
Leader (spark)
Partial discharge
Streamer discharge
Vacuum arc
Townsend discharge
St. Elmo's fire
Lightning
Electric organ
See also
Debye sheath
Electrical breakdown
Electric discharge in gases
Lichtenberg figure
Space charge
References
Electrical phenomena
Plasma phenomena | Electric discharge | Physics | 374 |
4,908,574 | https://en.wikipedia.org/wiki/The%20Eyes%20of%20Darkness | The Eyes of Darkness is a thriller novel by American writer Dean Koontz, released in 1981. The book focuses on a mother who sets out on a quest to find out if her son indeed died one year ago, or if he's still alive.
Plot
A year after her son Danny dies in an alleged accident on a camping trip, stage producer Tina Evans starts receiving paranormal signals insinuating that the boy is still alive. Having never seen Danny's deceased body, she plans to exhume his corpse to put her mind to rest. Assisting Tina is Elliot Stryker, a newly acquainted lawyer who formerly worked for Army Intelligence and with whom she is having an affair. They are soon targeted by assassins hired by Project Pandora and barely escape alive. Tina, strongly convinced that Danny is still alive, sets out to discover what really happened to her son and rescue him. Elliot accompanies her and the pair are chased by other agents instructed to kill them. Tina is telepathically guided by Danny to an underground lab in the Sierra Nevada where her son has been subjected to horrific experiments by a top secret governmental organisation.
Characters
Christina (Tina) Evans – Danny's mother
Michael Evans – Danny's father and Tina's ex-husband
Elliot Stryker – Tina's partner and love interest
Danny Evans – Tina and Michael's son
Harold Kennebeck – judge
Carlton Dombey – scientist for Project Pandora
Aaron Zachariah – scientist for Project Pandora
George Alexander – boss of Project Pandora
Jack Morgan – pilot
Vivienne Neddler – Tina's house maid
Willis Bruckster – assassin hired by Project Pandora
Bob – assassin hired by Project Pandora
Vince – assassin hired by Project Pandora
Planned television adaptation
According to author Dean Koontz in the afterword of a 2008 paperback reissue, television producer Lee Rich purchased the rights for the book along with The Face of Fear, Darkfall, and a fourth unnamed novel for a television series based on Koontz's work. The Eyes of Darkness was assigned to Ann Powell and Rose Schacht, co-writers of Drug Wars: The Camarena Story, but they could never deliver an acceptable script. Ultimately, The Face of Fear is the only book of the four made into a television movie.
COVID-19 speculation
The novel mentions a bioweapon that in earlier editions is named Gorki-400 after the Soviet city of Gorki in which it was created. Due to the end of the Cold War, the origin of the bioweapon was changed to the Chinese city of Wuhan and it was renamed Wuhan-400 for the 1989 edition onward, prompting speculation from some in early 2020 that Koontz had somehow predicted coronavirus disease 2019 (COVID-19).
References
External links
1981 American novels
1981 science fiction novels
1980s horror novels
American horror novels
American science fiction novels
American thriller novels
Biological weapons in popular culture
Novels by Dean Koontz
Novels set in Nevada
Works published under a pseudonym | The Eyes of Darkness | Biology | 599 |
21,937,394 | https://en.wikipedia.org/wiki/Toxin%20Reviews | Toxin Reviews is a quarterly peer-reviewed medical journal covering multidisciplinary research on toxins derived from animals, plants and microorganisms. The aim is to publish reviews that are of broad interest and importance to the toxicology community as well as other life science communities. Toxin Reviews aims to encourage scientists to highlight the contribution of toxins as research tools in deciphering molecular and cellular mechanisms, and as prototypes of therapeutic agents. The reviews should emphasize the role of toxins in enhancing our fundamental understanding of life sciences, protein chemistry, structural biology, pharmacology, clinical toxicology and evolution. Moreover, prominence is given to reviews that propose new ideas or approaches and further the knowledge of toxicology. Toxin Reviews delivers up-to-date research on toxins, their characteristics, activities, and mechanisms of action, ranging in scope from new, underutilized substances, through anti-venoms to chemical and biological weapons. It is published by Taylor & Francis Group. The editor-in-chief is R. Manjunatha Kini, National University of Singapore.
The journal has a 2018 impact factor of 3.840 and an h-index of 38 in the journal category "Toxicology".
References
External links
Academic journals established in 1982
Toxicology journals
English-language journals
Taylor & Francis academic journals
Quarterly journals | Toxin Reviews | Environmental_science | 277 |
9,471,618 | https://en.wikipedia.org/wiki/Wildlife%20of%20Myanmar | The wildlife of Myanmar includes its flora and fauna and their natural habitats.
Flora
Like all Southeastern Asian forests, the forests of Myanmar can be divided into two categories: monsoon forest and rainforest. Monsoon forest is dry at least three months a year and is dominated by deciduous trees. Rainforest has a rainy season of at least nine months and is dominated by broadleaf evergreens.
In the region north of the Tropic of Cancer, in the Himalayan region, subtropical broadleaf evergreen dominates to an elevation of 2000 m; from 2000 m to 3000 m, semi-deciduous broadleaf dominates; and above 3000 m, evergreen conifers and subalpine forest are the primary flora up to the alpine scrubland.
The area from Yangon to Myitkyina is mostly monsoon forest, while the peninsular region south of Mawlamyine is primarily rainforest, with some overlap between the two. Along the coasts of Rakhine State and Tanintharyi Division, tidal forests occur in estuaries, lagoons, tidal creeks, and low islands. These forests are host to the much-depleted Myanmar Coast mangroves habitat of mangrove and other trees that grow in mud and are resistant to sea water. Forests along the beaches consist of palm trees, hibiscus, casuarinas, and other trees resistant to storms.
Fauna
Myanmar is home to nearly 300 known mammal species, 300 reptile species, and about 1000 bird species. There are also many non-marine molluscs in Myanmar.
See also
Deforestation in Myanmar
References
Sources
Myanmar
Biota of Myanmar | Wildlife of Myanmar | Biology | 314 |
1,131,331 | https://en.wikipedia.org/wiki/Position%20error | Position error is one of the errors affecting the systems in an aircraft for measuring airspeed and altitude. It is not practical or necessary for an aircraft to have an airspeed indicating system and an altitude indicating system that are exactly accurate. A small amount of error is tolerable. It is caused by the location of the static vent that supplies air pressure to the airspeed indicator and altimeter; there is no position on an aircraft where, at all angles of attack, the static pressure is always equal to atmospheric pressure.
Static system
All aircraft are equipped with a small hole in the surface of the aircraft called the static port. The air pressure in the vicinity of the static port is conveyed by a conduit to the altimeter and the airspeed indicator. This static port and the conduit constitute the aircraft's static system. The objective of the static system is to sense the pressure of the air at the altitude at which the aircraft is flying. In an ideal static system the air pressure fed to the altimeter and airspeed indicator is equal to the pressure of the air at the altitude at which the aircraft is flying.
As the air flows past an aircraft in flight, the streamlines are affected by the presence of the aircraft, and the speed of the air relative to the aircraft is different at different positions on the aircraft's outer surface. In consequence of Bernoulli's principle, the different speeds of the air result in different pressures at different positions on the aircraft's surface. The ideal position for a static port is a position where the local air pressure in flight is always equal to the pressure remote from the aircraft, however there is no position on an aircraft where this ideal situation exists for all angles of attack. When deciding on a position for a static port, aircraft designers attempt to find a position where the error between static pressure and free-stream pressure is a minimum across the operating range of angle of attack of the aircraft. The residual error at any given angle of attack is called the position error.
Position error affects the indicated airspeed and the indicated altitude. Aircraft manufacturers use the aircraft flight manual to publish details of the error in indicated airspeed and indicated altitude across the operating range of speeds. In many aircraft, the effect of position error on airspeed is shown as the difference between indicated airspeed and calibrated airspeed. In some low-speed aircraft, the position error is shown as the difference between indicated airspeed and equivalent airspeed.
Pitot system
Bernoulli's principle states that total pressure (or stagnation pressure) is constant along a streamline. There is no variation in stagnation pressure, regardless of the position on the streamline where it is measured. There is no position error associated with stagnation pressure.
The pitot tube supplies pressure to the airspeed indicator. Pitot pressure is equal to stagnation pressure providing the pitot tube is aligned with the local airflow, it is located outside the boundary layer, and outside the wash from the propeller. Pitot pressure can suffer alignment error but it is not vulnerable to position error.
Aircraft design standards
Aircraft design standards specify a maximum amount of Pitot-static system error. The error in indicated altitude must not be excessive because it is important for pilots to know their altitude with reasonable accuracy for the purpose of traffic separation. US Federal Aviation Regulations, Part 23, §23.1325(e) includes the following requirement for the static pressure system:
The system error, in indicated pressure altitude, ..., may not exceed ±30 feet per 100 knots of speed for the [operating speed range for the aircraft].
The error in indicated airspeed must also not be excessive. Part 23, §23.1323(b) includes the following requirement for the airspeed indicating system:
The system error, including position error, ..., may not exceed three percent of the calibrated airspeed or five knots, whichever is greater, throughout the [operating speed range for the aircraft].
Measuring position error
For the purpose of complying with an aircraft design standard that specifies a maximum permissible error in the airspeed indicating system it is necessary to measure the position error in a representative aircraft. There are many different methods for measuring position error. Some of the more common methods are:
use of a GNSS receiver while flying a triangular course (see the calculation sketch after this list)
trailing conduit with static source, stabilized by a plastic cone
tower fly-by with photographs of the passing aircraft taken from the tower to accurately show the height of the aircraft above or below the tower
trailing bomb with both Pitot and static sources
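As an illustration of the first method above, recording ground speed and track on three legs flown at the same indicated airspeed determines the true airspeed and the wind vector; comparing the resulting airspeed (after density corrections) with the indicated airspeed then exposes the position error. The following is a minimal Python sketch of that calculation; the function name and the sample figures are illustrative, not a certified flight-test procedure.

```python
import math

def tas_and_wind(legs):
    """Estimate true airspeed and wind from three (ground_speed, track_deg) legs
    flown at a constant indicated airspeed.  Speed units carry through unchanged."""
    # Ground-velocity components (east, north); track is measured clockwise from north.
    v = [(gs * math.sin(math.radians(tk)), gs * math.cos(math.radians(tk)))
         for gs, tk in legs]
    # Each leg satisfies (vx - wx)^2 + (vy - wy)^2 = TAS^2.  Subtracting pairs of
    # these equations cancels TAS^2 and the quadratic wind terms, leaving two
    # linear equations in the wind components (wx, wy).
    def row(i, j):
        (xi, yi), (xj, yj) = v[i], v[j]
        return 2 * (xi - xj), 2 * (yi - yj), (xi**2 + yi**2) - (xj**2 + yj**2)
    a1, b1, c1 = row(0, 1)
    a2, b2, c2 = row(0, 2)
    det = a1 * b2 - a2 * b1          # non-zero when the three tracks differ
    wx = (c1 * b2 - c2 * b1) / det
    wy = (a1 * c2 - a2 * c1) / det
    tas = math.hypot(v[0][0] - wx, v[0][1] - wy)
    return tas, (wx, wy)

# Hypothetical legs: ground speed in knots, track in degrees true.
print(tas_and_wind([(112.0, 10.0), (98.0, 130.0), (104.0, 250.0)]))
```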
See also
Airspeed
Altitude
Global Positioning System
Pitot-static system
Reduced vertical separation minima
References
Bibliography
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London
.
Kermode, A.C. (1972) Mechanics of Flight, Longman Group Limited, London
External links
Determining static system error
Aircraft instruments
Air navigation
Airspeed | Position error | Physics,Technology,Engineering | 999 |
8,238,982 | https://en.wikipedia.org/wiki/Decomposition%20matrix | In mathematics, and in particular modular representation theory, a decomposition matrix is a matrix that results from writing the irreducible ordinary characters in terms of the irreducible modular characters, where the entries of the two sets of characters are taken to be over all conjugacy classes of elements of order coprime to the characteristic of the field. All such entries in the matrix are non-negative integers. The decomposition matrix, multiplied by its transpose, forms the Cartan matrix, listing the composition factors of the projective modules.
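In symbols, using the common (but not unique) convention that the rows of the decomposition matrix D are indexed by the ordinary irreducible characters and its columns by the modular ones, the relation to the Cartan matrix C can be written as:

```latex
C \;=\; D^{\mathsf{T}} D ,
\qquad
C_{ij} \;=\; \sum_{\chi} d_{\chi i}\, d_{\chi j} ,
```

where d_{χi} denotes the entry of D giving the multiplicity of the i-th modular irreducible character in the reduction of the ordinary character χ.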
References
See also
Matrix decomposition
Representation theory of groups
Matrices | Decomposition matrix | Mathematics | 119 |
84,566 | https://en.wikipedia.org/wiki/Killer%20application | A killer application (often shortened to killer app) is any software that is so necessary or desirable that it proves the core value of some larger technology, such as its host computer hardware, video game console, software platform, or operating system. Consumers would buy the host platform just to access that application, possibly substantially increasing sales of its host platform.
Examples
Although the term was coined in the late 1980s, one of the first retroactively recognized examples of a killer application is the VisiCalc spreadsheet, released in 1979 for the Apple II. Because it was not released for other computers for 12 months, people bought the software first, then spent $2,000 to $10,000 on the requisite Apple II. BYTE wrote in 1980, "VisiCalc is the first program available on a microcomputer that has been responsible for sales of entire systems", and Creative Computing's VisiCalc review is subtitled "reason enough for owning a computer". Others also chose to develop software, such as EasyWriter, for the Apple II first because of its higher sales, helping Apple defeat rivals Commodore International and Tandy Corporation.
The co-creator of WordStar, Seymour Rubinstein, argued that the honor of the first killer app should go to that popular word processor, given that it came out a year before VisiCalc and that it gave a reason for people to buy a computer. However, whereas WordStar could be considered an incremental improvement (albeit a large one) over smart typewriters like the IBM Electronic Selectric Composer, VisiCalc, with its ability to instantly recalculate rows and columns, introduced an entirely new paradigm and capability.
Although released four years after VisiCalc, Lotus 1-2-3 also benefited sales of the IBM PC. Noting that computer purchasers did not want PC compatibility as much as compatibility with certain PC software, InfoWorld suggested "let's tell it like it is. Let's not say 'PC compatible', or even 'MS-DOS compatible'. Instead, let's say '1-2-3 compatible'."
The UNIX operating system became a killer application for the DEC PDP-11 and VAX-11 minicomputers during roughly 1975–1985. Many PDP-11 and VAX-11 processors never ran DEC's own operating systems (RSTS or VAX/VMS) but instead ran UNIX, which was first licensed in 1975. Getting a virtual-memory UNIX (BSD 3.0) required a VAX-11 computer. Many universities wanted a general-purpose timesharing system that would meet the needs of students and researchers. Early versions of UNIX included free compilers for C, Fortran, and Pascal, at a time when offering even one free compiler was unprecedented. From its inception, UNIX drove high-quality typesetting equipment, and later PostScript printers, using the nroff/troff typesetting language, which was also unprecedented. UNIX was the first operating system offered in source-license form (a university license cost only $10,000, less than a PDP-11), allowing it to run on an unlimited number of machines and to interface to any type of hardware, because the UNIX I/O system was extensible.
Usage
The earliest recorded use of the term in print is in the May 24, 1988 issue of PC Week: "Everybody has only one killer application. The secretary has a word processor. The manager has a spreadsheet."
The definition of "killer app" came up during the deposition of Bill Gates in the United States v. Microsoft Corp. antitrust case. He had written an email in which he described Internet Explorer as a killer app. In the questioning, he said that the term meant "a popular application," and did not connote an application that would fuel sales of a larger product or one that would supplant its competition, as the Microsoft Computer Dictionary defined it.
Introducing the iPhone in 2007, Steve Jobs said that "the killer app is making calls". Reviewing the iPhone's first decade, David Pierce for Wired wrote that although Jobs prioritized a good experience making calls in the phone's development, other features of the phone soon became more important, such as its data connectivity and ability to install third-party software (which was added later).
The World Wide Web (through the web browsers Mosaic and Netscape Navigator) is the killer app that popularized the Internet, as is the music sharing program Napster.
Applications and operating systems
1979: Apple II: VisiCalc (first spreadsheet program and killer app)
1979: TRS-80, CP/M systems: WordStar (ported in 1982 to CP/M-86 and IBM PC compatible/MS-DOS)
1983: IBM PC compatible/MS-DOS: Lotus 1-2-3 (spreadsheet)
1985: Macintosh: Aldus (now Adobe) PageMaker (first desktop publishing program)
1985: AmigaOS: Deluxe Paint, Video Toaster, Prevue Guide
1993: Acorn Archimedes: Sibelius
1995: Windows 95
Video games
The term applies to video games that persuade consumers to buy a particular video game console or accessory, by virtue of platform exclusivity. Such a game is also called a "system seller".
Space Invaders, originally released for arcades in 1978, became a killer app when it was ported to the Atari VCS console in 1980, quadrupling sales of the three-year-old console.
Star Raiders, released in 1980, was the first killer app computer game. BYTE named it the single most important reason for sales of Atari 400 and 800 computers. Another was Eastern Front (1941), released in 1981.
Dungeon Master, released for the Atari ST
Defender of the Crown, released in 1986 for the Amiga as the first game from Cinemaware, has graphics which "have set new standards for computer game".
In 1996, Computer Gaming World wrote that Wizardry: Proving Grounds of the Mad Overlord (1981) "sent AD&D fans scrambling to buy Apple IIs".
The Famicom home port of Xevious is considered the console's first killer app, which caused system sales to jump by nearly 2 million units.
Computer Gaming World stated that The Legend of Zelda on the Nintendo Entertainment System, Phantasy Star II on the Sega Genesis, and Far East of Eden for the NEC TurboGrafx-16 were killer apps for their consoles.
The Super Mario, Final Fantasy, and Dragon Quest series were killer apps for Nintendo's Famicom and Super Famicom consoles in Japan.
John Madden Football's popularity in 1990 helped the Genesis gain market share against the Super NES in North America.
Sonic the Hedgehog, released in 1991, was hailed as a killer app as it revived sales of the three-year-old Genesis.
Mortal Kombat helped push the sales of the Genesis because it was uncensored, unlike the Nintendo version.
Streets of Rage became a system seller for the Mega Drive/Genesis in the UK.
Street Fighter II, originally released for arcades in 1991, became a system-seller for the Super NES when it was ported to the platform in 1992.
Donkey Kong Country for the SNES helped Nintendo's comeback against Sega.
Myst and The 7th Guest, both released in 1993, drove adoption of CD-ROM drives for personal computers.
Virtua Fighter 2, Nights into Dreams, and Sakura Wars are the killer apps for the Sega Saturn.
Euro 96 and Sega Rally Championship are major system-sellers for the Sega Saturn in the United Kingdom, with the latter becoming the fastest selling CD game.
Die Hard Arcade and Fighters Megamix boosted the Sega Saturn's sales in the United States.
Ridge Racer, Tekken, Wipeout, Tomb Raider, and Crash Bandicoot are the killer apps for the PlayStation. Tomb Raider was released for the Sega Saturn first and for MS-DOS at the same time, but the game contributed substantially to the original PlayStation's early success (see Blache Fabian & Lauren Fielder, and NG Alphas).
Final Fantasy VII is another killer app for the PlayStation. Computing Japan magazine said that it was largely responsible for the PlayStation's global installed base increasing by 60% between November 1996 and May 1997.
Super Mario 64 and GoldenEye 007 are the killer apps for the Nintendo 64.
Virtua Fighter 3, Sonic Adventure, and The House of the Dead 2 are the killer apps for the Dreamcast.
NFL 2K is a killer app for the Dreamcast in the United States.
Gran Turismo 3 and the Grand Theft Auto games are the killer apps for the PlayStation 2.
Star Wars Rogue Squadron II: Rogue Leader, Super Smash Bros. Melee, and Super Mario Sunshine are the killer apps for the GameCube.
Halo: Combat Evolved and Halo 2 are the killer apps for the Xbox, and the subsequent series entries became killer apps for the Xbox 360 and Xbox One.
Many video game and technology critics call Xbox Live a more general killer app for the Xbox.
Blue Dragon is a killer app for the Xbox 360 in Japan.
Wii Sports is the killer app for the Wii.
Metal Gear Solid 4: Guns of the Patriots boosted PlayStation 3 sales.
Mario Kart 8 is a killer app for the Wii U in the UK.
The Legend of Zelda: Breath of the Wild is a killer app for the Nintendo Switch.
Half-Life: Alyx is a killer app for virtual reality headsets, as the first true AAA virtual reality game. Sales of VR headsets such as the Valve Index increased dramatically after its announcement, suggesting users bought the product specifically for the game.
Microsoft Flight Simulator was called a killer app for Xbox Game Studios's Xbox Game Pass subscription, and the Xbox Series X/S.
Pokémon games are killer apps for Nintendo handhelds, often topping the best-selling charts for whatever system they appear on.
See also
Disruptive innovation
Unique selling point
Vendor lock-in
Use case
References
Computer jargon
Video game marketing | Killer application | Technology | 2,079 |
70,426,361 | https://en.wikipedia.org/wiki/Dry%20stone%20hut | Types of dry stone hut include:
Clochán, associated with the south-western Irish seaboard
Mitato, found in Greece, especially on the mountains of Crete
Orri, associated with Ariège, France
Shielings in Scotland
Trulli, in Apulia, Italy
Stone-built rondavels in Sotho culture
Uses of dry-stone huts include temporary shelter for shepherds and their animals, permanent habitations for monks or agricultural workers, storage and cheese making. Dry-stone huts may be thatched or roofed with sod, sometimes bound together with plant roots such as those of Madonna lily or sedum.
References
Stonemasonry
Huts
Roof construction | Dry stone hut | Engineering | 137 |
30,292,595 | https://en.wikipedia.org/wiki/Ising%20critical%20exponents | This article lists the critical exponents of the ferromagnetic transition in the Ising model. In statistical physics, the Ising model is the simplest system exhibiting a continuous phase transition with a scalar order parameter and Z₂ symmetry. The critical exponents of the transition are universal values and characterize the singular properties of physical quantities. The ferromagnetic transition of the Ising model establishes an important universality class, which contains a variety of phase transitions as different as ferromagnetism close to the Curie point and critical opalescence of liquid near its critical point.
From the quantum field theory point of view, the critical exponents can be expressed in terms of scaling dimensions of the local operators of the conformal field theory describing the phase transition (in the Ginzburg–Landau description, these are the operators normally called φ and φ²). These expressions relate each critical exponent to the scaling dimensions of the spin and energy operators and to the spacetime dimension d.
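For reference, using the standard symbols Δσ and Δε for the scaling dimensions of the spin and energy operators, the conventional relations read:

```latex
\alpha = \frac{d - 2\Delta_{\epsilon}}{d - \Delta_{\epsilon}}, \qquad
\beta  = \frac{\Delta_{\sigma}}{d - \Delta_{\epsilon}}, \qquad
\gamma = \frac{d - 2\Delta_{\sigma}}{d - \Delta_{\epsilon}}, \qquad
\delta = \frac{d - \Delta_{\sigma}}{\Delta_{\sigma}}, \qquad
\nu    = \frac{1}{d - \Delta_{\epsilon}}, \qquad
\eta   = 2\Delta_{\sigma} - d + 2 .
```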
In d=2, the two-dimensional critical Ising model's critical exponents can be computed exactly using the minimal model of central charge c = 1/2. In d=4, it is the free massless scalar theory (also referred to as mean field theory). These two theories are exactly solved, and the exact solutions give the values reported in the table.
The d=3 theory is not yet exactly solved. The most accurate results come from the conformal bootstrap. These are the values reported in the tables. Renormalization group methods, Monte-Carlo simulations, and the fuzzy sphere regulator give results in agreement with the conformal bootstrap, but are several orders of magnitude less accurate.
Based on the numerical conformal bootstrap results, Ning Su conjectured in 2019 that in d=3. As of 2024, this conjecture is still compatible with the most precise numerical bootstrap results.
See also
Universality class
XY model
References
Books
Kleinert, H. and Schulte-Frohlinde, V.; Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback (also available online) (together with V. Schulte-Frohlinde)
External links
A discussion of critical exponents in general at the Statistical Mechanics Wiki
Critical exponents (phase transitions) | Ising critical exponents | Physics | 486 |
57,222,451 | https://en.wikipedia.org/wiki/NGC%204886 | NGC 4886 is an elliptical galaxy located about 327 million light-years away in the constellation Coma Berenices. NGC 4886 was discovered by astronomer Heinrich d'Arrest on April 6, 1864. It was then rediscovered by d'Arrest on April 22, 1865, and was listed as NGC 4882. NGC 4886 is a member of the Coma Cluster.
See also
List of NGC objects (4001–5000)
NGC 4889
References
External links
Coma Berenices
Coma Cluster
Elliptical galaxies
4886
44698
Astronomical objects discovered in 1864
+5-31-76 | NGC 4886 | Astronomy | 120 |
35,683,859 | https://en.wikipedia.org/wiki/Unparser | In computing, an unparser is a system that constructs a set of characters or image components from a given parse tree.
An unparser is in effect the reverse of a traditional parser that takes a set of string of characters and produces a parse tree. Unparsing generally involves the application of a specific set of rules to the parse tree as a "tree walk" takes place.
Given that the tree may involve both textual and graphic elements, the unparser may have two separate modules, each of which handles the relevant components. In such cases the "master unparser" looks up the "master unparse table" to determine if a given nested structure should be handled by one module, or the other.
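As a small illustration of the tree-walk idea, the following Python sketch rebuilds the textual form of a simple arithmetic parse tree. The node classes and the parenthesizing rule are illustrative only, not drawn from any particular parser framework.

```python
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class BinOp:
    op: str
    left: object
    right: object

def unparse(node):
    """Walk the parse tree and rebuild the character string it represents."""
    if isinstance(node, Num):
        return str(node.value)
    if isinstance(node, BinOp):
        # Parenthesize every binary expression so the output re-parses unambiguously.
        return f"({unparse(node.left)} {node.op} {unparse(node.right)})"
    raise TypeError(f"no unparsing rule for {type(node).__name__}")

# Example: the tree for 1 + 2 * 3 unparsed back to text.
tree = BinOp("+", Num(1), BinOp("*", Num(2), Num(3)))
print(unparse(tree))   # (1 + (2 * 3))
```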
See also
Bidirectional transformation
Formal grammar
Natural language generation
References
Syntax
Compiler construction | Unparser | Technology | 170 |
9,173,273 | https://en.wikipedia.org/wiki/Coronal%20loop | In solar physics, a coronal loop is a well-defined arch-like structure in the Sun's atmosphere made up of relatively dense plasma confined and isolated from the surrounding medium by magnetic flux tubes. Coronal loops begin and end at two footpoints on the photosphere and project into the transition region and lower corona. They typically form and dissipate over periods of seconds to days and may span anywhere from 10 km to 10,000 km in length.
Coronal loops are often associated with the strong magnetic fields located within active regions and sunspots. The number of coronal loops varies with the 11 year solar cycle.
Origin and physical features
Due to a natural process called the solar dynamo driven by heat produced in the Sun's core, convective motion of the electrically conductive plasma which makes up the Sun creates electric currents, which in turn create powerful magnetic fields in the Sun's interior. These magnetic fields are in the form of closed loops of magnetic flux, which are twisted and tangled by solar differential rotation (the different rotation rates of the plasma at different latitudes of the solar sphere). A coronal loop occurs when a curved arc of the magnetic field projects through the visible surface of the Sun, the photosphere, protruding into the solar atmosphere.
Within a coronal loop, the paths of the moving electrically charged particles which make up its plasma—electrons and ions—are sharply bent by the Lorentz force when moving transverse to the loop's magnetic field. As a result, they can only move freely parallel to the magnetic field lines, tending to spiral around these lines. Thus, the plasma within a coronal loop cannot escape sideways out of the loop and can only flow along its length. This is known as the frozen-in condition.
The strong interaction of the magnetic field with the dense plasma on and below the Sun's surface tends to tie the magnetic field lines to the motion of the Sun's plasma; thus, the two footpoints (the location where the loop enters the photosphere) are anchored to and rotate with the Sun's surface. Within each footpoint, the strong magnetic flux tends to inhibit the convection currents which carry hot plasma from the Sun's interior to the surface, so the footpoints are often (but not always) cooler than the surrounding photosphere. These appear as dark spots on the Sun's surface, known as sunspots. Thus, sunspots tend to occur under coronal loops, and tend to come in pairs of opposite magnetic polarity; a point where the magnetic field loop emerges from the photosphere is a North magnetic pole, and the other where the loop enters the surface again is a South magnetic pole.
Coronal loops form in a wide range of sizes, from 10 km to 10,000 km. Coronal loops have a wide variety of temperatures along their lengths. Loops at temperatures below 1 megakelvin (MK) are generally known as cool loops; those existing at around 1 MK are known as warm loops; and those beyond 1 MK are known as hot loops. Naturally, these different categories radiate at different wavelengths.
A related phenomenon is the open flux tube, in which magnetic fields extend from the surface far into the corona and heliosphere; these are the source of the Sun's large scale magnetic field (magnetosphere) and the solar wind.
Location
Coronal loops have been shown on both active and quiet regions of the solar surface. Active regions on the solar surface take up small areas but produce the majority of activity and are often the source of flares and coronal mass ejections due to the intense magnetic field present. Active regions produce 82% of the total coronal heating energy.
Dynamic flows
Many solar observation missions have observed strong plasma flows and highly dynamic processes in coronal loops. For example, SUMER observations suggest flow velocities of 5–16 km/s in the solar disk, and other joint SUMER/TRACE observations detect flows of 15–40 km/s. Very high plasma velocities (in the range of 40–60 km/s) have been detected by the Flat Crystal Spectrometer (FCS) on board the Solar Maximum Mission.
History of observations
Before 1991
Despite progress made by ground-based telescopes and eclipse observations of the corona, space-based observations became necessary to escape the obscuring effect of the Earth's atmosphere. Rocket missions such as the Aerobee flights and Skylark rockets successfully measured solar extreme ultraviolet (EUV) and X-ray emissions. However, these rocket missions were limited in lifetime and payload. Later, satellites such as the Orbiting Solar Observatory series (OSO-1 to OSO-8), Skylab, and the Solar Maximum Mission (the first observatory to last the majority of a solar cycle: from 1980 to 1989) were able to gain far more data across a much wider range of emission.
1991–present day
In August 1991, the solar observatory spacecraft Yohkoh was launched from the Kagoshima Space Center. During its 10 years of operation, it revolutionized X-ray observations. Yohkoh carried four instruments; of particular interest is the SXT instrument, which observed X-ray-emitting coronal loops. This instrument observed X-rays in the 0.25–4.0 keV range, resolving solar features to 2.5 arc seconds with a temporal resolution of 0.5–2 seconds. SXT was sensitive to plasma in the 2–4 MK temperature range, making its data ideal for comparison with data later collected by TRACE of coronal loops radiating in the extreme ultraviolet (EUV) wavelengths.
The next major step in solar physics came in December 1995, with the launch of the Solar and Heliospheric Observatory (SOHO) from Cape Canaveral Air Force Station. SOHO originally had an operational lifetime of two years. The mission was extended to March 2007 due to its resounding success, allowing SOHO to observe a complete 11-year solar cycle. SOHO has 12 instruments on board, all of which are used to study the transition region and corona. In particular, the Extreme ultraviolet Imaging Telescope (EIT) instrument is used extensively in coronal loop observations. EIT images the transition region through to the inner corona by using four band passes—171 Å FeIX, 195 Å FeXII, 284 Å FeXV, and 304 Å HeII, each corresponding to different EUV temperatures—to probe the chromospheric network to the lower corona.
In April 1998, the Transition Region and Coronal Explorer (TRACE) was launched from Vandenberg Air Force Base. Its observations of the transition region and lower corona, made in conjunction with SOHO, give an unprecedented view of the solar environment during the rising phase of the solar maximum, an active phase in the solar cycle. Due to the high spatial (1 arc second) and temporal resolution (1–5 seconds), TRACE has been able to capture highly detailed images of coronal structures, whilst SOHO provides the global (lower resolution) picture of the Sun. This campaign demonstrates the observatory's ability to track the evolution of steady-state (or 'quiescent') coronal loops. TRACE uses filters sensitive to various types of electromagnetic radiation; in particular, the 171 Å, 195 Å, and 284 Å band passes are sensitive to the radiation emitted by quiescent coronal loops.
See also
Solar spicule
Solar prominence
Coronal hole
References
External links
TRACE homepage
Solar and Heliospheric Observatory, including near-real-time images of the solar corona
Coronal heating problem at Innovation Reports
NASA/GSFC description of the coronal heating problem
FAQ about coronal heating
Animated explanation of Coronal loops and their role in creating Prominences (University of South Wales)
Sun
Space plasmas
Astrophysics
Articles containing video clips | Coronal loop | Physics,Astronomy | 1,594 |
40,430,374 | https://en.wikipedia.org/wiki/Macromomycin%20B | Macromomycin B is an antibiotic with anticancer activity.
References
Antibiotics
Vinylidene compounds | Macromomycin B | Chemistry,Biology | 23 |
984,020 | https://en.wikipedia.org/wiki/What%20the%20Bleep%20Do%20We%20Know%21%3F | What the Bleep Do We Know!? (stylized as What tнē #$*! D̄ө ωΣ (k)πow!? and What the #$*! Do We Know!?) is a 2004 American pseudo-scientific film that posits a spiritual connection between quantum physics and consciousness (as part of a belief system known as quantum mysticism). The plot follows the fictional story of a photographer, using documentary-style interviews and computer-animated graphics, as she encounters emotional and existential obstacles in her life and begins to consider the idea that individual and group consciousness can influence the material world. Her experiences are offered by the creators to illustrate the film's scientifically unsupported ideas.
Bleep was conceived and its production funded by William Arntz, who serves as co-director along with Betsy Chasse and Mark Vicente; all three were students of Ramtha's School of Enlightenment. A moderately low-budget independent film, it was promoted using viral marketing methods and opened in art-house theaters in the western United States, winning several independent film awards before being picked up by a major distributor and eventually grossing over $10 million. The 2004 theatrical release was succeeded by a substantially changed, extended home media version in 2006.
The film has been described as an example of quantum mysticism, and has been criticized for both misrepresenting science and containing pseudoscience. While many of its interviewees and subjects are professional scientists in the fields of physics, chemistry, and biology, one of them has noted that the film quotes him out of context.
Synopsis
Filmed in Portland, Oregon, What the Bleep Do We Know!? presents a viewpoint of the physical universe and human life within it, with connections to neuroscience and quantum physics. Some ideas discussed in the film are:
That the universe is best seen as constructed from thoughts and ideas rather than from matter.
That "empty space" is not empty.
That matter is not solid, and electrons are able to pop in and out of existence without it being known where they disappear to.
That beliefs about who one is and what is real are a direct cause of oneself and of one's own realities.
That peptides produced by the brain can cause a bodily reaction to emotion.
In the narrative segments of the film, Marlee Matlin portrays Amanda, a photographer who plays the role of everywoman as she experiences her life from startlingly new and different perspectives.
In the documentary segments of the film, interviewees discuss the roots and meaning of Amanda's experiences. The comments focus primarily on a single theme: "We create our own reality." The director, William Arntz, has described What the Bleep as a film for the "metaphysical left".
Cast
Marlee Matlin as Amanda
Elaine Hendrix as Jennifer
Barry Newman as Frank
Robert Bailey Jr. as Reggie
John Ross Bowie as Elliot
Armin Shimerman as Man
Robert Blanche as Bob
Larry Brandenburg as Bruno
Patti B. Collins as Mother of the Bride
Production
Work was split between Toronto-based Mr. X Inc., Lost Boys Studios in Vancouver, and Atomic Visual Effects in Cape Town, South Africa. The visual-effects team, led by Evan Jacobs, worked closely with the other film-makers to create visual metaphors that would capture the essence of the film's technical subjects with attention to aesthetic detail.
Release
Promotion
Lacking the funding and resources of the typical Hollywood film, the filmmakers relied on "guerrilla marketing" first to get the film into theaters, and then to attract audiences. This has led to accusations, both formal and informal, directed towards the film's proponents, of spamming online message boards and forums with many thinly veiled promotional posts. Initially, the film was released in only two theaters: one in Yelm, Washington (the home of the producers, which is also the home of Ramtha), and the other the Bagdad Theater in Portland, Oregon, where it was filmed. Within several weeks, the film had appeared in a dozen or more theaters (mostly in the western United States), and within six months it had made its way into 200 theaters across the US.
Box office
According to Publishers Weekly, the film was one of the sleeper hits of 2004, as "word-of-mouth and strategic marketing kept it in theaters for an entire year." The article states that the domestic gross exceeded $10 million, described as not bad for a low-budget documentary, and that the DVD release attained even more significant success with over a million units shipped in the first six months following its release in March 2005. Foreign gross added another $5 million for a worldwide gross of just over $21 million.
Critical response
In the Publishers Weekly article, publicist Linda Rienecker of New Page Books says that she sees the success as part of a wider phenomenon, stating "A large part of the population is seeking spiritual connections, and they have the whole world to choose from now". Author Barrie Dolnick adds that "people don't want to learn how to do one thing. They'll take a little bit of Buddhism, a little bit of veganism, a little bit of astrology... They're coming into the marketplace hungry for direction, but they don't want some person who claims to have all the answers. They want suggestions, not formulas." The same article quotes Bill Pfau, Advertising Manager of Inner Traditions, as saying "More and more ideas from the New Age community have become accepted into the mainstream."
Critics offered mixed reviews as seen on the film review website Rotten Tomatoes, where it scored a "Rotten" 34% score with an average score of 4.6/10, based on 77 reviews. In his review, Dave Kehr of The New York Times described the "transition from quantum mechanics to cognitive therapy" as "plausible", but stated also that "the subsequent leap—from cognitive therapy into large, hazy spiritual beliefs—isn't as effectively executed. Suddenly people who were talking about subatomic particles are alluding to alternate universes and cosmic forces, all of which can be harnessed in the interest of making Ms. Matlin's character feel better about her thighs."
What the Bleep Do We Know!? has been described as "a kind of New Age answer to The Passion of the Christ and other films that adhere to traditional religious teachings." It offers alternative spirituality views characteristic of New Age philosophy, including critiques of the competing claims of stewardship among traditional religions [viz., institutional Judaism, Christianity, and Islam] of universally recognized and accepted moral values.
Academic reaction
Scientists who have reviewed What the Bleep Do We Know!? have described distinct assertions made as pseudoscience. Lisa Randall refers to the film as "the bane of scientists". Amongst the assertions in the film that have been challenged are that water molecules can be influenced by thought (as popularized by Masaru Emoto), that meditation can reduce violent crime rates of a city, and that quantum physics implies that "consciousness is the ground of all being." The film was also discussed in a letter published in Physics Today that challenges how physics is taught, saying teaching fails to "expose the mysteries physics has encountered [and] reveal the limits of our understanding". In the letter, the authors write: "the movie illustrates the uncertainty principle with a bouncing basketball being in several places at once. There's nothing wrong with that. It's recognized as pedagogical exaggeration. But the movie gradually moves to quantum 'insights' that lead a woman to toss away her antidepressant medication, to the quantum channeling of Ramtha, the 35,000-year-old Lemurian warrior, and on to even greater nonsense." It went on to say that "Most laypeople cannot tell where the quantum physics ends and the quantum nonsense begins, and many are susceptible to being misguided," and that "a physics student may be unable to convincingly confront unjustified extrapolations of quantum mechanics," a shortcoming which the authors attribute to the current teaching of quantum mechanics, in which "we tacitly deny the mysteries physics has encountered".
Richard Dawkins stated that "the authors seem undecided whether their theme is quantum theory or consciousness. Both are indeed mysterious, and their genuine mystery needs none of the hype with which this film relentlessly and noisily belabours us", concluding that the film is "tosh". Professor Clive Greated wrote that "thinking on neurology and addiction are covered in some detail but, unfortunately, early references in the film to quantum physics are not followed through, leading to a confused message". Despite his caveats, he recommends that people see the film, stating: "I hope it develops into a cult movie in the UK as it has in the US. Science and engineering are important for our future, and anything that engages the public can only be a good thing." Simon Singh called it pseudoscience and said the suggestion "that if observing water changes its molecular structure, and if we are 90% water, then by observing ourselves we can change at a fundamental level via the laws of quantum physics" was "ridiculous balderdash". According to João Magueijo, professor in theoretical physics at Imperial College, the film deliberately misquotes science. The American Chemical Society's review criticizes the film as a "pseudoscientific docudrama", saying "Among the more outlandish assertions are that people can travel backward in time, and that matter is actually thought."
Bernie Hobbs, a science writer with ABC Science Online, explains why the film is incorrect about quantum physics and reality: "The observer effect of quantum physics isn't about people or reality. It comes from the Heisenberg Uncertainty Principle, and it's about the limitations of trying to measure the position and momentum of subatomic particles... this only applies to sub-atomic particles—a rock doesn't need you to bump into it to exist. It's there. The sub-atomic particles that make up the atoms that make up the rock are there too." Hobbs also discusses Hagelin's experiment with Transcendental Meditation and the Washington DC rate of violent crime, saying that "the number of murders actually went up". Hobbs further disputed the film's use of the ten percent of the brain myth.
David Albert, a philosopher of physics who appears in the film, has accused the filmmakers of selectively editing his interview to make it appear that he endorses the film's thesis that quantum mechanics is linked with consciousness. He says he is "profoundly unsympathetic to attempts at linking quantum mechanics with consciousness".
In the film, during a discussion of the influence of experience on perception, Candace Pert gives an apocryphal version of the invisible ships myth whereby Native Americans were unable to see Columbus's ships because they were outside the natives' experience. According to an article in Fortean Times by David Hambling, the origins of this story likely involved the voyages of Captain James Cook, not Columbus, and an account related by Robert Hughes which said Cook's ships were "...complex and unfamiliar as to defy the natives' understanding". Hambling says it is likely that both the Hughes account and the story told by Pert were exaggerations of the records left by Captain Cook and the botanist Joseph Banks.
Skeptic James Randi described the film as "a fantasy docudrama" and "[a] rampant example of abuse by charlatans and cults". Eric Scerri in a review for Committee for Skeptical Inquiry dismisses it as "a hodgepodge of all kinds of crackpot nonsense," where "science [is] distorted and sensationalized". A BBC reviewer described it as "a documentary aimed at the totally gullible".
According to Margaret Wertheim, "History abounds with religious enthusiasts who have read spiritual portent into the arrangement of the planets, the vacuum of space, electromagnetic waves and the big bang. But no scientific discovery has proved so ripe for spiritual projection as the theories of quantum physics, replete with their quixotic qualities of uncertainty, simultaneity and parallelism." Wertheim continues that the film "abandons itself entirely to the ecstasies of quantum mysticism, finding in this aleatory description of nature the key to spiritual transformation. As one of the film's characters gushes early in the proceedings, 'The moment we acknowledge the quantum self, we say that somebody has become enlightened'. A moment in which 'the mathematical formalisms of quantum mechanics [...] are stripped of all empirical content and reduced to a set of syrupy nostrums'."
Journalist John Gorenfeld, writing in Salon, notes that the film's three directors, William Arntz, Betsy Chasse, and Mark Vicente, were at the time students of Ramtha's School of Enlightenment, which he says has been described as a cult. Mark Vicente later became involved with another prominent cult: NXIVM, the human-potential-development and sex-trafficking pyramid scheme founded by convicted con artist Keith Raniere. After leaving NXIVM, Vicente participated in the exposé documentary series The Vow, revealing many of the cult's damaging tactics; however, nowhere in The Vow does Vicente admit that NXIVM was not his first time adhering to a cult-like group.
Accolades
Ashland Independent Film Festival – Best Documentary
DCIFF – DC Independent Film Festival – Grand Jury Documentary Award
Maui Film Festival – Audience Choice Award – Best Hybrid Documentary
Sedona International Film Festival – Audience Choice Award, Most Thought-Provoking Film
Pigasus Award – an annual tongue-in-cheek award, this particular award's category was #3: "to the media outlet that reported as factual the most outrageous supernatural, paranormal or occult claims".
Legacy
In mid-2005, the filmmakers worked with HCI Books to expand on the themes in a book titled What the Bleep Do We Know!?—Discovering the Endless Possibilities of Your Everyday Reality. HCI president Peter Vegso stated that in regard to this book, "What the Bleep is the quantum leap in the New Age world," and "by marrying science and spirituality, it is the foundation of future thought."
On August 1, 2006, What the Bleep! Down the Rabbit Hole - Quantum Edition multi-disc DVD set was released, containing two extended versions of What the Bleep Do We Know!?, with over 15 hours of material on three double-sided DVDs.
Featured individuals
The film features interview segments with:
Dean Radin, Senior Scientist at the Institute of Noetic Sciences (IONS) in Petaluma, California and proponent of paranormal phenomena.
John Hagelin of Maharishi University of Management, director of MUM's Institute for Science, Technology, and Public Policy, and three-time presidential candidate of the Transcendental Meditation-linked Natural Law Party.
Stuart Hameroff, anesthesiologist, author, and associate director of the Center for Consciousness Studies at the University of Arizona, who developed with Roger Penrose a quantum hypothesis of consciousness in the books The Emperor's New Mind, and Shadows of the Mind.
JZ Knight, a spiritual teacher who is identified in interview segments as the spirit "Ramtha" that Knight claims to channel.
Andrew B. Newberg, assistant professor of radiology at the University of Pennsylvania Hospital, and physician in nuclear medicine, who coauthored the book Why God Won't Go Away: Brain Science & the Biology of Belief ()
Candace Pert, a neuroscientist, who discovered the cellular bonding site for endorphins in the brain, and in 1997 wrote the book Molecules of Emotion ()
Fred Alan Wolf, independent physicist, author of Taking the Quantum Leap, winner of the 1982 National Book Award in science, and featured in the documentary film Spirit Space. Wolf has taught at San Diego State University, the University of Paris, the Hebrew University of Jerusalem, the University of London, and Birkbeck College, London.
David Albert, philosopher of physics and professor at Columbia University, author of Quantum Mechanics and Experience, who according to a Popular Science article was "outraged at the final product" of his interview which he felt misrepresented his views about quantum mechanics and consciousness.
Micheál Ledwith, author and former professor of theology at St. Patrick's College, Maynooth;
Daniel Monti, physician and director of the Mind-Body Medicine Program at Thomas Jefferson University;
Jeffrey Satinover, psychiatrist, author and professor;
William Tiller, Professor Emeritus of Material Science and Engineering at Stanford University;
Joe Dispenza, former Ramtha School of Enlightenment teacher, chiropractor.
See also
Mind-body problem
Hard problem of consciousness
Law of attraction
List of films featuring the deaf and hard of hearing
References
Further reading
External links
2004 films
2004 comedy-drama films
2000s American films
2000s English-language films
2000s German-language films
2000s Spanish-language films
American comedy-drama films
English-language comedy-drama films
Films about quantum mechanics
Films about spirituality
Films set in Oregon
Films scored by Christopher Franke
Films shot in Portland, Oregon
Quantum mysticism
New Age media
Pseudoscience documentary films
Roadside Attractions films | What the Bleep Do We Know!? | Physics | 3,580 |
4,050,658 | https://en.wikipedia.org/wiki/Chakravala%20method | The chakravala method is a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation. It is commonly attributed to Bhāskara II (c. 1114 – 1185 CE), although some attribute it to Jayadeva (c. 950 ~ 1000 CE). Jayadeva pointed out that Brahmagupta's approach to solving equations of this type could be generalized, and he then described this general method, which was later refined by Bhāskara II in his Bijaganita treatise. He called it the Chakravala method: chakra meaning "wheel" in Sanskrit, a reference to the cyclic nature of the algorithm. C.-O. Selenius held that no European performances at the time of Bhāskara, nor much later, exceeded its marvellous height of mathematical complexity.
This method is also known as the cyclic method and contains traces of mathematical induction.
History
Chakra in Sanskrit means "wheel" or "cycle". According to popular legend, chakravala refers to a mythical range of mountains that orbits the Earth like a wall and is not limited by light and darkness.
Brahmagupta in 628 CE studied indeterminate quadratic equations, including Pell's equation

x² − Ny² = 1

for minimum integers x and y. Brahmagupta could solve it for several N, but not all.
Jayadeva and Bhaskara offered the first complete solution to the equation, using the chakravala method to find, for the case N = 61, the solution

x = 1766319049, y = 226153980.
This case was notorious for its difficulty, and was first solved in Europe by Brouncker in 1657–58 in response to a challenge by Fermat, using continued fractions. A method for the general problem was first completely described rigorously by Lagrange in 1766. Lagrange's method, however, requires the calculation of 21 successive convergents of the simple continued fraction for the square root of 61, while the chakravala method is much simpler. Selenius, in his assessment of the chakravala method, states
"The method represents a best approximation algorithm of minimal length that, owing to several minimization properties, with minimal effort and avoiding large numbers automatically produces the best solutions to the equation. The chakravala method anticipated the European methods by more than a thousand years. But no European performances in the whole field of algebra at a time much later than Bhaskara's, nay nearly equal up to our times, equalled the marvellous complexity and ingenuity of chakravala."
Hermann Hankel calls the chakravala method
"the finest thing achieved in the theory of numbers before Lagrange."
The method
From Brahmagupta's identity, we observe that for given N,

(x₁² − Ny₁²)(x₂² − Ny₂²) = (x₁x₂ + Ny₁y₂)² − N(x₁y₂ + x₂y₁)².

For the equation x² − Ny² = k, this allows the "composition" (samāsa) of two solution triples (x₁, y₁, k₁) and (x₂, y₂, k₂) into a new triple

(x₁x₂ + Ny₁y₂, x₁y₂ + x₂y₁, k₁k₂).
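The composition rule is easy to state in code. The following short Python sketch is purely illustrative (the function name and the sample triples for N = 92 are chosen for this example):

```python
def compose(t1, t2, N):
    """Compose two triples (x, y, k) with x^2 - N*y^2 = k under Brahmagupta's rule."""
    x1, y1, k1 = t1
    x2, y2, k2 = t2
    return (x1 * x2 + N * y1 * y2, x1 * y2 + x2 * y1, k1 * k2)

# Brahmagupta's identity in action for N = 92: 10^2 - 92*1^2 = 8.
t = (10, 1, 8)
x, y, k = compose(t, t, 92)        # gives (192, 20, 64)
assert x * x - 92 * y * y == k     # 192^2 - 92*20^2 = 64
```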
In the general method, the main idea is that any triple (a, b, k) (that is, one which satisfies a² − Nb² = k) can be composed with the trivial triple (m, 1, m² − N) to get the new triple (am + Nb, a + bm, k(m² − N)) for any m. Assuming we started with a triple for which gcd(a, b) = 1, this can be scaled down by k (this is Bhaskara's lemma):

If a² − Nb² = k, then ((am + Nb)/k)² − N((a + bm)/k)² = (m² − N)/k.

Since the signs inside the squares do not matter, the following substitutions are possible:

a ← (am + Nb)/|k|,   b ← (a + bm)/|k|,   k ← (m² − N)/k.
When a positive integer m is chosen so that (a + bm)/k is an integer, so are the other two numbers in the triple. Among such m, the method chooses one that minimizes the absolute value of m² − N and hence that of (m² − N)/k. Then the substitution relations are applied for m equal to the chosen value. This results in a new triple (a, b, k). The process is repeated until a triple with k = 1 is found. This method always terminates with a solution (proved by Lagrange in 1768).
Optionally, we can stop when k is ±1, ±2, or ±4, as Brahmagupta's approach gives a solution for those cases.
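The whole cyclic procedure can be summarized in a short program. The following Python sketch implements the iteration just described; the starting choice b = 1 and the function name are illustrative, and Brahmagupta's shortcuts for k = ±1, ±2, ±4 are deliberately not used.

```python
from math import isqrt

def chakravala(N):
    """Minimal positive solution of x^2 - N*y^2 = 1 for a positive non-square N."""
    r = isqrt(N)
    if r * r == N:
        raise ValueError("N must not be a perfect square")
    # Initial triple (a, b, k): take b = 1 and a as the integer nearest to sqrt(N).
    a = r if abs(r * r - N) <= abs((r + 1) ** 2 - N) else r + 1
    b, k = 1, a * a - N
    while k != 1:
        ak = abs(k)
        # The valid m form a single residue class modulo |k| (gcd(b, k) = 1 here).
        m0 = next(m for m in range(1, ak + 1) if (a + b * m) % ak == 0)
        # |m^2 - N| is minimised by one of the two valid m straddling sqrt(N)
        # (or by the smallest valid m if every valid m exceeds sqrt(N)).
        t = max(0, (r - m0) // ak)
        m = min((m0 + t * ak, m0 + (t + 1) * ak), key=lambda m: abs(m * m - N))
        a, b, k = (a * m + N * b) // ak, (a + b * m) // ak, (m * m - N) // k
    return a, b

print(chakravala(61))   # expected: (1766319049, 226153980)
print(chakravala(67))   # expected: (48842, 5967)
```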
Brahmagupta's composition method
In AD 628, Brahmagupta discovered a general way to find x and y satisfying x² − Ny² = 1 when given a triple (a, b, k) with a² − Nb² = k, provided k is ±1, ±2, or ±4.
k = ±1
Using Brahmagupta's identity to compose the triple (a, b, k) with itself:

(a² + Nb²)² − N(2ab)² = k².

The new triple can be expressed as (a² + Nb², 2ab, k²).

Substituting k = −1 gives a solution of x² − Ny² = 1:

x = a² + Nb², y = 2ab.

For k = 1, the original (a, b) was already a solution. Substituting k = 1 yields a second:

(a² + Nb², 2ab).
k = ±2
Again using the equation,

(a² + Nb²)² − N(2ab)² = k² = 4.

Substituting k = 2,

x = (a² + Nb²)/2 = a² − 1, y = ab.

Substituting k = −2,

x = (a² + Nb²)/2 = a² + 1, y = ab.
k = 4
Substituting k = 4 into the equation (a² + Nb²)² − N(2ab)² = k² creates the triple (a² + Nb², 2ab, 16).

Which, after scaling down by 16, is a solution if a is even:

x = (a² − 2)/2, y = ab/2.

If a is odd, start with the equations a² − Nb² = 4 and (a/2)² − N(b/2)² = 1.

Leading to the triples (a, b, 4) and (a/2, b/2, 1). Composing the triples gives (a² − 2, ab, 4); composing once more with (a/2, b/2, 1) and scaling down gives (a(a² − 3)/2, b(a² − 1)/2, 1).

When a is odd, these entries are integers, and

x = a(a² − 3)/2, y = b(a² − 1)/2.
k = -4
When k = −4, then (a/2)² − N(b/2)² = −1. Composing (a/2, b/2, −1) with itself yields ((a² + 2)/2, ab/2, 1).

Again composing with itself yields ((a⁴ + 4a² + 2)/2, ab(a² + 2)/2, 1).

Finally, from the earlier equations, compose the triples ((a⁴ + 4a² + 2)/2, ab(a² + 2)/2, 1) and ((a² + 2)/2, ab/2, 1), to get

((a² + 2)(a⁴ + 4a² + 1)/2, ab(a² + 1)(a² + 3)/2, 1).

This gives us the solutions

x = (a² + 2)(a⁴ + 4a² + 1)/2, y = ab(a² + 1)(a² + 3)/2,

whose entries are integers when a is odd. (Note: this construction yields a solution of Pell's equation, but it is not always the smallest integer pair.)
Examples
n = 61
The n = 61 case (determining an integer solution satisfying a² − 61b² = 1), issued as a challenge by Fermat many centuries later, was given by Bhaskara as an example.
We start with a solution a² − 61b² = k for any k found by any means. In this case we can let b be 1; thus, since 8² − 61⋅1² = 3, we have the triple (a, b, k) = (8, 1, 3). Composing it with (m, 1, m² − 61) gives the triple (8m + 61, 8 + m, 3(m² − 61)), which is scaled down (or Bhaskara's lemma is directly used) to get:

((8m + 61)/3, (8 + m)/3, (m² − 61)/3).
For 3 to divide 8 + m and |m² − 61| to be minimal, we choose m = 7, so that we have the triple (39, 5, −4). Now that k is −4, we can use Brahmagupta's idea: the triple can be scaled down to the rational solution (39/2, 5/2, −1), which is then composed with itself, with scaling applied whenever k becomes a square. Such a procedure can be repeated until the solution is found (requiring 9 additional self-compositions and 4 additional square-scalings), giving (1766319049, 226153980, 1). This is the minimal integer solution.
n = 67
Suppose we are to solve x² − 67y² = 1 for x and y.
We start with a solution a² − 67b² = k for any k found by any means; in this case we can let b be 1, thus producing 8² − 67⋅1² = −3. At each step, we find an m > 0 such that k divides a + bm, and |m² − 67| is minimal. We then update a, b, and k to (am + 67b)/|k|, (a + bm)/|k|, and (m² − 67)/k respectively.
First iteration
We have (a, b, k) = (8, 1, −3). We want a positive integer m such that k divides a + bm, i.e. 3 divides 8 + m, and |m² − 67| is minimal. The first condition implies that m is of the form 3t + 1 (i.e. 1, 4, 7, 10,… etc.), and among such m, the minimal value is attained for m = 7. Replacing (a, b, k) with ((am + 67b)/|k|, (a + bm)/|k|, (m² − 67)/k), we get the new values a = 41, b = 5, k = 6. That is, we have the new solution:

41² − 67⋅5² = 6.
At this point, one round of the cyclic algorithm is complete.
Second iteration
We now repeat the process. We have (a, b, k) = (41, 5, 6). We want an m > 0 such that k divides a + bm, i.e. 6 divides 41 + 5m, and |m² − 67| is minimal. The first condition implies that m is of the form 6t + 5 (i.e. 5, 11, 17,… etc.), and among such m, |m² − 67| is minimal for m = 5. This leads to the new solution a = (41⋅5 + 67⋅5)/6 = 90, b = (41 + 5⋅5)/6 = 11, k = (5² − 67)/6 = −7:

90² − 67⋅11² = −7.
Third iteration
For 7 to divide 90 + 11m, we must have m = 2 + 7t (i.e. 2, 9, 16,… etc.) and among such m, we pick m = 9. This gives the new triple (221, 27, −2), that is,

221² − 67⋅27² = −2.
Final solution
At this point, we could continue with the cyclic method (and it would end, after seven iterations), but since the right-hand side is among ±1, ±2, ±4, we can also use Brahmagupta's observation directly. Composing the triple (221, 27, −2) with itself, we get

(221² + 67⋅27², 2⋅221⋅27, 4) = (97684, 11934, 4),

which, scaled down by dividing the first two entries by 2 and the last by 4, is the triple (48842, 5967, 1); that is, we have the integer solution:

48842² − 67⋅5967² = 1.
This equation approximates √67 by the ratio 48842/5967, to within a margin of about 2 × 10⁻⁹.
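Both worked examples can be checked directly with exact integer arithmetic; a minimal Python check:

```python
# Verify the solutions obtained above for n = 67 and n = 61.
assert 48842**2 - 67 * 5967**2 == 1
assert 1766319049**2 - 61 * 226153980**2 == 1
print("both solutions check out")
```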
Notes
References
Florian Cajori (1918), Origin of the Name "Mathematical Induction", The American Mathematical Monthly 25 (5), p. 197-201.
George Gheverghese Joseph, The Crest of the Peacock: Non-European Roots of Mathematics (1975).
G. R. Kaye, "Indian Mathematics", Isis 2:2 (1919), p. 326–356.
Clas-Olaf Selenius, "Rationale of the chakravala process of Jayadeva and Bhaskara II" , Historia Mathematica 2 (1975), pp. 167–184.
Clas-Olaf Selenius, "Kettenbruchtheoretische Erklärung der zyklischen Methode zur Lösung der Bhaskara-Pell-Gleichung", Acta Acad. Abo. Math. Phys. 23 (10) (1963), pp. 1–44.
Hoiberg, Dale & Ramchandani, Indu (2000). Students' Britannica India. Mumbai: Popular Prakashan.
Goonatilake, Susantha (1998). Toward a Global Science: Mining Civilizational Knowledge. Indiana: Indiana University Press. .
Kumar, Narendra (2004). Science in Ancient India. Delhi: Anmol Publications Pvt Ltd.
Ploker, Kim (2007) "Mathematics in India". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook New Jersey: Princeton University Press.
External links
Introduction to chakravala
Brahmagupta
Diophantine equations
Number theoretic algorithms
Indian mathematics | Chakravala method | Mathematics | 2,072 |
3,421,375 | https://en.wikipedia.org/wiki/M1-92 | M1-92 (Minkowski 92), also known as Minkowski’s Footprint or the Footprint Nebula, is a bipolar protoplanetary nebula in the constellation of Cygnus. It is a type of reflection nebula, visible only by light reflected from the central star. The central star is not yet a white dwarf but is quickly becoming one. In a few thousand years the star will be hot enough to emit vast quantities of ultraviolet radiation that will ionize the nebula surrounding it, making it a fully fledged planetary nebula.
M1-92 was discovered by Rudolph Minkowski in 1946. It was imaged by the Very Large Array (VLA) in 1983 and by Hubble Space Telescope in 1996. The object is 8,000 light-years away from Earth, and has a radial velocity of -7.086 kilometers (-4.4 miles) a second. Its axis is tilted 35 degrees from our line of sight. It is 11 by 6 arcseconds in angular size. It is 0.42 light-years in diameter and shines at 10,000 solar luminosities.
The most obvious feature of M1-92 is the two onion-shaped lobes on either side of the central star. Stellar winds sculpt the lobes into their shape. Dust partially obscures the southeastern lobe, making it appear much dimmer than the other. Material is bursting through the ends of the lobes, creating two polar tips. The temperature of these tips is roughly 450 K, while the lobes themselves are at 17 K. Inside the lobes are bright knots, ionized from shock, which dash through the nebula at 55 km/s (34 mi/s). Also present is a narrow jet. The jet’s axis slightly differs from that of the lobes, indicating that the central star is precessing. The central star is surrounded by an expanding circumstellar dust disk. The temperature of the progenitor star is estimated to be around 27,000 K, much cooler than the temperatures of white dwarfs which run from 50,000 to 250,000 K. The hypothesis that the central star is actually a binary system is promising for this object.
M1-92’s spectrum is rather complex. It is composed of the highly polarized spectrum of the central star, and various emission lines from ionized gas. Some lines are intermediately polarized, and others are not polarized at all. The intermediately polarized emission lines are also reflected, but the unpolarized lines are shock emission generated from inside the bipolar lobes. Shock emission is a major part of the spectrum of the M1-92 nebula.
The dust grains in the bipolar lobes are submicron-sized; on the other hand, dust grains in the circumstellar disk could be up to 1 mm. If the disk grains were micron-sized or smaller, it would be difficult to explain the amount of carbon monoxide (CO) in the disk. The largest dust grains in the disk must be at least 0.1 mm. Grain growth is likely to occur in the M1-92 disk. Hydroxyl (OH) was discovered in the object in 1974, and is located in the dust disk but not in the lobes.
It has been proposed that M1-92 is similar to MWC 560, a symbiotic star. Their spectra have striking similarities, and they both have a thick equatorial dust disk and a collimated jet. It is thought that MWC 560 is in an earlier evolutionary stage by 900-1,200 years, not having ejected a nebula yet.
See also
List of protoplanetary nebulae
References
External links
Footprint Nebula Observation Page
Footprint Nebula Strasbourg astronomical Data Center
Protoplanetary nebulae
Cygnus (constellation) | M1-92 | Astronomy | 764 |
75,418,219 | https://en.wikipedia.org/wiki/The%20Politics%20of%20Large%20Numbers | The Politics of Large Numbers: A History of Statistical Reasoning is a book by French statistician, sociologist and historian of science, Alain Desrosières, which was originally published in French in 1993. The English translation, by Camille Naish, was published in 1998 by Harvard University Press.
Synopsis
Alain Desrosières's ambition is to reconcile an “internal” history of the field, focusing on theory building and data collection, with an “external” history, examining the social conditions under which, and the reasons why, a discipline develops. In his words, applying a science-in-the-making perspective, “the distinction between technical and social objects—underlying the separation between internal and external history—disappears” (p. 5).
The work of Desrosières mobilizes the French style of social analysis of cognitive forms, looking at statistics as the ensemble of concepts, methods, and practices concerned with "making up things that hold".
A central part of the book explores how the socio-political structures of France, Britain, Germany, and the United States affected the establishment and evolution of the national statistical offices in these countries. The author discusses in depth how the activity of categorization, allocating individuals to classes, provides the encoding necessary for the realization of statistical constructs, following Durkheim's motto to 'treat social facts as things' and thus creating new entities such as poverty or unemployment. This project, which Desrosières names 'objectification', is also offered by the author as a way to reconcile objective and subjective visions of probabilities, a dichotomy he traces back to the fourteenth-century confrontation between realists and nominalists.
Reactions
Among the critiques of this work are that it reads more as a work of sociology and political economy than as a technical account of how statistical operations developed, and that Desrosières must maintain a delicate balance between defending the necessity and legitimacy of critical attacks on statistical concepts and methods in the name of sociopolitical progress and the stated need for "durably solidified forms" of statistical technique and concepts.
Related readings
Ian Hacking, 2006. The Emergence of Probability : A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge University Press.
Theodore M. Porter, 1988. The Rise of Statistical Thinking, 1820–1900. Reprint edition. Princeton, NJ: Princeton University Press.
Stephen M. Stigler, 1986. The History of Statistics: The Measurement of Uncertainty before 1900. Cambridge, Mass.: Belknap Press.
See also
Sociology of quantification
References
1993 non-fiction books
French non-fiction books
History books about science
History of probability and statistics
Science and technology studies | The Politics of Large Numbers | Mathematics,Technology | 545 |
14,101,118 | https://en.wikipedia.org/wiki/Green%20Revolution%20in%20India | The Green Revolution was a period that began in the 1960s during which agriculture in India was converted into a modern industrial system by the adoption of technology, such as the use of high yielding variety (HYV) seeds, mechanized farm tools, irrigation facilities, pesticides, and fertilizers. Mainly led by agricultural scientist M. S. Swaminathan in India, this period was part of the larger Green Revolution endeavor initiated by Norman Borlaug, which leveraged agricultural research and technology to increase agricultural productivity in the developing world. Varieties or strains of crops can be selected by breeding for various useful characteristics such as disease resistance, response to fertilizers, product quality and high yields.
Under the premiership of Congress leaders Lal Bahadur Shastri and Indira Gandhi, the Green Revolution within India commenced in 1968, leading to an increase in food grain production, especially in Punjab, Haryana, and Western Uttar Pradesh. Major milestones in this undertaking were the development of high-yielding varieties of wheat, and rust resistant strains of wheat.
Notable figures and institutions
A number of people have been recognized for their efforts during India's Green Revolution.
M. S. Swaminathan, the main architect of the Green Revolution in India, often called its Father.
Chidambaram Subramaniam, the food and agriculture minister at the time, a Bharat Ratna, has been called the Political Father of the Green Revolution.
Dilbagh Singh Athwal, called the Father of the Wheat Revolution.
Scientists such as Atmaram Bhairav Joshi.
Institutions such as Indian Agricultural Research Institute (IARI).
Practices
Wheat production
The main development was the introduction of higher-yielding, rust-resistant varieties of wheat. High-yielding variety (HYV) seeds, together with improved fertilizers and irrigation techniques, increased production and helped make the country self-sufficient in food grains, thus improving agriculture in India. Other varieties, such as Kalyan Sona and Sonalika, were also introduced through cross-breeding. The methods adopted combined high-yielding variety (HYV) seeds with modern farming methods.
The production of wheat has produced the best results in fueling the self-sufficiency of India. Along with high-yielding seeds and irrigation facilities, the enthusiasm of farmers mobilized the idea of an agricultural revolution. Due to the rise in the use of chemical pesticides and fertilizers, there was a negative effect on the soil and the land (e.g., land degradation).
Other practices
The other practices include irrigation infrastructure, use of pesticides, insecticides and herbicides, consolidation of holdings, land reforms, improved rural infrastructure, supply of agricultural credit, use of chemical or synthetic fertilizers, use of sprinklers or drip irrigation systems, and use of advanced machinery.
Rationale for the Green Revolution
The Green Revolution in India was first introduced in Punjab in late 1966-67 as part of a development program issued by international donor agencies and the Government of India.
During the British Raj, India's grain economy hinged on a unilateral relation of exploitation. Consequently, when India gained independence, the weakened country quickly became vulnerable to frequent famines, financial instabilities, and low productivity. These factors formed a rationale for the implementation of the Green Revolution as a development strategy in India.
Frequent famines: In 1964–65 and 1965–66, India experienced two severe droughts which led to food shortages and famines among the country's growing population. Modern agricultural technologies appeared to offer strategies to counter the frequency of famines. There is debate regarding India's famines prior to independence, with some arguing they were intensified by British taxation and agrarian policies in the 19th and 20th centuries, and others downplaying such impact of colonial rule.
Lack of finance: Marginal farmers found it very difficult to obtain finance and credit at economical rates from the government and banks and hence fell easy prey to money lenders. They took loans from landlords, who charged high rates of interest and later exploited the farmers by making them work in their fields as labourers to repay the loans. Proper financing was not provided during the Green Revolution period, which created many problems and much suffering for the farmers of India, although the government did provide some assistance to indebted farmers.
Low productivity: In the context of India's rapidly growing population, the country's traditional agricultural practices yielded insufficient food production. By the 1960s, this low productivity led India to experience food grain shortages that were more severe than those of other developing countries. Agricultural technological advancements offered opportunities to increase productivity.
Criticism
The Green Revolution yielded great economic prosperity during its early years. In Punjab, where it was first introduced, the Green Revolution led to significant increases in the state's agricultural output, supporting India's overall economy. By 1970, Punjab was producing 70% of the country's total food grains, and farmers' incomes were increasing by over 70%. Punjab's prosperity following the Green Revolution became a model to which other states aspired to reach.
However, despite the initial prosperity experienced in Punjab, the Green Revolution was met with much controversy throughout India.
Indian economic sovereignty (negative impact)
Criticism of the effects of the Green Revolution includes the cost for many small farmers using HYV seeds, with their associated demands for increased irrigation and pesticides. A case study is found in India, where farmers buying Monsanto Bt cotton seeds were sold on the idea that these seeds produced their own 'non-natural insecticides'. In reality, they still had to pay for expensive pesticides and irrigation systems, which led to increased borrowing to finance the change from traditional seed varieties. Many farmers had difficulty paying for the expensive technologies, especially if they had a bad harvest. These high costs of cultivation pushed rural farmers to take out loans, typically at high interest rates. Over-borrowing entrapped farmers in a cycle of debt.
India's liberalized economy further exacerbated the farmers' economic conditions. Indian environmentalist Vandana Shiva writes that this is the "second Green Revolution". The first Green Revolution, she suggests, was mostly publicly funded (by the Indian Government). This new Green Revolution, she says, is driven by private (and foreign) interest—notably MNCs like Monsanto—as encouraged by Neoliberalism. Ultimately, this is leading to foreign ownership over most of India's farmland, undermining farmers' interests.
Farmers' financial issues have become especially apparent in Punjab, whose rural areas have witnessed an alarming rise in suicide rates. Excluding the countless unreported cases, there was an estimated 51.97% increase in the number of suicides in Punjab in 1992–93, compared to the recorded 5.11% increase in the country as a whole. According to a 2019 Indian news report, indebtedness continues to be a grave issue affecting the people of Punjab, demonstrated by the more than 900 recorded farmer suicides in Punjab over the preceding two years.
Environmental damage
Excessive and inappropriate use of fertilizers and pesticides polluted waterways and killed beneficial insects and wildlife. It caused over-use of the soil and rapidly depleted its nutrients. Rampant irrigation practices led to eventual soil degradation, and groundwater levels have fallen dramatically. Further, heavy dependence on a few major crops has led to a loss of crop biodiversity and an increase in stubble-burning cases since 1980. These problems were aggravated by the absence of training in the use of modern technology and by widespread illiteracy, leading to excessive use of chemicals.
Increased regional disparities
The Green Revolution spread only in irrigated and high-potential rainfed areas. Villages or regions without access to sufficient water were left out, which widened the regional disparities between adopters and non-adopters. Since HYV seeds can technically be applied only on land with an assured water supply and the availability of other inputs such as chemicals and fertilizers, the application of the new technology in dry-land areas was simply ruled out.
States such as Punjab, Haryana, and Uttar Pradesh, which had good irrigation and other infrastructure facilities, were able to derive the benefits of the Green Revolution and achieve faster economic development, while other states recorded slow growth in agricultural production.
Alternative farming methods
In the years since the Green Revolution was adopted, issues of sustainability have arisen because of its adverse environmental and social impacts. To meet this challenge, alternative approaches to farming have emerged, such as small subsistence farms, family homesteads, New Age communes, village and community farming collectives, and women's cooperatives, with the common purpose of producing organically grown, chemical-free food. In Green Revolution areas of the country, increasing numbers of families are experimenting on their own with alternative systems of land management and crop growing. Building upon the idea of sustainable development, commercial models for large-scale food production have been developed by integrating traditional farming systems with appropriate energy-efficient technology.
References
Further reading
Chakravarti, A.K. 1973. "Green Revolution in India" in Annals of the Association of American Geographers 63 (September 1973): 319–30.
Frankel, Francine R. 1971. India's Green Revolution: Economic Gains and Political Costs. Princeton: Princeton University Press.
Gill, Monohar Singh. 1983. "The Development of Punjab Agriculture, 1977-80." Asian Survey 23 (July 1983):830-44.
Ladejinsky, Wolf. 1970. "Ironies of India's Green Revolution". Foreign Affairs no. 4. (July 1970): 758–68.
Parayil, Govindan. 1992. "The Green Revolution in India: A Case Study in Technological Change," Technology and Culture 33 (October 1992): 737-56.
Saha, Madhumita. "The State, Scientists, and Staple Crops: Agricultural 'Modernization' in Pre-Green Revolution India." Agricultural History 87 (Spring 2013): 201–23.
Sebby, Kathryn. 2010. "The Green Revolution of the 1960's and Its Impact on Small Farmers in India", Environmental Studies Undergraduate Student Theses 10 (PDF).
Sen, Bandhudas. 1974. The Green Revolution in India: A Perspective. New York: John Wiley & Sons.
Agricultural revolutions
History of agriculture in India
Intensive farming
History of the Republic of India
Economic history of India
Indira Gandhi administration
History of Punjab, India (1947–present) | Green Revolution in India | Chemistry | 2,148 |
54,238,399 | https://en.wikipedia.org/wiki/Gordon%20Center%20for%20Medical%20Imaging | The Gordon Center for Medical Imaging is an American multidisciplinary research center at Massachusetts General Hospital (MGH) and Harvard Medical School that develops biomedical imaging technologies.
The center's central activities include: research, training and education in medical imaging, and translation of basic research into clinical applications.
The MGH Gordon Center also operates the PET Core, an MGH research service facility that synthesizes radiotracers and provides positron emission tomography (PET) imaging services for investigators.
Created in 2015 with an endowment from the Bernard and Sophia Gordon Foundation, the Gordon Center is a direct continuation of MGH's Division of Radiological Sciences where the first positron-imaging device was invented.
Dr. Georges El Fakhri is the founding director of the Gordon Center. The center is located in two campuses in Boston and Charlestown Navy Yard, Massachusetts.
See also
Athinoula A. Martinos Center for Biomedical Imaging
Massachusetts General Hospital
Nuclear medicine
PET
References
Laboratories in the United States
Medical research institutes in Massachusetts
Radiology organizations
Medical imaging
Nuclear medicine organizations
Massachusetts General Hospital | Gordon Center for Medical Imaging | Engineering | 221 |
70,399,451 | https://en.wikipedia.org/wiki/SZ%20Piscium | SZ Piscium is a triple star system in the equatorial constellation of Pisces. The inner pair form a double-lined spectroscopic binary with an orbital period of 3.966 days. It is a detached Algol-type eclipsing binary of the RS Canum Venaticorum class with a subgiant component. (This means the pair have a close but separated orbit with the stars eclipsing one another, and the primary component is an evolving star showing star spots and other magnetic activity.) The system is too faint to be readily visible to the naked eye with a combined apparent visual magnitude of 7.18. It is located at a distance of approximately 306 light years based on parallax measurements.
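For orientation, a distance of roughly 306 light-years corresponds to a parallax of about 10–11 milliarcseconds; a minimal sketch of the conversion, using only the distance quoted above:

```python
LY_PER_PARSEC = 3.2616                 # light-years per parsec

distance_pc = 306 / LY_PER_PARSEC      # ~93.8 pc
parallax_mas = 1000 / distance_pc      # parallax p["] = 1 / d[pc], here in milliarcseconds
print(f"{parallax_mas:.1f} mas")       # ~10.7 mas
```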
The variability of this star was reported by A. Jensch in 1934, who published the first elements. In 1956 the spectrum of the system was examined by N. G. Roman, who found the cooler component is the brighter and more evolved. The system was studied by G. A. Bakos and J. F. Heard in 1958, who found a magnitude of 7.72 for the primary eclipse minimum and 7.30 for the secondary. They refined the class estimates, finding the primary is probably a K1IV subgiant in close orbit with an F8V main sequence star. In 1972, H. L. Atkins and D. S. Hall included it on a list of RS Canum Venaticorum type variable systems and showed it has an infrared excess.
S. Jakate and associates in 1976 found that the period of luminosity variation is changing over time. They discovered strong emission in the H and K lines of the K star and noted that it showed intrinsic variability. The system displayed unusual episodes of emission and variation in the Hα line, which was interpreted by astronomers as ejected material possibly forming a transient disk. The orbital period of the system varies in a 56 year cycle with an amplitude of , which may be explained by influences of the stellar wind and magnetic activity.
Significant star spot activity was found all over the K-type star, with variations in the total spot coverage observed over time. It is estimated to be filling 85% of its Roche lobe due to the gravitational influence of the secondary. The rotation period of this star is several times slower than its orbital period, while the rotation of the F-type star is synchronous. Changes in radial velocity of the system over time suggest it is a triple star system with the tertiary component having about 75% of the mass of the Sun and an orbital period of 1,530 days.
References
Further reading
As '35.1934 Piscium'.
K-type subgiants
F-type main-sequence stars
RS Canum Venaticorum variables
Triple stars
Eclipsing binaries
Pisces (constellation)
Durchmusterung objects
219113
114639
Piscium, SZ | SZ Piscium | Astronomy | 596 |
118,695 | https://en.wikipedia.org/wiki/CONELRAD | CONELRAD (Control of Electromagnetic Radiation) was a method of emergency broadcasting to the public of the United States in the event of enemy attack during the Cold War. It was intended to allow continuous broadcast of civil defense information to the public using radio stations, while rapidly switching the transmitter stations to make the broadcasts unsuitable for Soviet bombers that might attempt to home in on the signals (as was done during World War II, when German radio stations, based in or near cities, were used as beacons by bomber pilots).
U.S. President Harry S. Truman established CONELRAD in 1951. After the development of intercontinental ballistic missiles reduced the likelihood of a bomber attack, and the development of superior navigation systems that did not rely on radio direction finding for use in those bomber aircraft which were sent against the United States, CONELRAD was replaced by the Emergency Broadcast System (EBS) on August 5, 1963, which was later replaced by the Emergency Alert System (EAS) on January 1, 1997; all have been administered by the Federal Communications Commission (FCC).
Unlike the EBS and EAS, CONELRAD was never intended for use in local civil emergencies such as severe weather. However, the system's alerting protocol could be used for alerting of a natural disaster by 1957.
History
Before 1951, there was no systematic way for the U.S. government to communicate with citizens during an emergency. However, broadcasters would typically interrupt normal programming to issue emergency bulletins, as happened during the attack on Pearl Harbor on December 7, 1941 and the first successful tornado warning in 1948. Such bulletins were the forerunner to CONELRAD.
The CONELRAD concept was originally known as the Key Station System. According to an FCC document created during the "Informal Government–Industry Technical Conference" on March 26, 1951:
CONELRAD had a simple system for alerting the public and other "downstream" stations, consisting of a sequence of shutting the station off for five seconds, returning to the air for five seconds, again shutting down for five seconds, returning to the air again (for 5 seconds), and then transmitting a 1 kHz tone for 15 seconds. Key stations would be alerted directly. All other broadcast stations would monitor a designated station in their area.
In the event of an emergency, all United States television and FM radio stations were required to stop broadcasting. Upon alert, most AM medium-wave stations shut down. The stations that stayed on the air would transmit on either 640 or 1240 kHz. They would transmit for several minutes and then go off the air, and another station would take over on the same frequency in a "round robin" chain. This was to confuse enemy aircraft who might be navigating using radio direction finding. By law, radio sets manufactured between 1953 and 1963 had these two frequencies marked by the triangle-in-circle ("CD Mark") symbol of Civil Defense.
Although the system by which the CONELRAD process was initiated (switching the transmitter on and off) was simple, it was prone to numerous false alarms, especially during lightning storms. Transmitters could be damaged by the quick cycling. The switching later became known informally as the "EBS Stress Test" (due to many transmitters failing during tests) and was eventually discontinued when broadcast technology advanced enough to make it unnecessary.
Beginning January 2, 1957, U.S. amateur radio came under CONELRAD rules and amateur stations were also required to stop transmitting if commercial radio stations went off the air due to an alert. Several companies marketed special receivers that monitored local broadcast stations, sounding an alarm and automatically deactivating the amateur's transmitter when the broadcast station went off the air.
In a Time magazine article featured in the November 14, 1960, issue, the author details why the warning system consisting of localized civil defense sirens and the CONELRAD radio-alert system was "basically unsound". The author's alternative was to advocate for the National Emergency Alarm Repeater as a supplement, which did not need a radio or television to be switched on to warn citizens, nor a large CD siren to be in their vicinity.
False alarms
On May 5, 1955, the Continental Air Defense Command Western Division went to yellow alert for 3 to 10 minutes (depending on the alerted state), beginning at 10:40 AM PDT. The alert was raised by a Canadian radar emplacement which was unaware of an outbound United States B-47 bomber training exercise, due to communication failures. A yellow alert meant "attack expectable", and the word was sent to government and civil defense organizations. In the seven-minute window, the city of Oakland, the Sacramento Capitol Building, and others quickly sounded their alert sirens. In contrast, the City of Sacramento civil defense director waited for further confirmation before sounding the citywide siren; ultimately, he never did so. The alert was not acted on at all in Colorado due to the short length, and in Nevada, there was no alert because the person responsible for acting on it "did not know what to do with it". In Utah, Oklahoma, Arkansas, Missouri, Kansas, Texas, and Louisiana, a yellow alert was not passed along to civil authorities at all, and those states issued a "white" (military emergency) alert to units in their state instead. Even with the short alert window, many radio and television stations went off the air in accordance with CONELRAD procedure, but the alert was not long enough for stations to start broadcasting on the two authorized CONELRAD frequencies.
On the evening of November 5, 1959, WJPG, the CONELRAD control station for northeast Wisconsin and Upper Michigan was incorrectly sent an alert status message, "This is an air defense radio alert", rather than what should have been sent for a test, "This is an air defense line check." All three of Green Bay, Wisconsin's television stations (WFRV-TV, WLUK-TV, and WBAY-TV), as well as Green Bay radio stations WBAY and WJPG (and other Upper Michigan radio stations) were immediately taken off line as preparations were made for high priority stations to begin broadcasting on the two authorized CONELRAD AM frequencies (which in that area would force WOMT, a station in nearby Manitowoc at 1240 AM, off the air). The transmission error was realized and CONELRAD alert preparation (and its media blackout) reversed for affected stations about 20 minutes later.
A very similar false attack alarm was sent to radio and television stations through CONELRAD's replacement, the Emergency Broadcast System, at 9:33 AM EST on Saturday, February 20, 1971. This message was sent by accident instead of the usual weekly EAN test.
See also
Blast shelter
Civil defense Geiger counters
Civil protection
Duck and cover
Fallout shelter
Nuclear warfare
Nuclear weapon
SCATANA
WGU-20
World War III
References
External links
"A History of CONELRAD, EBS, and the plan for EAS"
Cold War history of the United States
Disaster preparedness in the United States
Emergency Alert System
Emergency population warning systems
United States civil defense
Warning systems
Government agencies established in 1951
Government agencies disestablished in 1963 | CONELRAD | Technology,Engineering | 1,463 |
58,455,919 | https://en.wikipedia.org/wiki/Aspergillus%20falconensis | Aspergillus falconensis is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1989. It has been reported to produce 3,3′-dihydroxy-5,5′-dimethyldiphenyl ether, falconensins A–N, falconensones A and B, hopane-6α,7β,22-triol, hopane-7β,22-diol, mitorubrin, monomethyldihydromitorubrin, monomethylmitorubrin, and zeorin.
Growth and morphology
A. falconensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
falconensis
Fungi described in 1989
Fungus species | Aspergillus falconensis | Biology | 197 |
20,805,702 | https://en.wikipedia.org/wiki/Zero-forcing%20precoding | Zero-forcing (or null-steering) precoding is a method of spatial signal processing by which a multiple antenna transmitter can null the multiuser interference in a multi-user MIMO wireless communication system. When the channel state information is perfectly known at the transmitter, the zero-forcing precoder is given by the pseudo-inverse of the channel matrix. Zero-forcing has been used in LTE mobile networks.
Mathematical description
In a multiple antenna downlink system which comprises transmit antenna access points and single receive antenna users, such that , the received signal of user is described as
where is the vector of transmitted symbols, is the noise signal, is the channel vector and is some linear precoding vector. Here is the matrix transpose, is the square root of transmit power, and is the message signal with zero mean and variance .
The above signal model can be more compactly re-written as
where
is the received signal vector,
is channel matrix,
is the precoding matrix,
is a diagonal power matrix, and
is the transmit signal.
A zero-forcing precoder is defined as a precoder where intended for user is orthogonal to every channel vector associated with users where . That is,
Thus the interference caused by the signal meant for one user is effectively nullified for the rest of the users via the zero-forcing precoder.
Because each beam generated by the zero-forcing precoder is orthogonal to all the other users' channel vectors, one can rewrite the received signal as
The orthogonality condition can be expressed in matrix form as
where is some diagonal matrix. Typically, is selected to be an identity matrix. This makes the right Moore-Penrose pseudo-inverse of given by
Given this zero-forcing precoder design, the received signal at each user is decoupled from each other as
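A minimal NumPy sketch of this construction (not the article's notation; the channel values are randomly generated for illustration) builds the precoder from the pseudo-inverse and checks that each user's received signal is free of the other users' streams:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TX, N_USERS = 4, 3                       # transmit antennas >= single-antenna users

H = rng.standard_normal((N_USERS, N_TX)) + 1j * rng.standard_normal((N_USERS, N_TX))
W = np.linalg.pinv(H)                      # zero-forcing precoder: right pseudo-inverse of H

s = rng.standard_normal(N_USERS) + 1j * rng.standard_normal(N_USERS)  # user symbols
x = W @ s                                  # transmitted vector (power normalization omitted)
y = H @ x                                  # noiseless received signals

# H @ W is (numerically) the identity, so user k receives only its own symbol s_k.
assert np.allclose(H @ W, np.eye(N_USERS), atol=1e-10)
assert np.allclose(y, s, atol=1e-10)
```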
Quantify the feedback amount
Quantify the amount of the feedback resource required to maintain at least a given throughput performance gap between zero-forcing with perfect feedback and with limited feedback, i.e.,
.
Jindal showed that the required feedback bits of a spatially uncorrelated channel should be scaled according to SNR of the downlink channel, which is given by:
where M is the number of transmit antennas and is the SNR of the downlink channel.
To feed back B bits through the uplink channel, the throughput performance of the uplink channel should be larger than or equal to B
where is the feedback resource, obtained by multiplying the feedback frequency resource by the temporal resource, and is the SNR of the feedback channel. Then, the feedback resource required to satisfy this is
.
Note that differently from the feedback bits case, the required feedback resource is a function of both downlink and uplink channel conditions. It is reasonable to include the uplink channel status in the calculation of the feedback resource since the uplink channel status determines the capacity, i.e., bits/second per unit frequency band (Hz), of the feedback link. Consider a case when SNR of the downlink and uplink are proportion such that is constant and both SNRs are sufficiently high. Then, the feedback resource will be only proportional to the number of transmit antennas
.
It follows from the above equation that the feedback resource () need not scale with the SNR of the downlink channel, which stands in near contradiction to the case of the feedback bits. One therefore sees that a whole-system analysis can reverse conclusions drawn from each simplified sub-problem.
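The bit-scaling result attributed to Jindal is commonly quoted as B ≈ (M − 1) · SNR_dB / 3 bits per user; treating that form as an assumption rather than a statement of the article's exact formula, a small illustration:

```python
def required_feedback_bits(m_tx: int, snr_db: float) -> float:
    """Feedback bits per user under the commonly quoted scaling
    B = (M - 1) * SNR_dB / 3 (assumed form, not taken from this article)."""
    return (m_tx - 1) * snr_db / 3.0

for snr_db in (5, 10, 20):
    print(snr_db, "dB ->", required_feedback_bits(4, snr_db), "bits per user")
# With M = 4 transmit antennas: 5 dB -> 5.0, 10 dB -> 10.0, 20 dB -> 20.0 bits
```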
Performance
If the transmitter knows the downlink channel state information (CSI) perfectly, ZF-precoding can achieve almost the full system capacity when the number of users is large. On the other hand, with limited channel state information at the transmitter (CSIT), the performance of ZF-precoding degrades with the accuracy of the CSIT. ZF-precoding requires significant feedback overhead, growing with the signal-to-noise ratio (SNR), in order to achieve the full multiplexing gain. Inaccurate CSIT results in significant throughput loss because of residual multiuser interference, which remains since it cannot be nulled by beams generated from imperfect CSIT.
See also
Channel state information
Precoding
MIMO
References
External links
Schelkunoff Polynomial Method (Null-Steering) www.antenna-theory.com
IEEE 802
Information theory
Radio resource management
Signal processing | Zero-forcing precoding | Mathematics,Technology,Engineering | 900 |
1,945,670 | https://en.wikipedia.org/wiki/Sound%20transmission%20class | Sound Transmission Class (or STC) is an integer rating of how well a building partition attenuates airborne sound. In the US, it is widely used to rate interior partitions, ceilings, floors, doors, windows and exterior wall configurations. Outside the US, the ISO Sound Reduction Index (SRI) is used. The STC rating very roughly reflects the decibel reduction of noise that a partition can provide. The STC is useful for evaluating annoyance due to speech sounds, but not music or machinery noise as these sources contain more low frequency energy than speech.
There are many ways to improve the sound transmission class of a partition, though the two most basic principles are adding mass and increasing the overall thickness. In general, the sound transmission class of a double wythe wall (e.g. two block walls separated by an airspace) is greater than that of a single wall of equivalent mass (e.g. a homogeneous block wall).
Definition
The STC or sound transmission class is a single-number method of rating how well wall partitions reduce sound transmission. The STC provides a standardized way to compare products such as doors and windows made by competing manufacturers. A higher number indicates more effective sound insulation than a lower number. The STC is a standardized rating provided by ASTM E413 based on laboratory measurements performed in accordance with ASTM E90. ASTM E413 can also be used to determine similar ratings from field measurements performed in accordance with ASTM E336.
Sound Isolation and Sound Insulation are used interchangeably, though the term "Insulation" is preferred outside the US. The term "sound proofing" is typically avoided in architectural acoustics as it is a misnomer and connotes inaudibility.
Subjective correlation
Through research, acousticians have developed tables that pair a given STC rating with a subjective experience. The table below is used to determine the degree of sound isolation provided by typical multi-family construction. Generally, a difference of one or two STC points between similar constructions is subjectively insignificant.
Tables like the one above are highly dependent on the background noise levels in the receiving room: the louder the background noise, the greater the perceived sound isolation.
Rating methodology
Historical
Prior to the STC rating, the sound isolation performance of a partition was measured and reported as the average transmission loss over the frequency range 128 to 4096 Hz or 256 to 1021 Hz. This method is valuable for comparing homogeneous partitions that follow the mass law, but can be misleading when comparing complex or multi-leaf walls.
In 1961, the ASTM International Standards Organization adopted E90-61T, which served as the basis for the STC method used today. The STC standard curve is based on European studies of multi-family residential construction, and closely resembles the sound isolation performance of a brick wall.
Current
The STC number is derived from sound attenuation values tested at sixteen standard frequencies from 125 Hz to 4000 Hz. These Transmission Loss values are then plotted on a sound pressure level graph and the resulting curve is compared to a standard reference contour provided by the ASTM.
Sound isolation metrics, such as the STC, are measured in specially-isolated and designed laboratory test chambers. There are nearly infinite field conditions that will affect sound isolation on site when designing building partitions and enclosures.
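A sketch of the contour-fitting procedure is given below. It assumes the usual summary of ASTM E413 (sixteen one-third-octave bands, a reference contour shifted in 1 dB steps, a total deficiency of at most 32 dB, and no single deficiency above 8 dB) and is meant as an illustration, not a substitute for the standard:

```python
# Illustrative STC calculation (assumed summary of ASTM E413, not the standard itself).
BANDS_HZ = [125, 160, 200, 250, 315, 400, 500, 630, 800,
            1000, 1250, 1600, 2000, 2500, 3150, 4000]
# Reference contour relative to its 500 Hz value.
CONTOUR = [-16, -13, -10, -7, -4, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4]

def stc(tl_values):
    """Return the STC rating for 16 transmission-loss values (dB), one per band."""
    best = 0
    for rating in range(1, 151):                       # candidate contour positions
        contour = [rating + c for c in CONTOUR]
        deficiencies = [max(0, c - tl) for c, tl in zip(contour, tl_values)]
        if sum(deficiencies) <= 32 and max(deficiencies) <= 8:
            best = rating                              # highest admissible position wins
    return best

# Hypothetical transmission-loss data (made-up numbers purely for illustration):
example_tl = [15, 19, 23, 26, 29, 32, 34, 36, 38, 40, 42, 43, 44, 42, 40, 43]
print(stc(example_tl))                                 # prints an STC in the mid-30s
```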
Factors affecting sound transmission class
Acoustic medium
Sound travels through both the air and the structure, and both paths must be considered when designing sound-isolating walls and ceilings. To eliminate airborne sound, all air paths between the areas must be eliminated. This is achieved by making seams airtight and closing all sound leaks. To eliminate structure-borne noise, one must create isolation systems that reduce mechanical connections between those structures.
Mass
Adding mass to a partition reduces the transmission of sound. This is often achieved by adding additional layers of gypsum. It is preferable to have non-symmetrical leaves, for example with different gypsum thicknesses. The effect of adding multiple layers of gypsum wallboard to a frame also varies depending on the framing type and configuration. Doubling the mass of a partition does not double the STC, as the STC is calculated from a non-linear decibel sound transmission loss measurement. So, whereas installing an additional layer of gypsum wallboard on a light-gauge (25-ga. or lighter) steel stud partition will result in about a 5 STC-point increase, doing the same on single wood or single heavy-gauge steel studs will result in only 2 to 3 additional STC points. Adding a second additional layer (to the already three-layer system) does not result in as drastic an STC change as the first additional layer. The effect of additional gypsum wallboard layers on double- and staggered-stud partitions is similar to that of light-gauge steel partitions.
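The diminishing return from added mass follows from the logarithmic mass law; a rough sketch, using a commonly quoted field-incidence approximation (TL ≈ 20·log10(m·f) - 47 dB, an assumed constant rather than a figure from this article):

```python
import math

def mass_law_tl(surface_density_kg_m2: float, freq_hz: float) -> float:
    """Approximate field-incidence mass-law transmission loss in dB.
    The -47 dB constant is one commonly quoted approximation (assumed)."""
    return 20 * math.log10(surface_density_kg_m2 * freq_hz) - 47

single = mass_law_tl(10, 500)   # e.g. a single leaf at 500 Hz (hypothetical values)
double = mass_law_tl(20, 500)   # doubling the surface mass
print(round(single, 1), round(double, 1), round(double - single, 1))
# Doubling the mass adds only 20*log10(2) ~ 6 dB, not a doubling of the rating.
```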
Due to increased mass, poured concrete and concrete blocks typically achieve higher STC values (in the mid STC 40s to the mid STC 50s) than equally thick framed walls. However the additional weight, added complexity of construction, and poor thermal insulation tend to limit masonry wall partitions as a viable sound isolation solution in many building construction projects.
In recent years, gypsum board manufacturers have started to offer lightweight drywall board: Normal-weight gypsum has a nominal density of , and lightweight drywall has a nominal density of . This does not have a large effect on the STC rating, though lightweight gypsum can significantly degrade the low frequency performance of a partition as compared to normal weight gypsum.
Sound absorption
Sound absorption entails turning acoustical energy into some other form of energy, usually heat.
Adding absorptive materials to the interior surfaces of rooms, for example fabric-faced fiberglass panels and thick curtains, will result in a decrease of reverberated sound energy within the room. However, absorptive interior surface treatments of this kind do not significantly improve the sound transmission class. Installing absorptive insulation, for example fiberglass batts and blow-in cellulose, into the wall or ceiling cavities does increase the sound transmission class significantly. The presence of insulation in single 2x4 wood stud framing spaced 16 inches on-center results in a gain of only a few STC points. This is because a wall with 2x4 wood stud framing spaced 16 inches on-center develops significant resonances which are not mitigated by the cavity insulation. In contrast, adding standard fiberglass insulation to an otherwise empty cavity in light-gauge (25-gauge or lighter) steel stud partitions can result in a nearly 10 STC-point improvement.
Other studies have shown that fibrous insulation materials, such as mineral wool, can increase the STC by 5 to 8 points.
Stiffness
The effect of stiffness on sound isolation can relate to either the material stiffness of the sound isolating material or the stiffness caused by framing methods.
Framing methods
Structurally decoupling the gypsum wallboard panels from the partition framing can result in a large increase in sound isolation when installed correctly. Examples of structural decoupling in building construction include resilient channels, sound isolation clips and hat channels, and staggered- or double-stud framing. The STC results of decoupling in wall and ceiling assemblies varies significantly depending on the framing type, air cavity volume, and decoupling material type. Great care must be taken in each type of decoupled partition construction, as any fastener that becomes mechanically (rigidly) coupled to the framing can undermine the decoupling and result in drastically lower sound isolation results.
When two leaves are rigidly tied or coupled by a stud, the sound isolation of the system depends on the stiffness of the stud. Light-gauge steel (25-gauge or lighter) provides better sound isolation than 16–20-gauge steel, and noticeably better performance than wood studs. When heavy-gauge steel or wood studs are spaced on center, additional resonances form which further lower the sound isolation performance of a partition. For typical gypsum stud walls, this resonance occurs in the 100–160 Hz region and is thought to be a hybrid of the mass-air-mass resonance and a bending mode resonance caused when a plate is closely supported by stiff members.
Single metal stud partitions are more effective than single wood stud partitions, and have been shown to increase the STC rating by up to 10 points. However, there is little difference between metal and wood studs when used in double stud partitions. Double stud partitions have a higher STC than single stud.
In certain assemblies, increasing the stud spacing increases the STC rating by 2 to 3 points.
Damping
Though the terms sound absorption and damping are often interchangeable when discussing room acoustics, acousticians define these as two distinct properties of sound-isolating walls.
Several gypsum manufacturers offer specialty products which use constrained layer damping, which is a form of viscous damping. Damping generally increases the sound isolation of partitions, particularly at mid-and-high frequencies.
Damping is also used to improve the sound isolation performance of glazing assemblies. Laminated glazing, which consists of a Polyvinyl butyral (or PVB) inter-layer, performs better acoustically than a non-laminated glass of equivalent thickness.
Sound leakage
All holes and gaps should be filled and the enclosure hermetically sealed for sound isolation to be effective. Consider test results from a wall partition that has a theoretical maximum loss of 40 dB from one room to the next and a partition area of 10 square meters. Even small open gaps and holes in the partition cause a disproportionate reduction in sound proofing. A 5% opening in the partition, which offers unrestricted sound transmission from one room to the next, reduced the transmission loss from 40 dB to 13 dB. A 0.1% open area will reduce the transmission loss from 40 dB to 30 dB, which is typical of walls where caulking has not been applied effectively. Partitions that are inadequately sealed and contain back-to-back electrical boxes, untreated recessed lighting, and unsealed pipes offer flanking paths for sound and significant leakage.
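These figures follow from area-weighted transmission coefficients; a short check of the numbers quoted above (the composite-loss formula is the standard one, while the areas and wall rating are taken from the paragraph):

```python
import math

def composite_tl(elements):
    """Composite transmission loss (dB) for (area_m2, TL_dB) elements,
    using area-weighted transmission coefficients tau = 10**(-TL/10)."""
    total_area = sum(area for area, _ in elements)
    avg_tau = sum(area * 10 ** (-tl / 10) for area, tl in elements) / total_area
    return -10 * math.log10(avg_tau)

wall_area, wall_tl = 10.0, 40.0                        # 10 m^2 partition, 40 dB wall
for open_fraction in (0.05, 0.001):                    # 5 % and 0.1 % open area (TL = 0 dB)
    tl = composite_tl([(wall_area * (1 - open_fraction), wall_tl),
                       (wall_area * open_fraction, 0.0)])
    print(f"{open_fraction:.1%} open -> {tl:.0f} dB")  # ~13 dB and ~30 dB
```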
Acoustic joint tapes and caulking have been used to improve sound isolation since the early 1930s. Although the applications of tapes was largely limited to defense and industrial applications such as naval vessels and aircraft in the past, recent research has proven the effectiveness of sealing gaps and thereby improving the sound isolation performance of a partition.
Flanking
Building codes typically allow for a 5-point tolerance between the lab-tested and field-measured STC rating; however, studies have shown that even in well-built and sealed installations the difference between the lab and field rating is highly dependent on the type of assembly.
Special variations of STC
By nature, the STC rating is derived from lab testing under ideal conditions. There are other versions of the STC rating to account for real-world conditions.
Composite STC
The net sound isolation performance of a partition containing multiple sound isolating elements such as doors, windows, etc.
Apparent Sound Transmission Class (ASTC)
The sound isolation performance of a partition measured in the field according to ASTM E336, normalized to account for different room finishes and the area of the tested partition (i.e. compare the same wall measured in a bare living room and an acoustically dry recording booth).
Normalized Noise Isolation Class (NNIC)
The sound isolation performance of a partition measured in the field according to ASTM E336, normalized to account for the reverberation time in the room.
Noise Isolation Class (NIC)
The sound isolation performance of a partition measured in the field according to ASTM E336, not normalized to the room conditions of the test.
Field Sound Transmission Class (FSTC)
The sound isolation performance of a specific elements in a partition, as measured in the field and achieved by suppressing the effects of sound flanking paths. This can be useful for measuring walls with doors, when you are interested in removing the influence of the door on the measured field STC. The FSTC testing method was historically prescribed by ASTM E336, however the latest version of this standard does not include FSTC.
Door Sound Transmission Class (DTC)
The sound isolation performance of doors when measured according to ASTM E2964.
Legal and practical requirements
Section 1206 of the International Building Code 2021 states that separation between dwelling units and public and service areas must achieve STC 50 when tested in accordance with ASTM E90, or NNIC 45 if field tested in accordance with ASTM E336. However, not all jurisdictions use the IBC for their building or municipal code.
Common partition STC
Interior walls with one sheet of gypsum wallboard (drywall) on either side of 2x4 wood studs spaced 16 inches on-center, with no fiberglass insulation filling each stud cavity, have an STC of about 33. When asked to rate their acoustical performance, people often describe these walls as "paper thin." They offer little in the way of privacy. Double stud partition walls are typically constructed with varying gypsum wallboard panel layers attached to both sides of double 2x4 wood studs spaced 16 inches on-center and separated by an airspace. These walls vary in sound isolation performance from the mid STC-40s into the high STC-60s depending on the presence of insulation and the gypsum wallboard type and quantity. Commercial buildings are typically constructed using steel studs of varying widths, gauges, and on-center spacings. Each of these framing characteristics affects the sound isolation of the partition to varying degrees.
STC prediction
There are several commercially available software which predict the STC ratings of partitions using a combination of theoretical models and empirically derived lab data. These programs can predict STC ratings within several points of a tested partition and are an approximation at best.
Outdoor-Indoor Transmission Class (OITC)
The Outdoor–Indoor Transmission Class (OITC) is a standard used for indicating the rate of sound transmission from outdoor noise sources into a building. It is based on the ASTM E-1332 Standard Classification for Rating Outdoor-Indoor Sound Attenuation. Unlike the STC, which is based on a noise spectrum targeting speech sounds, OITC uses a source noise spectrum that considers frequencies down to 80 Hz (aircraft/rail/truck traffic) and is weighted more to lower frequencies. The OITC value is typically used to rate, evaluate, and select exterior glazing assemblies.
See also
References
Bibliography
Construction
Noise control | Sound transmission class | Engineering | 2,996 |