id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
3,158,873 | https://en.wikipedia.org/wiki/Pothook | A pothook (or pot hook) is an S-shaped metal hook for suspending a pot over a fire.
Usage
While one extremity of the pothook is hooked to the handle of the pot, the other is caught upon an iron crane moving on a pivot over the fire. Later stoves obviated the necessity for this arrangement, but in the early twentieth century it was still to be seen in great numbers of country cottages and farmhouse kitchens all over England, and in small artisans' houses in the West Midlands and the North.
Writing
In the elementary teaching of writing, a glyph of similar shape is called a pothook.
Gallery
References and notes
See also
Trammel hook
Food preparation utensils
Fire | Pothook | Chemistry | 154 |
1,908,033 | https://en.wikipedia.org/wiki/Bookworm%20%28comics%29 | Bookworm was a British humour comic strip, first published on 22 April 1978 in the magazine Whoopee!. It survived Whoopee!'s merger with Whizzer and Chips in 1985, becoming a Chip-ite. It was drawn by Sid Burgon for most of its history, although Barry Glennard drew a substantial number of episodes.
Concept
The comic strip centers on a young boy, "Bookworm", who is a devoted bibliophile. He is never seen without a book, and his parents often try to force him to do more "boyish" things, like playing football. The results are typically disastrous.
References
British comic strips
1978 comics debuts
Comics characters introduced in 1978
1985 comics endings
Child characters in comics
Male characters in comics
Comics about children
Humor comics
British comics characters
Works about bibliophilia | Bookworm (comics) | Biology | 178 |
41,684,133 | https://en.wikipedia.org/wiki/Pi1%20Gruis | π1 Gruis (Pi1 Gruis) is a semiregular variable star in the constellation Grus, around 530 light-years from Earth. It forms a close naked-eye double with π2 Gru, four arc-minutes away.
π1 Gruis is an asymptotic giant branch (AGB) star of spectral type S5. It is one of the brightest members of a class of stars known as S stars. It is also a semi-regular variable star, ranging from apparent magnitude 5.3 to 7.0 over a period of 198.8 days. It is an ageing star, thought to be well on its way transitioning from a red giant to a planetary nebula. A shell of material has been detected at a distance of 0.91 light-years (0.28 parsecs), which is estimated to have been ejected 21,000 years ago. Closer to the star, there appears to be a cavity within , suggesting a drop-off in the ejection of material over the past 90 years. The presence of one companion makes the shape of the shell irregular rather than spherical, and there may be another, as yet undetected, companion contributing to this.
π1 Gruis has a companion star of apparent magnitude 10.9 that is sunlike in properties—a yellow main sequence star of spectral type G0V. Separated by , the pair make up a likely binary system. The primary star has a measured diameter of 18.37 milliarcseconds, corresponding to a size 350 times that of the Sun.
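The quoted size can be cross-checked with the small-angle relation: physical diameter ≈ angular diameter (in radians) × distance. Below is a minimal Python sketch using the article's figures of 18.37 milliarcseconds and roughly 530 light-years, both of which carry observational uncertainty:

import math

# Small-angle cross-check of the stellar diameter quoted above.
MAS_TO_RAD = math.radians(1.0 / 3600.0) / 1000.0   # milliarcseconds -> radians
LY_TO_M = 9.4607e15                                # metres per light-year
SUN_DIAMETER_M = 1.3914e9                          # solar diameter in metres

theta = 18.37 * MAS_TO_RAD    # measured angular diameter
distance = 530.0 * LY_TO_M    # approximate distance from the article
diameter = theta * distance   # small-angle approximation

print(diameter / SUN_DIAMETER_M)   # ~320 solar diameters, consistent with the
                                   # quoted ~350 given the distance uncertainty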
The star was catalogued by the French explorer and astronomer Nicolas Louis de Lacaille in 1756 but not given a name; instead, he gave the Bayer designation "π Gruis" to π2. It was Thomas Brisbane who designated this star π1. Annie Jump Cannon was the first to report its unusual spectrum, sending a spectrographic plate made in 1895 to Paul W. Merrill and noting its similarity to R Andromedae. Merrill selected these two stars, along with R Cygni, as the three prototypes of the S star class. π1 Gruis was one of the first 17 stars defined as S stars by Merrill in 1922, and the only one of them not observed from Mount Wilson, owing to its southerly location in the sky. Analysis of its spectrum showed bands indicating the presence of technetium, as well as oxides of zirconium, lanthanum, cerium and yttrium, but not of titanium or barium, which have been recorded in other S stars.
Notes
References
Grus (constellation)
Gruis, Pi1
S-type stars
G-type main-sequence stars
110478
8521
212087
Durchmusterung objects
Semiregular variable stars
Asymptotic-giant-branch stars
Binary stars | Pi1 Gruis | Astronomy | 590 |
20,991,984 | https://en.wikipedia.org/wiki/Remote%20racking%20system | A remote racking system or remote racking device is a system that allows an operator to rack a withdrawable circuit breaker in and out from a remote location. It offers a safe alternative to manually racking circuit breakers and reduces the need for service personnel to wear a full-body arc flash hazard suit for protection.
Advantages
A circuit breaker is an automatically operated electrical switch designed to protect an electrical circuit from damage caused by overload or short circuit.
There are fixed and withdrawable circuit breakers.
An arc flash is a type of electrical explosion that results from a low-impedance connection to ground or another voltage phase in an electrical system. By permitting the automatic racking of the circuit breaker from a remote location, a remote racking system moves service personnel outside the arc flash protection boundary, thus reducing the need for a full-body arc flash hazard suit.
A remote switch operator is used to remotely operate various types and styles of circuit breakers and controls. When the remote racking system is used in conjunction with a remote switch operator, the user can also operate, trip, and release the circuit breaker from a safe distance.
Designs
There are several designs of remote racking system on the market, and most include either a wired or a wireless remote control. The distance at which these can be used varies by product, and style and size are also factors in choosing a remote racking system, as some are larger than others. Many systems use a "roll-up" design, similar to a small hand truck or dolly, while others are integral to the switchgear in which the breaker is mounted.
See also
Circuit breaker
Arc flash
Switchgear
References
Electrical breakdown
Electric power systems components
Safety switches
External links
Switchgear Safety | Remote racking system | Physics | 353 |
35,435,442 | https://en.wikipedia.org/wiki/GPCR%20oligomer | A GPCR oligomer is a protein complex that consists of a small number (from Greek oligoi "a few", méros "part, piece, component") of G protein-coupled receptors (GPCRs). It is held together by covalent bonds or by intermolecular forces. The subunits within this complex are called protomers, while unconnected receptors are called monomers. Receptor homomers consist of identical protomers, while heteromers consist of different protomers.
Receptor homodimers – which consist of two identical GPCRs – are the simplest homomeric GPCR oligomers.
Receptor heterodimers – which consist of two different GPCRs – are the simplest heteromeric GPCR oligomers.
The existence of receptor oligomers is a general phenomenon whose discovery superseded the prevailing paradigm of receptors functioning as plain monomers. It has far-reaching implications for the understanding of neurobiological diseases as well as for the development of drugs.
Discovery
For a long time it was assumed that receptors transmitted their effects exclusively from their basic functional forms, as monomers. The first clue to the existence of GPCR oligomers goes back to 1975, when Robert Lefkowitz observed that β-adrenoceptors display negative binding cooperativity. At the beginning of the 1980s it was hypothesized that receptors could form larger complexes, so-called mosaic forms, in which two receptors may interact directly with each other. Mass determinations of β-adrenoceptors (1982) and muscarinic receptors (1983) supported the existence of homodimeric or tetrameric complexes. In 1991, the phenomenon of receptor crosstalk was observed between the adenosine A2A receptor (A2A) and the dopamine D2 receptor (DRD2), suggesting the formation of heteromers. While initially thought to be a receptor heterodimer, a 2015 review determined that the A2A-DRD2 heteromer is a heterotetramer composed of A2A and DRD2 homodimers (i.e., two adenosine A2A receptors and two dopamine D2 receptors). Maggio and co-workers showed in 1993 the ability of the muscarinic M3 receptor and the α2C-adrenoceptor to heterodimerize. The first direct evidence that GPCRs function as oligomers in vivo came from Overton and Blumer in 2000, by fluorescence resonance energy transfer (FRET) analysis of the α-factor receptor in the yeast Saccharomyces cerevisiae. In 2005, further evidence was provided that receptor oligomerization plays a functional role in a living organism, with regulatory implications. The crystal structure of the CXCR4 dimer was published in 2010.
Consequences of oligomerization
GPCR oligomers consist of receptor dimers, trimers, tetramers, and complexes of higher order. These oligomers are entities with properties that can differ from those of the monomers in several ways. The functional character of a receptor is dependent on its tertiary or quaternary structure. Within the complex, protomers act as allosteric modulators of one another. This has consequences for:
the supply of the cell surface with receptors
the ligand binding at corresponding binding sites
the G-protein coupling
the GPCR-mediated signal transduction
modifying the desensitization profile
the tendency for endocytosis and internalization
the post-endocytotic fate of the receptors
Detection
There are various methods to detect and observe GPCR oligomers.
See also
D1-D2 dopamine receptor
References
Further reading
External links
G protein-coupled receptors
Cell biology | GPCR oligomer | Chemistry,Biology | 774 |
24,118,534 | https://en.wikipedia.org/wiki/ZooBank | ZooBank is an open access website intended to be the official International Commission on Zoological Nomenclature (ICZN) registry of zoological nomenclature. Any nomenclatural acts (e.g. publications that create or change a taxonomic name) published electronically need to be registered with ZooBank prior to publication to be "officially" recognized by the ICZN Code of Nomenclature. Acts published in physical publications are encouraged, but not required, to be registered prior to their publication.
Life Science Identifiers (LSIDs) are used as the globally unique identifier for ZooBank registration entries.
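LSIDs follow the general URN syntax urn:lsid:authority:namespace:object, with an optional trailing revision. Below is a minimal Python sketch of splitting such an identifier into its parts; the UUID in the example is a made-up placeholder, not a real ZooBank record:

# Split an LSID into its components; the example identifier is illustrative only.
def parse_lsid(lsid):
    parts = lsid.split(":")
    if parts[:2] != ["urn", "lsid"] or len(parts) < 5:
        raise ValueError("not a well-formed LSID: " + lsid)
    return {
        "authority": parts[2],                        # e.g. zoobank.org
        "namespace": parts[3],                        # e.g. act, pub, author
        "object": parts[4],
        "revision": parts[5] if len(parts) > 5 else None,
    }

print(parse_lsid("urn:lsid:zoobank.org:act:00000000-0000-0000-0000-000000000000"))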
The ZooBank prototype was seeded with data from the Index to Organism Names (http://www.organismnames.com), which was compiled from the scientific literature in Zoological Record, now owned by Thomson Reuters.
History
ZooBank was officially proposed in 2005 by the executive secretary of the ICZN. The registry went live on 10 August 2006 with 1.5 million species entered.
The first ZooBank LSIDs were issued on 1 January 2008, precisely 250 years after 1 January 1758, which is the date defined by the ICZN Code as the official start of scientific zoological nomenclature. Chromis abyssus was the first species entered into the ZooBank system with a timestamp of 2008-01-01T00:00:02.
Contents
Four main types of data objects are stored in ZooBank. Nomenclatural acts are governed by the ICZN Code of Nomenclature and are typically "original descriptions" of new scientific names; however, other acts, such as emendations and lectotypifications, are also governed by the ICZN Code and technically require registration in ZooBank. Publications include journal articles and other publications containing nomenclatural acts. Authors records the academic authorship of nomenclatural acts. Type specimens records the biological type specimens of animals, which are provisionally registered until the bodies responsible for such types implement their own registries.
In addition to those, periodicals which have published articles are also entities within the system, providing access to a list of "Nomenclatural Acts" published in the periodical over time.
Electronic publications
Traditionally, taxonomic data were published in journals or books. With the increase in electronic publication, however, the ICZN established new rules covering e-publications, especially electronic-only publications. Such publications are now regulated by amendments to ICZN Articles 8, 9, 10, 21 and 78. Technically, nomenclatural acts published in electronic-only papers that have not been registered with ZooBank are not recognized and are considered "non-existent".
See also
Plazi
References
External links
ZooBank papers and mailing list
Zoological nomenclature
Open-access archives
Internet properties established in 2006
Online taxonomy databases | ZooBank | Biology | 569 |
2,468,995 | https://en.wikipedia.org/wiki/Malonyl-CoA | Malonyl-CoA is a coenzyme A derivative of malonic acid.
Functions
It plays a key role in chain elongation in fatty acid biosynthesis and polyketide biosynthesis.
Cytosolic fatty acid biosynthesis
Malonyl-CoA provides 2-carbon units to fatty acids and commits them to fatty acid chain synthesis.
Malonyl-CoA is formed by carboxylating acetyl-CoA using the enzyme acetyl-CoA carboxylase: one molecule of acetyl-CoA joins with a molecule of bicarbonate, requiring energy supplied by ATP.
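For clarity, the overall stoichiometry of this carboxylation step (the standard acetyl-CoA carboxylase reaction) can be written as:

\text{acetyl-CoA} + \text{HCO}_3^- + \text{ATP} \longrightarrow \text{malonyl-CoA} + \text{ADP} + \text{P}_i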
Malonyl-CoA is utilised in fatty acid biosynthesis by the enzyme malonyl coenzyme A:acyl carrier protein transacylase (MCAT). MCAT serves to transfer malonate from malonyl-CoA to the terminal thiol of holo-acyl carrier protein (ACP).
Mitochondrial fatty acid synthesis
Malonyl-CoA is formed in the first step of mitochondrial fatty acid synthesis (mtFASII) from malonic acid by malonyl-CoA synthetase (ACSF3).
Polyketide biosynthesis
MCAT is also involved in bacterial polyketide biosynthesis. The enzyme MCAT together with an acyl carrier protein (ACP), and a polyketide synthase (PKS) and chain-length factor heterodimer, constitutes the minimal PKS of type II polyketides.
Regulation
Malonyl-CoA is a highly regulated molecule in fatty acid synthesis and inhibits the rate-limiting step in the beta-oxidation of fatty acids: by regulating the enzyme carnitine acyltransferase, malonyl-CoA prevents fatty acids from associating with carnitine and thereby keeps them out of the mitochondria, where fatty acid oxidation and degradation occur.
Related diseases
Malonyl-CoA plays a special role in the mitochondrial clearance of toxic malonic acid in the metabolic disorder combined malonic and methylmalonic aciduria (CMAMMA). In CMAMMA due to ACSF3, malonyl-CoA synthetase is decreased; this enzyme generates malonyl-CoA from malonic acid, and the resulting malonyl-CoA can then be converted to acetyl-CoA by malonyl-CoA decarboxylase. In contrast, in CMAMMA due to malonyl-CoA decarboxylase deficiency, it is malonyl-CoA decarboxylase, the enzyme converting malonyl-CoA to acetyl-CoA, that is decreased.
See also
MCAT (gene)
References
External links
Hope for new way to beat obesity
Metabolism
Thioesters of coenzyme A | Malonyl-CoA | Chemistry,Biology | 576 |
4,641,891 | https://en.wikipedia.org/wiki/Gp41 | Gp41 also known as glycoprotein 41 is a subunit of the envelope protein complex of retroviruses, including human immunodeficiency virus (HIV). Gp41 is a transmembrane protein that contains several sites within its ectodomain that are required for infection of host cells. As a result of its importance in host cell infection, it has also received much attention as a potential target for HIV vaccines.
Gene and post-translational modifications
Gp41 is encoded together with gp120 as a single gp160 precursor by the env gene of HIV. Gp160 is extensively glycosylated and then proteolytically cleaved by furin, a host cellular protease. The heavy glycosylation of the env-coded glycoproteins allows them to escape the human body's immune system. In contrast to gp120, however, gp41 is less glycosylated and more conserved (less prone to genetic variation). Once gp160 has been cleaved into its individual subunits, the subunits associate non-covalently on the surface of the viral envelope.
Structure
Gp41 and gp120, when non-covalently bound to each other, are referred to as the envelope spike complex and are formed as a heterotrimer of three gp41 and three gp120. These complexes found on the surface of HIV are responsible for the attachment, fusion, and ultimately the infection of host cells. The structure is cage-like with a hollow center that inhibits antibody access. While gp120 sits on the surface of the viral envelope, gp41 is the transmembrane portion of the spike protein complex with a portion of the glycoprotein buried within the viral envelope at all times.
Gp41 has three prominent regions within its sequence: the ectodomain, the transmembrane domain, and the cytoplasmic domain. The ectodomain, which comprises residues 511-684, can be further broken down into the fusion peptide region (residues 512-527), the helical N-terminal heptad repeat (NHR) and the C-terminal heptad repeat (CHR). In addition to these regions, there is also a loop region containing disulfide bonds that stabilize the hairpin structure (the folded conformation of gp41), and a region called the membrane proximal external region (MPER), which contains kinks that are antigen target regions. The fusion peptide region is normally buried or hidden by the non-covalent interactions between gp120 and gp41, at a junction that appears torus-like. This prevents the fusion peptide from interacting with regions that are not its intended target.
Function
In a free virion, the fusion peptides at the amino termini of gp41 are buried within the envelope complex in an inactive, non-fusogenic state that is stabilized by a non-covalent bond with gp120. Gp120 binds to CD4 and a co-receptor (CCR5 or CXCR4) found on susceptible cells such as helper T cells and macrophages. As a result, a cascade of conformational changes occurs in the gp120 and gp41 proteins. These conformational changes start with gp120, which rearranges to expose the binding sites for the coreceptors mentioned above. The core of gp41 then folds into a six-helix bundle (a coiled coil) structure, exposing the previously hidden hydrophobic gp41 fusion peptides, which insert into the host cell membrane and allow fusion to take place. This fusion process is facilitated by the hairpin conformational structure. The inner core of this conformation is formed by three NHRs, whose hydrophobic pockets allow them to bind anti-parallel to specific residues on the CHR. The activation process occurs readily, which suggests that the inactive state of gp41 is metastable and that the conformational changes allow gp41 to achieve its more stable active state. Furthermore, these conformational changes are irreversible.
As a drug target
The interaction of gp41 fusion peptides with the target cell causes a formation of an intermediate, pre-hairpin structure which bridges and fuses the viral and host membranes together. The pre-hairpin structure has a relatively long half-life which makes it a potential target for therapeutic intervention and inhibitory peptides.
Enfuvirtide (also known as T-20) is a 36-residue alpha-peptide fusion inhibitor drug that binds to the pre-hairpin structure and prevents membrane fusion and HIV-1 entry into the cell. The vulnerability of this structure has initiated the development of a whole spectrum of fusion-preventing drugs. In developing these drugs, researchers face challenges because the conformation that allows for inhibition occurs very quickly and then rearranges. Enfuvirtide specifically has low oral bioavailability and is quickly processed and expelled by the body. Certain strains of HIV have also developed resistance to T-20. In order to circumvent the difficulties that come with using T-20, researchers have sought out peptide-based inhibitors. A variety of naturally occurring molecules have also been shown to bind gp41 and prevent HIV-1 entry.
The MPER is one region that has been studied as a potential target because it can be recognized by broadly neutralizing antibodies (bNAbs), but it has not proved a very good target: the immune response it elicits is weak, and it is the portion of gp41 that enters the cell membrane, where antibodies cannot reach it. In addition to antigen-binding regions on MPER kinks, other targets could prove to be effective antigen-binding regions, including the hydrophobic pockets of the NHR core that forms following the conformational change in gp41 that creates the six-helix bundle. These pockets could potentially serve as targets for small-molecule inhibitors. The fusion peptide on the N-terminus of gp41 is also a potential target because it contains neutralizing antibody epitopes. N36 and C34, NHR- and CHR-based peptides (short sequences of amino acids that mimic portions of gp41), can also act as effective antigens because of their high-affinity binding. In addition to having a much higher binding affinity than its monomer, C34 also inhibits T-20-resistant HIV well, which makes it a potentially good alternative to treatments involving enfuvirtide. Small-molecule inhibitors that are able to bind to two hydrophobic pockets at once have been shown to be 40-60 times more potent and have potential for further development. Most recently, the gp120-gp41 interface is being considered as a target for bNAbs.
References
External links
HIV/AIDS
Glycoproteins
Viral structural proteins | Gp41 | Chemistry | 1,433 |
32,196,401 | https://en.wikipedia.org/wiki/Integer%20broom%20topology | In general topology, a branch of mathematics, the integer broom topology is an example of a topology on the so-called integer broom space X.
Definition of the integer broom space
The integer broom space X is a subset of the plane R2. Assume that the plane is parametrised by polar coordinates. The integer broom contains the origin and the points (n, θ) such that n is a non-negative integer and θ ∈ {1/k : k ∈ Z+}, where Z+ is the set of positive integers. Geometrically, the space consists of a collection of convergent sequences: for fixed n, the points (n, 1/k) lie on the circle with centre (0, 0) and radius n, and converge to the point (n, 0) as k grows.
Definition of the integer broom topology
We define the topology on X by means of a product topology. In polar coordinates, the integer broom space is contained in the product U × V, where U = {n ∈ Z : n ≥ 0} and V = {1/k : k ∈ Z+} ∪ {0}. The integer broom topology on X is the product topology induced by giving U the right order topology, and V the subspace topology from R.
Properties
The integer broom space, together with the integer broom topology, is a compact topological space. It is a T0 space, but it is neither a T1 space nor a Hausdorff space. The space is path connected, but neither locally connected nor arc connected.
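As a worked illustration of the separation properties, consider the two limit points p = (1, 0) and q = (2, 0); the following standard argument is sketched here using the definitions above:

\text{Every nonempty open set of } U \text{ in the right order topology is a ray } \{n \in U : n > a\},
\text{so every basic open set containing } p = (1,0) \text{ has the form } \{n > a\} \times W \text{ with } a < 1 \text{ and } 0 \in W,
\text{and hence also contains } q = (2,0): \text{ the space is not } T_1.
\text{On the other hand, } \{n > 1\} \times V \text{ contains } q \text{ but not } p, \text{ so the two points are distinguishable and } X \text{ is } T_0.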
See also
Comb space
Infinite broom
List of topologies
References
General topology
Topological spaces | Integer broom topology | Mathematics | 291 |
51,905,555 | https://en.wikipedia.org/wiki/ProbOnto | ProbOnto is a knowledge base and ontology of probability distributions. ProbOnto 2.5 (released on January 16, 2017) contains over 150 uni- and multivariate distributions and alternative parameterizations, more than 220 relationships and re-parameterization formulas, and also supports the encoding of empirical and univariate mixture distributions.
Introduction
ProbOnto was initially designed to facilitate the encoding of nonlinear mixed-effect models and their annotation in the Pharmacometrics Markup Language (PharmML) developed by DDMoRe, an Innovative Medicines Initiative project. However, owing to its generic structure, ProbOnto can be applied in other platforms and modeling tools for the encoding and annotation of diverse models applicable to discrete (e.g. count, categorical and time-to-event) and continuous data.
Knowledge base
The knowledge base stores for each distribution:
Probability density or mass functions and where available cumulative distribution, hazard and survival functions.
Related quantities such as mean, median, mode and variance.
Parameter and support/range definitions and distribution type.
LaTeX and R code for mathematical functions.
Model definition and references.
Relationships
ProbOnto stores, in version 2.5, over 220 relationships between univariate distributions, with re-parameterizations as a special case. While this form of relationship is often neglected in the literature, with authors concentrating on one particular form for each distribution, such relationships are crucial from the interoperability point of view. ProbOnto focuses on this aspect and features more than 15 distributions with alternative parameterizations.
Alternative parameterizations
Many distributions are defined by mathematically equivalent but algebraically different formulas. This leads to problems when exchanging models between software tools, as the following examples illustrate.
Normal distribution
The normal distribution can be defined in at least three ways:
Normal1(μ,σ) with mean, μ, and standard deviation, σ
Normal2(μ,υ) with mean, μ, and variance, υ = σ^2 or
Normal3(μ,τ) with mean, μ, and precision, τ = 1/υ = 1/σ^2.
Re-parameterization formulas
The following formulas can be used to convert between the three different forms of the normal distribution: υ = σ^2 and τ = 1/υ = 1/σ^2, and conversely σ = √υ = 1/√τ.
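A minimal Python sketch of these conversions; the function names are illustrative and not part of the ProbOnto API:

import math

# Convert Normal2 (mean, variance) and Normal3 (mean, precision)
# to Normal1 (mean, standard deviation).
def normal2_to_normal1(mu, v):
    return mu, math.sqrt(v)

def normal3_to_normal1(mu, tau):
    return mu, 1.0 / math.sqrt(tau)

# Round trip: squaring the standard deviation and converting back recovers it.
mu, sigma = 1.5, 0.8
m, s = normal2_to_normal1(mu, sigma ** 2)
assert m == mu and abs(s - sigma) < 1e-12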
Log-normal distribution
In the case of the log-normal distribution there are more options. This is due to the fact that it can be parameterized in terms of parameters on the natural and log scale, see figure.
The available forms in ProbOnto 2.0 are
LogNormal1(μ,σ) with mean, μ, and standard deviation, σ, both on the log-scale
LogNormal2(μ,υ) with mean, μ, and variance, υ, both on the log-scale
LogNormal3(m,σ) with median, m, on the natural scale and standard deviation, σ, on the log-scale
LogNormal4(m,cv) with median, m, and coefficient of variation, cv, both on the natural scale
LogNormal5(μ,τ) with mean, μ, and precision, τ, both on the log-scale
LogNormal6(m,σg) with median, m, and geometric standard deviation, σg, both on the natural scale
LogNormal7(μN,σN) with mean, μN, and standard deviation, σN, both on the natural scale
ProbOnto knowledge base stores such re-parameterization formulas to allow for a correct translation of models between tools.
Examples for re-parameterization
Consider the situation in which one would like to run a model using two different optimal design tools, e.g. PFIM and PopED. The former supports the LN2 parameterization, the latter LN7. Re-parameterization is therefore required; otherwise the two tools would produce different results.
For the transition LN2 → LN7 the following formulas hold: μN = exp(μ + υ/2) and σN = exp(μ + υ/2)·√(exp(υ) − 1).
For the reverse transition LN7 → LN2 the following formulas hold: μ = ln(μN/√(1 + σN^2/μN^2)) and υ = ln(1 + σN^2/μN^2).
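A small Python sketch of the two transitions above, with a round-trip check; the function names are illustrative, and PFIM and PopED each have their own input conventions:

import math

def ln2_to_ln7(mu, v):
    # Log-scale mean/variance -> natural-scale mean/standard deviation.
    mean_n = math.exp(mu + v / 2.0)
    return mean_n, mean_n * math.sqrt(math.exp(v) - 1.0)

def ln7_to_ln2(mean_n, sd_n):
    # Natural-scale mean/standard deviation -> log-scale mean/variance.
    v = math.log(1.0 + (sd_n / mean_n) ** 2)
    return math.log(mean_n) - v / 2.0, v

# Round trip: LN2 -> LN7 -> LN2 recovers the original parameters.
mu, v = 0.3, 0.04
back = ln7_to_ln2(*ln2_to_ln7(mu, v))
assert all(abs(a - b) < 1e-12 for a, b in zip(back, (mu, v)))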
All remaining re-parameterization formulas can be found in the specification document on the project website.
Ontology
The knowledge base is built from a simple ontological model. At its core, a probability distribution is an instance of the class thereof, a specialization of the class of mathematical objects. A distribution relates to a number of other individuals, which are instances of various categories in the ontology. For example, these are parameters and related functions associated with a given probability distribution. This strategy allows for the rich representation of attributes and relationships between domain objects. The ontology can be seen as a conceptual schema in the domain of mathematics and has been implemented as a PowerLoom knowledge base. An OWL version is generated programmatically using the Jena API.
Outputs of ProbOnto are provided as supplementary materials and published on, or linked from, the probonto.org website. The OWL version of ProbOnto is available via the Ontology Lookup Service (OLS) to facilitate simple searching and visualization of the content. In addition, the OLS API provides methods to programmatically access ProbOnto and to integrate it into applications. ProbOnto is also registered on the BioSharing portal.
ProbOnto in PharmML
A PharmML interface is provided in form of a generic XML schema for the definition of the distributions and their parameters. Defining functions, such as probability density function (PDF), probability mass function (PMF), hazard function (HF) and survival function (SF), can be accessed via methods provided in the PharmML schema.
Use example
This example shows how the zero-inflated Poisson distribution is encoded by using its code name and declaring the names of its parameters ('rate' and 'probabilityOfZero'). The model parameters Lambda and P0 are assigned to the parameter code names.
<Distribution>
<po:ProbOnto name="ZeroInflatedPoisson1">
<po:Parameter name="rate">
<ct:Assign>
<ct:SymbRef symbIdRef="Lambda" />
</ct:Assign>
</po:Parameter>
<po:Parameter name="probabilityOfZero">
<ct:Assign>
<ct:SymbRef symbIdRef="P0" />
</ct:Assign>
</po:Parameter>
</po:ProbOnto>
</Distribution>
To specify any given distribution unambiguously using ProbOnto, it is sufficient to declare its code name and the code names of its parameters.
More examples and a detailed specification can be found on the project website.
See also
List of probability distributions
Ontology (computer science)
Relationships among probability distributions
Web Ontology Language
References
External links
Leemis chart
Ultimate Univariate Probability Distribution Explorer – most likely the largest, free collection of univariate distributions and their features.
UncertML
Probability distributions | ProbOnto | Mathematics | 1,412 |
663,680 | https://en.wikipedia.org/wiki/Northeast%20%28disambiguation%29 | Northeast is a compass point.
Northeast, north-east, north east, northeastern or north-eastern or north eastern may also refer to:
Northeast (direction), an intercardinal direction
Places
Africa
North East (Nigeria)
North Eastern Province (Kenya)
North-East District (Botswana)
North East Region (Ghana)
North Eastern District, Eritrea
Asia and Oceania
Northeast India or the Seven Sister States
North East Delhi, a district of Delhi
North Eastern Province, Sri Lanka
Northeast China or Manchuria
North Eastern (General Electors Communal Constituency, Fiji), an electoral division of Fiji
North-East Region, Singapore
North East Community Development Council, Singapore
Northeast Province (IMCRA region), an Australian marine biogeographic province
Northeast (Vietnam)
Tōhoku region or "Northeast Region", Japan
United Kingdom
North East (London Assembly constituency), a constituency of the London Assembly
North East (London sub region), a sub-region of the London Plan
North East England, one of the official government regions of England
North East Scotland (Scottish Parliament electoral region), an electoral region, but in wider use to refer to the area made up of Aberdeen, Aberdeenshire and Moray
North East (Dundee ward), Scotland
North East (Glasgow ward), Scotland
America
North East, Maryland
North East, New York
North East, Pennsylvania
Northeast, Minneapolis (sometimes referred to as Nordeast)
Northeast Region, Brazil, an official grouping of states for economic and statistical purposes
Atlantic Northeast, a region of North America
Northeast, Washington, D.C., the northeast quadrant of Washington, D.C.
Northeastern United States
Northeast Community, a neighborhood in Tampa, Florida
Northeast (Billings), a section of Billings, Montana
Northeast Township, Adams County, Illinois
Northeast Township, Orange County, Indiana
People
Sam Northeast (born 1989), English cricketer
Airlines
Northeast Airlines, a now defunct US airline which began operations in 1931 and merged with Delta Air Lines 1972
Northeast Airlines (UK), a now defunct British airline which began operations in 1951 as BKS and was merged into British Airways in 1976
Northeast Airlines (China), a planned start-up airline to be based in Shenyang, People's Republic of China
Northeast Express Regional Airlines, a now defunct Maine-based regional airline which operated as an affiliate of Northwest Airlines
Sports
NorthEast United FC, football team based in Guwahati, Assam, India which competes in Indian Super League
Northeastern Warriors, badminton team based in Guwahati, Assam, India which competes in Premier Badminton League
North East Re-Organising Cultural Association FC, football club based in Imphal, Manipur, India which competes in I-League
North East Tigers, boxing team which competes in Super Boxing League (India)
Northeastern Huskies, are athletic teams representing Northeastern University in Boston, Massachusetts, United States
Other uses
Northeast (film), a 2005 Argentine film
North East (film), a 2016 Nigerian romantic drama film
North East Island (disambiguation)
Northeastern University, a university in Boston, Massachusetts, USA
Northeastern University (disambiguation)
Northeastern Conference, a high school athletic conference in Massachusetts
Northeastern Limited, named passenger train of the Illinois Central, from Shreveport, Louisiana to Meridian, Mississippi.
See also
Nord-Est (disambiguation), French for northeast
Nor'easter, a storm
Nord-Ost, a Russian musical theatre production
Orientation (geometry) | Northeast (disambiguation) | Physics,Mathematics | 727 |
49,223,421 | https://en.wikipedia.org/wiki/Weissman%20score | The Weissman score is a performance metric for lossless compression applications. It was developed by Tsachy Weissman, a professor at Stanford University, and Vinith Misra, a graduate student, at the request of producers for HBO's television series Silicon Valley, a show about a fictional tech start-up working on a data compression algorithm. It compares both the required time and the compression ratio of a measured application with those of a de facto standard for the given data type.
The formula is W = α(r/r̄)(log T̄ / log T), where r is the compression ratio, T is the time required to compress, r̄ and T̄ are the same metrics for a standard compressor, and α is a scaling constant.
The Weissman score has been used by Daniel Reiter Horn and Mehant Baid of Dropbox to explain real-world work on lossless compression. According to the authors it "favors compression speed over ratio in most cases."
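A direct Python transcription of the formula; the sample numbers below are invented, and the second call illustrates the unit dependence discussed under Limitations (expressing the same times in different units changes the score):

import math

def weissman(r, t, r_std, t_std, alpha=1.0):
    # r, t: compression ratio and time of the measured compressor;
    # r_std, t_std: the same metrics for the standard compressor.
    return alpha * (r / r_std) * (math.log(t_std) / math.log(t))

print(weissman(2.1, 120.0, 2.0, 180.0))   # times in seconds
print(weissman(2.1, 2.0, 2.0, 3.0))       # the same times in minutes give a different score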
Example
This example shows the score for the data of the Hutter Prize, using paq8f as the standard and 1 as the scaling constant.
Limitations
Although the value is relative to the standard against which it is compared, the unit used to measure the times changes the score (see examples 1 and 2). This is a consequence of the requirement that the argument of the logarithmic function be dimensionless. The time values also cannot be 1 or less in the chosen unit, because the logarithm of 1 is 0 (examples 3 and 4) and the logarithm of any value less than 1 is negative (examples 5 and 6); these cases would yield scores that are zero (even when the metrics differ), undefined, or negative (even when the performance is better).
See also
Benchmark
Coding theory
Information theory
Phred quality score
References
Benchmarks (computing)
Data compression
Silicon Valley (TV series)
Software metrics | Weissman score | Mathematics,Technology,Engineering | 387 |
55,823,640 | https://en.wikipedia.org/wiki/Georges%20Tiercy | Georges César Tiercy (1886–1955) was a Swiss astronomer and the 7th director of the Observatoire de Genève from 1928 to 1956.
Tiercy received his bachelor of science degree in 1913 from the University of Paris and his Ph.D. in science and mathematics from the University of Geneva in 1915. He was a master in a private college in Ouchy from 1908 to 1912. He taught mathematics in various schools in Geneva from 1913 to 1927 and was a privat-docent at the University of Geneva from 1915. After an internship at the observatories of Hamburg in 1927 and of Arcetri in Florence in 1927–1928, Tiercy became director of the observatory of Geneva in 1928. At the University of Geneva he was professor ordinarius of astronomy from 1928 to 1950 and rector from 1948 to 1950. At the University of Lausanne he was professor extraordinarius of astronomy from 1936 to 1953 and professor ordinarius from 1953 to 1955. He was the author or co-author of more than 250 papers.
Tiercy was an Invited Speaker of the ICM in 1928 at Bologna and in 1932 at Zürich. He was president in 1931 of the Société de Physique et d'Histoire Naturelle (S.P.H.N.) of Geneva and was one of the founders in 1952 of the Swiss National Science Foundation.
He did research on theoretical physics, astrophysics, geodesy, meteorology and chronometry.
Selected publications
References
20th-century Swiss astronomers
1886 births
1955 deaths
Rectors of the University of Geneva | Georges Tiercy | Astronomy | 319 |
18,784,729 | https://en.wikipedia.org/wiki/Natural-language%20programming | Natural-language programming (NLP) is an ontology-assisted way of programming in terms of natural-language sentences, e.g. English. A structured document with content, sections and subsections for explanations of sentences forms an NLP document, which is actually a computer program. Natural-language programming should not be confused with natural-language interfacing or voice control, where a program is first written and then communicated with through natural language using an added-on interface. In NLP the functionality of a program is organised only for the definition of the meaning of sentences. For instance, NLP can be used to represent all the knowledge of an autonomous robot. Having done so, its tasks can be scripted by its users so that the robot executes them autonomously while keeping to prescribed rules of behaviour as determined by the robot's user. Such robots are called transparent robots, as their reasoning is transparent to users, and this develops trust in robots. Natural-language use and natural-language user interfaces include Inform 7, a natural programming language for making interactive fiction; Shakespeare, an esoteric natural programming language in the style of the plays of William Shakespeare; and Wolfram Alpha, a computational knowledge engine using natural-language input. Some methods for program synthesis are based on natural-language programming.
Interpretation
The smallest unit of statement in NLP is a sentence. Each sentence is stated in terms of concepts from the underlying ontology, attributes in that ontology, and named objects in capital letters. In an NLP text every sentence unambiguously compiles into a procedure call in the underlying high-level programming language, such as MATLAB, Octave, SciLab, Python, etc.
Symbolic languages such as the Wolfram Language are capable of interpreted processing of queries by sentences. This can allow interactive requests such as those implemented in Wolfram Alpha. The difference between these and NLP is that the latter builds up a single program or a library of routines that are programmed through natural-language sentences, using an ontology that defines the available data structures in a high-level programming language.
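As a toy illustration of the sentence-to-procedure-call idea, the sketch below maps one sentence template from the example program further down to a Python procedure via a hand-written pattern table; this is an invented example, not the sEnglish or PharmML mechanism:

import re

def define_surface_weights(name, values):
    # Procedure implementing the meaning of the sentence; here it just reports.
    print(name, "=", values)

# Sentence pattern -> procedure implementing its meaning (a toy "ontology").
ONTOLOGY = {
    r'Define surface weights (\w+) as "(.+)"\.': define_surface_weights,
}

def run_sentence(sentence):
    for pattern, procedure in ONTOLOGY.items():
        match = re.fullmatch(pattern, sentence)
        if match:
            return procedure(*match.groups())
    raise ValueError("sentence not defined in the ontology: " + sentence)

run_sentence('Define surface weights Alpha as "[0.5, 0.5]".')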
An example text from an English language natural-language program is as follows:
If U_ is 'smc01-control', then do the following. Define surface weights Alpha as "[0.5, 0.5]".
Initialise matrix Phi as a 'unit matrix'. Define J as the 'inertia matrix' of Spc01. Compute
matrix J2 as the inverse of J. Compute position velocity error Ve and angular velocity error
Oe from dynamical state X, guidance reference Xnow. Define the joint sliding surface G2
from the position velocity error Ve and angular velocity error Oe using the surface weights
Alpha. Compute the smoothed sign function SG2 from the joint sliding surface G2 with sign
threshold 0.01. Compute special dynamical force F from dynamical state X and surface
weights Alpha. Compute control torque T and control force U from matrix J2, surface weights
Alpha, special dynamical force F, smoothed sign function SG2. Finish conditional actions.
that defines a feedback control scheme using a sliding mode control method.
Software paradigm
Natural-language programming is a top-down method of writing software. Its stages are as follows:
Definition of an ontology taxonomy of concepts needed to describe tasks in the topic addressed. Each concept and all their attributes are defined in natural-language words. This ontology will define the data structures the NLP can use in sentences.
Definition of one or more top-level sentences in terms of concepts from the ontology. These sentences are later used to invoke the most important activities in the topic.
Definition of each of the top-level sentences in terms of a sequence of sentences.
Definition of each of the lower-level sentences in terms of other sentences or by a simple sentence of the form Execute code "...". where ... stands for code in the associated high-level programming language.
Repetition of the previous step until no sentences are left undefined. During this process each sentence can be classified to belong to a section of the document to be produced in HTML or LaTeX format to form the final natural-language program.
Testing the meaning of each sentence by executing its code using testing objects.
Providing a library of procedure calls (in the underlying high-level language) which are needed in the code definitions of some low-level-sentence meanings.
Providing a title, author data and compiling the sentences into an HTML or LaTeX file.
Publishing the natural-language program as a webpage on the Internet or as a PDF file compiled from the LaTeX document.
Publication value of natural-language programs and documents
A natural-language program is a precise formal description of some procedure that its author created. It is human readable and can also be read by a suitable software agent. For example, a web page in NLP format can be read by a personal-assistant software agent, and the user can ask the agent to execute some sentences, i.e. to carry out some task or answer a question. There is a reader agent available for English interpretation of HTML-based NLP documents that a person can run on a personal computer.
Contribution of natural-language programs to machine knowledge
An ontology class in a natural-language program is not a concept in the sense in which humans use concepts. Concepts in an NLP are examples (samples) of generic human concepts. Each sentence in a natural-language program either (1) states a relationship in a world model, (2) carries out an action in the environment, (3) carries out a computational procedure, or (4) invokes an answering mechanism in response to a question.
A set of NLP sentences, with associated ontology defined, can also be used as a pseudo code that does not provide the details in any underlying high level programming language. In such an application the sentences used become high level abstractions (conceptualisations) of computing procedures that are computer language and machine independent.
AI in Natural Language Programming
Researchers have started to experiment with natural-language programming environments that take plain-language prompts and use AI (specifically large language models) to turn natural language into formal code. For example, Spatial Pixel created a natural-language programming environment that turns natural language into P5.js code through OpenAI's API. In 2021 OpenAI developed a natural-language programming environment for Codex, its programming large language model.
See also
Controlled natural language
Context-free language
Domain-specific language (or DSL)
End-user programming
Knowledge representation
Natural-language processing
Source-code generation
Very high-level programming language
Programming languages with English-like syntax
AppleScript
Attempto Controlled English
COBOL
ClearTalk
FLOW-MATIC
HyperTalk
Inform 7
JOSS
SenseTalk
Software AG
Transcript
Structured Query Language (or SQL)
xTalk
Programming languages with other natural language-like vocabulary or syntax
Non-English-based programming languages
References
Bibliography
Books
Natural Language Programming of Agents and Robotic Devices: publishing for agents and humans in sEnglish by S M Veres, , London, June 2008.
Papers at conferences
Sliding mode control of autonomous spacecraft. (half written in sEnglish) by S M Veres an N K Lincoln, Proc. TAROS’2008, Towards Autonomous Robotic Systems, Edinburgh, 1–3 September 2008.
Program synthesis from natural language specifications
Raza, Mohammad, Sumit Gulwani, and Natasa Milic-Frayling. "Compositional Program Synthesis from Natural Language and Examples." IJCAI. 2015.
Green, Cordell. "A Summary of the PSI Program Synthesis System." IJCAI. Vol. 5. 1977.
External links
English Script (dormant since 2016)
Plain English Programming Programming language using English sentences in ASCII.
SEMPRE a toolkit for training semantic parsers
sysbrain.com sEnglish Editor in C++/ROS for robot programming to develop transparent robots.
wy-lang.org "Programming Language for the ancient Chinese"
How natural should a natural interface be? thoughts on how "natural" the Ubiquity interface (dormant since 2009)
Metafor turns English to code (dormant since 2005)
Computer knowledge representation format, system, methods, and applications US patent re: hyperlinking to .who/what/where/when/how XML files that embed NL
Algorithm description languages
Structured English
Computer programming
Natural language processing | Natural-language programming | Technology,Engineering | 1,712 |
10,991,523 | https://en.wikipedia.org/wiki/Total%20human%20ecosystem | Total human ecosystem (THE) is an ecocentric concept initially proposed by ecology professors Zeev Naveh and Arthur S. Lieberman in 1994.
History of the concept
Naveh and Lieberman proposed a holistic, ecocentric concept of the total human ecosystem in order to study Anthropocene ecology and improve land use planning and environmental management within an integrated and interdisciplinary approach. In Naveh's words, the total human ecosystem is "the highest co-evolutionary ecological entity on earth with landscapes as its concrete three-dimensional 'Gestalt' systems, forming the spatial and functional matrix for all organisms". This concept (or meta-concept) integrates human systems (the technosphere, together with the conceptual space of the human noosphere) and natural systems (the geophysical eco-space of the Earth's biosphere).
Zev Naveh (1919-2011), the major contributor to this concept, was a professor in landscape ecology at the Technion, Israel Institute of Technology, Haifa. Until 1965 he worked as a range and pasture specialist in Israel and Tanzania. His research at the Technion was devoted to human impacts on Mediterranean landscapes, fire ecology and dynamic conservation management, and the introduction of drought resistant plants for multi-beneficial landscape restoration and beautification.
Almo Farina, who also developed the concept from 2000 onwards, is also a professor of ecology at the Urbino University, Faculty of Environmental Sciences, in Italy.
Concepts and epistemology
The interaction and co-evolution of human and natural ecosystems are the driving forces of the current Earth system. The total human ecosystem meta-concept aims to integrate the bio- and geo-centric approaches derived from the natural sciences with approaches derived from the social sciences and the humanities, in order to prevent further environmental degradation and to drive natural and human systems towards a sustainable future.
A natural ecosystem within this concept is solar-energy powered, self-organizing and self-creating. The human ecosystem is fossil-energy powered, with high input and throughput, and can be divided into two sub-ecosystems: urban-industrial and agro-industrial. An ecosystem is realised in space as an ecotope, and the system of ecotopes is the landscape: natural, semi-natural and urban-industrial landscapes are the tangible, three-dimensional physical systems that together form the total human ecosystem. The total human ecosystem also comprises the domain of information, perceptions (in landscape ecology, the ecofield concept), knowledge, feeling and consciousness, enabling human (but also biological) self-awareness.
A special case of landscapes inside of the total human ecosystem are the cultural landscapes in which the relationships between human activity (as an effective, ecology-based, land or sea stewardship) have created ecological, socioeconomic and cultural patterns and feedback mechanisms that preserve biological and cultural diversity and maintain or even improve the ecosystem's resilience and resistance.
See also
Human ecosystem
Landscape ecology
Environmental geography
Ecosystem
Sustainability
References
Farina, A., 2006. Principles and Methods in Landscape Ecology: Towards a Science of the Landscape, Springer, Dordrecht, 412 p.
Ecology
Ecosystems | Total human ecosystem | Biology | 636 |
65,804,540 | https://en.wikipedia.org/wiki/Resin%20transfer%20moulding | Resin transfer moulding (RTM) is a process for producing high performance composite components.
Procedure
It is a process using a rigid two-sided mould set that forms both surfaces of the panel. Usually the mould is made from aluminium or steel, but composite moulds are sometimes used. The two sides fit together to make a mould cavity. The distinctive feature of resin transfer moulding is that the reinforcement materials are placed into this cavity and the mould set is closed before the matrix material is introduced. Resin transfer moulding encompasses numerous variants which differ in how the resin is introduced to the reinforcement in the mould cavity. These range from the RTM methods used in out-of-autoclave composite manufacturing for high-tech aerospace components, to vacuum infusion (for resin infusion see also boat building), to vacuum-assisted resin transfer moulding (VARTM). The process can be performed at either ambient or elevated temperature and is suitable for manufacturing high-performance composite components in medium volumes (1,000s to 10,000s of parts).
References
Composite materials
Composite material fabrication techniques | Resin transfer moulding | Physics | 234 |
17,369,404 | https://en.wikipedia.org/wiki/Transit%20privatization | The privatization of transport refers to the process of shifting responsibility for the provision of public transport services from the public to the private sector.
Introduction
Transit privatization is highly controversial, with proponents claiming great potential benefits and detractors pointing to cases where privatization has been highly problematic.
One important argument in this respect is the consideration of public transport as a merit good. The rationale is that governments should guarantee a basic level of public transport service to deprived customer groups even where doing so is economically irrational.
While the subsidization of public transport is essentially uncontested, the important question in the public-versus-private debate concerns the optimal level of subsidy. There are no definitive answers to this issue, but Japan's policy of a relatively free transportation market is considered to function well in providing transport to the country's three major metropolitan areas. The country's flagship high-speed line, the Tokaido Shinkansen, has operated for almost half a century without a single derailment or collision, and in 2007 its average departure delay was a mere 18 seconds along its 320-mile route.
Impact
Price
The 1970s were an era of deregulation within the U.S., when modes of public transport (railroads in 1976 and airlines in 1978) were deregulated. Ticket prices increase or decrease based on the service provided and the amount of public subsidy.
Different approaches to privatizing railways were taken in the U.S. and Europe. In Europe, rail operations were separated from rail infrastructure, while the U.S. railroad system is widely deregulated and vertically integrated.
Another example is the publicly owned bus companies in the U.K., which were reorganized in 1985 into private companies (with the exception of London). Cost savings mainly resulted from reduced employment costs and increased productivity.
Service quality
A number of innovations were adopted by private bus companies in the pursuit of profits, the most important being the introduction of minibuses, which increased service levels.
However, separating rail operations from rail infrastructure turned out to make coordination of rail operations and infrastructure maintenance more difficult.
Safety
In contrast to the changes to the U.K. railway industry, the changes to the U.K. bus industry as a result of privatization had no effect on safety.
In the U.K. privatising railways entailed cost overruns, accidents and, finally, the bankruptcy of the rail infrastructure company. For the rest of Europe the separation of rail operations from rail infrastructure did not cause substantial problems. The McNulty review of the UK railway industry in 2011 found that the fragmentation of the industry in the course of privatisation had caused a permanent increase in costs of between 20% and 30%.
Airline market reform in Europe, by contrast, has been successful. Today a single European airline market exists, leading to improved productivity and decreased ticket prices. As in the U.S., low-cost carriers have affected the market and thus improved resource allocation.
Notes
References
International Transport Forum, (2008), Privatisation and Regulation of Urban Transit Systems, OECD Publishing.
Clifford Winston, (2010), Last Exit: Privatization and Deregulation of the U.S. Transportation System, Brookings Institution.
Black William R., (2003), Transportation: A Geographical Analysis, The Guilford Press.
Cooper James, Mundy Ray, Nelson John, (2010), Taxi!: Urban Economies and the Social and Transport Impacts of the Taxicab, Ashgate Publishing Limited.
European Conference of Ministers of Transport, (2005), 16th International Symposium on Theory and Practice in Transport Economics, OECD Publishing.
Klein Daniel B., Moore Adrian T., Reja Binyam, (1997), Curb Rights: A Foundation for Free Enterprise in Urban Transit, Brookings Institution.
See also
Rail deregulation in the U.S.
Rail deregulation in the U.K.
Bus deregulation in the U.K.
Airline deregulation
Railway nationalisation
Economics of regulation
Privatization | Transit privatization | Physics | 820 |
41,641,823 | https://en.wikipedia.org/wiki/Hydrophobic%20light-activated%20adhesive | Hydrophobic light-activated adhesive (HLAA) is a type of glue that sets in seconds, but only after exposure to ultraviolet light. One biocompatible, biodegradable HLAA is under consideration for use in human tissue repair as a replacement for sutures, staples and other approaches.
History
The glue was developed in a collaboration between Boston Children's Hospital, MIT and Harvard-affiliated Brigham and Women's Hospital. It was inspired by the viscous, water-repellant fluids secreted by animals such as slugs, sandcastle worms and insect footpads.
Heart repair
HLAA has been used experimentally to repair holes in pig hearts. It provides a hemostatic seal that adheres to the heart tissue despite immersion in blood. It is not rejected by the body and is sufficiently adhesive and elastic that it is not pulled loose or damaged by the contractions of the heart muscle. It harmlessly biodegrades over time. Because no stitching or stapling is needed, procedures for applying glue-treated patches are potentially considerably less invasive than the alternatives. The polymer becomes physically entangled with collagen and other proteins on the tissue surface rather than adhering via a chemical reaction.
Alternatives
Sutures can damage heart tissue and take too long to apply. Staples can also damage heart tissue. Existing surgical adhesives can be toxic, and they can become unstuck in wet, dynamic environments such as the heart. As a result, infants often require subsequent operations to "replug" the hole. One other surgical adhesive cures when exposed to water.
References
External links
Packaging Adhesives
Adhesives
Biodegradable materials
Surgical procedures and techniques | Hydrophobic light-activated adhesive | Physics,Chemistry | 351 |
46,182,552 | https://en.wikipedia.org/wiki/Localization%20and%20Urbanization%20Economies | Localization and Urbanization Economies are two types of external economies of scale, or agglomeration economies. External economies of scale result from an increase in the productivity of an entire industry, region, or economy due to factors outside of an individual company. There are three sources of external economies of scale: input sharing, labor market pooling, and knowledge spillovers (Marshall, 1920).
Localization economies occur when an increase in the size of an industry in a city leads to an increase in productivity of a particular activity. Alfred Marshall (1920) introduced the idea that the localization of industry can increase productivity in his book Principles of Economics. The highly concentrated high tech industry in Silicon Valley exemplifies industrial localization. Although the cost of labor and land in Silicon Valley is very high, high tech firms continue to locate there because of the added benefit they receive from their proximity to a high-skilled labor pool. The size of the high tech industry creates positive externalities for each firm located in Silicon Valley.
Urbanization economies arise when the size of the city leads to an increase in productivity. Los Angeles exemplifies urbanization economies in that it has no single dominant industry, yet continues to grow. Firms which locate in Los Angeles benefit from the common resources and large labor pool found in the city. Common resources such as roads, buildings and power supply benefit firms in cities regardless of their industry. Also, firms have better access to labor by locating in cities. The urban environment creates positive externalities that benefit several different industries. Jane Jacobs is often credited with the idea that urban diversity and a city’s size leads to agglomeration economies. However, Marshall’s (1920) discussion of urban diversity predates her work.
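One stylized way to formalize both effects, in the spirit of urban economics textbooks (an illustrative sketch, not a formulation taken from Marshall or Jacobs), is to scale each firm's output by an external shift factor that the individual firm does not control:

y_i = g(S) · f(k_i, l_i), with g′(S) > 0,

where f is an ordinary production function in the firm's own capital k_i and labor l_i. For localization economies, S is the size of the firm's own industry within the city; for urbanization economies, S is the size of the city as a whole. Because each firm takes g(S) as given, the productivity gain is external to the firm, which is what makes these agglomeration economies rather than internal returns to scale.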
References
Urban planning | Localization and Urbanization Economies | Engineering | 358 |
5,127,273 | https://en.wikipedia.org/wiki/Lavizan-Shian | Shian is one of the neighborhoods in the northeast of Tehran.
The Shian neighborhood, located in District 4 of Tehran, lies north of Lavizan Forest Park and south of the Aja residential and military complex.
The streets of the neighborhood are named Shian and Shian 1 through 7. The neighborhood has a Hosseinieh, and Lavizan Hospital is located nearby.
The highway under construction in Darabad is the closest highway to the neighborhood. A forest park named Shian and the Shian Hotel are among the neighborhood's sights. The Shian Village Hotel was built in 1375 SH (1996–97) by the District 4 Municipality in the northeastern part of Lavizan Forest Park.
Lavizan-Shian was an alleged undeclared nuclear site in north-eastern Tehran, Iran. The site was under investigation by the International Atomic Energy Agency (IAEA) as a potential undeclared nuclear site. According to Reuters, claims by the US that topsoil has been removed and the site had been sanitized could not be verified by IAEA investigators who visited Lavizan. In Paragraph 39 of the IAEA's November 2005 report on Iran, the IAEA stated "The information provided by Iran appeared to be coherent and consistent with its explanation of the razing of the Lavisan-Shian area."
External links
Destruction at Iranian site raises new questions about Iran's nuclear activities
U.N. probe detects uranium in Iran
IAEA Report November 2005
Nuclear research institutes
Nuclear program of Iran | Lavizan-Shian | Engineering | 317 |
48,944,856 | https://en.wikipedia.org/wiki/Barcelona%20astrolabe | The Barcelona astrolabe is the oldest astrolabe with Carolingian characters that has survived in the Christian Occident.
The French researcher Marcel Destombes found the astrolabe and bequeathed it to the Institut du Monde Arabe in Paris in 1983.
The Academy of Sciences of Barcelona asked to borrow the astrolabe from the musée de l'Institut du Monde Arabe in order to make a replica; today this replica is on display at the Academy of Sciences on the Ramblas.
Description
This astrolabe presents some unusual characteristics. All the engraved characters are in Latin, which led scholars to conclude that the instrument was made in Christian Europe. The pointers of its rete (the "spider") indicate eighteen stars: ten boreal stars and eight austral stars (that is, stars south of the celestial equator). Eleven of them correspond to positions for the date 980 AD. Even so, the names of the stars are not engraved on the brass. The words ROMA and FRANCIA are engraved in Latin characters on one of the tympans (plates), accompanied by the numbers 41-30 in Arabic numerals. The characters are identical to those used at the end of the 10th century in Catalan Latin manuscripts, Catalonia being at that time a march of Carolingian France; this would explain the presence of the word FRANCIA. The figures express degrees and minutes, 41° 30′, which corresponds exactly to the latitude of Barcelona.
The facts that the date 980 AD and the latitude of Barcelona (41–30) are engraved on the instrument, and that the archdeacon of Barcelona at that time was Sunifred Llobet, to whom is attributed the authorship of the Ripoll manuscript ms. 225 (which contains the description of an astrolabe), have led scholars to attribute the instrument to this famous astronomer.
Data
Name: Astrolabe of Barcelona
Place of manufacture: Barcelona, Principality of Catalonia
Date / period: c. 980
Material and technique: engraved brass
Dimensions: 15.2 cm in diameter
Conservation (city): Paris
Conservation (place): musée de l'Institut du Monde Arabe, bequest of Marcel Destombes
Inventory number: AY 86-31
See also
Gerbert of Aurillac
References
External links
‘Carolingian' astrolabe at Qantara – Mediterranean Heritage (in English)
Astronomical instruments
Historical scientific instruments
History of Barcelona | Barcelona astrolabe | Astronomy | 502 |
449,660 | https://en.wikipedia.org/wiki/Cloud%20seeding | [[File:Cloudseedingimagecorrected.jpg|alt=Cloud Seeding|thumb|upright=1.2|This image explaining cloud seeding shows a substance – either silver iodide or dry ice – being dumped onto the cloud, which then becomes a rain shower. The process shown in the upper-right is what is happening in the cloud and the process of condensation upon the introduced material.<ref>Infographic: Naomi E Tesla; Source for image: Fletcher Boland </ref>]
Cloud seeding is a type of weather modification that aims to change the amount or type of precipitation, mitigate hail or disperse fog. The usual objective is to increase rain or snow, either for its own sake or to prevent precipitation from occurring in days afterward.
Cloud seeding is undertaken by dispersing substances into the air that serve as cloud condensation or ice nuclei. Common agents include silver iodide, potassium iodide, and dry ice, with hygroscopic materials like table salt gaining popularity due to their ability to attract moisture. Techniques vary from static seeding, which encourages ice particle formation in supercooled clouds to increase precipitation, to dynamic seeding, designed to enhance convective cloud development through the release of latent heat.
Methods of dispersion include aircraft and ground-based generators, with newer approaches involving drones delivering electric charges to stimulate rainfall, or infrared laser pulses aimed at inducing particle formation. Despite decades of research and application, cloud seeding's effectiveness remains a subject of debate among scientists, with studies offering mixed results on its impact on precipitation enhancement. Some studies suggest it is "difficult to show clearly that cloud seeding has a very large effect".
Environmental and health impacts are considered minimal due to the low concentrations of substances used, but concerns persist over the potential accumulation of seeding agents in sensitive ecosystems. The practice has a long history, with initial experiments dating back to the 1940s, and has been used for various purposes, including agricultural benefits, water supply augmentation, and event planning. Legal frameworks primarily focus on prohibiting the military or hostile use of weather modification techniques, leaving the ownership and regulation of cloud-seeding activities to national discretion. Despite skepticism and debate over its efficacy and environmental impact, cloud seeding continues to be explored and applied in regions worldwide as a tool for weather modification.
Methods
Salts
The most common chemicals used for cloud seeding include silver iodide, potassium iodide and dry ice (solid carbon dioxide). Liquid propane, which expands into a gas, has also been used. It can produce ice crystals at higher temperatures than silver iodide. After promising research, the use of hygroscopic materials, such as table salt, is becoming more popular.
When clouds are seeded, increased snowfall takes place when temperatures within them are between −20 and −7 °C. Freezing nucleation is induced by introducing substances such as silver iodide, which has a crystalline structure similar to that of ice.
In mid-altitude clouds, the usual seeding strategy has been based on the fact that the equilibrium vapor pressure is lower over ice than over water. The formation of ice particles in supercooled clouds allows those particles to grow at the expense of liquid droplets. If sufficient growth takes place, the particles become heavy enough to fall as precipitation from clouds that otherwise would produce no precipitation. This process is known as "static" seeding.
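The size of this vapor-pressure gap can be illustrated with a short calculation. The Python sketch below uses the Magnus approximation for saturation vapor pressure; the coefficients are one commonly used parameterization and should be read as approximate rather than definitive:

import math

def e_sat_water(t_c):
    # Saturation vapor pressure over liquid water, in hPa (Magnus approximation)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def e_sat_ice(t_c):
    # Saturation vapor pressure over ice, in hPa (Magnus approximation)
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

for t in (-7, -10, -15, -20):
    ew, ei = e_sat_water(t), e_sat_ice(t)
    print(f"{t:4} C: over water {ew:6.3f} hPa, over ice {ei:6.3f} hPa, ratio {ew / ei:.2f}")

Over the −20 to −7 °C range quoted above, the water-to-ice ratio grows from roughly 1.07 to about 1.22: air that is merely saturated with respect to liquid droplets is supersaturated with respect to ice, so introduced ice particles grow at the droplets' expense.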
Seeding of warm-season or tropical cumulonimbus (convective) clouds seeks to exploit the latent heat released by freezing. This strategy of "dynamic" seeding assumes that the additional latent heat adds buoyancy, strengthens updrafts, ensures more low-level convergence, and ultimately causes rapid growth of properly selected clouds.
Cloud seeding chemicals may be dispersed by aircraft or by dispersion devices on the ground (generators or canisters fired from anti-aircraft guns or rockets). For release by aircraft, silver iodide flares are ignited and dispersed as an aircraft flies through the inflow of a cloud. When released by devices on the ground, the fine particles are carried downwind and upward by air currents after release.
Electric charges
Since 2021, the United Arab Emirates have been using drones equipped with a payload of electric-charge emission instruments and customized sensors that fly at low altitudes and deliver an electric charge to air molecules. This method produced a significant rainstorm in July 2021. For instance, in Al Ain it rained 6.9 millimeters on 20–21 July.
Infrared laser pulses
An electronic mechanism was tested in 2010, when infrared laser pulses were directed to the air above Berlin by researchers from the University of Geneva. The experimenters posited that the pulses would encourage atmospheric sulfur dioxide and nitrogen dioxide to form particles that would then act as seeds.
Effectiveness
Whether cloud seeding is effective in producing a statistically significant increase in precipitation is a matter of academic debate: results contrast from study to study, and expert opinion is divided.
A study conducted by the United States National Academy of Sciences failed to find statistically significant support for cloud seeding's effectiveness. Based on its findings, Stanford University ecologist Jerry Bradley said: "I think you can squeeze out a little more snow or rain in some places under some conditions, but that's quite different from a program claiming to reliably increase precipitation." Data similar to that of the NAS study was acquired in a separate study conducted by the Wyoming Weather Modification Pilot Project, but whereas the NAS study concluded that "it is difficult to show clearly that cloud seeding has a very large effect", the WWMPP study concluded that "seeding could augment the snowpack by a maximum of 3% over an entire season."
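A toy calculation shows why modest seeding effects are so hard to confirm statistically. The numbers below are hypothetical, chosen only to illustrate the problem, and are not drawn from either study: a true 3% increase is buried in the large natural variability of precipitation, so a randomized experiment of realistic size frequently fails to reach significance.

import random

random.seed(1)

# Hypothetical seasonal precipitation totals (arbitrary units):
# unseeded mean 100, seeded mean 103 (a true +3% effect), large natural noise.
control = [random.gauss(100, 30) for _ in range(40)]
seeded = [random.gauss(103, 30) for _ in range(40)]

observed = sum(seeded) / len(seeded) - sum(control) / len(control)

# Permutation test: shuffle the seeded/unseeded labels and count how often
# a mean difference at least as large arises from the labeling alone.
pooled = control + seeded
hits, trials = 0, 10000
for _ in range(trials):
    random.shuffle(pooled)
    if sum(pooled[40:]) / 40 - sum(pooled[:40]) / 40 >= observed:
        hits += 1

print(f"observed difference {observed:.2f}, permutation p-value {hits / trials:.3f}")

With noise of this magnitude, p-values well above 0.05 are common even though the simulated effect is real, which is consistent with a small seasonal augmentation being "difficult to show clearly".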
In 2003, the US National Research Council (NRC) released a report stating, "science is unable to say with assurance which, if any, seeding techniques produce positive effects. In the 55 years following the first cloud-seeding demonstrations, substantial progress has been made in understanding the natural processes that account for our daily weather. Yet scientifically acceptable proof for significant seeding effects has not been achieved".
A 2010 Tel Aviv University study claimed that the common practice of cloud seeding to improve rainfall, with materials such as silver iodide and frozen carbon dioxide, seems to have little if any impact on the amount of precipitation. A 2011 study suggested that airplanes may produce ice particles by freezing cloud droplets that cool as they flow around the tips of propellers, over wings or over jet aircraft, and thereby unintentionally seed clouds. This unintentional seeding could have potentially serious consequences, in particular for hailstone formation.
In 2016, Jeff Tilley, director of weather modification at the Desert Research Institute in Reno, claimed that new technology and research has produced reliable results that make cloud seeding a dependable and affordable water supply practice for many regions. Moreover, in 1998 the American Meteorological Society held that "precipitation from supercooled orographic clouds (clouds that develop over mountains) has been seasonally increased by about 10%."
Despite the mixed scientific results, cloud seeding was attempted during the 2008 Summer Olympics in Beijing to coax rain showers out of clouds before they reached the city in order to prevent rain during the opening and closing ceremonies. Whether this attempt was successful is a matter of dispute, with Roelof Bruintjes, who leads the National Center for Atmospheric Research's weather-modification group, remarking, "we cannot make clouds or chase clouds away".
Impact on environment and health
With an NFPA 704 health hazard rating of 2, silver iodide can cause temporary incapacitation or possible residual injury to humans and other mammals with intense or chronic exposure. But several detailed ecological studies have shown negligible environmental and health impacts. The toxicity of silver and silver compounds (from silver iodide) was shown to be of low order in some studies. These findings likely reflect the minute amounts of silver generated by cloud seeding, which amount to about one percent of industry emissions into the atmosphere in many parts of the world, or roughly an individual's exposure from tooth fillings.
Accumulations in the soil, vegetation, and surface runoff have not been large enough to measure above natural background. A 1995 environmental assessment in the Sierra Nevada of California and a 2004 independent panel of experts in Australia confirmed these earlier findings.
"In 1978, an estimated 3,000 tonnes of silver were released into the US environment. This led the US Health Services and EPA to conduct studies regarding the potential for environmental and human health hazards related to silver. These agencies and other state agencies applied the Clean Water Act of 1977 and 1987 to establish regulations on this type of pollution."
Cloud seeding over Kosciuszko National Park, a biosphere reserve, is problematic in that several rapid changes of environmental legislation were made to enable the trial. Environmentalists are concerned about the uptake of elemental silver in a highly sensitive environment affecting the pygmy possum, among other species, as well as recent high-level algal blooms in once pristine glacial lakes. Research 50 years ago and analysis by the former Snowy Mountains Authority led to the cessation of the cloud seeding program in the 1950s with non-definitive results. Formerly, cloud seeding was rejected in Australia on environmental grounds because of concerns about the pygmy possum. The claims of negative environmental impact are disputed by peer-reviewed research, as summarized by the International Weather Modification Association.
History
In 1891, Louis Gathmann suggested shooting liquid carbon dioxide into rain clouds to cause them to rain. During the 1930s, the Bergeron–Findeisen process theorized that when supercooled water droplets and ice crystals coexist in rain clouds, the droplets give up their water to the growing crystals, producing rain. While researching aircraft icing, General Electric (GE)'s Vincent Schaefer and Irving Langmuir confirmed the theory. Schaefer discovered the principle of cloud seeding in July 1946 through a series of serendipitous events. Following ideas he and Langmuir generated while climbing Mt. Washington in New Hampshire, Schaefer, Langmuir's research associate, created a way of experimenting with supercooled clouds in a deep freeze unit, testing potential agents to stimulate ice crystal growth (table salt, talcum powder, soils, dust, and various chemical agents), with minor effect. Then, on July 14, 1946, he wanted to try a few experiments at GE's Schenectady Research Lab.
He was dismayed to find that the deep freezer was not cold enough to produce a "cloud" from his exhaled breath. He decided to move the process along by adding a chunk of dry ice just to lower the temperature of his experimental chamber. To his astonishment, as soon as he breathed into the deep freezer, he noted a bluish haze, followed by an eye-popping display of millions of microscopic ice crystals, reflecting the strong light rays from the lamp illuminating a cross-section of the chamber. He instantly realized that he had discovered a way to change supercooled water into ice crystals. The experiment was easily replicated, and he explored the temperature gradient to establish the limit for liquid water.
Within the month, Schaefer's colleague, the atmospheric scientist Bernard Vonnegut, was credited with discovering another method for "seeding" supercooled cloud water. Vonnegut made his discovery at his desk, looking up information in a basic chemistry text and then tinkering with silver and iodide chemicals to produce silver iodide. Together with Henry Chessin, a crystallographer at SUNY Albany, he co-authored a publication in Science and received a patent in 1975. Both methods were adopted for use in cloud seeding during 1946 while working for GE in the state of New York.
Schaefer's method altered a cloud's heat budget; Vonnegut's altered formative crystal structure, an ingenious property related to a good match in lattice constant between the two types of crystal. (The crystallography of ice later played a role in Vonnegut's brother Kurt Vonnegut's novel Cat's Cradle). The first attempt to modify natural clouds in the field through "cloud seeding" came during a flight from upstate New York on 13 November 1946. Schaefer was able to cause snow to fall near Mount Greylock in western Massachusetts after he dumped dry ice into the target cloud from the plane following an easterly chase from the Schenectady County Airport.
Dry ice and silver iodide agents are effective in changing super-cooled clouds' physical chemistry, and thus useful in augmenting winter snowfall over mountains and, under certain conditions, in lightning and hail suppression. While not a new technique, hygroscopic seeding for enhancement of rainfall in warm clouds is enjoying a revival, based on positive indications from research in South Africa, Mexico, and elsewhere. The hygroscopic material most commonly used is table salt. It is postulated that hygroscopic seeding causes the droplet size spectrum in clouds to become more maritime (bigger drops) and less continental, stimulating rainfall through coalescence. From 1967 to 1972, the U.S. military's Operation Popeye cloud-seeded silver iodide to extend the monsoon season over North Vietnam, specifically the Ho Chi Minh Trail. The operation extended the monsoon period an average of 30 to 45 days in the targeted areas. The 54th Weather Reconnaissance Squadron carried out the operation to "make mud, not war".
One private organization that offered, during the 1970s, to conduct weather modification (cloud seeding from the ground using silver iodide flares) was Irving P. Krick and Associates of Palm Springs, California. They were contracted by Oklahoma State University in 1972 to conduct a seeding project to increase warm cloud rainfall in the Lake Carl Blackwell watershed. That lake was, at that time (1972–73), the primary water supply for Stillwater, Oklahoma, and was dangerously low. The project did not operate for a long enough time to show statistically any change from natural variations.
An attempt by the U.S. military to modify hurricanes in the Atlantic basin using cloud seeding in the 1960s was called Project STORMFURY. Scientists tested four hurricanes across eight days and observed decreased wind speeds of 10% to 30% on four of these days. They originally attributed the lack of results to faulty execution, but the results came into question because of the lack of supercooled water in the hurricane and the inability to determine if the effects were due to human intervention or the natural processes of hurricanes.
Two federal agencies have supported various weather modification research projects, which began in the early 1960s: The United States Bureau of Reclamation (Reclamation; Department of the Interior) and the National Oceanic and Atmospheric Administration (NOAA; Department of Commerce). Reclamation sponsored several cloud-seeding research projects under the umbrella of Project Skywater from 1964 to 1988, and NOAA conducted the Atmospheric Modification Program from 1979 to 1993. The sponsored projects were carried out in several states and two countries (Thailand and Morocco), studying both winter and summer cloud seeding. From 1962 to 1988 Reclamation developed cloud seeding applied research to augment water supplies in the western U.S. The research focused on winter orographic seeding to enhance snowfall in the Rocky Mountains and Sierra Nevada, and precipitation in coast ranges of southern California. In California, Reclamation partnered with the California Department of Water Resources (CDWR) to sponsor the Sierra Cooperative Pilot Project (SCPP), based in Auburn, to conduct seeding experiments in the central Sierra Nevada. The University of Nevada and Desert Research Institute provided cloud physics, physical chemistry, and other field support. The High Plains Cooperative Pilot Project (HIPLEX) focused on convective cloud seeding to increase rainfall during the growing season in Montana, Kansas, and Texas from 1974 to 1979.
In 1979, the World Meteorological Organization and other member-states led by the Government of Spain conducted a Precipitation Enhancement Project (PEP) in Spain, with inconclusive results due probably to location selection issues.
Reclamation sponsored research at several universities, including Colorado State University, the Universities of Wyoming, Washington, UCLA, Utah, Chicago, NYU, Montana, and Colorado, and research teams at Stanford, Meteorology Research Inc., and Penn State University, and the South Dakota School of Mines and Technology, North Dakota, Texas A&M, Texas Tech, and Oklahoma. Cooperative efforts with state water resources agencies in California, Colorado, Montana, Kansas, Oklahoma, Texas, and Arizona assured that the applied research met state water management needs. HIPLEX also partnered with NASA, Environment Canada, and the National Center for Atmospheric Research (NCAR). From 2002 to 2006, in cooperation with six western states, Reclamation sponsored a small cooperative research program called the Weather Damage Modification Program.
In the U.S., funding for research has declined in the last two decades. But the Bureau of Reclamation sponsored a six-state research program from 2002 to 2006 called the "Weather Damage Modification Program". A 2003 study by the United States National Academy of Sciences urges a national research program to clear up remaining questions about weather modification's efficacy and practice.
In Australia, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) conducted major trials between 1947 and the early-1960s:
1947–1952: CSIRO scientists dropped dry ice into the tops of cumulus clouds. The method worked reliably with clouds that were very cold, producing rain that would not have otherwise fallen.
1953–1956: CSIRO carried out similar trials in South Australia, Queensland and other states. Experiments used both ground-based and airborne silver iodide generators.
Late 1950s and early 1960s: Cloud seeding in the Snowy Mountains, on the Cape York Peninsula in Queensland, in the New England District of New South Wales, and in the Warragamba catchment area west of Sydney.
Only the trial conducted in the Snowy Mountains produced statistically significant rainfall increases over the entire experiment.
Hydro Tasmania (at the time still known as the Hydro Electric Commission) began experimenting with cloud-seeding over lake catchments in central Tasmania in the early 1960s in order to determine if their electricity-producing dams could be kept at high water levels through cloud seeding. Tasmania proved to be one place where cloud seeding was highly effective. Various trials were undertaken between 1964 and 2005, and again between 2009 and 2016, but none have taken place since then. Hydro Tasmania also undertook soil and water survey samples and found negligible trace elements of the materials used for cloud seeding (such as silver iodine), and determined it did not have a detrimental effect on the environment.
An Austrian study to use silver iodide seeding for hail prevention ran during 1981–2000, and the technique is still actively deployed there.
Asia
China
The largest cloud seeding system is in the People's Republic of China. They believe that it increases the amount of rain over several increasingly arid regions, including its capital city, Beijing, by firing silver iodide rockets into the sky where rain is desired. There is even political strife caused by neighboring regions that accuse each other of "stealing rain" using cloud seeding. China used cloud seeding in Beijing just before the 2008 Olympic Games in order to have a dry Olympic season. In February 2009, China also blasted iodide sticks over Beijing to artificially induce snowfall after four months of drought, and blasted iodide sticks over other areas of northern China to increase snowfall. The snowfall in Beijing lasted for approximately three days and led to the closure of 12 main roads around Beijing. At the end of October 2009 Beijing claimed it had its earliest snowfall since 1987 due to cloud seeding. According to a research paper from Tsinghua University, the Chinese weather authorities used weather modification to ensure a clear sky and lower air pollution on July 1, 2021, when the Chinese Communist Party celebrated its centenary with a major event in Tiananmen Square. The paper was published on November 26, 2021, in a peer-reviewed journal, Environmental Science (as reported by the South China Morning Post). The research shows that the government used cloud-seeding techniques to force rainfall the evening before the celebration; this rainfall lowered PM2.5 pollution by more than two-thirds, improving the air quality at the time from "moderate" to "good".
India
In India, cloud seeding operations were conducted during the years 1983, 1984–87, and 1993–94 by the Tamil Nadu Government due to severe drought. In the years 2003 and 2004 Karnataka government initiated cloud seeding. Cloud seeding operations were also conducted in the same year through US-based Weather Modification Inc. in the state of Maharashtra.
The company Srishti Aviation operates two Cessna 340 aircraft for its cloud seeding operations. From the ground radar facility, a meteorologist assigns the pilots a cloud with a high concentration of supercooled liquid water. The aircraft then flies at the altitude where temperatures are about −5 °C, the altitude at which the seeding agent is most active.
Pakistan
Pakistan has undergone its first-ever artificial rain experiment using cloud seeding, carried out with the help of the United Arab Emirates. Following the operation, there was drizzle in "at least 10 areas" of Lahore, which is consistently ranked among the most polluted cities in the world.
On Friday, November 15, 2024, Pakistan successfully conducted a cloud seeding operation using locally developed technology, resulting in artificial rainfall to tackle the region’s smog crisis. The Meteorological Department confirmed rainfall in Jhelum, Gujar Khan, Chakwal, and Talagang, attributing it to the cloud seeding effort. Cloud seeding was conducted around 2 p.m., with subsequent rainfall observed within hours in Jhelum and Gujar Khan.
Indonesia
In Jakarta, cloud seeding was used to minimize flood risk in anticipation of heavy floods in 2013, according to the Agency for the Assessment and Application of Technology.
Iran
In 1946, the Iranian government tried to seed Iran's clouds with American help, but it was unsuccessful. Then in 1947, Article 19 of the Law on Water and its Nationalization obliged the then Ministry of Water and Electricity to provide the water needed by the country in various ways, including cloud seeding. Accordingly, between 1953 and 1957 the Ministry of Energy, in cooperation with a Canadian company, used aircraft and a silver iodide compound to seed the clouds over the Karaj and Jajrud dam areas.
After the revolution of 1978–79, cloud seeding was carried out intermittently from 1989 to 1995 using ground generators in the heights of Shirkuh, Yazd. Then, following an announcement by the Minister of Energy in February 1996, the National Center for Cloud Seeding Research and Studies was established in Yazd and officially started work in 1997.
Israel
Israel conducted experimental cloud seeding for seven years from 2014 to 2021. The practice involved emitting silver iodide from airplanes and ground stations and took place only in the northern parts of the country. Israel stopped the rain enhancement project in 2021 due to the experiment data showing that the practice was largely ineffective and expensive, and because there had been some recent years of unrelated significant rainfall.
Kuwait
To counter drought and a growing population in a desert region, Kuwait is embarking on its own cloud seeding program, with the local Environment Public Authority conducting a study to gauge its viability locally.
United Arab Emirates
Southeast Asia
In Southeast Asia, open-burning haze pollutes the regional environment. Cloud seeding has been used to improve the air quality by encouraging rainfall. On 20 June 2013, Indonesia said it will begin cloud-seeding operations following reports from Singapore and Malaysia that smog caused by forest and bush fires in Sumatra have disrupted daily activities in the neighboring countries. On 25 June 2013, hailstones were reported to have fallen over some parts of Singapore. Despite NEA denials, some believe that the hailstones are the result of cloud seeding in Indonesia.
Malaysia
In Malaysia, cloud seeding was first used in 1988 for three purposes: filling up dams, lessening the effects of haze, and fighting forest fires. In 2015, cloud seeding was carried out daily after the haze began in early August. The Johor water regulatory body focused on producing rain over dams with critically low water levels, using a Cessna 340 carrying tubes of iodised salt; the operations were based at Senai Airport (WMKJ).
Thailand
The Thailand Royal Rainmaking Project was initiated in November 1955 by King Bhumibol Adulyadej. Thai farmers repeatedly suffered the effects of drought. The king resolved to do something about it and proposed a solution to the dearth of rain: artificial rainmaking, or cloud seeding. The program is run by the Department of Royal Rainmaking and Agricultural Aviation. Thailand started a rain-making project in the late-1950s, known today as the Royal Rainmaking Project. Its first efforts scattered sea salt in the air to catch the humidity and dry ice to condense the humidity to form clouds. The project took about ten years of experiments and refinement. The first field operations began in 1969 above Khao Yai National Park. Since then the Thai government claims that rainmaking has been successfully applied throughout Thailand and neighboring countries. The king received recognition for the Royal Rainmaking Project from the Eureka organization in 2001 for an invention that is beneficial to the world. In 2009, Jordan received permission from Thailand to use the technique. On 12 October 2005 the European Patent Office granted to King Bhumibol Adulyadej the patent EP 1 491 088 Weather modification by royal rainmaking technology. The budget of the Department of Royal Rainmaking and Agricultural Aviation in FY2019 was 2,224 million baht.
Vietnam
In Vietnam, during the leadup to and initial stages of the Battle of Điện Biên Phủ in 1954, the French Far East Expeditionary Corps looked into the possibility of using cloud seeding to impede Việt Minh flow of supplies through Route Provinciale 41, a dirt road leading into Điện Biên Phủ that would become more difficult to navigate during the rainy season. General Henri Navarre authorized research into using cloud seeding this way on March 16, just before the outbreak of the battle, and a test commenced the following month. Results were disappointing; while it did not take long for the rain clouds to form and release precipitation, they often drifted away from Route Provinciale 41 in the process, reducing their ability to hinder Việt Minh logistics.
Sri Lanka
In January 2019, the Sri Lanka Air Force (SLAF) and Ministry of Power, Energy and Business Development signed an agreement for a cloud seeding project, created in response to lower water levels for hydroelectric power due to dry weather. In February of that year, a group of officials from the SLAF, Ceylon Electricity Board, and meteorology and irrigation departments were sent to Thailand to study rainmaking projects carried out by the Department of Royal Rainmaking and Agricultural Aviation. On March 22, a Harbin Y-12 flew over the Maskeliya Reservoir at 8000 feet, dispersing cloud seeding chemicals. Rain arrived the next day on March 23, though project members had expected it to appear earlier on the 22nd. News First proclaimed that the pilot project had "proven to be a success", while Mongabay described it as a "failed attempt" that had "fallen short" and highlighted various climate experts who recommended that the government conduct more research into the project's potential environmental effects before proceeding further.
North America
United States
In the United States, cloud seeding is used to increase precipitation in areas experiencing drought, to reduce the size of hailstones that form in thunderstorms, and to reduce the amount of fog in and around airports. In the summer of 1948, the usually humid city of Alexandria, Louisiana, under Mayor Carl B. Close, seeded a cloud with dry ice at the municipal airport during a drought; rain quickly fell.
Major ski resorts occasionally use cloud seeding to induce snowfall. Eleven western states and one Canadian province (Alberta) had ongoing weather modification operational programs in 2012. In 2006, an $8.8 million project began in Wyoming to examine cloud seeding's effects on snowfall over Wyoming's Medicine Bow, Sierra Madre, and Wind River mountain ranges.
In Oregon, Portland General Electric (PGE) used cloud seeding over the Hood River area to produce snow for hydro power in 1974–75. The results were substantial, but the seeding placed an undue burden on locals, who experienced overpowering rainfall that caused street collapses and mudslides. PGE discontinued its seeding practices the next year.
In 1978, the U.S. signed the Environmental Modification Convention, which bans the use of weather modification for hostile purposes.
As of 2022, seven agencies in California are conducting cloud seeding operations using silver iodide, including the Sacramento Municipal Utility District, which began employing the technique in 1969 to increase the water supply to its hydroelectric power plants, and reported that it results in "an average of 3 to 10% increase in [Sierra Nevada] snowpack".
Canada
During the sixties, Irving P. Krick & Associates operated a successful cloud seeding operation in the area around Calgary, Alberta. This utilized both aircraft and ground-based generators that pumped silver iodide into the atmosphere in an attempt to reduce the threat of hail damage. Ralph Langeman, Lynn Garrison, and Stan McLeod, all ex-members of the RCAF's 403 Squadron, attending the University of Alberta, spent their summers flying hail suppression. The Alberta Hail Suppression Project is continuing with C$3 million a year in funding from insurance companies to reduce hail damage in southern Alberta.
Europe
Bulgaria
Bulgaria operates a national hail-protection network of silver iodide rocket sites, strategically located in agricultural areas such as the Rose Valley. Each site protects an area of about 10 sq km, and the sites are clustered densely enough that at least two sites can target a single hail cloud. From initial detection of hail cloud formation to firing of the rockets typically takes 7–10 minutes; the aim is to seed the formation of much smaller hailstones, high in the atmosphere, that will melt before reaching ground level.
Data collated since the 1960s suggest that huge agricultural losses are avoided yearly by the protection system: unseeded, hail can flatten entire regions, whereas with seeding the damage can be reduced to minor leaf damage from the smaller hailstones that fail to melt.
France and Spain
Cloud seeding began in France during the 1950s with the intent of reducing hail damage to crops. The ANELFA project consists of local agencies acting within a non-profit organization. A similar project in Spain is managed by the Consorcio por la Lucha Antigranizo de Aragon. The success of the French program was supported by analysis by Jean Dessens based on insurance data, and that of the Spanish program by studies conducted by the Spanish Agricultural Ministry. However, Dessens's results were heavily criticized, and doubt was cast on the effectiveness of ground-generator seeding.
Russia
The Soviet Union created a specially designed version of the Antonov An-30 aerial survey aircraft, the An-30M Sky Cleaner, with eight containers of solid carbon dioxide in the cargo area plus external pods containing meteorological cartridges that could be fired into clouds. The Russian government spends millions on cloud seeding technology to ensure it does not rain on the May Day public holiday: a single contractor is hired to disperse clouds and force them to rain before they naturally would, dropping the rain at other places and times so that it does not affect specific events. In 2020, Russia used cloud seeding to help fight Siberian forest fires. The Russian government has also used cloud seeding technology in Crimea to alleviate drought caused by Ukraine's blocking of the Crimean Canal.
Russia published a review of the weather electromagnetic correction technology in 2015, which was developed in the 1980s in the USSR and shares technology with India.
Germany
In Germany civic engagement societies organize cloud seeding on a region level. A registered society maintains aircraft for cloud seeding to protect agricultural areas from hail in the district Rosenheim, the district Miesbach, the district Traunstein (all located in southern Bavaria, Germany) and the district Kufstein (located in Tyrol, Austria).
Cloud seeding is also used in Baden-Württemberg, a federal state particularly known for its winegrowing culture. The districts of Ludwigsburg, Heilbronn, Schwarzwald-Baar and Rems-Murr, as well as the cities Stuttgart and Esslingen, participate in a program to prevent the formation of hailstones. Reports from a local insurance agency suggest that the cloud seeding activities in the Stuttgart area prevented about 5 million euros in damages in 2015, while the project's annual upkeep costs only 325,000 euros. Another society for cloud seeding operates in the district of Villingen-Schwenningen.
Austria
Austria has two hail defense organizations: Steirische Hagel Abwehr, with four aircraft (Cessna 182), and Südflug, with three aircraft (Cessna 150L, Cessna 182P, Partenavia P.68), based at Graz Airport.
Slovenia
Slovenia's oldest aeroclub, Letalski center Maribor, carries out aerial defense against hail as a specialized EASA operation. Its Cessna TU206G Turbo Stationair 6 II is equipped with external generator units and flares. Three crew members work each operation: two pilots in the plane and a radar operator on the ground. The purpose of the defense (aircraft hail suppression) is to prevent damage to farmland and towns in the areas of Styria and Prekmurje. Defense flights have been carried out since 1983, with silver iodide used as the reagent. The base is at Maribor Edvard Rusjan Airport. The activity is financed by local communities and the Ministry of Agriculture, and it enjoys strong support among people and farmers across the countryside.
United Kingdom
Project Cumulus was a UK government initiative to investigate weather manipulation, in particular through cloud seeding experiments, operational between 1949 and 1952. A conspiracy theory has circulated that the Lynmouth flood of 1952 was caused by secret cloud seeding experiments carried out by the Royal Air Force. However, meteorologist Philip Eden has given several reasons why "it is preposterous to blame the Lynmouth flood on such experiments".
Australia
In Australia, summer activities of CSIRO and Hydro Tasmania over central and western Tasmania between the 1960s and the present day appear to have been successful. Seeding over the Hydro-Electricity Commission catchment area on the Central Plateau achieved rainfall increases as high as 30 percent in autumn. The Tasmanian experiments were so successful that the Commission has regularly undertaken seeding ever since in mountainous parts of the State.
In 2004, Snowy Hydro Limited began a trial of cloud seeding to assess the feasibility of increasing snow precipitation in the Snowy Mountains in Australia. The test period, originally scheduled to end in 2009, was later extended to 2014. The New South Wales (NSW) Natural Resources Commission, responsible for supervising the cloud seeding operations, believes that the trial may have difficulty establishing statistically whether cloud seeding operations are increasing snowfall. This project was discussed at a summit in Narrabri, NSW on 1 December 2006. The summit met with the intention of outlining a proposal for a 5-year trial, focusing on Northern NSW.
The various implications of such a widespread trial were discussed, drawing on the combined knowledge of several worldwide experts, including representatives from the Tasmanian Hydro cloud seeding project; however, the proposal made no reference to the former cloud seeding experiments of the then Snowy Mountains Authority, which had rejected weather modification. The trial required changes to NSW environmental legislation in order to facilitate placement of the cloud seeding apparatus. The modern experiment is not supported for the Australian Alps.
In December 2006, the Queensland government of Australia announced $7.6 million in funding for "warm cloud" seeding research to be conducted jointly by the Australian Bureau of Meteorology and the United States National Center for Atmospheric Research. It is hoped that the outcomes of the study will ease continuing drought conditions in the state's South East region.
In March 2020, scientists from the Sydney Institute of Marine Science and Southern Cross University trialled marine cloud seeding off the coast of Queensland, Australia, with the aim to protect Great Barrier Reef from coral bleaching and dieoff during marine heatwaves. Using two high-pressure turbines, the team sprayed microscopic droplets of saltwater into the air. These then evaporate leaving behind very small salt crystals, which water vapour clings to, creating clouds that reflect the sun more effectively.
Africa
In Mali and Niger, cloud seeding is also used on a national scale. In 1985 the Moroccan government started a cloud seeding program called 'Al-Ghait'. The system was first used in Morocco in 1999; it was also used between 1999 and 2002 in Burkina Faso, and from 2005 in Senegal. Cloud seeding experiments and operations were also conducted in Rhodesia (now Zimbabwe) between 1968 and 1980.
Legal frameworks and implications
Existing international legislation
The Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques (ENMOD) is the only international framework related to the regulation of weather and climate modification technologies. Developed after cloud-seeding operations were conducted during the Vietnam War and the Cold War, the convention's scope of application solely encompasses military or any other hostile uses of weather modification technologies. Indeed, the use of weather modification programs for peaceful purposes is not prohibited by the treaty. ENMOD has been criticised for its many weaknesses, notably regarding the vagueness and ambiguity of notions leaving room for various interpretations.
Ownership of clouds
Given the growing attractiveness of weather modification programs, the legal framework offered by ENMOD is arguably insufficient, as the question of "ownership" is not answered. A 1948 article in the Stanford Law Review stated that attributing a "legal title to a cloud would be ridiculous" due to the distinct nature of clouds, their perpetual change of form and location, their emergence, disappearance and renewal. Similarly, Brooks considers private ownership of clouds as "nonsense" as control is limited to the short moment of the cloud being above somebody's land. Quilleré-Majzoub dismisses the concept of ownership of clouds altogether, given their specific nature, rendering the idea of a cloud ever belonging to somebody unsubstantiated. Indeed, clouds are beyond occupancy – similar to air, running water, the sea and animals ferae naturae – and should thus be considered as common property. Based on this assumption, it follows that a distinction between res communis, belonging to everybody and thus necessitating international regulation, and res nullius, belonging to nobody with states serving themselves as they please, is more suitable. Although water is generally considered as res nullius in international law, there is strong pressure to acknowledge it as res communis, but cloud moisture does not currently have a clearly defined status. It is thus suggested that international law should elaborate a jurisdictional regime that takes into account both the particular nature of clouds and the implications of new technologies.
The picture changes once the moisture in the clouds is made accessible through artificial rainmaking technologies as the rainwater can now be occupied. Typically, regarding naturally occurring precipitation, the first to reduce it to possession, normally the landowner, will gain rights in it as long as no existing rights are violated. Given that this benefit is accorded by nature, these natural rights should not allow the landowner to claim artificially induced rain. California legislation binds water generated through seeding to existing surface water rights and groundwater regulations, considering the produced water "natural supply". Yet courts could decide that the induced rain should be designated as "additional precipitation", permitting the cloud-seeding entity to claim a portion of this generated water. This approach would also face challenges, given the difficulty of determining the fraction of extra water procured by cloud seeding.
Conspiracy theories
Cloud seeding has been the focus of many theories based on the belief that governments manipulate the weather in order to control various conditions, including global warming, populations, military weapons testing, public health, and flooding. This speculation has been fueled in part by government interventions and programs like Operation Popeye.
A 2016 classified ad placed by Los Angeles County's Department of Public Works in the Pasadena Star News sparked claims that widespread weather modification was being confirmed. The department followed up with a clarification that it was only describing cloud seeding, used as an anti-drought measure intermittently for more than half a century in Los Angeles.
See also
Hail cannon
2015 Southeast Asian haze
Anthropogenic cloud
Atmospheric moisture extraction
Bioprecipitation
Cloudbuster – a pseudoscientific device claimed to create clouds and rain via energy manipulation
Fog collection
Marine cloud brightening
The Avengers – A Surfeit of H2O
References
Notes
Bibliography
Schaefer, Vincent J. Serendipity in Science: My Twenty Years at Langmuir University 2013 Compiled and Edited by Don Rittner. Square Circle Press, Voorheesville, NY
Note: Chapter Six "The War Ends as I Discover Cloud Seeding" Schaefer discusses the conversations with Langmuir while climbing Mount Washington (pp. 118–119) and then describes the event "My Discovery of Dry Ice Seeding" on pp. 128–129. References by his son, James M Schaefer, Ph.D.
External links
Rainmaking in China
Nevada State Cloud Seeding Program
European patent EP 1 491 088 Weather modification by royal rainmaking technology
Weather modification | Cloud seeding | Engineering | 8,565 |
1,899,230 | https://en.wikipedia.org/wiki/Granulocyte-macrophage%20colony-stimulating%20factor | Granulocyte-macrophage colony-stimulating factor (GM-CSF), also known as colony-stimulating factor 2 (CSF2), is a monomeric glycoprotein secreted by macrophages, T cells, mast cells, natural killer cells, endothelial cells and fibroblasts that functions as a cytokine. The pharmaceutical analogs of naturally occurring GM-CSF are called sargramostim and molgramostim.
Unlike granulocyte colony-stimulating factor, which specifically promotes neutrophil proliferation and maturation, GM-CSF affects more cell types, especially macrophages and eosinophils.
Function
GM-CSF is a monomeric glycoprotein that functions as a cytokine—it is a white blood cell growth factor. GM-CSF stimulates stem cells to produce granulocytes (neutrophils, eosinophils, and basophils) and monocytes. Monocytes exit the circulation and migrate into tissue, whereupon they mature into macrophages and dendritic cells. Thus, it is part of the immune/inflammatory cascade, by which activation of a small number of macrophages can rapidly lead to an increase in their numbers, a process crucial for fighting infection.
GM-CSF also has some effects on mature cells of the immune system. These include, for example, enhancing neutrophil migration and causing an alteration of the receptors expressed on the cell surface.
GM-CSF signals via signal transducer and activator of transcription, STAT5. In macrophages, it has also been shown to signal via STAT3. The cytokine activates macrophages to inhibit fungal survival. It induces deprivation in intracellular free zinc and increases production of reactive oxygen species that culminate in fungal zinc starvation and toxicity. Thus, GM-CSF facilitates development of the immune system and promotes defense against infections.
GM-CSF also plays a role in embryonic development by functioning as an embryokine produced by reproductive tract.
Genetics
The human gene has been localized in close proximity to the interleukin 3 gene within a T helper type 2-associated cytokine gene cluster at chromosome region 5q31, which is known to be associated with interstitial deletions in the 5q- syndrome and acute myelogenous leukemia. GM-CSF and IL-3 are separated by an insulator element and thus independently regulated. Other genes in the cluster include those encoding interleukins 4, 5, and 13.
Glycosylation
Human granulocyte-macrophage colony-stimulating factor is glycosylated in its mature form.
History
GM-CSF was first cloned in 1985, and soon afterwards three potential drug products were being made using recombinant DNA technology: molgramostim was made in Escherichia coli and is not glycosylated, sargramostim was made in yeast, has a leucine instead of proline at position 23 and is somewhat glycosylated, and regramostim was made in Chinese hamster ovary cells (CHO) and has more glycosylation than sargramostim. The amount of glycosylation affects how the body interacts with the drug and how the drug interacts with the body.
At that time, Genetics Institute, Inc. was working on molgramostim, Immunex was working on sargramostim (Leukine), and Sandoz was working on regramostim.
Molgramostim was eventually co-developed and co-marketed by Novartis and Schering-Plough under the trade name Leucomax for use in helping white blood cell levels recover following chemotherapy, and in 2002 Novartis sold its rights to Schering-Plough.
Sargramostim was approved by the US FDA in 1991 to accelerate white blood cell recovery following autologous bone marrow transplantation under the trade name Leukine, and passed through several hands, ending up with Genzyme, which was subsequently acquired by Sanofi. Leukine is now owned by Partner Therapeutics (PTx).
Imlygic was approved by the US FDA in October 2015, and in December 2015 by the EMA, as an oncolytic virotherapy, commercialized by Amgen Inc. This oncolytic herpes virus, named Talimogene laherparepvec, has been genetically engineered to express human GM-CSF using the tumor cells' machinery.
Clinical significance
GM-CSF is found in high levels in joints with rheumatoid arthritis and blocking GM-CSF as a biological target may reduce the inflammation or damage. Some drugs (e.g. otilimab) are being developed to block GM-CSF. In critically ill patients GM-CSF has been trialled as a therapy for the immunosuppression of critical illness, and has shown promise restoring monocyte and neutrophil function, although the impact on patient outcomes is currently unclear and awaits larger studies.
GM-CSF stimulates monocytes and macrophages to produce pro-inflammatory cytokines, including CCL17. Elevated GM-CSF has been shown to contribute to inflammation in inflammatory arthritis, osteoarthritis, colitis, asthma, obesity, and COVID-19.
Clinical trials
Monoclonal antibodies against GM-CSF are being used as treatment in clinical trials against rheumatoid arthritis, ankylosing spondylitis, and COVID-19.
See also
CFU-GM
Filgrastim (Neupogen, a granulocyte colony-stimulating factor (G-CSF) analog)
Granulocyte-macrophage colony-stimulating factor receptor
Lenzilumab
Pegfilgrastim (Neulasta, a PEGylated form of filgrastim)
References
External links
Official Leukine web site
Cytokines
Drugs acting on the blood and blood forming organs
Growth factors | Granulocyte-macrophage colony-stimulating factor | Chemistry | 1,280 |
19,357,693 | https://en.wikipedia.org/wiki/Ericsson%20T39 | Ericsson T39 was a GSM mobile phone released by Ericsson Mobile Communications in 2001, it was the follow-up to the T28 and T29.
The prototype was unveiled in 2000, making the T39 the second phone with built-in Bluetooth; the first was the Ericsson R520M.
Specifications
GSM Tri-band (900/1800/1900)
Form Factor: Flip
Weight: 86 g
Screen: 101 × 54 px Monochrome LCD
Bluetooth (1.0b)
IrDA
SMS, E-mail, WAP 1.2.1, GPRS, HSCSD
The T39 came in three different colour options; Classic Blue which was a dark blue, Icecap Blue which was a light blue, and Rose White which was a cream colour. The T39 was the last phone from Ericsson with an active flip and external antenna.
References
External links
Ericsson T39 full specification at GSMarena
T39
T39
Mobile phones introduced in 2000 | Ericsson T39 | Technology | 203 |
494,669 | https://en.wikipedia.org/wiki/Free%20algebra | In mathematics, especially in the area of abstract algebra known as ring theory, a free algebra is the noncommutative analogue of a polynomial ring since its elements may be described as "polynomials" with non-commuting variables. Likewise, the polynomial ring may be regarded as a free commutative algebra.
Definition
For R a commutative ring, the free (associative, unital) algebra on n indeterminates {X1,...,Xn} is the free R-module with a basis consisting of all words over the alphabet {X1,...,Xn} (including the empty word, which is the unit of the free algebra). This R-module becomes an R-algebra by defining a multiplication as follows: the product of two basis elements is the concatenation of the corresponding words,

(Xi1⋯Xil) · (Xj1⋯Xjm) = Xi1⋯XilXj1⋯Xjm,

and the product of two arbitrary R-module elements is thus uniquely determined (because the multiplication in an R-algebra must be R-bilinear). This R-algebra is denoted R⟨X1,...,Xn⟩. This construction can easily be generalized to an arbitrary set X of indeterminates.
In short, for an arbitrary set X, the free (associative, unital) R-algebra on X is

$$R\langle X \rangle := \bigoplus_{w \in X^*} R w$$

with the R-bilinear multiplication that is concatenation on words, where X* denotes the free monoid on X (i.e. words on the letters Xi), $\bigoplus$ denotes the external direct sum, and Rw denotes the free R-module on 1 element, the word w.
For example, in R⟨X1,X2,X3,X4⟩, for scalars α, β, γ, δ ∈ R, a concrete example of a product of two elements is

$$(\alpha X_1 X_2 + \beta X_3) \cdot (\gamma X_2 + \delta X_1 X_4) = \alpha\gamma\, X_1 X_2 X_2 + \alpha\delta\, X_1 X_2 X_1 X_4 + \beta\gamma\, X_3 X_2 + \beta\delta\, X_3 X_1 X_4.$$
The non-commutative polynomial ring may be identified with the monoid ring over R of the free monoid of all finite words in the Xi.
Contrast with polynomials
Since the words over the alphabet {X1, ...,Xn} form a basis of R⟨X1,...,Xn⟩, it is clear that any element of R⟨X1, ...,Xn⟩ can be written uniquely in the form

$$\sum_{w \in \{X_1, \dots, X_n\}^*} a_w \, w,$$

where the coefficients $a_w$ are elements of R and all but finitely many of them are zero. This explains why the elements of R⟨X1,...,Xn⟩ are often denoted as "non-commutative polynomials" in the "variables" (or "indeterminates") X1,...,Xn; the elements $a_w$ are said to be "coefficients" of these polynomials, and the R-algebra R⟨X1,...,Xn⟩ is called the "non-commutative polynomial algebra over R in n indeterminates". Note that unlike in an actual polynomial ring, the variables do not commute. For example, X1X2 does not equal X2X1.
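This multiplication is simple to implement directly. The following minimal sketch (illustrative only, not from any standard library) represents an element of R⟨X1,...,Xn⟩ over R = ℤ as a Python dictionary mapping words (tuples of generator indices) to coefficients, with the product given by concatenation of words:

```python
# Minimal sketch: free-algebra elements over R = the integers, represented as
# dicts mapping words (tuples of generator indices) to nonzero coefficients.
# The empty tuple () is the empty word, i.e. the unit of the algebra.

def multiply(f, g):
    """Product of two free-algebra elements: concatenate basis words,
    multiply coefficients, and collect like words (R-bilinearity)."""
    product = {}
    for u, a in f.items():
        for v, b in g.items():
            w = u + v  # concatenation of words is the defining multiplication
            product[w] = product.get(w, 0) + a * b
    return {w: c for w, c in product.items() if c != 0}

f = {(1, 2): 2, (3,): 3}   # 2*X1X2 + 3*X3
g = {(2, 1): 1}            # X2X1
print(multiply(f, g))      # {(1, 2, 2, 1): 2, (3, 2, 1): 3}
print(multiply(f, g) == multiply(g, f))  # False: the variables do not commute
```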
More generally, one can construct the free algebra R⟨E⟩ on any set E of generators. Since rings may be regarded as Z-algebras, a free ring on E can be defined as the free algebra Z⟨E⟩.
Over a field, the free algebra on n indeterminates can be constructed as the tensor algebra on an n-dimensional vector space. For a more general coefficient ring, the same construction works if we take the free module on n generators.
The construction of the free algebra on E is functorial in nature and satisfies an appropriate universal property. The free algebra functor is left adjoint to the forgetful functor from the category of R-algebras to the category of sets.
Free algebras over division rings are free ideal rings.
See also
Cofree coalgebra
Tensor algebra
Free object
Noncommutative ring
Rational series
Term algebra
References
Algebras
Ring theory
Free algebraic structures | Free algebra | Mathematics | 812 |
699,123 | https://en.wikipedia.org/wiki/Marfa%20lights | The Marfa lights, also known as the Marfa ghost lights, are regularly observed near Marfa, Texas, in the United States. They are most often seen from a viewing area nearby, which the community has publicized to encourage tourism. Scientists observing the lights over the period 2000 to 2008 concluded that the lights were the result of automobile headlights being distorted by warm desert air.
Overview
According to Judith Brueske, the best place from which to view the lights is a widened shoulder on Highway 90 about nine miles east of Marfa. The lights are most often reported as distant spots of brightness, distinguishable from ranch lights and automobile headlights on Highway 67 (between Marfa and Presidio, to the south) primarily by their aberrant movements.
Robert and Judy Wagers define "Classic Marfa Lights" as being seen south-southwest of the Marfa Lights Viewing Center (MLVC). They define the left margin of the viewing area as being aligned along the Big Bend Telephone Company tower as viewed from the MLVC, and the right margin as Chinati Peak as viewed from the MLVC.
Referring to the Marfa Lights View Park east of Marfa, James Bunnell describes Marfa lights as "orbs of light", which change in intensity and color, which can move or remain stationary, splitting or merging. He describes the lights as being usually yellow-orange, but also occasionally other hues including green, blue, and red. He states that they usually fly above desert vegetation but below mesas in the background.
History
The first historical record of the Marfa lights was in 1883 when a young cowhand, Robert Reed Ellison, saw a flickering light while he was driving cattle through Paisano Pass and wondered if it was the campfire of the Apache. Other settlers told him they often saw the lights, but that when they investigated they found no ashes or other evidence of a campsite. Joe and Anne Humphreys next reported seeing the lights in 1885.
The first published account of the lights appeared in the July 1957 issue of Coronet magazine. In 1976 Elton Miles's Tales of the Big Bend included stories dating to the 19th century and a photograph of the Marfa lights by a local rancher.
Bunnell lists 34 Marfa lights sightings from 1945 through 2008. Monitoring stations were put in place starting in 2003. He has identified "an average of 9.5 MLs on 5.25 nights per year", but believes that the monitoring stations may only be finding half of the Marfa lights in Mitchell Flat.
Explanations
Atmospheric phenomena
Skeptic Brian Dunning notes that the designated "View Park" for the lights, a roadside park on the south side of U.S. Route 90 about 9 miles (14 km) east of Marfa, is at the site of Marfa Army Airfield, where tens of thousands of personnel were stationed between 1942 and 1947, training American and Allied pilots. This massive field was then used for years as a regional airport, with daily airline service. Since Marfa AAF and its satellite fields were each constantly patrolled by sentries, Dunning considers it unlikely that any unusual phenomena would have remained unobserved and unmentioned. According to Dunning, the likeliest explanation is that the lights are a sort of mirage caused by sharp temperature gradients between cold and warm layers of air. Marfa is at an elevation of 4,688 ft (1,429 m) above sea level, and differences of 40–50 °F (22–28 °C) between daily high and low temperatures are quite common.
Car lights
In May 2004 a group from the Society of Physics Students at the University of Texas at Dallas spent four days investigating and recording lights observed southwest of the view park using traffic volume-monitoring equipment, video cameras, binoculars, and chase cars. Their report made the following conclusions:
U.S. Highway 67 is visible from the Marfa lights viewing location.
The frequency of lights southwest of the view park correlates with the frequency of vehicle traffic on U.S. 67.
The motion of the observed lights was in a straight line, corresponding to U.S. 67.
When the group parked a vehicle on U.S. 67 and flashed its headlights, this was visible at the view park and appeared to be a Marfa light.
A car passing the parked vehicle appeared as one Marfa light passing another at the view park.
They came to the conclusion that all the lights observed over a four-night period southwest of the view park could be reliably attributed to automobile headlights traveling along U.S. 67 between Marfa and Presidio, Texas.
Spectroscopic analysis
For 20 nights in May 2008, scientists from Texas State University used spectroscopy equipment to observe lights from the Marfa lights viewing station. They recorded a number of lights that "could have been mistaken for lights of unknown origin", but in each case the movements of the lights and the data from their equipment could be easily explained as automobile headlights or small fires.
In popular media
The lights have been featured and mentioned in various media, including the television show Unsolved Mysteries and an episode of King of the Hill ("Of Mice and Little Green Men") and in an episode of the Disney Channel Original Series So Weird. A book by David Morrell, 2009's The Shimmer, was inspired by the lights. The Rolling Stones mention the "lights of Marfa" in the song "No Spare Parts" from the 2011 re-release of their 1978 album Some Girls. Country music artist Paul Cauthen wrote "Marfa Lights," a love song inspired by the lights, for his 2016 album "My Gospel."
In the 2019 Simpsons episode "Mad About the Toy", the family visits Marfa; Lisa tries to explain the lights but is prevented by Marge. The Union Trade included a song called "Marfa Lights" on their 2015 album A Place of Long Years.
See also
Aleya (Ghost light), Bengal
Aurora
Brown Mountain lights
Chir Batti
Gurdon Light
Hessdalen lights
Min Min light
Naga fireballs
Palatine Light
Paulding Light
The Spooklight
St. Louis light
References
Notes
Bibliography
James Bunnell, Strange Lights in West Texas. Lacey Publishing Company, Benbrook, TX, 2015
Herbert Lindee, "Ghost Lights of Texas," Skeptical Inquirer, Vol. 16, No. 4, Summer 1992, pp. 400–406
Elton Miles, Tales of the Big Bend, Texas A&M University Press, 1976, pp. 149–167
Dennis Stacy, "The Marfa Lights, A Viewer's Guide," Seale & Stacy, San Antonio, TX 1989
David Stipp, "Marfa, Texas, Finds a Flickering Fame in Mystery Lights," Wall Street Journal, March 21, 1984, p. A1.
Cecilia Thompson, History of Marfa and Presidio County, Texas 1535–1946, Volume 1, 1535–1900 (Marfa, TX: The Presidio County Historical Commission, 1985), 194, 197
External links
DeMystifying the "Marfa Lights"
"Marfa Lights" – from the Skeptic's Dictionary
Discussion of the Marfa Lights (and other 'ghost lights')
Texas Monthly article "The Truth Is Out There"
Reportedly haunted locations in Texas
Atmospheric ghost lights
Weather lore
Environment of Texas
Marfa, Texas
UFO-related phenomena | Marfa lights | Physics | 1,502 |
423,933 | https://en.wikipedia.org/wiki/Cracking%20%28chemistry%29 | In petrochemistry, petroleum geology and organic chemistry, cracking is the process whereby complex organic molecules such as kerogens or long-chain hydrocarbons are broken down into simpler molecules such as light hydrocarbons, by the breaking of carbon–carbon bonds in the precursors. The rate of cracking and the end products are strongly dependent on the temperature and the presence of catalysts. In short, cracking is the high-temperature breakdown of large, long-chain hydrocarbons into smaller, more useful alkanes and alkenes.
More loosely, outside the field of petroleum chemistry, the term "cracking" is used to describe any type of splitting of molecules under the influence of heat, catalysts and solvents, such as in processes of destructive distillation or pyrolysis.
Fluid catalytic cracking produces a high yield of petrol and LPG, while hydrocracking is a major source of jet fuel, diesel fuel, naphtha, and again yields LPG.
History and patents
Among several variants of thermal cracking methods (variously known as the "Shukhov cracking process", "Burton cracking process", "Burton–Humphreys cracking process", and "Dubbs cracking process") Vladimir Shukhov, a Russian engineer, invented and patented the first in 1891 (Russian Empire, patent no. 12926, November 7, 1891). One installation was used to a limited extent in Russia, but development was not followed up. In the first decade of the 20th century the American engineers William Merriam Burton and Robert E. Humphreys independently developed and patented a similar process as U.S. patent 1,049,667 on June 8, 1908. Among its advantages was that both the condenser and the boiler were continuously kept under pressure.
In its earlier versions it was a batch process, rather than continuous, and many patents were to follow in the US and Europe, though not all were practical. In 1924, a delegation from the American Sinclair Oil Corporation visited Shukhov. Sinclair Oil apparently wished to suggest that the patent of Burton and Humphreys, in use by Standard Oil, was derived from Shukhov's patent for oil cracking, as described in the Russian patent. If that could be established, it could strengthen the hand of rival American companies wishing to invalidate the Burton–Humphreys patent. In the event Shukhov satisfied the Americans that in principle Burton's method closely resembled his 1891 patents, though his own interest in the matter was primarily to establish that "the Russian oil industry could easily build a cracking apparatus according to any of the described systems without being accused by the Americans of borrowing for free".
At that time, just a few years after the Russian Revolution and Russian Civil War, the Soviet Union was desperate to develop industry and earn foreign exchange. The Soviet oil industry eventually did obtain much of their technology from foreign companies, largely American ones. At about that time, fluid catalytic cracking was being explored and developed and soon replaced most of the purely thermal cracking processes in the fossil fuel processing industry. The replacement was not complete; many types of cracking, including pure thermal cracking, still are in use, depending on the nature of the feedstock and the products required to satisfy market demands. Thermal cracking remains important, for example, in producing naphtha, gas oil, and coke; more sophisticated forms of thermal cracking have since been developed for various purposes. These include visbreaking, steam cracking, and coking.
Cracking methodologies
Thermal cracking
Modern high-pressure thermal cracking operates at absolute pressures of about 7,000 kPa. An overall process of disproportionation can be observed, where "light", hydrogen-rich products are formed at the expense of heavier molecules which condense and are depleted of hydrogen. The actual reaction is known as homolytic fission and produces alkenes, which are the basis for the economically important production of polymers.
Thermal cracking is currently used to "upgrade" very heavy fractions or to produce light fractions or distillates, burner fuel and/or petroleum coke. Two extremes of the thermal cracking in terms of the product range are represented by the high-temperature process called "steam cracking" or pyrolysis (ca. 750 °C to 900 °C or higher) which produces valuable ethylene and other feedstocks for the petrochemical industry, and the milder-temperature delayed coking (ca. 500 °C) which can produce, under the right conditions, valuable needle coke, a highly crystalline petroleum coke used in the production of electrodes for the steel and aluminium industries.
William Merriam Burton developed one of the earliest thermal cracking processes in 1912, which operated at about 700–750 °F (370–400 °C) and an absolute pressure of about 90 psi (620 kPa) and was known as the Burton process. Shortly thereafter, in 1921, C.P. Dubbs, an employee of the Universal Oil Products Company, developed a somewhat more advanced thermal cracking process, which operated at about 750–860 °F (400–460 °C) and was known as the Dubbs process. The Dubbs process was used extensively by many refineries until the early 1940s when catalytic cracking came into use.
Steam cracking
Steam cracking is a petrochemical process in which saturated hydrocarbons are broken down into smaller, often unsaturated, hydrocarbons. It is the principal industrial method for producing the lighter alkenes (or commonly olefins), including ethene (or ethylene) and propene (or propylene). Steam cracker units are facilities in which a feedstock such as naphtha, liquefied petroleum gas (LPG), ethane, propane or butane is thermally cracked through the use of steam in a bank of pyrolysis furnaces to produce lighter hydrocarbons.
In steam cracking, a gaseous or liquid hydrocarbon feed like naphtha, LPG or ethane is diluted with steam and briefly heated in a furnace without the presence of oxygen. Typically, the reaction temperature is very high, at around 850 °C, but the reaction is only allowed to take place very briefly. In modern cracking furnaces, the residence time is reduced to milliseconds to improve yield, resulting in gas velocities up to the speed of sound. After the cracking temperature has been reached, the gas is quickly quenched to stop the reaction in a transfer line heat exchanger or inside a quenching header using quench oil.
The products produced in the reaction depend on the composition of the feed, the hydrocarbon-to-steam ratio, and on the cracking temperature and furnace residence time. Light hydrocarbon feeds such as ethane, LPGs or light naphtha give product streams rich in the lighter alkenes, including ethylene, propylene, and butadiene. Heavier hydrocarbon (full range and heavy naphthas as well as other refinery products) feeds give some of these, but also give products rich in aromatic hydrocarbons and hydrocarbons suitable for inclusion in gasoline or fuel oil. Typical product streams include pyrolysis gasoline (pygas) and BTX.
A higher cracking temperature (also referred to as severity) favors the production of ethylene and benzene, whereas lower severity produces higher amounts of propylene, C4-hydrocarbons and liquid products. The process also results in the slow deposition of coke, a form of carbon, on the reactor walls. Since coke degrades the efficiency of the reactor, great care is taken to design reaction conditions to minimize its formation. Nonetheless, a steam cracking furnace can usually only run for a few months between de-cokings. "Decokes" require the furnace to be isolated from the process and then a flow of steam or a steam/air mixture is passed through the furnace coils. This decoking is essentially combustion of the carbons, converting the hard solid carbon layer to carbon monoxide and carbon dioxide.
Fluid catalytic cracking
The catalytic cracking process involves the presence of solid acid catalysts, usually silica-alumina and zeolites. The catalysts promote the formation of carbocations, which undergo processes of rearrangement and scission of C-C bonds. Relative to thermal cracking, cat cracking proceeds at milder temperatures, which saves energy. Furthermore, by operating at lower temperatures, the yield of undesirable alkenes is diminished. Alkenes cause instability of hydrocarbon fuels.
Fluid catalytic cracking is a commonly used process, and a modern oil refinery will typically include a cat cracker, particularly at refineries in the US, due to the high demand for gasoline. The process was first used around 1942 and employs a powdered catalyst. During WWII, the Allied Forces had plentiful supplies of the materials in contrast to the Axis Forces, which suffered severe shortages of gasoline and artificial rubber. Initial process implementations were based on low activity alumina catalyst and a reactor where the catalyst particles were suspended in a rising flow of feed hydrocarbons in a fluidized bed.
In newer designs, cracking takes place using a very active zeolite-based catalyst in a short-contact time vertical or upward-sloped pipe called the "riser". Pre-heated feed is sprayed into the base of the riser via feed nozzles, where it contacts the extremely hot fluidized catalyst. The hot catalyst vaporizes the feed and catalyzes the cracking reactions that break down the high-molecular-weight oil into lighter components including LPG, gasoline, and diesel. The catalyst-hydrocarbon mixture flows upward through the riser for a few seconds, and then the mixture is separated via cyclones. The catalyst-free hydrocarbons are routed to a main fractionator for separation into fuel gas, LPG, gasoline, naphtha, light cycle oils used in diesel and jet fuel, and heavy fuel oil.
During the trip up the riser, the cracking catalyst is "spent" by reactions which deposit coke on the catalyst and greatly reduce activity and selectivity. The "spent" catalyst is disengaged from the cracked hydrocarbon vapors and sent to a stripper where it contacts steam to remove hydrocarbons remaining in the catalyst pores. The "spent" catalyst then flows into a fluidized-bed regenerator where air (or in some cases air plus oxygen) is used to burn off the coke to restore catalyst activity and also provide the necessary heat for the next reaction cycle, cracking being an endothermic reaction. The "regenerated" catalyst then flows to the base of the riser, repeating the cycle.
The gasoline produced in the FCC unit has an elevated octane rating but is less chemically stable compared to other gasoline components due to its olefinic profile. Olefins in gasoline are responsible for the formation of polymeric deposits in storage tanks, fuel ducts and injectors. The FCC LPG is an important source of C3–C4 olefins and isobutane that are essential feeds for the alkylation process and the production of polymers such as polypropylene.
Typical yields of a UOP fluid catalytic cracker have been reported on a volume, feed basis for a ~23 API feedstock at 74% conversion.
Hydrocracking
Hydrocracking is a catalytic cracking process assisted by the presence of added hydrogen gas. Unlike a hydrotreater, hydrocracking uses hydrogen to break C–C bonds (hydrotreatment is conducted prior to hydrocracking to protect the catalysts in a hydrocracking process). In 2010, 265 million tons of petroleum was processed with this technology. The main feedstock is vacuum gas oil, a heavy fraction of petroleum.
The products of this process are saturated hydrocarbons; depending on the reaction conditions (temperature, pressure, catalyst activity) these products range from ethane, LPG to heavier hydrocarbons consisting mostly of isoparaffins. Hydrocracking is normally facilitated by a bifunctional catalyst that is capable of rearranging and breaking hydrocarbon chains as well as adding hydrogen to aromatics and olefins to produce naphthenes and alkanes.
The major products from hydrocracking are jet fuel and diesel, but low sulphur naphtha fractions and LPG are also produced. All these products have a very low content of sulfur and other contaminants with a goal of reducing the gasoil and naphtha range material to 10 PPM sulfur or lower. It is very common in Europe and Asia because those regions have high demand for diesel and kerosene. In the US, fluid catalytic cracking is more common because the demand for gasoline is higher.
The hydrocracking process depends on the nature of the feedstock and the relative rates of the two competing reactions, hydrogenation and cracking. Heavy aromatic feedstock is converted into lighter products under a wide range of very high pressures (1,000–2,000 psi) and fairly high temperatures (750–1,500 °F, 400–800 °C), in the presence of hydrogen and special catalysts.
Indicative Isocracking (UOP VGO hydrocracking) yields have been reported for a Russian VGO feedstock of 18.5 API, with 2.28% sulfur, 0.28% nitrogen, and 6.5% wax by weight.
Hydrocracking is mostly a licensed technology due to its complexity; typically the licensor is also the catalyst provider. Unit internals are often patented by the process licensors and are designed to support specific functions of the catalyst load. Currently, the major process licensors for hydrocracking are:
UOP
Axens
Chevron Lummus Global
Topsoe
Shell Criterion
Elessent (formerly DuPont)
ExxonMobil (iso-dewaxing for lubricant hydrocracking)
Fundamentals
Outside the industrial sector, the cracking of C−C and C−H bonds is a rare chemical reaction. In principle, ethane can undergo homolysis:
CH3CH3 → 2 CH3⋅
Because C−C bond energy is so high (377 kJ/mol), this reaction is not observed under laboratory conditions. More common examples of cracking reactions involve retro-Diels–Alder reactions. Illustrative is the thermal cracking of dicyclopentadiene to produce cyclopentadiene.
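The temperature dependence implied by this bond energy can be made concrete with the Arrhenius equation, k = A·exp(−Ea/RT). The short sketch below treats the C−C bond energy quoted above as the activation energy; the pre-exponential factor A = 10^16 s^−1 is an assumed order-of-magnitude value for unimolecular homolysis, chosen only for illustration:

```python
# Illustrative sketch: why ethane homolysis is unobservable at room temperature
# but fast at cracking temperatures. Uses the Arrhenius equation
# k = A * exp(-Ea / (R*T)) with Ea = the C-C bond energy quoted in the text;
# the pre-exponential factor A is an assumed order-of-magnitude value.
import math

R = 8.314    # gas constant, J/(mol*K)
Ea = 377e3   # activation energy, J/mol (C-C bond energy from the text)
A = 1e16     # assumed pre-exponential factor, 1/s

def rate_constant(temperature_kelvin):
    return A * math.exp(-Ea / (R * temperature_kelvin))

for T in (298, 773, 1123):  # room temperature, delayed coking, steam cracking
    print(f"T = {T} K  ->  k ~ {rate_constant(T):.2e} 1/s")
```

At 298 K the computed rate constant is vanishingly small (of order 10^−50 s^−1), while near steam-cracking temperatures it becomes appreciable, consistent with the reaction being observed only at high temperature.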
See also
Steam reforming
Catalytic reforming
References
External links
Information on cracking in oil refining from howstuffworks.com
www.shukhov.org/shukhov.html — Vladimir Grigorievich Shukhov biography
Colorado School of Mines - course materials on refining technology by faculty member John Jechura, available online free of charge, including a presentation on hydrotreating/hydrocracking.
Oil refining
Pyrolysis
Organic reactions
Russian inventions
Chemical processes | Cracking (chemistry) | Chemistry | 3,068 |
41,077 | https://en.wikipedia.org/wiki/Duplexer | A duplexer is an electronic device that allows bi-directional (duplex) communication over a single path. In radar and radio communications systems, it isolates the receiver from the transmitter while permitting them to share a common antenna. Most radio repeater systems include a duplexer. Duplexers can be based on frequency (often a waveguide filter), polarization (such as an orthomode transducer), or timing (as is typical in radar).
Types
Transmit-receive switch
In radar, a transmit/receive (TR) switch alternately connects the transmitter and receiver to a shared antenna. In the simplest arrangement, the switch consists of a gas-discharge tube across the input terminals of the receiver. When the transmitter is active, the resulting high voltage causes the tube to conduct, shorting together the receiver terminals to protect it. Its complement, the anti-transmit/receive (ATR) switch, is a similar discharge tube that decouples the transmitter from the antenna when it is not operating, to prevent it from wasting received energy.
Circulator
Hybrid
A hybrid, such as a magic T, may be used as a duplexer by terminating the fourth port in a matched load.
This arrangement suffers from the disadvantage that half of the transmitter power is lost in the matched load, while thermal noise in the load is delivered to the receiver.
Orthomode transducer
Frequency domain
In radio communications (as opposed to radar), the transmitted and received signals can occupy different frequency bands, and so may be separated by frequency-selective filters. These are effectively a higher-performance version of a diplexer, typically with a narrow split between the two frequencies in question (typically around 2%-5% for a commercial two-way radio system).
With a duplexer, the high- and low-frequency signals travel in opposite directions at the shared port of the duplexer.
Modern duplexers often use nearby frequency bands, so the frequency separation between the two ports is also much less. For example, the transition between the uplink and downlink bands in the GSM frequency bands may be about one percent (915 MHz to 925 MHz). Significant attenuation (isolation) is needed to prevent the transmitter's output from overloading the receiver's input, so such duplexers employ multi-pole filters. Duplexers are commonly made for use on the 30-50 MHz ("low band"), 136-174 MHz ("high band"), 380-520 MHz ("UHF"), plus the 790–862 MHz ("800"), 896-960 MHz ("900") and 1215-1300 MHz ("1200") bands.
There are two predominant types of duplexer in use: "notch duplexers", which exhibit sharp notches at the unwanted frequencies and pass only a narrow band of wanted frequencies, and "bandpass duplexers", which have wide pass bands and high out-of-band attenuation.
On shared-antenna sites, the bandpass duplexer variety is greatly preferred because this virtually eliminates interference between transmitters and receivers by removing out-of-band transmit emissions and considerably improving the selectivity of receivers. Most professionally engineered sites ban the use of notch duplexers and insist on bandpass duplexers for this reason.
Note 1: A duplexer must be designed for operation in the frequency band used by the receiver and transmitter, and must be capable of handling the output power of the transmitter.
Note 2: A duplexer must provide adequate rejection of transmitter noise occurring at the receive frequency, and must be designed to operate at, or less than, the frequency separation between the transmitter and receiver.
Note 3: A duplexer must provide sufficient isolation to prevent receiver desensitization.
Source: from Federal Standard 1037C
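As a back-of-envelope illustration of Note 3, the required isolation can be estimated from the transmitter power and the strongest off-channel signal the receiver tolerates before desensitizing. The figures below are assumed example values for a repeater, not requirements from any standard:

```python
# Back-of-envelope sketch for Note 3: the isolation a duplexer must provide so
# the transmitter carrier stays below the level that desensitizes the co-sited
# receiver. All figures are assumed example values, not from any standard.

tx_power_dbm = 47.0      # 50 W transmitter: 10*log10(50_000 mW) ~ 47 dBm
rx_blocking_dbm = -30.0  # strongest off-channel signal tolerated (assumed)

required_isolation_db = tx_power_dbm - rx_blocking_dbm
print(f"Required isolation: {required_isolation_db:.0f} dB")  # 77 dB
```

Multi-pole bandpass filters on each leg are how practical duplexers reach isolation figures of this magnitude at small frequency splits.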
History
The first duplexers were invented for use on the electric telegraph and were known as duplex rather than duplexer. They were an early form of the hybrid coil. The telegraph companies were keen to have such a device since the ability to have simultaneous traffic in both directions had the potential to save the cost of thousands of miles of telegraph wire. The first of these devices was designed in 1853 by Julius Wilhelm Gintl of the Austrian State Telegraph. Gintl's design was not very successful. Further attempts were made by Carl Frischen of Hanover with an artificial line to balance the real line as well as by Siemens & Halske, who bought and modified Frischen's design. The first truly successful duplex was designed by Joseph Barker Stearns of Boston in 1872. This was further developed into the quadruplex telegraph by Thomas Edison. The device is estimated to have saved Western Union $500,000 per year in construction of new telegraph lines.
The first duplexers for radar, sometimes referred to as Transmit/Receive Switches, were invented by Robert Morris Page and Leo C. Young of the United States Naval Research Laboratory in July 1936.
References
Broadcast engineering
Radio electronics
Electronic circuits
Telegraphy | Duplexer | Engineering | 1,046 |
27,447,887 | https://en.wikipedia.org/wiki/Wood%E2%80%93Armer%20method | The Wood–Armer method is a structural analysis method used to design the reinforcement for concrete slabs, based on finite element analysis. The method provides simple equations for designing a concrete slab from the output of finite element analysis software.
The method was described by engineers Randal Herbert Wood and Graham S. T. Armer in 1968.
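The article does not reproduce the equations, but the commonly cited form of the Wood–Armer design moments for bottom (sagging) reinforcement can be sketched as follows. This is an illustrative implementation of that published form, not code from the method's authors, and sign conventions vary between references:

```python
# Sketch of the commonly cited Wood-Armer design moments for bottom (sagging)
# reinforcement in a slab's x and y directions, given the bending moments
# mx, my and the twisting moment mxy from a finite element analysis.
# Assumes sagging moments are positive; illustrative only.

def wood_armer_bottom(mx, my, mxy):
    mx_star = mx + abs(mxy)
    my_star = my + abs(mxy)
    if mx_star < 0:                     # no x-reinforcement needed; adjust y
        mx_star = 0.0
        my_star = my + abs(mxy * mxy / mx)
    elif my_star < 0:                   # no y-reinforcement needed; adjust x
        my_star = 0.0
        mx_star = mx + abs(mxy * mxy / my)
    return max(mx_star, 0.0), max(my_star, 0.0)

# Hypothetical moments in kNm/m taken from a finite element model:
print(wood_armer_bottom(40.0, 25.0, 15.0))  # (55.0, 40.0)
```

An analogous set of equations, with the signs reversed, gives the design moments for top (hogging) reinforcement.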
References
Finite element method
Structural analysis | Wood–Armer method | Engineering | 75 |
28,131,864 | https://en.wikipedia.org/wiki/Mobile%20dial%20code | A mobile dial code (MDC) is a grouping of 3 to 10 digits following a "#", "##", "*", or "**", used to create a short, easy-to-remember phone number. Historically, MDCs were used for repair-related purposes by landline and wireless carriers. More recently, MDCs have been made available for commercial use. MDCs are dialed just like a regular telephone number. Businesses can send automatic responses upon contact, such as by text message.
Usage
MDCs are used by wireless carriers for the following purposes:
customer convenience, offering quick access to customer service or bill payment;
diagnosing problems with and making repairs, such as to unlock or lock cell phones.
For commercial use by third parties as a vanity telephone number.
For an MDC to be used as a vanity telephone number, it must be provisioned to its user by all of the major wireless carriers. If a business needs to use the MDC in more than one state, accommodations can be made for one MDC to be shared by multiple users on a state-by-state, or even local-area-by-local-area, basis through advanced routing technology called geo-routing. Inbound calls to MDCs can be automatically routed based upon the area code of the caller, or by asking the caller to type or speak their ZIP code into the phone.
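The area-code-based geo-routing described above can be pictured as a simple lookup from the caller's area code to a regional destination. The sketch below is purely illustrative, with hypothetical numbers, and does not reflect any carrier's actual implementation:

```python
# Illustrative sketch of area-code-based geo-routing for a shared MDC.
# The routing table and phone numbers are hypothetical.

ROUTES_BY_AREA_CODE = {
    "212": "+1-212-555-0100",  # New York metro (hypothetical destination)
    "312": "+1-312-555-0100",  # Chicago metro (hypothetical destination)
}
DEFAULT_ROUTE = "+1-800-555-0199"  # national fallback (hypothetical)

def route_mdc_call(caller_number: str) -> str:
    """Pick a destination for an inbound MDC call from the caller's area code."""
    digits = "".join(ch for ch in caller_number if ch.isdigit())
    area_code = digits[-10:-7]  # NANP: area code is the first 3 of the last 10 digits
    return ROUTES_BY_AREA_CODE.get(area_code, DEFAULT_ROUTE)

print(route_mdc_call("+1 (212) 555-0123"))  # routes to the New York destination
```

ZIP-code prompting works the same way, with the spoken or typed ZIP code replacing the area code as the lookup key.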
Commercial use
MDCs may be easier to remember than full phone numbers, and thus easier to brand. They may be useful to lead generation businesses, which generate leads for potential business and then sell them to other companies.
Similar technology
USSD (Unstructured Supplementary Service Data) codes are mobile dial codes that can be used for communicating with the service provider's computers (i.e. for WAP browsing, prepaid callback service, mobile-money services, location-based content services, menu-based information services, and as part of configuring the phone on the network).
Abbreviated dialing codes involve a similar technology that supports only voice calls.
A 2D bar code involves the use of a graphic that must be photographed or scanned by a mobile phone camera prior to presenting the caller with a response.
Worldwide
United States
In the United States, each wireless network controls how their MDCs will be used. As such, when wireless customers call a MDC, their call is routed to the end user that their carrier selects.
See also
Abbreviated dialing
Vertical service code, consisting of an asterisk followed by a two-digit number.
Short code, for sending SMS and MMS text messages
Comparison of user features of messaging platforms
References
North American Numbering Plan
Calling features
Telephone numbers | Mobile dial code | Mathematics | 533 |
5,793,570 | https://en.wikipedia.org/wiki/AP%20Physics%20C%3A%20Electricity%20and%20Magnetism | Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to serve as a proxy for a second-semester calculus-based university course in electricity and magnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
History
Before 1973, the topics of AP Physics C: Electricity and Magnetism were covered in a single AP Physics C exam, which included mechanics, electricity, magnetism, optics, fluids, and modern physics. In 1973, this exam was discontinued, and two new exams were created, one covering Newtonian mechanics and the other electromagnetism.
Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. This was changed, so now test-takers have to pay twice to take both parts of the AP Physics C test.
Before the 2024–25 school year, the multiple-choice and free-response sections were each allotted 45 minutes, with 35 questions for the former and 3 questions for the latter. This made AP Physics C: Electricity and Magnetism, along with Mechanics, the shortest exams offered by the College Board. Unlike other exams, the AP Physics C exams also had 5 options for test-takers to choose from rather than the typical 4. This was changed in an announcement made by the College Board in February 2024 regarding changes to its AP Physics courses for the 2024–25 school year onward, which explained that the multiple-choice sections would have 40 questions and the free-response sections would have 4 questions. To compensate, the College Board allotted 80 minutes for the multiple-choice section and 100 minutes for the free-response section, making the exams as long as the ones for AP Physics 1 and AP Physics 2.
Curriculum
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors.
The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers additional topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
Starting in the 2024–25 school year, all units in AP Physics C: Electricity and Magnetism are numbered sequentially after the 7 units in AP Physics C: Mechanics. This starts with Electric Charges, Fields, and Gauss's Law as unit 8 and ends with Electromagnetic Induction as unit 13.
Exam
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Science Practices Assessed
The multiple-choice and free-response sections of the AP Physics C: Electricity and Magnetism exam are also assessed on science practices, each practice carrying a specified weighting in both parts of the exam.
Grade distribution
The College Board has published grade distributions for Physics C: Electricity and Magnetism scores annually since 2010.
See also
Physics
Glossary of physics
References
External links
College Board Course Description: Physics
Advanced Placement
Physics education
Standardized tests | AP Physics C: Electricity and Magnetism | Physics | 690 |
73,362,999 | https://en.wikipedia.org/wiki/Y2K%20%282024%20film%29 | Y2K is a 2024 American apocalyptic science fiction comedy horror film directed by Kyle Mooney in his directorial debut, written by Mooney and Evan Winter. It stars Jaeden Martell, Julian Dennison, Rachel Zegler, Daniel Zolghadri, Lachlan Watson, Eduardo Franco, Mason Gooding, Mooney and Fred Durst. It follows a group of high school students who attempt to survive when the year 2000 problem causes all technology worldwide to come to life and turn against humanity.
The film had its world premiere at South by Southwest on March 9, 2024, and was theatrically released by A24 in the United States on December 6, 2024. The film has received mixed reviews from critics and has grossed $4.4 million against a budget of $15 million.
Plot
In 1999, best friends Eli, Danny, and Garrett discuss plans for New Year's Eve while their parents are out. Eli has a crush on his classmate Laura but is too nervous to talk to her, despite Danny's encouragement to kiss her at midnight. They watch as Laura and her friends Trevor, Madison, and Raleigh sneak out some alcohol, and decide to crash a party held by Soccer Chris, Laura's boyfriend. While partying, Eli watches in dismay as Chris and Laura kiss at midnight, until the power suddenly goes out.
As a student is found dead with a piece of a fan lodged in his head, a toy car arrives on its own and burns Trevor's face with hairspray and a lighter, killing him. The terrified partygoers realize that all technology has become sentient and attempt to leave, but many of them are gruesomely killed. In the ensuing chaos, a VCR smashes Madison's head with a VHS tape, Chris's head is melted by a microwave, and a Tamagotchi drills through Raleigh's head. Eli, Danny, and Laura escape with juvenile delinquents Farkas, CJ, and Ash, but Danny is fatally impaled by a blade, and Farkas dies when he hits the ground while attempting to rail a broken pole on his skates.
The group goes to an old mill without electricity while the sentient machines kill several residents throughout the town. They also find Garrett and Laura's ex-boyfriend Jonas, and learn the threat is a collective consciousness of all electronic devices worldwide that plans to enslave humanity. Laura successfully creates a kill code to shut down the algorithm, which has dubbed itself the "Amalgamation", but a computer attacks her. Eli douses it in water, shutting it down. The group is then cornered by the Amalgamation, which is using electronics to make itself bigger. Garrett tries to fend it off but is decapitated. The group members hide in portable toilets and ride them down a hill. They take refuge in a VHS store and meet musician Fred Durst, whom they encourage to join them.
The group arrives at the local high school where, they learn, the machines' uprising stems from a newly installed internet connection. They sneak inside and discover the townspeople have been reduced to mindless slaves by brain-implanted chips. CJ sacrifices himself to save Fred from a machine, while the group learns the Amalgamation has now grown to a monstrous size and is forcing people inside it, converting them into more of the aforementioned slaves. Fred, Ash, and Jonas distract the machines guarding the townspeople to let Laura make her way into the Amalgamation, but it electrocutes her. Eli arrives in time to save Laura and attempts to insert the kill code himself, but fails as the Amalgamation keeps electrocuting and mocking them. Eli then uses a condom Danny had given him earlier, wrapping it around Laura's hand like a protective glove. While Fred, Ash, and Jonas continue distracting the machines, Eli and Laura successfully insert the kill code, shutting down the algorithm worldwide and destroying the Amalgamation. Eli shares a kiss with Laura, and they reunite with the surviving townspeople as dawn breaks.
Five years later, in 2005, a now college-aged Eli, Laura, and Ash visit their friends' graves. While they leave the cemetery, Ash's new iPod starts to glitch, revealing that the algorithm has not been fully destroyed.
Cast
Production
It was announced in early March 2023 that Kyle Mooney would direct Y2K for A24. Jaeden Martell, Julian Dennison, Rachel Zegler, Lachlan Watson, Mason Gooding, The Kid Laroi, Eduardo Franco, Miles Robbins, Fred Hechinger, Alicia Silverstone, Tim Heidecker and Daniel Zolghadri joined the cast. Additionally, Weta Workshop worked on the effects for the film. In April 2023, Sebastian Chacon and Lauren Balone joined the cast.
Principal photography began in April 2023 in Ringwood, New Jersey. The following month, filming continued in Ringwood and Chatham Borough and in Clark at the recreation center, before wrapping.
Music
The soundtrack for Y2K comprises a number of songs from the 1990s; the soundtrack album is scheduled to be released on CD on January 17, 2025.
Release
It had its world premiere at South by Southwest on March 9, 2024. It was released on December 6, 2024.
Home media
The film was released video on demand on December 24, 2024.
Reception
Box office
In the United States and Canada, Y2K was released alongside Werewolves and Pushpa 2: The Rule, and was projected to gross $3–5 million from 2,108 theaters in its opening weekend. The film made $923,110 on its first day, including $300,000 from Thursday night previews. It went on to debut to $2.1 million, finishing in eighth place.
Critical response
Audiences surveyed by CinemaScore gave the film an average grade of "C–" on an A+ to F scale, while those polled by PostTrak gave it a 65% overall positive score, with 50% saying they would definitely recommend it.
Clint Worthington of RogerEbert.com gave the film 1.5/4 stars, unfavorably comparing it to Brigsby Bear and concluding, "Y2K doesn't want to break stuff; it wants to dig it out of the trash and pine nostalgically for it. That's just not as interesting." Variety's Owen Gleiberman wrote, "Y2K turns out to be an attack-of-the-machines movie. Yet it's still very much a period-piece high-school comedy. So how well do the two go together? In my book, not very well... the last hour of it, the cheeky dystopian alien-tech horror farce, simply isn't very good."
Adrian Horton of The Guardian gave the film 3/5 stars, calling it "a promising if wildly uneven debut that banks heavily, often successfully, on Mooney's penchant for late 90s nostalgia." Bloody Disgusting's Meagan Navarro gave it a score of 4/5, writing, "While some of its meaner horror impulses get largely forgotten by the end, it's tough to mind at all thanks to the nonstop, playful tone, killer soundtrack, wacky murder bots, and talent in front of and behind the camera that ensure a party worth rewinding the clock for."
References
External links
2024 comedy horror films
2024 directorial debut films
2024 independent films
2020s American films
2020s disaster films
2020s English-language films
2020s high school films
2020s teen comedy films
2020s teen horror films
A24 (company) films
American comedy horror films
American disaster films
American high school films
American alternate history films
American independent films
American robot films
American science fiction comedy films
American science fiction horror films
American teen comedy films
American teen horror films
Apocalyptic films
Fiction featuring the turn of the third millennium
Films about computing
Films about parties
Films about artificial intelligence
Films about slavery
Films produced by Jonah Hill
Films produced by Christopher Storer
Films set around New Year
Films set in 1999
Films set in 2000
Films set in 2005
Films shot in New Jersey
Holiday horror films
American romantic horror films
English-language comedy films | Y2K (2024 film) | Technology | 1,667 |
1,087,779 | https://en.wikipedia.org/wiki/Period%20%28music%29 | In music theory, the term period refers to forms of repetition and contrast between adjacent small-scale formal structures such as phrases. In twentieth-century music scholarship, the term is usually used similarly to the definition in the Oxford Companion to Music: "a period consists of two phrases, antecedent and consequent, each of which begins with the same basic motif." Earlier and later usages vary somewhat, but usually refer to notions of symmetry, difference, and an open section followed by a closure. The concept of a musical period originates in comparisons between music structure and rhetoric at least as early as the 16th century.
Western art music
In Western art music or Classical music, a period is a group of phrases consisting usually of at least one antecedent phrase and one consequent phrase totaling about 8 bars in length (though this varies depending on meter and tempo). Generally, the antecedent ends in a weaker and the consequent in a stronger cadence; often, the antecedent ends in a half cadence while the consequent ends in an authentic cadence. Frequently, the consequent strongly parallels the antecedent, even sharing most of the material save the final bars. In other cases, the consequent may differ greatly (for example, the period at the beginning of the second movement of the Pathétique Sonata).
The 1958 Encyclopédie Fasquelle defines a period as follows:
"A complex phrase, in which the various parts are enchained."
Another definition is as follows:
"In traditional music...a group of bars comprising a natural division of the melody; usually regarded as comprising two or more contrasting or complementary phrases and ending with a cadence." (Harvard Dictionary of Music, 1969)
And
"A period is a structure of two consecutive phrases, often built of similar or parallel melodic material, in which the first phrase gives the impression of asking a question which is answered by the second phrase."
More recent definitions, especially by American theorists, have tightened the use of the term to restrict the contrast so that the first phrase must end in a half cadence or imperfect authentic cadence and the second a perfect authentic cadence.
A double period is, "a group of at least four phrases...in which the first two phrases form the antecedent and the third and fourth phrases together form the consequent."
When analyzing Classical music, contemporary music theorists usually employ a more specific formal definition, such as the following by William Caplin:
"the period is normatively an eight-bar structure divided into two four-bar phrases. [...] the antecedent phrase of a period begins with a two-bar basic idea. [...] bars 3–4 of the antecedent phrase bring a 'contrasting idea' that leads to a weak cadence of some kind. [...] The consequent phrase of the period repeats the antecedent but concludes with a stronger cadence. More specifically, the basic idea 'returns' in bars 5–6 and then leads to a contrasting idea, which may or may not be based on that of the antecedent."
Sub-Saharan music and music of the African diaspora
Bell patterns
The second definition of period in the New Harvard Dictionary of Music states: "A musical element that is in some way repeated," applying "to the units of any parameter of music that embody repetitions at any level." In some sub-Saharan music and music of the African diaspora, the bell pattern embodies this definition of period. The bell pattern (also known as a key pattern, guide pattern, phrasing referent, timeline, or asymmetrical timeline) is repeated throughout the entire piece, and is the principal unit of musical time and rhythmic structure by which all other elements are arranged. The period is often a single bar (four main beats).
The seven-stroke standard bell pattern is one of the most commonly used representations of the musical period in sub-Saharan music. The first three strokes of the bell are antecedent, and the remaining four strokes are consequent. The consequent diametrically opposes the antecedent.
Clave
Cuban musicologist Emilio Grenet represents the period in two bars of 2/4. In explaining the structure of music guided by the five-stroke African bell pattern known in Cuba as clave (Spanish for 'key' or 'code'), Grenet uses what could be considered a definition of period: "We find that all its melodic design is constructed on a rhythmic pattern of two bars, as though both were only one, the first is antecedent, strong, and the second is consequent, weak."
As Grenet and many others describe the period, the cross-rhythmic antecedent ('tresillo') is strong and the on-beat resolution is weak. This is the opposite of Western harmonic theory, where resolution is described as strong. Despite this difference, both the harmonic and rhythmic periods have consequent resolution. In simplest terms, that resolution occurs harmonically when the tonic is sounded, and in clave-based rhythm when the last main beat is sounded. Metric consonance is achieved when the last stroke of clave coincides with the last main beat (last quarter note) of the consequent bar.
The antecedent bar has three strokes and is called the three-side of clave. The consequent bar has two strokes and is called the two-side. The three-side gives the impression of asking a question, which is answered by the two-side. The two sides of clave cycle in a type of repeating call and response.
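Grenet's two-bar period can be made concrete by writing the five-stroke son clave as onset positions in a sixteen-pulse cycle (four main beats of four pulses each). The encoding below is a common way of representing such patterns and is shown only as an illustration:

```python
# Illustrative encoding of the 3-2 son clave as onsets in a 16-pulse cycle
# (two bars, four main beats in all, four pulses per beat).

PULSES_PER_CYCLE = 16
MAIN_BEATS = [0, 4, 8, 12]     # the four main beats of the two-bar period
SON_CLAVE = [0, 3, 6, 10, 12]  # stroke positions of the five-stroke pattern

three_side = [p for p in SON_CLAVE if p < 8]   # antecedent bar: 3 strokes
two_side = [p for p in SON_CLAVE if p >= 8]    # consequent bar: 2 strokes
print(three_side, two_side)                    # [0, 3, 6] [10, 12]

# Metric consonance as described above: the last stroke of the pattern
# coincides with the last main beat of the consequent bar.
print(SON_CLAVE[-1] == MAIN_BEATS[-1])         # True
```

The antecedent (three-side) strokes cut across the underlying beats, while the consequent (two-side) strokes resolve onto them, mirroring the question-and-answer relationship described above.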
[With] clave . . . the two bars are not at odds, but rather, they are balanced opposites like positive and negative, expansive and contractive or the poles of a magnet. As the pattern is repeated, an alternation from one polarity to the other takes place creating pulse and rhythmic drive. Were the pattern to be suddenly reversed, the rhythm would be destroyed as in a reversing of one magnet within a series . . . the patterns are held in place according to both the internal relationships between the drums and their relationship with clave . . . Should the drums fall out of clave (and in contemporary practice they sometimes do) the internal momentum of the rhythm will be dissipated and perhaps even broken—Amira and Cornelius (1992).
An actual key pattern does not need to be played in order for a key pattern to define the period.
See also
Section (music)
Sentence (music)
Sources
External links
Slideshow about musical periods by the San Francisco Conservatory of Music
Formal sections in music analysis | Period (music) | Technology | 1,373 |
1,060,279 | https://en.wikipedia.org/wiki/Coping | Coping refers to conscious or unconscious strategies used to reduce and manage unpleasant emotions. Coping strategies can be cognitions or behaviors and can be individual or social. To cope is to deal with struggles and difficulties in life. It is a way for people to maintain their mental and emotional well-being. Everybody has ways of handling difficult events that occur in life, and that is what it means to cope. Coping can be healthy and productive, or unhealthy and destructive. It is recommended that an individual cope in ways that will be beneficial and healthy. "Managing your stress well can help you feel better physically and psychologically and it can impact your ability to perform your best."
Theories of coping
Hundreds of coping strategies have been proposed in an attempt to understand how people cope. Classification of these strategies into a broader architecture has not been agreed upon. Researchers try to group coping responses rationally, empirically by factor analysis, or through a blend of both techniques. In the early days, Folkman and Lazarus split the coping strategies into four groups, namely problem-focused, emotion-focused, support-seeking, and meaning-making coping. Weiten and Lloyd have identified four types of coping strategies: appraisal-focused (adaptive cognitive), problem-focused (adaptive behavioral), emotion-focused, and occupation-focused coping. Billings and Moos added avoidance coping as one form of emotion-focused coping. Some scholars have questioned the psychometric validity of forced categorization, as those strategies are not independent of each other. Moreover, in practice, people can adopt multiple coping strategies simultaneously.
Typically, people use a mixture of several functions of coping strategies, which may change over time. All these strategies can prove useful, but some claim that those using problem-focused coping strategies will adjust better to life. Problem-focused coping mechanisms may allow an individual greater perceived control over their problem, whereas emotion-focused coping may sometimes lead to a reduction in perceived control (maladaptive coping).
Lazarus "notes the connection between his idea of 'defensive reappraisals' or cognitive coping and Sigmund Freud's concept of 'ego-defenses, coping strategies thus overlapping with a person's defense mechanisms.
Appraisal-focused coping strategies
Appraisal-focused (adaptive cognitive) strategies occur when the person modifies the way they think, for example: employing denial, or distancing oneself from the problem. Individuals who use appraisal coping strategies purposely alter their perspective on their situation in order to have a more positive outlook on their situation. An example of appraisal coping strategies could be individuals purchasing tickets to a football game, knowing their medical condition would likely cause them to not be able to attend. People may alter the way they think about a problem by altering their goals and values, such as by seeing the humor in a situation: "Some have suggested that humor may play a greater role as a stress moderator among women than men".
Adaptive behavioral coping strategies
The psychological coping mechanisms are commonly termed coping strategies or coping skills. The term coping generally refers to adaptive (constructive) coping strategies, that is, strategies which reduce stress. In contrast, other coping strategies may be termed maladaptive if they increase stress. Maladaptive coping is therefore also described, based on its outcome, as non-coping. Furthermore, the term coping generally refers to reactive coping, i.e. the coping response which follows the stressor. This differs from proactive coping, in which a coping response aims to neutralize a future stressor. Subconscious or unconscious strategies (e.g. defense mechanisms) are generally excluded from the area of coping.
The effectiveness of the coping effort depends on the type of stress, the individual, and the circumstances. Coping responses are partly controlled by personality (habitual traits), but also partly by the social environment, particularly the nature of the stressful environment. People using problem-focused strategies try to deal with the cause of their problem. They do this by finding out information on the problem and learning new skills to manage the problem. Problem-focused coping is aimed at changing or eliminating the source of the stress. The three problem-focused coping strategies identified by Folkman and Lazarus are taking control, information seeking, and evaluating the pros and cons. However, problem-focused coping is not necessarily adaptive and can backfire, especially in uncontrollable cases where one cannot make the problem go away.
Emotion-focused coping strategies
Emotion-focused strategies involve:
releasing pent-up emotions
distracting oneself
managing hostile feelings
meditating
mindfulness practices
using systematic relaxation procedures.
situational exposure
Emotion-focused coping "is oriented toward managing the emotions that accompany the perception of stress". The five emotion-focused coping strategies identified by Folkman and Lazarus are:
disclaiming
escape-avoidance
accepting responsibility or blame
exercising self-control
and positive reappraisal.
Emotion-focused coping is a mechanism to alleviate distress by minimizing, reducing, or preventing, the emotional components of a stressor. This mechanism can be applied through a variety of ways, such as:
seeking social support
reappraising the stressor in a positive light
accepting responsibility
using avoidance
exercising self-control
distancing
The focus of this coping mechanism is to change the meaning of the stressor or transfer attention away from it. For example, reappraising tries to find a more positive meaning of the cause of the stress in order to reduce the emotional component of the stressor. Avoidance of the emotional distress will distract from the negative feelings associated with the stressor. Emotion-focused coping is well suited for stressors that seem uncontrollable (ex. a terminal illness diagnosis, or the loss of a loved one). Some mechanisms of emotion focused coping, such as distancing or avoidance, can have alleviating outcomes for a short period of time, however they can be detrimental when used over an extended period. Positive emotion-focused mechanisms, such as seeking social support, and positive re-appraisal, are associated with beneficial outcomes. Emotional approach coping is one form of emotion-focused coping in which emotional expression and processing is used to adaptively manage a response to a stressor. Other examples include relaxation training through deep breathing, meditation, yoga, music and art therapy, and aromatherapy.
Health theory of coping
The health theory of coping overcame the limitations of previous theories of coping, describing coping strategies within categories that are conceptually clear, mutually exclusive, comprehensive, functionally homogenous, functionally distinct, generative and flexible, explains the continuum of coping strategies. The usefulness of all coping strategies to reduce acute distress is acknowledged, however, strategies are categorized as healthy or unhealthy depending on their likelihood of additional adverse consequences. Healthy categories are self-soothing, relaxation/distraction, social support and professional support. Unhealthy coping categories are negative self-talk, harmful activities (e.g., emotional eating, verbal or physical aggression, drugs such as alcohol, self-harm), social withdrawal, and suicidality. Unhealthy coping strategies are used when healthy coping strategies are overwhelmed, not in the absence of healthy coping strategies.
Research has shown that everyone has personal healthy coping strategies (self-soothing, relaxation/distraction); however, access to social and professional support varies. Increasing distress and inadequate support result in the additional use of unhealthy coping strategies. Overwhelming distress exceeds the capacity of healthy coping strategies and results in the use of unhealthy coping strategies. Overwhelming distress is caused by problems in one or more biopsychosocial domains of health and wellbeing. The continuum of coping strategies (healthy to unhealthy, independent to social, and low harm to high harm) has been explored in general populations, university students, and paramedics. New evidence proposes a more comprehensive view of coping as a continuous, iterative, and transformative process of developing coping competence among palliative care professionals.
Reactive and proactive coping
Most coping is reactive in that the coping response follows stressors. Anticipating and reacting to a future stressor is known as proactive coping or future-oriented coping. Anticipation is when one reduces the stress of some difficult challenge by anticipating what it will be like and preparing for how one is going to cope with it.
Social coping
Social coping recognises that individuals are embedded within a social environment, which can be stressful but is also a source of coping resources, such as seeking social support from others (see help-seeking).
Humor
Humor used as a positive coping method can benefit emotional and mental well-being. However, maladaptive humor styles such as self-defeating humor can also have negative effects on psychological adjustment and might exacerbate the negative effects of other stressors. A humorous outlook on life can, and often does, minimize stressful experiences. This coping method corresponds with positive emotional states and is known to be an indicator of mental health. Physiological processes are also influenced by the exercise of humor. For example, laughing may reduce muscle tension, increase the flow of oxygen to the blood, exercise the cardiovascular region, and produce endorphins in the body.
Using humor in coping while processing feelings can vary depending on life circumstances and individual humor styles. With regard to grief and loss, genuine laughs and smiles when speaking about the loss have been found to predict later adjustment and to evoke more positive responses from other people. A person might also find comedic relief with others around irrational possible outcomes for the deceased's funeral service. It is also possible that humor is used by people to feel a sense of control over a more powerless situation, and as a way to temporarily escape a feeling of helplessness. Exercised humor can be a sign of positive adjustment as well as a means of drawing support and interaction from others around the loss.
Negative techniques (maladaptive coping or non-coping)
Whereas adaptive coping strategies improve functioning, a maladaptive coping technique (also termed non-coping) will just reduce symptoms while maintaining or strengthening the stressor. Maladaptive techniques are only effective as a short-term rather than long-term coping process.
Examples of maladaptive behavior strategies include anxious avoidance, dissociation, escape (including self-medication), use of maladaptive humor styles such as self-defeating humor, procrastination, rationalization, safety behaviors, and sensitization. These coping strategies interfere with the person's ability to unlearn, or break apart, the paired association between the situation and the associated anxiety symptoms. These are maladaptive strategies as they serve to maintain the disorder.
Anxious avoidance is when a person avoids anxiety-provoking situations by all means. This is the most common method.
Dissociation is the ability of the mind to separate and compartmentalize thoughts, memories, and emotions. This is often associated with post-traumatic stress disorder.
Escape is closely related to avoidance. This technique is often demonstrated by people who experience panic attacks or have phobias. These people want to flee the situation at the first sign of anxiety.
The use of self-defeating humor means that a person disparages themselves in order to entertain others. This type of humor has been shown to lead to negative psychological adjustment and exacerbate the effect of existing stressors.
Procrastination is when a person willingly delays a task in order to receive a temporary relief from stress. While this may work for short-term relief, when used as a coping mechanism, procrastination causes more issues in the long run.
Rationalization is the practice of attempting to use reasoning to minimize the severity of an incident, or avoid approaching it in ways that could cause psychological trauma or stress. It most commonly manifests in the form of making excuses for the behavior of the person engaging in the rationalization, or others involved in the situation the person is attempting to rationalize.
Sensitization is when a person seeks to learn about, rehearse, and/or anticipate fearful events in a protective effort to prevent these events from occurring in the first place.
Safety behaviors are demonstrated when individuals with anxiety disorders come to rely on something, or someone, as a means of coping with their excessive anxiety.
Overthinking
Emotion suppression
Emotion-driven behavior
Further examples
Further examples of coping strategies include emotional or instrumental support, self-distraction, denial, substance use, self-blame, behavioral disengagement and the use of drugs or alcohol.
Many people think that meditation "not only calms our emotions, but...makes us feel more 'together'", as too can "the kind of prayer in which you're trying to achieve an inner quietness and peace".
Low-effort syndrome or low-effort coping refers to the coping response of a person who refuses to work hard. For example, a student at school may learn to put in only minimal effort, believing that greater effort could reveal their flaws.
Historical psychoanalytic theories
Otto Fenichel
Otto Fenichel summarized early psychoanalytic studies of coping mechanisms in children as "a gradual substitution of actions for mere discharge reactions...[&] the development of the function of judgement" – noting however that "behind all active types of mastery of external and internal tasks, a readiness remains to fall back on passive-receptive types of mastery."
In adult cases of "acute and more or less 'traumatic' upsetting events in the life of normal persons", Fenichel stressed that in coping, "in carrying out a 'work of learning' or 'work of adjustment', [s]he must acknowledge the new and less comfortable reality and fight tendencies towards regression, towards the misinterpretation of reality", though such rational strategies "may be mixed with relative allowances for rest and for small regressions and compensatory wish fulfillment, which are recuperative in effect".
Karen Horney
In the 1940s, the German Freudian psychoanalyst Karen Horney "developed her mature theory in which individuals cope with the anxiety produced by feeling unsafe, unloved, and undervalued by disowning their spontaneous feelings and developing elaborate strategies of defence." Horney identified four so-called coping strategies that define interpersonal relations, one describing psychologically healthy individuals and the others describing neurotic states.
The healthy strategy she termed "Moving with" is that with which psychologically healthy people develop relationships. It involves compromise. In order to move with, there must be communication, agreement, disagreement, compromise, and decisions. The three other strategies she described – "Moving toward", "Moving against" and "Moving away" – represented neurotic, unhealthy strategies people utilize in order to protect themselves.
Horney investigated these patterns of neurotic needs (compulsive attachments). The neurotics might feel these attachments more strongly because of difficulties within their lives. If the neurotic does not experience these needs, they will experience anxiety. The ten needs are:
Affection and approval, the need to please others and be liked.
A partner who will take over one's life, based on the idea that love will solve all of one's problems.
Restriction of one's life to narrow borders, to be undemanding, satisfied with little, inconspicuous; to simplify one's life.
Power, for control over others, for a facade of omnipotence, caused by a desperate desire for strength and dominance.
Exploitation of others; to get the better of them.
Social recognition or prestige, caused by an abnormal concern for appearances and popularity.
Personal admiration.
Personal achievement.
Self-sufficiency and independence.
Perfection and unassailability, a desire to be perfect and a fear of being flawed.
In Compliance, also known as "Moving toward" or the "Self-effacing solution", the individual moves towards those perceived as a threat to avoid retribution and getting hurt, "making any sacrifice, no matter how detrimental." The argument is, "If I give in, I won't get hurt." This means that: if I give everyone I see as a potential threat whatever they want, I will not be injured (physically or emotionally). This strategy includes neurotic needs one, two, and three.
In Withdrawal, also known as "Moving away" or the "Resigning solution", individuals distance themselves from anyone perceived as a threat to avoid getting hurt – "the 'mouse-hole' attitude ... the security of unobtrusiveness." The argument is, "If I do not let anyone close to me, I won't get hurt." A neurotic, according to Horney, desires to be distant because of being abused. If they can be the extreme introvert, no one will ever develop a relationship with them. If there is no one around, nobody can hurt them. These "moving away" people fight personality, so they often come across as cold or shallow. This is their strategy: they emotionally remove themselves from society. Included in this strategy are neurotic needs three, nine, and ten.
In Aggression, also known as "Moving against" or the "Expansive solution", the individual threatens those perceived as a threat to avoid getting hurt. Children might react to parental indifference by displaying anger or hostility. This strategy includes neurotic needs four, five, six, seven, and eight.
Related to the work of Karen Horney, public administration scholars developed a classification of coping by frontline workers when working with clients (see also the work of Michael Lipsky on street-level bureaucracy). This coping classification is focused on the behavior workers can display towards clients when confronted with stress. They show that during public service delivery there are three main families of coping:
Moving towards clients: Coping by helping clients in stressful situations. An example is a teacher working overtime to help students.
Moving away from clients: Coping by avoiding meaningful interactions with clients in stressful situations. An example is a public servant stating "the office is very busy today, please return tomorrow."
Moving against clients: Coping by confronting clients. For instance, teachers can cope with stress when working with students by imposing very rigid rules, such as no cellphone use in class and sending everyone to the office when they use a cellphone. Furthermore, aggression towards clients is also included here.
In their systematic review of 35 years of the literature, the scholars found that the most often used family is moving towards clients (43% of all coping fragments). Moving away from clients was found in 38% of all coping fragments and Moving against clients in 19%.
Heinz Hartmann
In 1937, the psychoanalyst (as well as physician, psychologist, and psychiatrist) Heinz Hartmann marked the evolution of ego psychology by publishing his paper, "Me" (later translated into English in 1958 as "The Ego and the Problem of Adaptation"). Hartmann focused on the adaptive progression of the ego "through the mastery of new demands and tasks". According to his adaptive point of view, infants are born with the ability to cope with the demands of their surroundings. In his wake, ego psychology further stressed "the development of the personality and of 'ego-strengths'...adaptation to social realities".
Object relations
Emotional intelligence has stressed the importance of "the capacity to soothe oneself, to shake off rampant anxiety, gloom, or irritability....People who are poor in this ability are constantly battling feelings of distress, while those who excel in it can bounce back far more quickly from life's setbacks and upsets". From this perspective, "the art of soothing ourselves is a fundamental life skill; some psychoanalytic thinkers, such as John Bowlby and D. W. Winnicott see this as the most essential of all psychic tools."
Object relations theory has examined the childhood development both of "independent coping...capacity for self-soothing", and of "aided coping. Emotion-focused coping in infancy is often accomplished through the assistance of an adult."
Gender differences
Gender differences in coping strategies are the ways in which men and women differ in managing psychological stress. There is evidence that males often develop stress due to their careers, whereas females often encounter stress due to issues in interpersonal relationships. Early studies indicated that "there were gender differences in the sources of stressors, but gender differences in coping were relatively small after controlling for the source of stressors"; and more recent work has similarly revealed "small differences between women's and men's coping strategies when studying individuals in similar situations."
In general, such differences as exist indicate that women tend to employ emotion-focused coping and the "tend-and-befriend" response to stress, whereas men tend to use problem-focused coping and the "fight-or-flight" response, perhaps because societal standards encourage men to be more individualistic, while women are often expected to be interpersonal. An alternative explanation for these differences involves genetic factors. The degree to which genetic factors and social conditioning influence behavior is the subject of ongoing debate.
Physiological basis
Hormones also play a part in stress management. Cortisol, a stress hormone, was found to be elevated in males during stressful situations. In females, however, cortisol levels were decreased in stressful situations, and instead an increase in limbic activity was discovered. Many researchers believe that these results underlie the reasons why men mount a fight-or-flight reaction to stress, whereas females have a tend-and-befriend reaction. The "fight-or-flight" response activates the sympathetic nervous system in the form of heightened focus and the release of adrenaline (epinephrine). Conversely, the "tend-and-befriend" reaction refers to the tendency of women to protect their offspring and relatives. Although these two reactions support a genetic basis for differences in behavior, one should not assume that, in general, females cannot implement "fight-or-flight" behavior or that males cannot implement "tend-and-befriend" behavior. Additionally, this study implied differing health impacts for each gender as a result of the contrasting stress processes.
See also
References
Sources
Further reading
Susan Folkman and Richard S. Lazarus, "Coping and Emotion", in Nancy Stein et al. eds., Psychological and Biological Approaches to Emotion (1990)
Arantzamendi M, Sapeta P, Belar A, Centeno C. How palliative care professionals develop coping competence through their career: A grounded theory. Palliat Med. 2024 Feb 21:2692163241229961. doi: 10.1177/02692163241229961.
External links
Coping Skills for Trauma
Coping Strategies for Children and Teenagers Living with Domestic Violence
Interpersonal conflict
Personal life
Psychological stress
Human behavior
Life skills | Coping | Biology | 4,701 |
4,302,669 | https://en.wikipedia.org/wiki/Agga%C3%B1%C3%B1a%20Sutta | Aggañña Sutta is the 27th sutta of the Digha Nikaya collection (Pāli version). The sutta describes a discourse imparted by the Buddha to two brahmins, Bharadvaja and Vasettha, who left their family and varna to become monks. The two brahmins are insulted and maligned by their own caste for their intention to become members of the Sangha. The Buddha explains that varna (class) and lineage cannot compare with the achievement of moral practice and the Dhamma, as anyone from the four varnas can become a monk and reach the state of Arahant. He then explains the beginning and destruction of the Earth, a process determined by karma and devoid of a supreme being. The Buddha then explains the birth of the social order and its structure, including the varnas. He emphasizes the message of universality in the Dhamma and how the Dhamma is the best of all things.
The Beginning
The Sutta begins when the Buddha is staying in Savatthi, in the temple donated by Visakha, the mother of Migara. At that time, two brahmins, Bharadvaja and Vasettha, are training with the monks (bhikkhu) and aim to be a member of the Sangha. As usual in the evening, the Buddha rises from his meditation and strolls in the open yard near his dwelling. Vasettha sees his Teacher strolling, tells his friend, Bharadvaja, and suggests that they meet the Buddha to see if they can hear a Dhamma exposition from the Buddha.
They both approach the Buddha and after some formal proprieties, the Buddha asks the two if they received insults and denigration when they left their class and layman's life in order to join the order.
Vasettha and Bharadvaja answer that they did receive a 'flood of insults'. They say that the other Brahmans maintain that the Brahman class is the best, as the Brahmins are of high social status and authority, pure-bred, have radiant complexions, and are born from the mouth of the God Brahma, unlike the other lower castes. So, by the opinion of the other Brahmins, how can Vasettha and Bharadvaja leave this good class and status, thus joining together with fraudulent ascetics with shaven heads from other classes, lower in status as they are born from the feet of Brahma?
To this remark, the Buddha tells them that the Brahmans have indeed forgotten about their past if they say such things. The fact is that the women of the Brahman class can get pregnant, give birth, and take care of their children, yet the Brahmans still say that they are born from the mouth of the God Brahma while others are born from Brahma's feet. Thus the Brahmans' words are untrue. The Buddha says that the Brahmans are not speaking truthfully and will reap a bad result from their own deeds.
The Buddha then elaborates that if anyone from any class does the following deeds: killing, taking what is not given, engaging in sexual misconduct, lying, slandering, speaking rough words or nonsense, being greedy or cruel, and practising wrong beliefs (miccha ditthi), people would still see that they do negative deeds and therefore are not worthy of respect. They would even get into trouble from their own deeds, whatever their class (Khattiya, Brahman, Vessa, Sudda) might be.
Meanwhile, those who refrain from killing, taking what is not given, engaging in sexual misconduct, lying, slandering, speaking rough words or nonsense, greed, cruelty, and wrong beliefs (miccha ditthi) will be seen by people as positive and will earn respect from the people and the wise ones. They would profit from their deeds, no matter what their caste might be.
Logically, since all four classes can do either negative (demerit) or positive (merit) deeds, the wise will reject the statement that only the Brahmans are the best class. Why? Because anyone from the four classes who leaves worldly affairs, becomes a monk, and through discipline and striving becomes an Arahant, one who has conquered the mind's stains, done what must be done, laid down the burden, broken the bondage of birth, and achieved freedom through knowledge, is the best among others based on Truth (Dhamma).
The Buddha says, "Dhamma is the best thing for people
In this life and the next as well."
Further, the Buddha demonstrates that Dhamma is indeed the best of all things in life. He takes the example of King Pasenadi of the Kosala Kingdom, who has now conquered the Sakyans. The Sakyans revere, praise, and serve him with respect.
But towards the Buddha, who came from the Sakyan people, King Pasenadi reveres, praises, and serves the Buddha with utmost respect. Even the monarch thinks like this: "The Samaṇa Gotama had perfect birth, while I am not perfect. The Samaṇa Gotama is mighty, while I am weak. The Samaṇa Gotama inspires awe and respect, while I do not. The Samaṇa Gotama is vastly influential and charming, while I possess only small influence." As even the King respects Dhamma, reveres Dhamma, and obeys Dhamma, he therefore bows to and praises the Tathagata.
The Buddha then advises Vasettha that whoever has strong, deep-rooted, and established belief in the Tathagata can declare that he is the child of Bhagavan, born from the mouth of Dhamma, created from Dhamma, and the heir of Dhamma. Therefore, the titles of the Tathagata are the Body of Dhamma, the Body of Brahma, the Manifestation of Dhamma, and the Manifestation of Brahma.
The Beginning of Life on Earth
In the second part of the Sutta, the Buddha tells the story of how human beings came to dwell on Earth.
The Buddha said that sooner or later, after a very long time, there would come a time when the world shrinks. At a time of contraction, beings are mostly born in the Abhassara Brahma world. And there they dwell, mind-made, feeding on delight, self-luminous, moving through the air, glorious — and they stay like that for a very long time. But sooner or later, after a very long period, this world begins to expand again. At a time of expansion, the beings from the Abhassara Brahma world, having died from there, are mostly reborn in this world. Here they dwell, mind-made, feeding on delight, self-luminous, moving through the air, glorious — and they stay like that for a very long time.
They floated above and around the Earth. At this time the Moon and the Sun were not yet seen, there were not yet Night and Day, nor names and identity, nor female and male. The creatures were known simply as creatures.
At that period, Vasettha, there was just one mass of water, and all was darkness, blinding darkness.... And sooner or later, after a very long period of time, savory earth spread itself over the waters where those beings were. It looked just like the skin that forms itself over hot milk as it cools. It was endowed with color, smell, and taste. It was the color of fine ghee or heated butter and it was very sweet, like pure wild honey (1)
Some of the creatures of light (the Abbhasaras) who had a curious and greedy nature began to dive down and taste the savory earth's substance. At that moment, they found that it tasted delicious. Thus greed began to seep in, and they ate the substance voraciously, calling their comrades (who were flying above and on the earth) to join in the feast. Not long afterwards, the creatures began to eat greedily, and due to the huge amount of the mud-like substance, they could feed on it for a very long time.
As they ate and ate, their luminous bodies began to be coated by the mud substance, forming a coarser body, and suddenly the sun and moon were seen, as were the stars, and Night and Day began on Earth. The explanation given is that the creatures had been self-luminous, so blinding and bright that they did not notice the Sun; the Earth had been covered in their light. So, when the materialization took place, the light faded inside their newly conceived 'body' of mud, and thus night and day became apparent to them. Then, as night and day became apparent, seasons and years also appeared.
Their bodies were still coarse and roughly shaped. Thus, after a very long time, the mud-like substance began to be exhausted. Then mushroom-like plants began to grow so fast that they replaced the mud-like ocean. The creatures began to devour them as well, and found them delicious, like sweet honey and milk. Their bodies hardened further and their features grew finer.
After another very long time, the mushrooms also began to be exhausted, replaced by cassava- or turnip-like plants. The creatures began to devour them as well, night and day, and thus began to notice differences amongst themselves. As the changes in their bodies varied from one to another, the concept of difference arose. The concepts of the beautiful and the ugly were born: the beautiful scorned the ugly, and they became arrogant because of their appearance.
Then, after the turnips, the earth grew rice plants. The first rice plants were without husk and kernels. The sweet, honey-like rice yielded seed abundantly. The people consumed it for a very long time. But some people became greedy and lazy. They took more rice than they needed for one day's meals, gathering two, four, eight, and sixteen days' worth of rice reserves, as they were too lazy to collect rice every day. Owing to this, many other creatures began to store and hoard the rice. The generation time for rice plants became slower and slower. Usually it took only one night for the plant to grow and be ready to be consumed, but through karmic power the plants began to grow more and more slowly. The rice also now grew with kernels and husks, scattered, so that the people had to work, tend, maintain, harvest, and cook it in order to obtain white rice.
By this time, the bodies of the creatures had become finely evolved. There was already a distinction between male and female. Men became preoccupied with women and vice versa. Then, as they were deeply attracted to one another, passion and desire were aroused, and they engaged in sexual relationships. People who saw a couple engaged in sexual activity scolded them, and usually the couple were forbidden from entering the village for a certain period of time. Owing to this, the indulgent couples built closed dwellings where they indulged in sexual activity.
The Birth of Social Order
In the third part, the Buddha explained the origin of the vannas (classes), their titles, and their order in the social system, which was still rigidly in force in the Buddha's time.
The Khattiya Class (Rulers)
The rice plants, as mentioned earlier, began to grow in separate plots, and people began to divide the land and tend their own clusters of rice fields. They became preoccupied with tending their own fields. Then, as evil and greed were aroused, some people began stealing others' crops. At first, the others only warned the culprit, and the culprit promised never to repeat it. But when it was repeated several times, the people began punishing him with fists, stones, and then sticks. That is the origin of the forms of punishment.
Then people began to think that they were too busy to heed every crime and abuse that happened in their society. They grieved over the rise of evil amongst their people, but most of their time was already invested in tending their fields. So they decided to appoint someone to judge what was right and what was wrong, give warnings to those who needed them, and punish those who deserved it; in return, they would give him a share of their rice.
So they went to the fairest, ablest, most likeable, and most intelligent person and appointed him to do the judging and pass sentences in return for a share of rice. The appointed person agreed, and the people bestowed upon him the title 'Maha Sammata', meaning 'The People's Choice'. Then they bestowed a second title, 'Khattiya', meaning 'Lord of the Rice Field', and finally a third title, 'Raja', which means 'He who gladdens people with Dhamma (or Truth)'.
This order was created by the people's own wish and need, based on the Dhamma and not on anything else. The Buddha stated again that Dhamma is indeed the best of all things.
The Brahman Class
Then, amongst the people, some began to think: "Evil deeds have arisen amongst us, such as theft, lies, murder, sexual abuse, punishment, and banishment. Let us now set aside evil, useless, and impolite things." From this came the word 'Brahman', meaning "they who put aside evil and unwholesome things" (1).
They set up retreats and huts in the forests and meditated there. They came to the city in the morning and evening only to gather food, and after gathering food they returned to their huts to meditate. People noticed this, and those who meditated were called 'Jhayanti' or 'Jhayaka'.
There were other people who could not meditate or dwell in huts in the forest, so they settled in the cities and, instead of meditating, compiled books. People called them 'Ajjhayaka', which meant 'they who do not meditate'. At first the Ajjhayaka were viewed as lower than the Jhayaka, but by the Buddha's time the Ajjhayaka had come to be viewed as higher in status than the Jhayakas.
The Vessa (Traders) and the Sudda (Hunters)
Among the people who had settled and had family, some began to adopt various trades.
The remainder of these people preferred the work of hunting. The Sudda caste took its name from a word meaning 'they are base who live by the chase' [1].
All of the vannas, from Khattiya and Brahman to Vessa and Sudda, originated from these same people and not from others, in accordance with the Dhamma and not with anything else.
The Ascetics
But from the four vannas, there were people who were not satisfied with their way of living, who left their homes and became celibate ascetics. This was the origin of the fifth class, formed from people of all four classes who left lay life to become ascetics.
Buddha's Conclusion
The Buddha then concluded his discourse to Vasettha and Bharadvaja:
(Because of the governance of Dhamma, which became the root of all classes and peoples,) anyone from any class who did demerit and wrongdoing, and lived a bad life in speech, thought, view, and deed, would end up after death in the realm of suffering, hell, loss, and torture.
But anyone from any class who did merit and good deeds, lived a good life in speech, thought, and deed, and held right view, would end up after death in the realm of happiness and heaven.
Anyone from any class who did both merit and demerit, lived a life of both good and bad speech, thought, and deed, and held both right and wrong views, could end up after death in either the realm of suffering or the realm of joy.
Anyone from any class who lived a life of disciplined deed, speech, and thought, and who had trained and developed himself in the seven factors of Enlightenment, would attain the eradication of the stains of the mind in this very life.
Anyone from the four classes who became a bhikkhu (monk) and an arahant, who had eradicated the stains of the mind, done what must be done, laid down the burden, attained freedom, broken the bondage of birth, and been freed through knowledge, would be declared the best of them all, in accordance with Truth (Dhamma) and not on the basis of non-Truth (adhamma).
The Buddha quoted, "Dharma is the best thing for people
In this life and the next as well."
The Buddha quoted the verses of Brahma Sanankumara:
"The Khattiya is the best among those who maintain their lineage; He with knowledge and conduct is best of gods and men."
Then the Buddha asserted that the verse was indeed well said, in accordance with the Dhamma, and profitable, and he repeated it himself:
"The Khattiya's best among those who value clan; He with knowledge and conduct is best of gods and men."
Thus the discourse ended, with Vasettha and Bharadvaja rejoicing in hearing the words of the Buddha.
Digging Deeper Into the Sutta
While the story of the world's beginning is considered a myth, Buddhist doctrine requires a constantly sceptical approach, in which one must see and verify before believing (ehipassiko). Nevertheless, the Buddha's insight in two major fields, cosmology and the origin of social structure, was revolutionary in his era.
On the scientific side, the Buddha implied a theory of the evolution of the universe, in which it shrinks and then expands in repeated cycles.
On the social side, the Buddha's words implied the equality of origin of the human race, whether by sex, appearance, or other categories later founded on physiological differences. The Buddha also emphasized that the social structure was formed voluntarily, based on righteousness and necessity, and not on divine command as some theories have stated.
The monarchy, too, was formed voluntarily, with the people electing the most righteous and capable person, which implies a concept of democracy. The monarch accepts a 'share of rice' as his reward for rectifying the social order, an origin of voluntary reward that later evolved into the concept of taxation. The Buddha states, however, that the monarch is regarded as worthy not because of a divine right but because of his righteousness in deeds.
The Buddha's message was clear, however: the best thing in the world is Truth (Dhamma), and everything is created, measured, and valued on the basis of Truth and not on anything else.
According to Richard Gombrich, the sutta gives strong evidence that it was conceived entirely as a satire of Hindu claims regarding the divine nature of the varnashrama, showing that it is nothing but a human convention. According to Gombrich, the Buddha satirizes the Vedic "Hymn of the Cosmic Man" and etymologizes "reciter of the Veda" so as to make it mean "non-meditator" instead. Not all scholars agree with Gombrich's interpretation, however.
Among those who disagree is Suwanda H J Sugunasiri, a Canadian Buddhist scholar, who has recently presented a novel interpretation of the Sutta. Rejecting the view that the Sutta is a 'satire' (Gombrich) or 'good humoured irony' (Collins), he shows how "the Discourse is a historically and scientifically accurate characterization of the cyclical cosmic process" [2]. He compares the stages of cosmic, vegetation, human, and linguistic evolution as indicated by the Buddha with those in western theory, beginning with the Big Bang 13.5 billion years ago and ending 150,000 years ago, when 'anatomically modern humans' appear. The Big Bang, in this interpretation, marks not the beginning of the evolutionary phase but the ending of the earlier devolutionary phase, when there appear seven suns (as in a different Sutta), symbolic of intense heat. A critical point in Sugunasiri's reconstruction of the Buddha's universe is the novel take on the Abhassaras as photons, translating the term Abhassara literally as 'hither-come-shining arrow' (ā + bhas + sara). In an expanded study, Sugunasiri points to two other Suttas (Brahmajala and Patika) in which the Buddha presents dimensions of the cosmic process. He also shows how the Buddha cuts through the Vedic myth of creation referred to by Gombrich.
Notes
Further reading
Collins, Steven, "The Discourse on What is Primary (Aggañña Sutta): An Annotated Translation", Journal of Indian Philosophy 21, 301–393.
Suwanda H J Sugunasiri, PhD, 2014, Dhamma Aboard Evolution: A Canonical Study of Aggañña Sutta in relation to Science, Toronto: Nalanda Publishing Canada.
External links
Pali Text
The Aggañña Sutta in original Pali SuttaCentral
Translations
The Origin of the World, translation by Bhikkhu Sujato
On Knowledge of Beginnings, translation by unknown translator
Essays
Religions and Human Rights: Buddhism vs Brahminism an excerpt of a monograph by Nalin Swaris.
Digha Nikaya
Creation myths
Buddhism and evolution | Aggañña Sutta | Astronomy | 4,599 |
1,267,413 | https://en.wikipedia.org/wiki/John%20A.%20Davis | John Alexander Davis (born October 26, 1961) is an American film director, writer, animator, voice actor and composer known for his work both in stop-motion animation as well as computer animation, live action and live-action/CGI hybrids. Davis is best known for creating Nickelodeon's Jimmy Neutron franchise, which enjoyed popularity in the early to mid 2000s.
Early life
Davis began animating as a child, using his parents' 8 mm camera to film action figures in stop motion. His interest in animation began when he watched a stop-motion film called Icharus at a film festival. He worked on the stop-motion film The Bermuda Triangle in 1981 while attending Southern Methodist University, from which he graduated in 1984.
Career
Soon after his graduation, Davis joined the animation company K&H Productions, working with 2-D animator Keith Alcorn. With Alcorn's help, Davis soon made the transition from claymation to 2-D animation. K&H did production work for commercials, public-access cable television animation, and film festivals. K&H Productions declared bankruptcy in early 1987; that same year DNA Productions was founded.
Davis came up with the idea for Jimmy Neutron: Boy Genius (originally named Johnny Quasar) sometime during the 1980s and wrote a script titled Runaway Rocketboy (later the name of the second pilot), which was subsequently abandoned. While moving to a new house in the early 1990s, he stumbled upon the script and re-worked it as a short film titled Johnny Quasar, presenting it at SIGGRAPH, where he met Steve Oedekerk, with whom he worked on the television series as well as the film.
In 2006, he directed the film The Ant Bully after being approached by Tom Hanks to direct it. Production on the film led Davis to step away from Jimmy Neutron in January 2003, handing over his position as executive in charge of production to Steve Oedekerk. He also directed the film's video game.
Davis was set to direct a feature film based on Neopets with Warner Bros., together with producer Dylan Sellers and writer Rob Lieber. It was originally set to release on April 20, 2009, but was pushed to 2011 and later to the winter of 2012, before finally being cancelled, with no other projects announced.
Nominations
In 2000, Davis was up for an Emmy, along with eight others, in the category Outstanding Animated Program (For Programming More Than One Hour) for Olive, the Other Reindeer, but lost to Discovery Channel's Walking with Dinosaurs.
In 2002, Davis was nominated for an Academy Award along with Steve Oedekerk in the category of Best Animated Feature for Jimmy Neutron: Boy Genius.
Filmography
Internet
Astrophotography
Since about 2007, Davis has become a recognized astrophotographer, publishing high-resolution, generally wide-field images in astronomy magazines and in NASA's Astronomy Picture of the Day.
In 2009, Davis was a principal founder of APSIG, the Astrophotography Special Interest Group, associated with the Texas Astronomical Society of Dallas, and he continues to lead it.
See also
:Category:Films directed by John A. Davis
References
External links
Bermuda Triangle pharosproductions.com (Archived Page)
DNA Productions dnahelix.com
1961 births
Living people
Animators from Texas
Film producers from Texas
American television directors
Television producers from Texas
American television writers
American male voice actors
American male screenwriters
American animated film directors
DNA Productions
Showrunners of animated series
Southern Methodist University alumni
Place of birth missing (living people)
Astrophotographers
American male television writers
Male actors from Dallas
Film directors from Texas
Screenwriters from Texas
Nickelodeon Animation Studio people
Nickelodeon people | John A. Davis | Astronomy | 738 |
29,118,936 | https://en.wikipedia.org/wiki/Variable%20pathlength%20cell | A variable pathlength cell is a sample holder used for ultraviolet–visible spectroscopy or infrared spectroscopy that has a path length that can be varied to change the absorbance without changing the sample concentration.
Equations
The Beer–Lambert law states that there is a logarithmic dependence between the transmission (or transmissivity), T, of light through a substance and the product of the absorption coefficient of the substance, α, and the distance the light travels through the material (i.e. the path length), ℓ. The absorption coefficient can, in turn, be written as a product of either a molar absorptivity of the absorber, ε, and the concentration c of absorbing species in the material, or an absorption cross section, σ, and the (number) density N of absorbers (see the Beer–Lambert law article for the full derivation).
Spectroscopy with a variable pathlength cell takes advantage of the Beer–Lambert law to determine the concentrations of various solutions. By knowing the molar absorptivity of the material and varying the path length, absorbance can be plotted as a function of path length, yielding a straight line.
By taking a linear regression of this linear plot, an expression relating absorbance A, slope m, pathlength ℓ, and concentration c can be derived.
A linear equation of two variables can be written:
A = mℓ
Equating the two sides in terms of units gives:
Abs = (Abs/pathlength) × pathlength
Since the slope of the line is in units of Abs/pathlength, the slope can be expressed as:
m = A/ℓ
Inserting this into Beer's law, A = εℓc, gives:
m = εc, or equivalently c = m/ε
This is the slope spectroscopy equation.
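The calculation can be sketched numerically. The following is a minimal illustration in Python, assuming hypothetical absorbance readings and an illustrative molar absorptivity (neither value comes from the source):

    import numpy as np

    # Hypothetical absorbance readings at several pathlengths (illustrative values only)
    pathlengths = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # cm
    absorbance = np.array([0.11, 0.21, 0.32, 0.42, 0.53])   # absorbance units

    # Fit A = m*l + b; the slope m is in units of Abs/cm
    m, b = np.polyfit(pathlengths, absorbance, 1)

    # Slope spectroscopy: m = epsilon * c, so c = m / epsilon
    epsilon = 2.1  # assumed molar absorptivity in L/(mol*cm), illustrative
    concentration = m / epsilon

    print(f"slope = {m:.3f} Abs/cm, concentration = {concentration:.3f} mol/L")

Because the concentration comes from the fitted slope rather than any single reading, small errors in individual absorbance measurements average out, which is the practical appeal of the method.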
Applications
Variable pathlength techniques can be applied in any situation where Beer's law applies. They provide an analytical method that averages out minor variations in sample preparation consistency, and a means to calculate concentrations without calibration curves or serial dilution of samples.
Variable pathlength absorption spectroscopy is typically used when highly reproducible data is a necessity. This can be in the fields of medicine, biotechnology, pharmacology, and drug discovery. It is particularly useful in the protein purification stage of biotechnology where accurate concentrations of various proteins are required or in crystallography.
Determining the relative ratio of protein to DNA is common practice and can be calculated by finding the slope at the corresponding absorption peaks and taking their ratio. This method is used to find the purity of a sample containing these two types of molecule.
Experimental methods
In ultraviolet–visible spectroscopy, and in spectroscopy in general, a 1 cm pathlength cuvette is used to measure samples. The cuvette is filled with sample, light is passed through the sample, and intensity readings are taken. The slope spectroscopy technique can be applied using the same methods as in absorption spectroscopy. With the advent of accurate linear stages, variable pathlength absorption spectroscopy is easily applied experimentally.
Other experimental methods include using ratios of slopes to build extinction coefficient spectra. This is possible because application of slope spectroscopy allows the scientist to keep concentration levels constant and vary path lengths.
Background subtraction
Variable pathlength absorption spectroscopy uses a determined slope to calculate concentration. As stated above, this slope is the product of the molar absorptivity and the concentration. Since the absorbance values are taken at many data points at equal intervals, background subtraction is generally unnecessary. A linear plot of both the background-corrected data and the raw data shows that the absorbance values are offset by an equal amount and that the slopes of the two plots are equal; thus, the concentration calculated from the two plots is equal. Other scalar components that contribute to the absorbance of a given sample, such as contaminants on the cuvette or a different cuvette material, are also averaged out during the slope measurement.
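As a quick sanity check on this claim, here is a tiny sketch with made-up numbers showing that a constant background offset changes only the fitted intercept, never the slope:

    import numpy as np

    pathlengths = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # cm
    raw = 2.0 * pathlengths + 0.08   # synthetic data with a constant 0.08 Abs background
    corrected = raw - 0.08           # background-subtracted data

    m_raw, b_raw = np.polyfit(pathlengths, raw, 1)
    m_cor, b_cor = np.polyfit(pathlengths, corrected, 1)

    # Both slopes are 2.0 Abs/cm: the offset shifts only the intercept
    print(m_raw, m_cor)  # ~2.0 ~2.0
    print(b_raw, b_cor)  # ~0.08 ~0.0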
The technique is also applicable to in-line measurements in tangential flow filtration (TFF) and chromatography applications.
See also
Applied spectroscopy
References
Further reading
Scott Huffman, Keyur Soni and Joe Ferraiolo, "UV-Vis Based Determination of Protein Concentration: Validating and Implementing Slope Measurements Using Variable Pathlength Technology", September 2014. http://www.bioprocessintl.com/manufacturing/antibody-non-antibody/uv-vis-based-determination-protein-concentration-validating-implementing-slope-measurements-using-variable-pathlength-technology/
Absorption spectroscopy | Variable pathlength cell | Physics,Chemistry | 902 |
15,906,222 | https://en.wikipedia.org/wiki/Cyril%20Callister | Cyril Percy Callister (16 February 1893 – 5 October 1949) was an Australian chemist and food technologist who developed the Vegemite yeast spread. As well as Vegemite, he is known for his contributions towards processed cheese.
Early life
Callister was born on 16 February 1893, in Chute, Victoria near Ballarat, son of Rosetta Anne (née Dixon) and William Hugh Callister, a teacher and postmaster. The second son of seven children, he attended the Ballarat School of Mines and Grenville College, and later won a scholarship to the University of Melbourne. He gained a Bachelor of Science degree in 1914 and a Master of Science degree in 1917.
In early 1915, Callister was employed by food manufacturer Lewis & Whitty, but later that year he enlisted in the Australian Imperial Force. After 53 days, however, he was withdrawn from active service on the order of the Minister for Defence and assigned to the Munitions Branch, making explosives in Britain due to his knowledge of chemistry. He worked on munitions in England, Wales, and then in Scotland, at HM Factory Gretna where he worked as a shift chemist. Whilst at Gretna he was elected as an Associate of the Institute of Chemistry in 1918.
Following the end of World War I, he met and married Katherine Hope Mundell, a Scotswoman, returned to Australia, and resumed employment with Lewis & Whitty in 1919.
The invention of Vegemite
In the early 1920s, Callister was employed by Fred Walker and given the task of developing a yeast extract, as imports from the United Kingdom of Marmite had been disrupted in the aftermath of World War I. He experimented on spent brewer's yeast and independently developed what came to be called Vegemite, first sold by Fred Walker & Co in 1923.
Working from the details of a James L. Kraft patent, Callister was successful in producing processed cheese. The Walker Company negotiated a deal for the rights to manufacture the product, and in 1926, the Kraft Walker Cheese Co. was established. Callister was appointed chief scientist and production superintendent of the new company.
Children
Between 1919 and 1927 the Callisters had three children: Ian, Bill and Jean, who were "the original Vegemite kids". During World War II, Ian died.
Later life
Callister received his doctorate from the University of Melbourne in 1931, with his submission largely based on his work in developing Vegemite.
He was a prominent member of the Royal Australian Chemical Institute, helping it obtain a Royal Charter in 1931.
Callister died at his home in Wellington Street, Kew, Melbourne in 1949, following a heart attack and is buried at Box Hill Cemetery. He had a history of heart attacks, with his first occurring in late 1939. His estate was valued for probate at £45,917.
Legacy
A biography of Callister, The Man Who Invented Vegemite, written by his grandson Jamie Callister, was published in 2012.
Callister is the great-uncle of Kent Callister, a professional snowboarder who has competed at the Winter Olympics for Australia.
The Cyril Callister Foundation, established in 2019, commemorates his life and work. It runs a museum in Beaufort, Victoria.
References
1893 births
1949 deaths
Australian chemists
20th-century Australian inventors
University of Melbourne alumni
Federation University Australia alumni
Burials at Box Hill Cemetery
Food chemists
People from Victoria (state)
20th-century Australian scientists | Cyril Callister | Chemistry | 696 |
6,317,367 | https://en.wikipedia.org/wiki/214%20%28number%29 | 214 (two hundred [and] fourteen) is the natural number following 213 and preceding 215.
In mathematics
214 is a composite number (with prime factorization 2 × 107) and a triacontakaiheptagonal number (37-gonal number).
214!! − 1 is a 205-digit prime number.
214 is also the number of regions into which a figure made up of a row of five adjacent congruent rectangles is divided when the diagonals of all possible rectangles are drawn.
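The double-factorial claim above can be checked directly; the Python sketch below computes 214!! − 1 and counts its digits (the primality check relies on sympy's isprime, a standard test whose result is being taken on trust here):

    from sympy import isprime

    # 214!! is the double factorial: 214 * 212 * 210 * ... * 2
    n = 1
    for k in range(214, 1, -2):
        n *= k

    candidate = n - 1
    print(len(str(candidate)))  # 205 -- the number has 205 digits
    print(isprime(candidate))   # True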
In other fields
SMTP status code for a reply message to a help command
References
Wells, D. (1987). The Penguin Dictionary of Curious and Interesting Numbers (p. 143). London: Penguin Group.
Integers | 214 (number) | Mathematics | 146 |
4,534,426 | https://en.wikipedia.org/wiki/Leaky%20integrator | In mathematics, a leaky integrator equation is a specific differential equation, used to describe a component or system that takes the integral of an input, but gradually leaks a small amount of input over time. It appears commonly in hydraulics, electronics, and neuroscience where it can represent either a single neuron or a local population of neurons.
Equation
The equation is of the form
dx(t)/dt = −A·x(t) + C
where C is the input and A is the rate of the 'leak'.
General solution
The equation is a nonhomogeneous first-order linear differential equation. For constant C its solution is
x(t) = k·e^(−At) + C/A
where k is a constant encoding the initial condition.
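A minimal forward-Euler simulation of the equation in Python, with illustrative values for A, C, and the initial condition (all assumed, not from the source), shows the state relaxing to the steady value C/A:

    import math

    A, C = 0.5, 1.0        # leak rate and constant input (illustrative values)
    x, dt, T = 0.0, 0.01, 10.0

    for _ in range(int(T / dt)):
        x += dt * (-A * x + C)   # dx/dt = -A*x + C, integrated by forward Euler

    # Closed-form solution: x(t) = k*exp(-A*t) + C/A with k = x0 - C/A
    exact = (0.0 - C / A) * math.exp(-A * T) + C / A
    print(x, exact)   # both approach the steady state C/A = 2.0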
References
Differential equations | Leaky integrator | Mathematics | 129 |
47,902,485 | https://en.wikipedia.org/wiki/New%20media%20studies | New media studies is an academic discipline that explores the intersections of computing, science, the humanities, and the visual and performing arts. Janet Murray, a prominent researcher in the discipline, describes this intersection as "a single new medium of representation, the digital medium, formed by the braided interplay of technical invention and cultural expression at the end of the 20th century". The main factor in defining new media is the role the Internet plays; new media spreads instantly and effortlessly. The category of new media is occupied by devices connected to the Internet, such as a smartphone or tablet. Television and cinema are commonly thought of as new media but are ruled out because they were invented before the Internet.
New media studies examines ideas and insights on media from communication theorists, programmers, educators, and technologists. Among others, the work of Marshall McLuhan is viewed as one of the cornerstones of the study of media theory. McLuhan’s slogan, "the medium is the message" (elaborated in his 1964 book, Understanding Media: The Extensions of Man), calls attention to the intrinsic effect of communications media.
A program in new media studies may incorporate lessons, classes, and topics within communication, journalism, computer science, programming, graphic design, web design, human-computer interaction, media theory, linguistics, information science, and other related fields.
New media studies is the academic discipline which examines how our relationship with media has changed with the onset of global connectivity and the popularity of digital and user-generated content. New media studies seeks to connect computer sciences and innovations in new media with social sciences and the philosophy of technology.
History
Major figures
Marshall McLuhan is known in the study of media theory for coining the phrase "the medium is the message", describing the medium's effect on communication. The medium affects how media is delivered by one person and received by another, while the characteristics of the medium affect the content of its delivery. McLuhan's work examined how media changed in the post-modern era, and it remains relevant to present-day uses of media and their mediums. An interface, in this context, is defined as the common boundary of two bodies, spaces, or phases. Other major works that develop his standpoint in the study of new media include:
The Mechanical Bride (1951) - a collection of short essays that analyze forms of media, such as advertising and newspapers, in relation to society.
The Gutenberg Galaxy (1962) - In this book, McLuhan writes about how technology like the printing press or electronic media changed how people share stories in their day-to-day lives.
Lev Manovich has written nine books on the topic of new media. He developed a number of now-standard concepts for the analysis of new media culture such as cultural interface, database, navigable space, metamedia, and others. Manovich's 2001 book The Language of New Media contains the analysis of the five general principles of new media, as they developed up until that time:
Numerical Representation: essentially means that "all new media objects can be described mathematically and can be manipulated via algorithms."
Modularity: are "elements can that be independently modified and reused in other works".
Automation: "Automation is seen in computer programs that allow users to create or modify media objects using templates or algorithms".
Variability: is "a new media object is not something fixed once and for all, but something that can exist in different, potentially infinite versions".
Transcoding: Designates the blend of computer and culture, of "traditional ways in which human culture modeled the world and the computer's own means of representing it".
Henry Jenkins introduced the convergence culture concept in the field of new media studies: "By convergence, I mean the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who would go almost anywhere in search of the kinds of entertainment experiences they wanted."
Janet Murray is a professor at the School of Literature, Media, and Communications at the Georgia Institute of Technology. Murray has also been part of several design projects, such as a digital edition of the Warner Brothers classic Casablanca, and works as a member of Georgia Tech's experimental game lab. She is the author of the book Hamlet on the Holodeck: The Future of Narrative in Cyberspace and a contributor to The New Media Reader, where she is credited with writing one of the book's two introductions, "Inventing the Medium".
Terminology
Strategies vs Tactics - The term strategies refers to the ways in which a producer intends their creation to be used. The term tactics refers to the ways individuals actually utilize a creation, regardless of the creator's intentions, in order to make it more useful to them.
Hypermediated - Hypermediation is a concept in new media studies that refers to a form of mediation in which media are connected in a very close way. Mediation, in this sense, deals with using indirect sources to create a direct connection between different types of media. This process is often done through hypertext and the networked approach to new media. An example of hypermediation would be an online shopper who buys a particular item, then in turn gets links to related items, then articles related to those items, and so on.
Web 2.0 - Web 2.0 is the concept of websites geared toward content created by users. Web 2.0 allows the web to be used as a platform and enables users to control their own data. Tim O'Reilly and Dale Dougherty coined the phrase Web 2.0 at a media conference in 2004. Some examples of Web 2.0 are Google AdSense, Flickr, BitTorrent, Napster, and Wikipedia.
Networking is a term that describes the transformation of old media into new media through communication that enables users to produce or share material of their own on the Internet. It is a medium consisting of blogs, email, and social media networks, which allows global connections through which many people can share ideas.
Commodification of Experience - The packaging of human experience in some form to be sold back to the consumer. This is a common term in the study of the role new media plays and an example of the top-down ideal. Common examples include cable subscriptions, vacation resort packages, memberships and, more abstractly, one's presence on social media.
Simulation is a term that can be defined as a virtual representation of reality. For example, according to New Media and Visual Culture, things can seem real virtually, based on experience, but they are not real because they have not actually happened. The French theorist Jean Baudrillard believed that simulation was the modern stage of the simulacrum.
Virtual - Lev Manovich describes a virtual world as an interactive world created by a computer that many people can access at one time. A virtual world is an interactive and digital space. Virtual worlds are often created as a simulation of something which already exists in the physical world.
References
Media studies
Behavioural sciences | New media studies | Biology | 1,442 |
74,331 | https://en.wikipedia.org/wiki/Andromeda%20Galaxy | The Andromeda Galaxy is a barred spiral galaxy and is the nearest major galaxy to the Milky Way. It was originally named the Andromeda Nebula and is cataloged as Messier 31, M31, and NGC 224. Andromeda has a D25 isophotal diameter of about and is approximately from Earth. The galaxy's name stems from the area of Earth's sky in which it appears, the constellation of Andromeda, which itself is named after the princess who was the wife of Perseus in Greek mythology.
The virial mass of the Andromeda Galaxy is of the same order of magnitude as that of the Milky Way, at . The mass of either galaxy is difficult to estimate with any accuracy, but it was long thought that the Andromeda Galaxy was more massive than the Milky Way by a margin of some 25% to 50%. However, this has been called into question by early-21st-century studies indicating a possibly lower mass for the Andromeda Galaxy and a higher mass for the Milky Way. The Andromeda Galaxy has a diameter of about , making it the largest member of the Local Group of galaxies in terms of extension.
The Milky Way and Andromeda galaxies are expected to collide with each other in around 4–5 billion years, merging to potentially form a giant elliptical galaxy or a large lenticular galaxy.
With an apparent magnitude of 3.4, the Andromeda Galaxy is among the brightest of the Messier objects, and is visible to the naked eye from Earth on moonless nights, even when viewed from areas with moderate light pollution.
Observation history
The Andromeda Galaxy is visible to the naked eye in dark skies. Around the year 964 CE, the Persian astronomer Abd al-Rahman al-Sufi described the Andromeda Galaxy in his Book of Fixed Stars as a "nebulous smear" or "small cloud". Star charts of that period labeled it as the Little Cloud. In 1612, the German astronomer Simon Marius gave an early description of the Andromeda Galaxy based on telescopic observations. Pierre Louis Maupertuis conjectured in 1745 that the blurry spot was an island universe. Charles Messier cataloged Andromeda as object M31 in 1764 and incorrectly credited Marius as the discoverer despite it being visible to the naked eye. In 1785, the astronomer William Herschel noted a faint reddish hue in the core region of Andromeda. He believed Andromeda to be the nearest of all the "great nebulae", and based on the color and magnitude of the nebula, he incorrectly guessed that it was no more than 2,000 times the distance of Sirius, or roughly .
In 1850, William Parsons, 3rd Earl of Rosse, made a drawing of Andromeda's spiral structure.
In 1864, William Huggins noted that the spectrum of Andromeda differed from that of a gaseous nebula. The spectrum of Andromeda displays a continuum of frequencies, superimposed with dark absorption lines that help identify the chemical composition of an object. Andromeda's spectrum is very similar to the spectra of individual stars, and from this, it was deduced that Andromeda has a stellar nature. In 1885, a supernova (known as S Andromedae) was seen in Andromeda, the first and so far only one observed in that galaxy. At the time, it was called "Nova 1885"—the difference between "novae" in the modern sense and supernovae was not yet known. Andromeda was considered to be a nearby object, and it was not realized that the "nova" was much brighter than ordinary novae.
In 1888, Isaac Roberts took one of the first photographs of Andromeda, which was still commonly thought to be a nebula within our galaxy. Roberts mistook Andromeda and similar "spiral nebulae" as star systems being formed.
In 1912, Vesto Slipher used spectroscopy to measure the radial velocity of Andromeda with respect to the Solar System—the largest velocity yet measured, at .
"Island universes" hypothesis
As early as 1755, the German philosopher Immanuel Kant proposed the hypothesis that the Milky Way is only one of many galaxies in his book Universal Natural History and Theory of the Heavens. Arguing that a structure like the Milky Way would look like a circular nebula viewed from above and like an ellipsoid if viewed from an angle, he concluded that the observed elliptical nebulae like Andromeda, which could not be explained otherwise at the time, were indeed galaxies similar to the Milky Way, not nebulae, as Andromeda was commonly believed to be.
In 1917, Heber Curtis observed a nova within Andromeda. After searching the photographic record, 11 more novae were discovered. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred elsewhere in the sky. As a result, he was able to come up with a distance estimate of . Although this estimate is about fivefold lower than the best estimates now available, it was the first known estimate of the distance to Andromeda that was correct to within an order of magnitude (i.e., to within a factor of ten of the current estimates, which place the distance around 2.5 million light-years). Curtis became a proponent of the so-called "island universes" hypothesis: that spiral nebulae were actually independent galaxies.
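Curtis's step from "10 magnitudes fainter" to a distance can be illustrated with the standard magnitude relation (a schematic reconstruction for illustration, not his original computation): a difference of $\Delta m$ magnitudes corresponds to a flux ratio of $10^{\Delta m/2.5}$, and since flux falls off as the inverse square of distance, the implied distance ratio is

$$\frac{d_\text{M31}}{d_\text{galactic novae}} = 10^{\Delta m/5} = 10^{10/5} = 100,$$

placing the novae in Andromeda roughly 100 times farther away than comparable novae within the Milky Way.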
In 1920, the Great Debate between Harlow Shapley and Curtis took place concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula is, in fact, an external galaxy, Curtis also noted the appearance of dark lanes within Andromeda that resembled the dust clouds in our own galaxy, as well as historical observations of the Andromeda Galaxy's significant Doppler shift. In 1922, Ernst Öpik presented a method to estimate the distance of Andromeda using the measured velocities of its stars. His result placed the Andromeda Nebula far outside our galaxy at a distance of about . Edwin Hubble settled the debate in 1925 when he identified extragalactic Cepheid variable stars for the first time on astronomical photos of Andromeda. These were made using the Hooker telescope, and they enabled the distance of the Great Andromeda Nebula to be determined. His measurement demonstrated conclusively that this feature was not a cluster of stars and gas within our own galaxy, but an entirely separate galaxy located a significant distance from the Milky Way.
In 1943, Walter Baade was the first person to resolve stars in the central region of the Andromeda Galaxy. Baade identified two distinct populations of stars based on their metallicity, naming the young, high-velocity stars in the disk Type I and the older, red stars in the bulge Type II. This nomenclature was subsequently adopted for stars within the Milky Way and elsewhere. (The existence of two distinct populations had been noted earlier by Jan Oort.) Baade also discovered that there were two types of Cepheid variable stars, which resulted in doubling the distance estimate to Andromeda, as well as the remainder of the universe.
In 1950, radio emissions from the Andromeda Galaxy were detected by Robert Hanbury Brown and Cyril Hazard at the Jodrell Bank Observatory. The first radio maps of the galaxy were made in the 1950s by John Baldwin and collaborators at the Cambridge Radio Astronomy Group. The core of the Andromeda Galaxy is called 2C 56 in the 2C radio astronomy catalog.
In 1959 rapid rotation of the semi-stellar nucleus of M31 was discovered by André Lallemand, M. Duchesne and Merle Walker at the Lick Observatory, using the 120-inch telescope, coudé spectrograph, and Lallemand electronographic camera. They estimated the mass of the nucleus to be about 1.3 × 10⁷ solar masses. The second example of this phenomenon was found in 1961 in the nucleus of M32 by M. F. Walker at the Lick Observatory, using the same equipment as used for the discovery of the nucleus of M31. He estimated the nuclear mass to be between 0.8 and 1 × 10⁷ solar masses. Such rotation is now considered to be evidence of the existence of supermassive black holes in the nuclei of these galaxies.
In 2009, an occurrence of microlensing—a phenomenon caused by the deflection of light by a massive object—may have led to the first discovery of a planet in the Andromeda Galaxy.
In 2020, observations of linearly polarized radio emission with the Westerbork Synthesis Radio Telescope, the Effelsberg 100-m Radio Telescope, and the Very Large Array revealed ordered magnetic fields aligned along the "10-kpc ring" of gas and star formation.
General
The estimated distance of the Andromeda Galaxy from our own was doubled in 1953 when it was discovered that there is a second, dimmer type of Cepheid variable star. In the 1990s, measurements of both standard red giants and red clump stars from the Hipparcos satellite were used to calibrate the Cepheid distances.
Formation and history
A major merger occurred at the Andromeda Galaxy's location 2 to 3 billion years ago, involving two galaxies with a mass ratio of approximately 4.
The discovery of a recent merger in the Andromeda galaxy was first based on interpreting its anomalous age-velocity dispersion relation, as well as the fact that 2 billion years ago, star formation throughout Andromeda's disk was much more active than today.
Modeling of this violent collision shows that it has formed most of the galaxy's (metal-rich) galactic halo, including the Giant Stream, and also the extended thick disk, the young age thin disk, and the static 10 kpc ring. During this epoch, its rate of star formation would have been very high, to the point of becoming a luminous infrared galaxy for roughly 100 million years. Modeling also recovers the bulge profile, the large bar, and the overall halo density profile.
Andromeda and the Triangulum Galaxy (M33) might have had a very close passage 2–4 billion years ago, but this seems unlikely based on the latest measurements from the Hubble Space Telescope.
Distance estimate
At least four distinct techniques have been used to estimate distances from Earth to the Andromeda Galaxy. In 2003, using the infrared surface brightness fluctuations (I-SBF) and adjusting for the new period-luminosity value and a metallicity correction of −0.2 mag dex−1 in (O/H), an estimate of was derived. A 2004 Cepheid variable method estimated the distance to be 2.51 ± 0.13 million light-years (770 ± 40 kpc).
In 2005, an eclipsing binary star was discovered in the Andromeda Galaxy. The binary is made up of two hot blue stars of types O and B. By studying the eclipses of the stars, astronomers were able to measure their sizes. Knowing the sizes and temperatures of the stars, they were able to measure their absolute magnitude. When the visual and absolute magnitudes are known, the distance to the star can be calculated. The stars lie at a distance of and the whole Andromeda Galaxy at about . This new value is in excellent agreement with the previous, independent Cepheid-based distance value. The TRGB method was also used in 2005 giving a distance of . Averaged together, these distance estimates give a value of .
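The final step relies on the distance modulus (a standard relation, stated here for illustration): with apparent magnitude $m$ and absolute magnitude $M$, the distance $d$ follows from

$$m - M = 5\log_{10}\!\left(\frac{d}{10\ \text{pc}}\right), \qquad d = 10^{(m-M+5)/5}\ \text{pc}.$$

For example, a distance modulus of $m - M = 24.4$ (an illustrative input consistent with the distances quoted in this article, not a figure from the study) gives $d = 10^{29.4/5} \approx 7.6\times10^{5}$ pc, or about 2.5 million light-years.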
Mass estimates
Until 2018, mass estimates for the Andromeda Galaxy's halo (including dark matter) gave a value of approximately , compared to for the Milky Way. This contradicted even earlier measurements that seemed to indicate that the Andromeda Galaxy and Milky Way are almost equal in mass. In 2018, the earlier measurements for equality of mass were re-established by radio results as approximately . In 2006, the Andromeda Galaxy's spheroid was determined to have a higher stellar density than that of the Milky Way, and its galactic stellar disk was estimated at twice the diameter of that of the Milky Way. The total mass of the Andromeda Galaxy is estimated to be between and . The stellar mass of M31 is , with 30% of that mass in the central bulge, 56% in the disk, and the remaining 14% in the stellar halo. The radio results (similar mass to the Milky Way Galaxy) should be taken as likeliest as of 2018, although clearly, this matter is still under active investigation by several research groups worldwide.
As of 2019, current calculations based on escape velocity and dynamical mass measurements put the Andromeda Galaxy at , which is only half of the Milky Way's newer mass, calculated in 2019 at .
In addition to stars, the Andromeda Galaxy's interstellar medium contains at least in the form of neutral hydrogen, at least as molecular hydrogen (within its innermost 10 kiloparsecs), and of dust.
The Andromeda Galaxy is surrounded by a massive halo of hot gas that is estimated to contain half the mass of the stars in the galaxy. The nearly invisible halo stretches about a million light-years from its host galaxy, halfway to our Milky Way Galaxy. Simulations of galaxies indicate the halo formed at the same time as the Andromeda Galaxy. The halo is enriched in elements heavier than hydrogen and helium, formed from supernovae, and its properties are those expected for a galaxy that lies in the "green valley" of the Galaxy color-magnitude diagram (see below). Supernovae erupt in the Andromeda Galaxy's star-filled disk and eject these heavier elements into space. Over the Andromeda Galaxy's lifetime, nearly half of the heavy elements made by its stars have been ejected far beyond the galaxy's 200,000-light-year-diameter stellar disk.
Luminosity estimates
The estimated luminosity of the Andromeda Galaxy, , is about 25% higher than that of our own galaxy. However, the galaxy has a high inclination as seen from Earth, and its interstellar dust absorbs an unknown amount of light, so it is difficult to estimate its actual brightness and other authors have given other values for the luminosity of the Andromeda Galaxy (some authors even propose it is the second-brightest galaxy within a radius of 10 megaparsecs of the Milky Way, after the Sombrero Galaxy, with an absolute magnitude of around −22.21 or close).
An estimation done with the help of the Spitzer Space Telescope, published in 2010, suggests an absolute magnitude (in the blue) of −20.89 (which, with a color index of +0.63, translates to an absolute visual magnitude of −21.52, compared to −20.9 for the Milky Way), and a total luminosity in that wavelength of .
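The conversion in parentheses is direct arithmetic with the color index, since $B - V = M_B - M_V$:

$$M_V = M_B - (B - V) = -20.89 - 0.63 = -21.52.$$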
The rate of star formation in the Milky Way is much higher, with the Andromeda Galaxy producing only about one solar mass per year compared to 3–5 solar masses for the Milky Way. The rate of novae in the Milky Way is also double that of the Andromeda Galaxy. This suggests that the latter once experienced a great star formation phase, but is now in a relative state of quiescence, whereas the Milky Way is experiencing more active star formation. Should this continue, the luminosity of the Milky Way may eventually overtake that of the Andromeda Galaxy.
According to recent studies, the Andromeda Galaxy lies in what is known in the galaxy color–magnitude diagram as the "green valley", a region populated by galaxies like the Milky Way in transition from the "blue cloud" (galaxies actively forming new stars) to the "red sequence" (galaxies that lack star formation). Star formation activity in green valley galaxies is slowing as they run out of star-forming gas in the interstellar medium. In simulated galaxies with similar properties to the Andromeda Galaxy, star formation is expected to extinguish within about five billion years, even accounting for the expected, short-term increase in the rate of star formation due to the collision between the Andromeda Galaxy and the Milky Way.
Structure
Based on its appearance in visible light, the Andromeda Galaxy is classified as an SA(s)b galaxy in the de Vaucouleurs–Sandage extended classification system of spiral galaxies. However, infrared data from the 2MASS survey and the Spitzer Space Telescope showed that Andromeda is actually a barred spiral galaxy, like the Milky Way, with Andromeda's bar major axis oriented 55 degrees anti-clockwise from the disc major axis.
There are various methods used in astronomy in defining the size of a galaxy, and each method can yield different results with respect to one another. The most commonly employed is the D25 standard, the isophote where the photometric brightness of a galaxy in the B-band (445 nm wavelength of light, in the blue part of the visible spectrum) reaches 25 mag/arcsec². The Third Reference Catalogue of Bright Galaxies (RC3) used this standard for Andromeda in 1991, yielding an isophotal diameter of at a distance of 2.5 million light-years. An earlier estimate from 1981 gave a diameter for Andromeda at .
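Converting an isophotal angular diameter into a physical size is a small-angle calculation. The sketch below is illustrative only: the function name and the 190-arcminute input are assumed for the example, not values quoted in this article.

# Minimal sketch: physical size from angular size (small-angle approximation).
# The 190-arcminute diameter is an assumed, illustrative input.
import math

def physical_diameter_ly(angular_diameter_arcmin, distance_ly):
    """D = d * theta, with theta converted from arcminutes to radians."""
    theta_rad = math.radians(angular_diameter_arcmin / 60.0)
    return distance_ly * theta_rad

print(physical_diameter_ly(190.0, 2.5e6))  # about 1.4e5 light-years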
A 2005 study with the Keck telescopes showed the existence of a tenuous sprinkle of stars, or galactic halo, extending outward from the galaxy. The stars in this halo behave differently from those in Andromeda's main galactic disc: they show rather disorganized orbital motions, as opposed to the stars in the main disc, which have more orderly orbits and uniform velocities of 200 km/s. This diffuse halo extends outwards away from Andromeda's main disc with a diameter of .
The galaxy is inclined an estimated 77° relative to Earth (where an angle of 90° would be edge-on). Analysis of the cross-sectional shape of the galaxy appears to demonstrate a pronounced, S-shaped warp, rather than just a flat disk. A possible cause of such a warp could be gravitational interaction with the satellite galaxies near the Andromeda Galaxy. The Galaxy M33 could be responsible for some warp in Andromeda's arms, though more precise distances and radial velocities are required.
Spectroscopic studies have provided detailed measurements of the rotational velocity of the Andromeda Galaxy as a function of radial distance from the core. The rotational velocity has a maximum value of at from the core, and it has its minimum possibly as low as at from the core. Further out, rotational velocity rises out to a radius of , where it reaches a peak of . The velocities slowly decline beyond that distance, dropping to around at . These velocity measurements imply a concentrated mass of about in the nucleus. The total mass of the galaxy increases linearly out to , then more slowly beyond that radius.
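The step from rotation velocities to enclosed mass uses the Keplerian estimate $M(r) \approx v^{2} r / G$ (an order-of-magnitude relation shown for illustration; the input values below are assumed, not taken from the cited measurements). For $v = 250\ \text{km/s}$ at $r = 35{,}000$ light-years $\approx 3.3\times10^{20}$ m,

$$M \approx \frac{(2.5\times10^{5}\ \text{m/s})^{2} \times 3.3\times10^{20}\ \text{m}}{6.67\times10^{-11}\ \text{m}^{3}\,\text{kg}^{-1}\,\text{s}^{-2}} \approx 3\times10^{41}\ \text{kg} \approx 1.6\times10^{11}\ \text{solar masses}.$$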
The spiral arms of the Andromeda Galaxy are outlined by a series of HII regions, first studied in great detail by Walter Baade and described by him as resembling "beads on a string". His studies show two spiral arms that appear to be tightly wound, although they are more widely spaced than in our galaxy. He also described the positions at which each arm crosses the major axis of the Andromeda Galaxy.
Since the Andromeda Galaxy is seen close to edge-on, it is difficult to study its spiral structure. Rectified images of the galaxy seem to show a fairly normal spiral galaxy, exhibiting two continuous trailing arms that are separated from each other by a minimum of about and that can be followed outward from a distance of roughly from the core. Alternative spiral structures have been proposed such as a single spiral arm or a flocculent pattern of long, filamentary, and thick spiral arms.
The most likely cause of the distortions of the spiral pattern is thought to be interaction with galaxy satellites M32 and M110. This can be seen by the displacement of the neutral hydrogen clouds from the stars.
In 1998, images from the European Space Agency's Infrared Space Observatory demonstrated that the overall form of the Andromeda Galaxy may be transitioning into a ring galaxy. The gas and dust within the galaxy are generally formed into several overlapping rings, with a particularly prominent ring formed at a radius of from the core, nicknamed by some astronomers the ring of fire. This ring is hidden from visible light images of the galaxy because it is composed primarily of cold dust, and most of the star formation that is taking place in the Andromeda Galaxy is concentrated there.
Later studies with the help of the Spitzer Space Telescope showed how the Andromeda Galaxy's spiral structure in the infrared appears to be composed of two spiral arms that emerge from a central bar and continue beyond the large ring mentioned above. Those arms, however, are not continuous and have a segmented structure.
Close examination of the inner region of the Andromeda Galaxy with the same telescope also showed a smaller dust ring that is believed to have been caused by the interaction with M32 more than 200 million years ago. Simulations show that the smaller galaxy passed through the disk of the Andromeda Galaxy along the latter's polar axis. This collision stripped more than half the mass from the smaller M32 and created the ring structures in Andromeda.
It is the co-existence of the long-known large ring-like feature in the gas of Messier 31, together with this newly discovered inner ring-like structure, offset from the barycenter, that suggested a nearly head-on collision with the satellite M32, a milder version of the Cartwheel encounter.
Studies of the extended halo of the Andromeda Galaxy show that it is roughly comparable to that of the Milky Way, with stars in the halo being generally "metal-poor", and increasingly so with greater distance. This evidence indicates that the two galaxies have followed similar evolutionary paths. They are likely to have accreted and assimilated about 100–200 low-mass galaxies during the past 12 billion years. The stars in the extended halos of the Andromeda Galaxy and the Milky Way may extend nearly one-third the distance separating the two galaxies.
Nucleus
The Andromeda Galaxy is known to harbor a dense and compact star cluster at its very center, similar to our own galaxy. A large telescope creates a visual impression of a star embedded in the more diffuse surrounding bulge. In 1991, the Hubble Space Telescope was used to image the Andromeda Galaxy's inner nucleus. The nucleus consists of two concentrations separated by . The brighter concentration, designated as P1, is offset from the center of the galaxy. The dimmer concentration, P2, falls at the true center of the galaxy and contains an embedded star cluster, called P3, containing many UV-bright A-stars and the supermassive black hole, called M31*. The black hole is classified as a low-luminosity AGN (LLAGN) and it was detected only in radio wavelengths and in X-rays. It was quiescent in 2004–2005, but it was highly variable in 2006–2007. The mass of M31* was measured at 3–5 × 10⁷ solar masses in 1993, and at 1.1–2.3 × 10⁸ solar masses in 2005. The velocity dispersion of material around it is measured to be ≈ .
It has been proposed that the observed double nucleus could be explained if P1 is the projection of a disk of stars in an eccentric orbit around the central black hole. The eccentricity is such that stars linger at the orbital apocenter, creating a concentration of stars. It has been postulated that such an eccentric disk could have been formed from the result of a previous black hole merger, where the release of gravitational waves could have "kicked" the stars into their current eccentric distribution. P2 also contains a compact disk of hot, spectral-class A stars. The A stars are not evident in redder filters, but in blue and ultraviolet light they dominate the nucleus, causing P2 to appear more prominent than P1.
While at the initial time of its discovery it was hypothesized that the brighter portion of the double nucleus is the remnant of a small galaxy "cannibalized" by the Andromeda Galaxy, this is no longer considered a viable explanation, largely because such a nucleus would have an exceedingly short lifetime due to tidal disruption by the central black hole. While this could be partially resolved if P1 had its own black hole to stabilize it, the distribution of stars in P1 does not suggest that there is a black hole at its center.
Discrete sources
Apparently, by late 1968, no X-rays had been detected from the Andromeda Galaxy. A balloon flight on 20 October 1970 set an upper limit for detectable hard X-rays from the Andromeda Galaxy. The Swift BAT all-sky survey successfully detected hard X-rays coming from a region centered 6 arcseconds away from the galaxy center. The emission above 25 keV was later found to be originating from a single source named 3XMM J004232.1+411314, and identified as a binary system where a compact object (a neutron star or a black hole) accretes matter from a star.
Multiple X-ray sources have since been detected in the Andromeda Galaxy, using observations from the European Space Agency's (ESA) XMM-Newton orbiting observatory. Robin Barnard et al. hypothesized that these are candidate black holes or neutron stars, which are heating the incoming gas to millions of kelvins and emitting X-rays. Neutron stars and black holes can be distinguished mainly by measuring their masses. An observation campaign of NuSTAR space mission identified 40 objects of this kind in the galaxy.
In 2012, a microquasar, a radio burst emanating from a smaller black hole, was detected in the Andromeda Galaxy. The progenitor black hole is located near the galactic center and has about 10 . It was discovered through data collected by the European Space Agency's XMM-Newton probe and was subsequently observed by NASA's Swift Gamma-Ray Burst Mission and Chandra X-ray Observatory, the Very Large Array, and the Very Long Baseline Array. The microquasar was the first observed within the Andromeda Galaxy and the first outside of the Milky Way Galaxy.
Globular clusters
There are approximately 460 globular clusters associated with the Andromeda Galaxy. The most massive of these clusters, identified as Mayall II, nicknamed Globular One, has a greater luminosity than any other known globular cluster in the Local Group of galaxies. It contains several million stars and is about twice as luminous as Omega Centauri, the brightest known globular cluster in the Milky Way. Globular One (or G1) has several stellar populations and a structure too massive for an ordinary globular. As a result, some consider G1 to be the remnant core of a dwarf galaxy that was consumed by Andromeda in the distant past. The globular with the greatest apparent brightness is G76 which is located in the southwest arm's eastern half.
Another massive globular cluster, named 037-B327 and discovered in 2006, which is heavily reddened by the Andromeda Galaxy's interstellar dust, was thought to be more massive than G1 and the largest cluster of the Local Group; however, other studies have shown it is actually similar in properties to G1.
Unlike the globular clusters of the Milky Way, which show a relatively low age dispersion, Andromeda Galaxy's globular clusters have a much larger range of ages: from systems as old as the galaxy itself to much younger systems, with ages between a few hundred million years to five billion years.
In 2005, astronomers discovered a completely new type of star cluster in the Andromeda Galaxy. The new-found clusters contain hundreds of thousands of stars, a number similar to that found in globular clusters. What distinguishes them from the globular clusters is that they are much larger—several hundred light-years across—and hundreds of times less dense. The distances between the stars are, therefore, much greater within the newly discovered extended clusters.
The most massive globular cluster in the Andromeda Galaxy, B023-G078, likely has a central intermediate black hole of almost 100,000 solar masses.
PA-99-N2 event and possible exoplanet in galaxy
PA-99-N2 was a microlensing event detected in the Andromeda Galaxy in 1999. One of the explanations for this is the gravitational lensing of a red giant by a star with a mass between 0.02 and 3.6 times that of the Sun, which suggested that the star is likely orbited by a planet. This possible exoplanet would have a mass 6.34 times that of Jupiter. If finally confirmed, it would be the first ever found extragalactic planet. However, anomalies in the event were later found.
Nearby and satellite galaxies
Like the Milky Way, the Andromeda Galaxy has smaller satellite galaxies, consisting of over 20 known dwarf galaxies. The Andromeda Galaxy's dwarf galaxy population is very similar to the Milky Way's, but the galaxies are much more numerous. The best-known and most readily observed satellite galaxies are M32 and M110. Based on current evidence, it appears that M32 underwent a close encounter with the Andromeda Galaxy in the past. M32 may once have been a larger galaxy that had its stellar disk removed by M31 and underwent a sharp increase of star formation in the core region, which lasted until the relatively recent past.
M110 also appears to be interacting with the Andromeda Galaxy, and astronomers have found in the halo of the latter a stream of metal-rich stars that appear to have been stripped from these satellite galaxies. M110 does contain a dusty lane, which may indicate recent or ongoing star formation. M32 has a young stellar population as well.
The Triangulum Galaxy is a non-dwarf galaxy that lies 750,000 light-years from Andromeda. It is currently unknown whether it is a satellite of Andromeda.
In 2006, it was discovered that nine of the satellite galaxies lie in a plane that intersects the core of the Andromeda Galaxy; they are not randomly arranged as would be expected from independent interactions. This may indicate a common tidal origin for the satellites.
Collision with the Milky Way
The Andromeda Galaxy is approaching the Milky Way at about per second. It has been measured approaching relative to the Sun at around as the Sun orbits around the center of the galaxy at a speed of approximately . This makes the Andromeda Galaxy one of about 100 observable blueshifted galaxies. The Andromeda Galaxy's tangential or sideways velocity with respect to the Milky Way is relatively much smaller than the approaching velocity, and it is therefore expected to collide directly with the Milky Way in about 2.5–4 billion years. A likely outcome of the collision is that the galaxies will merge to form a giant elliptical galaxy or possibly a large disc galaxy. Such events are frequent among the galaxies in galaxy groups. The fate of Earth and the Solar System in the event of a collision is currently unknown. Before the galaxies merge, there is a small chance that the Solar System could be ejected from the Milky Way or join the Andromeda Galaxy.
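To first order, the collision timescale is just distance divided by closing speed. With illustrative inputs (assumed for this sketch, not quoted from the measurements above) of $d \approx 2.5\times10^{6}$ light-years and an approach speed of 110 km/s $\approx 3.7\times10^{-4}$ light-years per year,

$$t \approx \frac{d}{v} \approx \frac{2.5\times10^{6}}{3.7\times10^{-4}}\ \text{yr} \approx 7\ \text{billion years};$$

the shorter published estimates arise because the mutual gravitational attraction accelerates the approach as the galaxies close in.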
Amateur observation
Under most viewing conditions, the Andromeda Galaxy is one of the most distant objects that can be seen with the naked eye, due to its sheer size. (M33 and, for observers with exceptionally good vision, M81 can be seen under very dark skies.) The galaxy is commonly located in the sky near the constellations Cassiopeia and Pegasus. Andromeda is best seen during autumn nights in the Northern Hemisphere when it passes high overhead, reaching its highest point around midnight in October, and two hours earlier each successive month. In the early evening, it rises in the east in September and sets in the west in February. From the Southern Hemisphere the Andromeda Galaxy is visible between October and December, best viewed from as far north as possible. Binoculars can reveal some larger structures of the galaxy and its two brightest satellite galaxies, M32 and M110. An amateur telescope can reveal Andromeda's disk, some of its brightest globular clusters, dark dust lanes, and the large star cloud NGC 206.
See also
List of Messier objects
List of galaxies
New General Catalogue
Notes
References
External links
StarDate: M31 Fact Sheet
Messier 31, SEDS Messier pages
Astronomy Picture of the Day
A Giant Globular Cluster in M31 1998 October 17.
M31: The Andromeda Galaxy 2004 July 18.
Andromeda Island Universe 2005 December 22.
Andromeda Island Universe 2010 January 9.
WISE Infrared Andromeda 2010 February 19.
M31's angular size compared with full Moon 2013 August 1.
M31 and its central Nuclear Spiral
Amateur photography – M31
Globular Clusters in M31 at The Curdridge Observatory
First direct distance to Andromeda − Astronomy magazine article
Andromeda Galaxy at SolStation.com
Andromeda Galaxy at The Encyclopedia of Astrobiology, Astronomy, & Spaceflight
M31, the Andromeda Galaxy at NightSkyInfo.com
M31 (Apparent) Novae Page (IAU)
Multi-wavelength composite
Andromeda Project (crowd-source)
Hubble's High-Definition Panoramic View of the Andromeda Galaxy
Infrared-Radio Image of the Andromeda Galaxy (M31)
Creative Commons Astrophotography M31 Andromeda image download & processing guide
Andromeda (constellation)
Andromeda Subgroup
Articles containing video clips
Astronomical objects known since antiquity
Barred spiral galaxies
00400+4059
Local Group
+07-02-016
Messier objects
NGC objects
002557
00454 | Andromeda Galaxy | Astronomy | 7,004 |
3,281,166 | https://en.wikipedia.org/wiki/Thermodynamic%20process | Classical thermodynamics considers three main kinds of thermodynamic processes: (1) changes in a system, (2) cycles in a system, and (3) flow processes.
(1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, which depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress.
As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact.
A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may occur with a theoretical slowness that avoids friction. It also contrasts with idealized frictionless processes in the surroundings, which may be thought of as including 'purely mechanical systems'; this difference comes close to defining a thermodynamic process.
(2) A cyclic process carries the system through a cycle of stages, starting and being completed in some particular state. The descriptions of the staged states of the system are not the primary concern. The primary concern is the sums of matter and energy inputs and outputs to the cycle. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed.
(3) Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering.
Kinds of process
Cyclic process
Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest. A cycle is a sequence of a small number of thermodynamic processes that, repeated indefinitely often, returns the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest. It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states.
Flow process
Defined by flows through a system, a flow process is a steady state of flow into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inflow and outflow materials consist of their internal states, and of their kinetic and potential energies as whole bodies. Very often, the quantities that describe the internal states of the input and output materials are estimated on the assumption that they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted, the thermodynamic treatment may be approximate, not exact.
A cycle of quasi-static processes
A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure-volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable.
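Since the area enclosed by the cycle equals the net work, the cycle described above can be evaluated in closed form for an ideal gas: the two isochoric legs do no PV work, and each isothermal leg contributes $W = nRT\ln(V_f/V_i)$. The following sketch uses assumed, illustrative values (temperatures, volumes, and amount of gas are not taken from the text):

# Net work around the four-process cycle above: isothermal legs 1 and 3,
# isochoric legs 2 and 4, for an ideal gas. All numbers are illustrative.
import math

R = 8.314                       # gas constant, J/(mol K)
n = 1.0                         # amount of gas, mol (assumed)
T_hot, T_cold = 400.0, 300.0    # temperatures of the two isotherms, K (assumed)
V1, V2 = 1.0e-3, 2.0e-3         # volumes bounding the isochores, m^3 (assumed)

W_expand = n * R * T_hot * math.log(V2 / V1)     # isothermal expansion
W_compress = n * R * T_cold * math.log(V1 / V2)  # isothermal compression
W_net = W_expand + W_compress                    # isochoric legs contribute zero

print(f"net work per cycle: {W_net:.1f} J")      # about 576 J with these inputs

The result depends only on the two temperatures and the volume ratio, which illustrates why work is a process variable: a different path between the same end states would enclose a different area.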
Conjugate variable processes
It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair.
Pressure – volume
The pressure–volume conjugate pair is concerned with the transfer of mechanical energy as the result of work.
An isobaric process occurs at constant pressure. An example would be to have a movable piston in a cylinder, so that the pressure inside the cylinder is always at atmospheric pressure, although it is separated from the atmosphere. In other words, the system is dynamically connected, by a movable boundary, to a constant-pressure reservoir.
An isochoric process is one in which the volume is held constant, with the result that the mechanical PV work done by the system will be zero. On the other hand, work can be done isochorically on the system, for example by a shaft that drives a rotary paddle located inside the system. It follows that, for the simple system of one deformation variable, any heat energy transferred to the system externally will be absorbed as internal energy. An isochoric process is also known as an isometric process or an isovolumetric process. An example would be to place a closed tin can of material into a fire. To a first approximation, the can will not expand, and the only change will be that the contents gain internal energy, evidenced by increase in temperature and pressure. Mathematically, $\delta Q = dU$. The system is dynamically insulated, by a rigid boundary, from the environment.
Temperature – entropy
The temperature–entropy conjugate pair is concerned with the transfer of energy as the result of heating, especially for a closed system.
An isothermal process occurs at a constant temperature. An example would be a closed system immersed in and thermally connected with a large constant-temperature bath. Energy gained by the system, through work done on it, is lost to the bath, so that its temperature remains constant.
An adiabatic process is a process in which there is no matter or heat transfer, because a thermally insulating wall separates the system from its surroundings. For the process to be natural, either (a) work must be done on the system at a finite rate, so that the internal energy of the system increases; the entropy of the system increases even though it is thermally insulated; or (b) the system must do work on the surroundings, which then suffer increase of entropy, as well as gaining energy from the system.
An isentropic process is customarily defined as an idealized quasi-static reversible adiabatic process, of transfer of energy as work. Otherwise, for a constant-entropy process, if work is done irreversibly, heat transfer is necessary, so that the process is not adiabatic, and an accurate artificial control mechanism is necessary; such is therefore not an ordinary natural thermodynamic process.
Chemical potential - particle number
The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential–particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles.
In a constant chemical potential process the system is particle-transfer connected, by a particle-permeable boundary, to a constant-μ reservoir.
The conjugate here is a constant particle number process. These are the processes outlined just above. There is no energy added or subtracted from the system by particle transfer. The system is particle-transfer-insulated from its environment by a boundary that is impermeable to particles, but permissive of transfers of energy as work or heat. These processes are the ones by which thermodynamic work and heat are defined, and for them, the system is said to be closed.
Thermodynamic potentials
Any of the thermodynamic potentials may be held constant during a process. For example:
An isenthalpic process introduces no change in enthalpy in the system.
Polytropic processes
A polytropic process is a thermodynamic process that obeys the relation $P V^{n} = C$,
where P is the pressure, V is volume, n is any real number (the "polytropic index"), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas, but in some cases, liquids and solids.
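Particular values of the polytropic index recover the named processes above. For an ideal gas (a standard correspondence, listed here for illustration):

$$n = 0:\ P = C \ \text{(isobaric)}; \quad n = 1:\ PV = C \ \text{(isothermal)}; \quad n = \gamma:\ PV^{\gamma} = C \ \text{(isentropic)}; \quad n \to \infty:\ V = C \ \text{(isochoric)},$$

where $\gamma = c_P/c_V$ is the heat capacity ratio.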
Processes classified by the second law of thermodynamics
According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural.
Natural process
Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible. Natural processes may occur spontaneously upon the removal of a constraint, or upon some other thermodynamic operation, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour. Planck emphasised the occurrence of friction as an important characteristic of natural thermodynamic processes that involve transfer of matter or energy between system and surroundings.
Effectively reversible process
To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called "reversible processes". They are convenient theoretical objects that trace paths across graphical surfaces. They are called "processes" but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the "processes" described by the paths as fictively "reversible". Reversible processes are always quasistatic processes, but the converse is not always true.
Unnatural process
Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred.
Quasistatic process
A quasistatic process is an idealized or fictive model of a thermodynamic "process" considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium.
See also
Flow process
Heat
Phase transition
Work (thermodynamics)
References
Further reading
Physics for Scientists and Engineers - with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008,
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, (Verlagsgesellschaft), (VHC Inc.)
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994,
Physics with Modern Applications, L.H. Greenberg, Holt-Saunders International W.B. Saunders and Co, 1978,
Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray,
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009,
Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971,
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974,
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008,
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic systems
Thermodynamics | Thermodynamic process | Physics,Chemistry,Mathematics | 2,840 |
33,351,217 | https://en.wikipedia.org/wiki/My%20Wife%20and%20My%20Mother-in-Law | "My Wife and My Mother-in-Law" is a famous ambiguous image, which can be perceived either as a young woman or an old woman (the "wife" and the "mother-in-law", respectively). The young woman appears with her face turned away from the viewer while the old woman appears in profile, so the part of the drawing that represents the young woman's ear is the old woman's eye; the young woman's chin is the old woman's nose; and the young woman's choker is the old woman's mouth.
History
American cartoonist William Ely Hill (1887–1962) published "My Wife and My Mother-in-Law" in Puck, an American humour magazine, on 6 November 1915, with the caption "They are both in this picture — Find them". However, the oldest known form of this image is an 1888 German postcard.
In 1930, Edwin Boring introduced the figure to psychologists in a paper titled "A new ambiguous figure", and it has since appeared in textbooks and experimental studies. In 1961, Jack Botwinick introduced a new figure with a masculine motif, "Husband and Father-in-Law", which complements Hill's figure.
References
See also
Reversible figure
Optical illusions | My Wife and My Mother-in-Law | Physics | 267 |
7,987,653 | https://en.wikipedia.org/wiki/Koszul%20algebra | In abstract algebra, a Koszul algebra is a graded -algebra over which the ground field has a linear minimal graded free resolution, i.e., there exists an exact sequence:
$$\cdots \to R(-i)^{b_i} \to \cdots \to R(-2)^{b_2} \to R(-1)^{b_1} \to R \to k \to 0$$

for some nonnegative integers $b_i$. Here $R(-j)$ is the graded algebra $R$ with grading shifted up by $j$, i.e. $R(-j)_n = R_{n-j}$, and the exponent $b_i$ refers to the $b_i$-fold direct sum. Choosing bases for the free modules in the resolution, the chain maps are given by matrices, and the definition requires the matrix entries to be zero or linear forms.
An example of a Koszul algebra is a polynomial ring over a field, for which the Koszul complex is the minimal graded free resolution of the ground field. There are Koszul algebras whose ground fields have infinite minimal graded free resolutions, e.g., $k[x]/(x^2)$.
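For this example (a standard computation, included here as an illustration), the minimal graded free resolution of $k$ over $R = k[x]/(x^2)$ is infinite and periodic, with every chain map given by multiplication by the linear form $x$:

$$\cdots \xrightarrow{\;x\;} R(-2) \xrightarrow{\;x\;} R(-1) \xrightarrow{\;x\;} R \to k \to 0.$$

Each matrix entry is the linear form $x$, so the resolution is linear and $R$ is Koszul even though the resolution never terminates.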
The concept is named after the French mathematician Jean-Louis Koszul.
See also
Koszul duality
Complete intersection ring
References
Algebras | Koszul algebra | Mathematics | 195 |
57,152,266 | https://en.wikipedia.org/wiki/Neoclassical%20transport | In plasma physics and magnetic confinement fusion, neoclassical transport or neoclassical diffusion is a theoretical description of collisional transport in toroidal plasmas, usually found in tokamaks or stellarators. It is a modification of classical diffusion adding in effects of non-uniform magnetic fields due to the toroidal geometry, which give rise to new diffusion effects.
Description
Classical transport models a plasma in a magnetic field as a large number of particles traveling in helical paths around a line of force. In typical reactor designs, the lines are roughly parallel, so particles orbiting adjacent lines may collide and scatter. This results in a random walk process which eventually leads to the particles finding themselves outside the magnetic field.
Neoclassical transport adds the effects of the geometry of the fields. In particular, it considers the field inside the tokamak and similar toroidal arrangements, where the field is stronger on the inside curve than the outside simply due to the magnets being closer together in that area. To even out these forces, the field as a whole is twisted into a helix, so that the particles alternately move from the inside to the outside of the reactor.
In this case, as the particle transits from the outside to the inside, it sees an increasing magnetic force. If the particle energy is low, this increasing field may cause the particle to reverse directions, as in a magnetic mirror. The particle now travels in the reverse direction through the reactor, to the outside limit, and then back towards the inside where the same reflection process occurs. This leads to a population of particles bouncing back and forth between two points, tracing out a path that looks like a banana from above, the so-called banana orbits.
Since any particle in the long tail of the Maxwell–Boltzmann distribution is subject to this effect, there is always some natural population of such banana particles. Since these travel in the reverse direction for half of their orbit, their drift behavior is oscillatory in space. Therefore, when the particles collide, their average step size (width of the banana) is much larger than their gyroradius, leading to neoclassical diffusion across the magnetic field.
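The resulting enhancement over classical diffusion can be estimated with a random-walk argument (a standard order-of-magnitude scaling, sketched here for illustration rather than derived rigorously): a trapped fraction $f_t \sim \sqrt{\epsilon}$ of the particles takes steps of the banana width $\Delta_b \sim q\rho/\sqrt{\epsilon}$ at the effective collision frequency $\nu_{\text{eff}} \sim \nu/\epsilon$, giving

$$D_{\text{banana}} \sim f_t\, \nu_{\text{eff}}\, \Delta_b^{2} \sim \frac{q^{2}}{\epsilon^{3/2}}\, \nu\rho^{2},$$

which exceeds the classical estimate $D_{\text{classical}} \sim \nu\rho^{2}$ by the factor $q^{2}/\epsilon^{3/2}$. Here $q$ is the safety factor, $\rho$ the gyroradius, $\epsilon$ the inverse aspect ratio, and $\nu$ the collision frequency.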
Trapped particles and banana orbits
A consequence of the toroidal geometry for the guiding-center orbits is that some particles can be reflected on the trajectory from the outboard side to the inboard side due to the presence of magnetic field gradients, similar to a magnetic mirror. The reflected particles cannot complete a full turn in the poloidal plane and are trapped, following the banana orbits.
This can be demonstrated by considering tokamak equilibria at low $\beta$ and large aspect ratio, which have nearly circular cross sections, where polar coordinates $(r, \theta)$ centered at the magnetic axis can be used, with surfaces of constant $r$ approximately describing the flux surfaces. The magnitude of the total magnetic field can be approximated by the following expression:

$$B(r, \theta) \approx B_0 \left( 1 - \epsilon \cos\theta \right),$$

where the subscript $0$ indicates the value at the magnetic axis $r = 0$, $R_0$ is the major radius, $\epsilon = r/R_0$ is the inverse aspect ratio, and $B$ is the magnetic field. The parallel component of the drift-ordered guiding-center orbits in this magnetic field, assuming no electric field, is given by:

$$\frac{1}{2} m v_\parallel^{2} = E - \mu B(\theta) \equiv E - V_{\text{eff}}(\theta),$$

where $m$ is the particle mass, $v$ is the velocity, and $\mu = m v_\perp^{2}/(2B)$ is the magnetic moment (first adiabatic invariant). The direction in the subscript indicates parallel or perpendicular to the magnetic field. $V_{\text{eff}}(\theta) = \mu B(\theta)$ is the effective potential reflecting the conservation of the kinetic energy $E = \tfrac{1}{2} m v_\parallel^{2} + \mu B$.

The parallel trajectory experiences a mirror force where the particle moving into a magnetic field of increasing magnitude can be reflected by this force. If a magnetic field has a minimum along a field line, the particles in this region of weaker field can be trapped. This is indeed true given the form of $B$ used here. The particles are reflected (trapped particles) for sufficiently large $\mu$, or complete their poloidal turn (passing particles) otherwise.
To see this in detail, the maximum and minimum of the effective potential can be identified as $V_{\max} = \mu B_0 (1 + \epsilon)$ and $V_{\min} = \mu B_0 (1 - \epsilon)$. The passing particles have $E > V_{\max}$ and the trapped particles have $V_{\min} \le E < V_{\max}$. Recognising this and defining a constant of motion $k^{2} = \dfrac{E - V_{\min}}{V_{\max} - V_{\min}} = \dfrac{E - \mu B_0 (1 - \epsilon)}{2\mu B_0 \epsilon}$, we have

Passing: $k^{2} > 1$

Trapped: $0 \le k^{2} < 1$
Orbit width
The orbit width can be estimated by considering the variation in $r$ over an orbit period $\tau_b$. Using the conservation of the energy $E$ and of the canonical toroidal angular momentum $P_\varphi$, the radial excursion scales as $\Delta r \sim \Delta v_\parallel / \Omega_\theta$, where $\Omega_\theta$ is the gyrofrequency computed with the poloidal magnetic field $B_\theta \approx \epsilon B_0 / q$ and $q$ is the safety factor. The orbit widths can then be estimated, which gives

Passing width: $\Delta r_p \sim q\rho$

Banana width: $\Delta r_b \sim q\rho / \sqrt{\epsilon}$

with $\rho$ the gyroradius. The bounce angle $\theta_b$, at which $v_\parallel$ becomes zero for the trapped particles, is given by $\sin^{2}(\theta_b/2) = k^{2}$, i.e. $\theta_b = 2\arcsin k$.
Bounce time
The bounce time $\tau_b$ is the time required for a particle to complete its poloidal orbit. This is calculated by

$$\tau_b = \oint \frac{q R_0\, d\theta}{v_\parallel},$$

where $q R_0\, d\theta$ approximates the arc length along the field line. Using the expressions above, the parallel velocity can be written as $v_\parallel = 2\sqrt{\mu B_0 \epsilon / m}\,\sqrt{k^{2} - \sin^{2}(\theta/2)}$, so the integral can be rewritten as

$$\tau_b = \frac{q R_0}{2}\sqrt{\frac{m}{\mu B_0 \epsilon}} \oint \frac{d\theta}{\sqrt{k^{2} - \sin^{2}(\theta/2)}},$$

where the substitution $\alpha = \theta/2$ (together with, for trapped particles, $\sin\alpha = k^{-1}\sin(\theta/2)$, which is equivalent to integrating up to the bounce angle) reduces it to the complete elliptic integral of the first kind

$$K(k) = \int_0^{\pi/2} \frac{d\alpha}{\sqrt{1 - k^{2}\sin^{2}\alpha}},$$

with properties $K(0) = \pi/2$ and $K(k) \to \infty$ as $k \to 1$. The bounce time for passing particles is obtained by integrating over a full poloidal turn, giving $\tau_b \propto K(1/k)/k$, while the bounce time for trapped particles is evaluated by integrating between the turning points $\pm\theta_b$ and taking four times the quarter period, giving $\tau_b \propto K(k)$. The limiting cases are as follows (a numerical evaluation is sketched after the list):
Super passing ($k \gg 1$): $\tau_b \approx 2\pi q R_0 / v_\parallel$, the transit time of a freely circulating particle

Super trapped ($k \to 0$): $\tau_b$ approaches a constant, the particle bouncing harmonically about the outboard midplane $\theta = 0$

Barely trapped ($k \to 1$): $\tau_b \to \infty$, diverging logarithmically as $K(k) \sim \ln\!\left(4/\sqrt{1 - k^{2}}\right)$
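These limits can be checked numerically. The sketch below (illustrative only; the function name is assumed, and an overall dimensional prefactor is omitted) evaluates the k-dependent factor of the bounce time with SciPy, whose ellipk routine takes the parameter m = k².

# Normalized bounce time: K(k) for trapped particles (k < 1),
# K(1/k)/k for passing particles (k > 1). Prefactors omitted.
from scipy.special import ellipk

def normalized_bounce_time(k):
    """Return the k-dependent factor of tau_b; note ellipk(m) with m = k**2."""
    if k < 1.0:                       # trapped: four quarter-periods
        return ellipk(k ** 2)
    return ellipk(1.0 / k ** 2) / k   # passing: full poloidal turn

for k in (0.1, 0.9, 0.99, 1.5, 5.0):
    print(f"k = {k:4.2f} -> {normalized_bounce_time(k):8.4f}")
# The printed value grows without bound as k approaches 1 from either side,
# reproducing the barely trapped divergence.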
Neoclassical transport regimes
Banana regime
Pfirsch-Schlüter regime
Plateau regime
See also
Wendelstein 7-X
References
Fusion power
Transport phenomena
Diffusion
Tokamaks | Neoclassical transport | Physics,Chemistry,Engineering | 995 |
2,259,059 | https://en.wikipedia.org/wiki/Chronometry | Chronometry or horology () is the science studying the measurement of time and timekeeping. Chronometry enables the establishment of standard measurements of time, which have applications in a broad range of social and scientific areas. Horology usually refers specifically to the study of mechanical timekeeping devices, while chronometry is broader in scope, also including biological behaviours with respect to time (biochronometry), as well as the dating of geological material (geochronometry).
Horology is commonly used specifically with reference to the mechanical instruments created to keep time: clocks, watches, clockwork, sundials, hourglasses, clepsydras, timers, time recorders, marine chronometers, and atomic clocks are all examples of instruments used to measure time. People interested in horology are called horologists. The term is used both by people who deal professionally with timekeeping apparatus and by enthusiasts and scholars of horology. Horology and horologists have numerous organizations, both professional associations and more scholarly societies. The largest horological membership organisation globally is the NAWCC, the National Association of Watch and Clock Collectors, which is US based but also has local chapters elsewhere.
Records of timekeeping are attested during the Paleolithic, in the form of inscriptions made to mark the passing of lunar cycles and measure years. Written calendars were then invented, followed by mechanical devices. The highest levels of precision are presently achieved by atomic clocks, which are used to track the international standard second.
Etymology
Chronometry is derived from two root words, chronos and metron (χρόνος and μέτρον in Ancient Greek respectively), with rough meanings of "time" and "measure". The combination of the two is taken to mean time measuring.
In the Ancient Greek lexicon, meanings and translations differ depending on the source. Chronos is used in relation to time in definite periods, and is linked to dates, chronological accuracy, and, in rare cases, a delay. The length of time it refers to ranges from seconds to seasons of the year to lifetimes; it can also concern periods of time wherein some specific event takes place, persists, or is delayed.
The root word is correlated with the god Chronos in Ancient Greek mythology, who embodied the image of time and originated out of the primordial chaos. He is known as the one who spins the Zodiac Wheel, further evidence of his connection to the progression of time. Ancient Greek, however, makes a distinction between two types of time: chronos, the static and continuing progress of present to future, time in a sequential and chronological sense; and kairos, a more abstract concept representing the opportune moment for action or change.
Kairos (καιρός) carries little emphasis on precise chronology, instead denoting a time specifically fit for something, or a period of time characterised by some aspect of crisis, also relating to the endtime. It can also be seen in the light of an advantage, profit, or fruit of a thing, but has likewise been represented in apocalyptic feeling, and shown as variable between misfortune and success: for Homer it was likened to a body part left vulnerable by a gap in armour, a benefit or calamity depending on the perspective. It is also referenced in Christian theology, where it is used to imply God's action and judgement in circumstances.
Because of the inherent relation between chronos and kairos, and their function in the Ancient Greek portrayal and concept of time, understanding one means understanding the other in part. The implication of chronos, an indifferent disposition and eternal essence, lies at the core of the science of chronometry: bias is avoided, and definite measurement is favoured.
Subfields
Biochronometry
Biochronometry (also chronobiology or biological chronometry) is the study of biological behaviours and patterns seen in animals with factors based in time. It can be categorised into circadian rhythms and circannual cycles. Examples of these behaviours include the relation of daily and seasonal tidal cues to the activity of marine plants and animals, photosynthetic capacity and phototactic responsiveness in algae, and metabolic temperature compensation in bacteria.
Circadian rhythms of various species can be observed through their gross motor function throughout the course of a day. These patterns are more apparent when the day is further categorised into activity and rest times. Investigation into a species is conducted through comparisons of free-running and entrained rhythms, where the former is expressed under constant conditions in the absence of external time cues and the latter is synchronised to cues from the environment. Circannual rhythms are alike but pertain to patterns on the scale of a year; migration, moulting, reproduction, and changes in body weight are common examples, and research and investigation use methods similar to those for circadian patterns.
Circadian and circannual rhythms can be seen in all organisms, both single-celled and multi-celled. A sub-branch of biochronometry is microbiochronometry (also chronomicrobiology or microbiological chronometry), the examination of behavioural sequences and cycles within micro-organisms. Adapting to circadian and circannual rhythms is an essential adaptation for living organisms. These studies, as well as educating on the adaptations of organisms, bring to light factors affecting the responses of many species and organisms, and can be applied to further understand overall physiology, including in humans: human performance, sleep, metabolism, and disease development are all connected to biochronometrical cycles.
Mental chronometry
Mental chronometry (also called cognitive chronometry) studies human information processing mechanisms, namely reaction time and perception. As well as a field of chronometry, it also forms a part of cognitive psychology and its contemporary human information processing approach. Research comprises applications of the chronometric paradigms – many of which are related to classical reaction time paradigms from psychophysiology – through measuring reaction times of subjects with varied methods, and contribute to studies in cognition and action. Reaction time models and the process of expressing the temporostructural organisation of human processing mechanisms have an innate computational essence to them. It has been argued that because of this, conceptual frameworks of cognitive psychology cannot be integrated in their typical fashions.
One common method is the use of event-related potentials (ERPs) in stimulus-response experiments. These are transient voltage fluctuations generated in neural tissue that occur immediately before or after a stimulus event. This testing emphasises the time-course and nature of mental events and assists in determining the structural functions in human information processing.
Geochronometry
The dating of geological materials makes up the field of geochronometry, and falls within areas of geochronology and stratigraphy, while differing itself from chronostratigraphy. The geochronometric scale is periodic, its units working in powers of 1000, and is based in units of duration, contrasting with the chronostratigraphic scale. The distinctions between the two scales have caused some confusion – even among academic communities.
Geochronometry deals with calculating precise dates for rock sediments and other geological events, giving an idea of the history of various areas: for example, volcanic and magmatic movements and occurrences can be readily recognised, as can marine deposits, which can be indicators of marine events and even global environmental changes. This dating can be done in a number of ways. All dependable methods – barring the exceptions of thermoluminescence, radioluminescence and ESR (electron spin resonance) dating – are based in radioactive decay, focusing on the decay of the radioactive parent nuclide and the corresponding growth of the daughter product.
By measuring the daughter isotopes in a specific sample, its age can be calculated. The preserved conformity of parent and daughter nuclides provides the basis for the radioactive dating of geochronometry, applying the Rutherford–Soddy law of radioactivity, specifically using the concept of radioactive transformation in the growth of the daughter nuclide.
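For illustration, the decay law gives the standard age equation \( t = \ln(1 + D/P)/\lambda \), where \(D/P\) is the measured daughter-to-parent ratio and \(\lambda = \ln 2 / t_{1/2}\). The sketch below uses an invented ratio and a hypothetical parent–daughter pair; neither value comes from this article.

    # Radiometric age from the decay law: P(t) = P0*exp(-lambda*t), so
    # D/P = exp(lambda*t) - 1 and t = ln(1 + D/P)/lambda.
    import math

    def radiometric_age(daughter_parent_ratio, half_life_years):
        decay_const = math.log(2) / half_life_years  # lambda, per year
        return math.log(1.0 + daughter_parent_ratio) / decay_const

    # Hypothetical sample: D/P = 0.05 for a system with a 1.25-billion-year
    # half-life (both values invented for the example).
    age = radiometric_age(0.05, 1.25e9)
    print(f"age ~ {age / 1e6:.0f} million years")  # ~ 88 million years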
Thermoluminescence is an extremely useful concept to apply, being used in a diverse range of areas in science; dating using thermoluminescence is a cheap and convenient method for geochronometry. Thermoluminescence is the production of light from a heated insulator or semiconductor, and is occasionally confused with the incandescent light emission of a material, a different process despite the many similarities. Thermoluminescence only occurs if the material has had previous exposure to, and absorbed energy from, radiation. Importantly, the light emission of thermoluminescence cannot be repeated: the entire process, from the material's exposure to radiation onwards, would have to be repeated to generate another thermoluminescence emission. The age of a material can be determined by measuring, by means of a phototube, the amount of light given off during the heating process, as the emission is proportional to the dose of radiation the material has absorbed.
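The proportionality between light output and absorbed dose gives a simple age equation: age equals the equivalent dose (the palaeodose inferred from the measured light) divided by the annual dose rate from environmental radiation. A minimal sketch, with invented example values:

    # Thermoluminescence age = equivalent dose / annual dose rate.
    # Both numbers below are illustrative, not from this article.
    def tl_age(equivalent_dose_gy, annual_dose_gy_per_year):
        return equivalent_dose_gy / annual_dose_gy_per_year

    palaeodose = 12.0    # Gy, inferred from the measured TL signal
    dose_rate = 3.0e-3   # Gy per year, from environmental radioactivity
    print(f"TL age ~ {tl_age(palaeodose, dose_rate):,.0f} years")  # ~ 4,000 years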
Time metrology
Time metrology or time and frequency metrology is the application of metrology for timekeeping, including frequency stability.
Its main tasks are the realization of the second as the SI unit of measurement for time and the establishment of time standards and frequency standards as well as their dissemination.
History
Early humans would have used their basic senses to perceive the time of day, and relied on their biological sense of time to discern the seasons in order to act accordingly. Their physiological and behavioural seasonal cycles were mainly influenced by a melatonin-based photoperiodic time-measurement system – which measures the change in daylight within the annual cycle, giving a sense of the time of year – and by their circannual rhythms, which provide an anticipation of environmental events months beforehand to increase chances of survival.
There is debate over when the earliest use of lunar calendars was, and over whether some findings constituted lunar calendars at all. Most related findings and materials from the Palaeolithic era are fashioned from bone and stone, with various markings made by tools. These markings are thought not to represent lunar cycles but to be non-notational and irregular engravings; a pattern of later subsidiary marks that disregard the previous design indicates that the markings were instead motifs and ritual marking.
However, as humans' focus turned to farming, the importance of and reliance on understanding the rhythms and cycles of the seasons grew, and the unreliability of lunar phases became problematic. An early human accustomed to the phases of the moon would use them as a rule of thumb, and the potential for weather to interfere with reading the cycle further degraded their reliability. The length of a lunation is on average less than our current month, so it does not act as a dependable alternative: as years progress, the accumulated error would grow until some other indicator gave a correction.
The Ancient Egyptian calendars were among the first calendars made, and the civil calendar endured for a long period, surviving past even its culture's collapse and through the early Christian era. Some have assumed it was invented near 4231 BC, but accurate and exact dating of that era is difficult, and the invention has also been attributed to 3200 BC, when the first historical king of Egypt, Menes, united Upper and Lower Egypt. It was originally based on the cycles and phases of the moon; however, the Egyptians later realised the calendar was flawed upon noticing that the star Sirius rose just before sunrise every 365 days, a year as we know it now, and it was remade to consist of twelve months of thirty days, with five epagomenal days. The former is referred to as the Ancient Egyptians' lunar calendar, and the latter the civil calendar.
Early calendars often hold an element of their respective culture's traditions and values: for example, the five-day intercalary month of the Ancient Egyptians' civil calendar represented the birthdays of the gods Horus, Isis, Set, Osiris and Nephthys. Others include the Maya use of a zero date, and the Tzolkʼin's connection to their thirteen layers of heaven (the product of thirteen and the twenty human digits making the 260-day year) and to the length of time between conception and birth in pregnancy.
Museums and libraries
Europe
There are many horology museums and several specialized libraries devoted to the subject. One example is the Royal Greenwich Observatory, which is also the source of the Prime Meridian and the home of the first marine timekeepers accurate enough to determine longitude (made by John Harrison). Other horological museums in the London area include the Clockmakers' Museum, which re-opened at the Science Museum in October 2015, the horological collections at the British Museum, the Science Museum (London), and the Wallace Collection. The Guildhall Library in London contains an extensive public collection on horology. In Upton, also in the United Kingdom, at the headquarters of the British Horological Institute, there is the Museum of Timekeeping. A more specialised museum of horology in the United Kingdom is the Cuckooland Museum in Cheshire, which hosts the world's largest collection of antique cuckoo clocks.
One of the more comprehensive museums dedicated to horology is the Musée international d'horlogerie in La Chaux-de-Fonds, Switzerland, which contains a public library of horology. The Musée d'Horlogerie du Locle, located nearby in Le Locle, is smaller and also provides a public horological library.
In France, Besançon has the Musée du Temps (Museum of Time) in the historic Palais Granvelle. In Serpa and Évora, in Portugal, there is the Museu do Relógio. In Germany, there is the Deutsches Uhrenmuseum in Furtwangen im Schwarzwald, in the Black Forest, which contains a public library of horology.
North America
The two leading specialised horological museums in North America are the National Watch and Clock Museum in Columbia, Pennsylvania, and the American Clock and Watch Museum in Bristol, Connecticut. Another museum dedicated to clocks is the Willard House and Clock Museum in Grafton, Massachusetts. One of the most comprehensive horological libraries open to the public is the National Watch and Clock Library in Columbia, Pennsylvania.
Organizations
Notable scholarly horological organizations include:
American Watchmakers-Clockmakers Institute – AWCI (United States of America)
Antiquarian Horological Society – AHS (United Kingdom)
British Horological Institute – BHI (United Kingdom)
Chronometrophilia (Switzerland)
Deutsche Gesellschaft für Chronometrie – DGC (Germany)
Horological Society of New York – HSNY (United States of America)
National Association of Watch and Clock Collectors – NAWCC (United States of America)
Glossary
See also
Complication (horology)
Hora (astrology)
List of clock manufacturers
List of watch manufacturers
Winthrop Kellogg Edey
Allan variance
Clock drift
International Earth Rotation and Reference Systems Service
Time and Frequency Standards Laboratory
Time deviation
Notes
References
Further reading
Berner, G.A., Illustrated Professional Dictionary of Horology, Federation of the Swiss Watch Industry FH 1961 - 2012
Daniels, George, Watchmaking, London: Philip Wilson Publishers, 1981 (reprinted June 15, 2011)
Beckett, Edmund, A Rudimentary Treatise on Clocks, Watches and Bells, 1903, from Project Gutenberg
Grafton, Edward, Horology, a popular sketch of clock and watch making, London: Aylett and Jones, 1849
Time
Frequency
Metrology
Timekeeping | Chronometry | Physics,Mathematics | 3,318 |
4,566,568 | https://en.wikipedia.org/wiki/Tameness%20theorem | In mathematics, the tameness theorem states that every complete hyperbolic 3-manifold with finitely generated fundamental group is topologically tame, in other words homeomorphic to the interior of a compact 3-manifold.
The tameness theorem was conjectured by Albert Marden. It was proved by Ian Agol and, independently, by Danny Calegari and David Gabai. It is one of the fundamental properties of geometrically infinite hyperbolic 3-manifolds, together with the density theorem for Kleinian groups and the ending lamination theorem.
It also implies the Ahlfors measure conjecture.
History
Topological tameness may be viewed as a property of the ends of the manifold, namely, having a local product structure. An analogous statement is well known in two dimensions, that is, for surfaces. However, as the example of the Alexander horned sphere shows, there are wild embeddings among 3-manifolds, so this property is not automatic.
The conjecture was raised in the form of a question by Albert Marden, who proved that any geometrically finite hyperbolic 3-manifold is topologically tame. The conjecture was also called the Marden conjecture or the tame ends conjecture.
There had been steady progress in understanding tameness before the conjecture was resolved. Partial results had been obtained by Thurston, Brock, Bromberg, Canary, Evans, Minsky, and Ohshika. An important sufficient condition for tameness in terms of splittings of the fundamental group had been obtained by Bonahon.
The conjecture was proved in 2004 by Ian Agol, and independently by Danny Calegari and David Gabai. Agol's proof relies on the use of manifolds of pinched negative curvature and on Canary's trick of "diskbusting", which allows one to replace a compressible end with an incompressible end, for which the conjecture has already been proved. The Calegari–Gabai proof is centered on the existence of certain closed, non-positively curved surfaces that they call "shrinkwrapped".
See also
Tame topology
References
3-manifolds
Conjectures that have been proved
Differential geometry
Hyperbolic geometry
Kleinian groups
Manifolds
Theorems in geometry | Tameness theorem | Mathematics | 442 |
24,509,229 | https://en.wikipedia.org/wiki/Gymnopilus%20noviholocirrhus | Gymnopilus noviholocirrhus is a species of mushroom in the family Hymenogastraceae. This species is known only from one locality, on the island of Hahajima, growing on the species Celtis boninensis. It is thought to be extinct.
See also
List of Gymnopilus species
References
External links
Gymnopilus noviholocirrhus at Index Fungorum
noviholocirrhus
Fungi of North America
Fungus species | Gymnopilus noviholocirrhus | Biology | 101 |
5,612,520 | https://en.wikipedia.org/wiki/Eduardo%20D.%20Sontag | Eduardo Daniel Sontag (born April 16, 1951, in Buenos Aires, Argentina) is an Argentine-American mathematician, and distinguished university professor at Northeastern University, who works in the fields of control theory, dynamical systems, systems molecular biology, cancer and immunology, theoretical computer science, neural networks, and computational biology.
Biography
Sontag received his Licenciado degree from the mathematics department at the University of Buenos Aires in 1972, and his Ph.D. in Mathematics under Rudolf Kálmán at the Center for Mathematical Systems Theory at the University of Florida in 1976.
From 1977 to 2017, he was with the department of mathematics at Rutgers, The State University of New Jersey, where he was a Distinguished Professor of Mathematics as well as a Member of the Graduate Faculty of the Department of Computer Science and the Graduate Faculty of the Department of Electrical and Computer Engineering, and a Member of the Rutgers Cancer Institute of NJ. In addition, Dr. Sontag served as the head of the undergraduate Biomathematics Interdisciplinary Major, director of the Center for Quantitative Biology, and director of graduate studies of the Institute for Quantitative Biomedicine. In January 2018, Dr. Sontag was appointed as a University Distinguished Professor in the Department of Electrical and Computer Engineering and the Department of BioEngineering at Northeastern University, where he is also an affiliate member of the Department of Mathematics and the Department of Chemical Engineering. Since 2006, he has been a research affiliate at the Laboratory for Information and Decision Systems, MIT, and since 2018 he has been a member of the faculty in the Program in Therapeutic Science, Laboratory for Systems Pharmacology at Harvard Medical School.
Eduardo Sontag has authored over five hundred research papers, monographs, and book chapters in the above areas, with about 60,000 citations and an h-index of 104. He is on the editorial board of several journals, including IET Proceedings Systems Biology, Synthetic and Systems Biology, International Journal of Biological Sciences, and Journal of Computer and Systems Sciences, and is a former board member of SIAM Review, IEEE Transactions on Automatic Control, Systems and Control Letters, Dynamics and Control, Neurocomputing, Neural Networks, Neural Computing Surveys, Control-Theory and Advanced Technology, Nonlinear Analysis: Hybrid Systems, and Control, Optimization and the Calculus of Variations. In addition, he is a co-founder and co-managing editor of Mathematics of Control, Signals, and Systems.
Sontag was married to Frances David-Sontag, who died in 2017. His daughter Laura Kleiman is founder and CEO at Reboot Rx, and his son David Sontag leads the MIT Clinical Machine Learning Group.
Work
His work in control theory led to the introduction of the concept of input-to-state stability (ISS), a stability theory notion for nonlinear systems, and control-Lyapunov functions. Many of the subsequent results were proved in collaboration with his student Yuan Wang and with David Angeli. In systems biology, Sontag introduced together with David Angeli the concept of input/output monotone system. In theory of computation, he proved the first results on computational complexity in nonlinear controllability, and introduced together with his student Hava Siegelmann a new approach to analog computation and super-Turing computing.
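For reference, the input-to-state stability property mentioned above is conventionally stated as a bound combining a decaying transient and an input-dependent gain; the formulation below follows the standard textbook definition rather than any wording in this article. A system \(\dot{x} = f(x,u)\) is ISS if there exist a class-\(\mathcal{KL}\) function \(\beta\) and a class-\(\mathcal{K}\) function \(\gamma\) such that

\[ |x(t)| \le \beta\bigl(|x(0)|,\, t\bigr) + \gamma\bigl(\|u\|_\infty\bigr) \qquad \text{for all } t \ge 0 . \]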
Awards and honors
Sontag became an Institute of Electrical and Electronics Engineers (IEEE) Fellow in 1993. He was awarded the Reid Prize in Mathematics in 2001, the 2002 Hendrik W. Bode Lecture Prize from the IEEE,
the 2002 Board of Trustees Award for Excellence in Research from Rutgers University,
the 2005 Teacher/Scholar Award from Rutgers University,
and the 2011 IEEE Control Systems Award.
In 2022, he was awarded the Richard E. Bellman Control Heritage Award, which is the highest recognition in control theory and engineering in the United States. He was honored “for pioneering contributions to stability analysis and nonlinear control, and for advancing the control theoretic foundations of systems biology.”
In 2011 he became a fellow of the Society for Industrial and Applied Mathematics, in 2012 a fellow of the American Mathematical Society, and in 2014 a fellow of the International Federation of Automatic Control.
Sontag was elected a Member of the American Academy of Arts and Sciences in April 2024. He is a collaborator of the IBS Biomedical Mathematics Group.
Publications
Sontag is co-author of several hundred research papers, as well as three books:
1972, Topics in Artificial Intelligence (in Spanish, Buenos Aires: Prolam, 1972)
1979, Polynomial Response Maps (Berlin: Springer, 1979).
1998, Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd Edition (Texts in Applied Mathematics, Volume 6, Second Edition, New York: Springer, 1998)
Selected Public Research Rankings
Research.com top 100 US electrical engineers.
Research.com top 100 US mathematicians.
Most-cited author in: IEEE Transactions on Automatic Control in 1981, 1996, 1997; Systems and Control Letters 1989, 1991, 1995, 1998, and lifetime of journal; SIAM Journal on Control and Optimization 1983, 1986; Theoretical Computer Science 1994; as well as many other journal/years.
Elsevier/Stanford list of top 0.5% among 2% top scientists worldwide.
MathScinet list of three most-cited applied mathematicians who got PhD in 1976.
References
External links
Link to Eduardo Sontag's Homepage
1951 births
Living people
20th-century American mathematicians
21st-century American mathematicians
American people of Argentine descent
Argentine mathematicians
Control theorists
Fellows of the American Mathematical Society
Fellows of the Society for Industrial and Applied Mathematics
Rutgers University faculty
Systems biologists
University of Florida alumni
Northeastern University faculty
University of Buenos Aires alumni | Eduardo D. Sontag | Engineering | 1,133 |
2,137,828 | https://en.wikipedia.org/wiki/Cold%20inflation%20pressure | Cold inflation pressure is the inflation pressure of tires as measured before a car is driven and the tires warmed up. Recommended cold inflation pressure is displayed in the owner's manual and on the Tire Information Placard attached to the vehicle door edge, pillar, glovebox door or fuel filler flap.
Cold inflation pressure is a gauge pressure and not an absolute pressure.
This article focuses on cold inflation pressures for passenger vehicles and trucks. The general principles are, of course, applicable to bicycle tires, tractor tires, and any other kind of tire with an internal structure that gives it a defined size and shape (as opposed to something that might resemble a very flexible balloon).
A 2001 NHTSA study found that 40% of passenger cars have at least one tire under-inflated by or more. The number one cause of tire failure was determined to be under-inflation. Drivers are encouraged to make sure their tires are adequately inflated at all times.
Under-inflated tires can greatly reduce fuel economy, increase emissions, cause increased wear on the edges of the tread surface, and can lead to overheating and premature failure of the tire.
Excessive pressure, on the other hand, will lead to impact-breaks, decreased braking performance, and increased wear on the center part of the tread surface.
Tire pressure is commonly measured in pounds per square inch (psi) in the imperial and US customary systems, in the bar, which is deprecated but accepted for use with SI, or in the kilopascal (kPa), which is an SI unit.
Variation of tire pressure with temperature
Daily temperature fluctuations can result in appreciable changes in tire pressure. Cold inflation pressure should therefore be measured in the morning, as this is the coldest time of day. This will ensure a tire meets or exceeds the required inflation pressure at any time of day.
Seasonal temperature fluctuations can also result in appreciable changes in tire pressure, and a tire that is properly inflated in the summer is likely to become underinflated in the winter. Because of this, it is important to check tire pressures whenever the local seasons change.
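The underlying relation is Gay-Lussac's law applied to absolute pressure and absolute temperature, \(P_1/T_1 = P_2/T_2\) at constant volume. The sketch below assumes a sea-level atmosphere of 14.7 psi and a rigid tire; the temperatures are invented for the example.

    # Gauge pressure change with temperature via Gay-Lussac's law (P/T constant
    # for absolute pressure).  Assumes ~14.7 psi atmosphere and fixed tire volume.
    P_ATM = 14.7  # psi, sea-level atmospheric pressure (assumption)

    def gauge_after_temp_change(gauge_psi, t1_c, t2_c):
        t1_k, t2_k = t1_c + 273.15, t2_c + 273.15
        return (gauge_psi + P_ATM) * t2_k / t1_k - P_ATM

    # A tire set to 32 psi on a 25 C afternoon, read again at 5 C the next morning:
    print(f"{gauge_after_temp_change(32.0, 25.0, 5.0):.1f} psi")  # ~ 28.9 psi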
Variation of tire pressure with altitude
Atmospheric pressure will decrease around 0.5 psi for every 1000 feet above sea level. As a vehicle descends from a high altitude location, the absolute pressure inside the tire remains the same, but the atmospheric pressure increases; therefore the gauge pressure will decrease.
Take for example a vehicle which had its cold inflation tire pressure set near Denver (altitude 5300 feet), and is descending towards Los Angeles (altitude 300 feet). The tires could become underinflated by as much as 2.5 psi.
Cold inflation pressure should therefore be readjusted after any significant changes in altitude.
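The Denver-to-Los-Angeles example can be reproduced with the rule of thumb of roughly 0.5 psi per 1000 feet quoted above; the helper below is a minimal sketch, not a standard formula.

    # Gauge pressure change on descent: the absolute tire pressure is constant,
    # while atmospheric pressure rises ~0.5 psi per 1000 ft of descent.
    PSI_PER_1000_FT = 0.5  # rule of thumb quoted in this article

    def gauge_after_descent(gauge_psi, start_ft, end_ft):
        atm_change = (start_ft - end_ft) / 1000.0 * PSI_PER_1000_FT
        return gauge_psi - atm_change  # descent raises atmospheric pressure

    # Denver (~5300 ft) to Los Angeles (~300 ft) with tires set to 32 psi:
    print(f"{gauge_after_descent(32.0, 5300, 300):.1f} psi")  # 29.5 psi, i.e. 2.5 psi low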
See also
Direct TPMS
Tire-pressure gauge
Tire-pressure monitoring system
References
Tire inflation
Pressure
Motor vehicle maintenance | Cold inflation pressure | Physics | 565 |
1,068,768 | https://en.wikipedia.org/wiki/Phytoremediation | Phytoremediation technologies use living plants to clean up soil, air and water contaminated with hazardous contaminants. It is defined as "the use of green plants and the associated microorganisms, along with proper soil amendments and agronomic techniques to either contain, remove or render toxic environmental contaminants harmless". The term is an amalgam of the Greek phyto (plant) and Latin remedium (restoring balance). Although attractive for its cost, phytoremediation has not been demonstrated to redress any significant environmental challenge to the extent that contaminated space has been reclaimed.
Phytoremediation is proposed as a cost-effective plant-based approach of environmental remediation that takes advantage of the ability of plants to concentrate elements and compounds from the environment and to detoxify various compounds without causing additional pollution. The concentrating effect results from the ability of certain plants called hyperaccumulators to bioaccumulate chemicals. The remediation effect is quite different. Toxic heavy metals cannot be degraded, but organic pollutants can be, and are generally the major targets for phytoremediation. Several field trials confirmed the feasibility of using plants for environmental cleanup.
Background
Soil remediation is an expensive and complicated process. Traditional methods involve removal of the contaminated soil followed by treatment and return of the treated soil.
Phytoremediation could in principle be a more cost-effective solution. Phytoremediation may be applied to polluted soil or static water environments. This technology has been increasingly investigated and employed at sites with soils contaminated with heavy metals such as cadmium, lead, aluminum, arsenic and antimony. These metals can cause oxidative stress in plants, destroy cell membrane integrity, interfere with nutrient uptake, inhibit photosynthesis and decrease plant chlorophyll.
Phytoremediation has been used successfully in the restoration of abandoned metal mine workings, and sites where polychlorinated biphenyls have been dumped during manufacture and mitigation of ongoing coal mine discharges reducing the impact of contaminants in soils, water, or air. Contaminants such as metals, pesticides, solvents, explosives, and crude oil and its derivatives, have been mitigated in phytoremediation projects worldwide. Many plants such as mustard plants, alpine pennycress, hemp, and pigweed have proven to be successful at hyperaccumulating contaminants at toxic waste sites.
Not all plants are able to accumulate heavy metals or organic pollutants, due to differences in plant physiology. Even cultivars within the same species have varying abilities to accumulate pollutants.
Advantages and limitations
Advantages
the cost of phytoremediation is lower than that of traditional processes, both in situ and ex situ
the possibility of the recovery and re-use of valuable metals (by companies specializing in "phytomining")
it preserves the topsoil, maintaining the fertility of the soil
it increases soil health, yield, and plant phytochemicals
the use of plants also reduces erosion and metal leaching in the soil
Noise, smell and visual disruption are usually less than with alternative methods. The calamine (German: Galmei) vegetation of hyperaccumulator plants is even protected by environmental legislation in many areas where it occurs.
Limitations
phytoremediation is limited to the surface area and depth occupied by the roots.
with plant-based systems of remediation, it is not possible to completely prevent the leaching of contaminants into the groundwater (without the complete removal of the contaminated ground, which in itself does not resolve the problem of contamination)
the survival of the plants is affected by the toxicity of the contaminated land and the general condition of the soil
bio-accumulation of contaminants, especially metals, into the plants can affect consumer products like food and cosmetics, and requires the safe disposal of the affected plant material
when taking up heavy metals, sometimes the metal is bound to the soil organic matter, which makes it unavailable for the plant to extract
some plants are too hard to cultivate or too slow growing to make them viable for phytoremediation despite their status as hyperaccumulators. Genetic engineering may improve desirable properties in target species but is controversial in some countries.
Processes
A range of processes mediated by plants or algae have been tested in treating environmental problems:
Phytoextraction
Phytoextraction (or phytoaccumulation or phytosequestration) exploits the ability of plants or algae to remove contaminants from soil or water into harvestable plant biomass. It is also used for the mining of metals such as copper(II) compounds. The roots take up substances from the soil or water and concentrate them above ground in the plant biomass. Organisms that can take up high amounts of contaminants are called hyperaccumulators. Phytoextraction can also be performed by plants (e.g. Populus and Salix) that take up lower levels of pollutants, but due to their high growth rate and biomass production may remove a considerable amount of contaminants from the soil. Phytoextraction has been growing rapidly in popularity worldwide for the last twenty years or so. Typically, phytoextraction is used for heavy metals or other inorganics. At the time of disposal, contaminants are typically concentrated in the much smaller volume of the plant matter than in the initially contaminated soil or sediment. After harvest, a lower level of the contaminant will remain in the soil, so the growth/harvest cycle must usually be repeated through several crops to achieve a significant cleanup. After the process, the soil is remediated.
Of course many pollutants kill plants, so phytoremediation is not a panacea. For example, chromium is toxic to most higher plants at concentrations above 100 μM·kg⁻¹ dry weight.
Mining of these extracted metals through phytomining is a conceivable way of recovering the material. Hyperaccumulating plants are often metallophytes. Induced or assisted phytoextraction is a process in which a conditioning fluid containing a chelator or another agent is added to soil to increase metal solubility or mobilization so that the plants can absorb the metals more easily. While such additives can increase metal uptake by plants, they can also lead to large amounts of available metals in the soil beyond what the plants are able to translocate, causing potential leaching into the subsoil or groundwater.
Examples of plants that are known to accumulate the following contaminants:
Arsenic, using the sunflower (Helianthus annuus), or the Chinese Brake fern (Pteris vittata).
Cadmium, using willow (Salix viminalis), which as a phytoextractor of cadmium (Cd), zinc (Zn), and copper (Cu).
Cadmium and zinc, using alpine pennycress (Thlaspi caerulescens), a hyperaccumulator of these metals at levels that would be toxic to many plants. Specifically, pennycress leaves accumulate up to 380 mg/kg Cd. On the other hand, the presence of copper seems to impair its growth (see table for reference).
Chromium is toxic to most plants. However, tomatoes (Solanum lycopersicum) show some promise.
Lead, using Indian mustard (Brassica juncea), ragweed (Ambrosia artemisiifolia), hemp dogbane (Apocynum cannabinum), or poplar trees, which sequester lead in their biomass.
Salt-tolerant (moderately halophytic) barley and/or sugar beets are commonly used for the extraction of sodium chloride (common salt) to reclaim fields that were previously flooded by sea water.
Caesium-137 and strontium-90 were removed from a pond using sunflowers after the Chernobyl accident.
Mercury, selenium and organic pollutants such as polychlorinated biphenyls (PCBs) have been removed from soils by transgenic plants containing genes for bacterial enzymes.
Thallium is sequestered by some plants.
Phytostabilization
Phytostabilization reduces the mobility of substances in the environment, for example, by limiting the leaching of substances from the soil. It focuses on the long-term stabilization and containment of the pollutant. The plant immobilizes the pollutants by binding them to soil particles, making them less available for plant or human uptake. Unlike phytoextraction, phytostabilization focuses mainly on sequestering pollutants in soil near the roots but not in plant tissues. Pollutants become less bioavailable, resulting in reduced exposure. The plants can also excrete a substance that produces a chemical reaction, converting the heavy metal pollutant into a less toxic form. Stabilization results in reduced erosion, runoff, and leaching, in addition to reducing the bioavailability of the contaminant. An example application of phytostabilization is using a vegetative cap to stabilize and contain mine tailings. Some soil amendments decrease radiosource mobility, while at some concentrations the same amendments will increase mobility. Vidal et al. (2000) found that the root mats of meadow grasses are effective at immobilising radioactive materials, especially in combination with certain other agricultural practices, and that the particular grass mix makes a significant difference.
Phytodegradation
Phytodegradation (also called phytotransformation) uses plants or microorganisms to degrade organic pollutants in the soil or within the body of the plant. The organic compounds are broken down by enzymes that the plant roots secrete and these molecules are then taken up by the plant and released through transpiration. This process works best with organic contaminants like herbicides, trichloroethylene, and methyl tert-butyl ether.
Phytotransformation results in the chemical modification of environmental substances as a direct result of plant metabolism, often resulting in their inactivation, degradation (phytodegradation), or immobilization (phytostabilization). In the case of organic pollutants, such as pesticides, explosives, solvents, industrial chemicals, and other xenobiotic substances, certain plants, such as Cannas, render these substances non-toxic by their metabolism. In other cases, microorganisms living in association with plant roots may metabolize these substances in soil or water. These complex and recalcitrant compounds cannot be broken down to basic molecules (water, carbon-dioxide, etc.) by plant molecules, and, hence, the term phytotransformation represents a change in chemical structure without complete breakdown of the compound.
The term "Green Liver" is used to describe phytotransformation, as plants behave analogously to the human liver when dealing with these xenobiotic compounds (foreign compound/pollutant). After uptake of the xenobiotics, plant enzymes increase the polarity of the xenobiotics by adding functional groups such as hydroxyl groups (-OH).
This is known as Phase I metabolism, similar to the way that the human liver increases the polarity of drugs and foreign compounds (drug metabolism). Whereas in the human liver enzymes such as cytochrome P450s are responsible for the initial reactions, in plants enzymes such as peroxidases, phenoloxidases, esterases and nitroreductases carry out the same role.
In the second stage of phytotransformation, known as Phase II metabolism, plant biomolecules such as glucose and amino acids are added to the polarized xenobiotic to further increase the polarity (known as conjugation). This is again similar to the processes occurring in the human liver where glucuronidation (addition of glucose molecules by the UGT class of enzymes, e.g. UGT1A1) and glutathione addition reactions occur on reactive centres of the xenobiotic.
Phase I and II reactions serve to increase the polarity and reduce the toxicity of the compounds, although many exceptions to the rule are seen. The increased polarity also allows for easy transport of the xenobiotic along aqueous channels.
In the final stage of phytotransformation (Phase III metabolism), a sequestration of the xenobiotic occurs within the plant. The xenobiotics polymerize in a lignin-like manner and develop a complex structure that is sequestered in the plant. This ensures that the xenobiotic is safely stored, and does not affect the functioning of the plant. However, preliminary studies have shown that these plants can be toxic to small animals (such as snails), and, hence, plants involved in phytotransformation may need to be maintained in a closed enclosure.
Hence, the plants reduce toxicity (with exceptions) and sequester the xenobiotics in phytotransformation. Trinitrotoluene phytotransformation has been extensively researched and a transformation pathway has been proposed.
Phytostimulation
Phytostimulation (or rhizodegradation) is the enhancement of soil microbial activity for the degradation of organic contaminants, typically by organisms that associate with roots. This process occurs within the rhizosphere, which is the layer of soil that surrounds the roots. Plants release carbohydrates and acids that stimulate microorganism activity which results in the biodegradation of the organic contaminants. This means that the microorganisms are able to digest and break down the toxic substances into harmless form. Phytostimulation has been shown to be effective in degrading petroleum hydrocarbons, PCBs, and PAHs. Phytostimulation can also involve aquatic plants supporting active populations of microbial degraders, as in the stimulation of atrazine degradation by hornwort.
Phytovolatilization
Phytovolatilization is the removal of substances from soil or water with release into the air, sometimes as a result of phytotransformation to more volatile and/or less polluting substances. In this process, contaminants are taken up by the plant and, through transpiration, evaporate into the atmosphere. This is the most studied form of phytovolatilization, where volatilization occurs at the stem and leaves of the plant; however, indirect phytovolatilization occurs when contaminants are volatilized from the root zone. Selenium (Se) and mercury (Hg) are often removed from soil through phytovolatilization. Poplar trees are among the most successful plants for removing VOCs through this process due to their high transpiration rate.
Rhizofiltration
Rhizofiltration is a process that filters water through a mass of roots to remove toxic substances or excess nutrients. The pollutants remain absorbed in or adsorbed to the roots. This process is often used to clean up contaminated groundwater through planting directly in the contaminated site or through removing the contaminated water and providing it to these plants in an off-site location. In either case though, typically plants are first grown in a greenhouse under precise conditions.
Biological hydraulic containment
Biological hydraulic containment occurs when some plants, like poplars, draw water upwards through the soil into the roots and out through the plant, which decreases the movement of soluble contaminants downwards, deeper into the site and into the groundwater.
Phytodesalination
Phytodesalination uses halophytes (plants adapted to saline soil) to extract salt from the soil to improve its fertility.
Role of genetics
Breeding programs and genetic engineering are powerful methods for enhancing natural phytoremediation capabilities, or for introducing new capabilities into plants. Genes for phytoremediation may originate from a micro-organism or may be transferred from one plant to another variety better adapted to the environmental conditions at the cleanup site. For example, genes encoding a nitroreductase from a bacterium were inserted into tobacco and showed faster removal of TNT and enhanced resistance to the toxic effects of TNT.
Researchers have also discovered a mechanism in plants that allows them to grow even when the pollution concentration in the soil is lethal for non-treated plants. Some natural, biodegradable compounds, such as exogenous polyamines, allow the plants to tolerate concentrations of pollutants 500 times higher than untreated plants, and to absorb more pollutants.
Hyperaccumulators and biotic interactions
A plant is said to be a hyperaccumulator if it can concentrate a pollutant above a minimum concentration that varies according to the pollutant involved (for example, more than 1,000 mg/kg of dry weight for nickel, copper, cobalt, chromium or lead, or more than 10,000 mg/kg for zinc or manganese). This capacity for accumulation is due to hypertolerance, or phytotolerance: the result of adaptive evolution of the plants in hostile environments through many generations. A number of interactions may be affected by metal hyperaccumulation, including protection, interference with neighbouring plants of different species, mutualism (including mycorrhizae, pollen and seed dispersal), commensalism, and biofilm.
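The thresholds quoted above amount to a simple screening rule, encoded in the sketch below; the function and data layout are illustrative only, and metals not listed in this paragraph are omitted.

    # Screening rule for hyperaccumulator status, using the dry-weight
    # thresholds quoted above (mg/kg).  Illustrative sketch only.
    THRESHOLDS_MG_PER_KG = {
        "nickel": 1000, "copper": 1000, "cobalt": 1000,
        "chromium": 1000, "lead": 1000,
        "zinc": 10000, "manganese": 10000,
    }

    def is_hyperaccumulator(metal, shoot_conc_mg_per_kg):
        """True if the measured shoot concentration meets the threshold."""
        return shoot_conc_mg_per_kg >= THRESHOLDS_MG_PER_KG[metal]

    print(is_hyperaccumulator("zinc", 12000))   # True
    print(is_hyperaccumulator("nickel", 400))   # False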
Tables of hyperaccumulators
Hyperaccumulators table – 1 : Al, Ag, As, Be, Cr, Cu, Mn, Hg, Mo, Naphthalene, Pb, Pd, Pt, Se, Zn
Hyperaccumulators table – 2 : Nickel
Hyperaccumulators table – 3 : Radionuclides (Cd, Cs, Co, Pu, Ra, Sr, U), Hydrocarbons, Organic Solvents.
Phytoscreening
As plants can translocate and accumulate particular types of contaminants, plants can be used as biosensors of subsurface contamination, thereby allowing investigators to delineate contaminant plumes quickly. Chlorinated solvents, such as trichloroethylene, have been observed in tree trunks at concentrations related to groundwater concentrations. To ease field implementation of phytoscreening, standard methods have been developed to extract a section of the tree trunk for later laboratory analysis, often by using an increment borer. Phytoscreening may lead to more optimized site investigations and reduce contaminated site cleanup costs.
See also
Bioaugmentation
Biodegradation
Bioremediation
Constructed wetland
De Ceuvel
Mycorrhizal bioremediation
Mycoremediation
Phytotreatment
References
Bibliography
"Phytoremediation Website" — Includes reviews, conference announcements, lists of companies doing phytoremediation, and bibliographies.
"An Overview of Phytoremediation of Lead and Mercury" June 6 2000. The Hazardous Waste Clean-Up Information Web Site.
"Enhanced phytoextraction of arsenic from contaminated soil using sunflower" September 22 2004. U.S. Environmental Protection Agency.
"Phytoextraction", February 2000. Brookhaven National Laboratory 2000.
"Phytoextraction of Metals from Contaminated Soil" April 18, 2001. M.M. Lasat
July 2002. Donald Bren School of Environment Science & Management.
"Phytoremediation" October 1997. Department of Civil Environmental Engineering.
"Phytoremediation" June 2001, Todd Zynda.
"Phytoremediation of Lead in Residential Soils in Dorchester, MA" May, 2002. Amy Donovan Palmer, Boston Public Health Commission.
"Technology Profile: Phytoextraction" 1997. Environmental Business Association.
"Ancona V, Barra Caracciolo A, Campanale C, De Caprariis B, Grenni P, Uricchio VF, Borello D, 2019. Gasification Treatment of Poplar Biomass Produced in a Contaminated Area Restored using Plant Assisted Bioremediation. Journal of Environmental Management"
External links
Missouri Botanical Garden (host): Phytoremediation website — Review Articles, Conferences, Phytoremediation Links, Research Sponsors, Books and Journals, and Recent Research.
International Journal of Phytoremediation — devoted to the publication of current laboratory and field research describing the use of plant systems to remediate contaminated environments.
Using Plants To Clean Up Soils — from Agricultural Research magazine
New Alchemy Institute — co-founded by John Todd (Canadian biologist)
Bioremediation
Environmental soil science
Environmental engineering
Environmental terminology
Pollution control technologies
Conservation projects
Ecological restoration
Soil contamination
Biotechnology
Sustainable technologies | Phytoremediation | Chemistry,Engineering,Biology,Environmental_science | 4,323 |
47,688,731 | https://en.wikipedia.org/wiki/Electromembrane%20extraction | Electromembrane extraction (EME) is a miniaturized liquid-liquid extraction technique developed for sample preparation of aqueous samples prior to analysis by chromatography, electrophoresis, mass spectrometry, and related techniques in analytical chemistry. EME involves the use of a small supported liquid membrane (SLM) sustained in the wall of a porous hollow fiber, and application of an electrical field across the SLM.
Principle
Target compounds are extracted from an aqueous sample, through a μL-volume of organic solvent sustained as a thin supported liquid membrane (SLM) in the pores in the wall of a porous hollow fiber, and into an acceptor solution inside the lumen of the hollow fiber. Extraction is based on electrokinetic migration in an electrical field sustained across the SLM. The volume of the acceptor solution is typically 5-25 μL. The acceptor solution is an aqueous solution, and can be analyzed by liquid chromatography (LC) or capillary electrophoresis (CE).
Advantages
EME is closely related to liquid-phase microextraction (LPME) and provides high pre-concentration and efficient sample clean-up. In addition, because the extraction is performed under the influence of an electrical field, the extraction selectivity can be controlled by the direction and the magnitude of the electrical field.
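The pre-concentration obtained in EME is often summarised as an enrichment factor, the ratio of the analyte concentration in the acceptor to that in the original sample; for a given recovery it scales with the sample-to-acceptor volume ratio. A minimal sketch with invented volumes and recovery:

    # Enrichment factor in EME: EF = C_acceptor / C_sample
    #                              = recovery * (V_sample / V_acceptor).
    # The volumes and recovery below are illustrative, not from this article.
    def enrichment_factor(v_sample_ul, v_acceptor_ul, recovery):
        return recovery * (v_sample_ul / v_acceptor_ul)

    # A 500 uL sample extracted into a 20 uL acceptor at 80% recovery:
    print(enrichment_factor(500.0, 20.0, 0.80))  # 20-fold enrichment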
Further reading
Knut Fredrik Seip, Astrid Gjelstad, Stig Pedersen-Bjergaard, The potential of electromembrane extraction for bioanalytical applications, Bioanalysis 7 (2015) 463-480
V. Krishna Marothu, M. Gorrepati, R. Vusa, Electromembrane extraction-a novel extraction technique for pharmaceutical, chemical, clinical and environmental analysis, Journal of Chromatographic Science 51 (2013) 619-631.
References
Extraction (chemistry)
Laboratory techniques | Electromembrane extraction | Chemistry | 397 |
25,682,562 | https://en.wikipedia.org/wiki/Morris%20J.%20Berman%20oil%20spill | The Morris J. Berman oil spill occurred on January 7, 1994, when the Morris J. Berman, a single-hull 302-foot-long barge, with the capacity to carry more than 3 million gallons of oil, collided with a coral reef near San Juan, Puerto Rico, causing the release of 750,000 gallons of heavy grade oil. The spill affected the tourism and fishing industries as well as wildlife along the shores of Puerto Rico, Isla de Culebra, and Isla de Vieques. The spill had major long-lasting impacts on the biological and natural resources of the entire Puerto Rican area. This spill was also the first to occur in U.S. waters after the passing of the Oil Pollution Act of 1990.
The incident
The Morris J. Berman left the Port of San Juan, Puerto Rico in the early morning hours of January 7, 1994 in tow behind the tug boat Emily S., carrying a total of 1.5 million gallons of oil. The barge was nearly two hours into its trip to Antigua when the towing cable connecting the Berman to the Emily snapped for the first time. Repairs were made to the cable, and the voyage continued. At 3:50 AM, the cable snapped a second time, and the barge broke free striking a coral reef off of Escambron Beach, near San Juan. The impact caused extensive damage to two of the barge's nine holding tanks, releasing 750,000 gallons of Number 6 fuel oil. An intense northerly wind added to the damage done to the vessel by pounding the stranded barge with heavy surf. Oil washed on shore in Puerto Rico between Punta Boca Juana and Punta Vacia Talega. Oil also affected the shores of Culebra, an island to the east of Puerto Rico. Most of the areas threatened and impacted by the spill were sheltered lagoons, biota-rich intertidal zones, and beach dunes, which are all habitats for many species. Several historic locations were also threatened and impacted by the oil, some of them being pre-Columbian ruins, historic walls and bridges, and Spanish-era forts. The vast majority of all of these affected areas were located on the north shore of Puerto Rico.
Environmental effects
The spill affected most of the environmental resources present in the shore and offshore areas near San Juan. An estimated 1,100 square miles of surface waters along the north coast of Puerto Rico were projected to have felt effects from the oil, and 103 miles of ocean shoreline and 66 miles of bay shoreline were projected to have been covered in oil. Seagrasses and sediments were also contaminated in the spill area: an estimated 40,000 square meters of seagrasses in lagoons near the spill area were impacted by sunken oil, causing most of the grasses to die. All of the affected areas were covered in varying amounts of oil, which resulted in large amounts of pollution, loss of tourism, loss of habitat, and loss of life for living organisms.
The spill made many important habitat areas unusable by many species for a period of time. One example of lost habitat was the loss of intertidal bay and shallow waters that were used by fish as nursery areas. This affected the population levels of certain species of fish in the years following the spill. Another example of lost habitat was the reef area lost near the grounding site of the Berman. Over 152 species were estimated to have resided in these reefs, including fish, shellfish, algae, and sponges. The reef was the main source of food and shelter for all of these species. The reef also provided a natural breakwater that helped to diminish storm surge from hurricanes. The effectiveness of the reef as a breakwater and habitat area was greatly decreased after it was damaged by the barge, and large pieces of the reef were lost after the barge ran into it.
Effects on tourism
Oil was present at many beach areas surrounding the spill site. Beaches near the immediate site of the spill were closed to all visitors, either because of the presence of heavy oil or cleanup efforts. While other beaches in the surrounding areas did have some oil, they were left open to visitors. People were strongly discouraged from visiting these open beaches, though. Tourists and residents that continued to visit oil affected beaches were not able to use the beach normally due to the presence of oil. Beach goers reported damaged swimming gear as well as headaches that were caused by the oil fumes.
The San Juan National Historic Site was also affected after the spill. Due to its close proximity to the water, intense oil vapors were present inside of the fort located at the site for up to three weeks following the spill. National Park Service workers reported that the fumes were strong enough to induce headaches.
The fishing industry was also affected by the spill. Both recreational and commercial fishing are vital parts of Puerto Rico's economy. Fishing was not able to be done in oil covered waters. This resulted in the loss of potential charter fishing trips and seafood sales. Normal fishing operations were able to eventually resume once the cleanup was complete.
Due to the lack of appropriate data, it is impossible to determine the exact amount of tourism dollars that were lost because of the spill. It is reasonable to infer, though, that the negative results of the spill felt in these tourist areas did indeed deter some tourists from visiting Puerto Rico. The negative societal impacts described in this section were all caused by negative ecological impacts.
Effects on wildlife
In the days following the incident, an estimated 5,268 organisms, the majority of which were dead, washed ashore. Many live organisms were also found, both on and offshore, covered in oil. The 5,268 organisms, representing 152 species, were picked up after the spill by scientists and volunteers from the Caribbean Stranding Network (CSN) and various other organizations. The organisms found, both living and dead, included sponges, anemones, sea worms, crustaceans, mollusks, sea stars, sea urchins, fish, birds, and sea turtles.
Biological resources in the spill area
Several different biological resources were present in the initial areas affected by the spill, some of which include the sandy-intertidal invertebrate communities, rocky-intertidal invertebrate communities, and coastal and offshore communities of fish (many of which held much importance to commercial and recreational fishing). Many endangered and protected marine mammals also used the affected areas during their times of migration. Some of these mammals include the West Indian manatee, several species of dolphins, humpback whales, and sperm whales. Even though these species of whales frequented the spill area, no whales were observed to have been affected by the oil. Three species of endangered sea turtles, the green turtle, the hawksbill turtle, and the leatherback turtle, also frequently nested in the affected areas. The situation for these turtles was critical, as the turtle nesting season was to begin in the months following the spill. Several endangered and threatened species of birds also used areas surrounding the spill to rest and feed. These species of birds include the royal tern, sandwich tern, common tern, roseate tern, least tern, brown pelican, magnificent frigatebird, Audubon's shearwater, American coot, white-cheeked pintail, osprey, and the peregrine falcon. All of the endangered and threatened species listed, plus other non-threatened species, were affected in some way by the spill. Most of the species affected were in the immediate landing site of the barge. Many species at the sinking site of the barge were also affected. Many of these species resided in areas of high ecological value, such as the shores of the Piñones State Forest.
Most affected species
The Sally Lightfoot crab, periwinkles of the genus Littorina, the common West Indian chiton Chiton tuberculatus, the rock-boring urchin, and the brown booby were the most affected species after the spill. The Sally Lightfoot crab was the most affected crustacean, largely because it was the most abundant crustacean in the area; it is prominent in the Caribbean as well as on the west coast of the United States. The periwinkle and the common West Indian chiton were the most affected of the mollusks observed. Of all sea urchins observed, the rock-boring urchin fared the worst; rock-boring urchins were the most affected of all observed species, accounting for 29% of all recorded affected organisms. Of all birds observed, the brown booby was the most affected. The main reason these species were the most affected was simply that they were the most abundant species located in the spill area. Only 63% of the sea turtles and birds that were treated after the spill survived.
Legal response
The owners of the Morris J. Berman initially assumed responsibility for the spill, but the ten million dollars that was provided from their insurance policy for oil spill cleanup was quickly spent. The federal government provided funding for the spill on January 14 and it became a United States Coast Guard directed response.
The Governments of Puerto Rico, the United States, and other groups, sued the owners of the two vessels for clean up costs and natural resource damages. Criminal prosecutions were brought against the owners of the two vessels due to issues of crew negligence and the act of knowingly sending a vessel to sea in an unseaworthy condition. The three owners of the barge were charged with criminal negligence based on laws from the Oil Pollution Act of 1990. One of the barge managers and the captain and first mate of the 'Emily S' were also charged with felonies for violations of the Clean Water Act. The court case was officially settled on January 19. The Governments of Puerto Rico and the United States were paid a total of $83.5 million from Metlife Capital Corp, Water Quality Insurance Syndicate and Caribbean Petroleum Corporation. The cargo that was spilled from the Berman was owned by the Caribbean Petroleum Corporation.
Environmental response
Treatment of wildlife
After the spill, as part of the Local Spill Response Plan for Puerto Rico and the US Virgin Islands, the CSN (Caribbean Stranding Network) was sent out to document damage to biological resources and to prevent further harm to the environment and living organisms. The CSN collected many organisms, both live and dead, after the spill in order to document the damage done to them and to treat as many of the living organisms as possible. Most of the organisms collected by this group were from the north shore of Puerto Rico, but some were recovered from the south, east, and west shores, as well as near the shores of Isla Culebrita and Isla de Vieques.
Collected living wildlife were brought to a temporary triage facility where crabs, birds, and sea turtles were treated for oil contamination and other injuries. Of all the animals collected, these were the only ones that could be treated; the rest were either too small or in too poor a condition. Nearly 400 crabs brought to the facility were successfully treated and released. Twenty-eight birds and two sea turtles were also brought to the facility, of which 19 were successfully treated and released and 11 died at some point during treatment. All affected organisms were held until they were deemed completely healthy by a veterinarian; the oiled birds and turtles required an average of 30 days of treatment before being released. These organisms were given doses of olive oil and non-steroidal antibiotics before they were cleaned, to provide relief from the effects of the toxic oil. Solutions of olive oil, Simple Green, and BioSolve were then used to clean the organisms in tubs filled with lukewarm water, after which the organisms were rinsed and dried.
Cleanup of oil
Over 1,000 workers from 15 agencies and groups came together after the spill to clean up affected areas, putting in over 1.5 million man-hours of work in total. The complete cleanup and assessment of the spill took 114 days.
Eight days after the spill, on January 15, the barge was towed to an area 37 km northeast of San Juan and intentionally sunk under government supervision, to a depth of 2 km in an underwater canyon, where it remains today. This decision was made after it was determined that transporting the remaining oil off the barge would not be feasible. The barge was sunk at an isolated site with little ship traffic to avoid further damage. The cleanup of the majority of the oil was completed by April 1994, at a cost of approximately $130 million.
The general consensus among newspaper articles and reports is that the overall response to the spill was a success, based on the parameters set forth by the Oil Pollution Act of 1990. Despite this overall success, several areas for improvement were identified. One glaring example was the conduct of the crew of the Emily S, who lacked proper supplies and were not adequately trained to deal with the situation they faced. The cable that broke during the incident was in severe disrepair and had already broken once, five months before the spill. A new cable was supposed to be installed; had it been, the spill could have been avoided entirely. The crew also did not have sufficient supplies on board to properly repair the broken line. Federal investigations following the spill also found that negligent behavior was encouraged aboard the Emily S. This made it clear that the crews of oil-carrying vessels needed to be properly trained and equipped.
Unlike the crew of the Emily S, the governments of the United States and Puerto Rico were well prepared to deal with the spill, and they met most of the requirements set forth by the Oil Pollution Act of 1990. For example, a spill response plan for the area had been developed, and it was executed nearly to perfection. Millions of dollars' worth of oil cleanup equipment was already stationed in Puerto Rico and was deployed within hours of the spill. As mentioned earlier, despite the overall success of Puerto Rico's oil spill response plans, there were still some areas for improvement: mechanical failures in some skimmer boats, inappropriate use of oil-absorbent materials, and an insufficient number of properly trained personnel. There were also problems with communication between agencies, as well as some organizational issues during the cleanup. The Berman spill served as a field test of how well the plans set forth by the Oil Pollution Act of 1990 worked, and provided many examples of how future spill responses could be improved. Several, though not all, of the lessons learned during this spill shaped response efforts for later spills such as the BP oil spill in the Gulf of Mexico in 2010.
See also
List of oil spills
References
External links
Collection of News Articles regarding the spill
The US Coast Guard's Contingency Plan for oil spills near Puerto Rico
NOAA Oil Types
Petroleum products
Oils
Liquid fuels
Oil spills in the United States
1994 industrial disasters
Disasters in Puerto Rico
1994 in the environment
1994 in Puerto Rico
1994 disasters in the United States
January 1994 events in North America | Morris J. Berman oil spill | Chemistry | 3,095 |
5,712,360 | https://en.wikipedia.org/wiki/2-Phosphoglyceric%20acid | 2-Phosphoglyceric acid (2PG), or 2-phosphoglycerate, is a glyceric acid that serves as the substrate in the ninth step of glycolysis, in which enolase converts it into phosphoenolpyruvate (PEP); this is the penultimate step in the conversion of glucose to pyruvate.
In glycolysis
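Enolase catalyzes the dehydration of 2-phosphoglycerate; written out (a standard biochemistry identity, added here for clarity rather than taken from the original article):

2-phospho-D-glycerate ⇌ phosphoenolpyruvate + H2O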
See also
3-Phosphoglyceric acid
References
Organophosphates
Glycolysis | 2-Phosphoglyceric acid | Chemistry,Biology | 114 |
698,010 | https://en.wikipedia.org/wiki/Smash%20product | In topology, a branch of mathematics, the smash product of two pointed spaces (i.e. topological spaces with distinguished basepoints) X and Y is the quotient of the product space X × Y under the identifications (x, y₀) ∼ (x₀, y) for all x in X and y in Y. The smash product is itself a pointed space, with basepoint being the equivalence class of (x₀, y₀). The smash product is usually denoted X ∧ Y. The smash product depends on the choice of basepoints (unless both X and Y are homogeneous).
One can think of X and Y as sitting inside X × Y as the subspaces X × {y₀} and {x₀} × Y. These subspaces intersect at a single point: (x₀, y₀), the basepoint of X × Y. So the union of these subspaces can be identified with the wedge sum X ∨ Y. In particular, X × {y₀} in X × Y is identified with X in X ∨ Y, and likewise {x₀} × Y with Y. In X ∨ Y, the subspaces X and Y intersect in the single point x₀ ∼ y₀. The smash product is then the quotient X ∧ Y = (X × Y) / (X ∨ Y).
The smash product shows up in homotopy theory, a branch of algebraic topology. In homotopy theory, one often works with a different category of spaces than the category of all topological spaces. In some of these categories the definition of the smash product must be modified slightly. For example, the smash product of two CW complexes is a CW complex if one uses the product of CW complexes in the definition rather than the product topology. Similar modifications are necessary in other categories.
Examples
The smash product of any pointed space X with a 0-sphere (a discrete space with two points) is homeomorphic to X.
The smash product of two circles is a quotient of the torus homeomorphic to the 2-sphere.
More generally, the smash product of two spheres S^m and S^n is homeomorphic to the sphere S^(m+n).
The smash product of a space X with a circle is homeomorphic to the reduced suspension of X: ΣX ≅ X ∧ S^1.
The k-fold iterated reduced suspension of X is homeomorphic to the smash product of X and a k-sphere: Σ^k X ≅ X ∧ S^k (the identity is spelled out just after this list).
In domain theory, taking the product of two domains (so that the product is strict on its arguments).
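The suspension examples above combine into a one-line computation, reconstructed here under the standard assumption that one works in a convenient category where the smash product is associative:

Σ^k X ≅ X ∧ S^1 ∧ ⋯ ∧ S^1 (k copies of S^1) ≅ X ∧ S^k, using S^m ∧ S^n ≅ S^(m+n).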
As a symmetric monoidal product
For any pointed spaces X, Y, and Z in an appropriate "convenient" category (e.g., that of compactly generated spaces), there are natural (basepoint preserving) homeomorphisms X ∧ Y ≅ Y ∧ X and (X ∧ Y) ∧ Z ≅ X ∧ (Y ∧ Z).
However, for the naive category of pointed spaces, this fails, as shown by the counterexample X = Y = ℚ and Z = ℕ found by Dieter Puppe. A proof due to Kathleen Lewis that Puppe's counterexample is indeed a counterexample can be found in the book of Johann Sigurdsson and J. Peter May.
These isomorphisms make the appropriate category of pointed spaces into a symmetric monoidal category with the smash product as the monoidal product and the pointed 0-sphere (a two-point discrete space) as the unit object. One can therefore think of the smash product as a kind of tensor product in an appropriate category of pointed spaces.
Adjoint relationship
Adjoint functors make the analogy between the tensor product and the smash product more precise. In the category of R-modules over a commutative ring R, the tensor functor (− ⊗R A) is left adjoint to the internal Hom functor Hom(A, −), so that Hom(X ⊗R A, Y) ≅ Hom(X, Hom(A, Y)).
In the category of pointed spaces, the smash product plays the role of the tensor product in this formula: if X and A are compact Hausdorff then we have an adjunction
Maps∗(X ∧ A, Y) ≅ Maps∗(X, Maps∗(A, Y)),
where Maps∗(A, Y) denotes the space of continuous maps that send basepoint to basepoint, equipped with the compact-open topology.
In particular, taking A to be the unit circle S^1, we see that the reduced suspension functor Σ is left adjoint to the loop space functor Ω: Maps∗(ΣX, Y) ≅ Maps∗(X, ΩY).
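Unwinding the adjunction makes the derivation of this fact explicit (a reconstruction under the same compactness assumptions, not verbatim from the article):

Maps∗(ΣX, Y) ≅ Maps∗(X ∧ S^1, Y) ≅ Maps∗(X, Maps∗(S^1, Y)) = Maps∗(X, ΩY).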
Notes
References
Topology
Homotopy theory
Operations on structures | Smash product | Physics,Mathematics | 745 |
63,553,875 | https://en.wikipedia.org/wiki/List%20of%20unproven%20methods%20against%20COVID-19 | There are many fake or unproven medical products and methods that claim to diagnose, prevent or cure COVID-19. Fake medicines sold for COVID-19 may not contain the ingredients they claim to contain, and may even contain harmful ingredients. In March 2020, the World Health Organization (WHO) released a statement recommending against taking any medicines in an attempt to treat or cure COVID-19, although research on potential treatment was underway, including the Solidarity trial spearheaded by WHO. The WHO requested member countries to immediately notify them if any fake medicines or other falsified products were discovered. There are also many claims that existing products help against COVID-19, which are spread through rumors online rather than conventional advertising.
Anxiety about COVID-19 makes people more willing to "try anything" that might give them a sense of control of the situation, making them easy targets for scams. Many false claims about measures against COVID-19 have circulated widely on social media, but some have been circulated by text, on YouTube, and even in some mainstream media. Officials advised that before forwarding information, people should think carefully and look it up. Misinformation messages may use scare tactics or other high-pressure rhetoric, claim to have all the facts while others do not, and jump to unusual conclusions. The public was advised to check the information source's source, looking on official websites; some messages have falsely claimed to be from official bodies like UNICEF and government agencies. Arthur Caplan, head of medical ethics at New York University's medical school, had simpler advice for COVID-19 products: "Anything online, ignore it".
Products which claim to prevent COVID-19 risk giving dangerous false confidence and increasing infection rates. Going out to buy such products may encourage people to break stay-at-home orders, reducing social distancing. Some of the pretend treatments are also poisonous; hundreds of people have died from using fake COVID-19 treatments.
Diagnosis
Medically-approved tests detect either the virus or the antibodies the body makes to fight it off. Government health departments and healthcare providers provide tests to the public. There have been fraudsters offering fake tests; some have offered tests in exchange for money, but others have said the test is free in order to collect information that could later be used for identity theft or medical insurance fraud. Some fraudsters have claimed to be local government health authorities. People have been advised to contact their doctor or genuine local government health authorities for information about getting tested. Fake tests have been offered on social media platforms, by e-mail, and by phone.
Counterfeit testing kits, which were originally used for testing HIV and monitoring glucose levels, were touted as coronavirus diagnostics.
Holding one's breath for 10 seconds was claimed to be an effective self-test for the coronavirus. The WHO stated that this test did not work and should not be used.
Manufacturer Bodysphere briefly sold what it claimed were coronavirus antibody tests. It falsely marketed them as having received FDA Emergency Use Authorization. It also falsely claimed they were made in the United States.
Prevention and cure claims
Widely circulated rumours have made many unfounded claims about methods of preventing and curing infection with SARS-CoV-2. Among others:
Disinfection-related methods
Hand cleaning methods
Hand sanitizer is not more effective than washing in plain soap and water. Washing in soap and water for at least 20 seconds is recommended by the US Centers for Disease Control and Prevention as the best way to clean hands in most situations. However, if soap and water are not available, a hand sanitizer that is at least 60% alcohol can be used instead, unless hands are visibly dirty or greasy.
Soap is effective at removing coronaviruses, but antibacterial soap is not better than plain soap.
Red soap is not more germicidal than soaps of other colors, contrary to claims in a popular Facebook post, said Ashan Pathirana, the registrar of Sri Lanka's Health Promotion Bureau (HPB); he suggested that it might be a reference to carbolic soap.
Hand sanitizer prepared at home by mixing rum, bleach and fabric softener has been widely promoted as effective at preventing COVID-19 in YouTube videos in the Philippines. The Integrated Chemists of the Philippines (ICP) released statements saying that alcoholic drinks contain only about 40% alcohol, less than the 70% needed in effective hand sanitizers, and that mixing bleach and alcohol creates chloroform. The manufacturers of the brands of rum and bleach used in the videos have both publicly issued statements calling the recipe dangerous and urging people not to use it.
Vodka was alleged to be an effective homemade hand sanitizer, or an ingredient in one. The company whose brand was alleged to be protective responded to the rumours by citing the US Centers for Disease Control and Prevention statement that hand sanitizers need to be at least 60% alcohol to be effective, and stating that their product was only 40% alcohol (see the arithmetic sketch at the end of this subsection).
Claims that vinegar was more effective than hand sanitizer against the coronavirus were made in a video that was shared in Brazil. That was disproved, as "there is no evidence that acetic acid is effective against the virus" and, even if there was, "its concentration in common household vinegar is low".
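The arithmetic behind the 60% guideline is simple: the alcohol concentration of a mixture is a volume-weighted average of its ingredients' concentrations, so it can never exceed that of the strongest ingredient, and 40% spirits therefore cannot yield an effective sanitizer. A minimal illustrative sketch in Python (the ingredient volumes are hypothetical, not from any recipe above):

def mixture_abv(ingredients):
    """Alcohol-by-volume of a mixture.

    ingredients: list of (volume_ml, abv_fraction) pairs.
    The result is a volume-weighted average, so it can never exceed
    the highest ABV among the ingredients.
    """
    total_volume = sum(volume for volume, _ in ingredients)
    total_alcohol = sum(volume * abv for volume, abv in ingredients)
    return total_alcohol / total_volume

# Hypothetical homemade mix: 200 ml of 40% spirits plus 100 ml of 0% ABV gel.
mix = [(200.0, 0.40), (100.0, 0.00)]
abv = mixture_abv(mix)
print(f"Mixture ABV: {abv:.0%}")              # 27%, well below 60%
print(f"Meets 60% guideline: {abv >= 0.60}")  # False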
Gargling, nasal rinses, and inhalation
Inhaling bleach or other disinfectants is dangerous and will not protect against COVID-19. They can cause irritation and damage to tissues, including the eyes. They are poisonous, and the WHO has warned against taking them internally and advised keeping them out of the reach of children. They are safe and effective when used to disinfect surfaces such as countertops, but are not safe for human consumption.
Controversial alternative medicine proponents Joseph Mercola and Thomas Levy claimed that inhaling 0.5–3% hydrogen peroxide solution using a nebulizer could prevent or cure COVID-19. They cite research using hydrogen peroxide to sterilize surfaces, incorrectly asserting that it can therefore be used to clean human airways. A tweet from Mercola advertising this method was removed from Twitter on April 15, 2020, for violating the platform rules. Inhalation of hydrogen peroxide can cause upper airway irritation, hoarseness, inflammation of the nose, and burning sensations in the chest. At high concentrations, inhaling hydrogen peroxide can cause permanent neurological damage or death. Though hydrogen peroxide has been advocated as an alternative and complementary medicine for multiple disease processes, including COPD, asthma, pneumonia and bronchitis, there appear to be no trials regarding its use. One case report described a possible side effect of chronic (five-year) and subacute hydrogen peroxide inhalation: interstitial lung disease in the form of acute pneumonitis.
Gargling with saltwater was said to kill the coronavirus in claims on Weibo, Twitter and Facebook. These claims were falsely attributed to respiratory expert Zhong Nanshan, Wuhan Union Hospital, and a number of other people and institutions, sometimes with the attribution changed and the actual advice copied verbatim. Zhong Nanshan's medical team published a refutation, pointing out that the virus settles in the respiratory tract, which cannot be cleaned by rinsing the mouth. The WHO also said it had no convincing evidence that this method would provide any protection against COVID-19.
Saltwater sprays were given at the door of the River of Grace Community Church in South Korea in the false belief that this would protect people from the virus; the same unsterilized spray bottle was used on everyone, and may have increased the risk. Subsequently, 46 devotees were infected with the virus.
"Corona-Cure Coronavirus Infection Prevention Nasal Spray" was fraudulently marketed online.
There is no evidence that saline nasal rinses help prevent COVID-19.
Temperature
Cold weather and snow do not kill the COVID-19 virus. The virus lives in humans, not in the outdoors, though it can survive on surfaces. Even in cold weather, the body will stay at 36.5–37 degrees Celsius inside, and the COVID-19 virus will not be killed.
Hot and humid conditions do not prevent COVID-19 from spreading, either. There have been many COVID-19 cases in countries with hot and humid climates.
Drinking warm water or taking hot baths will not cure people of COVID-19. It has been claimed that these statements were made by UNICEF in coronavirus prevention guidelines, but UNICEF officials refuted this.
High temperatures cannot be used on humans to kill the COVID-19 virus. Taking very hot baths can cause burns, but the body will stay at 36.5–37 degrees Celsius inside, and the COVID-19 virus will not be killed.
Hot saunas and hand or hair dryers do not prevent or treat COVID-19.
Steam inhalation was suggested on Facebook as a cure for coronavirus infection. Inhaling steam will not treat or cure COVID-19.
Radiation
Exposing people to sunlight will not prevent or cure COVID-19. It has been falsely claimed that UNICEF said so, in coronavirus prevention guidelines; UNICEF officials refuted this. The virus can spread in even the sunniest of weather.
UV-C light cannot be used on humans to kill the COVID-19 virus. Attempting to use UV to sterilize people can cause skin irritation and damage the eyes.
Other disinfection-related methods
White color does not have a 'harmful effect' on coronavirus, as claimed in a widely shared Facebook post; nor does the colour of a handkerchief have an effect on the virus, according to Ashan Pathirana, the registrar of Sri Lanka's Health Promotion Bureau (HPB). Using handkerchiefs or tissues of other colours to sneeze or cough into will be just as effective.
Posts on social media claimed that volcanic ash from the eruption of the Taal Volcano on January 12, 2020, in the Philippines was the cause of low infection rates in the country, falsely stating that it could kill the virus and had "anti-viral" and "disinfectant qualities".
Drinking bleach is extremely dangerous and will not protect against COVID-19. Bleach is poisonous and damages internal organs. Drinking it can cause disability and death. The WHO has warned not to drink the substance.
Protective equipment
USB flash drives were being sold for $370 as a "5G Bioshield", purportedly offering protection from the non-existent threat of infection transmitted via 5G mobile telephone radio waves.
Over 34,000 counterfeit surgical masks—which may have been touted as providing coronavirus prevention—were seized by Europol in March 2020.
Making masks out of wet-wipes has not been officially recommended as an alternative to surgical masks, contrary to some claims. Some public health authorities have issued directions for making and using homemade cloth face masks. See face masks during the COVID-19 pandemic.
Aerosol boxes—acrylic boxes placed over patients' heads during aerosol-generating procedures such as intubation—can potentially increase dispersal of COVID-containing aerosol particles if a patient coughs.
Drugs of abuse
A mix containing amphetamines, cocaine, and nicotine, on sale on the dark web for US$300, was fraudulently presented as a vaccine against COVID-19.
Cocaine does not protect against COVID-19. Several viral tweets purporting that snorting cocaine would sterilize one's nostrils of the coronavirus spread around Europe and Africa. In response, the French Ministry of Health released a public service announcement debunking this claim, saying "No, cocaine does NOT protect against COVID-19. It is an addictive drug that causes serious side effects and is harmful to people’s health." The World Health Organization also debunked the claim. Facebook flagged the rumour as misinformation.
A claim that cannabis could protect against the coronavirus appeared on YouTube, along with a petition to legalize cannabis in Sri Lanka. Sri Lankan Health authorities pointed out that there was no evidence that cannabis protected against COVID-19. A fake webpage purporting to be a Fox News article also claimed that CBD oil was a potential cure.
The chloroform- and ether-based drug loló was said to cure the disease in messages spread in Brazil.
Boiled betel leaves will not cure COVID-19.
Industrial methanol was claimed to cure the coronavirus. The alcohol in drinks is ethanol, while methanol is acutely poisonous. The WHO has warned not to drink ethanol or methanol in an effort to kill the virus. Iranian media reported nearly 300 dead and 1,000 hospitalized (or 600 dead and 3,000 hospitalized, according to an unidentified doctor in the Health Ministry) as of April 8, 2020. Alcoholic beverages are illegal in Iran, resulting in a black market in liquor made illegally.
Contrary to some reports, drinking ethyl alcohol also does not protect against COVID-19, and can increase health risks (short term and long term).
Commercial products
There are many fraudulent and unproven products that are claimed to treat or protect against COVID-19.
"Virus Shut Out Protection" pendants, supposedly from Japan, worn around one's neck, have been sold with claims that they prevent infection. The U.S. Environmental Protection Agency said that no evidence had been presented that they work, and took legal action against importers.
A Twitter post claimed that scientists from the "Australian Medical University" had developed a vaccine for the coronavirus. It accepted 0.1 Bitcoin as payment for a "vaccination kit" and promised shipping in 5–10 days. The linked website was later removed.
Homeopathic 'Influenza complex' has been marketed as a preventive measure for COVID-19 by a man in New Zealand, who claimed to have identified and imbued his product with the "frequency" of COVID-19 using a "radionics machine". Homeopathic remedies such as this one have no active ingredients and cannot protect against flu, colds, or COVID-19, said University of Auckland associate professor and microbiologist Dr Siouxsie Wiles. The NZ Ministry of Health said that COVID-19 was not a strain of flu, and criticized products which claim to prevent COVID-19 as giving dangerous false confidence.
Homeopathic treatment with Arsenicum album is claimed as an "add on" to prevent COVID-19.
A person living in California marketed pills for curing coronavirus, although the contents of the pill were not made public. He was arrested for attempted fraud, which carries up to 20 years of prison.
Claims that colloidal silver solution can kill over 650 pathogens including coronavirus prompted antifraud actions. The National Center for Complementary and Integrative Health warned on their website against taking colloidal silver as dietary supplement. Seven warning letters were filed to companies for selling fraudulent products. Preacher Jim Bakker had been claiming that the colloid silver he sold (and only his) could be used to treat COVID-19. Colloidal silver is not an effective treatment for anything, and may interfere with other medications or cause permanent argyria (blue-gray skin discoloration).
Toothpastes, dietary supplements and creams were being sold illegally in the US, with claims that they could cure coronavirus infection. Alex Jones was directed by the USFDA to cease promoting these products as a cure.
Celebrity chef Pete Evans claimed that a device called the BioCharger NG Subtle Energy Platform, costing US$14,990, could cure the coronavirus. He faced backlash, taking down his advertisement after the Australian Medical Association dismissed the product as a "fancy light machine". The Australian distributors, Hydrogen Technologies Pty Ltd, stated the device would help "open the airways of Coronavirus victims by reducing the inflammation it causes in the lungs" as well as other unproven therapeutic claims. Evans was fined AU$25,200 by the Therapeutic Goods Administration for his false claims and the company was fined AU$50,400 for false advertising.
"Miracle Mineral Solution" (MMS) is a mixture of sodium chlorite (with table salt and some other trace minerals) and an acid, which reacts with the sodium chlorite to produces a solution of unstable chlorous acid, which becomes chlorite, chlorate, and chlorine dioxide, an industrial bleach. The FDA has warned against using it, saying there is no evidence that it cures, prevents, or treats COVID-19, and that it is seriously hazardous to health. The "Genesis II Church" calls its mail-order MMS a "sacrament", and when warned by the FDA that their claims that it could cure COVID-19 were fraudulent, vehemently refused to stop making them, saying the FDA had no authority over them, and they would never stop. Their status as a religious organization was disputed in a successful filing for a temporary restraining order on grounds of public safety, by the U.S. Attorney's Office in South Florida.
Shuanghuanglian, a mixture of plants invented in the 1960s as part of official state-sponsored TCM (traditional Chinese medicine), was advertised by the Xinhua News Agency as being able to treat the coronavirus. Posts on Weibo claimed that many people violated social distancing rules while queuing to buy it. Some attributed the news stories to stock market manipulation.
Jennings Ryan Staley, a licensed physician and owner of Skinny Beach Med Spa, was accused of selling mail-order "COVID-19 treatment packs", claiming they would protect against COVID-19 for six weeks and cure it "100%", causing the disease to disappear in hours. He has been arrested and is being "vigorously investigated" by the FBI; if convicted, he may face 20 years in prison.
A hand cream sold by the right-wing populist party leader of the party Greek Solution, Kyriakos Velopoulos, via his TV shop, is claimed to completely kill COVID-19, although it is not approved by medical authorities.
An "anti-coronavirus" mattress was advertised as being anti-fungal, anti-allergic, dustproof and waterproof and able to fight the coronavirus.
Mohanan Vaidyar, a self-proclaimed naturopath, was arrested in Kerala for claiming that he can cure COVID-19 and treating people.
Methylene chloride, commonly used as a paint stripper, was being marketed on eBay as a coronavirus disinfectant. It had been previously banned by the U.S. Environmental Protection Agency due to the risk of asphyxiation during use.
Chlorine dioxide tablets and sanitizers were marketed on Amazon.
A purportedly anti-virus lanyard called Shut Out resulted in the criminal conviction of a Georgia woman for violating the U.S. Insecticide, Fungicide, and Rodenticide Act.
Australian company Lorna Jane claimed that its "anti-virus activewear" prevents and protects against infectious diseases including COVID-19 before being fined by the Therapeutic Goods Administration $40,000 for false advertising.
Traditional Chinese Medicine
China officially promotes the use of Traditional Chinese Medicine (TCM) to treat COVID-19. Many academic papers, such as Shi et al., have been published trying to establish the effectiveness of various decoctions such as Qingfei Paidu Decoction. Most Western media hold a skeptical attitude about its effectiveness, despite many positive accounts. Much ongoing research attempts to identify effective ingredients for treating COVID-19, drawing inspiration from TCM methods.
Traditional Persian Medicine
Various studies have been conducted and reported on the effects of traditional Persian medicine formulas on SARS-CoV-2. These treatments have been studied in various clinical trials in Iran.
Botanical claims
The poisonous fruit of the datura plant was falsely promoted as a preventive measure for COVID-19, which resulted in eleven people being hospitalized in India. They ate the fruit, following the instructions from a TikTok video that propagated the misinformation. The fruit was claimed to be effective on the grounds that it resembles the coronavirus virion.
A complex Sri Lankan herbal drink was said to remedy all virus infections which can affect humans, including COVID-19, with reposts circulating widely on Facebook. The drink might reduce fever symptoms, but this might lead to the infected person infecting other people, and the mixture could have long-term health complications, according to L. P. A. Karunathilake, a senior lecturer at the Colombo University Institute of Indigenous Medicine.
Andrographis paniculata was claimed to boost the immune systems and relieve symptoms of coronavirus by a Thai media website. Pakakrong Kwankao, Head of the Empirical Evidence Centre at Chao Phraya Abhaibhubehjr Hospital, and Richard Brown, Programme Manager of Health Emergencies and Antimicrobial Resistance at the World Health Organization (WHO) in Thailand, said that there was no evidence to back these claims.
Sap from Tinospora crispa (makabuhay) plants was claimed to serve as an antibiotic against the coronavirus when used as an eye drop; it was also claimed that the coronavirus is from the skin and crawls to the eyes. These rumours circulated in the Philippines. Jaime Purificacion from the University of the Philippines’ Institute of Herbal Medicine said that while there was evidence for makabuhay as a treatment for scabies, there was no evidence that it was useful for treating coronavirus, and no evidence that putting the sap in your eyes was safe. He strongly advised against putting plant sap in the eyes, saying it could be dangerous. The WHO has stated that antibiotics do not kill the coronavirus, as they kill bacteria, not viruses.
A recipe consisting of ingredients often purported to prevent and cure colds, including lemon grass, elder, ginger, black pepper, lemon and honey, was promoted by María Alejandra Díaz, a member of the Venezuelan Constituent Assembly as a cure for COVID-19. Díaz also described the virus as a bioterrorism weapon.
The President of Madagascar, Andry Rajoelina launched and promoted Covid-Organics in April 2020: a herbal drink based on an artemisia plant as a cure that can treat and prevent COVID-19.
United States President Donald Trump and Mike Lindell participated in a July meeting at the White House regarding the use of oleandrin as a treatment for coronavirus. Lindell soon acquired a financial stake in Phoenix Biotechnology Inc, a company trying to find a profitable use for oleandrin. Oleandrin is toxic and potentially lethal to humans.
Religious and magical methods
During the pandemic the alternative anthroposophic medicine promoted at Steiner hospitals in Germany became notorious amongst legitimate medics for forcing quack remedies on sedated hospital patients, some of whom were critically ill. Remedies used included ginger poultices and homeopathic pellets claimed to contain the dust of shooting stars. Stefan Kluge, director of intensive care medicine at Hamburg's University Medical Centre said the claims of anthroposophic doctors during the pandemic were "highly unprofessional" and that they "risk[ed] causing uncertainty among patients".
Indian politician Swami Chakrapani claimed that drinking cow urine and applying cow dung on the body could cure COVID-19. He also stated that only Indian cows must be used. MP Suman Haripriya also promoted cow dung and urine. In March 2020, the All India Hindu Union hosted a "cow urine drinking party" in New Delhi, attended by 200 people. There exists no scientific evidence in favour of cow urine. Dr. Shailendra Saxena of the Indian Virological Society stated that there is no evidence that cow urine has any anti-viral effect, and that eating cow dung might transmit diseases to humans zoonotically. For example, giardiasis, E. coli infection, salmonellosis and tuberculosis can all be transmitted via bovine fecal matter.
Drinking camel urine has been advocated in the Middle East. The WHO stated that camel urine should not be drunk, in order to avoid contracting Middle East Respiratory Syndrome–related coronavirus (MERS-CoV), a more deadly, SARS-CoV-2-like species of betacoronavirus.
Televangelist Kenneth Copeland urged followers to touch their televisions as a means of vaccination by proxy, and also attempted to exorcise COVID-19 on at least three occasions by summoning "the wind of God", stating that this had destroyed the virus (either in the US or worldwide). Earlier, he had urged followers to ignore public health advisories and come to his churches, saying they could be healed there by the laying on of hands if they fell ill.
"Happy Science", a secretive pay-to-progress religious group, sells "spiritual vaccines" to prevent and cure COVID-19, advertises virus-related blessings at rates from US$100 to over US$400, and sells coronavirus-themed DVDs and CDs of Ryuho Okawa (the former stockbroker whom the group believes to be the current incarnation of the supreme deity) lecturing, which are claimed to boost immunity, . After initially defying social-distancing measures, it later closed its New York temple, and administered spiritual vaccines remotely.
A suggestion that COVID-19 could be prevented by applying a cotton ball soaked in violet oil to the anus has brought Abbas Tabrizian renewed widespread ridicule in Iran. The IRNA news agency reported that Abbas Tabrizian, who has often promoted his remedies as Prophetic medicine in opposition to standard medicine, has also claimed that COVID-19 is God's revenge against those who had bothered him. An arrest warrant has been issued for Morteza Kohansal, a follower of Abbas Tabrizian, who visited the coronavirus section of a hospital in Iran without wearing protective gear, and applied what he described as the "Prophet's perfume" to affected patients. Using Prophetic medicine has caused some Iranian clerics to delay getting standard medical treatment. Ayatollah Hashem Bathaie Golpayegani announced that he had cured himself of COVID-19 three weeks before being hospitalized. He died two days later.
Some religious hardliners in Iran have advocated that people visit shrines to be healed, and opposed government closures of pilgrimage sites.
Parliamentarian Ramesh Bidhuri of the Bharatiya Janata Party claimed that experts say using Namaste as a greeting prevents transmission of COVID-19, but using Arabic greetings such as Adab and As-salamu alaykum does not prevent it as they direct air into the mouth.
Religious and scientific misconceptions related to the coronavirus have been found to be widespread in Pakistan. According to a survey research conducted by Ipsos, 82% of people in Pakistan believed that performing wudu/ablution five times a day will keep them protected from contracting COVID-19. Meanwhile, 67% polled believed that jamaat (congregation prayer) cannot become a source of infection and 48% people believed that shaking hands cannot infect anyone since it is Sunnah.
Food and drink
Fruit
Drinking lemon in warm water has been claimed to prevent both COVID-19 and cancer by increasing vitamin C levels. This claim circulated on Facebook in English, French, Spanish, and Portuguese. There is no evidence that vitamin C was effective against coronaviruses, nor are lemons the fruit with the most vitamin C content, said Henry Chenal, director of the Integrated Bioclinical Research Centre (CIRBA) in Abidjan, Ivory Coast. The WHO said that there was no evidence that lemons would protect against COVID-19, though they recommended consuming fresh fruit and vegetables in a healthy diet.
Bananas were claimed to be able to strengthen the immune system and prevent and cure COVID-19. The claim was based on a composited video that falsely attributed the statements to researchers at the University of Queensland. The University stated that the video was faked and urged people not to share it.
Eating mango or durian will not cure COVID-19.
Onions were rumoured to be a preventive measure against COVID-19 on Facebook.
Herbs and spices
Garlic was said to prevent COVID-19 on Facebook. There is no evidence that garlic protects against COVID-19.
Hot peppers cannot prevent or cure COVID-19.
Consuming large amounts of boiled ginger after fasting for a day was rumoured to prevent or cure coronavirus on Facebook. There is no evidence that this prevents or cures any coronavirus infection, Mark Kristoffer Pasayan, a fellow at the Philippine Society for Microbiology and Infectious Diseases, said.
Juice of bittergourd, a vegetable used in traditional medicine, was suggested as a cure for COVID-19 on social media.
Consuming turmeric has been claimed to help prevent COVID-19, but the WHO says there is no evidence that it does.
Neem leaves (Azadirachta indica) were claimed to be remedies for COVID-19 in rumours that circulated in India.
Various retailers have marketed herbal products and essential oils fraudulently claimed to cure or prevent COVID-19.
Drinks and frozen foods
Drinking alcohol will not prevent or cure COVID-19, contrary to some claims. Drinking alcohol may cause subclinical immunosuppression (see "Addictive drugs" section above).
Drinking water every 15 minutes was claimed to prevent coronavirus infection. Drinking large amounts of water will not prevent or cure COVID-19, though avoiding dehydration is healthy.
Tea was said to be effective against COVID-19 in claims circulating on social media, which said that since tea contained the stimulants methylxanthine, theobromine and theophylline, it was capable of warding off the virus. These claims were falsely attributed to Dr Li Wenliang.
Fennel tea (supposedly similar to the medicine Tamiflu—itself ineffective against coronaviruses—according to a false e-mail attributed to a hospital director) was claimed to be a cure in Brazil.
So-called cures in messages spreading in Brazil included avocado and mint tea, hot whiskey and honey, essential oils, and vitamins C and D.
Facebook claims that 'gargling salt water, drinking hot liquids like tea and avoiding ice cream can stop the transmission of COVID-19' have been criticized by health professionals.
Eating ice cream and frozen foods will neither cure nor cause COVID-19, as long as they are hygienically prepared. This claim was widely attributed to UNICEF, which put out a statement saying that they had made no such claim: "To the creators of such falsehoods, we offer a simple message: STOP. Sharing inaccurate information and attempting to imbue it with authority by misappropriating the names of those in a position of trust is dangerous and wrong".
Meat
Claims that vegetarians are immune to coronavirus spread online in India, causing "#NoMeat_NoCoronaVirus" to trend on Twitter. Eating meat does not have an effect on COVID-19 spread, except for people near where animals are slaughtered (see zoonosis), said Anand Krishnan, professor at the Centre for Community Medicine of the All India Institute of Medical Sciences (AIIMS).
Eating chicken will not cause COVID-19, as long as it is hygienically prepared and well-cooked.
Dishes
There is no evidence that eating curry or rasam protects against COVID-19.
Exercises
Taking six deep breaths and then coughing while covering one's mouth was circulated as a treatment for COVID-19 infection in social media, including by celebrities such as J. K. Rowling.
Use of existing medications unproven against COVID-19
In March 2020, then US President Donald Trump promoted the use of chloroquine and hydroxychloroquine, two related anti-malarial drugs, for treating COVID-19. The FDA later clarified that it had not approved any therapeutics or drugs to treat COVID-19, but that studies were underway to see if chloroquine could be effective in treatment of COVID-19. Following Trump's claim, panic buying of chloroquine was reported from many countries in Africa, Latin America and South Asia. Health officials across the world issued warnings over the use of antimalarial drugs after Trump's comments about treating the coronavirus with them sparked panic-buying and overdoses. Ugandan Dr. Chris Kaganda said, "There is no known dosage for Covid-19 and whether it can actually cure it, it's safer to avoid chloroquine, but you know these are desperate times." Patients with lupus and rheumatoid arthritis, who take these medications regularly, have had trouble obtaining supplies. Taking these drugs, or related products intended for aquarium use, has caused serious side effects, illness and death.
Rumours circulated in Iraq that the Iraqi pharmaceutical company PiONEER Co. had discovered a treatment for coronavirus. These reports were loosely based on a statement by PiONEER, which mentioned hydroxychloroquine sulphate and azithromycin (brand name "Zitroneer", a common antibiotic) and said that it would try to make these drugs available free of charge. The statement did not say that these drugs can cure COVID-19. The company later clarified that it had not attempted to find a cure for COVID-19, and criticized the news media for spreading inaccurate reports and misinformation, running with the story without checking whether they had misunderstood the company's statement. Two days later, another false story was widely reported, saying that Samaraa, another Iraqi pharmaceutical company, had found a cure. Generally, antibiotics (like azithromycin) are not effective against viruses, only some bacteria. Azithromycin is sometimes given to patients hospitalized with COVID-19, but only to treat bacterial co-infection. Overuse of azithromycin causes antibiotic resistance, and rare side effects include heart arrhythmias and hearing loss.
There were also claims that a 30-year-old Indian textbook lists aspirin, anti-histamines and nasal spray as treatments for COVID-19. The textbook actually describes coronaviruses in general, as a family of viruses.
There were also claims in April 2020 that an anti-viral injection had been approved as a cure in the Philippines, and the lockdown would be lifted. The persons making these claims were issued with a cease-and-desist order by the Philippine FDA, which reiterated the need to test treatments to be sure they are safe. The FDA said that they had not even received an application to register the treatment with the FDA. The agency has prohibited the use of the untested drug, and the clinic illegally promoting it subsequently closed.
Ivermectin, a medication used to treat parasitic infections, was suggested as a possible COVID-19 treatment in an online preprint which utilized a flawed statistical methodology. Importantly, the concentration of the drug that was required to achieve the antiviral effects observed in cell culture was several times higher than what can be achieved in the bloodstream of patients. Clinical research subsequently determined ivermectin is not effective for treating COVID-19. The promotion of ivermectin as a COVID-19 treatment has led to increases in ivermectin-related poison control centre calls in the United States, as well as national shortages of the drug in Australia.
Anti-fraud efforts
Operation Pangea, launched by international police organisation Interpol, seized counterfeit facemasks, substandard hand sanitizers and unauthorized antiviral medication in over 90 countries, resulting in the arrest of 121 people.
References
Alternative medicine
Communication of falsehoods
COVID-19 pandemic-related lists
Impact of the COVID-19 pandemic on journalism
Health-related conspiracy theories | List of unproven methods against COVID-19 | Technology | 7,529 |
2,777,137 | https://en.wikipedia.org/wiki/Keynesian%20beauty%20contest | A Keynesian beauty contest describes a beauty contest where judges are rewarded for selecting the most popular faces among all judges, rather than those they may personally find the most attractive. This idea is often applied in financial markets, whereby investors could profit more by buying whichever stocks they think other investors will buy, rather than the stocks that have fundamentally the best value, because when other people buy a stock, they bid up the price, allowing an earlier investor to cash out with a profit, regardless of whether the price increases are supported by its fundamentals.
The concept was developed by John Maynard Keynes and introduced in Chapter 12 of his work, The General Theory of Employment, Interest and Money (1936), to explain price fluctuations in equity markets.
Overview
Keynes described the action of rational agents in a market using an analogy based on a fictional newspaper contest, in which entrants are asked to choose the six most attractive faces from a hundred photographs. Those who picked the most popular faces are then eligible for a prize.
A naive strategy would be to choose the face that, in the opinion of the entrant, is the most handsome. A more sophisticated contest entrant, wishing to maximize the chances of winning a prize, would think about what the majority perception of attractiveness is, and then make a selection based on some inference from their knowledge of public perceptions. This can be carried one step further to take into account the fact that other entrants would each have their own opinion of what public perceptions are. Thus the strategy can be extended to the next order and the next and so on, at each level attempting to predict the eventual outcome of the process based on the reasoning of other rational agents.
"It is not a case of choosing those [faces] that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees." (Keynes, General Theory of Employment, Interest and Money, 1936).
Keynes believed that similar behavior was at work within the stock market. This would have investors pricing shares not based on what they think an asset's fundamental value is, or even on what investors think other investors believe about the asset's value, but on what they think other investors believe is the average opinion about the value of the asset, or even higher-order assessments.
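The iterated reasoning Keynes describes can be made concrete with the closely related "guess 2/3 of the average" game (see the See also section). A minimal sketch in Python, assuming a level-k model in which level-0 players guess a naive baseline of 50; both the baseline and the model are illustrative assumptions, not taken from Keynes:

def level_k_guess(k, baseline=50.0, factor=2/3):
    """Guess of a level-k player in the 'guess 2/3 of the average' game.

    A level-0 player guesses the naive baseline; a level-k player
    best-responds to a population of level-(k-1) players by guessing
    factor times their guess.
    """
    guess = baseline
    for _ in range(k):
        guess *= factor
    return guess

# Each extra level of "what average opinion expects average opinion to be"
# pushes the guess further toward the Nash equilibrium of 0.
for k in range(6):
    print(f"level {k}: guess {level_k_guess(k):.2f}")

In experiments with such games, most participants stop after one or two levels of iteration, so the winning entry is rarely the equilibrium value; this mirrors Keynes's point that success comes from anticipating average opinion rather than reasoning all the way to the theoretical answer.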
Example contests
In 2011, National Public Radio's Planet Money tested the theory by having its listeners select the cutest of three animal videos. The listeners were broken into two groups. One selected the animal they thought was cutest, and the other selected the one they thought most participants would think was the cutest. The results showed significant differences between the groups. Fifty percent of the first group selected a video with a kitten, compared to seventy-six percent of the second selecting the same kitten video. Individuals in the second group were generally able to disregard their own preferences and accurately make a decision based on the expected preferences of others. The results were considered to be consistent with Keynes' theory.
See also
Tactical voting
Comparative advantage
Focal point (game theory)
Guess 2/3 of the average
Family Feud
Notes
References
External links
The State of Long-Term Expectation, Ch 12. General Theory of Employment Interest and Money
Behavioral finance
Game theory
Keynesian economics
Social science experiments | Keynesian beauty contest | Mathematics,Biology | 708 |
41,390,796 | https://en.wikipedia.org/wiki/Galileo%20%281968%20film%29 | Galileo (also known as Galileo Galilei) is a 1968 Italian–Bulgarian biographical drama film directed by Liliana Cavani. It depicts the life of Galileo Galilei and particularly his conflicts with the Catholic Church over his scientific theories.
Plot
Astronomer Galileo Galilei teaches at the University of Padua. While he questions the ideas of Ptolemy and Aristotle, the official scientific dogmas imposed by the Catholic Church, he remains secretive about his doubts. His more candid friend, the philosopher Giordano Bruno, is reported to the Inquisition for his revolutionary ideas and later executed as a heretic. Still, Galileo continues his studies with a telescope constructed by Dutch technicians and perfected by him, and comes to the conclusion that Copernicus's heliocentric system is valid. He publishes his discoveries in a book, which leads to a series of interrogations by the Inquisition. Facing a possible death sentence, Galileo publicly recants his theories.
Cast
Cyril Cusack as Galileo Galilei
Georgi Kaloyanchev as Giordano Bruno
Nevena Kokanova as Marina
Nicolai Doicev as Cardinal Bellarmino
Georgi Cherkelov as Paolo Sarpi
Piero Vida as Pope Urban VIII
Gigi Ballista as Dominican Commissioner
Paolo Graziosi as Gian Lorenzo Bernini
Maia Dragomanska as Galilei's daughter
Lou Castel as Father Charles
Giulio Brogi as Sagredo
Production and release
Originally intended as a miniseries co-produced by Italian and Bulgarian film companies, the finished film was refused broadcast by radio and television company RAI, which sold the distribution rights to Cineriz; Cineriz trimmed the originally 105-minute film to a running time of 92 minutes.
Galileo was shown in competition at the 1968 Venice International Film Festival.
Home media
Galileo was released in 2010 as a Region 2 DVD.
See also
Life of Galileo, a play by Bertolt Brecht
Galileo, a 1975 film adaptation of Brecht's play
References
External links
Article by Cristina Olivotto and Antonella Testa comparing the 1968 and 1975 films
1968 drama films
1960s biographical drama films
1960s historical drama films
Bulgarian biographical drama films
1960s Italian-language films
Bulgarian historical drama films
Films scored by Ennio Morricone
Films directed by Liliana Cavani
Italian biographical drama films
Italian historical drama films
Films shot in Bulgaria
Films shot at Cinecittà Studios
Films set in the 1590s
Films set in Rome
Cultural depictions of Galileo Galilei
Cultural depictions of Giordano Bruno
1960s Italian films | Galileo (1968 film) | Astronomy | 498 |
33,160,265 | https://en.wikipedia.org/wiki/Vinylphenol%20reductase | Vinylphenol reductase is an enzyme that catalyses the reaction :
4-vinylphenol + NAD+ + 3 H+ ⇔ 4-ethylphenol + NADH
It is found in Brettanomyces bruxellensis, a yeast responsible for the presence in wine of ethyl phenols formed from p-coumaric acid.
See also
Wine chemistry
Yeast in winemaking
References
External links
Vinylphenol reductase on MetaCyc
Oxidoreductases | Vinylphenol reductase | Chemistry | 107 |
308,158 | https://en.wikipedia.org/wiki/Vacuum%20energy | Vacuum energy is an underlying background energy that exists in space throughout the entire universe. The vacuum energy is a special case of zero-point energy that relates to the quantum vacuum.
The effects of vacuum energy can be experimentally observed in various phenomena such as spontaneous emission, the Casimir effect, and the Lamb shift, and are thought to influence the behavior of the Universe on cosmological scales. Using the upper limit of the cosmological constant, the vacuum energy of free space has been estimated to be 10^−9 joules (10^−2 ergs), or ~5 GeV per cubic meter. However, in quantum electrodynamics, consistency with the principle of Lorentz covariance and with the magnitude of the Planck constant suggests a much larger value of 10^113 joules per cubic meter. This huge discrepancy is known as the cosmological constant problem or, colloquially, the "vacuum catastrophe."
Origin
Quantum field theory states that all fundamental fields, such as the electromagnetic field, must be quantized at every point in space. A field in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field is like the displacement of a ball from its rest position. The theory requires "vibrations" in, or more accurately changes in the strength of, such a field to propagate as per the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball–spring combination be quantized, that is, that the strength of the field be quantized at each point in space. Canonically, if the field at each point in space is a simple harmonic oscillator, its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. Thus, according to the theory, even the vacuum has a vastly complex structure and all calculations of quantum field theory must be made in relation to this model of the vacuum.
The theory considers vacuum to implicitly have the same properties as a particle, such as spin or polarization in the case of light, energy, and so on. According to the theory, most of these properties cancel out on average leaving the vacuum empty in the literal sense of the word. One important exception, however, is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator requires the lowest possible energy, or zero-point energy, of such an oscillator to be E = ħω/2, where ω is the oscillator's angular frequency.
Summing over all possible oscillators at all points in space gives an infinite quantity. To remove this infinity, one may argue that only differences in energy are physically measurable, much as the concept of potential energy has been treated in classical mechanics for centuries. This argument is the underpinning of the theory of renormalization. In all practical calculations, this is how the infinity is handled.
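The scale of the mismatch quoted above can be illustrated by summing these zero-point energies over the electromagnetic field's modes up to a cutoff. A back-of-the-envelope sketch in Python; the Planck-frequency cutoff is the conventional assumption, and the exact power of ten depends on mode-counting conventions:

import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c = 2.997_924_58e8         # speed of light, m/s
G = 6.674_30e-11           # gravitational constant, m^3 kg^-1 s^-2

# Planck angular frequency, taken as the ultraviolet cutoff.
omega_planck = math.sqrt(c**5 / (hbar * G))

# Energy density from integrating (hbar*omega/2) against the EM mode
# density omega^2 / (pi^2 c^3), from 0 up to the cutoff:
# rho = hbar * omega_max^4 / (8 * pi^2 * c^3)
rho_qft = hbar * omega_planck**4 / (8 * math.pi**2 * c**3)

rho_observed = 1e-9  # J/m^3, the cosmological upper-limit estimate above

print(f"QFT estimate:      {rho_qft:.1e} J/m^3")    # ~6e111, cutoff-dependent
print(f"Observed estimate: {rho_observed:.1e} J/m^3")
print(f"Discrepancy:       ~10^{math.log10(rho_qft / rho_observed):.0f}")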
Vacuum energy can also be thought of in terms of virtual particles (also known as vacuum fluctuations) which are created and destroyed out of the vacuum. These particles are always created out of the vacuum in particle–antiparticle pairs, which in most cases shortly annihilate each other and disappear. However, these particles and antiparticles may interact with others before disappearing, a process which can be mapped using Feynman diagrams. Note that this method of computing vacuum energy is mathematically equivalent to having a quantum harmonic oscillator at each point and, therefore, suffers the same renormalization problems.
Additional contributions to the vacuum energy come from spontaneous symmetry breaking in quantum field theory.
Implications
Vacuum energy has a number of consequences. In 1948, Dutch physicists Hendrik B. G. Casimir and Dirk Polder predicted the existence of a tiny attractive force between closely placed metal plates due to resonances in the vacuum energy in the space between them. This is now known as the Casimir effect and has since been extensively experimentally verified. It is therefore believed that the vacuum energy is "real" in the same sense that more familiar conceptual objects such as electrons, magnetic fields, etc., are real. However, alternative explanations for the Casimir effect have since been proposed.
Other predictions are harder to verify. Vacuum fluctuations are always created as particle–antiparticle pairs. The creation of these virtual particles near the event horizon of a black hole has been hypothesized by physicist Stephen Hawking to be a mechanism for the eventual "evaporation" of black holes. If one of the pair is pulled into the black hole before this, then the other particle becomes "real" and energy/mass is essentially radiated into space from the black hole. This loss is cumulative and could result in the black hole's disappearance over time. The time required is dependent on the mass of the black hole (the equations indicate that the smaller the black hole, the more rapidly it evaporates) but could be on the order of 10^60 years for large solar-mass black holes.
The vacuum energy also has important consequences for physical cosmology. General relativity predicts that energy is equivalent to mass, and therefore, if the vacuum energy is "really there", it should exert a gravitational force. Essentially, a non-zero vacuum energy is expected to contribute to the cosmological constant, which affects the expansion of the universe.
Field strength of vacuum energy
The field strength of vacuum energy is a concept proposed in a theoretical study that explores the nature of the vacuum and its relationship to gravitational interactions. The study derived a mathematical framework that uses the field strength of vacuum energy as an indicator of the bulk (spacetime) resistance to localized curvature. It associates the field strength of vacuum energy with the curvature of the background; this concept challenges the traditional understanding of gravity and suggests that the gravitational constant, G, may not be a universal constant, but rather a parameter dependent on the field strength of vacuum energy.
Determination of the value of G has been a topic of extensive research, with numerous experiments conducted over the years in an attempt to measure its precise value. These experiments, often employing high-precision techniques, have aimed to provide accurate measurements of G and establish a consensus on its exact value. However, the outcomes of these experiments have shown significant inconsistencies, making it difficult to reach a definitive conclusion regarding the value of G. This lack of consensus has puzzled scientists and prompted searches for alternative explanations.
To test the theoretical predictions regarding the field strength of vacuum energy, specific experimental conditions involving the position of the moon are recommended in the theoretical study. These conditions aim to achieve consistent outcomes in precision measurements of G. The ultimate goal of such experiments is to either falsify or confirm the proposed theoretical framework. The significance of exploring the field strength of vacuum energy lies in its potential to revolutionize our understanding of gravity and its interactions.
History
In 1934, Georges Lemaître used an unusual perfect-fluid equation of state to interpret the cosmological constant as due to vacuum energy. In 1948, the Casimir effect provided an experimental method for a verification of the existence of vacuum energy; in 1955, however, Evgeny Lifshitz offered a different origin for the Casimir effect. In 1957, Lee and Yang proposed the concepts of broken symmetry and parity violation, for which they won the Nobel prize. In 1973, Edward Tryon proposed the zero-energy universe hypothesis: that the Universe may be a large-scale quantum-mechanical vacuum fluctuation where positive mass–energy is balanced by negative gravitational potential energy. During the 1980s, there were many attempts to relate the fields that generate the vacuum energy to specific fields that were predicted by attempts at a Grand Unified Theory and to use observations of the Universe to confirm one or another version. However, the exact nature of the particles (or fields) that generate vacuum energy, with a density such as that required by inflation theory, remains a mystery.
Vacuum energy in fiction
Arthur C. Clarke's novel The Songs of Distant Earth features a starship powered by a "quantum drive" based on aspects of this theory.
In the sci-fi television/film franchise Stargate, a Zero Point Module (ZPM) is a power source that extracts zero-point energy from a micro parallel universe.
The book Star Trek: Deep Space Nine Technical Manual describes the operating principle of the so-called quantum torpedo. In this fictional weapon, an antimatter reaction is used to create a multi-dimensional membrane in a vacuum that releases at its decomposition more energy than was needed to produce it. The missing energy is removed from the vacuum. Usually about twice as much energy is released in the explosion as would correspond to the initial matter–antimatter annihilation.
In the video game Half-Life 2, the item generally known as the "gravity gun" is referred to as both the "zero point field energy manipulator" and the "zero point energy field manipulator."
See also
Cosmic microwave background
Dark energy
False vacuum
Normal ordering
Quantum fluctuation
Sunyaev–Zeldovich effect
Vacuum state
References
External articles and references
Free PDF copy of The Structured Vacuum – thinking about nothing by Johann Rafelski and Berndt Muller (1985).
Saunders, S., & Brown, H. R. (1991). The Philosophy of Vacuum. Oxford [England]: Clarendon Press.
Poincaré Seminar, Duplantier, B., & Rivasseau, V. (2003). "Poincaré Seminar 2002: vacuum energy-renormalization". Progress in mathematical physics, v. 30. Basel: Birkhäuser Verlag.
Futamase & Yoshida, Possible measurement of vacuum energy.
Study of Vacuum Energy Physics for Breakthrough Propulsion 2004, NASA Glenn Technical Reports Server (PDF, 57 pages, Retrieved 2013-09-18).
Dark energy
Energy (physics)
Physical cosmological concepts
Quantum field theory
Vacuum | Vacuum energy | Physics,Astronomy,Mathematics | 2,029 |
6,905,141 | https://en.wikipedia.org/wiki/Spur%20%28botany%29 | The botanical term “spur” is given to outgrowths of tissue on different plant organs. The most common usage of the term in botany refers to nectar spurs in flowers.
nectar spur
spur (stem)
spur (leaf)
See also
Fascicle
Sepal
Petal
Tepal
Calyx
Corolla
Plant anatomy
Plant morphology | Spur (botany) | Biology | 67 |
57,236,131 | https://en.wikipedia.org/wiki/Grimpoteuthis%20abyssicola | Grimpoteuthis abyssicola, commonly known as the red jellyhead, is a species of small deep-sea octopus known from two specimens. The holotype specimen was a female collected on the Lord Howe Rise (central Tasman Sea off New Zealand), between 3154 and 3180 meters depth. A second specimen (a male) was collected on the continental slope of south-eastern Australia between 2821 and 2687 m depth.
The octopus has very delicate tissues, making it susceptible to damage by trawling nets. The arms and web are a deep maroon colour, while the body and head are nearly transparent.
The female type specimen had a mantle about 75 millimeters long, while its total body reached 305 millimeters long (the male specimen had a longer mantle length at 99 mm, but a shorter total length of 245 mm). G. abyssicola's internal shell is U-shaped, lacks any lateral prominences or shoulders, and has rounded ends; this shell shape distinguishes it from other Grimpoteuthis species (with the possible exception of Grimpoteuthis hippocrepium). The species can also be distinguished from other members of Grimpoteuthis by the absence of both a radula and posterior salivary glands, by its sucker count (up to 74 or 77 per arm on the known specimens), and by where the arm cirri commence. On the holotype, the first 6-8 suckers on each arm are small; they then enlarge up to sucker 30-35, followed by a further 30-35 suckers rapidly decreasing in size towards the arm tip.
The species has three distinct ways of feeding: envelopment, entrapment, and cirri-generated current feeding. Cirri-generated currents may also play a role in the other two feeding styles.
Present records of this species are too few to assess its conservation status (but it is likely not threatened given its abyssal distribution).
References
Octopuses
Molluscs of the Pacific Ocean
Cephalopods described in 1999
Species known from a single specimen
Cephalopods of Oceania
Cephalopods of Australia
Molluscs of New Zealand | Grimpoteuthis abyssicola | Biology | 459 |
4,289,748 | https://en.wikipedia.org/wiki/Susan%20Headley | Susan Headley (born 1959, also known as Susy Thunder or Susan Thunder) is an American former phreaker and early computer hacker during the late 1970s and early 1980s. A member of the so-called Cyberpunks, Headley specialized in social engineering, a type of hacking which uses pretexting and misrepresentation of oneself in contact with targeted organizations in order to elicit information vital to hacking those organizations.
Biography
Born in Altona, Illinois, in 1959, Headley claims to have dropped out of school in the eighth grade after a difficult childhood. She later moved to Los Angeles, California, where she worked as a teenage prostitute and was a rock 'n' roll groupie, claiming all four former members of the Beatles among her conquests. She met computer hacker Kevin Mitnick (also known as Condor) in 1980, and together with another hacker, Lewis de Payne (also known as Roscoe), formed a gang of phone phreaks. In The Hacker's Handbook, Headley is referred to as "one of the earliest of the present generation of hackers" and described as successfully hacking the US phone system as a 17-year-old in 1977.
On October 25, 1983, Headley testified in front of the Governmental Affairs oversight committee as to the technical capabilities and possible motivations of modern-day hackers and phone phreaks.
Public service
Headley was elected to public office in California in 1994, as City Clerk of California City.
Personal life
Headley is married, and lives in the Midwest. She is a coin collector.
References
External links
Esquire magazine article on Mitnick, including interview with Susan Thunder
Cyberpunks: Outlaws and Hackers on the Computer Frontier Book by Katie Hafner
Hendon Mob poker players' database entry for Susy Thunder
Searching for Susy Thunder by Claire L. Evans
Living people
1959 births
American cybercriminals
California City, California
Groupies
Hackers | Susan Headley | Technology | 400 |
238,517 | https://en.wikipedia.org/wiki/Kip%20Thorne | Kip Stephen Thorne (born June 1, 1940) is an American theoretical physicist and writer known for his contributions in gravitational physics and astrophysics. Along with Rainer Weiss and Barry C. Barish, he was awarded the 2017 Nobel Prize in Physics for his contributions to the LIGO detector and the observation of gravitational waves.
A longtime friend and colleague of Stephen Hawking and Carl Sagan, he was the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology (Caltech) until 2009 and speaks of the astrophysical implications of the general theory of relativity. He continues to do scientific research and scientific consulting, a notable example of which was for the Christopher Nolan film Interstellar.
Life and career
Thorne was born on June 1, 1940, in Logan, Utah. His father, D. Wynne Thorne (1908–1979), was a professor of soil chemistry at Utah State University, and his mother, Alison (née Comish; 1914–2004), was an economist and the first woman to receive a PhD in economics from Iowa State College. Raised in an academic environment, two of his four siblings also became professors. Thorne's parents were members of the Church of Jesus Christ of Latter-day Saints (LDS Church) and raised Thorne in the LDS faith, though he now describes himself as atheist. Regarding his views on science and religion, Thorne has stated: "There are large numbers of my finest colleagues who are quite devout and believe in God .... There is no fundamental incompatibility between science and religion. I happen to not believe in God."
Thorne rapidly excelled at academics early in life, winning recognition in the Westinghouse Science Talent Search as a senior at Logan High School. He received his BS degree in physics from the California Institute of Technology (Caltech) in 1962, and his master's and PhD in physics from Princeton University in 1964 and 1965, under the supervision of John Archibald Wheeler, with a doctoral dissertation entitled "Geometrodynamics of Cylindrical Systems".
Thorne returned to Caltech as an associate professor in 1967 and became a professor of theoretical physics in 1970, becoming one of the youngest full professors in the history of Caltech at age 30. He became the William R. Kenan, Jr. Professor in 1981, and the Feynman Professor of Theoretical Physics in 1991. He was an adjunct professor at the University of Utah from 1971 to 1998 and Andrew D. White Professor at Large at Cornell University from 1986 to 1992. In June 2009, he resigned his Feynman Professorship (he is now the Feynman Professor of Theoretical Physics, Emeritus) to pursue a career of writing and movie making. His first film project was Interstellar, on which he worked with Christopher Nolan and Jonathan Nolan.
Throughout the years, Thorne has served as a mentor and thesis advisor to many leading theorists who now work on observational, experimental, or astrophysical aspects of general relativity. Approximately 50 physicists have received PhDs at Caltech under Thorne's personal mentorship.
Thorne is known for his ability to convey the excitement and significance of discoveries in gravitation and astrophysics to both professional and lay audiences. His presentations on subjects such as black holes, gravitational radiation, relativity, time travel, and wormholes have been included in PBS shows in the U.S. and on the BBC in the United Kingdom.
Thorne and Linda Jean Peterson married in 1960. Their children are Kares Anne and Bret Carter, an architect. Thorne and Peterson divorced in 1977. Thorne was set up on a blind date with Lynda Obst, later a film producer, by physicist Carl Sagan. They dated in 1979–1980, then parted but remained friends, later collaborating on Interstellar. Thorne and his second wife, Carolee Joyce Winstein, a professor of biokinesiology and physical therapy at USC, married in 1984.
Research
Thorne's research has principally focused on relativistic astrophysics and gravitation physics, with emphasis on relativistic stars, black holes and especially gravitational waves. He is perhaps best known to the public for his controversial theory that wormholes can conceivably be used for time travel. However, Thorne's scientific contributions, which center on the general nature of space, time, and gravity, span the full range of topics in general relativity.
Gravitational waves and LIGO
Thorne's work has dealt with the prediction of gravitational wave strengths and their temporal signatures as observed on Earth. These "signatures" are of great relevance to LIGO (Laser Interferometer Gravitational Wave Observatory), a multi-institution gravitational wave experiment for which Thorne has been a leading proponent – in 1984, he cofounded the LIGO Project (the largest project ever funded by the NSF) to discern and measure any fluctuations between two or more 'static' points; such fluctuations would be evidence of gravitational waves, as calculations describe. A significant aspect of his research is developing the mathematics necessary to analyze these objects. Thorne also carries out engineering design analyses for features of the LIGO that cannot be developed on the basis of experiment and he gives advice on data analysis algorithms by which the waves will be sought. He has provided theoretical support for LIGO, including identifying gravitational wave sources that LIGO should target, designing the baffles to control scattered light in the LIGO beam tubes, and – in collaboration with Vladimir Braginsky's (Moscow, Russia) research group – inventing quantum nondemolition designs for advanced gravity-wave detectors and ways to reduce the most serious kind of noise in advanced detectors: thermoelastic noise. With Carlton M. Caves, Thorne invented the back-action-evasion approach to quantum nondemolition measurements of the harmonic oscillators – a technique applicable both in gravitational wave detection and quantum optics.
On February 11, 2016, a team of four physicists representing the LIGO Scientific Collaboration, announced that in September 2015, LIGO recorded the signature of two black holes colliding 1.3 billion light-years away. This recorded detection was the first direct observation of the fleeting chirp of a gravitational wave and confirmed a prediction of the general theory of relativity.
Black hole cosmology
While studying for his PhD at Princeton University, his mentor John Wheeler assigned him a problem to think about: find out whether or not a cylindrical bundle of repulsive magnetic field lines will implode under its own attractive gravitational force. After several months wrestling with the problem, he proved that it was impossible for cylindrical magnetic field lines to implode.
Why won't a cylindrical bundle of magnetic field lines implode, while spherical stars do implode under their own gravitational force? Thorne tried to explore the theoretical ridge between these two phenomena. He eventually determined that the gravitational force can overcome all interior pressure only when an object has been compressed in all directions. To express this realization, Thorne proposed his hoop conjecture, which describes an imploding star turning into a black hole when a hoop of the critical circumference can be placed around it and set into rotation. That is, any object of mass M around which a hoop of circumference C = 4πGM/c² can be spun in every direction must be a black hole.
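For a sense of scale, this critical circumference, 2π times the Schwarzschild radius, can be evaluated for one solar mass. A minimal sketch (SI constants; the solar-mass example is arbitrary):

```python
import math

G = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8   # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def critical_circumference(mass_kg):
    """Hoop-conjecture critical circumference 2*pi*(2*G*M/c**2) = 4*pi*G*M/c**2, in metres."""
    return 4 * math.pi * G * mass_kg / c**2

print(f"{critical_circumference(M_SUN) / 1e3:.1f} km")  # about 18.6 km for one solar mass
```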
As a tool to be used in both enterprises — astrophysics and theoretical physics — Thorne and his students have developed an unusual approach, called the "membrane paradigm", to the theory of black holes and used it to clarify the Blandford–Znajek mechanism by which black holes may power some quasars and active galactic nuclei.
Thorne has investigated the quantum statistical mechanical origin of the entropy of a black hole. With his postdoc Wojciech Zurek, he showed that the entropy of a black hole is the logarithm of the number of ways that the hole could have been made.
With Igor Novikov and Don Page, he developed the general relativistic theory of thin accretion disks around black holes, and using this theory he deduced that with a doubling of its mass by such accretion a black hole will be spun up to 0.998 of the maximum spin allowed by general relativity, but not any farther. This is probably the maximum black-hole spin allowed in nature.
Wormholes and time travel
Thorne and his co-workers at Caltech conducted scientific research on whether the laws of physics permit space and time to be multiply connected (can there exist classical, traversable wormholes and "time machines"?). With Sung-Won Kim, Thorne identified a universal physical mechanism (the explosive growth of vacuum polarization of quantum fields), that may always prevent spacetime from developing closed timelike curves (i.e., prevent backward time travel).
With Mike Morris and Ulvi Yurtsever, he showed that traversable wormholes can exist in the structure of spacetime only if they are threaded by quantum fields in quantum states that violate the averaged null energy condition (i.e. have negative renormalized energy spread over a sufficiently large region). This has triggered research to explore the ability of quantum fields to possess such extended negative energy. Recent calculations by Thorne indicate that simple masses passing through traversable wormholes could never engender paradoxes – there are no initial conditions that lead to paradox once time travel is introduced. If his results can be generalized, they would suggest that none of the supposed paradoxes formulated in time travel stories can actually be formulated at a precise physical level: that is, that any situation in a time travel story turns out to permit many consistent solutions.
Relativistic stars, multipole moments and other endeavors
With Anna Żytkow, Thorne predicted the existence of red supergiant stars with neutron-star cores (Thorne–Żytkow objects). He laid the foundations for the theory of pulsations of relativistic stars and the gravitational radiation they emit. With James Hartle, Thorne derived from general relativity the laws of motion and precession of black holes and other relativistic bodies, including the influence of the coupling of their multipole moments to the spacetime curvature of nearby objects, as well as writing down the Hartle-Thorne metric, an approximate solution which describes the exterior of a slowly and rigidly rotating, stationary and axially symmetric body.
Thorne has also theoretically predicted the existence of universally antigravitating "exotic matter" – the element needed to accelerate the expansion rate of the universe, keep traversable wormhole "Star Gates" open and keep timelike geodesic free float "warp drives" working. With Clifford Will and others of his students, he laid the foundations for the theoretical interpretation of experimental tests of relativistic theories of gravity – foundations on which Will and others then built. In later years, Thorne was interested in the origin of classical space and time from the quantum foam of quantum gravity theory.
Publications
Thorne has written and edited books on topics in gravitational theory and high-energy astrophysics. In 1973, he co-authored the textbook Gravitation with Charles Misner and John Wheeler, a book that, according to John C. Baez and Chris Hillman, is one of the great scientific books of all time and has inspired two generations of students. In 1994, he published Black Holes and Time Warps: Einstein's Outrageous Legacy, a book for non-scientists for which he received numerous awards. This book has been published in six languages, and editions in Chinese, Italian, Czech, and Polish are in press. In 2014, Thorne published The Science of Interstellar in which he explains the science behind Christopher Nolan's film Interstellar; Nolan wrote the foreword to the book. In September 2017, Thorne and Roger D. Blandford published Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics, a graduate-level textbook covering the six major areas of physics listed in the title.
Thorne's articles have appeared in publications such as:
Scientific American,
McGraw-Hill Yearbook of Science and Technology, and
Collier's Encyclopedia among others.
Thorne has published more than 150 articles in scholarly journals.
Honors and awards
Thorne has been elected to the:
American Academy of Arts and Sciences (1972)
National Academy of Sciences,
Russian Academy of Sciences, and
American Philosophical Society.
He has been recognized by numerous awards including the:
American Institute of Physics Science Writing Award in Physics and Astronomy,
Phi Beta Kappa Science Writing Award,
American Physical Society's Lilienfeld Prize,
German Astronomical Society's Karl Schwarzschild Medal (1996),
Robinson Prize in Cosmology from the University of Newcastle, England,
Sigma Xi: The Scientific Research Society's Common Wealth Awards for Science and Invention, and
California Science Center's California Scientist of the Year Award (2003).
Albert Einstein Medal in 2009 from the Albert Einstein Society, Bern, Switzerland
UNESCO Niels Bohr Medal from UNESCO (2010)
Special Breakthrough Prize in Fundamental Physics (2016)
Gruber Prize in Cosmology (2016)
Shaw Prize (2016) (together with Ronald Drever and Rainer Weiss).
Kavli Prize in Astrophysics (2016) (together with Ronald Drever and Rainer Weiss).
Tomalla Prize (2016) for extraordinary contributions to general relativity and gravity.
Georges Lemaître Prize (2016)
Harvey Prize (2016) (together with Ronald Drever and Rainer Weiss).
Smithsonian Magazine American Ingenuity Award for Physical Sciences (2016)
Princess of Asturias Award (2017) (jointly with Rainer Weiss and Barry Barish).
Nobel Prize in Physics (2017) (jointly with Rainer Weiss and Barry Barish)
Lewis Thomas Prize (2018)
Golden Plate Award of the American Academy of Achievement (2019)
He has been a Woodrow Wilson Fellow, Danforth Fellow, Guggenheim Fellow, and Fulbright Fellow. He has also received the honorary degree of doctor of humane letters from Claremont Graduate University and an honorary doctorate from the Physics Department of the Aristotle University of Thessaloniki. In 2024 he was awarded an honorary doctorate from University of Cambridge.
He was elected to hold the Lorentz chair for the year 2009 at Leiden University, the Netherlands.
Thorne has served on the:
International Committee on General Relativity and Gravitation,
Committee on US-USSR Cooperation in Physics, and
National Academy of Sciences' Space Science Board, which has advised NASA and Congress on space science policy.
Kip Thorne was selected by Time magazine in an annual list of the 100 most influential people in the American world in 2016.
Adaptation in media
Thorne contributed ideas on wormhole travel to Carl Sagan for use in his novel Contact.
Thorne and his friend, producer Lynda Obst, also developed the concept for the Christopher Nolan film Interstellar. He also wrote a tie-in book, The Science of Interstellar. Thorne later advised Nolan on the physics of his movie Tenet, and advised Cillian Murphy on his portrayal of J. Robert Oppenheimer in Nolan's film Oppenheimer.
In Larry Niven's novel Rainbow Mars, the time travel technology used in the novel is based on the wormhole theories of Thorne, which in the context of the novel was when time travel first became possible, rather than just fantasy. As a result, any attempts to travel in time prior to Thorne's development of wormhole theory results in the time traveller entering a fantastic version of reality, rather than the actual past.
In the film The Theory of Everything, Thorne was portrayed by actor Enzo Cilenti.
Thorne played himself in the episode of The Big Bang Theory entitled "The Laureate Accumulation".
Thorne is featured in an episode of the documentary series The Craftsman entitled "Science, Art & Inspiration".
Partial bibliography
Misner, Charles W., Thorne, K. S. and Wheeler, John Archibald, Gravitation 1973, (W H Freeman & Co)
Thorne, K. S., in 300 Years of Gravitation, (Eds.) S. W. Hawking and W. Israel, 1987, (Chicago: Univ. of Chicago Press), Gravitational Radiation.
Thorne, K. S., Price, R. H. and Macdonald, D. A., Black Holes: The Membrane Paradigm, 1986, (New Haven: Yale Univ. Press).
Friedman, J., Morris, M. S., Novikov, I. D., Echeverria, F., Klinkhammer, G., Thorne, K. S. and Yurtsever, U., Physical Review D, 1990, (in press), Cauchy Problem in Spacetimes with Closed Timelike Curves.
Thorne, K. S. and Blandford, R. D., Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics, 2017, (Princeton: Princeton University Press).
Notes
References
External links
Home Page at California Institute of Technology
Crunch Time
Founding Fathers of Relativity
1940 births
21st-century American physicists
Albert Einstein Medal recipients
American astronomers
American atheists
American Nobel laureates
American relativity theorists
California Institute of Technology alumni
California Institute of Technology faculty
Cornell University faculty
Fellows of the American Academy of Arts and Sciences
Fellows of the American Physical Society
Foreign members of the Russian Academy of Sciences
Former Latter Day Saints
Gravitational-wave astronomy
Kavli Prize laureates in Astrophysics
Living people
Members of the American Philosophical Society
Members of the United States National Academy of Sciences
Nobel laureates in Physics
American particle physicists
Princeton University alumni
Scientists from Logan, Utah
UNESCO Niels Bohr Medal recipients
University of Utah faculty | Kip Thorne | Physics,Astronomy | 3,548 |
13,798,040 | https://en.wikipedia.org/wiki/Field%20with%20one%20element | In mathematics, the field with one element is a suggestive name for an object that should behave similarly to a finite field with a single element, if such a field could exist. This object is denoted F1, or, in a French–English pun, Fun. The name "field with one element" and the notation F1 are only suggestive, as there is no field with one element in classical abstract algebra. Instead, F1 refers to the idea that there should be a way to replace sets and operations, the traditional building blocks for abstract algebra, with other, more flexible objects. Many theories of F1 have been proposed, but it is not clear which, if any, of them give F1 all the desired properties. While there is still no field with a single element in these theories, there is a field-like object whose characteristic is one.
Most proposed theories of F1 replace abstract algebra entirely. Mathematical objects such as vector spaces and polynomial rings can be carried over into these new theories by mimicking their abstract properties. This allows the development of commutative algebra and algebraic geometry on new foundations. One of the defining features of theories of F1 is that these new foundations allow more objects than classical abstract algebra does, one of which behaves like a field of characteristic one.
The possibility of studying the mathematics of F1 was originally suggested in 1956 by Jacques Tits, on the basis of an analogy between symmetries in projective geometry and the combinatorics of simplicial complexes. F1 has been connected to noncommutative geometry and to a possible proof of the Riemann hypothesis.
History
In 1957, Jacques Tits introduced the theory of buildings, which relate algebraic groups to abstract simplicial complexes. One of the assumptions is a non-triviality condition: If the building is an n-dimensional abstract simplicial complex, and if k < n, then every k-simplex of the building must be contained in at least three n-simplices. This is analogous to the condition in classical projective geometry that a line must contain at least three points. However, there are degenerate geometries that satisfy all the conditions to be a projective geometry except that the lines admit only two points. The analogous objects in the theory of buildings are called apartments. Apartments play such a constituent role in the theory of buildings that Tits conjectured the existence of a theory of projective geometry in which the degenerate geometries would have equal standing with the classical ones. This geometry would take place, he said, over a field of characteristic one. Using this analogy it was possible to describe some of the elementary properties of F1, but it was not possible to construct it.
After Tits' initial observations, little progress was made until the early 1990s. In the late 1980s, Alexander Smirnov gave a series of talks in which he conjectured that the Riemann hypothesis could be proven by considering the integers as a curve over a field with one element. By 1991, Smirnov had taken some steps towards algebraic geometry over F1, introducing extensions of F1 and using them to handle the projective line P1 over F1. Algebraic numbers were treated as maps to this P1, and conjectural approximations to the Riemann–Hurwitz formula for these maps were suggested. These approximations imply solutions to important problems like the abc conjecture. The extensions of F1 later on were denoted as Fq with q = 1^n. Together with Mikhail Kapranov, Smirnov went on to explore how algebraic and number-theoretic constructions in prime characteristic might look in "characteristic one", culminating in an unpublished work released in 1995. In 1993, Yuri Manin gave a series of lectures on zeta functions where he proposed developing a theory of algebraic geometry over F1. He suggested that zeta functions of varieties over F1 would have very simple descriptions, and he proposed a relation between the K-theory of F1 and the homotopy groups of spheres. This inspired several people to attempt to construct explicit theories of F1-geometry.
The first published definition of a variety over F1 came from Christophe Soulé in 1999, who constructed it using algebras over the complex numbers and functors from categories of certain rings. In 2000, Zhu proposed that F1 was the same as F2 except that the sum of one and one was one, not zero. Deitmar suggested that F1 should be found by forgetting the additive structure of a ring and focusing on the multiplication. Toën and Vaquié built on Hakim's theory of relative schemes and defined F1 using symmetric monoidal categories. Their construction was later shown to be equivalent to Deitmar's by Vezzani. Nikolai Durov constructed F1 as a commutative algebraic monad. Borger used descent to construct it from the finite fields and the integers.
Alain Connes and Caterina Consani developed both Soulé and Deitmar's notions by "gluing" the category of multiplicative monoids and the category of rings to create a new category, and then defining F1-schemes to be a particular kind of representable functor on this category. Using this, they managed to provide a notion of several number-theoretic constructions over F1 such as motives and field extensions, as well as constructing Chevalley groups over F12. Along with Matilde Marcolli, Connes and Consani have also connected F1 with noncommutative geometry. It has also been suggested to have connections to the unique games conjecture in computational complexity theory.
Oliver Lorscheid, along with others, has recently achieved Tits' original aim of describing Chevalley groups over F1 by introducing objects called blueprints, which are a simultaneous generalisation of both semirings and monoids. These are used to define so-called "blue schemes", one of which is Spec F1. Lorscheid's ideas depart somewhat from other ideas of groups over F1, in that the F1-scheme is not itself the Weyl group of its base extension to normal schemes. Lorscheid first defines the Tits category, a full subcategory of the category of blue schemes, and defines the "Weyl extension", a functor from the Tits category to Set. A Tits–Weyl model of an algebraic group G is a blue scheme with a group operation that is a morphism in the Tits category, whose base extension is G and whose Weyl extension is isomorphic to the Weyl group of G.
F1-geometry has been linked to tropical geometry, via the fact that semirings (in particular, tropical semirings) arise as quotients of some monoid semiring N[A] of finite formal sums of elements of a monoid A, which is itself an F1-algebra. This connection is made explicit by Lorscheid's use of blueprints. The Giansiracusa brothers have constructed a tropical scheme theory, for which their category of tropical schemes is equivalent to the category of Toën–Vaquié F1-schemes. This category embeds faithfully, but not fully, into the category of blue schemes, and is a full subcategory of the category of Durov schemes.
Motivations
Algebraic number theory
One motivation for F1 comes from algebraic number theory. Weil's proof of the Riemann hypothesis for curves over finite fields starts with a curve C over a finite field k, which comes equipped with a function field F, which is a field extension of k. Each such function field gives rise to a Hasse–Weil zeta function ζF, and the Riemann hypothesis for finite fields determines the zeroes of ζF. Weil's proof then uses various geometric properties of C to study ζF.
The field of rational numbers Q is linked in a similar way to the Riemann zeta function, but Q is not the function field of a variety. Instead, Q is the function field of the scheme Spec Z. This is a one-dimensional scheme (also known as an algebraic curve), and so there should be some "base field" that this curve lies over, of which Q would be a field extension (in the same way that C is a curve over k, and F is an extension of k). The hope of F1-geometry is that a suitable object F1 could play the role of this base field, which would allow for a proof of the Riemann hypothesis by mimicking Weil's proof with F1 in place of k.
Arakelov geometry
Geometry over a field with one element is also motivated by Arakelov geometry, where Diophantine equations are studied using tools from complex geometry. The theory involves complicated comparisons between finite fields and the complex numbers. Here the existence of F1 is useful for technical reasons.
Expected properties
F1 is not a field
F1 cannot be a field because by definition all fields must contain two distinct elements, the additive identity zero and the multiplicative identity one. Even if this restriction is dropped (for instance by letting the additive and multiplicative identities be the same element), a ring with one element must be the zero ring, which does not behave like a finite field. For instance, all modules over the zero ring are isomorphic (as the only element of such a module is the zero element). However, one of the key motivations of F1 is the description of sets as "F1-vector spaces" – if finite sets were modules over the zero ring, then every finite set would be the same size, which is not the case. Moreover, the spectrum of the trivial ring is empty, but the spectrum of a field has one point.
Other properties
Finite sets are both affine spaces and projective spaces over F1.
Pointed sets are vector spaces over F1.
The finite fields Fq are quantum deformations of F1, where q is the deformation.
Weyl groups are simple algebraic groups over F1:
Given a Dynkin diagram for a semisimple algebraic group, its Weyl group is the semisimple algebraic group over F1.
The affine scheme Spec Z is a curve over F1.
Groups are Hopf algebras over F1. More generally, anything defined purely in terms of diagrams of algebraic objects should have an F1-analog in the category of sets.
Group actions on sets are projective representations of G over F1, and in this way, G is the group Hopf algebra F1[G].
Toric varieties determine F1-varieties. In some descriptions of F1-geometry the converse is also true, in the sense that the extension of scalars of F1-varieties to Z are toric. Whilst other approaches to F1-geometry admit wider classes of examples, toric varieties appear to lie at the very heart of the theory.
The zeta function of P^N(F1) should be ζ(s) = s(s − 1)⋯(s − N).
The mth K-group of F1 should be the mth stable homotopy group of the sphere spectrum.
Computations
Various structures on a set are analogous to structures on a projective space, and can be computed in the same way:
Sets are projective spaces
The number of elements of P^{n−1}(Fq), the (n − 1)-dimensional projective space over the finite field Fq, is the q-integer

$$[n]_q = \frac{q^n - 1}{q - 1} = 1 + q + q^2 + \cdots + q^{n-1}.$$

Taking q = 1 yields [n]_q = n.

The expansion of the q-integer into a sum of powers of q corresponds to the Schubert cell decomposition of projective space.
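A short sketch makes the counting explicit: the function below computes the q-integer and checks it against the point count (q^n − 1)/(q − 1) of projective space over small finite fields (the test values are arbitrary):

```python
def q_integer(n: int, q: int) -> int:
    """[n]_q = 1 + q + q**2 + ... + q**(n-1); at q = 1 this degenerates to n."""
    return sum(q**k for k in range(n))

# P^{n-1}(F_q) consists of the lines through the origin of F_q^n,
# so it has (q**n - 1) / (q - 1) points.
for n, q in [(3, 2), (3, 3), (4, 5)]:
    assert q_integer(n, q) == (q**n - 1) // (q - 1)

print(q_integer(3, 2))  # 7: the Fano plane P^2(F_2)
print(q_integer(3, 1))  # 3: a plain 3-element set, the "P^2(F_1)" of the analogy
```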
Permutations are maximal flags
There are n! permutations of a set with n elements, and [n]!_q maximal flags in F_q^n, where

$$[n]!_q = [1]_q [2]_q \cdots [n]_q$$

is the q-factorial. Indeed, a permutation of a set can be considered a filtered set, as a flag is a filtered vector space: for instance, the ordering (1, 2, 3) of the set {1, 2, 3} corresponds to the filtration {1} ⊂ {1, 2} ⊂ {1, 2, 3}.
Subsets are subspaces
The binomial coefficient

$$\binom{n}{m} = \frac{n!}{m!\,(n-m)!}$$

gives the number of m-element subsets of an n-element set, and the q-binomial coefficient

$$\binom{n}{m}_q = \frac{[n]!_q}{[m]!_q\,[n-m]!_q}$$

gives the number of m-dimensional subspaces of an n-dimensional vector space over Fq.
The expansion of the q-binomial coefficient into a sum of powers of q corresponds to the Schubert cell decomposition of the Grassmannian.
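The same degeneration at q = 1 can be checked numerically; a minimal sketch of the q-binomial coefficient via q-factorials (test values arbitrary):

```python
from math import comb

def q_factorial(n: int, q: int) -> int:
    """[n]!_q = [1]_q * [2]_q * ... * [n]_q, with [k]_q = 1 + q + ... + q**(k-1)."""
    result = 1
    for k in range(1, n + 1):
        result *= sum(q**j for j in range(k))
    return result

def q_binomial(n: int, m: int, q: int) -> int:
    """Number of m-dimensional subspaces of an n-dimensional vector space over F_q."""
    return q_factorial(n, q) // (q_factorial(m, q) * q_factorial(n - m, q))

print(q_binomial(4, 2, 2))                # 35 planes through the origin in F_2^4
assert q_binomial(4, 2, 1) == comb(4, 2)  # at q = 1: ordinary subsets, C(4, 2) = 6
```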
Monoid schemes
Deitmar's construction of monoid schemes has been called "the very core of F1-geometry", as most other theories of F1-geometry contain descriptions of monoid schemes. Morally, it mimics the theory of schemes developed in the 1950s and 1960s by replacing commutative rings with monoids. The effect of this is to "forget" the additive structure of the ring, leaving only the multiplicative structure. For this reason, it is sometimes called "non-additive geometry".
Monoids
A multiplicative monoid is a monoid A that also contains an absorbing element 0 (distinct from the identity 1 of the monoid), such that 0a = 0 for every a in the monoid A. The field with one element is then defined to be F1 = {0, 1}, the multiplicative monoid of the field with two elements, which is initial in the category of multiplicative monoids. A monoid ideal in a monoid A is a subset I that is multiplicatively closed, contains 0, and such that IA = {ia : i ∈ I, a ∈ A} = I. Such an ideal is prime if A ∖ I is multiplicatively closed and contains 1.
For monoids A and B, a monoid homomorphism is a function f : A → B such that

f(ab) = f(a) f(b), f(0) = 0, and f(1) = 1

for every a and b in A.
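These definitions are concrete enough to compute with; the sketch below brute-forces the prime ideals of a small hypothetical multiplicative monoid {0, 1, x, x²} with x³ = 0 (the string encoding and the example monoid are illustrative assumptions):

```python
from itertools import combinations

ELEMS = ('0', '1', 'x', 'x2')  # the monoid {0, 1, x, x^2} with x^3 = 0

def mul(a: str, b: str) -> str:
    if a == '0' or b == '0':
        return '0'
    if a == '1':
        return b
    if b == '1':
        return a
    return '0' if 'x2' in (a, b) else 'x2'  # x*x = x2; x*x2 = x2*x2 = 0

def is_ideal(I: set) -> bool:
    # contains 0 and absorbs multiplication by arbitrary monoid elements
    return '0' in I and all(mul(a, i) in I for a in ELEMS for i in I)

def is_prime(I: set) -> bool:
    # the complement must contain 1 and be multiplicatively closed
    comp = set(ELEMS) - I
    return '1' in comp and all(mul(a, b) in comp for a in comp for b in comp)

primes = [set(s) for r in range(1, len(ELEMS) + 1)
          for s in combinations(ELEMS, r)
          if is_ideal(set(s)) and is_prime(set(s))]
print(primes)  # [{'0', 'x', 'x2'}]: MSpec of this monoid has a single point
```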
Monoid schemes
The spectrum of a monoid A, denoted MSpec A, is the set of prime ideals of A. The spectrum of a monoid can be given a Zariski topology, by defining basic open sets

U_h = {p ∈ MSpec A : h ∉ p}

for each h in A. A monoidal space is a topological space along with a sheaf of multiplicative monoids called the structure sheaf. An affine monoid scheme is a monoidal space that is isomorphic to the spectrum of a monoid, and a monoid scheme is a sheaf of monoids that has an open cover by affine monoid schemes.
Monoid schemes can be turned into ring-theoretic schemes by means of a base extension functor that sends the monoid A to the Z-module (i.e. ring) Z[A]/⟨0_A⟩, and a monoid homomorphism f : A → B extends to a ring homomorphism f_Z that is linear as a Z-module homomorphism. The base extension of an affine monoid scheme is defined via the formula

MSpec A ×_{MSpec F1} Spec Z = Spec(Z[A]/⟨0_A⟩),

which in turn defines the base extension of a general monoid scheme.
Consequences
This construction achieves many of the desired properties of F1-geometry: MSpec F1 consists of a single point, so behaves similarly to the spectrum of a field in conventional geometry, and the category of affine monoid schemes is dual to the category of multiplicative monoids, mirroring the duality of affine schemes and commutative rings. Furthermore, this theory satisfies the combinatorial properties expected of F1 mentioned in previous sections; for instance, projective space over F1 of dimension n as a monoid scheme is identical to an apartment of projective space over Fq of dimension n when described as a building.
However, monoid schemes do not fulfill all of the expected properties of a theory of F1-geometry, as the only varieties that have monoid scheme analogues are toric varieties. More precisely, if X is a monoid scheme whose base extension is a flat, separated, connected scheme of finite type, then the base extension of X is a toric variety. Other notions of F1-geometry, such as that of Connes–Consani, build on this model to describe F1-varieties that are not toric.
Field extensions
One may define field extensions of the field with one element as the group of roots of unity, or more finely (with a geometric structure) as the group scheme of roots of unity. This is non-naturally isomorphic to the cyclic group of order n, the isomorphism depending on choice of a primitive root of unity: F1n = μn.
Thus a vector space of dimension d over F1n is a finite set of order dn on which the roots of unity act freely, together with a base point.
From this point of view the finite field Fq is an algebra over F1n, of dimension d = (q − 1)/n for any n that is a factor of q − 1 (for example n = q − 1 or n = 1). This corresponds to the fact that the group of units of a finite field Fq (which are the q − 1 non-zero elements) is a cyclic group of order q − 1, on which any cyclic group of order dividing q − 1 acts freely (by raising to a power), and the zero element of the field is the base point.
Similarly, the real numbers R are an algebra over F12, of infinite dimension, as the real numbers contain ±1, but no other roots of unity, and the complex numbers C are an algebra over F1n for all n, again of infinite dimension, as the complex numbers have all roots of unity.
From this point of view, any phenomenon that only depends on a field having roots of unity can be seen as coming from F1 – for example, the discrete Fourier transform (complex-valued) and the related number-theoretic transform (Z/nZ-valued).
See also
Arithmetic derivative
Semigroup with one element
Notes
Bibliography
External links
John Baez's This Week's Finds in Mathematical Physics: Week 259
The Field With One Element at the ncategory cafe
The Field With One Element at Secret Blogging Seminar
Looking for Fun and The Fun folklore, Lieven le Bruyn.
Mapping F1land: An overview of geometries over the field with one element, Javier López Peña, Oliver Lorscheid
Fun Mathematics, Lieven le Bruyn, Koen Thas.
Vanderbilt conference on Noncommutative Geometry and Geometry over the Field with One Element (Schedule )
NCG and F_un, by Alain Connes and K. Consani: summary of talks and slides
Algebraic geometry
Noncommutative geometry
Finite fields
1 (number)
Abc conjecture | Field with one element | Mathematics | 3,631 |
4,679,870 | https://en.wikipedia.org/wiki/British%20National%20Committee%20for%20Space%20Research | The British National Committee for Space Research (BNCSR) was a Royal Society committee formed in December 1958. It was formed primarily to be Britain's interface with the newly formed Committee on Space Research (COSPAR).
History
In October 1958, the International Council of Scientific Unions (ICSU) proposed to form a committee for space research. The Committee on Space Research (COSPAR) was the result of the proposal and first met in November 1958. Britain desired a new committee to interface with COSPAR and to organise British spaceflight activities after the International Geophysical Year (IGY). The Royal Society consolidated the Gassiot Committee's rocket and the National IGY Committee's artificial satellite subcommittees into the newly formed British National Committee for Space Research (BNCSR). The BNCSR was officially formed on 18 December 1958 and selected its members on 12 February 1959. The 28-person committee was chaired by Harrie Massey and had W. V. D. Hodge as the physical secretary. The subcommittees that were to be incorporated into BNCSR submitted their final reports during the committee's first meeting on 4 March 1959 and were officially dissolved.
Subcommittees
The BNCSR formed three subcommittees: Tracking Analysis and Data Recovery (TADREC, chaired by J. A. Ratcliffe), Design for Experiments (DOE, chaired by Massey), and another to coordinate with the World Data Centre at Radio Research Station (RRS) at Slough (chaired by E. Bullard).
TADREC took over the work of the National IGY Committee's artificial satellite subcommittee.
DOE continued the work of the National IGY Committee's artificial satellite subcommittee. The new subcommittee had two initial tasks: to find artificial satellites on which experiments could be launched, and to consider whether it was worth providing attitude control to Skylark for better scientific results.
See also
List of astronomical societies
Notes
References
Space organizations
Astronomy organizations
Scientific organisations based in the United Kingdom
Astronomy in the United Kingdom
Space programme of the United Kingdom
Royal Society
1958 establishments in the United Kingdom
Scientific organizations established in 1958 | British National Committee for Space Research | Astronomy | 417 |
3,682,781 | https://en.wikipedia.org/wiki/Seam%20sealant | Seam sealants are chemical coating compositions.
Textiles
Seam sealants are applied to waterproof seams of items such as rainwear, tents, backpacks, shoes, drysacks, and drysuits. They are often applied by the consumer post-purchase.
Automotive industry
Seam sealing was already being performed manually, with success, to prevent perforation corrosion in the 1980s. Today it is used in the OEM automotive industry primarily to seal against air leaks and to waterproof sheet-metal overlaps that occur in the assembly of a vehicle. Such overlaps are typically decorative rather than structurally supportive. Accordingly, they are usually only spot welded, and this process results in a closure that is not air or water tight.
Seam sealants are sprayed or extruded over the joined edges of these overlaps, and they then either cure to a flexible waterproof "seal" by drying (dehydrating) in the case of water borne compositions, or thermoset irreversibly to a flexible adherent seam seal by going through an oven bake in the case of plasticized polyvinylchloride compositions. Most interior seam seals are not visible after the vehicle is finished, because they are covered by carpeting, interior roof headliner, or decorative trim panels. Exterior seam seals are always painted over and are referred to as "coach joint seals."
Flat-stream application, with many special nozzle designs, has increasingly been used as the most flexible procedure.
References
Further reading
Detlef Symietz; Andreas Lutz: Strukturkleben im Fahrzeugbau. Eigenschaften, Anwendungen und Leistungsfähigkeit eines neuen Fügeverfahrens. In: Die Bibliothek der Technik, Verlag Moderne Industrie, 2006,
Seals (mechanical)
Materials | Seam sealant | Physics | 382 |
46,936,167 | https://en.wikipedia.org/wiki/Dehydronorketamine | Dehydronorketamine (DHNK), or 5,6-dehydronorketamine, is a minor metabolite of ketamine which is formed by dehydrogenation of its metabolite norketamine. Though originally considered to be inactive, DHNK has been found to act as a potent and selective negative allosteric modulator of the α7-nicotinic acetylcholine receptor (IC50 = 55 nM). For this reason, similarly to hydroxynorketamine (HNK), it has been hypothesized that DHNK may have the capacity to produce rapid antidepressant effects. However, unlike ketamine, norketamine, and HNK, DHNK has been found to be inactive in the forced swim test (FST) in mice at doses up to 50 mg/kg. DHNK is inactive at the α3β4-nicotinic acetylcholine receptor (IC50 > 100 μM) and is only very weakly active at the NMDA receptor (Ki = 38.95 μM for (S)-(+)-DHNK). It can be detected 7–10 days after a modest dose of ketamine, and because of this, is useful in drug detection assays.
See also
Arketamine
Esketamine
References
Amines
2-Chlorophenyl compounds
Enones
Nicotinic antagonists
Human drug metabolites
Cyclohexenes
Negative allosteric modulators | Dehydronorketamine | Chemistry | 322 |
6,874,521 | https://en.wikipedia.org/wiki/Equating%20coefficients | In mathematics, the method of equating the coefficients is a way of solving a functional equation of two expressions such as polynomials for a number of unknown parameters. It relies on the fact that two expressions are identical precisely when corresponding coefficients are equal for each different type of term. The method is used to bring formulas into a desired form.
Example in real fractions
Suppose we want to apply partial fraction decomposition to the expression:

$$\frac{1}{x(x-1)(x-2)},$$

that is, we want to bring it into the form:

$$\frac{A}{x}+\frac{B}{x-1}+\frac{C}{x-2},$$

in which the unknown parameters are A, B and C. Multiplying these formulas by x(x − 1)(x − 2) turns both into polynomials, which we equate:

$$A(x-1)(x-2) + Bx(x-2) + Cx(x-1) = 1,$$

or, after expansion and collecting terms with equal powers of x:

$$(A+B+C)x^2 - (3A+2B+C)x + 2A = 1.$$

At this point it is essential to realize that the polynomial 1 is in fact equal to the polynomial 0x^2 + 0x + 1, having zero coefficients for the positive powers of x. Equating the corresponding coefficients now results in this system of linear equations:

$$A+B+C=0,$$
$$3A+2B+C=0,$$
$$2A=1.$$

Solving it results in:

$$A=\frac{1}{2},\quad B=-1,\quad C=\frac{1}{2}.$$
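The same decomposition can be cross-checked mechanically; a minimal sketch using the sympy library (assuming it is available; its apart function performs partial fraction decomposition directly):

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')
expr = 1 / (x * (x - 1) * (x - 2))

decomposed = apart(expr, x)   # partial fraction decomposition
print(decomposed)             # 1/(2*(x - 2)) - 1/(x - 1) + 1/(2*x)

# Recombining the partial fractions must recover the original expression.
assert simplify(together(decomposed) - expr) == 0
```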
Example in nested radicals
A similar problem, involving equating like terms rather than coefficients of like terms, arises if we wish to de-nest the nested radical √(a + b√c) to obtain an equivalent expression not involving a square root of an expression itself involving a square root. We can postulate the existence of rational parameters d, e such that

$$\sqrt{a+b\sqrt{c}} = \sqrt{d}+\sqrt{e}.$$

Squaring both sides of this equation yields:

$$a+b\sqrt{c} = d+e+2\sqrt{de}.$$

To find d and e we equate the terms not involving square roots, so a = d + e, and equate the parts involving radicals, so b√c = 2√(de), which when squared implies b²c = 4de. This gives us two equations, one quadratic and one linear, in the desired parameters d and e, and these can be solved to obtain

$$d=\frac{a+\sqrt{a^2-b^2c}}{2},\qquad e=\frac{a-\sqrt{a^2-b^2c}}{2},$$

which is a valid solution pair if and only if √(a² − b²c) is a rational number.
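A short sketch of the same procedure for integer parameters a, b, c (the helper name denest is hypothetical):

```python
from fractions import Fraction
from math import isqrt, sqrt

def denest(a: int, b: int, c: int):
    """Return (d, e) with sqrt(a + b*sqrt(c)) = sqrt(d) + sqrt(e),
    or None when a**2 - b**2*c has no rational square root."""
    disc = a * a - b * b * c
    if disc < 0 or isqrt(disc) ** 2 != disc:
        return None  # de-nesting fails
    r = isqrt(disc)
    return Fraction(a + r, 2), Fraction(a - r, 2)

print(denest(3, 2, 2))  # (2, 1): sqrt(3 + 2*sqrt(2)) = sqrt(2) + 1
assert abs(sqrt(3 + 2 * sqrt(2)) - (sqrt(2) + 1)) < 1e-12
```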
Example of testing for linear dependence of equations
Consider this overdetermined system of equations (with 3 equations in just 2 unknowns):

$$x-2y+1=0,$$
$$3x+5y-8=0,$$
$$4x+3y-7=0.$$

To test whether the third equation is linearly dependent on the first two, postulate two parameters a and b such that a times the first equation plus b times the second equation equals the third equation. Since this always holds for the right sides, all of which are 0, we merely need to require it to hold for the left sides as well:

$$a(x-2y+1)+b(3x+5y-8) = 4x+3y-7.$$

Equating the coefficients of x on both sides, equating the coefficients of y on both sides, and equating the constants on both sides gives the following system in the desired parameters a, b:

$$a+3b=4,$$
$$-2a+5b=3,$$
$$a-8b=-7.$$

Solving it gives:

$$a=1,\qquad b=1.$$
The unique pair of values a, b satisfying the first two equations is (a, b) = (1, 1); since these values also satisfy the third equation, there do in fact exist a, b such that a times the original first equation plus b times the original second equation equals the original third equation; we conclude that the third equation is linearly dependent on the first two.
Note that if the constant term in the original third equation had been anything other than –7, the values (a, b) = (1, 1) that satisfied the first two equations in the parameters would not have satisfied the third one (a – 8b = constant), so there would exist no a, b satisfying all three equations in the parameters, and therefore the third original equation would be independent of the first two.
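This dependence test can also be automated as a rank computation; a minimal sketch using numpy (assuming it is available), applied to the system above:

```python
import numpy as np

# Rows hold the (x, y, constant) coefficients of the three equations.
eqs = np.array([[1.0, -2.0,  1.0],
                [3.0,  5.0, -8.0],
                [4.0,  3.0, -7.0]])

# The third equation is dependent on the first two iff adding it
# does not increase the rank of the system.
assert np.linalg.matrix_rank(eqs[:2]) == np.linalg.matrix_rank(eqs) == 2

# Recover the combination coefficients (a, b); least squares is exact here.
(a, b), *_ = np.linalg.lstsq(eqs[:2].T, eqs[2], rcond=None)
print(a, b)  # 1.0 1.0
```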
Example in complex numbers
The method of equating coefficients is often used when dealing with complex numbers. For example, to divide the complex number a+bi by the complex number c+di, we postulate that the ratio equals the complex number e+fi, and we wish to find the values of the parameters e and f for which this is true. We write

$$\frac{a+bi}{c+di} = e+fi,$$

and multiply both sides by the denominator to obtain

$$a+bi = (ce-df)+(cf+de)i.$$

Equating real terms gives

$$a = ce-df,$$

and equating coefficients of the imaginary unit i gives

$$b = cf+de.$$

These are two equations in the unknown parameters e and f, and they can be solved to obtain the desired coefficients of the quotient:

$$e=\frac{ac+bd}{c^2+d^2},\qquad f=\frac{bc-ad}{c^2+d^2}.$$
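A short sketch verifying these formulas against Python's built-in complex division (the example values are arbitrary):

```python
from fractions import Fraction

def divide(a, b, c, d):
    """(a + bi)/(c + di) as exact rationals (e, f), assuming c + di != 0."""
    denom = c * c + d * d
    return Fraction(a * c + b * d, denom), Fraction(b * c - a * d, denom)

e, f = divide(3, 2, 1, -4)
print(e, f)  # -5/17 14/17
q = (3 + 2j) / (1 - 4j)
assert abs(q.real - float(e)) < 1e-12 and abs(q.imag - float(f)) < 1e-12
```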
References
Elementary algebra
Equations | Equating coefficients | Mathematics | 807 |
56,075,256 | https://en.wikipedia.org/wiki/Graph-encoded%20map | In topological graph theory, a graph-encoded map or gem is a method of encoding a cellular embedding of a graph using a different graph with four vertices per edge of the original graph. It is the topological analogue of runcination, a geometric operation on polyhedra. Graph-encoded maps were formulated and named by .
Alternative and equivalent systems for representing cellular embeddings include signed rotation systems and ribbon graphs.
The graph-encoded map for an embedded graph G is another cubic graph H together with a 3-edge-coloring of H. Each edge e of G is expanded into exactly four vertices in H, one for each choice of a side and endpoint of e. An edge in H connects each such vertex to the vertex representing the opposite side and same endpoint of e; these edges are by convention colored red. Another edge in H connects each vertex to the vertex representing the opposite endpoint and same side of e; these edges are by convention colored blue. An edge in H of the third color, yellow, connects each vertex to the vertex representing another edge that meets e at the same side and endpoint.
An alternative description of H is that it has a vertex for each flag of G (a mutually incident triple (v, e, f) of a vertex, edge, and face). If (v, e, f) is a flag, then there is exactly one vertex v′, edge e′, and face f′ such that (v′, e, f), (v, e′, f), and (v, e, f′) are also flags. The three colors of edges in H represent each of these three types of flags that differ by one of their three elements. However, interpreting a graph-encoded map in this way requires more care. When the same face appears on both sides of an edge, as can happen for instance for a planar embedding of a tree, the two sides give rise to different gem vertices. And when the same vertex appears at both endpoints of a self-loop, the two ends of the edge again give rise to different gem vertices. In this way, each triple may be associated with up to four different vertices of the gem.
Whenever a cubic graph H can be 3-edge-colored so that the red–blue cycles of the coloring all have length four, the colored graph can be interpreted as a graph-encoded map, and represents an embedding of another graph G.
To recover G and its embedding, interpret each blue–yellow cycle of H as a face of an embedding of G onto a surface, contract each red–yellow cycle into a single vertex of G, and replace each pair of parallel blue edges left by the contraction with a single edge of G.
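Because each color class of a gem is a perfect matching on its vertices, the defining length-four condition on red–blue cycles is mechanical to check; a minimal sketch (the dict-based matching encoding and the single-edge example are illustrative assumptions):

```python
def is_gem(red, blue, yellow):
    """Three perfect matchings on one vertex set form a graph-encoded map
    exactly when every red-blue cycle has length four."""
    vertices = set(red)
    for matching in (red, blue, yellow):
        ok = set(matching) == vertices and all(
            matching[matching[v]] == v and matching[v] != v for v in vertices)
        if not ok:
            return False  # not a fixed-point-free perfect matching
    # Red-blue cycles all have length 4 iff (blue o red) is an involution.
    return all(blue[red[blue[red[v]]]] == v for v in vertices)

# Gem of a single edge embedded on the sphere: one gem vertex per
# (side, endpoint) flag of the edge.
red    = {0: 1, 1: 0, 2: 3, 3: 2}  # same endpoint, opposite side
blue   = {0: 2, 2: 0, 1: 3, 3: 1}  # same side, opposite endpoint
yellow = {0: 1, 1: 0, 2: 3, 3: 2}  # next edge at the corner (here: the edge itself)
print(is_gem(red, blue, yellow))   # True
```

Contracting the red–yellow cycles {0, 1} and {2, 3} and merging the resulting parallel blue edges recovers the original single-edge graph, as described above.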
The dual graph of a graph-encoded map may be obtained from the map by recoloring it so that the red edges of the gem become blue and the blue edges become red.
References
Topological graph theory | Graph-encoded map | Mathematics | 548 |
4,135,937 | https://en.wikipedia.org/wiki/In-vessel%20composting | In-vessel composting generally describes a group of methods that confine the composting materials within a building, container, or vessel. In-vessel composting systems can consist of metal or plastic tanks or concrete bunkers in which air flow and temperature can be controlled, using the principles of a "bioreactor". Generally the air circulation is metered in via buried tubes that allow fresh air to be injected under pressure, with the exhaust being extracted through a biofilter, with temperature and moisture conditions monitored using probes in the mass to allow maintenance of optimum aerobic decomposition conditions.
This technique is generally used for municipal scale organic waste processing, including final treatment of sewage biosolids, to a stable state with safe pathogen levels, for reclamation as a soil amendment. In-vessel composting can also refer to aerated static pile composting with the addition of removable covers that enclose the piles, as with the system in extensive use by farmer groups in Thailand, supported by the National Science and Technology Development Agency there. In recent years, smaller scale in-vessel composting has been advanced. These can even use common roll-off waste dumpsters as the vessel. The advantage of using roll-off waste dumpsters is their relatively low cost, wide availability, they are highly mobile, often do not need building permits and can be obtained by renting or buying.
Evaluation is ongoing with regard to the health risks associated with compost derived from sewage biosolids—including identifying safe levels of contaminates such as PFASs ("forever chemicals").
Offensive odors are caused by putrefaction (anaerobic decomposition) of nitrogenous animal and vegetable matter gassing off as ammonia. This is controlled with a higher carbon to nitrogen ratio, or increased aeration by ventilation, and use of a coarser grade of carbon material to allow better air circulation. Prevention and capture of any gases naturally occurring (volatile organic compounds) during the hot aerobic composting involved is the objective of the biofilter, and as the filtering material saturates over time, it can be used in the composting process and replaced with fresh material.
More advanced system designs can limit the odor issues considerably, and can also raise the total energy and resource output by integrating in-vessel composting with anaerobic digestion. Through anaerobic decomposition it is also possible to reduce pathogen levels similarly to that of traditional aerated composting when the anaerobic bioreactors operate at thermophilic temperatures.
Gallery
See also
Aerated static pile composting
Anaerobic digestion
Compost
List of solid waste treatment technologies
Mechanical biological treatment
Waste management
Windrow composting
References
Industrial composting
Waste treatment technology | In-vessel composting | Chemistry,Engineering | 575 |
18,321,202 | https://en.wikipedia.org/wiki/Cyprodime | Cyprodime is an opioid antagonist from the morphinan family of drugs.
Cyprodime is a selective opioid antagonist which blocks the μ-opioid receptor, but without affecting the δ-opioid or κ-opioid receptors. This makes it useful for scientific research as it allows the μ-opioid receptor to be selectively deactivated so that the actions of the δ and κ receptors can be studied separately, in contrast to better known opioid antagonists such as naloxone which block all three opioid receptor subtypes.
See also
Tianeptine, an atypical, selective MOR full agonist, licensed for major depression since 1989.
Samidorphan, an opioid antagonist preferring the MOR, which is under development for major depression.
References
Synthetic opioids
Morphinans
Mu-opioid receptor antagonists
Ketones
Ethers
Phenol ethers | Cyprodime | Chemistry | 198 |
49,280,110 | https://en.wikipedia.org/wiki/Frost%20damage%20%28construction%29 | Frost damage is caused by moisture freezing in the construction. Frost damage can occur as cracks, stone splinters and swelling of the material.
When water freezes, its volume increases by 9%. When the volumetric moisture content exceeds 91%, the volume increase of water in the pores of the material caused by freezing can no longer be absorbed by sufficient empty pores. This causes an increase in the internal pressure. If this pressure exceeds the tensile strength of the material, then micro-cracks occur. Visible frost damage develops after an accumulation of micro-cracks as a result of several freeze-thaw cycles.
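The 91% threshold follows directly from the 9% expansion: a pore filled to a water fraction s contains 1.09·s of ice after freezing, which still fits within the pore as long as

1.09·s ≤ 1, i.e. s ≤ 1/1.09 ≈ 0.917

so above roughly 91% volumetric saturation the ice must press against the pore walls.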
Frost damage can be prevented by the use of frost-proof materials, i.e., a material which has sufficient closed pores, by which the volume increase caused by the freezing of water in capillary pores can be absorbed by the ice-free closed pores.
Concrete
Frost damage of early-age concrete is particularly harmful for the concrete mechanical resistance because the ice volume expansion causes micro-cracks in the concrete structures, and as a consequence it lowers the compressive strength of concrete. Therefore, when concreting at cold temperatures cannot be avoided, it is essential to have a minimum curing time at a temperature sufficiently above the freezing point of the concrete pore water, so that the early strength of the concrete is high enough to resist the internal tensile stress caused by water freezing.
See also
Frost weathering
Ice thermal expansion
Sources
References
Bibliography
Goesten A.J.P.M. (2016). Hygrothermal simulation model: Damage as a result of insulating historical buildings. Eindhoven University of Technology.
ter Bekke T. (2001). Vochttransport in monumentaal metselwerk [Moisture transport in monumental masonry]. Eindhoven University of Technology.
This article is a translation of the corresponding article on the Dutch Wikipedia.
Construction
Concrete
Water ice
Weathering | Frost damage (construction) | Engineering | 388 |
24,415,026 | https://en.wikipedia.org/wiki/Hopkinson%20and%20Imperial%20Chemical%20Industries%20Professor%20of%20Applied%20Thermodynamics | The Hopkinson and Imperial Chemical Industries Professorship of Applied Thermodynamics at the University of Cambridge was established on 10 February 1950, largely from the endowment fund of the proposed Hopkinson Professorship in Thermodynamics and a gift from ICI Limited of £50,000, less tax, spread over the seven years from 1949 to 1955. The professorship is assigned primarily to the Faculty of Engineering.
The chair is named in honour of John Hopkinson, whose widow originally endowed a lectureship in thermodynamics in the hope that it would eventually be upgraded to a professorship.
List of Hopkinson and Imperial Chemical Industries Professors of Applied Thermodynamics
1951 - 1980 Sir William Rede Hawthorne
1980 - 1983 John Arthur Shercliff
1985 - 1997 Kenneth Noel Corbett Bray
1998 - 2015 John Bernard Young
2015–present Epaminondas Mastorakos
References
Engineering education in the United Kingdom
Imperial Chemical Industries
Applied Thermodynamics, Hopkinson and Imperial Chemical Industries
School of Technology, University of Cambridge
1950 establishments in the United Kingdom | Hopkinson and Imperial Chemical Industries Professor of Applied Thermodynamics | Physics,Chemistry | 225 |
51,295,111 | https://en.wikipedia.org/wiki/Schema-agnostic%20databases | Schema-agnostic databases or vocabulary-independent databases aim to abstract users away from the representation of the data, supporting automatic semantic matching between queries and databases. Schema-agnosticism is the property of a database of automatically mapping a query issued in the user's terminology and structure to the dataset vocabulary.
The increase in the size and in the semantic heterogeneity of database schemas bring new requirements for users querying and searching structured data. At this scale it can become unfeasible for data consumers to be familiar with the representation of the data in order to query it. At the center of this discussion is the semantic gap between users and databases, which becomes more central as the scale and complexity of the data grows.
Description
The evolution of data environments towards the consumption of data from multiple data sources and the growth in the schema size, complexity, dynamicity and decentralisation (SCoDD) of schemas increases the complexity of contemporary data management. The SCoDD trend emerges as a central data management concern in Big Data scenarios, where users and applications have a demand for more complete data, produced by independent data sources, under different semantic assumptions and contexts of use, which is the typical scenario for Semantic Web Data applications.
The evolution of databases in the direction of heterogeneous data environments strongly impacts the usability, semiotics and semantic assumptions behind existing data accessibility methods such as structured queries, keyword-based search and visual query systems. With schema-less databases containing potentially millions of dynamically changing attributes, it becomes unfeasible for some users to become aware of the 'schema' or vocabulary in order to query the database. At this scale, the effort in understanding the schema in order to build a structured query can become prohibitive.
Schema-agnostic queries
Schema-agnostic queries can be defined as query approaches over structured databases which allow users to satisfy complex information needs without understanding the representation (schema) of the database. Similarly, Tran et al. define it as "search approaches, which do not require users to know the schema underlying the data". Approaches such as keyword-based search over databases allow users to query databases without employing structured queries. However, as discussed by Tran et al.: "From these points, users however have to do further navigation and exploration to address complex information needs. Unlike keyword search used on the Web, which focuses on simple needs, the keyword search elaborated here is used to obtain more complex results. Instead of a single set of resources, the goal is to compute complex sets of resources and their relations."
The development of approaches to support natural language interfaces (NLI) over databases have aimed towards the goal of schema-agnostic queries. Complementarily, some approaches based on keyword search have targeted keyword-based queries which express more complex information needs. Other approaches have explored the construction of structured queries over databases where schema constraints can be relaxed. All these approaches (natural language, keyword-based search and structured queries) have targeted different degrees of sophistication in addressing the problem of supporting a flexible semantic matching between queries and data, which vary from the complete absence of any semantic concern to more principled semantic models.
While the demand for schema-agnosticism has been an implicit requirement across semantic search and natural language query systems over structured data, it is not sufficiently individuated as a concept and as a necessary requirement for contemporary database management systems. Recent works have started to define and model the semantic aspects involved on schema-agnostic queries.
Schema-agnostic structured queries
Consist of schema-agnostic queries following the syntax of a structured standard (for example SQL, SPARQL). The syntax and semantics of operators are maintained, while different terminologies are used.
Example 1
SELECT ?y {
BillClinton hasDaughter ?x .
?x marriedTo ?y .
}
which maps to the following SPARQL query in the dataset vocabulary:
PREFIX : <http://dbpedia.org/resource/>
PREFIX dbpedia2: <http://dbpedia.org/property/>
PREFIX dbpedia: <http://dbpedia.org/ontology/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?y {
:Bill_Clinton dbpedia:child ?x .
?x dbpedia2:spouse ?y .
}
Example 2
SELECT ?x {
?x isA book .
?x by William_Goldman .
?x has_pages ?p .
FILTER (?p > 300)
}
which maps to the following SPARQL query in the dataset vocabulary:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX : <http://dbpedia.org/resource/>
PREFIX dbpedia2: <http://dbpedia.org/property/>
PREFIX dbpedia: <http://dbpedia.org/ontology/>
SELECT ?x {
?x rdf:type dbpedia:Book .
?x dbpedia2:author :William_Goldman .
?x dbpedia:numberOfPages ?p .
FILTER(?p > 300)
}
Schema-agnostic keyword queries
Consist of schema-agnostic queries using keyword queries. In this case the syntax and semantics of operators are different from the structured query syntax; a sketch of the vocabulary-mapping step follows the examples below.
Example
"Bill Clinton daughter married to"
"Books by William Goldman with more than 300 pages"
Semantic complexity
As of 2016 the concept of schema-agnostic queries has been developed primarily in academia. Most schema-agnostic query systems have been investigated in the context of natural language interfaces over databases or over the Semantic Web. These works explore the application of semantic parsing techniques over large, heterogeneous and schema-less databases.
More recently, the individuation of the concept of schema-agnostic query systems and databases have appeared more explicitly within the literature. Freitas et al. provide a probabilistic model on the semantic complexity of mapping schema-agnostic queries.
References
Artificial intelligence
Computer data
Database management systems | Schema-agnostic databases | Technology | 1,358 |
9,132,604 | https://en.wikipedia.org/wiki/Eli%20Lilly%20%26%20Co.%20v.%20Medtronic%2C%20Inc. | Eli Lilly and Company v. Medtronic, Inc., 496 U.S. 661 (1990), is a United States Supreme Court case related to patent infringement in the medical device industry. It held that 35 U.S.C. § 271(e)(1) of United States patent law exempted premarketing activity conducted to gain approval of a device under the Federal Food, Drug, and Cosmetic Act from a finding of infringement.
See also
Medtronic, Inc. v. Lohr (1996)
Riegel v. Medtronic, Inc. (2008)
List of United States Supreme Court cases, volume 496
References
External links
United States biotechnology case law
Eli Lilly and Company
United States Supreme Court cases
United States Supreme Court cases of the Rehnquist Court
United States patent case law
1990 in United States case law
Medtronic litigation | Eli Lilly & Co. v. Medtronic, Inc. | Biology | 171 |
71,674,924 | https://en.wikipedia.org/wiki/Albert%20M.%20Gessler | Albert M. Gessler (1919 – 18 May 2003) was an ExxonMobil research chemist known for the development of elastomeric thermoplastics.
Personal
Gessler was a resident of Cranford, New Jersey for 58 years. He was active in civic life, founding Cranford's recycling program in 1971. He worked to establish Cranford's Conservation Center, chairing the Environmental Commission for several years. Gessler served as a leader in the Boy Scouts for more than 20 years. He received the Silver Beaver award in 1962. In 1999, the mayor of Cranford recognized Gessler's positive community impact with a resolution of Grateful Appreciation.
Education
Gessler completed his Bachelor of Arts in chemistry at Cornell University in 1941.
Career
Gessler began his career at Esso, joining in 1942. His most cited work is a 1959 patent regarding a process for preparing a vulcanized blend of crystalline polypropylene and chlorinated butyl rubber. He was a mentor to Edward Kresge and a coworker of William J. Sparks. He studied the chemical interaction between carbon black and various polymers. He served as chairman of the New York Rubber Group in 1966. He was an organizer of the 1971 Gordon Conference on Elastomers. He is credited among the chief organizers of the popular text Science and Technology of Rubber. He was the 1986 recipient of the Melvin Mooney Distinguished Technology Award. At his retirement, with 38 years of service, his title was senior research chemist.
References
1919 births
2003 deaths
Polymer scientists and engineers
20th-century American engineers
People from Cranford, New Jersey
Cornell University alumni
ExxonMobil people | Albert M. Gessler | Chemistry,Materials_science | 339 |
1,731,024 | https://en.wikipedia.org/wiki/Tetrahedral%20carbonyl%20addition%20compound | A tetrahedral intermediate is a reaction intermediate in which the bond arrangement around an initially double-bonded carbon atom has been transformed from trigonal to tetrahedral. Tetrahedral intermediates result from nucleophilic addition to a carbonyl group. The stability of a tetrahedral intermediate depends on the ability of the groups attached to the new tetrahedral carbon atom to leave with the negative charge. Tetrahedral intermediates are very significant in organic syntheses and biological systems as key intermediates in esterification, transesterification, ester hydrolysis, formation and hydrolysis of amides and peptides, hydride reductions, and other chemical reactions.
History
One of the earliest accounts of the tetrahedral intermediate came from Rainer Ludwig Claisen in 1887. In the reaction of benzyl benzoate with sodium methoxide, and methyl benzoate with sodium benzyloxide, he observed a white precipitate which under acidic conditions yields benzyl benzoate, methyl benzoate, methanol, and benzyl alcohol. He proposed that the two reactions proceed through a likely common intermediate.
Victor Grignard assumed the existence of unstable tetrahedral intermediate in 1901, while investigating the reaction of esters with organomagnesium reagents.
The first evidence for tetrahedral intermediates in the substitution reactions of carboxylic derivatives was provided by Myron L. Bender in 1951. He labeled carboxylic acid derivatives with the oxygen isotope 18O and reacted these derivatives with water to make labeled carboxylic acids. At the end of the reaction he found that the remaining starting material had a decreased proportion of labeled oxygen, which is consistent with the existence of the tetrahedral intermediate.
Reaction mechanism
The nucleophilic attack on the carbonyl group proceeds via the Bürgi-Dunitz trajectory. The angle between the line of nucleophilic attack and the C-O bond is greater than 90˚ due to a better orbital overlap between the HOMO of the nucleophile and the π* LUMO of the C-O double bond.
Structure of tetrahedral intermediates
General features
Although tetrahedral intermediates are usually transient, many compounds of this general structure are known. The reactions of aldehydes, ketones, and their derivatives frequently have a detectable tetrahedral intermediate, while for the reactions of derivatives of carboxylic acids this is not the case. At the oxidation level of carboxylic acid derivatives, groups such as OR, OAr, NR2, or Cl are conjugated with the carbonyl group, which means that addition to the carbonyl group is thermodynamically less favored than addition to the corresponding aldehyde or ketone. Stable tetrahedral intermediates of carboxylic acid derivatives do exist and they usually possess at least one of the following four structural features:
polycyclic structures (e.g. tetrodotoxin)
compounds with a strong electron-withdrawing group attached to the acyl carbon (e.g. N,N-dimethyltrifluoroacetamide)
compounds with donor groups that are poorly conjugated with the potential carbonyl group (e.g. cyclol)
compounds with sulfur atoms bonded to the anomeric centre
These compounds were used to study the kinetics of tetrahedral intermediate decomposition into its respective carbonyl species, and to measure the IR, UV, and NMR spectra of the tetrahedral adduct.
X-ray crystal structure determination
The first X-ray crystal structures of tetrahedral intermediates were obtained in 1973 from bovine trypsin crystallized with bovine pancreatic trypsin inhibitor, and in 1974 from porcine trypsin crystallized with soybean trypsin inhibitor. In both cases the tetrahedral intermediate is stabilized in the active sites of enzymes, which have evolved to stabilize the transition state of peptide hydrolysis.
Some insight into the structure of the tetrahedral intermediate can be obtained from the crystal structure of N-brosylmitomycin A, crystallized in 1967. The tetrahedral carbon C17 forms a 136.54 pm bond with O3, which is shorter than the C8-O3 bond (142.31 pm). In contrast, the C17-N2 bond (149.06 pm) is longer than the N1-C1 bond (148.75 pm) and the N1-C11 bond (147.85 pm) due to donation of the O3 lone pair into the σ* orbital of C17-N2. This model, however, is forced into a tetracyclic skeleton, and the tetrahedral O3 is methylated, which makes it a poor model overall.
The more recent X-ray crystal structure of 1-aza-3,5,7-trimethyladamantan-2-one is a good model for the cationic tetrahedral intermediate. The C1-N1 bond is rather long [155.2(4) pm], and the C1-O1(2) bonds are shortened [138.2(4) pm]. The protonated nitrogen atom N1 is an excellent amine leaving group.
In 2002 David Evans et al. observed a very stable neutral tetrahedral intermediate in the reaction of N-acylpyrroles with organometallic compounds, followed by protonation with ammonium chloride producing a carbinol. The C1-N1 bond [147.84(14) pm] is longer than the usual Csp3-Npyrrole bond which range from 141.2-145.8 pm. In contrast, the C1-O1 bond [141.15(13) pm] is shorter than the average Csp3-OH bond which is about 143.2 pm. The elongated C1-N1, and shortened C1-O1 bonds are explained with an anomeric effect resulting from the interaction of the oxygen lone pairs with the σ*C-N orbital. Similarly, an interaction of an oxygen lone pair with σ*C-C orbital should be responsible for the lengthened C1-C2 bond [152.75(15) pm] compared to the average Csp2-Csp2 bonds which are 151.3 pm. Also, the C1-C11 bond [152.16(17) pm] is slightly shorter than the average Csp3-Csp3 bond which is around 153.0 pm.
Stability of tetrahedral intermediates
Acetals and hemiacetals
Hemiacetals and acetals are essentially tetrahedral intermediates. They form when nucleophiles add to a carbonyl group, but unlike tetrahedral intermediates they can be very stable and used as protective groups in synthetic chemistry. A very well known reaction occurs when acetaldehyde is dissolved in methanol, producing a hemiacetal. Most hemiacetals are unstable with respect to their parent aldehydes and alcohols. For example, the equilibrium constant for reaction of acetaldehyde with simple alcohols is about 0.5, where the equilibrium constant is defined as K = [hemiacetal]/[aldehyde][alcohol]. Hemiacetals of ketones (sometimes called hemiketals) are even less stable than those of aldehydes. However, cyclic hemiacetals and hemiacetals bearing electron withdrawing groups are stable. Electron-withdrawing groups attached to the carbonyl atom shift the equilibrium constant toward the hemiacetal. They increase the polarization of the carbonyl group, which already has a positively polarized carbonyl carbon, and make it even more prone to attack by a nucleophile. The chart below shows the extent of hydration of some carbonyl compounds. Hexafluoroacetone is probably the most hydrated carbonyl compound possible. Formaldehyde reacts with water so readily because its substituents are very small- a purely steric effect.
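Given the equilibrium constant defined above, the equilibrium extent of hemiacetal formation follows from a quadratic in the converted concentration. The sketch below (Python; the concentrations chosen are illustrative, not from the article) works through the mass balance K(a0 − x)(b0 − x) = x:

import math

def hemiacetal_fraction(K, a0, b0):
    """Fraction of aldehyde (initial conc. a0) converted to hemiacetal at
    equilibrium with alcohol (initial conc. b0), taken as the smaller,
    physical root of K*x**2 - (K*(a0 + b0) + 1)*x + K*a0*b0 = 0."""
    A, B, C = K, -(K * (a0 + b0) + 1.0), K * a0 * b0
    x = (-B - math.sqrt(B * B - 4 * A * C)) / (2 * A)
    return x / a0

# K ~ 0.5 for acetaldehyde with a simple alcohol; 1 M aldehyde in 10 M alcohol:
print(f"{hemiacetal_fraction(0.5, 1.0, 10.0):.0%}")  # ~82% converted

Even with K below 1, a large excess of alcohol pushes most of the aldehyde to the hemiacetal, which is why the reaction is noticeable when acetaldehyde is dissolved in neat methanol.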
Cyclopropanones, three-membered ring ketones, are also hydrated to a significant extent. Since three-membered rings are very strained (bond angles forced to be 60˚), sp3 hybridization is more favorable than sp2 hybridization. For the sp3 hybridized hydrate the bonds have to be distorted by about 49˚, while for the sp2 hybridized ketone the bond angle distortion is about 60˚. So the addition to the carbonyl group allows some of the strain inherent in the small ring to be released, which is why cyclopropanone and cyclobutanone are very reactive electrophiles. For larger rings, where the bond angles are not as distorted, the stability of the hemiacetals is due to entropy and the proximity of the nucleophile to the carbonyl group. Formation of an acyclic acetal involves a decrease in entropy because two molecules are consumed for every one produced. In contrast, the formation of cyclic hemiacetals involves a single molecule reacting with itself, making the reaction more favorable. Another way to understand the stability of cyclic hemiacetals is to look at the equilibrium constant as the ratio of the forward and backward reaction rate. For a cyclic hemiacetal the reaction is intramolecular so the nucleophile is always held close to the carbonyl group ready to attack, so the forward rate of reaction is much higher than the backward rate. Many biologically relevant sugars, such as glucose, are cyclic hemiacetals.
In the presence of acid, hemiacetals can undergo an elimination reaction, losing the oxygen atom that once belonged to the parent aldehyde’s carbonyl group. These oxonium ions are powerful electrophiles, and react rapidly with a second molecule of alcohol to form new, stable compounds, called acetals. The whole mechanism of acetal formation from hemiacetal is drawn below.
Acetals, as already pointed out, are stable tetrahedral intermediates so they can be used as protective groups in organic synthesis. Acetals are stable under basic conditions, so they can be used to protect ketones from a base. The acetal group is hydrolyzed under acidic conditions. An example with a dioxolane protecting group is given below.
Weinreb amides
Weinreb amides are N-methoxy-N-methylcarboxylic acid amides. Weinreb amides are reacted with organometallic compounds to give, on protonation, ketones (see Weinreb ketone synthesis). It is generally accepted that the high yields of ketones are due to the high stability of the chelated five-membered ring intermediate. Quantum mechanical calculations have shown that the tetrahedral adduct is formed easily and it is fairly stable, in agreement with the experimental results. The very facile reaction of Weinreb amides with organolithium and Grignard reagents results from the chelate stabilization in the tetrahedral adduct and, more importantly, the transition state leading to the adduct. The tetrahedral adducts are shown below.
Applications in biomedicine
Drug design
A solvated ligand that binds the protein of interest is likely to exist as an equilibrium mixture of several conformers. Likewise the solvated protein also exists as several conformers in equilibrium. Formation of protein-ligand complex includes displacement of the solvent molecules that occupy the binding site of the ligand, to produce a solvated complex. Because this necessarily means that the interaction is entropically disfavored, highly favorable enthalpic contacts between the protein and the ligand must compensate for the entropic loss. The design of new ligands is usually based on the modification of known ligands for the target proteins. Proteases are enzymes that catalyze hydrolysis of a peptide bond. These proteins have evolved to recognize and bind the transition state of peptide hydrolysis reaction which is a tetrahedral intermediate. Therefore, the main protease inhibitors are tetrahedral intermediate mimics having an alcohol or a phosphate group. Examples are saquinavir, ritonavir, pepstatin, etc.
Enzymatic activity
Stabilization of tetrahedral intermediates inside of the enzyme active site has been investigated using tetrahedral intermediate mimics. The specific binding forces involved in stabilizing the transition state have been described crystallographically. In the mammalian serine proteases, trypsin and chymotrypsin, two peptide NH groups of the polypeptide backbone form the so-called oxyanion hole by donating hydrogen bonds to the negatively charged oxygen atom of the tetrahedral intermediate. A simple diagram describing the interaction is shown below.
References
Reactive intermediates
A
C | Tetrahedral carbonyl addition compound | Chemistry | 2,664 |
3,048,850 | https://en.wikipedia.org/wiki/Sodium%20trimetaphosphate | Sodium trimetaphosphate (also STMP), with formula Na3P3O9, is one of the metaphosphates of sodium. Both the anhydrous salt and the hexahydrate (Na3P3O9·6H2O) are well known. It is the sodium salt of trimetaphosphoric acid. It is a colourless solid that finds specialised applications in the food and construction industries.
Although drawn with a particular resonance structure, the trianion has high symmetry.
Synthesis and reactions
Trisodium trimetaphosphate is produced industrially by heating sodium dihydrogen phosphate to 550 °C, a method first developed in 1955:

3 NaH2PO4 → Na3P3O9 + 3 H2O

The trimetaphosphate dissolves in water and is precipitated by the addition of sodium chloride (common ion effect), affording the hexahydrate. STMP can also be prepared by heating samples of sodium polyphosphate, or by a thermal reaction of orthophosphoric acid and sodium chloride at 600 °C.
Hydrolysis of the ring leads to the acyclic sodium triphosphate:
Na3P3O9 + H2O → H2Na3P3O10
The analogous reaction of the metatriphosphate anion involves ring-opening by amine nucleophiles.
References
Food additives
Sodium compounds
Metaphosphates | Sodium trimetaphosphate | Chemistry | 275 |
3,012,448 | https://en.wikipedia.org/wiki/Nitrogen%20balance | In human physiology, nitrogen balance is the net difference between bodily nitrogen intake (ingestion) and loss (excretion). It can be represented as the following:

nitrogen balance = nitrogen intake − nitrogen loss
Nitrogen is a fundamental chemical component of amino acids, the molecular building blocks of protein. As such, nitrogen balance may be used as an index of protein metabolism. When more nitrogen is gained than lost by an individual, they are considered to have a positive nitrogen balance and be in a state of overall protein anabolism. In contrast, a negative nitrogen balance, in which more nitrogen is lost than gained, indicates a state of overall protein catabolism.
The body obtains nitrogen from dietary protein, sources of which include meat, fish, eggs, dairy products, nuts, legumes, cereals, and grains. Nitrogen loss occurs largely through urine in the form of urea, as well as through faeces, sweat, and growth of hair and skin.
Blood urea nitrogen and urine urea nitrogen tests can be used to estimate nitrogen balance.
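A common clinical estimate combines these two measurements. The sketch below (Python) uses the usual textbook constants rather than anything stated in this article: dietary protein is taken to be about 16% nitrogen (hence the 6.25 divisor), and roughly 4 g/day is added to the urinary urea nitrogen (UUN) to account for faecal, skin and other insensible losses:

def nitrogen_balance(protein_intake_g, uun_g):
    """Estimated nitrogen balance in g/day."""
    nitrogen_in = protein_intake_g / 6.25   # g nitrogen from dietary protein
    nitrogen_out = uun_g + 4.0              # UUN plus assumed insensible losses
    return nitrogen_in - nitrogen_out

# 90 g/day protein intake with 12 g/day urinary urea nitrogen:
print(round(nitrogen_balance(90, 12), 2))  # -1.6 g/day, a net catabolic state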
Physiological and clinical implications
Positive nitrogen balance is associated with periods of growth, hypothyroidism, tissue repair, and pregnancy.
Negative nitrogen balance is associated with burns, serious tissue injuries, fever, hyperthyroidism, wasting diseases, and periods of fasting. A negative nitrogen balance can be used as part of a clinical evaluation of malnutrition.
Nitrogen balance is a method traditionally used to measure dietary protein requirements. This approach necessitates the meticulous collection of all nitrogen inputs and outputs to ensure comprehensive accounting of nitrogen exchanges. Nitrogen balance studies typically involve controlled dietary conditions, requiring participants to consume specific diets to determine total nitrogen intake precisely. Furthermore, participants often must remain at the study location for the duration of the study to facilitate the collection of all nitrogen losses. Physical exercise is also known to influence nitrogen excretion, adding another variable that requires control during these studies. Due to the stringent conditions required for accurate results, the nitrogen balance method may pose challenges when studying dietary protein requirements across different demographics, such as children.
See also
Protein (nutrient)
Biological value
Net protein utilization
Protein efficiency ratio
Protein digestibility
Protein Digestibility Corrected Amino Acid Score
References
External links
(with clinical information & interpretation related to nitrogen balance and its clinical testing)
Nitrogen
Proteins | Nitrogen balance | Chemistry | 467 |
32,174,433 | https://en.wikipedia.org/wiki/Akira%20Fujii | was a Japanese astrophotographer and astronomer. PBS has described him as "the world's foremost wide-angle astrophotographer".
Fujii graduated from Tama Art University in 1961, and began working at observatories, producing a substantial bibliography of general-audience astronomy books. In 1974, Fujii began Japan's first star party, the "Invitation to Starlit Skies", which he hosted on Mount Azuma until 1984.
Fujii's work is marketed by David Malin; he collaborated with Serge Brunier in the production of 2001's Great Atlas of the Stars.
The main-belt asteroid 3872 Akirafujii is named in his honor.
Fujii died on 28 December 2022, at the age of 81.
References
External links
David Ratledge analyzes the "Akira Fujii effect", at Deep-Sky.co.uk
1941 births
2022 deaths
20th-century Japanese astronomers
21st-century Japanese astronomers
People from Yamaguchi Prefecture
Astrophotographers
20th-century Japanese photographers
21st-century Japanese photographers | Akira Fujii | Astronomy | 221 |
36,100,009 | https://en.wikipedia.org/wiki/Shove%20knife | A shove knife is a forcible entry tool used mainly by firefighters. It generally consists of a small, semi-rigid piece of 10-gauge steel with an indented end. The device is inserted between a door and the door frame, above the spring latch, on outwardly-swinging doors equipped with key-in-the-knob locks. The tool is pulled down and outward, releasing the locking mechanism.
References
Firefighter tools
Hand tools | Shove knife | Engineering | 91 |
62,720,164 | https://en.wikipedia.org/wiki/Hard%20carbon | Hard carbon is a solid form of carbon that cannot be converted to graphite by heat-treatment, even at temperatures as high as 3000 °C. It is also known as char, or non-graphitizing carbon. More colloquially it can be described as charcoal.
Hard carbon is produced by heating carbonaceous precursors to approximately 1000 °C in the absence of oxygen. Among the precursors for hard carbon are polyvinylidene chloride (PVDC), lignin and sucrose. Other precursors, such as polyvinyl chloride (PVC) and petroleum coke, produce soft carbon, or graphitizing carbon. Soft carbon can be readily converted to graphite by heating to 3000 °C.
The physical properties of the two classes of carbons are quite different. Hard carbon is a low density material, with extremely high microporosity, while soft carbon has little microporosity. Hard carbon is extensively used as anode materials in lithium-ion batteries and sodium-ion batteries.
Manufacturers of hard carbon include Xiamen Tob New Energy (China), Kuraray (Japan) and Stora Enso (Finland).
See also
Carbon
Graphitizing and non-graphitizing carbons
Carbonization
Graphite
References
External links
Investigating hard carbons for battery materials
Researchers use hard carbon for anode in sodium-ion batteries
Allotropes of carbon | Hard carbon | Chemistry | 279 |
992,626 | https://en.wikipedia.org/wiki/NGC%20891 | NGC 891 (also known as Caldwell 23, the Silver Sliver Galaxy, and the Outer Limits Galaxy) is an edge-on unbarred spiral galaxy about 30 million light-years away in the constellation Andromeda. It was discovered by William Herschel on October 6, 1784. The galaxy is a member of the NGC 1023 group of galaxies in the Local Supercluster. It has an H II nucleus.
The object is visible in small to moderate size telescopes as a faint elongated smear of light with a dust lane visible in larger apertures.
In 1999, the Hubble Space Telescope imaged NGC 891 in infrared.
In 2005, due to its attractiveness and scientific interest, NGC 891 was selected to be the first light image of the Large Binocular Telescope.
In 2012, it was again used as a first light image of the Lowell Discovery Telescope with the Large Monolithic Imager.
Supernova SN 1986J was discovered on August 21, 1986 at apparent magnitude 14.
Properties
NGC 891 looks as the Milky Way would look when viewed edge-on (some astronomers have even noted how similar NGC 891 looks to our galaxy as seen from the Southern Hemisphere), and, in fact, both galaxies are considered very similar in terms of luminosity and size; studies of the dynamics of its molecular hydrogen have also indicated the likely presence of a central bar.
Despite this, recent high-resolution images of its dusty disk show unusual filamentary patterns extending into the halo of the galaxy, away from its galactic disk. Scientists presume that supernova explosions caused this interstellar dust to be thrown out of the galactic disk toward the halo. Light pressure from surrounding stars may also contribute to this phenomenon.
The galaxy is a member of a small group of galaxies, sometimes called the NGC 1023 Group. Other galaxies in this group are the NGCs 925, 949, 959, 1003, 1023, and 1058, and the UGCs 1807, 1865 (DDO 19), 2014 (DDO 22), 2023 (DDO 25), 2034 (DDO 24), and 2259. Its outskirts are populated by multiple low-surface brightness, coherent, and vast substructures, like giant streams that loop around the parent galaxy up to distances of approximately 50 kpc. The bulge and the disk are surrounded by a flat and thick cocoon-like stellar structure. These have vertical and radial distances of up to 15 kpc and 40 kpc, respectively, and are interpreted as the remnant of a satellite galaxy disrupted and in the process of being absorbed by NGC 891.
In popular culture
NGC 891 appears alongside M67, the Sombrero Galaxy, the Pinwheel Galaxy, NGC 5128, NGC 1300, M81, and the Andromeda Galaxy in the end credits of the Outer Limits TV series, which is why it is occasionally called the Outer Limits Galaxy.
The soundtrack of the 1974 film Dark Star by John Carpenter features a muzak-style instrumental piece called "When Twilight Falls on NGC 891".
The first solo album by Edgar Froese, Aqua, also released in 1974, contained a track called "NGC 891". Side 2 of the album, which included this track, was unusual in having been a rare example of a commercially issued piece of music recorded using the artificial head system.
See also
Messier 82
References
External links
APOD: Interstellar Dust-Bunnies of NGC 891 (9/9/1999)
SEDS: Information on NGC 891
NGC 891 on Astrophotography by Wolfgang Kloehr
Unbarred spiral galaxies
Andromeda (constellation)
0891
01831
09031
023b
NGC 1023 Group | NGC 891 | Astronomy | 779 |
77,814,460 | https://en.wikipedia.org/wiki/List%20of%20Oceanian%20regions%20by%20life%20expectancy | This is a list of Oceanian regions according to estimates by the Global Data Lab, as of 15 October 2024. By default, regions within each country are sorted by overall life expectancy in 2022. Countries are ordered by their region most favorable for life expectancy.
See also
References
Life expectancy
Oceania | List of Oceanian regions by life expectancy | Biology | 65 |
574,544 | https://en.wikipedia.org/wiki/Circular%20motion | In physics, circular motion is movement of an object along the circumference of a circle or rotation along a circular arc. It can be uniform, with a constant rate of rotation and constant tangential speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves the circular motion of its parts. The equations of motion describe the movement of the center of mass of a body, which remains at a constant distance from the axis of rotation. In circular motion, the distance between the body and a fixed point on its surface remains the same, i.e., the body is assumed rigid.
Examples of circular motion include: special satellite orbits around the Earth (circular orbits), a ceiling fan's blades rotating around a hub, a stone that is tied to a rope and is being swung in circles, a car turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, and a gear turning inside a mechanism.
Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion.
Uniform circular motion
In physics, uniform circular motion describes the motion of a body traversing a circular path at a constant speed. Since the body describes circular motion, its distance from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not constant: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration; this centripetal acceleration is of constant magnitude and directed at all times toward the axis of rotation. This acceleration is, in turn, produced by a centripetal force which is also constant in magnitude and directed toward the axis of rotation.
In the case of rotation around a fixed axis of a rigid body that is not negligibly small compared to the radius of the path, each particle of the body describes a uniform circular motion with the same angular velocity, but with velocity and acceleration varying with the position with respect to the axis.
Formula
For motion in a circle of radius r, the circumference of the circle is C = 2πr. If the period for one rotation is T, the angular rate of rotation, also known as angular velocity, ω, is:

ω = 2π/T = dθ/dt

and the units are radians/second.
The speed of the object traveling the circle is:

v = 2πr/T = ωr
The angle θ swept out in a time t is:

θ = 2πt/T = ωt
The angular acceleration, α, of the particle is:

α = dω/dt

In the case of uniform circular motion, α will be zero.
The acceleration due to change in the direction is:

a = v²/r = ω²r
The centripetal and centrifugal force can also be found using acceleration:

Fc = m·a = m·v²/r
The vector relationships are shown in Figure 1. The axis of rotation is shown as a vector ω perpendicular to the plane of the orbit and with a magnitude ω = dθ/dt. The direction of ω is chosen using the right-hand rule. With this convention for depicting rotation, the velocity is given by a vector cross product as

v = ω × r

which is a vector perpendicular to both ω and r(t), tangential to the orbit, and of magnitude ωr. Likewise, the acceleration is given by

a = ω × v = ω × (ω × r)

which is a vector perpendicular to both ω and v(t) of magnitude ωv = ω²r and directed exactly opposite to r(t).
In the simplest case the speed, mass, and radius are constant.
Consider a body of one kilogram, moving in a circle of radius one metre, with an angular velocity of one radian per second; the resulting quantities are listed here and checked numerically in the sketch after this list.
The speed is 1 metre per second.
The inward acceleration is 1 metre per square second, .
It is subject to a centripetal force of 1 kilogram metre per square second, which is 1 newton.
The momentum of the body is 1 kg·m·s−1.
The moment of inertia is 1 kg·m2.
The angular momentum is 1 kg·m2·s−1.
The kinetic energy is 0.5 joule.
The circumference of the orbit is 2π (~6.283) metres.
The period of the motion is 2π (~6.283) seconds.
The frequency is (2π)−1 (~0.159) hertz.
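A short numeric check of the list above (plain Python; nothing here goes beyond the formulas already given):

import math

m, r, w = 1.0, 1.0, 1.0            # mass [kg], radius [m], angular velocity [rad/s]

v = w * r                          # speed: 1 m/s
a = v ** 2 / r                     # inward acceleration: 1 m/s^2
F = m * a                          # centripetal force: 1 N
p = m * v                          # momentum: 1 kg·m/s
I = m * r ** 2                     # moment of inertia: 1 kg·m^2
L = I * w                          # angular momentum: 1 kg·m^2/s
E = 0.5 * m * v ** 2               # kinetic energy: 0.5 J
C = 2 * math.pi * r                # circumference: ~6.283 m
T = 2 * math.pi / w                # period: ~6.283 s
f = 1 / T                          # frequency: ~0.159 Hz

print(v, a, F, p, I, L, E, round(C, 3), round(T, 3), round(f, 3))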
In polar coordinates
During circular motion, the body moves on a curve that can be described in the polar coordinate system as a fixed distance R from the center of the orbit taken as the origin, oriented at an angle θ(t) from some reference direction. See Figure 4. The displacement vector r(t) is the radial vector from the origin to the particle location:

r(t) = R·uR(t)

where uR(t) is the unit vector parallel to the radius vector at time t and pointing away from the origin. It is convenient to introduce the unit vector orthogonal to uR as well, namely uθ. It is customary to orient uθ to point in the direction of travel along the orbit.
The velocity is the time derivative of the displacement:

v(t) = dr(t)/dt = R·(duR/dt)

Because the radius of the circle is constant, the radial component of the velocity is zero. The unit vector uR has a time-invariant magnitude of unity, so as time varies its tip always lies on a circle of unit radius, with an angle θ the same as the angle of r(t). If the particle displacement rotates through an angle dθ in time dt, so does uR, describing an arc on the unit circle of magnitude dθ. See the unit circle at the left of Figure 4. Hence:

duR/dt = (dθ/dt)·uθ

where the direction of the change must be perpendicular to uR (or, in other words, along uθ) because any change duR in the direction of uR would change the size of uR. The sign is positive because an increase in dθ implies the object and uR have moved in the direction of uθ. Hence the velocity becomes:

v(t) = R·(duR/dt) = R·(dθ/dt)·uθ = R·ω·uθ
The acceleration of the body can also be broken into radial and tangential components. The acceleration is the time derivative of the velocity:

a(t) = dv(t)/dt = R·(dω/dt)·uθ + R·ω·(duθ/dt)

The time derivative of uθ is found the same way as for uR. Again, uθ is a unit vector and its tip traces a unit circle with an angle that is π/2 + θ. Hence, an increase in angle dθ by r(t) implies uθ traces an arc of magnitude dθ, and as uθ is orthogonal to uR, we have:

duθ/dt = −(dθ/dt)·uR

where a negative sign is necessary to keep duθ orthogonal to uθ. (Otherwise, the angle between duθ and uθ would decrease with an increase in dθ.) See the unit circle at the left of Figure 4. Consequently, the acceleration is:

a(t) = R·(dω/dt)·uθ − R·ω²·uR

The centripetal acceleration is the radial component, which is directed radially inward:

aR(t) = −R·ω²·uR

while the tangential component changes the magnitude of the velocity:

aθ(t) = R·(dω/dt)·uθ
Using complex numbers
Circular motion can be described using complex numbers. Let the x axis be the real axis and the y axis be the imaginary axis. The position of the body can then be given as z, a complex "vector":

z = x + iy = R(cos θ(t) + i sin θ(t)) = R·e^(iθ(t))

where i is the imaginary unit, and θ(t) is the argument of the complex number as a function of time, t.

Since the radius is constant:

Ṙ = R̈ = 0

where a dot indicates differentiation in respect of time.

With this notation, the velocity becomes:

v = ż = iR·θ̇·e^(iθ)

and the acceleration becomes:

a = v̇ = (−θ̇² + i·θ̈)·R·e^(iθ)

The first term is opposite in direction to the displacement vector and the second is perpendicular to it, just like the earlier results shown before.
Velocity
Figure 1 illustrates velocity and acceleration vectors for uniform motion at four different points in the orbit. Because the velocity is tangent to the circular path, no two velocities point in the same direction. Although the object has a constant speed, its direction is always changing. This change in velocity is caused by an acceleration , whose magnitude is (like that of the velocity) held constant, but whose direction also is always changing. The acceleration points radially inwards (centripetally) and is perpendicular to the velocity. This acceleration is known as centripetal acceleration.
For a path of radius r, when an angle θ is swept out, the distance traveled on the periphery of the orbit is s = rθ. Therefore, the speed of travel around the orbit is

v = r·dθ/dt = rω

where the angular rate of rotation is ω. (By rearrangement, ω = v/r.) Thus, v is a constant, and the velocity vector v also rotates with constant magnitude v, at the same angular rate ω.
Relativistic circular motion
In this case, the three-acceleration vector is perpendicular to the three-velocity vector,

u · a = 0

and the square of proper acceleration, expressed as a scalar invariant, the same in all reference frames,

α² = γ⁴a² + γ⁶(u · a)²/c²

becomes the expression for circular motion,

α² = γ⁴a²

or, taking the positive square root and using the three-acceleration, we arrive at the proper acceleration for circular motion:

α = γ²·v²/r
Acceleration
The left-hand circle in Figure 2 is the orbit showing the velocity vectors at two adjacent times. On the right, these two velocities are moved so their tails coincide. Because speed is constant, the velocity vectors on the right sweep out a circle as time advances. For a swept angle dθ = ω·dt the change in v is a vector at right angles to v and of magnitude v·dθ, which in turn means that the magnitude of the acceleration is given by

a = v·dθ/dt = vω = v²/r
Non-uniform circular motion
In non-uniform circular motion, an object moves in a circular path with varying speed. Since the speed is changing, there is tangential acceleration in addition to normal acceleration.
The net acceleration is directed towards the interior of the circle (but does not pass through its center).
The net acceleration may be resolved into two components: tangential acceleration and centripetal acceleration. Unlike tangential acceleration, centripetal acceleration is present in both uniform and non-uniform circular motion.
In non-uniform circular motion, the normal force does not always point to the opposite direction of weight.
The normal force is actually the sum of the radial and tangential forces. The component of weight force is responsible for the tangential force (when we neglect friction). The centripetal force is due to the change in the direction of velocity.
The normal force and weight may also point in the same direction. Both forces can point downwards, yet the object will remain in a circular path without falling down.
The normal force can point downwards. Considering that the object is a person sitting inside a plane moving in a circle, the two forces (weight and normal force) will point down only when the plane reaches the top of the circle. The reason for this is that the normal force is the sum of the tangential force and centripetal force. The tangential force is zero at the top (as no work is performed when the motion is perpendicular to the direction of force). Since weight is perpendicular to the direction of motion of the object at the top of the circle and the centripetal force points downwards, the normal force will point down as well.
From a logical standpoint, a person travelling in that plane will be upside down at the top of the circle. At that moment, the person's seat is actually pushing down on the person, which is the normal force.
The reason why an object does not fall down when subjected to only downward forces is a simple one. Once an object is thrown into the air, there is only the downward gravitational force that acts on the object. That does not mean that once an object is thrown into the air, it will fall instantly. The velocity of the object keeps it up in the air. The first of Newton's laws of motion states that an object's inertia keeps it in motion; since the object in the air has a velocity, it will tend to keep moving in that direction.
A varying angular speed for an object moving in a circular path can also be achieved if the rotating body does not have a homogeneous mass distribution.
One can deduce formulae for the speed, acceleration and jerk by repeatedly differentiating the position vector, allowing all of the variables to depend on time t. Further transformations may involve the radius, the angular coordinate and their corresponding derivatives.
Applications
Solving applications dealing with non-uniform circular motion involves force analysis. With a uniform circular motion, the only force acting upon an object traveling in a circle is the centripetal force. In a non-uniform circular motion, there are additional forces acting on the object due to a non-zero tangential acceleration. Although there are additional forces acting upon the object, the sum of all the forces acting on the object will have to be equal to the centripetal force.
Radial acceleration is used when calculating the total force. Tangential acceleration is not used in calculating total force because it is not responsible for keeping the object in a circular path. The only acceleration responsible for keeping an object moving in a circle is the radial acceleration. Since the sum of all forces is the centripetal force, drawing centripetal force into a free body diagram is not necessary and usually not recommended.
Using Fnet = ma, we can draw free body diagrams to list all the forces acting on an object and then set the sum equal to the centripetal force mv²/r. Afterward, we can solve for whatever is unknown (this can be mass, velocity, radius of curvature, coefficient of friction, normal force, etc.). For example, the visual above showing an object at the top of a semicircle would be expressed as Fc = N + mg = mv²/r.
In a uniform circular motion, the total acceleration of an object in a circular path is equal to the radial acceleration. Due to the presence of tangential acceleration in a non uniform circular motion, that does not hold true any more. To find the total acceleration of an object in a non uniform circular, find the vector sum of the tangential acceleration and the radial acceleration.
Radial acceleration is still equal to v²/r. Tangential acceleration is simply the derivative of the speed at any given point: at = dv/dt. The total acceleration is then the root sum of squares a = √(ar² + at²); this combination of separate radial and tangential accelerations is only correct for circular motion. For general motion within a plane with polar coordinates (r, θ), the Coriolis term ac = 2·(dr/dt)·(dθ/dt) should be added to at, whereas radial acceleration then becomes ar = −v²/r + d²r/dt².
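As a worked illustration (the numbers are invented): a car rounding a 50 m curve at 20 m/s while speeding up at 3 m/s²:

import math

r, v, dv_dt = 50.0, 20.0, 3.0        # radius [m], speed [m/s], tangential accel. [m/s^2]

a_radial = v ** 2 / r                # 8.0 m/s^2, toward the center
a_tangential = dv_dt                 # 3.0 m/s^2, along the velocity
a_total = math.hypot(a_radial, a_tangential)

print(round(a_total, 3))             # 8.544 m/s^2, tilted toward the interior of the curve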
See also
Angular momentum
Equations of motion for circular motion
Fictitious force
Geostationary orbit
Geosynchronous orbit
Pendulum (mechanics)
Reactive centrifugal force
Reciprocating motion
Sling (weapon)
References
External links
Physclips: Mechanics with animations and video clips from the University of New South Wales
Circular Motion – a chapter from an online textbook, Mechanics, by Benjamin Crowell (2019)
Circular Motion Lecture – a video lecture on CM
– an online textbook with different analysis for circular motion
Rotation
Classical mechanics
Motion (physics)
Circles | Circular motion | Physics,Mathematics | 2,834 |
23,956,192 | https://en.wikipedia.org/wiki/Green%20Archer%20%28radar%29 | Green Archer, also called Radar, Field Artillery, No 8, was a widely used British mortar locating radar operating in the X band using a Foster scanner. Developed by EMI after an experimental model by the Royal Radar Establishment, it was in British service from 1962 until 1975 with the Royal Artillery. A self-propelled version was designated FV436 or Radar, FA, No 8 Mk 2. It was replaced by Cymbeline starting in 1975.
Concept
Mortars, using indirect fire, became a major threat to infantry in World War II. It was found that mortar bombs in flight could be detected and tracked by radar. US and UK anti-aircraft radars were used and specialised mortar locating radars appeared at the end of the war, and were used in Korea with varying degrees of success. Hostile mortars had to be accurately located before they could be attacked with indirect fire from guns or mortars. Since hostile mortars moved frequently to avoid return fire it was essential to have a means of locating them to a few tens of metres of accuracy and to be able to respond quickly when they were located.
Previous radar systems used parabolic reflectors or similar systems to produce a narrow beam of radio energy rather like a flashlight beam. This beam was then swung around the sky by moving the entire reflector, with returns, or blips, appearing on the displays when an object was caught in the beam. For tracking mortar shells this was a particularly difficult task, requiring the operator to have the antenna pointed in roughly the right location by estimates based on previous rounds, and then following the shell through its trajectory. Finding was made a bit easier if the beam cone had a large angle, the problem with this was that it reduced the accuracy of location.
The key advance in tracking mortar shells was the Foster scanner, a type of radar antenna. Instead of producing a beam of radio energy, the Foster scanner produced a fan (pie-slice shape). In the case of the Green Archer, the scanner was built in a manner to produce a beam that was less than 1° wide, but rapidly scanned across a 40° wide band in front of the radar. Any object in the scanner's view would appear on the display each time the beam crossed its horizontal bearing. To measure the vertical angle, some other system was required.
Green Archer solved this problem by quickly moving the antenna between two set vertical angles. The scanner was first set so it scanned back and forth near the horizon line. When a mortar shell was seen on the display, the operator used a grease pencil to mark its location. He then pressed a button that quickly raised the scanner so it was pointed at a higher vertical angle. This happened rapidly enough that the bomb would take some time to reach this higher altitude, at which point it would appear on the display again and this second location would also be marked. The operator then placed cursors over the marks and input the plot to the radar's analogue computer.
These two plots, the time between them and the angle between the two beam positions gave two points on a parabolic curve. Such a curve is defined by two points and is a good approximation of a mortar bomb trajectory. Using these, the azimuth of the radar beam centre and the radar's coordinates, the mortar position coordinates were calculated. These could be adjusted to reflect the actual height of the ground.
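The extrapolation the analogue computer performed can be sketched in modern terms. The following Python sketch assumes a drag-free parabolic trajectory and two radar plots already converted to Cartesian coordinates; the function name, coordinates and numbers are all invented for illustration, and the real equipment solved the equivalent geometry with analogue circuitry:

import math

G = 9.81  # gravitational acceleration, m/s^2

def locate_mortar(p1, p2, dt, g=G):
    """Estimate the firing point from two plots of a mortar bomb.

    p1, p2: (x, y, z) bomb positions in metres at the lower and upper
    beam crossings; dt: seconds between the two plots."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt      # horizontal velocity (constant)
    vz1 = (z2 - z1) / dt + 0.5 * g * dt          # vertical velocity at p1
    # time tau before p1 at which the parabola was at ground level (z = 0)
    tau = (-vz1 + math.sqrt(vz1 ** 2 + 2 * g * z1)) / g
    return x1 - vx * tau, y1 - vy * tau

# bomb plotted at 300 m and then 500 m altitude, two seconds apart:
print(locate_mortar((1000.0, 2000.0, 300.0), (1080.0, 2080.0, 500.0), 2.0))
# ~(901.5, 1901.5)

A refinement would correct the recovered point for the actual height of the ground, as noted above.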
Description
Green Archer comprised two units each mounted on a four-wheel trailer with levelling jacks, one unit was the complete radar, the other a fully silenced generator inaudible at 200 m to permit operation in forward areas. The radar unit weighed 2,915 kg and with the antenna in the operating position was 2.9 m high. The radar display was positioned up to about 15 metres from the radar and had a built in simulator for training. Each radar and generator was usually towed by a Humber 1 ton armoured vehicle, or the FV610 version of the Saracen six-wheel armoured vehicle. Each radar section was supported by an electronic repair vehicle which carried a spare for each of the 13 major sub-assemblies in each radar.
Green Archer could locate a medium mortar up to about 10 km away and a heavy mortar out to 17 km, the maximum range. It took about 30 seconds from a mortar firing to producing its location. The radar could also be used "in reverse" to observe and adjust mortar fall of shot and that of guns firing in high angle. It was also capable of surface observation.
In British service it was mostly organised as a radar section of two radars in the locating (G) troop of field regiments. In addition to the radars the section also had a command post and deployed two Listening Posts (LPs). The task of the LPs was to report mortars firing and the area they were in. This told the radar to switch on, and so avoided continuous transmission, as an electronic countermeasure. The other section of the troop provided an artillery intelligence section at brigade headquarters responsible for fighting the brigade's counter-mortar battle.
British Green Archers were successfully used on operations in Borneo, South Arabia and Oman against mortars and for border surveillance in Hong Kong.
Variants
A self-propelled version was fully developed. It was mounted on the cutaway rear of the FV432 APC and designated FV436 or Radar, FA, No 8 Mk 2. It had an automatic radar levelling arrangement using mercury, which was adopted for the Radar, FA No 15 Mk 2, or Cymbeline, that replaced Green Archer. However, it did not enter UK service. Nevertheless, the cutaway hull design was applied to M113 APCs and used by at least three armies.
Raytheon made a similar radar, the AN/MPQ-501, used by the Canadian Army.
Other Users
In addition to the UK, Green Archer was used by the armies of Germany, the Netherlands and Denmark (all self-propelled, mounted on an M-113 chassis), Italy, Israel, South Africa, Sweden, and Switzerland.
Survivors
A complete system, a radar trailer and a generator trailer, is preserved at the Radar Museum at RAF Neatishead in Norfolk.
An M113-mounted Green Archer can be seen in the artillery museum in Varde, Denmark.
A radar trailer, complete with all computer units, is preserved in the South Yorkshire Transport Museum in Rotherham.
Notes
References
Cold War military equipment of the United Kingdom
Military radars of the United Kingdom
Weapon locating radar
British Army equipment
Royal Artillery
Counter-battery radars
Rainbow code | Green Archer (radar) | Technology | 1,324 |
21,946 | https://en.wikipedia.org/wiki/Nutcracker | A nutcracker is a tool designed to open nuts by cracking their shells. There are many designs, including levers, screws, and ratchets. The lever version is also used for cracking lobster and crab shells.
A decorative version, a nutcracker doll, portrays a person whose mouth forms the jaws of the nutcracker.
Functions
Nuts were historically opened using a hammer and anvil, often made of stone. Some nuts such as walnuts can also be opened by hand, by holding the nut in the palm of the hand and applying pressure with the other palm or thumb, or using another nut.
Manufacturers produce modern functional nutcrackers usually somewhat resembling pliers, but with the pivot point at the end beyond the nut, rather than in the middle. These are also used for cracking the shells of crab and lobster to make the meat inside available for eating. Hinged lever nutcrackers, often called a "pair of nutcrackers", may date back to Ancient Greece. By the 14th century in Europe, nutcrackers were documented in England, including in the Canterbury Tales, and in France. The lever design may derive from blacksmiths' pincers. Materials included metals such as silver, cast-iron and bronze, and wood including boxwood, especially those from France and Italy. More rarely, porcelain was used. Many of the wooden carved nutcrackers were in the form of people and animals.
During the Victorian era, fruit and nuts were presented at dinner and ornate and often silver-plated nutcrackers were produced to accompany them on the dinner table. Nuts have long been a popular choice for desserts, particularly throughout Europe. The nutcrackers were placed on dining tables to serve as a fun and entertaining center of conversation while diners awaited their final course. At one time, nutcrackers were actually made of metals such as brass, and it was not until the 1800s in Germany that the popularity of wooden ones began to spread.
The late 19th century saw two shifts in nutcracker production: the rise in figurative and decorative designs, particularly from the Alps where they were sold as souvenirs, and a switch to industrial manufacture, including availability in mail-order catalogues, rather than artisan production. After the 1960s, the availability of pre-shelled nuts led to a decline in ownership of nutcrackers and a fall in the tradition of nuts being put in children's Christmas stockings.
Alternative designs
In the 17th century, screw nutcrackers were introduced that applied more gradual pressure to the shell, some like a vise. The spring-jointed nutcracker was patented by Henry Quackenbush in 1913. A ratchet design, similar to a car jack, that gradually increases pressure on the shell to avoid damaging the kernel inside is used by the Crackerjack, patented in 1947 by Cuthbert Leslie Rimes of Morley, Leeds and exhibited at the Festival of Britain. Unshelled nuts are still popular in China, where a key device is inserted into the crack in walnuts, pecans, and macadamias and twisted to open the shell.
Screw nutcrackers are still commonly used to crack macadamia nuts, since their shell is too hard to be cracked with an ordinary nutcracker.
For crustaceans
A crab cracker (also known as a lobster cracker or crab claw cracker) is a specialized food utensil, similar in construction (and sometimes appearance) to certain types of nutcrackers, used to crack the hard shells of crabs and lobsters by pulling the two handles together to access the flesh inside, while preparing or eating them.
Decorative
Nutcrackers in the form of wood carvings of a soldier, knight, king, or other profession have existed since at least the 15th century. Figurative nutcrackers are a good luck symbol in Germany, and a folktale recounts that a puppet-maker won a nutcracking challenge by creating a doll with a mouth for a lever to crack the nuts. These nutcrackers portray a person with a large mouth which the operator opens by lifting a lever in the back of the figurine. Originally one could insert a nut in the big-toothed mouth, press down and thereby crack the nut. Modern nutcrackers in this style serve mostly for decoration, mainly at Christmas time, a season of which they have long been a traditional symbol. Pyotr Ilyich Tchaikovsky's ballet The Nutcracker, based on a story by E. T. A. Hoffmann, derives its name from this festive holiday decoration.
The carving of nutcrackers—as well as of religious figures and of cribs—developed as a cottage industry in forested rural areas of Germany. The most famous nutcracker carvings come from Sonneberg in Thuringia (also a center of dollmaking) and Seiffen, as part of the industry of wooden toymaking in the Ore Mountains. Wood-carving usually provided the only income for the people living there. Today the travel industry supplements their income by bringing visitors to the remote areas. Carvings by famous names like Junghanel, Klaus Mertens, Karl, Olaf Kolbe, Petersen, Christian Ulbricht and especially the Steinbach nutcrackers have become collectors' items.
Decorative nutcrackers became popular in the United States after the Second World War, following the first US production of The Nutcracker ballet in 1940 and the exposure of US soldiers to the dolls during the war. In the United States, few of the decorative nutcrackers are now functional, though expensive working designs are still available. Many of the woodworkers in Germany were in Erzgebirge, in the Soviet zone after the end of the war, and they mass-produced poorly-made designs for the US market. With the increase in pre-shelled nuts, the need for functionality was also lessened. After the 1980s, Chinese and Taiwanese imports that copied the traditional German designs took over. The recreated "Bavarian village" of Leavenworth, Washington, features a nutcracker museum. Many other materials also serve to make decorated nutcrackers, such as porcelain, silver, and brass; the museum displays samples. The United States Postal Service (USPS) issued four stamps in October 2008 with custom-made nutcrackers made by Richmond, Virginia artist Glenn Crider.
Other uses
Some artists, among them the multi-instrumentalist Mike Oldfield, have used the sound nutcrackers make in music.
An old belief among the Malay people in Southeast Asia states that an areca nutcracker (kacip pinang) can be placed under a baby's pillow to prevent any harm from paranormal creatures.
In animals
Many animals shell nuts to eat them, including using tools. The Capuchin monkey is a fine example. Parrots use their beaks as natural nutcrackers, in much the same way smaller birds crack seeds. In this case, the pivot point stands opposite the nut, at the jaw, or the beak.
References
External links
Black Walnut Crackers
Food preparation utensils
Eating utensils
Mechanical hand tools
Culture of the Ore Mountains | Nutcracker | Physics | 1,479 |
28,400,431 | https://en.wikipedia.org/wiki/V%20Cephei | V Cephei is a white main sequence star in the constellation Cepheus. It varies only slightly, by 0.03 of a magnitude. It was first suspected of being variable by American astronomer Seth Carlo Chandler, who noted in 1890 that it varied by 0.7 magnitude but that this needed confirmation. Subsequent observers were divided on whether they noted variability or not. A later study with photoelectric photometry showed no variability.
With a spectral class of A1V, V Cephei is a main sequence star with a surface temperature of . It has twice the mass of the Sun and, with nearly twice its radius, it shines at .
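The elided luminosity figure follows from the Stefan–Boltzmann law, $L = 4\pi R^2 \sigma T^4$. A minimal sketch with assumed round-number inputs for an A1V star (the temperature and radius below are illustrative assumptions, not the article's exact figures):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8          # solar radius, m
L_SUN = 3.828e26         # solar luminosity, W

T = 9300.0               # K, assumed effective temperature for type A1V
R = 1.9 * R_SUN          # "nearly twice" the Sun's radius

L = 4 * math.pi * R**2 * SIGMA * T**4
print(f"L ≈ {L / L_SUN:.0f} L_sun")  # about 24 L_sun with these inputs
```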
References
Cepheus (constellation)
A-type main-sequence stars
Cephei, V
Durchmusterung objects
224309
118027
9056 | V Cephei | Astronomy | 161 |
944,335 | https://en.wikipedia.org/wiki/Muscazone | Muscazone is a toxic chemical compound. It is an amino acid found in European fly agaric mushrooms.
Consumption causes visual damage, mental confusion, and memory loss.
See also
Ibotenic acid
Muscimol
References
Alpha-Amino acids
Mycotoxins
Oxazolones
Toxic amino acids | Muscazone | Chemistry | 63 |
34,436,247 | https://en.wikipedia.org/wiki/Dry%20gallon | The dry gallon, also known as the corn gallon or grain gallon, is a historic British dry measure of volume that was used to measure grain and other dry commodities and whose earliest recorded official definition, in 1303, was the volume of 8 pounds of wheat. It is not used in the US customary system – though it implicitly exists since the US dry measures of bushel, peck, quart, and pint are still used – and is not included in the National Institute of Standards and Technology handbook that many US states recognize as the authority on measurement law.
The US fluid gallon is about 14.1% smaller than the US dry gallon, while the Imperial fluid gallon is about 3.2% larger than the US dry gallon.
The dry gallon's implicit value in the US system was originally one eighth of the Winchester bushel, which was a cylindrical measure of 18.5 inches in diameter and 8 inches in depth, making it an irrational number of cubic inches; its value to seven significant digits was 2150.420, from an exact value of 684.5π cubic inches. Since the bushel was later redefined to be exactly 2150.42 cubic inches, 268.8025 cubic inches became the exact value for the dry gallon (268.8025 × 8 is 2150.42).
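The percentages quoted above can be checked directly; a minimal sketch (the imperial gallon is converted from its exact litre definition):

```python
import math

# Winchester bushel: a cylinder 18.5 in across and 8 in deep
bushel_exact = math.pi * (18.5 / 2) ** 2 * 8      # 684.5*pi ≈ 2150.4202 cu in
dry_gallon = 2150.42 / 8                          # 268.8025 cu in (redefined bushel)

us_fluid_gallon = 231.0                           # cubic inches, by definition
imperial_gallon = 4.54609e-3 / 16.387064e-6       # 4.54609 L in cubic inches ≈ 277.42

print(f"US fluid gallon: {1 - us_fluid_gallon / dry_gallon:.1%} smaller")  # ~14.1%
print(f"Imperial gallon: {imperial_gallon / dry_gallon - 1:.1%} larger")   # ~3.2%
```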
References
Units of volume | Dry gallon | Mathematics | 244 |
10,465,001 | https://en.wikipedia.org/wiki/Eigenvalue%20perturbation | In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system $Ax = \lambda x$ that is perturbed from one with known eigenvectors and eigenvalues $A_0 x_0 = \lambda_0 x_0$. This is useful for studying how sensitive the original system's eigenvectors and eigenvalues are to changes in the system.
This type of analysis was popularized by Lord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities.
The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis.
This article is focused on the case of the perturbation of a simple eigenvalue (see multiplicity of eigenvalues).
Why generalized eigenvalues?
In the entry applications of eigenvalues and eigenvectors we find numerous scientific fields in which eigenvalues are used to obtain solutions. Generalized eigenvalue problems are less widespread but are key in the study of vibrations.
They are useful when we use the Galerkin method or Rayleigh-Ritz method to find approximate
solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943)
is fundamental. The Finite element method is a widespread particular case.
In classical mechanics, generalized eigenvalues may crop up when we look for vibrations of multiple degrees of freedom systems close to equilibrium; the kinetic energy provides the mass matrix $M$, and the potential strain energy provides the rigidity matrix $K$.
For further details, see the first section of the article of Weinstein (1941, in French).
With both methods, we obtain a system of differential equations (a matrix differential equation)
$$M\ddot{x} + B\dot{x} + Kx = 0,$$
with the mass matrix $M$, the damping matrix $B$ and the rigidity matrix $K$. If we neglect the damping effect, we set $B = 0$ and can look for a solution of the form $x = e^{i\omega t} u$; we obtain that $u$ and $\omega^2$ are solutions of the generalized eigenvalue problem
$$K u = \omega^2 M u.$$
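As a concrete illustration, a two-mass spring chain yields a small generalized eigenvalue problem whose solutions are the natural frequencies and mode shapes; a minimal sketch with made-up masses and stiffnesses (`scipy` assumed available):

```python
import numpy as np
from scipy.linalg import eigh

# Two-mass spring chain: wall --k1-- m1 --k2-- m2 (illustrative values)
m1, m2 = 1.0, 2.0
k1, k2 = 50.0, 30.0
M = np.diag([m1, m2])                  # mass matrix
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])         # rigidity (stiffness) matrix

w2, U = eigh(K, M)                     # generalized problem K u = w^2 M u
print(np.sqrt(w2))                     # natural angular frequencies, rad/s
print(U)                               # mode shapes, M-orthonormal columns
```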
Setting of perturbation for a generalized eigenvalue problem
Suppose we have solutions to the generalized eigenvalue problem,
$$K_0\, x_{0i} = \lambda_{0i}\, M_0\, x_{0i}, \qquad (0)$$
where $K_0$ and $M_0$ are matrices. That is, we know the eigenvalues $\lambda_{0i}$ and eigenvectors $x_{0i}$ for $K_0, M_0$. It is also required that the eigenvalues are distinct.
Now suppose we want to change the matrices by a small amount. That is, we want to find the eigenvalues and eigenvectors of
$$K\, x_i = \lambda_i\, M\, x_i \qquad (1)$$
where
$$K = K_0 + \delta K, \qquad M = M_0 + \delta M,$$
with the perturbations $\delta K$ and $\delta M$ much smaller than $K_0$ and $M_0$ respectively. Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations:
$$\lambda_i = \lambda_{0i} + \delta\lambda_i, \qquad x_i = x_{0i} + \delta x_i.$$
Steps
We assume that the matrices are symmetric and positive definite, and assume we have scaled the eigenvectors such that
$$x_{0j}^\top M_0\, x_{0i} = \delta_{ij}, \qquad (2)$$
where $\delta_{ij}$ is the Kronecker delta. Now we want to solve the equation
$$K\, x_i = \lambda_i\, M\, x_i.$$
In this article we restrict the study to first order perturbation.
First order expansion of the equation
Substituting the expansions in (1), we get
$$(K_0 + \delta K)(x_{0i} + \delta x_i) = (\lambda_{0i} + \delta\lambda_i)(M_0 + \delta M)(x_{0i} + \delta x_i),$$
which expands to
$$K_0 x_{0i} + \delta K\, x_{0i} + K_0\, \delta x_i = \lambda_{0i} M_0 x_{0i} + \lambda_{0i}\, \delta M\, x_{0i} + \lambda_{0i} M_0\, \delta x_i + \delta\lambda_i\, M_0 x_{0i} + \text{higher-order terms}.$$
Canceling $K_0 x_{0i} = \lambda_{0i} M_0 x_{0i}$ from (0) and removing the higher-order terms, this simplifies to
$$K_0\, \delta x_i + \delta K\, x_{0i} = \lambda_{0i} M_0\, \delta x_i + \lambda_{0i}\, \delta M\, x_{0i} + \delta\lambda_i\, M_0 x_{0i}. \qquad (3)$$
In other words, $\delta\lambda_i$ no longer denotes the exact variation of the eigenvalue but its first order approximation.
As the matrix $M_0$ is symmetric, the unperturbed eigenvectors are $M_0$-orthogonal and so we use them as a basis for the perturbed eigenvectors. That is, we want to construct
$$\delta x_i = \sum_{j=1}^N \varepsilon_{ij}\, x_{0j} \qquad (4)$$
with $|\varepsilon_{ij}| \ll 1$, where the $\varepsilon_{ij}$ are small constants that are to be determined. In the same way, substituting the perturbed quantities in (2), and removing higher order terms, we get
$$\delta x_j^\top M_0\, x_{0i} + x_{0j}^\top \delta M\, x_{0i} + x_{0j}^\top M_0\, \delta x_i = 0. \qquad (5)$$
The derivation can go on with two forks.
First fork: get first eigenvalue perturbation
Eigenvalue perturbation
We start with (3),
$$K_0\, \delta x_i + \delta K\, x_{0i} = \lambda_{0i} M_0\, \delta x_i + \lambda_{0i}\, \delta M\, x_{0i} + \delta\lambda_i\, M_0 x_{0i};$$
we left-multiply with $x_{0i}^\top$ and use (2) as well as its first order variation (5); we get
$$x_{0i}^\top \delta K\, x_{0i} = \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i} + \delta\lambda_i$$
or
$$\delta\lambda_i = x_{0i}^\top \delta K\, x_{0i} - \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i}.$$
We notice that it is the first order perturbation of the generalized Rayleigh quotient with fixed $x_{0i}$:
$$R(K, M; x_{0i}) = \frac{x_{0i}^\top K\, x_{0i}}{x_{0i}^\top M\, x_{0i}}.$$
Moreover, for $M = I$, the formula should be compared with the Bauer–Fike theorem, which provides a bound for eigenvalue perturbation.
Eigenvector perturbation
We left-multiply (3) with $x_{0j}^\top$ for $j \neq i$ and get
$$x_{0j}^\top K_0\, \delta x_i + x_{0j}^\top \delta K\, x_{0i} = \lambda_{0i}\, x_{0j}^\top M_0\, \delta x_i + \lambda_{0i}\, x_{0j}^\top \delta M\, x_{0i}.$$
We use $x_{0j}^\top K_0 = \lambda_{0j}\, x_{0j}^\top M_0$ and $x_{0j}^\top M_0\, \delta x_i = \varepsilon_{ij}$ for $j \neq i$,
or
$$(\lambda_{0j} - \lambda_{0i})\, \varepsilon_{ij} = \lambda_{0i}\, x_{0j}^\top \delta M\, x_{0i} - x_{0j}^\top \delta K\, x_{0i}.$$
As the eigenvalues are assumed to be simple, for $j \neq i$
$$\varepsilon_{ij} = \frac{x_{0j}^\top \delta K\, x_{0i} - \lambda_{0i}\, x_{0j}^\top \delta M\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}.$$
Moreover (5) (the first order variation of (2)) yields
$$\varepsilon_{ii} = -\tfrac{1}{2}\, x_{0i}^\top \delta M\, x_{0i}.$$
We have obtained all the components of $\delta x_i$.
Second fork: Straightforward manipulations
Substituting (4) into (3), using $K_0 x_{0j} = \lambda_{0j} M_0 x_{0j}$, and rearranging gives
$$\sum_{j=1}^N \varepsilon_{ij}\, \lambda_{0j}\, M_0 x_{0j} + \delta K\, x_{0i} = \lambda_{0i} \sum_{j=1}^N \varepsilon_{ij}\, M_0 x_{0j} + \lambda_{0i}\, \delta M\, x_{0i} + \delta\lambda_i\, M_0 x_{0i}. \qquad (6)$$
Because the eigenvectors are $M_0$-orthogonal when $M_0$ is positive definite, we can remove the summations by left-multiplying by $x_{0i}^\top$:
$$\varepsilon_{ii}\, \lambda_{0i} + x_{0i}^\top \delta K\, x_{0i} = \lambda_{0i}\, \varepsilon_{ii} + \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i} + \delta\lambda_i\, x_{0i}^\top M_0\, x_{0i}.$$
The two terms containing $\varepsilon_{ii}$ are equal because left-multiplying (0) by $x_{0i}^\top$ gives
$$x_{0i}^\top K_0\, x_{0i} = \lambda_{0i}\, x_{0i}^\top M_0\, x_{0i}.$$
Canceling those terms in (6) leaves
$$x_{0i}^\top \delta K\, x_{0i} = \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i} + \delta\lambda_i\, x_{0i}^\top M_0\, x_{0i}.$$
Rearranging gives
$$\delta\lambda_i = \frac{x_{0i}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}}{x_{0i}^\top M_0\, x_{0i}}.$$
But by (2), this denominator is equal to 1. Thus
$$\delta\lambda_i = x_{0i}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}.$$
Then, as $\lambda_{0i} \neq \lambda_{0j}$ for $i \neq j$ (assumption: simple eigenvalues), left-multiplying (6) by $x_{0j}^\top$ for $j \neq i$ gives
$$\varepsilon_{ij} = \frac{x_{0j}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}, \qquad i \neq j.$$
To find $\varepsilon_{ii}$, use the fact that
$$x_i^\top M\, x_i = 1$$
implies
$$\varepsilon_{ii} = -\tfrac{1}{2}\, x_{0i}^\top \delta M\, x_{0i}.$$
Summary of the first order perturbation result
In the case where all the matrices are Hermitian positive definite and all the eigenvalues are distinct,
$$\lambda_i \approx \lambda_{0i} + x_{0i}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i},$$
$$x_i \approx x_{0i}\left(1 - \tfrac{1}{2}\, x_{0i}^\top \delta M\, x_{0i}\right) + \sum_{j=1,\, j \neq i}^N \frac{x_{0j}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}\, x_{0j},$$
for infinitesimal $\delta K$ and $\delta M$ (the higher-order terms in (3) being neglected).
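A minimal numerical check of these first order formulas (illustrative; random symmetric positive definite matrices, `scipy` assumed available):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5

def spd():
    """A random symmetric positive definite matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

K0, M0 = spd(), spd()
dK, dM = 1e-6 * spd(), 1e-6 * spd()       # small symmetric perturbations

lam0, X0 = eigh(K0, M0)                   # X0 is M0-orthonormal: X0.T @ M0 @ X0 = I
lam, _ = eigh(K0 + dK, M0 + dM)           # exact perturbed eigenvalues

# First order prediction: dlam_i = x0i^T (dK - lam0i * dM) x0i
dlam = np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i] for i in range(n)])
print(np.max(np.abs(lam - (lam0 + dlam))))  # tiny: the residual error is second order
```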
So far, we have not proved that these higher order terms may be neglected. This point may be derived using the implicit function theorem; in next section, we summarize the use of this theorem in order to obtain a first order expansion.
Theoretical derivation
Perturbation of an implicit function.
In the next paragraph, we shall use the Implicit function theorem (statement of the theorem); we notice that for a continuously differentiable function, with an invertible Jacobian matrix, from a point solution we get solutions of nearby problems in the form of a continuously differentiable function of the perturbation; moreover, the Jacobian matrix of this function is provided by a linear system.
.
As soon as the hypothesis of the theorem is satisfied, the Jacobian matrix of the solution may be computed with a first order expansion; this is equivalent to the equation obtained above.
Eigenvalue perturbation: a theoretical basis.
We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introduce a function whose zeros are the eigenpairs. In order to use the Implicit function theorem, we study the invertibility of its Jacobian. Indeed, the solution of the corresponding linear system may be derived with computations similar to the derivation of the expansion above.
When the eigenvalue is simple, as the eigenvectors form an orthonormal basis, we have obtained one solution for any right-hand side; therefore, the Jacobian is invertible.
The implicit function theorem provides a continuously differentiable function $(\delta K, \delta M) \mapsto (\lambda_i, x_i)$, hence the expansion with little-o notation:
$$\lambda_i = \lambda_{0i} + \delta\lambda_i + o(\|\delta K\| + \|\delta M\|), \qquad x_i = x_{0i} + \delta x_i + o(\|\delta K\| + \|\delta M\|).$$
This is the first order expansion of the perturbed eigenvalues and eigenvectors, which proves the result.
Results of sensitivity analysis with respect to the entries of the matrices
The results
The eigenvalue sensitivities with respect to the entries of the matrices are
$$\frac{\partial \lambda_i}{\partial K_{k\ell}} = x_{0i(k)}\, x_{0i(\ell)}\, (2 - \delta_{k\ell}), \qquad \frac{\partial \lambda_i}{\partial M_{k\ell}} = -\lambda_i\, x_{0i(k)}\, x_{0i(\ell)}\, (2 - \delta_{k\ell}).$$
This means it is possible to efficiently do a sensitivity analysis on $\lambda_i$ as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric and so changing $K_{k\ell}$ will also change $K_{\ell k}$, hence the $(2 - \delta_{k\ell})$ factor.)
Similarly, the eigenvector sensitivities follow from the expression for $\varepsilon_{ij}$ above:
$$\frac{\partial x_i}{\partial K_{k\ell}} = \sum_{j=1,\, j \neq i}^N \frac{x_{0j(k)}\, x_{0i(\ell)} + (1 - \delta_{k\ell})\, x_{0j(\ell)}\, x_{0i(k)}}{\lambda_{0i} - \lambda_{0j}}\, x_{0j}.$$
Eigenvalue sensitivity, a small example
A simple case is $K = \begin{bmatrix} 2 & b \\ b & 0 \end{bmatrix}$; however, you can compute eigenvalues and eigenvectors with the help of online tools such as WIMS (see the introduction in Wikipedia) or using Sage (SageMath). The smallest eigenvalue is $\lambda = 1 - \sqrt{1 + b^2}$, and an explicit computation gives $\frac{\partial \lambda}{\partial b} = \frac{-b}{\sqrt{1 + b^2}}$; moreover, an associated eigenvector is $\tilde{x} = (-b,\; 1 + \sqrt{1 + b^2})^\top$; it is not a unitary vector, so we normalize it, $x = \tilde{x} / \|\tilde{x}\|$, with $\|\tilde{x}\|^2 = 2\sqrt{1+b^2}\,(1 + \sqrt{1+b^2})$; we get $x_{(1)}\, x_{(2)} = -b / (2\sqrt{1+b^2})$; hence $\frac{\partial \lambda}{\partial K_{12}} = 2\, x_{(1)}\, x_{(2)} = \frac{-b}{\sqrt{1+b^2}}$; for this example, we have checked that $\frac{\partial \lambda}{\partial b} = \frac{\partial \lambda}{\partial K_{12}}$.
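The same check can be run numerically; a minimal sketch comparing the sensitivity formula $2 x_{(1)} x_{(2)}$ with a finite difference:

```python
import numpy as np

b = 0.5
K = np.array([[2.0, b], [b, 0.0]])
lam = np.linalg.eigvalsh(K)[0]            # smallest eigenvalue, 1 - sqrt(1+b^2)

x = np.linalg.eigh(K)[1][:, 0]            # normalized eigenvector for lam
formula = 2 * x[0] * x[1]                 # d(lambda)/db from the sensitivity result

h = 1e-6                                  # finite-difference step
Kh = np.array([[2.0, b + h], [b + h, 0.0]])
fd = (np.linalg.eigvalsh(Kh)[0] - lam) / h

print(formula, fd)                        # both ≈ -b/sqrt(1+b^2) ≈ -0.4472
```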
Existence of eigenvectors
Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have linearly independent eigenvectors, though a sufficient condition is that and be simultaneously diagonalizable.
The case of repeated eigenvalues
A technical report of Rellich for perturbation of eigenvalue problems provides several examples. The elementary examples are in chapter 2. The report may be downloaded from
archive.org. We draw an example in which the eigenvectors have a nasty behavior.
Example 1
Consider the following matrix:
$$B(t) = \begin{cases} e^{-1/t^2} \begin{bmatrix} \cos(2/t) & \sin(2/t) \\ \sin(2/t) & -\cos(2/t) \end{bmatrix}, & t \neq 0, \\ 0, & t = 0. \end{cases}$$
For $t \neq 0$, the matrix has eigenvectors
$$u(t) = \begin{pmatrix} \cos(1/t) \\ \sin(1/t) \end{pmatrix}, \qquad v(t) = \begin{pmatrix} -\sin(1/t) \\ \cos(1/t) \end{pmatrix}$$
belonging to eigenvalues $e^{-1/t^2}$ and $-e^{-1/t^2}$. For $t \neq 0$, if $x(t), y(t)$ are any normalized eigenvectors belonging to $e^{-1/t^2}$ and $-e^{-1/t^2}$ respectively, then $x(t) = \alpha(t)\, u(t)$ and $y(t) = \beta(t)\, v(t)$, where $\alpha(t)$ and $\beta(t)$ are real with $|\alpha(t)| = |\beta(t)| = 1$ for $t \neq 0$.
It is obviously impossible to define $x(t)$, say, in such a way that $x(t)$ tends to a limit as $t \to 0$, because $\cos(1/t)$ has no limit as $t \to 0$.
Note in this example that $B(t)$
is not only continuous but also has continuous derivatives of all orders.
Rellich draws the following important consequence.
<< Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue. >>
Example 2
This example is less nasty than the previous one. Suppose $A(0)$ is the $2 \times 2$ identity matrix; any vector is an eigenvector, so $u = \tfrac{1}{\sqrt 2}(1, 1)^\top$ is one possible eigenvector. But if one makes a small perturbation, such as
$$A(\epsilon) = I + \epsilon \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},$$
then the eigenvectors are $v_1 = (1, 0)^\top$ and $v_2 = (0, 1)^\top$; they are constant with respect to $\epsilon$, so that $\|u - v_1\|$ is constant and does not go to zero as $\epsilon \to 0$.
See also
Perturbation theory (quantum mechanics)
Bauer–Fike theorem
References
Further reading
Books
Bhatia, R. (1987). Perturbation bounds for matrix eigenvalues. SIAM.
Report
Journal papers
Simon, B. (1982). Large orders and summability of eigenvalue perturbation theory: a mathematical overview. International Journal of Quantum Chemistry, 21(1), 3-25.
Crandall, M. G., & Rabinowitz, P. H. (1973). Bifurcation, perturbation of simple eigenvalues, and linearized stability. Archive for Rational Mechanics and Analysis, 52(2), 161-180.
Stewart, G. W. (1973). Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM review, 15(4), 727-764.
Löwdin, P. O. (1962). Studies in perturbation theory. IV. Solution of eigenvalue problem by projection operator formalism. Journal of Mathematical Physics, 3(5), 969-982.
Perturbation theory
Differential calculus
Multivariable calculus
Linear algebra
Numerical linear algebra | Eigenvalue perturbation | Physics,Mathematics | 2,239 |
49,296,992 | https://en.wikipedia.org/wiki/Medicines%20reconciliation | Medicines reconciliation or medication reconciliation is the process of ensuring that a hospital patient's medication list is as up-to-date as possible. It is usually undertaken by a pharmacist and may include consulting several sources such as the patient, their relatives or caregivers, or their primary care physician.
In the United Kingdom, guidelines on medicines reconciliation are provided by the National Institute for Health and Care Excellence (NICE) in collaboration with the National Patient Safety Agency. In accordance with these, it should be carried out within 24 hours of admission to hospital. From April 2020 it is to be an essential service in the community pharmacy contract in England.
In the United States, the Joint Commission prioritizes medication reconciliation at hospital admission and during ambulatory care as one of the National Patient Safety Goals.
Importance
Research has shown that, on average, there is around a 20% discrepancy between medications prescribed on admission to hospital and the true medication list for a given patient. Chronic medications are stopped in about 11% of patients after elective surgeries and 33% of patients after admission to an intensive care unit. The most common omissions are inhalers and analgesia. There are also a small minority of errors in prescribing drugs such as insulin or warfarin, which could have catastrophic consequences, including death of the patient. Pharmacist involvement helps ensure that reasons for drug discontinuation are documented and that adverse drug reactions are reconciled in the prescription charts. The value of medicines reconciliation is in noticing and correcting these errors before they have a chance to adversely affect the patient concerned. Research shows that the main activity of medicines reconciliation by pharmacists is to identify or assess drug-related problems and discuss them with other professionals. However, the process and the tools used in medicines reconciliation vary greatly, with wide variation in how it is conducted and which methods are used in different countries and hospitals.
References
Pharmacy
Pharmaceuticals policy | Medicines reconciliation | Chemistry | 398 |
92,377 | https://en.wikipedia.org/wiki/Electromagnet | An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. Electromagnets usually consist of wire wound into a coil. A current through the wire creates a magnetic field which is concentrated along the center of the coil. The magnetic field disappears when the current is turned off. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet.
The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet, which needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field.
Electromagnets are widely used as components of other electrical devices, such as motors, generators, electromechanical solenoids, relays, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel.
History
Danish scientist Hans Christian Ørsted discovered in 1820 that electric currents create magnetic fields. In the same year, the French scientist André-Marie Ampère showed that iron can be magnetized by inserting it into an electrically fed solenoid.
British scientist William Sturgeon invented the electromagnet in 1824.
His first electromagnet was a horseshoe-shaped piece of iron that was wrapped with about 18 turns of bare copper wire. (Insulated wire did not then exist.) The iron was varnished to insulate it from the windings. When a current was passed through the coil, the iron became magnetized and attracted other pieces of iron; when the current was stopped, it lost magnetization. Sturgeon displayed its power by showing that although it only weighed seven ounces (roughly 200 grams), it could lift nine pounds (roughly 4 kilos) when the current of a single-cell power supply was applied. However, Sturgeon's magnets were weak because the uninsulated wire he used could only be wrapped in a single spaced-out layer around the core, limiting the number of turns.
Beginning in 1830, US scientist Joseph Henry systematically improved and popularised the electromagnet. By using wire insulated by silk thread, and inspired by Schweigger's use of multiple turns of wire to make a galvanometer, he was able to wind multiple layers of wire onto cores, creating powerful magnets with thousands of turns of wire, including one that could support 2,063 pounds (936 kg). The first major use for electromagnets was in telegraph sounders.
The magnetic domain theory of how ferromagnetic cores work was first proposed in 1906 by French physicist Pierre-Ernest Weiss, and the detailed modern quantum mechanical theory of ferromagnetism was worked out in the 1920s by Werner Heisenberg, Lev Landau, Felix Bloch, and others.
Applications of electromagnets
A portative electromagnet is one designed to just hold material in place; an example is a lifting magnet. A tractive electromagnet applies a force and moves something.
Electromagnets are very widely used in electric and electromechanical devices, including:
Motors and generators
Transformers
Relays
Electric bells and buzzers
Loudspeakers and headphones
Actuators such as valves
Magnetic recording and data storage equipment: tape recorders, VCRs, hard disks
MRI machines
Scientific equipment such as mass spectrometers
Particle accelerators
Magnetic locks
Magnetic separation equipment used for separating magnetic from nonmagnetic material; for example, separating ferrous metal in scrap
Industrial lifting magnets
Magnetic levitation, used in maglev trains
Induction heating for cooking, manufacturing, and hyperthermia therapy
Simple solenoid
A common tractive electromagnet is a uniformly wound solenoid and plunger. The solenoid is a coil of wire, and the plunger is made of a material such as soft iron. Applying a current to the solenoid applies a force to the plunger and may make it move. The plunger stops moving when the forces upon it are balanced. For example, the forces are balanced when the plunger is centered in the solenoid.
The maximum uniform pull happens when one end of the plunger is at the middle of the solenoid. An approximation for the force $F$ is
$$F = \frac{C\, A\, n\, I}{l},$$
where $C$ is a proportionality constant, $A$ is the cross-sectional area of the plunger, $n$ is the number of turns in the solenoid, $I$ is the current through the solenoid wire, and $l$ is the length of the solenoid. For long, slender solenoids (in units using inches, pounds force, and amperes), the value of $C$ is around 0.009 to 0.010 psi (maximum pull pounds per square inch of plunger cross-sectional area). For example, a 12-inch-long coil ($l = 12$ in) with a long plunger with a cross section of one inch square ($A = 1$ in²) and 11,200 ampere-turns ($nI = 11{,}200$) had a maximum pull of 8.75 pounds (corresponding to $C \approx 0.0094$).
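A quick check of the quoted example; a minimal sketch, with the constant $C$ taken as the value implied by the example:

```python
def solenoid_pull(C, A, nI, l):
    """Approximate maximum pull (lbf) of a simple solenoid, F = C*A*n*I/l.

    C  -- empirical constant, ~0.009-0.010 for long slender coils (in/lbf/A units)
    A  -- plunger cross-sectional area, square inches
    nI -- ampere-turns
    l  -- solenoid length, inches
    """
    return C * A * nI / l

print(solenoid_pull(C=0.009375, A=1.0, nI=11200, l=12))  # 8.75 lbf, as quoted
```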
The maximum pull is increased when a magnetic stop is inserted into the solenoid. The stop becomes a magnet that will attract the plunger; it adds little to the solenoid pull when the plunger is far away but dramatically increases the pull when the plunger is close. An approximation for the pull is
Here $l_a$ is the distance between the end of the stop and the end of the plunger. The additional constant $C_1$ for units of inches, pounds, and amperes with slender solenoids is about 2660. The first term inside the bracket represents the attraction between the stop and the plunger; the second term represents the same force as the solenoid without a stop (the expression above).
Some improvements can be made on this basic design. The ends of the stop and plunger are often conical. For example, the plunger may have a pointed end that fits into a matching recess in the stop. The shape makes the solenoid's pull more uniform as a function of separation. Another improvement is to add a magnetic return path around the outside of the solenoid (an "iron-clad solenoid"). The magnetic return path, just as the stop, has little impact until the air gap is small.
Physics
An electric current flowing in a wire creates a magnetic field around the wire, due to Ampere's law (see drawing of wire with magnetic field). To concentrate the magnetic field in an electromagnet, the wire is wound into a coil with many turns of wire lying side-by-side. The magnetic field of all the turns of wire passes through the center of the coil, creating a strong magnetic field there. A coil forming the shape of a straight tube (a helix) is called a solenoid.
The direction of the magnetic field through a coil of wire can be determined by the right-hand rule. If the fingers of the right hand are curled around the coil in the direction of current flow (conventional current, flow of positive charge) through the windings, the thumb points in the direction of the field inside the coil. The side of the magnet that the field lines emerge from is defined to be the north pole.
Magnetic core
For definitions of the variables below, see box at end of article.
Much stronger magnetic fields can be produced if a magnetic core, made of a soft ferromagnetic (or ferrimagnetic) material such as iron, is placed inside the coil. A core can increase the magnetic field to thousands of times the strength of the field of the coil alone, due to the high magnetic permeability of the material. Not all electromagnets use cores, so this is called a ferromagnetic-core or iron-core electromagnet.
This phenomenon occurs because the magnetic core's material (often iron or steel) is composed of small regions called magnetic domains that act like tiny magnets (see ferromagnetism). Before the current in the electromagnet is turned on, these domains point in random directions, so their tiny magnetic fields cancel each other out, and the core has no large-scale magnetic field. When a current passes through the wire wrapped around the core, its magnetic field penetrates the core and turns the domains to align in parallel with the field. As they align, all their tiny magnetic fields add to the wire's field, which creates a large magnetic field that extends into the space around the magnet. The core concentrates the field, and the magnetic field passes through the core with lower reluctance than it would when passing through air.
The larger the current passed through the wire coil, the more the domains align, and the stronger the magnetic field is. Once all the domains are aligned, any additional current only causes a slight increase in the strength of the magnetic field. Eventually, the field strength levels off and becomes nearly constant, regardless of how much current is sent through the windings. This phenomenon is called saturation, and is the main nonlinear feature of ferromagnetic materials. For most high-permeability core steels, the maximum possible strength of the magnetic field is around 1.6 to 2 teslas (T). This is why the very strongest electromagnets, such as superconducting and very high current electromagnets, cannot use cores.
When the current in the coil is turned off, most of the domains in the core material lose alignment and return to a random state, and the electromagnetic field disappears. However, some of the alignment persists because the domains resist turning their direction of magnetization, which leaves the core magnetized as a weak permanent magnet. This phenomenon is called hysteresis and the remaining magnetic field is called remanent magnetism. The residual magnetization of the core can be removed by degaussing. In alternating current electromagnets, such as those used in motors, the core's magnetization is constantly reversed, and the remanence contributes to the motor's losses.
Ampere's law
The magnetic field of electromagnets in the general case is given by Ampère's law:
$$\oint \mathbf{H} \cdot d\boldsymbol{\ell} = I_{\text{enc}},$$
which says that the integral of the magnetizing field $\mathbf{H}$ around any closed loop is equal to the sum of the current flowing through the loop. A related equation is the Biot–Savart law, which gives the magnetic field due to each small segment of current.
Force exerted by magnetic field
Likewise, the force exerted by the magnetic field on a section of core material of cross-sectional area $A$ is
$$F = \frac{B^2 A}{2 \mu_0}.$$
This equation can be derived from the energy stored in a magnetic field: energy is force times distance, and rearranging terms yields the equation above.
The 1.6 T limit on the field previously mentioned sets a limit on the maximum force per unit core area, or magnetic pressure, an iron-core electromagnet can exert; roughly
$$\frac{F}{A} = \frac{B_{\text{sat}}^2}{2 \mu_0} \approx 1.0 \times 10^6\ \text{N/m}^2$$
for the core's saturation limit $B_{\text{sat}} \approx 1.6$ T. In more intuitive units, it is useful to remember that at 1 T the magnetic pressure is approximately 4 atmospheres, or about 4 kg/cm².
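A minimal sketch of the magnetic pressure calculation:

```python
import math

MU0 = 4 * math.pi * 1e-7          # permeability of free space, T*m/A

def magnetic_pressure(B):
    """Magnetic pressure B^2 / (2*mu0), in pascals."""
    return B ** 2 / (2 * MU0)

for B in (1.0, 1.6):
    P = magnetic_pressure(B)
    print(f"B = {B} T -> {P:.3g} Pa ({P / 101325:.1f} atm)")
# 1 T gives ~3.9 atm; the 1.6 T saturation limit gives ~10 atm (~150 psi)
```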
Given a core geometry, the magnetic field needed for a given force can be calculated from the force equation above; if the result is much more than 1.6 T, a larger core must be used.
However, computing the magnetic field and force exerted by ferromagnetic materials in general is difficult for two reasons. First, the strength of the field varies from point to point in a complicated way, particularly outside the core and in air gaps, where fringing fields and leakage flux must be considered. Second, the magnetic field and force are nonlinear functions of the current, depending on the nonlinear relation between and for the particular core material used. For precise calculations, computer programs that can produce a model of the magnetic field using the finite element method are employed.
Magnetic circuit
In many practical applications of electromagnets, such as motors, generators, transformers, lifting magnets, and loudspeakers, the iron core is in the form of a loop or magnetic circuit, possibly broken by a few narrow air gaps. Iron presents much less "resistance" (reluctance) to the magnetic field than air, so a stronger field can be obtained if most of the magnetic field's path is within the core. Since the magnetic field lines are closed loops, the core is usually made in the form of a loop.
Since most of the magnetic field is confined within the outlines of the core loop, this allows a simplification of the mathematical analysis. A common simplifying assumption satisfied by many electromagnets, which will be used in this section, is that the magnetic field strength is constant around the magnetic circuit (within the core and air gaps) and zero outside it. Most of the magnetic field will be concentrated in the core material (C) (see Fig. 1). Within the core, the magnetic field (B) will be approximately uniform across any cross-section; if the core also has roughly constant area throughout its length, the field in the core will be constant.
At any air gaps (G) between core sections, the magnetic field lines are no longer confined by the core. Here, they bulge out beyond the core geometry over the length of the gap, reducing the field strength in the gap. The "bulges" (BF) are called fringing fields. However, as long as the length of the gap is smaller than the cross-section dimensions of the core, the field in the gap will be approximately the same as in the core.
In addition, some of the magnetic field lines (BL) will take "short cuts" and not pass through the entire core circuit, and thus will not contribute to the force exerted by the magnet. This also includes field lines that encircle the wire windings but do not enter the core. This is called leakage flux.
The equations in this section are valid for electromagnets for which:
the magnetic circuit is a single loop of core material, possibly broken by a few air gaps;
the core has roughly the same cross-sectional area throughout its length;
any air gaps between sections of core material are not large compared with the cross-sectional dimensions of the core;
there is negligible leakage flux.
Magnetic field in magnetic circuit
The magnetic field created by an electromagnet is proportional to both the number of turns in the winding, $N$, and the current in the wire, $I$; their product, $NI$, is the magnetomotive force. For an electromagnet with a single magnetic circuit, Ampère's Law reduces to
$$NI = H_{\text{core}} L_{\text{core}} + H_{\text{gap}} L_{\text{gap}} = B \left( \frac{L_{\text{core}}}{\mu} + \frac{L_{\text{gap}}}{\mu_0} \right),$$
where $L_{\text{core}}$ is the length of the flux path within the core material, $L_{\text{gap}}$ the total length of any air gaps, $\mu$ the permeability of the core, and $\mu_0$ the permeability of free space. This is a nonlinear equation, because the permeability of the core, $\mu$, varies with $B$. For an exact solution, the value of $\mu$ at the $B$ value used must be obtained from the core material hysteresis curve. If $B$ is unknown, the equation must be solved by numerical methods.
However, if the magnetomotive force is well above saturation (so the core material is in saturation), the magnetic field will be approximately the material's saturation value , and will not vary much with changes in . For a closed magnetic circuit (no air gap), most core materials saturate at a magnetomotive force of roughly 800 ampere-turns per meter of flux path.
For most core materials, the relative permeability $\mu_r = \mu / \mu_0$ is of the order of thousands. So in the equation above, the second (air gap) term dominates. Therefore, in magnetic circuits with an air gap, the field strength $B$ depends strongly on the length of the air gap, and the length of the flux path in the core does not matter much. Given an air gap of 1 mm, a magnetomotive force of about 796 ampere-turns is required to produce a magnetic field of 1 T.
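A minimal sketch of this magnetomotive-force calculation (the core path length and relative permeability are illustrative assumptions):

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A

def ampere_turns(B, l_gap, l_core=0.3, mu_r=4000):
    """Magnetomotive force N*I (ampere-turns) for flux density B (teslas)
    in a circuit with air gap l_gap and core path l_core (both in metres)."""
    return B * (l_core / (mu_r * MU0) + l_gap / MU0)

print(ampere_turns(1.0, 1e-3))   # ~856 A-turns in total for this geometry
print(1.0 * 1e-3 / MU0)          # ~795.8 A-turns for the 1 mm gap alone
```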
Closed magnetic circuit
For a closed magnetic circuit (no air gap), such as would be found in an electromagnet lifting a piece of iron bridged across its poles, the equation above becomes
$$B = \frac{N I \mu}{L}.$$
Substituting into the force equation, the force is
$$F = \frac{\mu^2 N^2 I^2 A}{2 \mu_0 L^2}.$$
To maximize the force, a core with a short flux path $L$ and a wide cross-sectional area $A$ is preferred (this also applies to magnets with an air gap). To achieve this, in applications like lifting magnets and loudspeakers, a flat cylindrical design is often used. The winding is wrapped around a short wide cylindrical core that forms one pole, and a thick metal housing that wraps around the outside of the windings forms the other part of the magnetic circuit, bringing the magnetic field to the front to form the other pole.
To maximize the force, a core with a short flux path and a wide cross-sectional area is preferred (this also applies to magnets with an air gap). To achieve this, in applications like lifting magnets and loudspeakers, a flat cylindrical design is often used. The winding is wrapped around a short wide cylindrical core that forms one pole, and a thick metal housing that wraps around the outside of the windings forms the other part of the magnetic circuit, bringing the magnetic field to the front to form the other pole.
Force between electromagnets
The previous methods are applicable to electromagnets with a magnetic circuit; however, they do not apply when a large part of the magnetic field path is outside the core. (A non-circuit example would be a magnet with a straight cylindrical core.) To determine the force between two electromagnets (or permanent magnets) in these cases, a special analogy called a magnetic-charge model can be used. In this model, it is assumed that the magnets have well-defined "poles" where the field lines emerge from the core, and that the magnetic field is produced by fictitious "magnetic charges" on the surface of the poles. This model assumes point-like poles (instead of surfaces), and thus it only yields a good approximation when the distance between the magnets is much larger than their diameter; thus, it is useful just for determining a force between them.
The magnetic pole strength of an electromagnet is given by
$$m = \frac{N I A}{L},$$
and thus the force between two poles is
$$F = \frac{\mu_0\, m_1 m_2}{4 \pi\, r^2},$$
where $r$ is the distance between the poles.
Each electromagnet has two poles, so the total force on magnet 1 from magnet 2 is equal to the vector sum of the forces of magnet 2's poles acting on each pole of magnet 1.
Side effects
There are several side effects which occur in electromagnets, which must be considered in their design. These effects generally become more significant in larger electromagnets.
Ohmic heating
The only power consumed in a direct current (DC) electromagnet under steady-state conditions is due to the resistance of the windings, and is dissipated as heat. Some large electromagnets require water cooling systems in the windings to carry off the waste heat.
Since the magnetic field is proportional to the product $NI$, the number of turns in the windings $N$ and the current $I$ can be chosen to minimize heat losses, as long as their product is constant. Since the power dissipation, $P = I^2 R$, increases with the square of the current but only increases approximately linearly with the number of windings, the power lost in the windings can be minimized by reducing $I$ and proportionally increasing the number of turns $N$, or using thicker wire to reduce the resistance. For example, halving $I$ and doubling $N$ halves the power loss, as does doubling the area of the wire. In either case, increasing the amount of wire reduces the ohmic losses.
However, the limit to increasing $N$ or lowering the resistance is that the windings take up more space between the magnet's core pieces. If the area available for windings is filled up, adding more turns requires a smaller diameter of wire, which has higher resistance, and thus cancels the advantage of using more turns. So, in large magnets there is a minimum amount of heat loss that cannot be reduced. This increases with the square of the magnetic flux, $B^2$.
Inductive voltage spikes
An electromagnet has significant inductance, and resists changes in the current through its windings. Any sudden changes in the winding current cause large voltage spikes across the windings. This is because when the current through the magnet is increased, such as when it is turned on, energy from the circuit must be stored in the magnetic field. When it is turned off, the energy in the field is returned to the circuit.
If an ordinary switch is used to control the winding current, this can cause sparks at the terminals of the switch. This does not occur when the magnet is switched on, because the limited supply voltage causes the current through the magnet and the field energy to increase slowly. But when it is switched off, the energy in the magnetic field is suddenly returned to the circuit, causing a large voltage spike and an arc across the switch contacts, which can damage them. With small electromagnets, a capacitor is sometimes used across the contacts, which reduces arcing by temporarily storing the current. More often, a diode is used to prevent voltage spikes by providing a path for the current to recirculate through the winding until the energy is dissipated as heat. The diode is connected across the winding, oriented so it is reverse-biased during steady state operation and does not conduct. When the supply voltage is removed, the voltage spike forward-biases the diode and the reactive current continues to flow through the winding, through the diode, and back into the winding. A diode used in this way is called a freewheeling diode or flyback diode.
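The size of the spike follows from $V = L\, dI/dt$; a minimal sketch with made-up values:

```python
def interruption_spike(L, I0, dt):
    """Rough magnitude of the inductive spike V = L * dI/dt when a winding
    current I0 (amperes) through inductance L (henries) collapses in dt seconds."""
    return L * I0 / dt

# A 1 H winding carrying 1 A, interrupted in 1 ms (all values assumed):
print(interruption_spike(L=1.0, I0=1.0, dt=1e-3), "volts")  # 1000 V across the switch
```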
Large electromagnets are usually powered by variable current electronic power supplies, controlled by a microprocessor, which prevent voltage spikes by accomplishing current changes slowly, in gentle ramps. It may take several minutes to energize or deenergize a large magnet.
Lorentz forces
In powerful electromagnets, the magnetic field exerts a force on each turn of the windings, due to the Lorentz force acting on the moving charges within the wire. The Lorentz force is perpendicular to both the axis of the wire and the magnetic field. It can be visualized as a pressure between the magnetic field lines, pushing them apart. It has two effects on an electromagnet's windings:
The field lines within the axis of the coil exert a radial force on each turn of the windings, tending to push them outward in all directions. This causes a tensile stress in the wire.
The leakage field lines between each turn of the coil exert an attractive force between adjacent turns, tending to pull them together.
The Lorentz forces increase with $B^2$. In large electromagnets the windings must be firmly clamped in place, to prevent motion on power-up and power-down from causing metal fatigue in the windings. In the Bitter electromagnet design, used in very high-field research magnets, the windings are constructed as flat disks to resist the radial forces, and clamped in an axial direction to resist the axial ones.
Core losses
In alternating current (AC) electromagnets, used in transformers, inductors, and AC motors and generators, the magnetic field is constantly changing. This causes energy losses in their magnetic cores, which is dissipated as heat in the core. The losses stem from two processes: eddy currents and hysteresis losses.
Eddy currents: From Faraday's law of induction, a changing magnetic field induces circulating electric currents (eddy currents) inside nearby conductors. The energy in these currents is dissipated as heat in the electrical resistance of the conductor, so they are a cause of energy loss. Since the magnet's iron core is conductive, and most of the magnetic field is concentrated there, eddy currents in the core are the major problem. Eddy currents are closed loops of current that flow in planes perpendicular to the magnetic field. The energy dissipated is proportional to the area enclosed by the loop. To prevent them, the cores of AC electromagnets are made of stacks of thin steel sheets, or laminations, oriented parallel to the magnetic field, with an insulating coating on the surface. The insulation layers prevent eddy current from flowing between the sheets. Any remaining eddy currents must flow within the cross-section of each individual lamination, which reduces losses greatly. Another alternative is to use a ferrite core, which is a nonconductor.
Hysteresis losses: Reversing the direction of magnetization of the magnetic domains in the core material each cycle causes energy loss, because of the coercivity of the material. These are called hysteresis losses. The energy lost per cycle is proportional to the area of the hysteresis loop in the graph. To minimize this loss, magnetic cores used in transformers and other AC electromagnets are made of "soft" low coercivity materials, such as silicon steel or soft ferrite. The energy loss per cycle of the alternating current is constant for each of these processes, so the power loss increases linearly with frequency.
High-field electromagnets
Superconducting electromagnets
When a magnetic field higher than the ferromagnetic limit of 1.6 T is needed, superconducting electromagnets can be used. Instead of using ferromagnetic materials, these use superconducting windings cooled with liquid helium, which conduct current without electrical resistance. These allow enormous currents to flow, which generate intense magnetic fields. Superconducting magnets are limited by the field strength at which the winding material ceases to be superconducting. Current designs are limited to 10–20 T, with the current (2017) record of 32 T. The necessary refrigeration equipment and cryostat make them much more expensive than ordinary electromagnets. However, in high-power applications this can be offset by lower operating costs, since after startup no power is required for the windings, since no energy is lost to ohmic heating. They are used in particle accelerators and MRI machines.
Bitter electromagnets
Both iron-core and superconducting electromagnets have limits to the field they can produce. Therefore, the most powerful man-made magnetic fields have been generated by air-core non-superconducting electromagnets of a design invented by Francis Bitter in 1933, called Bitter electromagnets. Instead of wire windings, a Bitter magnet consists of a solenoid made of a stack of conducting disks, arranged so that the current moves in a helical path through them, with a hole through the center where the maximum field is created. This design has the mechanical strength to withstand the extreme Lorentz forces of the field, which increase with $B^2$. The disks are pierced with holes through which cooling water passes to carry away the heat caused by the high current. The strongest continuous field achieved solely with a resistive magnet is 41.5 T, produced by a Bitter electromagnet at the National High Magnetic Field Laboratory in Tallahassee, Florida. The previous record was 37.5 T. The strongest continuous magnetic field overall, 45 T, was achieved in June 2000 with a hybrid device consisting of a Bitter magnet inside a superconducting magnet.
The factor that limits the strength of electromagnets is the inability to dissipate the enormous waste heat, so more powerful fields, up to 100 T, have been obtained from resistive magnets by sending brief pulses of high current through them; the inactive period after each pulse allows the heat produced during the pulse to be removed before the next pulse.
Explosively pumped flux compression
The most powerful man-made magnetic fields have been created by using explosives to compress the magnetic field inside an electromagnet as it is pulsed; these are called explosively pumped flux compression generators. The implosion compresses the magnetic field to values of around 1,000 T for a few microseconds. While this method may seem very destructive, shaped charges redirect the blast outward to minimize harm to the experiment. These devices are known as destructive pulsed electromagnets. They are used in physics and materials science research to study the properties of materials at high magnetic fields.
Definition of terms
See also
Dipole magnet – the most basic form of magnet
Electromagnetism
Electropermanent magnet – a magnetically hard electromagnet arrangement
Field coil
Magnetic bearing
Pulsed field magnet
Quadrupole magnet – a combination of magnets and electromagnets used mainly to affect the motion of charged particles
References
External links
Electromagnets - The Feynman Lectures on Physics
Electromagnetism
Types of magnets | Electromagnet | Physics | 5,777 |
12,868,737 | https://en.wikipedia.org/wiki/Calvera%20%28X-ray%20source%29 | In astronomy, Calvera (also known as 1RXS J141256.0+792204) is an X-ray source in the constellation Ursa Minor, identified in 2007 as an isolated neutron star. It is one of the hottest and closest of its kind to Earth.
It is named after the villain in the 1960 film The Magnificent Seven, as it is the eighth such neutron star known within 500 parsecs of Earth, and the seven previously discovered isolated neutron stars are called 'The Magnificent Seven'.
There is a ring of radio emission almost a degree in diameter, offset about 4′.9 from Calvera itself, which is possibly its supernova remnant.
References
External links
Universe Today, Closest Neutron Star Discovered
Pennsylvania State University.
Neutron stars
Ursa Minor
ROSAT objects
Stars with proper names | Calvera (X-ray source) | Astronomy | 168 |
23,889,683 | https://en.wikipedia.org/wiki/Sulfolipid | Sulfolipids are a class of lipids which possess a sulfur-containing functional group. An abundant sulfolipid is sulfoquinovosyl diacylglycerol, which is composed of a glycoside of sulfoquinovose and diacylglycerol. In plants, sulfoquinovosyl diacylglycerides (SQDG) are important members of the sulfur cycle. Other important sulfolipids include sulfatide and seminolipid, each of which is a sulfated glycolipid. Sulfolipids have been implicated in the functions of two of the core components of the photosynthetic electron transport chain and, while not necessarily essential, might have a protective function when the photosynthetic apparatus is under stress.
See also
Sulfatide
Galactolipid
Phospholipid
Glycolipid
References
Lipids
Sulfate esters
Anionic surfactants | Sulfolipid | Chemistry,Biology | 198 |
6,197,802 | https://en.wikipedia.org/wiki/ISO%2031-10 | ISO 31-10 is the part of international standard ISO 31 that defines names and symbols for quantities and units related to nuclear reactions and ionizing radiations. It gives names and symbols for 70 quantities and units. Where appropriate, conversion factors are also given. The standard was withdrawn in 2009 and replaced by ISO 80000-10.
External links
ISO 31-10:1992 - Quantities and units - Part 10: Nuclear reactions and ionizing radiations
00031-10
Radioactivity quantities | ISO 31-10 | Physics,Chemistry,Mathematics | 103 |
69,447,725 | https://en.wikipedia.org/wiki/Inner%20Team | The inner team is a personality model created by German psychologist Friedemann Schulz von Thun. The plurality of the human inner life or facets of the personality (Self) is presented using a metaphor of a team and a team leader. This is supposed to support the self-clarification process and by doing so set the foundation for a clear and authentic external communication.
Motivation
In the first two volumes of his seminal work Miteinander reden (engl. Talking to each other), Schulz von Thun deals with the topic of functioning communication. In 1998, Thun published Miteinander reden 3, which expands his theory of communication to the notion of the inner team. By introducing the model of the inner team, he wants to provide instructions for self-help.
The inner team is a modification of the "parts party", a method from systemic family therapy, which was developed by Virginia Satir in the 1970s. Additionally, his model draws upon the interacting parts of the personality within a human being that, amongst others, have been described by Margaret Paul and Erika J. Chopich.
The Inner Team Member
The inner team and its team members are a metaphor. Each member of the inner team thus represents an inner part or aspect of the whole personality. It is neither a pluralistic subpersonality in the sense of multiple personalities, nor is it to be confused with behaviors. Visible behavior is the result of an inner process. Each team member only wants the best for the team manager. Behavior can therefore only rarely be permanently and inevitably associated with one single team member.
Team members differ in various ways: they are loud or quiet, are slow or fast to join the conversation, are dominant with external contacts or only show inwards where they appear as thoughts, emotions, impulses, moods or bodily signals. Between the team members, there are group dynamics similar to the external life. In their entirety, they mirror the life experience of a human, including the opinion of parents, friends and life partners, or values of a society of which one feels part.
The Team Leader
The team leader is described by Schulz von Thun as the superordinate "I", the cohesive entity, which either passively follows the dialogue of its team members or actively interferes, but which always has the last word with externally effective decisions. Many aspects of actual team leadership can be transferred to the inner team leader.
The Inner Team Meeting
If a human being has to make a difficult decision, they more or less consciously hold inner team meetings. In reality, disorder, inconsistent statements (e.g. a bad gut feeling vs. a rational argument) and the dominance of the loud, fast, and popular team members often shape team meetings that are not consciously controlled. Still, the team leader succeeds in many cases in precipitating a satisfactory decision, thanks to their practice. For especially difficult or unfamiliar decisions, this may no longer be the case. That is when Schulz von Thun recommends a deliberate team meeting.
For this, to begin with, those team members who want to comment on the question have to be identified. Often, this works amazingly well, if one takes a little time to listen to what is going on inside oneself. Afterwards, each team member should have the right to bring forward their message without encountering criticism. A free discussion offers everyone the chance to really meet each other head-on. The team leader should pay great attention in order to be able to summarize the controversial questions and positions to it. Here, leadership qualities are especially important. The team leaders ought to remain neutral and should value all opinions. On the basis of this, one can think about a compromise, much like in real teams. Finally, the result can be summarized and the approval from all participants can be sought.
Further aspects of the Inner Team
The metaphor of the inner team can be utilized even more widely. In Miteinander reden 3, Schulz von Thun also introduces the following concepts:
Inner conflict management
Non-acceptance of team members and its consequences
Team building in inner and external contact
Situation-dependent team composition
See also
Nonviolent Communication
Ego-state therapy
Four-sides model (differences in external communication)
Autocommunication
Dialogical self
Internal Family Systems Model
Inside Out (2015 film)
Literature
Friedemann Schulz von Thun: Miteinander reden 3 - Das 'innere Team' und situationsgerechte Kommunikation. Rowohlt, Reinbek 1998, .
Friedemann Schulz von Thun, Wibke Stegemann (Publisher): Das Innere Team in Aktion. Praktische Arbeit mit dem Modell. Rowohlt, Reinbek 2004, .
External links
'Vom "zerstrittenen Haufen" zum "Inneren Team"' - Interview with Professor Schulz von Thun
References
Personality theories
Personality
Communication theory
Intrapersonal communication | Inner Team | Biology | 1,010 |
32,610,132 | https://en.wikipedia.org/wiki/Linifanib | Linifanib (ABT-869) is a structurally novel, potent inhibitor of receptor tyrosine kinases (RTK), vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF) with IC50 of 0.2, 2, 4, and 7 nM for human endothelial cells, PDGF receptor beta (PDGFR-β), KDR, and colony stimulating factor 1 receptor (CSF-1R), respectively. It has much less activity (IC50s > 1 μM) against unrelated RTKs, soluble tyrosine kinases, or serine/threonine kinases. In vivo linifanib is effective orally in mechanism-based murine models of VEGF-induced uterine edema (ED50 = 0.5 mg/kg) and corneal angiogenesis (>50%inhibition, 15 mg/kg).
The substance has been used as part of a chemical cocktail to turn old and senescent human cells back into young ones (as measured by transcriptomic age), without turning them all the way back into undifferentiated stem cells.
References
Tyrosine kinase inhibitors
Ureas
Experimental cancer drugs | Linifanib | Chemistry | 259 |