| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
11,891,713 | https://en.wikipedia.org/wiki/Gs%20alpha%20subunit | {{DISPLAYTITLE:Gs alpha subunit}}
The Gs alpha subunit (Gαs, Gsα) is a subunit of the heterotrimeric G protein Gs that stimulates the cAMP-dependent pathway by activating adenylyl cyclase. Gsα is a GTPase that functions as a cellular signaling protein.
Gsα is the founding member of one of the four families of heterotrimeric G proteins, defined by the alpha subunits they contain: the Gαs family, the Gαi/Gαo family, the Gαq family, and the Gα12/Gα13 family. The Gs family has only two members; besides Gsα itself, the other member is Golf, named for its predominant expression in the olfactory system. In humans, Gsα is encoded by the GNAS complex locus, while Golfα is encoded by the GNAL gene.
Function
The general function of Gs is to activate intracellular signaling pathways in response to activation of cell surface G protein-coupled receptors (GPCRs). GPCRs function as part of a three-component system of receptor-transducer-effector. The transducer in this system is a heterotrimeric G protein, composed of three subunits: a Gα protein such as Gsα, and a complex of two tightly linked proteins called Gβ and Gγ in a Gβγ complex. When not stimulated by a receptor, Gα is bound to GDP and to Gβγ to form the inactive G protein trimer. When the receptor binds an activating ligand outside the cell (such as a hormone or neurotransmitter), the activated receptor acts as a guanine nucleotide exchange factor to promote GDP release from and GTP binding to Gα, which drives dissociation of GTP-bound Gα from Gβγ. In particular, GTP-bound, activated Gsα binds to adenylyl cyclase to produce the second messenger cAMP, which in turn activates the cAMP-dependent protein kinase (also called Protein Kinase A or PKA). The cellular effects of Gsα are thus exerted through PKA phosphorylation of downstream targets.
Although each GTP-bound Gsα can activate only one adenylyl cyclase enzyme, amplification of the signal occurs because one receptor can activate multiple copies of Gs while that receptor remains bound to its activating agonist, and each Gsα-bound adenylyl cyclase enzyme can generate substantial cAMP to activate many copies of PKA.
Receptors
The G protein-coupled receptors that couple to the Gs family proteins include:
5-HT4, 5-HT6 and 5-HT7 serotonergic receptors
Adenosine receptor types A2a and A2b
Adrenocorticotropic hormone receptor (a.k.a. MC2R)
Arginine vasopressin receptor 2
β-adrenergic receptors types β1, β2 and β3
Calcitonin receptor
Calcitonin gene-related peptide receptor
Corticotropin-releasing hormone receptor
Dopamine D1 and D5 receptors
Follicle-stimulating hormone receptor
Gastric inhibitory polypeptide receptor
Glucagon receptor
Growth-hormone-releasing hormone receptor
Histamine H2 receptor
Luteinizing hormone/choriogonadotropin receptor
Melanocortin receptor: MC1R, MC2R (a.k.a. ACTH receptor), MC3R, MC4R, MC5R
Olfactory receptors, through Golf in the olfactory neurons
Parathyroid hormone receptors PTH1R and PTH2R
Prostaglandin receptor types D2 and I2
Secretin receptor
Thyrotropin receptor
Trace amine-associated receptor 1
Vasopressin receptor 2
See also
Second messenger system
G protein-coupled receptor
Heterotrimeric G protein
Adenylyl cyclase
Protein kinase A
Gi alpha subunit
Gq alpha subunit
G12/G13 alpha subunits
References
External links
Peripheral membrane proteins
Medical mnemonics | Gs alpha subunit | Chemistry | 830 |
19,675,479 | https://en.wikipedia.org/wiki/Fractional%20coordinates | In crystallography, a fractional coordinate system (crystal coordinate system) is a coordinate system in which basis vectors used to describe the space are the lattice vectors of a crystal (periodic) pattern. The selection of an origin and a basis define a unit cell, a parallelotope (i.e., generalization of a parallelogram (2D) or parallelepiped (3D) in higher dimensions) defined by the lattice basis vectors where is the dimension of the space. These basis vectors are described by lattice parameters (lattice constants) consisting of the lengths of the lattice basis vectors and the angles between them .
Most cases in crystallography involve two- or three-dimensional space. In the three-dimensional case, the basis vectors are commonly displayed as $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, with their lengths denoted by $a$, $b$, $c$ respectively, and the angles denoted by $\alpha$, $\beta$, $\gamma$, where conventionally, $\alpha$ is the angle between $\mathbf{b}$ and $\mathbf{c}$, $\beta$ is the angle between $\mathbf{a}$ and $\mathbf{c}$, and $\gamma$ is the angle between $\mathbf{a}$ and $\mathbf{b}$.
Crystal Structure
A crystal structure is defined as the spatial distribution of the atoms within a crystal, usually modeled by the idea of an infinite crystal pattern. An infinite crystal pattern refers to the infinite 3D periodic array which corresponds to a crystal, in which the lengths of the periodicities of the array may not be made arbitrarily small. The geometrical shift which takes a crystal structure coincident with itself is termed a symmetry translation (translation) of the crystal structure. The vector related to this shift is called a translation vector $\mathbf{t}$. Since a crystal pattern is periodic, all integer linear combinations of translation vectors are also themselves translation vectors:

$$\mathbf{t} = u_1 \mathbf{t}_1 + u_2 \mathbf{t}_2 + u_3 \mathbf{t}_3, \qquad u_1, u_2, u_3 \in \mathbb{Z}.$$
Lattice
The vector lattice (lattice) $\mathbf{L}$ is defined as the infinite set consisting of all of the translation vectors of a crystal pattern. Each of the vectors in the vector lattice is called a lattice vector. From the vector lattice it is possible to construct a point lattice. This is done by selecting an origin $X_0$ with position vector $\mathbf{x}_0$: the endpoints of the vectors $\mathbf{x}_0 + \mathbf{t}$, taken over every lattice vector $\mathbf{t}$, make up the point lattice of $X_0$ and $\mathbf{L}$. Each point in a point lattice has periodicity, i.e., each point is identical and has the same surroundings. There exist an infinite number of point lattices for a given vector lattice, as any arbitrary origin can be chosen and paired with the lattice vectors of the vector lattice. The points or particles that are made coincident with one another through a translation are called translation equivalent.
Coordinate systems
General coordinate systems
Usually when describing a space geometrically, a coordinate system is used which consists of a choice of origin and a basis of $n$ linearly independent, non-coplanar basis vectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$, where $n$ is the dimension of the space being described. With reference to this coordinate system, each point in the space can be specified by $n$ coordinates (a coordinate $n$-tuple). The origin has coordinates $(0, 0, \ldots, 0)$ and an arbitrary point has coordinates $(x_1, x_2, \ldots, x_n)$. The position vector $\mathbf{r}$ is then,

$$\mathbf{r} = x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 + \cdots + x_n \mathbf{e}_n.$$
In $n$ dimensions, the lengths of the basis vectors are denoted $e_1, e_2, \ldots, e_n$ and the angles between them $\alpha_{jk}$. However, most cases in crystallography involve two- or three-dimensional space, in which the basis vectors are commonly displayed as $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, with their lengths and angles denoted by $a$, $b$, $c$ and $\alpha$, $\beta$, $\gamma$ respectively.
Cartesian coordinate system
A widely used coordinate system is the Cartesian coordinate system, which consists of orthonormal basis vectors. This means that,

$$|\mathbf{e}_1| = |\mathbf{e}_2| = |\mathbf{e}_3| = 1$$

and

$$\mathbf{e}_i \cdot \mathbf{e}_j = 0 \quad \text{for } i \neq j.$$
However, when describing objects with crystalline or periodic structure, a Cartesian coordinate system is often not the most useful, as it does not reflect the symmetry of the lattice in the simplest manner.
Fractional (crystal) coordinate system
In crystallography, a fractional coordinate system is used in order to better reflect the symmetry of the underlying lattice of a crystal pattern (or any other periodic pattern in space). In a fractional coordinate system the basis vectors of the coordinate system are chosen to be lattice vectors and the basis is then termed a crystallographic basis (or lattice basis).
In a lattice basis $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$, any lattice vector $\mathbf{t}$ can be represented as,

$$\mathbf{t} = u_1 \mathbf{a}_1 + u_2 \mathbf{a}_2 + u_3 \mathbf{a}_3.$$
There are an infinite number of lattice bases for a crystal pattern. However, these can be chosen in such a way that the simplest description of the pattern is obtained. These bases are used in the International Tables for Crystallography, Volume A, and are termed conventional bases. A lattice basis is called primitive if the basis vectors are lattice vectors and all lattice vectors can be expressed as,

$$\mathbf{t} = u_1 \mathbf{a}_1 + u_2 \mathbf{a}_2 + u_3 \mathbf{a}_3, \qquad u_1, u_2, u_3 \in \mathbb{Z}.$$
However, the conventional basis for a crystal pattern is not always chosen to be primitive. Instead, it is chosen so the number of orthogonal basis vectors is maximized. This results in some of the coefficients of the equations above being fractional. A lattice in which the conventional basis is primitive is called a primitive lattice, while a lattice with a non-primitive conventional basis is called a centered lattice.
The choice of an origin and a basis implies the choice of a unit cell, which can further be used to describe a crystal pattern. The unit cell is defined as the parallelotope (i.e., a generalization of a parallelogram (2D) or parallelepiped (3D) to higher dimensions) in which the coordinates of all points are such that $0 \le x_1, x_2, \ldots, x_n < 1$.
Furthermore, points outside of the unit cell can be transformed inside of the unit cell through standardization, the addition or subtraction of integers to the coordinates of points to ensure $0 \le x_i < 1$. In a fractional coordinate system, the lengths of the basis vectors and the angles between them are called the lattice parameters (lattice constants) of the lattice. In two and three dimensions, these correspond to the lengths of and angles between the edges of the unit cell.
The fractional coordinates $(x, y, z)$ of a point in space, in terms of the lattice basis vectors $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, are defined by

$$\mathbf{r} = x\,\mathbf{a} + y\,\mathbf{b} + z\,\mathbf{c}.$$
Calculations involving the unit cell
General transformations between fractional and Cartesian coordinates
Three Dimensions
The relationship between fractional coordinates $(u, v, w)$ and Cartesian coordinates $(x, y, z)$ can be described by the matrix transformation $\mathbf{A}$:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} a & b\cos\gamma & c\cos\beta \\ 0 & b\sin\gamma & c\,\dfrac{\cos\alpha - \cos\beta\cos\gamma}{\sin\gamma} \\ 0 & 0 & \dfrac{V}{ab\sin\gamma} \end{pmatrix} \begin{pmatrix} u \\ v \\ w \end{pmatrix},$$

where $V = abc\,\sqrt{1 - \cos^2\alpha - \cos^2\beta - \cos^2\gamma + 2\cos\alpha\cos\beta\cos\gamma}$ is the volume of the unit cell.

Similarly, the Cartesian coordinates can be converted back to fractional coordinates using the matrix transformation $\mathbf{A}^{-1}$:

$$\begin{pmatrix} u \\ v \\ w \end{pmatrix} = \begin{pmatrix} \dfrac{1}{a} & -\dfrac{\cos\gamma}{a\sin\gamma} & bc\,\dfrac{\cos\alpha\cos\gamma - \cos\beta}{V\sin\gamma} \\ 0 & \dfrac{1}{b\sin\gamma} & ac\,\dfrac{\cos\beta\cos\gamma - \cos\alpha}{V\sin\gamma} \\ 0 & 0 & \dfrac{ab\sin\gamma}{V} \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}.$$
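The matrix $\mathbf{A}$ can be assembled directly from the six lattice parameters. The following Python sketch is a minimal illustration under the common crystallographic setting ($\mathbf{a}$ along $x$, $\mathbf{b}$ in the $xy$-plane); the function name and argument conventions are my own, not from the article:

```python
import numpy as np

def frac_to_cart_matrix(a, b, c, alpha, beta, gamma):
    """Matrix A mapping fractional to Cartesian coordinates,
    x_cart = A @ x_frac. Angles are given in degrees."""
    al, be, ga = np.radians([alpha, beta, gamma])
    # square-root factor in the cell volume, V = a*b*c*v
    v = np.sqrt(1 - np.cos(al)**2 - np.cos(be)**2 - np.cos(ga)**2
                + 2 * np.cos(al) * np.cos(be) * np.cos(ga))
    return np.array([
        [a, b * np.cos(ga), c * np.cos(be)],
        [0, b * np.sin(ga), c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)],
        [0, 0,              c * v / np.sin(ga)],
    ])

A = frac_to_cart_matrix(5.0, 6.0, 7.0, 90.0, 100.0, 120.0)
cart = A @ np.array([0.25, 0.5, 0.75])   # fractional -> Cartesian
frac = np.linalg.solve(A, cart)          # Cartesian -> fractional (applies A^-1)
```

Solving the linear system numerically, as done here, avoids transcribing the explicit inverse matrix by hand.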
Transformations using the cell tensor
Another common method of converting between fractional and Cartesian coordinates involves the use of a cell tensor which contains each of the basis vectors of the space expressed in Cartesian coordinates.
Two Dimensions
Cell tensor
In Cartesian coordinates the 2 basis vectors are represented by a $2 \times 2$ cell tensor $\mathbf{h}$ whose rows are the basis vectors:

$$\mathbf{h} = \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \end{pmatrix} = \begin{pmatrix} a_x & a_y \\ b_x & b_y \end{pmatrix}$$

The area of the unit cell, $S$, is given by the determinant of the cell matrix:

$$S = \left| \det \mathbf{h} \right| = \left| a_x b_y - a_y b_x \right|$$

For the special case of a square or rectangular unit cell, the matrix is diagonal, and we have that:

$$S = a_x\, b_y.$$
Relationship between fractional and Cartesian coordinates
The relationship between fractional coordinates $\mathbf{s} = (u, v)$ and Cartesian coordinates $\mathbf{x} = (x, y)$ can be described by the matrix transformation $\mathbf{h}^{\mathsf{T}}$:

$$\mathbf{x} = \mathbf{h}^{\mathsf{T}}\, \mathbf{s}$$

Similarly, the Cartesian coordinates can be converted back to fractional coordinates using the matrix transformation $\left(\mathbf{h}^{\mathsf{T}}\right)^{-1}$:

$$\mathbf{s} = \left(\mathbf{h}^{\mathsf{T}}\right)^{-1} \mathbf{x}$$
Three Dimensions
Cell tensor
In Cartesian coordinates the 3 basis vectors are represented by a $3 \times 3$ cell tensor $\mathbf{h}$ whose rows are the basis vectors:

$$\mathbf{h} = \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \\ \mathbf{c} \end{pmatrix} = \begin{pmatrix} a_x & a_y & a_z \\ b_x & b_y & b_z \\ c_x & c_y & c_z \end{pmatrix}$$

The volume of the unit cell, $V$, is given by the determinant of the cell tensor:

$$V = \left| \det \mathbf{h} \right|$$

For the special case of a cubic, tetragonal, or orthorhombic cell, the matrix is diagonal, and we have that:

$$V = a_x\, b_y\, c_z.$$
Relationship between fractional and Cartesian coordinates
The relationship between fractional coordinates $\mathbf{s} = (u, v, w)$ and Cartesian coordinates $\mathbf{x} = (x, y, z)$ can be described by the matrix transformation $\mathbf{h}^{\mathsf{T}}$:

$$\mathbf{x} = \mathbf{h}^{\mathsf{T}}\, \mathbf{s}$$

Similarly, the Cartesian coordinates can be converted back to fractional coordinates using the matrix transformation $\left(\mathbf{h}^{\mathsf{T}}\right)^{-1}$:

$$\mathbf{s} = \left(\mathbf{h}^{\mathsf{T}}\right)^{-1} \mathbf{x}$$
Arbitrary number of dimensions
Cell tensor
In Cartesian coordinates the $n$ basis vectors are represented by an $n \times n$ cell tensor $\mathbf{h}$ whose rows are the basis vectors:

$$\mathbf{h} = \begin{pmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \vdots \\ \mathbf{a}_n \end{pmatrix}$$

The hypervolume of the unit cell, $V$, is given by the determinant of the cell tensor:

$$V = \left| \det \mathbf{h} \right|$$
Relationship between fractional and Cartesian coordinates
The relationship between fractional coordinates $\mathbf{s}$ and Cartesian coordinates $\mathbf{x}$ can be described by the matrix transformation $\mathbf{h}^{\mathsf{T}}$:

$$\mathbf{x} = \mathbf{h}^{\mathsf{T}}\, \mathbf{s}$$

Similarly, the Cartesian coordinates can be converted back to fractional coordinates using the transformation $\left(\mathbf{h}^{\mathsf{T}}\right)^{-1}$:

$$\mathbf{s} = \left(\mathbf{h}^{\mathsf{T}}\right)^{-1} \mathbf{x}$$
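Under this row-vector convention (each row of the cell tensor is one basis vector), the conversions and the standardization step described earlier reduce to a few lines of code. A minimal Python sketch follows; the function names are illustrative, not from the article:

```python
import numpy as np

def cart_from_frac(h, s):
    """Cartesian coordinates from fractional coordinates s, where the
    rows of the n x n cell tensor h are the basis vectors."""
    return h.T @ np.asarray(s)

def frac_from_cart(h, x):
    """Fractional coordinates from Cartesian coordinates x."""
    return np.linalg.solve(h.T, np.asarray(x))

def standardize(s):
    """Map fractional coordinates into the unit cell, 0 <= s_i < 1."""
    return np.asarray(s) % 1.0

def hypervolume(h):
    """Hypervolume of the unit cell, |det h|."""
    return abs(np.linalg.det(h))
```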
Determination of cell properties in two and three dimensions using the metric tensor
The metric tensor $\mathbf{G}$ is sometimes used for calculations involving the unit cell and is defined (in matrix form) as:

$$\mathbf{G} = \mathbf{h}\,\mathbf{h}^{\mathsf{T}}, \qquad g_{ij} = \mathbf{a}_i \cdot \mathbf{a}_j$$

In two dimensions,

$$\mathbf{G} = \begin{pmatrix} a^2 & ab\cos\gamma \\ ab\cos\gamma & b^2 \end{pmatrix}$$

In three dimensions,

$$\mathbf{G} = \begin{pmatrix} a^2 & ab\cos\gamma & ac\cos\beta \\ ab\cos\gamma & b^2 & bc\cos\alpha \\ ac\cos\beta & bc\cos\alpha & c^2 \end{pmatrix}$$
The distance between two points $P_1$ and $P_2$ in the unit cell, with fractional coordinate vectors $\mathbf{x}_1$ and $\mathbf{x}_2$, can be determined from the relation:

$$d_{12} = \sqrt{(\mathbf{x}_1 - \mathbf{x}_2)^{\mathsf{T}}\, \mathbf{G}\, (\mathbf{x}_1 - \mathbf{x}_2)}$$

The distance $d$ from the origin of the unit cell to a point $\mathbf{x}$ within the unit cell can be determined from the relation:

$$d = \sqrt{\mathbf{x}^{\mathsf{T}}\, \mathbf{G}\, \mathbf{x}}$$

The angle $\theta$ formed from three points $P_1$, $P_2$ (apex), and $P_3$ within the unit cell can be determined from the relation:

$$\cos\theta = \frac{(\mathbf{x}_1 - \mathbf{x}_2)^{\mathsf{T}}\, \mathbf{G}\, (\mathbf{x}_3 - \mathbf{x}_2)}{d_{12}\, d_{32}}$$

The volume of the unit cell, $V$, can be determined from the relation:

$$V = \sqrt{\det \mathbf{G}}$$
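These relations translate directly into code. A minimal Python sketch, assuming angles in degrees and fractional coordinates as length-3 sequences (function names are my own):

```python
import numpy as np

def metric_tensor(a, b, c, alpha, beta, gamma):
    """3D metric tensor G with entries g_ij = a_i . a_j."""
    al, be, ga = np.radians([alpha, beta, gamma])
    return np.array([
        [a * a,               a * b * np.cos(ga),  a * c * np.cos(be)],
        [a * b * np.cos(ga),  b * b,               b * c * np.cos(al)],
        [a * c * np.cos(be),  b * c * np.cos(al),  c * c],
    ])

def distance(G, x1, x2):
    """Distance between points with fractional coordinates x1 and x2."""
    d = np.asarray(x1) - np.asarray(x2)
    return float(np.sqrt(d @ G @ d))

def angle(G, x1, x2, x3):
    """Angle (degrees) at apex x2 formed by the points x1, x2, x3."""
    u = np.asarray(x1) - np.asarray(x2)
    v = np.asarray(x3) - np.asarray(x2)
    cos_t = (u @ G @ v) / (distance(G, x1, x2) * distance(G, x3, x2))
    return float(np.degrees(np.arccos(cos_t)))

def cell_volume(G):
    """Unit-cell volume, sqrt(det G)."""
    return float(np.sqrt(np.linalg.det(G)))
```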
References
Crystallography
Coordinate systems | Fractional coordinates | Physics,Chemistry,Materials_science,Mathematics,Engineering | 1,644 |
40,889,432 | https://en.wikipedia.org/wiki/Leviton%20%28quasiparticle%29 | A leviton is a collective excitation of a single electron within a metal. It has been mostly studied in two-dimensional electron gases alongside quantum point contacts. The main feature is that the excitation produces an electron pulse without the creation of electron holes. The time-dependence of the pulse is described by a Lorentzian distribution created by a pulsed electric potential.
Levitons have also been described in graphene.
The leviton is named after Leonid Levitov, who first predicted its existence in 1996.
References
Quasiparticles | Leviton (quasiparticle) | Physics,Materials_science | 111 |
58,506,604 | https://en.wikipedia.org/wiki/Pharmacy%20Act%201852 | The Pharmacy Act 1852 (15 & 16 Vict. c. 56) was the first legislation in the United Kingdom to regulate pharmacists and druggists.
It set up a register of pharmacists and limited the use of the title to people registered with the Pharmaceutical Society, but proposals to give the society exclusive rights to sell drugs or poisons were rejected. It did not provide a legal definition for the trade and practice of pharmacy.
Notes
United Kingdom Acts of Parliament 1852
Drug control law in the United Kingdom
Substance dependence
Pharmacy in the United Kingdom | Pharmacy Act 1852 | Chemistry | 114 |
46,842,956 | https://en.wikipedia.org/wiki/Bekker%20numbering | Bekker numbering or Bekker pagination is the standard form of citation to the works of Aristotle. It is based on the page numbers used in the Prussian Academy of Sciences edition of the complete works of Aristotle (1831–1837) and takes its name from the editor of that edition, the classical philologist August Immanuel Bekker (1785–1871); because the academy was located in Berlin, Germany, the system is occasionally referred to by the alternative name Berlin numbering or Berlin pagination.
Bekker numbers consist of up to three ordered coordinates, or pieces of information: a number, the letter a or b, and another number, which refer respectively to the page number of Bekker's edition of the Greek text of Aristotle's works, the page column (a standard page of Bekker's edition has exactly two columns), and the line number (total lines typically ranging from 20 to 40 on a given column or page in Bekker's edition). For example, the Bekker number denoting the beginning of Aristotle's Nicomachean Ethics is 1094a1, which corresponds to page 1094 of Bekker's edition, first column (column a), line 1.
All modern editions or translations of Aristotle intended for scholarly readers use Bekker numbers, in addition to or instead of page numbers. Contemporary scholars writing on Aristotle use the Bekker number so that the author's citations can be checked by readers without having to use the same edition or translation that the author used.
While Bekker numbers are the dominant method used to refer to the works of Aristotle, Catholic or Thomist scholars often use the medieval method of reference by book, chapter, and sentence, albeit generally in addition to Bekker numbers.
Stephanus pagination is the comparable system for referring to the works of Plato, and Diels–Kranz numbering is the comparable system for Pre-Socratic philosophy. Unlike Stephanus pagination, which is based upon a three-volume translation of Plato's works and which recycles low page numbers across the three volumes, introducing the possibility for ambiguity if the Platonic work or volume is not specified, Bekker page numbers cycle from 1 through the end of the Corpus Aristotelicum regardless of volume, without starting over for some other given volume. Bekker numbering therefore has the advantage that its notation is unambiguous as compact numerical information, although it relies upon the ordering of Aristotle's works as presented in Bekker's edition.
Aristotle's works by Bekker numbers
The following list is complete. The titles are given in accordance with the standard set by the Revised Oxford Translation. Latin titles, still often used by scholars, are also given.
Aristotelian works lacking Bekker numbers
Constitution of the Athenians
The Constitution of the Athenians (or Athenaion Politeia) was not included in Bekker's edition because it was first edited in 1891 from papyrus rolls acquired in 1890 by the British Museum. The standard reference to it is by section (and subsection) numbers.
Fragments
Surviving fragments of the many lost works of Aristotle were included in the fifth volume of Bekker's edition, edited by Valentin Rose. These are not cited by Bekker numbers, however, but according to fragment numbers. Rose's first edition of the fragments of Aristotle was Aristoteles Pseudepigraphus (1863). As the title suggests, Rose considered these all to be spurious. The numeration of the fragments in a revised edition by Rose, published in the Teubner series, Aristotelis qui ferebantur librorum fragmenta, Leipzig, 1886, is still commonly used (indicated by R3), although there is a more current edition with a different numeration by Olof Gigon (published in 1987 as a new vol. 3 in Walter de Gruyter's reprint of the Bekker edition), and a new de Gruyter edition by Eckart Schütrumpf is in preparation.
For a selection of the fragments in English translation, see W.D. Ross, Select Fragments (Oxford 1952), and Jonathan Barnes (ed.), The Complete Works of Aristotle: The Revised Oxford Translation, vol. 2, Princeton 1984, pp. 2384–2465.
The works surviving only in fragments include the dialogues On Philosophy (or On the Good), Eudemus (or On the Soul), On Justice, and On Good Birth. The possibly spurious work, On Ideas survives in quotations by Alexander of Aphrodisias in his commentary on Aristotle's Metaphysics. For the dialogues, see also the editions of Richard Rudolf Walzer, Aristotelis Dialogorum fragmenta, in usum scholarum (Florence 1934), and Renato Laurenti, Aristotele: I frammenti dei dialoghi (2 vols.), Naples: Luigi Loffredo, 1987.
Use in citations
To cite a work of the Corpus Aristotelicum or part thereof, Bekker numbers may be combined with book, chapter, and line numbers to give a precise reference. By academic convention, and regardless of the citation style otherwise followed throughout an academic work, such a citation takes the general form: Book number(s).Chapter number(s), Bekker page-and-column number(s) followed by line number(s).
For example, a citation of (Metaphysics, 1.9, 991b9-20) would refer to lines 9–20 on page 991b of chapter 9 in Book I of the Metaphysics.
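Because the convention is so regular, a citation string can be decomposed mechanically. The following Python sketch illustrates this; the regular expression and field names are my own, not part of any standard library or of this referencing system:

```python
import re

# Matches citations such as "1094a1" or "1.9, 991b9-20".
BEKKER = re.compile(r"(?:(\d+)\.(\d+),\s*)?(\d+)([ab])(\d+)(?:-(\d+))?")

def parse_bekker(citation):
    m = BEKKER.fullmatch(citation.strip())
    if m is None:
        raise ValueError(f"not a Bekker-style citation: {citation!r}")
    book, chapter, page, column, first, last = m.groups()
    return {
        "book": int(book) if book else None,        # e.g. Book I -> 1
        "chapter": int(chapter) if chapter else None,
        "page": int(page),                          # Bekker page number
        "column": column,                           # 'a' or 'b'
        "lines": (int(first), int(last or first)),  # inclusive line range
    }

print(parse_bekker("1.9, 991b9-20"))
# -> {'book': 1, 'chapter': 9, 'page': 991, 'column': 'b', 'lines': (9, 20)}
```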
See also
Stephanus pagination
Diels–Kranz numbering
References
Referencing systems
1831 introductions
Ancient Greek philosophy studies
Classical Greek philosophy | Bekker numbering | Technology | 1,180 |
31,358,428 | https://en.wikipedia.org/wiki/S1%20domain | The S1 domain is a protein domain that was originally identified in ribosomal protein S1 but is found in a large number of RNA-associated proteins. The structure of the S1 RNA-binding domain from the Escherichia coli polynucleotide phosphorylase has been determined using NMR methods and consists of a five-stranded antiparallel beta barrel. Conserved residues on one face of the barrel and adjacent loops form the putative RNA-binding site.
The structure of the S1 domain is very similar to that of cold shock proteins. This suggests that they may both be derived from an ancient nucleic acid-binding protein.
Function
The S1 domain is essential in protein translation, as it interacts with the ribosome and messenger RNA. S1 binds to RNA in a sequence-specific manner.
Structure
This protein domain contains six motifs and about 70 amino acids, and it folds into a five-stranded antiparallel beta barrel, a structure very similar to that of cold shock proteins. Conserved residues on one face of the barrel and adjacent loops form the putative RNA-binding site.
References
Protein domains | S1 domain | Biology | 254 |
2,549,893 | https://en.wikipedia.org/wiki/Reaction%20norm | In ecology and genetics, a reaction norm, also called a norm of reaction, describes the pattern of phenotypic expression of a single genotype across a range of environments. One use of reaction norms is in describing how different species—especially related species—respond to varying environments. But differing genotypes within a single species may also show differing reaction norms relative to a particular phenotypic trait and environment variable. For every genotype, phenotypic trait, and environmental variable, a different reaction norm can exist; in other words, an enormous complexity can exist in the interrelationships between genetic and environmental factors in determining traits. The concept was introduced by Richard Woltereck in 1909.
A monoclonal example
Scientifically analyzing norms of reaction in natural populations can be very difficult, simply because natural populations of sexually reproductive organisms usually do not have cleanly separated or superficially identifiable genetic distinctions. However, seed crops produced by humans are often engineered to contain specific genes, and in some cases seed stocks consist of clones. Accordingly, distinct seed lines present ideal examples of differentiated norms of reaction. In fact, agricultural companies market seeds for use in particular environments based on exactly this.
Suppose the seed line A contains an allele a, and a seed line B of the same crop species contains an allele b, for the same gene. With these controlled genetic groups, we might cultivate each variety (genotype) in a range of environments. This range might be either natural or controlled variations in environment. For example, an individual plant might receive either more or less water during its growth cycle, or the average temperature the plants are exposed to might vary across a range.
A simplification of the norm of reaction might state that seed line A is good for "high water conditions" while a seed line B is good for "low water conditions". But the full complexity of the norm of reaction is a function, for each genotype, relating environmental factor to phenotypic trait. By controlling for or measuring actual environments across which monoclonal seeds are cultivated, one can concretely observe norms of reaction. Normal distributions, for example, are common. Of course, the distributions need not be bell-curves.
Reaction norm from an inbred population
One advantage of plants is that the same genotype, such as a recombinant inbred line (RIL), can be repeatedly evaluated in multiple environments, or a multi-environmental trial (MET). The reaction norm can then be explored based on the geographic location, mean trait value summarized from the whole population at each environment, or an explicit performance-free index capturing relevant environment inputs.
Misunderstanding genetic/environmental interactions
Popular non-scientific or lay-scientific audiences frequently misunderstand or simply fail to recognize the existence of norms of reaction. A widespread conception is that each genotype gives a certain range of possible phenotypic expressions. In popular conception, something which is "more genetic" gives a narrower range, while something which is "less genetic (more environmental)" gives a wider range of phenotypic possibilities. This limited conceptual framework is especially prevalent in discussions of human traits such as IQ, sexual orientation, altruism, or schizophrenia (see Nature versus nurture).
Popular conception of genotype/phenotype interaction
TRAIT SCALE
<--6----------5----------4----------3----------2----------1----------0-->
^ (Genotype A) ^ ^ (Genotype B) ^
| | | |
Environ <------> Other Environ <------> Other
extreme extreme extreme extreme
The problem with this common simplified image is not that it does not represent a possible norm of reaction. Rather, by reducing the picture from two dimensions to just one, it focuses only on discrete, non-overlapping phenotypic expressions, and hides the more common pattern of local minima and maxima in phenotypic expression, with overlapping ranges of phenotypic expression between genotypes.
See also
Canalisation (genetics)
Differential susceptibility
Genetic determinism
Nature versus nurture
Phenotypic plasticity
References
Ecology | Reaction norm | Biology | 895 |
47,439,551 | https://en.wikipedia.org/wiki/Penicillium%20roseopurpureum | Penicillium roseopurpureum is an anamorph species of fungus in the genus Penicillium which produces Carviolin.
References
Further reading
roseopurpureum
Fungi described in 1901
Fungus species | Penicillium roseopurpureum | Biology | 47 |
14,681,729 | https://en.wikipedia.org/wiki/Child%20pyromaniac | A child pyromaniac is a child with an impulse-control disorder that is primarily distinguished by a compulsion to set fires in order to relieve built-up tension. Child pyromania is the rarest form of fire-setting.
Most young children are not diagnosed with pyromania, but rather with conduct disorders. A key feature of pyromania is repeated association with fire without a real motive. Pyromania is not a commonly diagnosed disorder, and only occurs in about one percent of the population. It can occur in children as young as three years old.
About ninety percent of the people officially diagnosed with pyromania are male. Pyromaniacs and people with other mental illnesses are responsible for about 14% of fires.
Symptoms
Many clinical studies have found that fire-setting rarely occurs by itself, but usually occurs in addition to other socially unacceptable behavior. The motives that have earned the most attention are pleasure, a cry for help, retaliation against adults, and a desire to reunite the family.
Fire-setting among children and teens can be recurring or periodic. Some children and teens may set fires often to release tension. Others may only seek to set fires during times of great stress. Some of the symptoms of pyromania are depression, conflicts in relationships, and trouble coping with stress and anxiety.
Diagnosis
The Diagnostic and Statistical Manual of Mental Disorders, also known as the DSM, gives six standards that must be met for a child to be officially diagnosed with pyromania:
The child has to have set more than one fire deliberately.
Before setting the fire, the child must have felt some feelings of tension or arousal.
The child must show that he or she is attracted to fire and anything related to fire.
The child must feel a sense of relief or satisfaction from setting the fire and witnessing it.
The child does not have other motives like revenge, financial gain, delusions, or brain damage for setting the fire.
The fire-setting problem cannot be attributed to other disorders like anti-social personality disorder or conduct disorders.
Even though fire-setting and pyromania are prevalent in children, these standards are hard to apply to their age group. There is not a lot of experience in diagnosing pyromania, mainly because of the little experience that health care professionals have with fire-setting.
Comparison to child fire-setters
There are many important distinctions between a child pyromaniac and a child fire-setter. In general, a fire-setter is any individual who feels the impulse to set a fire for unusual reasons.
While a child fire-setter is usually curious about fire and has the desire to learn more about it, a child pyromaniac has an unusually bizarre impulse or desire to set intentional fires.
Pyromania, also known as pathological fire-setting, is when the desire to set fires is repetitive and destructive to people or property. The most important difference between pyromania and fire-setting is that pyromania is a mental disorder, but fire-setting is simply a behavior and can be more easily fixed.
Minor or non-severe fire-setting is defined as "accidental or occasional fire-starting behavior" by unsupervised children. Usually these fires are started when a curious child plays with matches, lighters, or small fires. Juveniles in this minor group average at most 2.5 accidental fires in their lifetime.
Most children in this group are between five and ten years of age and do not realize the dangers of playing with fire. Pathological fire-setting manifests when the action is "a deliberate, planned, and persistent behavior". Juveniles in this severe group set about 5.3 fires.
Most young children are not diagnosed as having pyromania, but rather conduct disorders.
Epidemiology
There are two basic types of children that start fires. The first type is the curiosity fire-setter who starts the fire just to find out what will happen. The second type is the problem fire-setter who usually sets fires based on changes in their environment or due to a conduct disorder.
Causes
Fire-setting is made up of five subcategories: the curious fire-setter, the sexually motivated fire-setter, the "cry for help" fire-setter, the "severely disturbed" group, and the rare form of pyromania. Pyromania usually surfaces in childhood, but there is no conclusive data about the average age of onset.
Child pyromaniacs are usually filled with an uncontrollable urge to set fires to relieve tension. Not much is known about what genetically causes pyromania but there have been many studies that have explored the topic.
The causes of fire setting among young children and youths can be attributed to many factors, which are divided into individual and environmental factors:
Individual factors
Antisocial behaviors and attitudes: Children that set fires usually do not only set fires but also commit other crimes or offenses including vandalism, violence, anger, etc.
Sensation seeking: Some children are attracted to fire-setting because they are bored and are looking for something to do.
Attention seeking: Lighting a fire becomes a way to "get back" at adults and, in turn, produce a response from the adults.
Lack of social skills: Some children simply have not been taught enough social skills. Many children and adolescents who have been discovered setting fires consider themselves to be "loners".
Lack of fire-safety skills and ignorance of danger: This is what drives most children who do not display signs of pyromania; they act out of natural curiosity and ignorance of fire's destructive power.
Learning difficulties.
Parental conflicts like separation, neglect, and abuse.
Sexual abuse.
Maltreatment.
Environmental factors
Poor supervision by parents or guardians.
Seeing adults use fire inappropriately at an early age.
Parental neglect.
Parents abusing drugs or acting violently: studies of this factor conclude that fire-setting is more likely in homes where the parents abuse the children.
Peer pressure.
Stressful life events: Fire-setting becomes a way to cope with crises.
Treatment
If a child is diagnosed with pyromania, there are treatment options despite the lack of scientific research on the genetic cause. Studies have shown that children with repeat cases of setting fires tend to respond better to a case-management approach rather than a medical approach.
The first crucial step for treatment should be parents sitting down with their child and having a one-on-one interview. The interview itself should try to determine which stresses on the family, methods of discipline, or other factors contribute to the child's uncontrollable desire to set fires. Some examples of treatment methods are problem-solving skills, anger management, communication skills, aggression replacement training, and cognitive restructuring.
The chances that a child will recover from pyromania are very slim according to recent studies, but there are ways to channel the child's desire to set fires to relieve tension—for example, alternate activities such as playing a sport or an instrument.
Another method of treatment is fire-safety education. At times, the best method of treatment is child counseling or a residential treatment center.
However, since cases of child pyromania are so rare, there has not been enough research done on the success of these treatment methods. The most common and effective treatment of pyromania in children is behavioral modification. The results usually range from fair to poor. Behavioral modification seems to work on children with pyromaniac tendencies about 95% of the time.
History
Early studies into the causes of pyromania come from Freudian psychoanalysis. Around 1850, there were many arguments about the causes of pyromania.
The two biggest sides of the argument were whether pyromania comes from a mental or genetic disorder or moral deficiency. Freud reasoned that fire-setting was an archaic desire to gain power over nature.
The first study done on fire-setting behavior in children was in 1940 and was credited to Helen Yarnall, who compared fire-setting to fears of castration in male children and said that by setting a fire, some young males feel that they have gained power over adults. This 1940 study also introduced the idea that a good predictor of violent behavior in adult life is fire-setting and cruelty towards animals as a child.
References
Further reading
External links
Operation Extinguish
Juvenile Firesetter Handbook
Prevent Youth Firesetting
Mental disorders diagnosed in childhood
Fire | Child pyromaniac | Chemistry | 1,733 |
19,089,444 | https://en.wikipedia.org/wiki/1988%20British%20International%20Helicopters%20Sikorsky%20S-61N%20crash | G-BEID was a Sikorsky S-61N helicopter of British International Helicopters which made a controlled ditching in the sea northeast of Sumburgh on 13 July 1988 following an engine fire. There were no fatalities.
Accident
The helicopter left the Safe Felicia semi-submersible oil platform in the Forties oilfield at 13:45 with 2 pilots and a full load of 19 passengers for the one hour flight to Sumburgh Airport on the Mainland of Shetland.
At 14:28 the co-pilot (who was flying) reported hearing a muffled bang which was also heard by some of the passengers, from the area of the No. 2 engine transmission. Shortly after, the No. 2 engine's fire warning lights came on. The pilot immediately began a descent and transmitted a distress call.
About 48 seconds after the noise, the No. 2 engine was shut down and the fire extinguisher triggered. The No. 1 engine fire warning then also illuminated, while passengers saw oil leaking from the cabin ceiling.
The pilot advised the passengers to prepare for an emergency ditching and took control of the aircraft. The floats were deployed and a gentle ditching was made about 3 minutes after the initial noise had been heard, by which time the helicopter's cabin had filled with smoke. All 21 occupants evacuated on to liferafts and were then winched up into a Search and Rescue helicopter. After a strong fire consumed most of the floating helicopter, the remains broke up and sank.
Investigation
A recovery operation was mounted using the DSV (diving support vessel) Stena Marianos, which arrived on site on 16 July 1988. The aft fuselage section was raised the following day and the forward section shortly after. The recovery operation had to be ended on 19 July, before the engines or transmission components were found, because the Stena Marianos had other commitments.
The recovery continued on 2 August using the DSV Norskald, and the engines, main rotor, and transmission were located and raised on 5 August.
Cause
It was concluded that the fire had occurred in the helicopter's main gearbox, probably resulting from the effects of a bearing failure in the No. 2 engine. A further factor was the lack of fire detection or suppression capability within the gearbox bay. The cause of the bearing failure could not be definitely established.
Safety recommendations
The AAIB made a list of 27 safety recommendations to the CAA. These addressed improvements in maintenance, early detection of problems, emergency escape equipment, documentation and training provisions, and firewall integrity. Most of these were accepted by the CAA.
Notes
References
Airliner accidents and incidents caused by mechanical failure
Aviation accidents and incidents in 1988
Aviation accidents and incidents in Scotland
Aviation in Shetland
1988 in aviation
Accidents and incidents involving the Sikorsky S-61
British International Helicopters accidents and incidents
July 1988 events in the United Kingdom | 1988 British International Helicopters Sikorsky S-61N crash | Materials_science | 572 |
24,146,601 | https://en.wikipedia.org/wiki/C22H28O5 | {{DISPLAYTITLE:C22H28O5}}
The molecular formula C22H28O5 may refer to:
Estradiol hemisuccinate (Estradiol succinate)
Isoprednidene
Meprednisone
16α-Methyl-11-oxoprednisolone
Prednylidene
Pyrethrin II | C22H28O5 | Chemistry | 78 |
2,710,775 | https://en.wikipedia.org/wiki/API-TC | API TC is a certification for two-stroke oils, awarded by the American Petroleum Institute. It is given after the product passes through stringent tests that determine the level of detergent performance, dispersion, and anti-oxidation. It is the only remaining, not revoked classification of the API Two-Cycle motor oil specifications (TA, TB, TC, TD). Being a very old standard itself, most currently produced 2T lubricants meet its specifications, even the lowest quality ones; current high-quality oils exceed them (often labeled "API TC+" although not based on actual measurements).
The more current JASO M345 or the international ISO two-cycle oil specifications are much better indicators of oil quality, with requirements based on modern two-stroke engines and environmental policies. API-TC has been removed from the API website.
A higher grade, TD, was considered similar to the National Marine Manufacturers association (NMMA) "TC-W" (water-cooled outboard engine) grade. The NMMA has now replaced this specification with the higher TC-W3 standard.
References
Lubricants | API-TC | Physics | 230 |
65,848,187 | https://en.wikipedia.org/wiki/Madagascar%20henipavirus | Madagascar henipavirus (MadV) is a poorly characterized henipavirus type. Currently it has only been detected serologically among Madagascan rousettes. High cross reactivity was observed with Hendra and Nipah henipaviruses.
References
Henipavirus | Madagascar henipavirus | Biology | 61 |
56,711,218 | https://en.wikipedia.org/wiki/Facebook%203D%20Posts | Facebook 3D Posts was a feature on the social networking website Facebook. It was first enabled on October 11, 2017 by introducing a new native 3D media type in Facebook News Feed. Initially the users could only post 3D objects from Oculus Medium and marker drawings from Spaces directly to Facebook as fully interactive 3D objects. The feature was available for desktops and mobile phones that support the underlying WebGL API.
On February 20, 2018 Facebook added support for the industry-standard glTF 2.0 file format for Facebook 3D posts. This allowed artists and creators to share 3D content on Facebook from a variety of sources. To make 3D Posts glTF 2.0 compliant, the support for textures, lighting, and physically based rendering techniques was implemented. 3D posts also supported unlit workflows for photogrammetry and stylized art.
Facebook has since stopped allowing users to share 3D objects.
Creating 3D Posts
There were four ways to get a 3D asset to appear in a Facebook Post:
Drag and drop an asset into Facebook's Post composer and publish it.
Share a link to a web page that has Facebook Open Graph Sharing metadata tags.
Share a local asset on an Android device using Android's native Sharing action.
Create a 3D Post programmatically with the 3D Posts API.
Tools for authoring content
GLB files (binary form of glTF) were required to be loaded in Facebook 3D posts. These files could be obtained by converting from other files formats such as FBX or non-binary glTF. GLB files could also be directly exported from a variety of 3D editors, such as Blender, Vectary, Autodesk 3ds Max (using Verge3D exporter), Autodesk Maya, Modo, Microsoft Paint 3D, Substance Painter and others.
References
Facebook
Social software
Software features
Internet properties established in 2017 | Facebook 3D Posts | Technology | 376 |
47,597,338 | https://en.wikipedia.org/wiki/Specific%20output | Specific output is a measure of internal combustion engine performance. It describes the efficiency of an engine in terms of the brake horsepower it outputs relative to its displacement. The measure enables the comparison of differently sized engines, and is usually expressed as kilowatts or horsepower per litre or per cubic inch. On average, forced induction engines out-perform naturally aspirated engines by this measure, primarily due to their increased volumetric efficiency.
See also
Power density
List of automotive superlatives
References
Engine technology | Specific output | Technology | 101 |
645,335 | https://en.wikipedia.org/wiki/Diffusion%20equation | The diffusion equation is a parabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). In mathematics, it is related to Markov processes, such as random walks, and applied in many other fields, such as materials science, information theory, and biophysics. The diffusion equation is a special case of the convection–diffusion equation when bulk velocity is zero. It is equivalent to the heat equation under some circumstances.
Statement
The equation is usually written as:

$$\frac{\partial \phi(\mathbf{r}, t)}{\partial t} = \nabla \cdot \big[ D(\phi, \mathbf{r})\, \nabla \phi(\mathbf{r}, t) \big],$$

where $\phi(\mathbf{r}, t)$ is the density of the diffusing material at location $\mathbf{r}$ and time $t$, $D(\phi, \mathbf{r})$ is the collective diffusion coefficient for density $\phi$ at location $\mathbf{r}$, and $\nabla$ represents the vector differential operator del. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear.
The equation above applies when the diffusion coefficient is isotropic; in the case of anisotropic diffusion, $D$ is a symmetric positive definite matrix, and the equation is written (for three-dimensional diffusion) as:

$$\frac{\partial \phi(\mathbf{r}, t)}{\partial t} = \sum_{i=1}^{3} \sum_{j=1}^{3} \frac{\partial}{\partial x_i} \left[ D_{ij}(\phi, \mathbf{r})\, \frac{\partial \phi(\mathbf{r}, t)}{\partial x_j} \right]$$
The diffusion equation has numerous analytic solutions.
If $D$ is constant, then the equation reduces to the following linear differential equation:

$$\frac{\partial \phi(\mathbf{r}, t)}{\partial t} = D \nabla^2 \phi(\mathbf{r}, t),$$

which is identical to the heat equation.
Historical origin
The particle diffusion equation was originally derived by Adolf Fick in 1855.
Derivation
The diffusion equation can be trivially derived from the continuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed:

$$\frac{\partial \phi}{\partial t} + \nabla \cdot \mathbf{j} = 0,$$

where $\mathbf{j}$ is the flux of the diffusing material. The diffusion equation can be obtained easily from this when combined with the phenomenological Fick's first law, which states that the flux of the diffusing material in any part of the system is proportional to the local density gradient:

$$\mathbf{j} = -D(\phi, \mathbf{r})\, \nabla \phi(\mathbf{r}, t).$$
If drift must be taken into account, the Fokker–Planck equation provides an appropriate generalization.
Discretization
The diffusion equation is continuous in both space and time. One may discretize space, time, or both space and time; each case arises in applications. Discretizing time alone just corresponds to taking time slices of the continuous system, and no new phenomena arise.
In discretizing space alone, the Green's function becomes the discrete Gaussian kernel, rather than the continuous Gaussian kernel. In discretizing both time and space, one obtains the random walk.
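To make the discretization concrete, here is a minimal Python sketch of the explicit forward-time, centered-space (FTCS) scheme for the isotropic 2D equation with constant $D$; the grid spacing, time step, and periodic boundaries are illustrative choices of mine, not prescribed by the article:

```python
import numpy as np

def diffuse_step(phi, D=1.0, dt=0.2, dx=1.0):
    """One explicit FTCS step of d(phi)/dt = D * laplacian(phi),
    using the 5-point Laplacian with periodic boundaries.
    Stability requires D*dt/dx**2 <= 1/4 in two dimensions."""
    lap = (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0)
           + np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1)
           - 4.0 * phi) / dx**2
    return phi + D * dt * lap

phi = np.zeros((64, 64))
phi[32, 32] = 1.0                 # a point source spreads into a Gaussian,
for _ in range(100):              # the discrete analogue of a random walk
    phi = diffuse_step(phi)
```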
Discretization in image processing
The product rule is used to rewrite the anisotropic tensor diffusion equation in standard discretization schemes, because direct discretization of the diffusion equation with only first-order spatial central differences leads to checkerboard artifacts. The rewritten diffusion equation used in image filtering is:

$$\frac{\partial \phi(\mathbf{r}, t)}{\partial t} = \mathrm{tr}\Big( D(\phi, \mathbf{r})\, \big( \nabla \nabla^{\mathsf{T}} \phi(\mathbf{r}, t) \big) \Big) + \nabla^{\mathsf{T}} D(\phi, \mathbf{r})\, \nabla \phi(\mathbf{r}, t),$$

where "tr" denotes the trace of the 2nd rank tensor and the superscript "T" denotes transpose; in image filtering, the $D(\phi, \mathbf{r})$ are symmetric matrices constructed from the eigenvectors of the image structure tensors. The spatial derivatives can then be approximated by two first-order and a second-order central finite difference. The resulting diffusion algorithm can be written as an image convolution with a varying kernel (stencil) of size 3 × 3 in 2D and 3 × 3 × 3 in 3D.
See also
Continuity equation
Heat equation
Self-similar solutions
Reaction-diffusion equation
Fokker–Planck equation
Fick's laws of diffusion
Maxwell–Stefan equation
Radiative transfer equation and diffusion theory for photon transport in biological tissue
Streamline diffusion
Numerical solution of the convection–diffusion equation
References
Further reading
Carslaw, H. S. and Jaeger, J. C. (1959). Conduction of Heat in Solids Oxford: Clarendon Press
Jacobs, M.H. (1935). Diffusion Processes Berlin/Heidelberg: Springer
Crank, J. (1956). The Mathematics of Diffusion Oxford: Clarendon Press
Mathews, Jon; Walker, Robert L. (1970). Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin,
Thambynayagam, R. K. M (2011). The Diffusion Handbook: Applied Solutions for Engineers. McGraw-Hill
Ghez, R. (1988). A Primer Of Diffusion Problems, Wiley
Ghez, R. (2001). Diffusion Phenomena. Long Island, NY, USA: Dover Publication Inc
Pekalski, A. (1994). Diffusion Processes: Experiment, Theory, Simulations, Springer
Bennett, T.D. (2013). Transport by Advection and Diffusion. John Wiley & Sons
Vogel, G. (2019). Adventure Diffusion Springer
Gillespie, D.T.; Seitaridou, E. (2013). Simple Brownian Diffusion, Oxford University Press
Nakicenovic, N.; Grübler, A. (1991). Diffusion of Technologies and Social Behavior, Springer
Michaud, G.; Alecian, G.; Richer, G. (2013). Atomic Diffusion in Stars, Springer
Stroock, D. W.; Varadhan, S.R.S. (2006). Multidimensional Diffusion Processes, Springer
Zhuoqun, W., Yin J., Li H., Zhao J., Jingxue Y., and Huilai L. (2001). Nonlinear diffusion equations, World Scientific
Shewmon, P. (1989). Diffusion in Solids, Wiley
Banks, R.B. (2010). Growth and diffusion phenomena, Springer
Roque-Malherbe, R.M.A. (2007). Adsorption and Diffusion in Nanoporous Materials, CRC Press
Cunningham, R. (1980). Diffusion in gases and porous media, Plenum
Pasquill, F., Smith, F.B. (1983). Atmospheric diffusion, Horwood
Ikeda, N., Watanabe, S. (1981). Stochastic Differential Equations and Diffusion Processes, Elsevier, Academic Press
Philibert, J., Laskar, A.L., Bocquet, J.L., Brebec, G., Monty, C. (1990). Diffusion in Materials, Springer Netherlands
Freedman, D. (1983). Brownian Motion and Diffusion, Springer-Verlag New York
Nagasawa, M. (1993). Schrödinger Equations and Diffusion Theory, Birkhäuser
Burgers, J.M. (1974). The Nonlinear Diffusion Equation: Asymptotic Solutions and Statistical Problems, Springer Netherlands
Ito, S. (1992). Diffusion Equations, American Mathematical Society
Krylov, N. V. (1994). Introduction to the Theory of Diffusion Processes, American Mathematical Society
Knight, F.B. (1981). Essentials of Brownian Motion and Diffusion, American Mathematical Society
Ibe, O.C. (2013). Elements of random walk and diffusion processes, Wiley
Dattagupta, S. (2013). Diffusion: Formalism and Applications, CRC Press
External links
Diffusion Calculator for Impurities & Dopants in Silicon
A tutorial on the theory behind and solution of the Diffusion Equation.
Classical and nanoscale diffusion (with figures and animations)
Diffusion
Partial differential equations
Parabolic partial differential equations
Functions of space and time
it:Leggi di Fick | Diffusion equation | Physics,Chemistry | 1,526 |
23,681,457 | https://en.wikipedia.org/wiki/NGC%2045 | NGC 45 is a low surface brightness spiral galaxy in the equatorial constellation of Cetus. It was discovered on 11 November 1835 by the English astronomer John Herschel. The galaxy is located at a distance of 22 million light years and is receding with a heliocentric radial velocity of . It is located in the vicinity of the Sculptor Group, but is most likely a background galaxy.
The morphological class of NGC 45 is SA(s)dm, indicating this is a spiral galaxy with no prominent inner bar (SA) or ring (s) feature. There is no central bulge to speak of. The galactic plane is inclined with respect to the line of sight from the Earth. Star formation is proceeding at a modest rate.
Unlike the Milky Way, NGC 45 has no clearly defined spiral arms, and its central bar and nucleus are also very small and distorted. NGC 45 thus does not have a galactic habitable zone. For the Milky Way, the galactic habitable zone is commonly believed to be an annulus with an outer radius of about 10 kiloparsecs and an inner radius close to the Galactic Center, both of which lack hard boundaries.
Astronomical Transients
Two astronomical transients have been observed in NGC 45. On 22 May 2018 a luminous red nova was detected and subsequently labeled AT2018bwo (type LRN, mag. 16.4). Luminous red novae are thought to be the result of stars merging. The progenitor of AT2018bwo was a yellow supergiant star. A few months later, on 3 November 2018, a luminous blue variable was discovered and designated AT2018htr (type LBV, mag. 17.5).
Gallery
See also
List of NGC objects (1–1000)
References
External links
Revised NGC Data for NGC 45
Unbarred spiral galaxies
NGC 0045
NGC 0045
0045
-04-01-21
004
000930
18351111
Discoveries by John Herschel
Magellanic spiral galaxies
Dwarf spiral galaxies | NGC 45 | Astronomy | 427 |
3,225,759 | https://en.wikipedia.org/wiki/Caryophyllene | Caryophyllene (), more formally (−)-β-caryophyllene (BCP), is a natural bicyclic sesquiterpene that occurs widely in nature. Caryophyllene is notable for having a cyclobutane ring, as well as a trans-double bond in a 9-membered ring, both rarities in nature.
Production
Caryophyllene can be produced synthetically, but it is invariably obtained from natural sources because it is widespread. It is a constituent of many essential oils, especially clove oil, the oil from the stems and flowers of Syzygium aromaticum (cloves), the essential oil of Cannabis sativa, copaiba, rosemary, and hops. It is usually found as a mixture with isocaryophyllene (the cis double bond isomer) and α-humulene (obsolete name: α-caryophyllene), a ring-opened isomer.
Caryophyllene is one of the chemical compounds that contributes to the aroma of black pepper.
Basic research
β-Caryophyllene is under basic research for its potential action as an agonist of the cannabinoid receptor type 2 (CB2 receptor). In other basic studies, β-caryophyllene has a binding affinity of Ki = 155 nM at the CB2 receptors.
β-Caryophyllene has the highest cannabinoid activity of these compounds; its ring-opened isomer humulene (α-caryophyllene) may modulate CB2 activity. For comparison, cannabinol binds to the CB2 receptor as a partial agonist with an affinity of Ki = 126.4 nM, while delta-9-tetrahydrocannabinol binds to the CB2 receptor as a partial agonist with an affinity of Ki = 36 nM.
Safety
Caryophyllene has been given generally recognized as safe (GRAS) designation by the FDA and is approved by the FDA for use as a food additive, typically for flavoring. Rats given up to 700 mg/kg daily for 90 days showed no significant toxic effects. Caryophyllene has an LD50 of 5,000 mg/kg in mice.
Metabolism and derivatives
14-Hydroxycaryophyllene oxide (C15H24O2) was isolated from the urine of rabbits treated with (−)-caryophyllene (C15H24). The X-ray crystal structure of 14-hydroxycaryophyllene (as its acetate derivative) has been reported.
The metabolism of caryophyllene progresses through (−)-caryophyllene oxide (C15H24O) since the latter compound also afforded 14-hydroxycaryophyllene (C15H24O) as a metabolite.
Caryophyllene (C15H24) → caryophyllene oxide (C15H24O) → 14-hydroxycaryophyllene (C15H24O) → 14-hydroxycaryophyllene oxide (C15H24O2).
Caryophyllene oxide, in which the alkene group of caryophyllene has become an epoxide, is the component responsible for cannabis identification by drug-sniffing dogs and is also an approved food additive, often as flavoring. Caryophyllene oxide may have negligible cannabinoid activity.
Natural sources
The approximate quantity of caryophyllene in the essential oil of each source is given in square brackets ([ ]):
Cannabis (Cannabis sativa) [3.8–37.5% of cannabis flower essential oil]
Black caraway (Carum nigrum) [7.8%]
Cloves (Syzygium aromaticum) [1.7–19.5% of clove bud essential oil]
Hops (Humulus lupulus) [5.1–14.5%]
Basil (Ocimum spp.) [5.3–10.5% O. gratissimum; 4.0–19.8% O. micranthum]
Oregano (Origanum vulgare) [4.9–15.7%]
Black pepper (Piper nigrum) [7.29%]
Lavender (Lavandula angustifolia) [4.62–7.55% of lavender oil]
Rosemary (Rosmarinus officinalis) [0.1–8.3%]
True cinnamon (Cinnamomum verum) [6.9–11.1%]
Malabathrum (Cinnamomum tamala) [25.3%]
Ylang-ylang (Cananga odorata) [3.1–10.7%]
Copaiba oil (Copaifera)
Biosynthesis
Caryophyllene is a common sesquiterpene among plant species. It is biosynthesized from the common terpene precursors dimethylallyl pyrophosphate (DMAPP) and isopentenyl pyrophosphate (IPP). First, single units of DMAPP and IPP are reacted via an SN1-type reaction with the loss of pyrophosphate, catalyzed by the enzyme GPPS2, to form geranyl pyrophosphate (GPP). This further reacts with a second unit of IPP, also via an SN1-type reaction catalyzed by the enzyme IspA, to form farnesyl pyrophosphate (FPP). Finally, FPP undergoes QHS1 enzyme-catalyzed intramolecular cyclization to form caryophyllene.
Compendial status
Food Chemicals Codex
Further reading
Notes and references
Flavors
Cannabinoids
Sesquiterpenes
Alkene derivatives
Hydrocarbons
CB2 receptor agonists
Cyclobutanes
Bicyclic compounds | Caryophyllene | Chemistry | 1,241 |
60,998,202 | https://en.wikipedia.org/wiki/Estradiol%20undecylate/norethisterone%20enanthate | Estradiol undecylate/norethisterone enanthate (EU/NETE) is a combination medication of estradiol undecylate (EU), an estrogen, and norethisterone enanthate (NETE), a progestin, which was developed by Schering for potential use as a combined injectable contraceptive in women but was ultimately never marketed. It contained 5 to 10 mg EU and 50 to 70 mg NETE in oil solution and was intended for use by intramuscular injection at regular intervals. Although never commercialized, EU/NETE was found to be effective and well tolerated.
See also
Polyestradiol phosphate/medroxyprogesterone acetate
List of combined sex-hormonal preparations § Estrogens and progestogens
References
Abandoned drugs
Combined estrogen–progestogen formulations
Combined injectable contraceptives | Estradiol undecylate/norethisterone enanthate | Chemistry | 192 |
38,707,099 | https://en.wikipedia.org/wiki/Truncated%20order-7%20heptagonal%20tiling | In geometry, the truncated order-7 heptagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1{7,7}, constructed from one heptagons and two tetrakaidecagons around every vertex.
Related tilings
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Heptagonal tilings
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-7 tilings
Truncated tilings | Truncated order-7 heptagonal tiling | Physics | 188 |
4,936,662 | https://en.wikipedia.org/wiki/Vacant%20niche | A vacant niche or empty niche is an ecological niche in a particular ecosystem that is not occupied by a particular species. The issue of what exactly defines a vacant niche and whether they exist in ecosystems is controversial. The subject is intimately tied into a much broader debate on whether ecosystems can reach equilibrium, where they could theoretically become maximally saturated with species. Given that saturation is a measure of the number of species per resource axis per ecosystem, the question becomes: is it useful to define unused resource clusters as niche 'vacancies'?
History of the concept
Whether vacant niches are permissible has been both confirmed and denied as the definition of a niche has changed over time. In the framework of Grinnell (1917), the species niche was largely equivalent to its habitat, such that a niche vacancy could be looked upon as a habitat vacancy. The Eltonian framework considered the niche to be equivalent to a species position in a trophic web, or food chain, and in this respect there is always going to be a vacant niche at the top predator level. Whether this position gets filled depends upon the ecological efficiency of the species filling it however. The concept of the "vacant" or "empty niche" has been used regularly in the scientific literature.
The Hutchinsonian niche framework, on the other hand, directly precludes the possibility of there being vacant niches. Hutchinson defined the niche as an n-dimensional hyper-volume whose dimensions correspond to resource gradients over which species are distributed in a unimodal fashion. In this we see that the operational definition of his niche rests on the fact that a species is needed in order to rationally define a niche in the first place. This fact did not stop Hutchinson from making statements inconsistent with it, such as: “The question raised by cases like this is whether the three Nilghiri Corixinae fill all the available niches ... or whether there are really empty niches. ... The rapid spread of introduced species often gives evidence of empty niches, but such rapid spread in many instances has taken place in disturbed areas.”
Definitions
The most notable definition of a vacant niche is that of the ecologist K. Rohde, who suggested that a vacant niche can be defined as the possibility that an ecosystem or habitat could support more species than are present at a particular point in time, because many opportunities are not used by potentially existing species.
Potential causes of vacant niches
Vacant niches could potentially have several causes.
Radical disturbances in a habitat: for example, droughts or forest fires can partially or completely destroy the flora and fauna. However, in such cases species suited to the habitat usually survive in the neighbourhood and colonize the vacated niches, leading to a relatively fast re-establishment of the original conditions.
Radical and long-lasting changes in the environment, such as ice ages.
Evolutionary contingencies: suitable species did not evolve for usually unknown reasons, or niche segregation between pre-existing species created a novel niche vacancy.
Demonstration of vacant niches
Vacant niches can best be demonstrated by considering the spatial component of niches in simple habitats. For example, Lawton and collaborators compared the insect fauna of the bracken Pteridium aquilinum, a widely distributed species, in different habitats and geographical regions and found vastly differing numbers of insect species. They concluded that many niches remain vacant.
Rohde and collaborators have shown that the number of ectoparasitic species on the gills of different species of marine fishes varies from 0 to about 30, even when fish of similar size and from similar habitats are compared. Assuming that the host species with the largest number of parasite species has the largest possible number of parasite species, only about 16% of all niches are occupied. However, the maximum may well be greater, since the possibility cannot be excluded that even on fish with a rich parasite fauna, more species could be accommodated. Using similar reasoning, Walker and Valentine (1984) estimated that 12-54% of niches for marine invertebrates are empty.
The groundbreaking theoretical investigations of Kauffman (1993) and Wolfram (2002) also suggest the existence of a vast number of vacant niches. Using different approaches, both have shown that species rarely if ever reach global adaptive optima. Rather, they get trapped in local optima from which they cannot escape, i.e., they are not perfectly adapted. As the number of potential local optima is almost infinite, the niche space is largely unsaturated and species have little opportunity for interspecific competition. Kauffman writes (p. 19): “...many conceivable useful phenotypes do not exist” and (p. 218): “Landscapes are rugged and multipeaked. Adaptive processes typically become trapped on such optima”.
The packing rules can be used as a measure of the filling of niche space. They apply to savanna plants and large herbivorous mammals, but not to all the parasite species examined so far. It seems likely that they do not apply to most animal groups. In other words, most species are not densely packed: many niches remain empty.
That niche space may not be saturated is also shown by introduced pest species. Such species lose, almost without exception, all or many of their parasites. Species that could occupy the vacant niches either do not exist or, if they exist, cannot adapt to these niches.
The diversity of marine benthos, i.e., the organisms living near the seabed, has increased from the Cambrian to the Recent, though interrupted by some collapses and plateaus. Furthermore, there is no evidence to suggest that saturation has been reached.
Consequences of the nonsaturation of niche space
The view that niche space is largely or completely saturated with species is widespread. It is thought that new species are accommodated mainly by subdivision of niches occupied by previously existing species, although an increase in diversity by colonization of large empty living spaces (such as land in the geologic past) or by the formation of new baupläne also occurs. It is also recognized that many populations never completely reach a climax state (i.e., they may come close to an equilibrium but never quite reach it). However, altogether the view prevails that individuals and species are densely packed and that interspecific competition is of paramount significance. According to this view, nonequilibria are generally caused by environmental disturbances.
However, many recent studies support the view that niche space is largely unsaturated, i.e. that numerous vacant niches exist. As a consequence, competition between species is not as important as usually assumed. Nonequilibria are caused not only by environmental disturbances, but are widespread because of nonsaturation of niche space. Newly evolved species are absorbed into empty niche space, that is, niches occupied by existing species do not necessarily have to shrink.
Relative frequency of vacant niches in various groups of animals and plants
Available evidence suggests that vacant niches are more common in some groups than in others. Using SES values (standardized effect sizes), which can serve as approximate predictors of the filling of niche space, Gotelli and Rohde (2002) showed that SES values are high for large and vagile species or for those occurring at high population densities, and low for animal species that occur at low population densities and/or are of small body size and have little vagility. In other words, more vacant niches can be expected for the latter.
Criticisms of the concept
Not all researchers accept the concept of vacant niches. If one defines a niche as a property of a species, then a niche does not exist if no species is present. In other words, the term appears "illogical". However, some authors who have contributed most to the formulation of the modern niche concept (Hutchinson, Elton) apparently saw no difficulties in using the term. If a niche is defined as the interrelationship of a species with all the biotic and abiotic factors affecting it, there is no reason not to admit the possibility of additional potential interrelationships. So it seems logical to refer to vacant niches.
Furthermore, it seems that authors most critical of the concept "vacant niche" really are critical of the view that niche space is largely empty and can easily absorb additional species. They instead adhere to the view that communities are usually in equilibrium (or at least close to it), resulting in a continual strong competition for resources. But many recent studies, some empirical, some theoretical, have provided support for the alternate view that nonequilibrium conditions are widespread.
In the German literature, an alternative term for vacant niches has found some acceptance: freie ökologische Lizenz (free ecological license). It has been argued that this conceptualization has the disadvantage that it does not convey immediately and easily what is meant; furthermore, the concept does not correspond exactly to the term "vacant niche". The usefulness of a term should be assessed on the basis of its understandability and its capacity to promote future research. The term "vacant niche" appears to fulfill these requirements.
See also
Effective evolutionary time
Niche segregation
References
Ecology
Evolutionary biology | Vacant niche | Biology | 1,897 |
49,789,079 | https://en.wikipedia.org/wiki/Small%20Maf | Small Maf (musculoaponeurotic fibrosarcoma) proteins are basic region leucine zipper-type transcription factors that can bind to DNA and regulate gene expression. There are three small Maf (sMaf) proteins, namely MafF, MafG, and MafK, in vertebrates. HUGO Gene Nomenclature Committee (HGNC)-approved gene names of MAFF, MAFG and MAFK are “v-maf avian musculoaponeurotic fibrosarcoma oncogene homolog F, G, and K”, respectively.
Through the leucine zipper structures, sMafs form homodimers by themselves and heterodimers with other specific bZIP transcription factors, such as transcription factors of the CNC (cap 'n' collar) and Bach families. Because CNC and Bach proteins cannot bind to DNA by themselves, sMafs are indispensable partners of the CNC and Bach families of transcription factors. Through interactions with these transcription factors, sMafs actively participate in transcriptional activation or repression depending on the nature of the heterodimeric partners.
Subtypes
The following genes encode small Maf proteins:
MAFF (Human), Maff (Mouse), maff (formerly maft) (Zebrafish)
MAFG (Human), Mafg (Mouse), mafg (Zebrafish)
MAFK (Human), Mafk (Mouse), mafk (Zebrafish)
History and discovery
sMaf proteins were identified as members of the Maf family transcription factors. The Maf family is divided into two subfamilies, as follows: the large Maf subfamily (c-Maf, MafA, MafB, and NRL); and the small Maf subfamily (MafF, MafG and MafK) (Fig. 1). The first member of the Maf family is c-Maf, which was cloned as a cellular counterpart of the v-Maf oncogene isolated from avian musculoaponeurotic fibrosarcoma. The MafF, MafG, and MafK genes were later isolated. Because MafF, MafG and MafK are well-conserved 18 kDa proteins that lack a transcriptional activation domain, they are classified into the small Maf subfamily, which is structurally and functionally distinct from the large Maf subfamily.
Gene structure and regulation
The three sMaf genes are widely expressed in various cell types and tissues under differential transcriptional regulation. In the mouse, each sMaf gene harbors multiple first exons, which partly contribute to their tissue-specific or stimulus-specific expression patterns. Human MAFF is induced by proinflammatory cytokines. The mouse Mafg gene is induced by oxidative stresses (e.g. reactive oxygen species and electrophilic compounds) or the presence of bile acids. The mouse Mafk gene is under the regulation of GATA factors (GATA-1 and GATA-2 in hematopoietic tissues; GATA-4 and GATA-6 in cardiac tissues).
Protein structure
All members of the Maf family, including sMafs, have a bZIP structure that consists of the basic region for DNA binding and the leucine zipper structure for dimer formation (Fig. 2). The basic region of each Maf family protein contains a tyrosine residue, which is critical for the unique DNA-binding modes of these proteins (see below for details). In addition, each Maf family protein possesses an extended homology region (EHR), which contributes to stable DNA binding. The C-terminal region of sMaf includes a region required for its proper subnuclear localization. Two modifications have been reported for MafG: SUMOylation through a SUMOylation motif in the N-terminal region, and phosphorylation through an ERK phosphorylation site in the C-terminal region.
Function
sMaf proteins form homodimers by themselves and heterodimers with two other bZIP families of transcription factors, namely CNC (cap 'n' collar) proteins (p45 NF-E2 (NFE2), Nrf1 (NFE2L1), Nrf2 (NFE2L2), and Nrf3 (NFE2L3) – not to be confused with Nuclear Respiratory factors) and Bach proteins (Bach1 and Bach2). Because these proteins cannot bind DNA by themselves, sMaf proteins are indispensable partner molecules of the CNC and Bach transcription factors.
sMaf homodimers bind to a palindromic DNA sequence called the Maf recognition element (MARE: TGCTGACTCAGCA) and its related sequences. Structural analyses have demonstrated that the basic region of a Maf factor recognizes the flanking GC sequences. By contrast, CNC-sMaf or Bach-sMaf heterodimers preferentially bind to DNA sequences (RTGA(C/G)NNNGC: R=A or G) that are slightly different from MARE (Fig. 3). The latter DNA sequences have been recognized as antioxidant/electrophile response elements or NF-E2-binding motifs, to which Nrf2-sMaf heterodimers and p45 NF-E2-sMaf heterodimers bind, respectively. It has been proposed that the latter sequences are classified as CNC-sMaf-binding elements (CsMBEs).
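A small script (an illustrative sketch, not from the source) shows in what sense the 13-bp MARE core is palindromic: an odd-length site cannot be perfectly self-complementary, and the sequence matches its own reverse complement at every position except the central base.

```python
def revcomp(seq):
    # Reverse complement of a DNA string.
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

mare = "TGCTGACTCAGCA"  # the T-MARE core cited above
rc = revcomp(mare)
matches = sum(a == b for a, b in zip(mare, rc))
print(rc)                                                     # TGCTGAGTCAGCA
print(f"{matches}/{len(mare)} positions self-complementary")  # 12/13
```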
It has also been reported that sMafs form heterodimers with other bZIP transcription factors, such as c-Jun and c-Fos. However, the biological significance of these heterodimers remains unknown.
sMaf homodimer
Because sMafs lack any canonical transcriptional activation domain, sMaf homodimers act as negative regulators. Overexpression of MafG is known to inhibit proplatelet formation, which is thought to reflect a process of platelet production. SUMOylation is required for MafG homodimer-mediated transcriptional repression.
p45 NF-E2-sMaf heterodimer
The p45 NF-E2-sMaf heterodimers are critical for platelet production. Knockout mouse studies have shown that MafG knockout mice show mild thrombocytopenia, whereas MafG and MafK double mutant mice show severe thrombocytopenia. Similar results were also observed in p45 NF-E2 knockout mice. The p45 NF-E2-sMaf heterodimer regulates genes responsible for platelet production and function.
Nrf1-sMaf heterodimer
The Nrf1-sMaf heterodimers are critical for neuronal homeostasis. Knockout mouse studies have shown that Mafg knockout mice display mild ataxia. Mafg and Mafk mutant mice (Mafg−/−::Mafk+/−) show more severe ataxia with progressive neuronal degeneration. Similar results have also been observed in Nrf1 central nervous system-specific knockout mice. The Nrf1-sMaf heterodimers regulate proteasomal and metabolism-related genes.
Nrf2-sMaf heterodimer
The Nrf2-sMaf heterodimers are critical for the oxidative and electrophilic stress response. Nrf2 is known as a master regulator of antioxidant and xenobiotic-metabolizing enzyme genes. Induction of these cytoprotective genes is impaired in Nrf2 knockout mice. While MafG, MafK and MafF triple knockout mice die at the embryonic stage, cultured cells derived from triple knockout embryos fail to induce Nrf2-dependent cytoprotective genes in response to stimuli.
Bach1-sMaf heterodimer
The Bach1-sMaf heterodimer is critical for heme metabolism. Knockout mouse studies showed that heme oxygenase-1 gene expression is upregulated in Bach1 knockout mice. Similar results were also observed in MafG and MafK double mutant mice (Mafg−/−::Mafk+/−). These data show that the Bach1-sMaf heterodimer negatively regulates the heme oxygenase-1 gene.
Bach2-sMaf heterodimer
The Bach2-sMaf heterodimers are critical for B cell differentiation. Bach2 knockout mice studies have demonstrated that Bach2 is required for class switching and somatic hypermutation of immunoglobulin genes. However, these phenotypes have not been examined in sMaf knockout mice.
sMaf function with compound or unknown partners
MafG and MafK double mutant mice (Mafg−/−::Mafk+/−) have cataracts. However, the interaction of CNC partner(s) with sMafs in this context remains undetermined. MafG, MafK and MafF triple knockout mice die during embryogenesis, demonstrating that sMafs are indispensable for embryonic development. Because Nrf1 and Nrf2 double mutant mice also die during embryogenesis, the loss of function of both Nrf1-sMaf and Nrf2-sMaf may contribute to the lethality.
Disease association
sMafs have been suggested to be involved in various diseases as heterodimeric partners of CNC and Bach proteins. Because Nrf2-sMaf heterodimers regulate a battery of antioxidant and xenobiotic-metabolizing enzymes, impaired function of sMafs is expected to make cells vulnerable to various stresses and increase the risk of various diseases, such as cancers. SNPs associated with cancer onset have been reported in the MAFF and MAFG genes. In addition, Nrf2 is known to be critical for anti-inflammatory responses. Thus, sMaf insufficiencies are expected to result in prolonged inflammation that can cause diseases, such as neurodegeneration and atherosclerosis.
Conversely, sMafs also appear to contribute to cancer malignancy. Certain cancers contain somatic mutations in NRF2(NFE2L2) or KEAP1 that cause constitutive activation of Nrf2 and promote cell proliferation. It has also been reported that the Bach1-MafG heterodimer contributes to cancer malignancy by repressing tumor suppressor genes. Thus, as partners of Nrf2 and Bach1, sMafs are expected to play critical roles in cancer cells.
References
Proteins | Small Maf | Chemistry | 2,214 |
2,296,389 | https://en.wikipedia.org/wiki/223%20%28number%29 | 223 (two hundred [and] twenty-three) is the natural number following 222 and preceding 224.
In mathematics
223 is:
a prime number,
a lucky prime,
a left-truncatable prime, and a left-and-right-truncatable prime.
Among the 720 permutations of the numbers from 1 to 6, exactly 223 of them have the property that at least one of the numbers is fixed in place by the permutation and the numbers less than it and greater than it are separately permuted among themselves.
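This count is easy to verify by brute force. The following minimal Python sketch (ours, not from the source) enumerates all 720 permutations and tests the stated property, often described as having a strong fixed point:

```python
from itertools import permutations

def has_strong_fixed_point(p):
    # p is a tuple where p[k-1] is the image of k.
    # A strong fixed point is a k with p(k) = k such that every entry
    # before position k is below k and every entry after it is above k.
    n = len(p)
    for k in range(1, n + 1):
        if (p[k - 1] == k
                and all(x < k for x in p[:k - 1])
                and all(x > k for x in p[k:])):
            return True
    return False

count = sum(has_strong_fixed_point(p) for p in permutations(range(1, 7)))
print(count)  # 223
```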
In connection with Waring's problem, 223 requires the maximum number of terms (37 terms) when expressed as a sum of positive fifth powers, and is the only number that requires that many terms.
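This too can be checked with a small dynamic program (a sketch under our naming, not from the source). Since 3^5 = 243 > 223, only the fifth powers 1 and 32 are available, and the best decomposition is 223 = 6×32 + 31×1, i.e. 37 terms:

```python
def min_fifth_powers(n):
    # Minimal number of positive fifth powers summing to n (coin-change DP).
    powers = [k ** 5 for k in range(1, n + 1) if k ** 5 <= n]
    best = [0] + [float("inf")] * n
    for m in range(1, n + 1):
        best[m] = 1 + min(best[m - p] for p in powers if p <= m)
    return best[n]

print(min_fifth_powers(223))  # 37, the maximum any positive integer requires
```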
See also
The years AD 223 and 223 BC
References
Integers | 223 (number) | Mathematics | 165 |
3,773,230 | https://en.wikipedia.org/wiki/Normalized%20difference%20vegetation%20index | The normalized difference vegetation index (NDVI) is a widely-used metric for quantifying the health and density of vegetation using sensor data. It is calculated from spectrometric data at two specific bands: red and near-infrared. The spectrometric data is usually sourced from remote sensors, such as satellites.
The metric is popular because it correlates well with the true state of vegetation on the ground and is easy to interpret: NDVI takes values between -1 and 1. An area with little or nothing growing in it will have an NDVI close to zero, and NDVI increases with vegetation density and health, approaching one for dense, healthy vegetation. Negative values suggest the absence of dry land; open water typically yields strongly negative NDVI.
Brief history
The exploration of outer space started in earnest with the launch of Sputnik 1 by the Soviet Union on 4 October 1957. This was the first man-made satellite orbiting the Earth. Subsequent successful launches, both in the Soviet Union (e.g., the Sputnik and Cosmos programs), and in the U.S. (e.g., the Explorer program), quickly led to the design and operation of dedicated meteorological satellites. These are orbiting platforms carrying instruments specially designed to observe the Earth's atmosphere and surface with a view to improving weather forecasting. Starting in 1960, the TIROS series of satellites carried television cameras and radiometers. This was later (1964 onwards) followed by the Nimbus satellites and the family of Advanced Very High Resolution Radiometer instruments on board the National Oceanic and Atmospheric Administration (NOAA) platforms. These instruments measure the reflectance of the planet in red and near-infrared bands, as well as in the thermal infrared. In parallel, NASA developed the Earth Resources Technology Satellite (ERTS), which became the precursor to the Landsat program. These early sensors had minimal spectral resolution, but tended to include bands in the red and near-infrared, which are useful to distinguish vegetation and clouds, amongst other targets.
With the launch of the first ERTS satellite – which was soon to be renamed Landsat 1 – on July 23, 1972, with its MultiSpectral Scanner (MSS), NASA funded a number of investigations to determine its capabilities for Earth remote sensing. One of those early studies was directed toward examining the spring vegetation green-up and subsequent summer and fall dry-down (the so-called “vernal advancement and retrogradation”) throughout the north to south expanse of the Great Plains region of the central U.S. This region covered a wide range of latitudes from the southern tip of Texas to the U.S.-Canada border, which resulted in a wide range of solar zenith angles at the time of the satellite observations.
The researchers for this Great Plains study (PhD student Donald Deering and his advisor Dr. Robert Hass) found that their ability to correlate, or quantify, the biophysical characteristics of the rangeland vegetation of this region from the satellite spectral signals was confounded by these differences in solar zenith angle across this strong latitudinal gradient. With the assistance of a resident mathematician (Dr. John Schell), they studied solutions to this dilemma and subsequently developed the ratio of the difference of the red and infrared radiances over their sum as a means to adjust for or “normalize” the effects of the solar zenith angle. Originally, they called this ratio the “Vegetation Index” (and another variant, the square-root transformation of the difference-sum ratio, the “Transformed Vegetation Index”); but as several other remote sensing researchers were identifying the simple red/infrared ratio and other spectral ratios as the “vegetation index,” they eventually began to identify the difference/sum ratio formulation as the normalized difference vegetation index. The earliest reported use of NDVI in the Great Plains study was in 1973 by Rouse et al. (Dr. John Rouse was the Director of the Remote Sensing Center of Texas A&M University where the Great Plains study was conducted). However, they were preceded in formulating a normalized difference spectral index by Kriegler et al. in 1969. Soon after the launch of ERTS-1 (Landsat-1), Compton Tucker of NASA's Goddard Space Flight Center produced a series of early scientific journal articles describing uses of the NDVI.
Thus, NDVI was one of the most successful of many attempts to simply and quickly identify vegetated areas and their "condition," and it remains the most well-known and used index to detect live green plant canopies in multispectral remote sensing data. Once the feasibility to detect vegetation had been demonstrated, users tended to also use the NDVI to quantify the photosynthetic capacity of plant canopies. This, however, can be a rather more complex undertaking if not done properly, as is discussed below.
Rationale
Live green plants absorb solar radiation in the photosynthetically active radiation (PAR) spectral region, which they use as a source of energy in the process of photosynthesis. Leaf cells have also evolved to re-emit solar radiation in the near-infrared spectral region (which carries approximately half of the total incoming solar energy), because the photon energy at wavelengths longer than about 700 nanometers is too low to synthesize organic molecules. A strong absorption at these wavelengths would only result in overheating the plant and possibly damaging the tissues. Hence, live green plants appear relatively dark in the PAR and relatively bright in the near-infrared. By contrast, clouds and snow tend to be rather bright in the red (as well as other visible wavelengths) and quite dark in the near-infrared.
The pigment in plant leaves, chlorophyll, strongly absorbs visible light (from 400 to 700 nm) for use in photosynthesis. The cell structure of the leaves, on the other hand, strongly reflects near-infrared light (from 700 to 1100 nm). The more leaves a plant has, the more these wavelengths of light are affected.
Since early instruments of Earth Observation, such as NASA's ERTS and NOAA's AVHRR, acquired data in visible and near-infrared, it was natural to exploit the strong differences in plant reflectance to determine their spatial distribution in these satellite images.
The NDVI is calculated from these individual measurements as follows:

NDVI = (NIR - Red) / (NIR + Red)
where Red and NIR stand for the spectral reflectance measurements acquired in the red (visible) and near-infrared regions, respectively. These spectral reflectances are themselves ratios of the reflected radiation to the incoming radiation in each spectral band individually, hence they take on values between 0 and 1. By design, the NDVI itself thus varies between -1 and +1. NDVI is functionally, but not linearly, equivalent to the simple infrared/red ratio (NIR/VIS). The advantage of NDVI over a simple infrared/red ratio is therefore generally limited to any possible linearity of its functional relationship with vegetation properties (e.g. biomass). The simple ratio (unlike NDVI) is always positive, which may have practical advantages, but it also has a mathematically infinite range (0 to infinity), which can be a practical disadvantage as compared to NDVI. Also in this regard, note that the VIS term in the numerator of NDVI only scales the result, thereby creating negative values. NDVI is functionally and linearly equivalent to the ratio NIR / (NIR+VIS), which ranges from 0 to 1 and is thus never negative nor limitless in range. But the most important concept in the understanding of the NDVI algebraic formula is that, despite its name, it is a transformation of a spectral ratio (NIR/VIS), and it has no functional relationship to a spectral difference (NIR-VIS).
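As a worked illustration (a minimal sketch, not part of the original text; the function name is ours), the index is straightforward to compute over reflectance arrays:

```python
import numpy as np

def ndvi(red, nir):
    # NDVI = (NIR - Red) / (NIR + Red), for reflectances in [0, 1].
    # A small epsilon guards against division by zero over dark pixels.
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

print(ndvi(0.05, 0.50))  # ~0.82: dense vegetation (low red, high NIR)
print(ndvi(0.20, 0.30))  # 0.2: bare soil (small NIR-red contrast)
```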
In general, if there is much more reflected radiation in near-infrared wavelengths than in visible wavelengths, then the vegetation in that pixel is likely to be dense and may contain some type of forest. Subsequent work has shown that the NDVI is directly related to the photosynthetic capacity and hence energy absorption of plant canopies. Although the index can take negative values, even in densely populated urban areas the NDVI usually has a (small) positive value. Negative values are more likely to be observed in the atmosphere and some specific materials.
Performance and limitations
It can be seen from its mathematical definition that the NDVI of an area containing a dense vegetation canopy will tend to positive values (say 0.3 to 0.8) while clouds and snow fields will be characterized by negative values of this index. Other targets on Earth visible from space include:
free standing water (e.g., oceans, seas, lakes and rivers) which have a rather low reflectance in both spectral bands (at least away from shores) and thus result in very low positive or even slightly negative NDVI values,
soils which generally exhibit a near-infrared spectral reflectance somewhat larger than the red, and thus tend to also generate rather small positive NDVI values (say 0.1 to 0.2).
In addition to the simplicity of the algorithm and its capacity to broadly distinguish vegetated areas from other surface types, the NDVI also has the advantage of compressing the size of the data to be manipulated by a factor of 2 (or more), since it replaces the two spectral bands with a single new field (possibly coded on 8 bits instead of the 10 or more bits of the original data).
The NDVI has been widely used in applications for which it was not originally designed. Using the NDVI for quantitative assessments (as opposed to qualitative surveys as indicated above) raises a number of issues that may seriously limit the actual usefulness of this index if they are not properly addressed. The following subsections review some of these issues.
Mathematically, the sum and the difference of the two spectral channels contain the same information as the original data, but the difference alone (or the normalized difference) carries only part of the initial information. Whether the missing information is relevant or valuable is for the user to judge, but it is important to understand that an NDVI product carries only a fraction of the information available in the original spectral reflectance data.
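A concrete illustration of this loss (our example, not the source's): any two band pairs with the same NIR/Red ratio collapse to the same NDVI, so scenes with very different absolute reflectances become indistinguishable after the transformation.

```python
# Three band pairs with the same NIR/Red ratio (2.0) but different
# absolute reflectances all yield the identical NDVI of 1/3.
pairs = [(0.1, 0.2), (0.2, 0.4), (0.3, 0.6)]
for red, nir in pairs:
    print(red, nir, round((nir - red) / (nir + red), 4))  # 0.3333 each time
```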
Users of NDVI have tended to estimate a large number of vegetation properties from the value of this index. Typical examples include the Leaf Area Index, biomass, chlorophyll concentration in leaves, plant productivity, fractional vegetation cover, accumulated rainfall, etc. Such relations are often derived by correlating space-derived NDVI values with ground-measured values of these variables. This approach raises further issues related to the spatial scale associated with the measurements, as satellite sensors always measure radiation quantities for areas substantially larger than those sampled by field instruments. Furthermore, it is of course illogical to claim that all these relations hold at once, because that would imply that all of these environmental properties would be directly and unequivocally related between themselves.
The reflectance measurements should be relative to the same area and be acquired simultaneously. This may not be easy to achieve with instruments that acquire different spectral channels through different cameras or focal planes. Mis-registration of the spectral images may lead to substantial errors and unusable results.
Also, the calculation of the NDVI value turns out to be sensitive to a number of perturbing factors including
Atmospheric effects: The actual composition of the atmosphere (in particular with respect to water vapor and aerosols) can significantly affect the measurements made in space. Hence, the latter may be misinterpreted if these effects are not properly taken into account (as is the case when the NDVI is calculated directly on the basis of raw measurements).
Clouds: Deep (optically thick) clouds may be quite noticeable in satellite imagery and yield characteristic NDVI values that ease their screening. However, thin clouds (such as the ubiquitous cirrus), or small clouds with typical linear dimensions smaller than the diameter of the area actually sampled by the sensors, can significantly contaminate the measurements. Similarly, cloud shadows in areas that appear clear can affect NDVI values and lead to misinterpretations. These considerations are minimized by forming composite images from daily or near-daily images. Composite NDVI images have led to a large number of new vegetation applications where the NDVI or photosynthetic capacity varies over time.
Soil effects: Soils tend to darken when wet, so that their reflectance is a direct function of water content. If the spectral response to moistening is not exactly the same in the two spectral bands, the NDVI of an area can appear to change as a result of soil moisture changes (precipitation or evaporation) and not because of vegetation changes.
Anisotropic effects: All surfaces (whether natural or man-made) reflect light differently in different directions, and this form of anisotropy is generally spectrally dependent, even if the general tendency may be similar in these two spectral bands. As a result, the value of NDVI may depend on the particular anisotropy of the target and on the angular geometry of illumination and observation at the time of the measurements, and hence on the position of the target of interest within the swath of the instrument or the time of passage of the satellite over the site. This is particularly crucial in analyzing AVHRR data since the orbit of the NOAA platforms tended to drift in time. At the same time, the use of composite NDVI images minimizes these considerations and has led to global time series NDVI data sets spanning more than 25 years.
Spectral effects: Since each sensor has its own characteristics and performances, in particular with respect to the position, width and shape of the spectral bands, a single formula like NDVI yields different results when applied to the measurements acquired by different instruments.
Modifiable areal unit problem (MAUP): NDVI is ubiquitous as an index of vegetation, and the mapping and monitoring of vegetation take place via ‘big data’ image processing systems. These systems may use pixel- or object-based algorithms to assess vegetation health, evapotranspiration, and other ecosystem functions. When a category of vegetation consists of multiple pixels, the ‘mean’ can be computed either as the mean of the per-pixel NDVI values (pixel-based) or as the NDVI of the mean Red and mean NIR values across all the pixels (object-based); the two aggregations generally differ, as the sketch after this list illustrates. NDVI can suffer from the intractable problems associated with MAUP. In particular, a recent study demonstrated that when mean NDVI values are estimated for certain buffer distances, the scale of the analysis can influence NDVI measures due to scale effects associated with MAUP. Another study demonstrated that MAUP does not significantly affect pure vegetation pixels in an urban environment. A modification known as MAUI-NDVI specifically addresses the problem.
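A two-pixel toy example (a hedged sketch with made-up reflectance values) makes the aggregation distinction concrete: averaging per-pixel NDVI values and computing NDVI from band averages generally disagree.

```python
import numpy as np

red = np.array([0.10, 0.40])
nir = np.array([0.50, 0.45])

per_pixel = (nir - red) / (nir + red)      # NDVI of each pixel
pixel_based = per_pixel.mean()             # mean of the per-pixel NDVIs
object_based = (nir.mean() - red.mean()) / (nir.mean() + red.mean())

print(round(pixel_based, 3))   # 0.363
print(round(object_based, 3))  # 0.310 -- the two orders of averaging differ
```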
A number of derivatives and alternatives to NDVI have been proposed in the scientific literature to address these limitations, including the Perpendicular Vegetation Index, the Soil-Adjusted Vegetation Index, the Atmospherically Resistant Vegetation Index (Kaufman and Tanré, 1992) and the Global Environment Monitoring Index. Each of these attempted to include intrinsic correction(s) for one or more perturbing factors. A current alternative adopted by USGS is the enhanced vegetation index (EVI), correcting for soil effects, canopy background, and aerosol influences.
It was not until the mid-1990s, however, that a new generation of algorithms was proposed to estimate the biogeophysical variables of interest directly (e.g., the fraction of absorbed photosynthetically active radiation, FAPAR), taking advantage of the enhanced performance and characteristics of modern sensors (in particular their multispectral and multiangular capabilities) to take all the perturbing factors into account. In spite of the many possible perturbing factors affecting the NDVI, it remains a valuable quantitative vegetation monitoring tool when the photosynthetic capacity of the land surface needs to be studied at the appropriate spatial scale for various phenomena.
Agricultural applications
Within precision agriculture, NDVI data provide a measurement of crop health. Today, this often involves agricultural drones, which are paired with NDVI to compare data and recognize crop health issues. One example is the agricultural drones from PrecisionHawk and Sentera, which allow agriculturalists to capture and process NDVI data within one day, a change from traditional NDVI uses and their long lag times. Recent research has shown that NDVI-like images can even be obtained with suitably modified consumer digital RGB cameras, yielding results similar to those from multispectral cameras, and these can be implemented effectively in crop health monitoring systems.
Landsat 8, Sentinel-2 and PlanetScope are some of the main providers of satellite imagery to make NDVI maps and monitor crop health.
See also
Normalized difference water index (NDWI)
Red edge
Revised Simple Biosphere Model (SIB-2)
Notes
References
Deering, D.W. 1978. Rangeland reflectance characteristics measured by aircraft and spacecraft sensors. Ph.D. Diss. Texas A&M Univ., College Station, 338p.
Deering D.W., J.W. Rouse, Jr., R.H. Haas, and J.A. Schell. 1975. Measuring "forage production" of grazing units from Landsat MSS data, pp. 1169–1178. In Proc. Tenth Int. Symp. on Remote Sensing of Environment. Univ. Michigan, Ann Arbor.
Rouse, J.W., Jr., R.H. Haas, J.A. Schell, and D.W. Deering. 1973. Monitoring the vernal advancement and retrogradation (green wave effect) of natural vegetation. Prog. Rep. RSC 1978-1, Remote Sensing Center, Texas A&M Univ., College Station, 93p. (NTIS No. E73-106393)
Rouse, J. W., R. H. Haas, J. A. Schell, and D. W. Deering (1973) 'Monitoring vegetation systems in the Great Plains with ERTS', Third ERTS Symposium, NASA SP-351 I, 309-317.
Tucker, C.J. (1979) 'Red and Photographic Infrared Linear Combinations for Monitoring Vegetation', Remote Sensing of Environment, 8(2), 127-150.
Kaufman, Y. J. and D. Tanré (1992) 'Atmospherically resistant vegetation index (ARVI) for EOS-MODIS', in Proc. IEEE Int. Geosci. and Remote Sensing Symp. '92, IEEE, New York, 261-270.
External links
Derivation of NDVI
Background on NOAA AVHRR
Background on NDVI
FAQ about vegetation indices
FAPAR as a replacement for NDVI
NDVICentral
VEGETATION Processing and Archiving Facility at VITO
VEGETATION Programme
VEGETATION INDEX
Satellite meteorology
Remote sensing
Biogeography | Normalized difference vegetation index | Biology | 4,009 |
14,585,748 | https://en.wikipedia.org/wiki/Belongingness | Belongingness is the human emotional need to be an accepted member of a group. Whether it is family, friends, co-workers, a religion, or something else, some people tend to have an 'inherent' desire to belong and be an important part of something greater than themselves. This implies a relationship that is greater than simple acquaintance or familiarity.
Belonging is a strong feeling that exists in human nature. To belong or not to belong is a subjective experience that can be influenced by a number of factors within people and their surrounding environment. A person's sense of belonging can greatly impact their physical, emotional, psychological, and spiritual well-being.
Roy Baumeister and Mark Leary argue that belongingness is such a fundamental human motivation that people feel severe consequences for not belonging. Were it not so fundamental, then lacking a sense of belonging would not have such dire consequences. This desire is so universal that the need to belong is found across all cultures and different types of people.
Active listening can help create the feeling of belonging; this is because it enables the ability to listen and respond to another person in an understanding and meaningful way. When the person feels truly heard, especially in a way that promotes unconditional positive regard, they are able to feel a significantly higher sense of belonging and acceptance.
Psychological needs
Abraham Maslow suggested that the need to belong was a major source of human motivation. He thought that it was one of five human needs in his hierarchy of needs, along with physiological needs, safety, self-esteem, and self-actualization. These needs are arranged in a hierarchy and must be satisfied in order. After physiological and safety needs are met, an individual can then work on meeting the need to belong and be loved. According to Maslow, if the first two needs are not met, then an individual cannot completely love someone else.
Other theories have also focused on the need to belong as a fundamental psychological motivation. According to Roy Baumeister and Mark Leary, all human beings need a certain minimum quantity of regular, satisfying social interactions. Inability to meet this need results in loneliness, mental distress, and a strong desire to form new relationships. Several psychologists have proposed that there are individual differences in people's motivation to belong. People with a strong motivation to belong are less satisfied with their relationships and tend to be relatively lonely. As consumers, they tend to seek the opinions of others about products and services and also attempt to influence others' opinions.
According to Baumeister and Leary, much of what human beings do is done in the service of belongingness. They argue that many of the human needs that have been documented, such as the needs for power, intimacy, approval, achievement and affiliation, are all driven by the need to belong. Human culture is compelled and conditioned by pressure to belong. The need to belong and form attachments is universal among humans. This counters the Freudian argument that sexuality and aggression are the major driving psychological forces. Those who believe that the need to belong is the major psychological drive also believe that humans are naturally driven toward establishing and sustaining relationships and belongingness. For example, interactions with strangers are potential first steps towards developing non-hostile and more long-term connections which can satisfy one’s attachment needs. Certain people who are socially deprived can exhibit physical, behavioral, and psychological problems, such as stress or instability.
Attachments
In all cultures, attachments form universally. Social bonds are easily formed, without the need for favorable settings. The need to belong is a goal-directed activity that people try to satisfy with a certain minimum number of social contacts. The quality of interactions is more important than the quantity of interactions. People who form social attachments beyond that minimal amount experience less satisfaction from extra relationships, as well as more stress from terminating those extra relationships. People also effectively replace lost relationship partners by substituting them with new relationships or social environments. For example, individuals with strong family ties could compensate for loneliness at work.
Relationships missing regular contact but characterized by strong feelings of commitment and intimacy also fail to satisfy the need. Just knowing that a bond exists may be emotionally comforting, yet it would not provide a feeling of full belongingness if there is a lack of interaction between the persons. The belongingness hypothesis proposes two main features. First, people need constant, positive, personal interactions with other people. Second, people need to know that the bond is stable, there is mutual concern, and that this attachment will continue. So, the need to belong is not just a need for intimate attachments or a need for connections, but that the perception of the bond is as important as the bond itself. Individuals need to know that other people care about their well-being and love them.
Baumeister and Leary argue that much of the research on group bonds can be interpreted through the lens of belongingness. They argue that plenty of evidence suggests that social bonds are formed easily. In the classic Robbers Cave study, boys who were strangers to one another were randomly assigned to two groups, and almost immediately group identification and strong loyalty to their specific group developed. Initially, the two groups were asked to compete with one another, and hostility between the groups ensued. However, when the two groups were combined to form one big group and were given the opportunity to bond by working together to accomplish superordinate goals, behaviors and emotions quickly accommodated to the new group. In an attempt to understand the causes of in-group favoritism, researchers formed groups so minimal and insignificant that one would expect no favoritism to be found, yet in-group favoritism appeared immediately.
Researchers agree that banding together against a threat (the out-group) and sharing rewards are primary reasons groups form and bond so easily. Mere proximity is another powerful factor in relationship formation. Just as babies form attachments with their caregivers, people develop attachments simply because they live near one another. This suggests that proximity sometimes overcomes the tendency to bond with others who are similar to oneself. Positive social bonds form just as easily under fearful circumstances, such as among military veterans who have undergone heavy battle together. This can be explained by either misattribution (interpreting feelings of anxious arousal as feelings of attraction for another person) or reinforcement theory (the presence of another person reduces distress and elicits positive responses). Baumeister and Leary argue that the reinforcement theory explanation provides evidence for the importance of belonging needs because these learned associations create a tendency to seek out the company of others in times of threat. The formation of social attachments with former rivals is a great indicator of the need to belong. Belonging motivations are so strong that they are able to overcome competitive feelings towards opponents.
People form such close attachments with one another that they are hesitant to break social bonds. Across all cultures and age spans, people experience distress at, and protest, the ending of social relationships. Even temporary groups, such as training groups, struggle with the idea that the group may eventually dissolve. The group may have fulfilled their purpose, but the participants want to cling on to the relationships and social bonds that have been formed with one another. The group members make promises individually and collectively to stay in touch, plan for future reunions, and take other steps to ensure the continuity of the attachment. For example, two people may not speak for an entire year, but continue exchanging holiday cards. People do not want to risk damaging a relationship or breaking an attachment, because it is distressing.
People are so hesitant to break social bonds that in many cases they are reluctant to dissolve even bad relationships that could be potentially destructive. For example, many women are unwilling to leave their abusive spouses or boyfriends, with reasons ranging from liking for the abuser to economic self-interests that are weighed as more important than the physical harm. This unwillingness to leave an abusive partner, whether mentally or physically, is just another indicator of the power of the need to belong and how reluctant individuals are to break these bonds. Breaking off an attachment causes pain that is deeply rooted in the need to belong.
People experience a range of both positive and negative emotions; the strongest emotions are linked to attachment and belongingness. Empirical evidence suggests that when individuals are accepted, welcomed, or included, they feel positive emotions such as happiness, elation, calm, and satisfaction. However, when individuals are rejected or excluded, they feel strong negative emotions such as anxiety, jealousy, depression, and grief. In fact, the psychological pain caused by social rejection is so intense that it involves the same brain regions involved in the experience of physical pain. Both positive and negative emotional reactions are connected to the status of the relationship. The existence of a social attachment changes the way one emotionally responds to the actions of a relationship partner, and the emotions have the potential to intensify.
Lack of constant, positive relationships has been linked to a large range of consequences. People who lack belongingness are more prone to behavioral problems such as criminality and suicide and suffer from increased mental and physical illness. Based on this evidence, multiple and diverse problems are caused by the lack of belongingness and attachments. It therefore seems appropriate to regard belongingness and attachments as a need rather than simply a want.
Interpersonal relationships are centrally important in the way people think. The belongingness hypothesis suggests that people devote much of their cognitive thought process to interpersonal relationships and attachments. For example, researchers found that people store information in terms of their social bonds, such as storing more information about a marriage partner as opposed to a work acquaintance. People also sort out-group members on the basis of characteristics, traits, and duties, whereas they sort in-group members into person categories. Cognitive processing organizes information by the person one has a connection with, as opposed to strangers. Researchers had a group of people take turns reading out loud and found that participants had the greatest recall for the words they personally spoke, as well as for words spoken by dating partners or close friends. There is a cognitive merging of the self with specific people that follows from the need to belong. Flattering words said to a spouse can enhance the self just as positively. People tend to believe that nothing bad can happen to them, and extend that thought to their family and friends.
There is an emotional implication to belongingness in which positive affect is linked to increases in belongingness while negative affect is linked to decreases in belongingness. Positive emotions are associated with forming social attachments, such as the experience of falling in love, as long as the love is mutual. Unrequited love (love without belongingness) usually leads to disappointment, whereas belongingness in love leads to joy. Occasions such as childbirth, new employment, and fraternity/sorority pledging are all associated with the formation of new social attachments surrounded by positive emotions. Forming bonds is cause for joy, especially when the bond is given a permanent status, such as a wedding. Weddings signify permanent commitment and complete the social bond by committing to the spouse's need to belong. Positive experiences and shared emotions increase attraction to others. Close personal attachments, a rich network of friends, and high levels of intimacy motivation are all correlated to happiness in life.
The breaking of social bonds and threats to those bonds are primary sources of negative affect. People feel anxious, depressed, guilty or lonely when they lose important relationships. Social exclusion is the most common cause of anxiety. Anxiety is a natural consequence of being separated from others. Examples include children suffering from separation anxiety when apart from their mothers. Adults act similarly when their loved ones leave for a period of time. Memories of past rejection and imagined social rejection all elicit negative emotions. Losses of attachments lead directly to anxiety. If people are excluded from social groups, they become anxious, yet the anxiety is removed when they experience social inclusion. Failing to feel accepted can lead to social and general depression. Depression and anxiety are significantly correlated. Social exclusion is also a major cause of jealousy, which is a common reaction when one's relationships are threatened. Jealousy is cross-culturally universal, and in all cultures sexual jealousy is common. It was said earlier that belongingness needs can only truly be met with social contact, but social contact by itself does not shield people against loneliness. Loneliness matters more when there is a lack of intimacy as opposed to lack of contact. Another negative affect is guilt, which can be induced to make another person want to maintain the relationship more, for example by paying more attention to that person.
Divorce and death are two negative events that thwart the need to belong. Divorce causes distress, anger, loneliness, and depression in almost everyone. The prospect of one's own death and the death of others are among the most traumatic and stressful events people can experience. Death can cause severe depression, which is a reaction not merely to the loss of the loved one but to the loss of the attachment to that person. For example, the death of a spouse in a marriage that had problems can still elicit extreme sadness at the loss of that attachment. Death is linked to anxiety and fear of loneliness. The idea of being separated from friends and family, and not the fact that they would no longer exist on this earth, is what brings about this anxiety.
Evolutionary perspectives
One reason for the need to belong is based on the theory of evolution. In the past belonging to a group was essential to survival: people hunted and cooked in groups. Belonging to a group allowed tribe members to share the workload and protect each other. Not only were they trying to ensure their own survival, but all members of their tribe were invested in each other's outcomes because each member played an important role in the group. More recently in Western society, this is not necessarily the case. Most people no longer belong to tribes, but they still protect those in their groups and still have a desire to belong in groups.
The need to belong is rooted in evolutionary history. Human beings are social animals. Humans have matured over a long period of time in dyadic and group contexts. Humans evolved in small groups that depended on close social connections to fulfill survival and reproductive needs. Unlike other species, humans receive most of what they need from their social group rather than directly from their natural environment, suggesting that the human strategy for survival depends on belonging. This explains why a large body of evidence suggests that people are happier and healthier when they experience social belonging. In contrast, lacking belonging and being excluded is perceived as painful and has a variety of negative effects including shame, anger, and depression. Because belongingness is a central component of human functioning, social exclusion has been found to influence many behavioral, cognitive, and emotional outcomes. Given the negative consequences of social exclusion and social rejection, people developed traits that function to prevent rejection and encourage acceptance.
Self-presentation
To be accepted within a group, individuals may convey or conceal certain parts of their personalities. This is known as self-presentation. Self-presentation, or impression management, attempts to control images of the self in front of audiences. It is a conscious and unconscious goal-directed action done to influence audiences to perceive the actor as someone who belongs. Certain aspects of one's personality may not be seen as desirable or essential to the group, so people try to convey what they interpret as valuable to the group.
Self-presentation is frequently used in social media. It has been shown that those who use a strategic self-presentation style in social media versus a more authentic self-presentation style when considering their weaker friendships tend to be happier and feel like they have successfully fulfilled their relationship maintenance goals. Additionally, it has been found that self-presentation in social media highly predicts an individual's sense of belongingness and social support.
Group membership
Individuals join groups with which they have commonalities, whether it is sense of humor, style in clothing, socioeconomic status, or career goals. In general, individuals seek out those who are most similar to them. People like to feel that they can relate to someone and those who are similar to them give them that feeling. People also like those that they think they can understand and who they think can understand them.
Social connections
The desire to form and maintain social bonds is among the most powerful human motives. If an individual's sense of social connectedness is threatened, their ability to self-regulate suffers. Social relationships are important for human functioning and well-being; therefore, research on how social relationships affect people's personal interests and motivated behavior has been a focus of numerous studies. Walton, Cohen, and Spencer, for example, believed that a mere sense of social connectedness (even with people who were unfamiliar) can cause one to internalize the goals and motivations of others. This shapes people's motivated behavior, suggesting that achievement motivation and one's self-identity are highly sensitive to minor cues of social connection. Mere belonging is defined as an entryway to a social relationship, represented by a small cue of social connection to an individual or group. Social belonging is a sense of relatedness connected to a positive, lasting, and significant interpersonal relationship. While mere belonging is a minimal or even chance social connection, social belonging factors are characterized as social feedback, validation, and shared experiences. Sharing common goals and interests with others strengthens positive social bonds and may enhance feelings of self-worth.
In another study, Walton and Cohen examined stigmatization and its link to belonging uncertainty. Their belonging uncertainty idea suggests that in academic and professional settings, members of socially stigmatized groups are more uncertain of the quality of their social bonds. Therefore, they feel more sensitive to issues of social belonging. They believe that in domains of achievement, belonging uncertainty can have large effects on the motivation of those contending with a threatened social identity.
Conformity
Group membership can involve conformity. Conformity is the act of changing one's actions, attitudes, and behaviors to match the norms of others. Norms are unspoken rules that are shared by a group. The tendency to conform results from direct and indirect social pressures occurring in whole societies and in small groups. The two types of conformity motivation are known as informational social influence and normative social influence. Informational social influence is the desire to obtain and form accurate information about reality. It occurs in certain situations, such as a crisis, when accurate information can be sought from other people in the group or from experts. If someone is in a situation where they do not know the right way to behave, they look at the cues of others to correct their own behavior. These people conform because group interpretations are generally more accurate than individual interpretations. Normative social influence is the desire to obtain social approval from others. It occurs when one conforms to be accepted by members of a group, since belonging is a basic human desire. When people do not conform, they are less liked by the group and may even be considered deviant. Normative influence usually leads to public compliance, which is fulfilling a request or doing something that one may not necessarily believe in, but that the group believes in.
According to Baumeister and Leary, group conformity can be seen as a way to improve one's chances of being accepted by a social group; thus it serves belongingness needs. People often conform to gain the approval of others, build rewarding relationships, and enhance their own self-esteem. Individuals are more likely to conform to groups that describe out-group members with stereotyped traits, even if they do not publicly express their agreement. People desire to gain approval, so they conform to others. The beliefs held by others, and how we react to those beliefs, often depend on our view of how much agreement exists for those beliefs. Researchers have explored informational and normative motivations to conform in both majorities and minorities. Objective consensus theory suggests that the majority influence of a group is informational, while conversion theory views it as normative. Normative influences may be the underlying motivation behind certain types of conformity; however, researchers believe that over time informational influences, such as confidence in the accuracy of the group's norms, become positively correlated with the level of compromise.
Outside the conscious mind, a type of conformity is behavioral mimicry, otherwise known as the chameleon effect. Behavioral mimicry is when individuals mimic the behaviors of other individuals, such as facial expressions, postures, and mannerisms. Researchers found that individuals subconsciously conformed to the mannerisms of their partners and friends, and liked partners who mirrored them more. This is important for building rapport and forming new social relationships: we mirror the behaviors we are expected to in order to earn a place in the group we want to belong to. People are motivated to conform to gain social approval and to enhance and protect their own self-esteem. However, people who wish to combat conformity and resist the need to belong with the majority group can do so by focusing on their own self-worth or by straying from the attitudes and norms of others. This can establish a sense of uniqueness within an individual. Yet most individuals keep positive assessments of themselves and still conform to valued groups.
Self-regulation
When our belongingness needs are not met, Wilkowski and colleagues (2009) suggest that self-regulation is used to fulfill one's need to belong. Self-regulation is defined as the process of regulating oneself, or changing one's behavior, to manage short-term desires, according to self-regulation theory. Self-regulation can occur in many different ways. One of these ways uses other individuals' gaze as a reference for how attention should be divided. This effect is especially seen in individuals who have low levels of self-esteem. Interpersonal acceptance is not met in individuals with low self-esteem, which prompts them to self-regulate by looking to others for guidance about where to focus attention. Belongingness contributes to this level of self-esteem. Baumeister, DeWall, Ciarocco, and Twenge (2005) found that when people are socially excluded from a group, self-regulation is less likely to occur than in those who have a heightened sense of belonging. For example, participants were told that the other people in the study did not want to work with them and that, as a consequence, they would have to complete a task on their own. Later, those participants were offered a plate of cookies. The participants who were told that nobody in the group wanted to work with them took more cookies than those who were not told this information, which provides evidence that a lack of belongingness inhibits people's ability to self-regulate. Self-regulation includes impulse control and allows one to manage short-term impulses and have a heightened sense of belongingness within an ingroup. An ingroup is a social group in which a person psychologically defines themselves as a member. By being part of this group, one has a better ability to self-regulate.
Peer networks
As the span of relationships expands from childhood into adolescence, a sense of peer group membership is likely to develop. Adolescent girls have been found to value group membership more and to identify more strongly with their peer groups than boys. Adolescent girls tend to have a higher number of friends than boys, and they expect and desire more nurturing behavior from their friends. Girls experience more self-disclosure, more empathy, and less overt hostility compared to boys. A study found that girls use ruminative coping, which involves perseverating on the negative feelings and the unpleasant situations associated with problems. Boys, on the other hand, tend to be less intimate and have more activity-based friendships. Boys do not benefit as much as girls from the feelings of belonging that are a product of enduring and close friendships, but they are less vulnerable to the emotional distress that is likely to accompany high levels of co-rumination and disclosure.
Various peer groups approve of different activities, and when individuals engage in approved activities, the peer group positively reinforces this behavior, for example by allowing the individual to become part of the group or by paying more attention to the individual. This is a source of motivation for the individual to repeat the activity or engage in other approved activities. Adolescents have also been observed to choose friendships with individuals who engage in activities similar to those they are involved in. This provides the individual with more opportunities to engage in the activity; the peer group may therefore influence how often the individual engages in the activity. To feel a sense of belonging and fit in, adolescents often conform to a particular group by participating in the same activities as its members.
Newman and colleagues examined three aspects of adolescents' perceptions of group membership (peer group affiliation, the importance of peer group membership, and a sense of peer group belonging) and related them to behavior problems in adolescence. To capture an adolescent's self-perception of group affiliation, one may ask the adolescent to identify themselves as a member of a group or to discuss whether they belong in a group. An affective aspect of group belongingness includes feelings of being proud of one's group and being a valued group member; this affective aspect of a sense of group belonging has been found to be the most internally consistent. It is also important to find out how important it is for an adolescent to be a member of a group, because not all adolescents are equally concerned about being part of one. Those who strongly desire to be in a peer group but do not experience a sense of group belonging are expected to have the greatest social distress and are likely to report the most behavior problems.
Schooling
A sense of belonging to a social peer group can enhance students' academic achievement. Group membership in early adolescence is associated with greater interest in and more enjoyment of school, while those who are not part of such social groups tend to be less engaged with school. Among middle school and high school students, multiple studies have found a link between a more positive sense of belonging and better academic motivation, lower rates of school dropout, better social-emotional functioning, and higher grade point average. At the college level, a better sense of belonging has been linked to perceived professor caring and greater involvement in campus organizations. In a study exploring associations between a sense of school belonging and academic and psychological adjustment, Pittman and Richmond found that college students who reported a greater sense of belonging at the college level were doing better academically and felt more scholastically competent, and also had higher self-worth and lower levels of externalizing problems. However, students who were having problems in their relationships with friends were found to experience more internalizing behaviors and to feel less connected to the college.
Schools are important developmental contexts for children and adolescents, and influence their socio-emotional and academic development. One approach used to study naturally occurring peer groups is social cognitive mapping (SCM). The SCM strategy asks students in a peer system, for example a classroom, to identify which class members they have observed "hanging out" together, thereby determining patterns of observed social affiliation. Interactions and associations within peer networks are theorized to provide early adolescents in schools with validation, acceptance, and affirmation. The sense of connection within a classroom has been defined as a sense of classroom belonging, meaning that students feel valued, accepted, included, and encouraged by others in the classroom setting, and perceive themselves to be an important part of the setting and activity of the class.
Goodenow and Grady (1993) define school belonging as "the extent to which students feel personally accepted, respected, included, and supported by others in the school social environment" (p. 80). School belonging is considered to be a complex multidimensional construct. In much of the research to date, school connectedness has also been used to describe 'school belonging'. Whilst some scholars believe the terms can be used interchangeably, others construe school belonging as something different.
School belonging has been operationalized by the Psychological Sense of School Membership (PSSM) scale. A sense of school belonging has been associated with greater overall well being and happiness, as well as outcomes related to academic achievement.
There are a number of similar concepts centered around school belonging, including school bonding, student engagement, school attachment, school community, school climate, orientation to school, and school connectedness. The inconsistent use of terminology has meant that research into school belonging has been somewhat disjointed and weakened.
School belonging is a student's attachment to their school. Student engagement was explored by Finn in a two-dimensional model that conceptualizes engagement as having two components: participation and identification. Participation refers to behavior, whilst identification relates to affect or a sense of belonging. While school attachment involves a student's connection to school, school community incorporates belonging, meaning that in order to be a part of any community (including a school community), a person first needs to have feelings of belonging.
Blum and Libbey characterize school connectedness as a student's perception that teachers, along with other adults in the school community, show a concern for the pupils' learning, pay attention to who the student is as an individual, and also have high academic expectations. Furthermore, school connectedness involves a student having a sense of safety at school as well as positive student-teacher relationships.
Despite the slight differences in meaning, these terms commonly include three aspects: they refer to school-based relationships and experiences, they involve the relationship between students and teachers, and they include a student's general feelings about school as a whole.
A large number of variables have been found to be significantly associated with school belonging. This has made it difficult to present a theoretical model of school belonging. Allen and colleagues (2018) conducted a comprehensive meta-analysis and uncovered 10 themes that influence school belonging during adolescence in educational settings:
Academic motivation
Emotional stability
Personal characteristics
Parent support
Teacher support
Peer support
Gender
Race and ethnicity
Extracurricular activities
Environmental/school safety
The meta-analysis found that teacher support and positive personal characteristics are the strongest predictors of school belonging.
Whilst theories pertaining to general 'belongingness' can also be applied to school belonging, theories of belonging generally imply that belonging comes about because an individual is motivated to meet the fundamental need to belong and to achieve meaningful social relations. However, school belonging is slightly different: it is affected by the school's organisational culture as well as by a student's relationships with others and personal characteristics. Schools can help students to develop a sense of belonging because they are in a position to develop social networks and to shape policy and practice in ways that are conducive to enhancing student belonging.
Because school belonging is, by its very nature, affected by the wider environment, it is consistent with Bronfenbrenner's ecological framework for human development and the subsequent bio-ecological framework. These frameworks put forward the theory that children's development takes place within interacting systems in society, with every child at the center of multiple levels of influence. It has been argued that a social-ecological lens is the most adequate lens with which to view the construct of school belonging, given the large number of variables at play and the unique nature of school belonging for both the individual and the school.
At school, students are part of a greater whole influenced by both formal and informal groupings and by overarching systems that are common to and typically represented within all schools. Thus, school belonging can be conceptualized as a multi-layered, socio-ecological phenomenon consisting of several interacting layers, as depicted in the Socio-ecological Model of School Belonging proposed by Allen, Vella-Brodrick, and Waters (2016).
The innermost layer of the construct is the individual level. This describes the unique student characteristics that contribute to the sense of belonging, including personality and mental health. The microsystem refers to the informal networks an individual has, such as the family, friends, teachers, and peers with whom the student interacts. The mesosystem refers to organisational factors, including school resources, processes, policies, rules, and practices. The exosystem refers to the broader school community. Finally, the macrosystem involves the legislation, history, and social climate of a society. This socio-ecological framework has been developed from empirical studies and provides schools with a thorough direction in which to foster school belonging.
Given that school belonging is largely about perception, social belonging interventions such as those suggested by Walton and Brady have been found to be useful. They argue that these interventions provide students with an adaptive lens through which to make sense of adversities in school. For minority students, challenges at school can give rise to feelings of non-belonging.
One such social intervention described by Walton and Brady uses stories in which difficulties at school are portrayed as a normal part of education. Rather than treating challenges as a sign that one doesn't belong, the stories acknowledge group-based difficulties but show how these experiences are not necessarily a barrier to ultimately belonging and succeeding.
Students from racial minority groups are one group particularly likely to experience such feelings of non-belonging; they may attribute challenges, both academic and otherwise, to their racial identity. Social support is essential for improving belonging, especially for students from minority backgrounds, for whom acceptance by peers, teachers, and parents is an important predictor of pro-social behavior and a positive attitude towards school.
Workplace
The need to belong is especially evident in the workplace. Employees want to fit in at work as much as students want to fit in at school, and they seek the approval and acceptance of leaders, bosses, and other employees. Charismatic leaders are especially known to show organizational citizenship behaviors such as helping and compliance if they feel a sense of belongingness with their work group. Researchers found that charisma and belongingness increased cooperative behavior among employees. Charismatic leaders influence followers by bringing awareness to the collective unit and strengthening the feeling of belonging, and that enhances employees' compliance. Organizational citizenship behaviors are employee activities that benefit the collective group without the individual gaining any direct benefit. Helping is a major component of organizational citizenship behaviors, because helping involves voluntarily assisting others with work-related problems and preventing other issues from arising. Task performance is enhanced and supported when acts of helping in a work environment are established and evident. Charismatic leaders set a striking example for the way the organization should behave by reinforcing certain rules and values for the organization. These self-confident leaders inspire their followers to exceed expectations for the collective group instead of their own self-interest. This in turn gives employees an identity with which to belong. Studies indicate that belongingness is a crucial factor in understanding the efficacy of diversity, equity, and inclusion (DEI) efforts in the workplace.
A sense of belongingness increases a person's willingness to assist others in the group and to abide by the group's rules. Belongingness and group membership provide social groups with motivation to comply, cooperate, and help. Cohesive work groups show more consideration, report more positive relationships within the group, and elicit more organizational citizenship behaviors. An already cohesive and collective group also makes people more inclined to comply with the rules of the workplace. Some people help each other in return for an expected future favor; however, most workers help because it is the "right" thing to do or because they like their leaders so much and wish to express this liking. People are more receptive to a leader who provides a clear sense of direction and inspiration with the promise of a better future. Workers who feel isolated in the workplace feel the need to belong even more than those who are not isolated, because they are missing that collective feeling of unity. A workplace functions better as a collective whole.
Acceptance/rejection
The need to belong is among the most fundamental of all personality processes. Given the negative consequences of social rejection, people developed traits that function to encourage acceptance and to prevent rejection. But if the need to belong evolved to provide people with a means of meeting their basic needs for survival and reproduction based on evolutionary experiences, thwarting the need to belong should affect a variety of outcomes. Because it strikes at the core of human functioning, people respond very strongly to social exclusion.
Both interpersonal rejection and acceptance are psychologically powerful events. Feeling disliked, excluded, unappreciated, or devalued can stir up negative emotions in an individual, including lowered self-esteem, aggressive actions, and antisocial behavior. Conversely, believing that you are liked, included, appreciated, or valued elicits feelings of higher self-esteem and boosts in confidence. A number of different events can lead individuals to feel accepted or rejected. The power of interpersonal acceptance and rejection can be seen when one is accepted versus ostracized by a group, adored versus abandoned by a romantic partner, or elected versus defeated in an election.
In all such examples, however, people's feelings stem from perceived relational evaluation. Perceived relational evaluation is the degree to which you perceive that others value having a relationship with you. You feel more accepted if another person or group regards your relationship with them as real and as important to them as it is to you. If they consider the relationship unimportant, you feel rejected and respond negatively.
In a series of experiments, Buckley, Winkel, and Leary found that the effects of rejection are more potent than the effects of acceptance, because negative feelings can cause more hurt and pain, which in turn can lead to aggression and negative behaviors. They also found that people's reactions to extreme and moderate rejection were similar, suggesting that once one has been rejected by an individual or group, the severity of the rejection is less important.
Procedural justice
Procedural justice, in terms of belongingness, according to van Prooijen and colleagues (2004), is the process by which people judge their level of belongingness in terms of their ability to contribute to a group. Members of a highly inclusive group show a higher level of procedural justice, meaning that individuals that experience high levels of inclusion respond in a more extreme manner to decisions allocated by members of their ingroup than those that are handed down from members of an outgroup. In other words, a person is more likely to believe and support fairness decisions made by members of an ingroup in which they feel like they are a part of, compared to an ingroup in which they do not feel as strongly connected. De Cremer and Blader (2006) found that when people feel a heightened sense of belongingness, they process information about procedural justice in a more careful and systematic way. This means that when people feel like they belong, they are more likely to examine procedural justice issues in a more thorough manner than if they do not feel like they belong.
Fairness
Fairness principles are applied when belongingness needs are met. Van Prooijen and colleagues (2004) found that fairness maintains an individual's sense of inclusion in social groups. Fairness can be used as an inclusion maintenance tool. Relationships are highly valued within groups, so members of those groups seek out fairness cues so they can understand these relationships. De Cremer and colleagues (2013) suggest that individuals with a high need to belong care more about procedural fairness information and therefore pay closer attention to incoming information. Furthermore, Cornelis, Van Hiel, De Cremer and Mayer (2013) propose that leaders of a group are likely to be more fair when they are aware that the followers of the group have a high need to belong versus a low need to belong. This means that a leader who is aware that people in their group are motivated to adhere to group values is more fair. Leaders are also more fair in congruence with the amount of empathy they feel for followers. Empathetic leaders are more likely to pay attention to differences among followers, and to consider a follower's belongingness needs when making decisions. In addition, Cornelis, Van Hiel, & De Cremer (2012) discovered that leaders are more fair in granting their followers voice when the leader is aware that the follower has a high need to belong. This occurs because of the attraction a leader feels to the follower and to the group. Leaders that are attracted to their followers and to the group are motivated by the follower's need to belong to allow them a greater voice in the group.
Culture
The need to belong is prevalent in all cultures. Although there are individual differences in the intensity and strength with which people express and satisfy the need, it is very difficult for a culture to eradicate the need to belong. Members of collectivist societies are also more likely to conform and comply with the majority group than members of individualistic societies. Conformity is so important in collectivist societies that nonconformity can represent deviance in Circum-Mediterranean cultures, yet represent uniqueness in Sinosphere culture. Even early civilizations considered exile and death to be equal punishments. In some cultures, individuals strive to belong so much that being exiled or shunned from their society is the greatest dishonor.
Motivation to belong varies across cultures, and this can affect student achievement in distinct ways. In studies comparing fifteen-year-old students from 31 countries, differences between Eastern and Western cultures were apparent. The studies divide these countries into two groups, arguing that Asian (Eastern) cultures are collectivist while Western cultures are more individualistic. In Western cultures peer influence is more predominant, while in Eastern cultures students are more heavily influenced by their families. In a classroom setting, children from Eastern cultures are more competitive, which gives them less of a drive to belong among their peers. These children have a strong motivation to excel and to do better than those around them, which makes their need for belongingness in a school setting less salient. Students in Western cultures, by contrast, are so strongly influenced by their peers that they have less of a drive to be competitive towards them.
Studies have shown that Eastern and Western cultures continue to have one of the largest achievement gaps between them, with Eastern cultures outscoring the Western. It can be hypothesized that the competitive, individualistic drive found in the classroom in Eastern cultures leads to more success. Furthermore, belongingness in Western cultures may have the potential to inhibit classroom success. However, not all cultures respond to belongingness in the same way due to the many variations between cultures.
Furthermore, stigmas can create a global uncertainty about the quality of an individual's social bonds in academic and professional areas. Walton and Cohen conducted two experiments that tested how belonging uncertainty undermines the achievement and motivation of people whose racial group is negatively characterized in academic settings. The first experiment led students to believe that they might have few friends in a field of study. White students were unaffected by this; however, Black students, who are stigmatized academically, displayed a drop in potential and sense of belonging. This response occurs because minority students are aware that they are underrepresented and stigmatized, and therefore perceive their worlds differently. The second experiment was set up as an intervention designed to de-racialize the meaning of hardship in college by framing hardships and doubts as a commonality among first-year students rather than as a function of race. The findings suggest that majority students may benefit from an assumed sense of social belonging.
Behavior and social problems
Belongingness, also referred to as connectedness, has been established as a strong risk/predictive factor for depressive symptoms, and there is growing evidence that the interpersonal factor of belongingness is strongly associated with them. The impression of low relational value is consciously experienced as reduced self-esteem, and reduced self-esteem is a fundamental element of depressive symptoms. According to these views, belongingness perceptions have a direct effect upon depressive symptoms due to innate neurological mechanisms. A number of studies have confirmed a strong link between belongingness and depressive symptoms using the Sense of Belonging Instrument-Psychological (SOBI-P) measurement. This measurement scale contains 14 items that invoke the social world, for example, "I don't feel there is any place I really fit in this world." The SOBI-P is intended to measure a general sense of belonging.
Group membership has been found to have both negative and positive associations with behavior problems. Gender differences have been consistently observed in terms of internalizing and externalizing behavior problems: girls report more internalizing behaviors such as depression, and boys report more externalizing problems. However, by providing a sense of security and peer acceptance, group membership may reduce the tendency to develop internalizing problems such as depression or anxiety. A lack of group membership is associated with behavior problems and puts adolescents at greater risk for both externalizing and internalizing problems. However, the need to belong may sometimes result in individuals conforming to delinquent peer groups and engaging in morally dubious activities, such as lying or cheating.
Depression
Humans have a profound need to connect with others and gain acceptance into social groups. When relationships deteriorate or when social bonds are broken, people have been found to suffer from depressive symptoms. Having a greater sense of belonging has been linked to lower levels of loneliness and depression. Although feeling disconnected from others and experiencing a lack of belonging may negatively affect any individual, those who are depressed may be more vulnerable to negative experiences of belonging. Due to the importance of social experiences to people's well-being, and to the etiology and maintenance of depression, it is vital to examine how well-being is enhanced or eroded by positive and negative social interactions in such clinical populations.
When people experience positive social interactions, they should feel a sense of belonging. However, depressed people's social information-processing biases make them less likely to recognize cues of acceptance and belonging in social interactions. For example, in a laboratory study using information-processing tasks assessing attention and memory for sad, physically threatening, socially threatening, and positive stimuli, clinically depressed people were found to show preferential attention to sad faces, emotion words, and adjectives. Depressed people displayed biases for stimuli concerned with sadness and loss.
People who are depressed often fail to satisfy their need for belonging in relationships and therefore, report fewer intimate relationships. Those who are depressed appear to induce negative affect in other individuals, which consequently elicits rejection and the loss of socially rewarding opportunities. Depressed people are less likely to feel a sense of belonging and are more likely to pay attention to negative social interactions. Research has found that depressive symptoms may sensitize people to everyday experiences of both social rejection and social acceptance.
Suicide
Numerous studies have indicated that low belonging, an acquired ability to self-injure, and burdensomeness are associated with suicidal behaviors. A recent theoretical development, the interpersonal theory of suicidal behavior, offers an explanation for the association between parental displacement and suicidal behavior. Thomas Joiner, who proposed the theory, suggests that two elements must be present for suicidal behavior to occur: the desire for suicide and the acquired capability for suicide. The desire for suicide is in turn broken into two components, thwarted belongingness and perceived burdensomeness, which together create a motivational force for suicidal behavior. Speaking specifically of adolescent suicidal behavior, the theory proposes that suicidal behavior results from individuals having a desire for death and the acquired ability to self-inflict injuries. Increased acquired ability refers to a lack of pain response during self-injury, which has been found to be linked to the number of suicide attempts in a lifetime.
Displacement from parents includes events such as abandonment of the adolescent, divorce, or death of a parent. Parental relationships are a representation of belonging for adolescents because parents may be particularly important for providing the stable and caring relationships that are a fundamental component of belonging. Relationships between parents and adolescents that are positive have been found to be a protective factor that reduces the risk of suicidal behavior in adolescents. Connectedness with parents such as closeness between parent and child and the perceived caring of parents, has been associated with lower levels of past suicide attempts and ideation. Another protective factor found against adolescent suicide attempts was higher levels of parental involvement.
According to Baumeister and Leary, belongingness theory proposes that the desire for death is caused by failed interpersonal processes. Similar to Joiner's account, one process is a thwarted sense of belonging due to an unmet need to belong, and the other is a sense that one is a burden on others. They argue that all individuals have a fundamental need to belong, which is met only if an individual has frequent, positive interactions with others and feels cared about by significant others. The concept of low belonging suggested by the interpersonal theory of suicidal behavior is most relevant to parental displacement and adolescent suicidal behavior, because parental displacement is likely to affect the perceived belonging of adolescents. It was found that adolescents (averaging about 16 years of age) who experienced both low levels of belonging and displacement had the highest risk for suicide. Parental displacement disrupts the parent-adolescent relationship and consequently diminishes both the frequency and the quality of interactions between the two, reducing the adolescent's sense of belonging.
A study of suicide notes examined the frequency of the themes of thwarted belongingness and perceived burdensomeness in samples of suicide notes. The study of suicide notes has been a useful method for examining the motivations of suicides, although this research is limited by the small proportion of completed suicides in which a note is actually left. This specific study explored the extent to which the content of the notes reflected thwarted belongingness and perceived burdensomeness, and the extent to which the two themes were found in the same note. The study found that the suicide notes did not significantly support the hypothesis that perceived burdensomeness and thwarted belongingness combine with acquired capability to cause suicidal behavior. There was no strong support for the relevance of perceived burdensomeness and thwarted belongingness as motivations for suicide. The notes of women, however, more frequently contained the theme of perceived burdensomeness, and the notes of younger people more frequently contained thwarted belongingness.
See also
References
Further reading
Youkhana, Eva. "Belonging" (2016). University Bielefeld – Center for InterAmerican Studies.
The International Belonging Laboratory is an external website that facilitates the collaboration of belonging researchers, dissemination of belonging research and a repository of belonging measures.
Interpersonal relationships | Belongingness | Biology | 10,201 |
27,466,687 | https://en.wikipedia.org/wiki/Vulnerability%20database | A vulnerability database (VDB) is a platform aimed at collecting, maintaining, and disseminating information about discovered computer security vulnerabilities. The database will customarily describe the identified vulnerability, assess the potential impact on affected systems, and present any workarounds or updates to mitigate the issue. A VDB will assign a unique identifier to each vulnerability cataloged, such as a number (e.g. 123456) or alphanumeric designation (e.g. VDB-2020-12345). Information in the database can be made available via web pages, exports, or API. A VDB can provide the information for free, for pay, or a combination thereof.
History
The first vulnerability database was the "Repaired Security Bugs in Multics", published on February 7, 1973, by Jerome H. Saltzer. He described the list as "a list of all known ways in which a user may break down or circumvent the protection mechanisms of Multics". The list was initially kept somewhat private, with the intent of withholding vulnerability details until solutions could be made available. The published list contained two local privilege escalation vulnerabilities and three local denial of service attacks.
Types of vulnerability databases
Major vulnerability databases such as the ISS X-Force database, the Symantec / SecurityFocus BID database, and the Open Source Vulnerability Database (OSVDB) aggregate a broad range of publicly disclosed vulnerabilities, including Common Vulnerabilities and Exposures (CVE). The primary purpose of CVE, run by MITRE, is to aggregate public vulnerabilities and assign each a unique identifier in a standardized format. Many vulnerability databases build on the intelligence received from CVE and investigate further, providing vulnerability risk scores, impact ratings, and the requisite workarounds. In the past, CVE was paramount for linking vulnerability databases so that critical patches and fixes could be shared to inhibit hackers from accessing sensitive information on private systems. The National Vulnerability Database (NVD), run by the National Institute of Standards and Technology (NIST), is operated separately from the MITRE-run CVE database, but only includes vulnerability information from CVE. The NVD serves as an enhancement to that data by providing Common Vulnerability Scoring System (CVSS) risk scoring and Common Platform Enumeration (CPE) data.
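As a sketch of how such data is consumed programmatically, the snippet below fetches a single CVE record as JSON. The endpoint URL and the response layout (a "vulnerabilities" array wrapping "cve" objects) match the NVD's CVE API 2.0 as commonly documented, but both should be treated as assumptions to verify against the current NVD documentation, and the function name is invented for illustration.

```python
import json
import urllib.request

# Assumed NVD CVE API 2.0 endpoint; verify against current NVD documentation.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Fetch one CVE record from the NVD and parse it as JSON."""
    with urllib.request.urlopen(f"{NVD_API}?cveId={cve_id}") as resp:
        return json.load(resp)

# Example: look up Log4Shell and print its identifier and available metrics.
record = fetch_cve("CVE-2021-44228")
for vuln in record.get("vulnerabilities", []):
    cve = vuln["cve"]
    print(cve["id"], list(cve.get("metrics", {})))
```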
The Open Source Vulnerability Database provided an accurate, technical, and unbiased index of security vulnerabilities. The comprehensive database cataloged over 121,000 vulnerabilities. The OSVDB was founded in August 2002 and launched in March 2004. In its early days, newly identified vulnerabilities were investigated by site members, and explanations were detailed on the website. However, as demand for the service grew, the need for dedicated staff led to the creation of the Open Security Foundation (OSF), founded as a non-profit organisation in 2005 to provide funding for security projects, primarily the OSVDB. The OSVDB closed in April 2016.
The U.S. National Vulnerability Database is a comprehensive cyber security vulnerability database formed in 2005 that reports on CVE. The NVD is a primary cyber security referral tool for individuals and industries alike providing informative resources on current vulnerabilities. The NVD holds in excess of 100,000 records. Similar to the OSVDB, the NVD publishes impact ratings and categorises material into an index to provide users with an intelligible search system. Other countries have their own vulnerability databases, such as the Chinese National Vulnerability Database and Russia's Data Security Threats Database.
A variety of commercial companies also maintain their own vulnerability databases, offering customers services which deliver new and updated vulnerability data in machine-readable format as well as through web portals. Examples include A.R.P. Syndicate's Exploit Observer, Symantec's DeepSight portal and vulnerability data feed, Secunia's vulnerability manager (purchased by Flexera), and Accenture's vulnerability intelligence service.
Exploit Observer uses its Vulnerability & Exploit Data Aggregation System (VEDAS) to collect exploits & vulnerabilities from a wide array of global sources, including Chinese and Russian databases.
Vulnerability databases advise organisations to develop, prioritize, and execute patches or other mitigations which attempt to rectify critical vulnerabilities. However, hasty patching can create additional susceptibilities, because patches are sometimes rushed out to thwart further system exploitations and violations. Depending upon their level, users and organisations warrant appropriate access to a vulnerability database, which provides them with disclosure of known vulnerabilities that may affect them. The justification for limiting access to individuals is to impede hackers from learning about corporate system vulnerabilities which could potentially be further exploited.
Use of vulnerability databases
Vulnerability databases contain a vast array of identified vulnerabilities. However, few organisations possess the expertise, staff, and time to review and remedy all potential system susceptibilities, so vulnerability scoring is used to quantitatively determine the severity of a system violation. A multitude of scoring methods exist across vulnerability databases, such as US-CERT's scale and the SANS Institute's Critical Vulnerability Analysis Scale, but the Common Vulnerability Scoring System (CVSS) is the prevailing technique for most vulnerability databases, including the OSVDB and the NVD. The CVSS is based upon three primary metrics, base, temporal, and environmental, each of which provides a vulnerability rating.
Base
This metric covers the immutable properties of a vulnerability such as the potential impact of the exposure of confidential information, the accessibility of information and the aftermath of the irretrievable deletion of information.
Temporal
The temporal metrics denote the mutable nature of a vulnerability for example the credibility of an exploitability, the current state of a system violation and the development of any workarounds that could be applied.
Environmental
This aspect of the CVSS rates the potential loss to individuals or organisations from a vulnerability. Furthermore, it details the primary target of a vulnerability ranging from personal systems to large organisations and the number of potentially affected individuals.
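To make the base metric concrete, the sketch below implements the CVSS v3.1 base-score formula for the common "scope unchanged" case. The numeric weights are the published CVSS v3.1 coefficients, while the function and variable names are my own; the official specification also defines its round-up rule more carefully to avoid floating-point artifacts, so treat this as an illustrative approximation rather than a reference implementation.

```python
import math

# Published CVSS v3.1 metric weights (scope unchanged).
ATTACK_VECTOR = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.2}
ATTACK_COMPLEXITY = {"low": 0.77, "high": 0.44}
PRIVILEGES_REQUIRED = {"none": 0.85, "low": 0.62, "high": 0.27}
USER_INTERACTION = {"none": 0.85, "required": 0.62}
CIA_IMPACT = {"none": 0.0, "low": 0.22, "high": 0.56}  # confidentiality/integrity/availability

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place' rule (simplified)."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, conf, integ, avail):
    """CVSS v3.1 base score for a vulnerability with unchanged scope."""
    iss = 1 - (1 - CIA_IMPACT[conf]) * (1 - CIA_IMPACT[integ]) * (1 - CIA_IMPACT[avail])
    impact = 6.42 * iss
    exploitability = 8.22 * (ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                             * PRIVILEGES_REQUIRED[pr] * USER_INTERACTION[ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A network-exploitable flaw needing no privileges, with full impact on
# confidentiality, integrity, and availability, scores 9.8 ("critical").
print(base_score("network", "low", "none", "none", "high", "high", "high"))
```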
The complication with utilising different scoring systems is that there is no consensus on the severity of a vulnerability, so different organisations may overlook critical system exploitations. The key benefit of a standardised scoring system like CVSS is that published vulnerability scores can be assessed, pursued, and remedied rapidly, and organisations and individuals alike can determine the personal impact of a vulnerability on their systems. The benefits that vulnerability databases offer consumers and organisations grow as information systems become increasingly embedded: as our dependency and reliance on those systems grows, so does the opportunity for data exploitation.
Common security vulnerabilities listed on vulnerability databases
Initial deployment failure
Although the functionality of a database may appear unblemished, without rigorous testing even small flaws can allow hackers to infiltrate a system's cyber security. Frequently, databases are published without stringent security controls; hence sensitive material is easily accessible.
SQL injection
Database attacks are the most recurrent form of cyber security breach recorded in vulnerability databases. SQL and NoSQL injections penetrate traditional information systems and big data platforms, respectively, by interpolating malicious statements that allow hackers unregulated system access.
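To make the attack concrete, the sketch below contrasts an injectable query with a parameterized one, using Python's standard sqlite3 module; the table, column names, and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the WHERE clause
# into  name = '' OR '1'='1',  which matches every row in the table.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(unsafe)  # [('alice', 'admin'), ('bob', 'user')]

# SAFE: a parameterized query treats the payload as an inert string value,
# so no row has that literal name and nothing is returned.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # []
```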
Misconfigured databases
Established databases ordinarily fail to implement crucial patches suggested by vulnerability databases, owing to excessive workloads and the need for exhaustive trialling to ensure the patches actually fix the defective system vulnerability. Database operators concentrate their efforts on major system deficiencies, which offers hackers unmitigated system access through neglected patches.
Inadequate auditing
All databases require audit trails to record when data is amended or accessed. When systems are created without the necessary auditing, the exploitation of system vulnerabilities is challenging to identify and resolve. Vulnerability databases promulgate the significance of audit tracking as a deterrent to cyber attacks.
Data protection is essential to any business, as personal and financial information is a key asset and the purloining of sensitive material can discredit a firm's reputation. The implementation of data protection strategies is imperative to guard confidential information. Some hold the view that it is the initial apathy of software designers that, in turn, necessitates the existence of vulnerability databases. If systems were devised with greater diligence, they might be impenetrable to SQL and NoSQL injections, making vulnerability databases redundant.
See also
Common Vulnerabilities and Exposures
Japan Vulnerability Notes
National Vulnerability Database
Open Source Vulnerability Database
Notes
References
Computer security exploits
Types of databases | Vulnerability database | Technology | 1,749 |
54,624,511 | https://en.wikipedia.org/wiki/Sol%C3%A8r%27s%20theorem | In mathematics, Solèr's theorem is a result concerning certain infinite-dimensional vector spaces. It states that any orthomodular form that has an infinite orthonormal set is a Hilbert space over the real numbers, complex numbers or quaternions. Originally proved by Maria Pia Solèr, the result is significant for quantum logic and the foundations of quantum mechanics. In particular, Solèr's theorem helps to fill a gap in the effort to use Gleason's theorem to rederive quantum mechanics from information-theoretic postulates. It is also an important step in the Heunen–Kornell axiomatisation of the category of Hilbert spaces.
Physicist John C. Baez notes, "Nothing in the assumptions mentions the continuum: the hypotheses are purely algebraic. It therefore seems quite magical that [the division ring over which the Hilbert space is defined] is forced to be the real numbers, complex numbers or quaternions." Writing a decade after Solèr's original publication, Pitowsky calls her theorem "celebrated".
Statement
Let K be a division ring. That means it is a ring in which one can add, subtract, multiply, and divide but in which the multiplication need not be commutative. Suppose this ring has a conjugation, i.e. an operation x ↦ x* for which (x + y)* = x* + y*, (xy)* = y*x*, and (x*)* = x for all x, y in K.
Consider a vector space V with scalars in K, and a mapping ⟨·,·⟩ : V × V → K
which is K-linear in the left (or in the right) entry, satisfying the identity ⟨u, v⟩ = ⟨v, u⟩* for all u, v in V.
This is called a Hermitian form. Suppose this form is non-degenerate in the sense that if ⟨u, v⟩ = 0 for every v in V, then u = 0.
For any subspace S, let S⊥ be the orthogonal complement of S, i.e. the set of all vectors orthogonal to every vector of S. Call the subspace "closed" if S = S⊥⊥.
Call this whole vector space, and the Hermitian form, "orthomodular" if for every closed subspace S we have that S + S⊥ is the entire space. (The term "orthomodular" derives from the study of quantum logic. In quantum logic, the distributive law is taken to fail due to the uncertainty principle, and it is replaced with the "modular law," or in the case of infinite-dimensional Hilbert spaces, the "orthomodular law.")
A set of vectors is called "orthonormal" if ⟨u, u⟩ = 1 for every vector u in the set and ⟨u, v⟩ = 0 for every pair of distinct vectors u, v in the set. The result is this:
If this space has an infinite orthonormal set, then the division ring of scalars is either the field of real numbers, the field of complex numbers, or the ring of quaternions.
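Collecting the definitions above, the theorem admits the following compact restatement in LaTeX; nothing here goes beyond the hypotheses already stated.

```latex
% Compact restatement of Solèr's theorem.
\begin{theorem}[Sol\`er]
Let $K$ be a division ring with an involution $x \mapsto x^{*}$, and let $V$
be a $K$-vector space with a nondegenerate Hermitian form
$\langle \cdot,\cdot \rangle \colon V \times V \to K$ that is orthomodular,
i.e.\ $S + S^{\perp} = V$ for every closed subspace $S = S^{\perp\perp}$.
If $V$ contains an infinite orthonormal sequence $(e_i)_{i\in\mathbb{N}}$,
with $\langle e_i, e_j \rangle = \delta_{ij}$, then $K$ is $\mathbb{R}$,
$\mathbb{C}$, or the quaternions $\mathbb{H}$, and
$\bigl(V, \langle\cdot,\cdot\rangle\bigr)$ is the corresponding Hilbert space.
\end{theorem}
```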
References
Hilbert spaces
Mathematical logic
Theorems in quantum mechanics | Solèr's theorem | Physics,Mathematics | 532 |
5,504,984 | https://en.wikipedia.org/wiki/Legal%20year | The legal year, in English law as well as in some other common law jurisdictions, is the calendar during which the judges sit in court. It is traditionally divided into periods called "terms".
Asia
Hong Kong
Hong Kong's legal year is marked as Ceremonial Opening of the Legal Year with an address by the Chief Justice of Hong Kong and begins in January.
Taiwan
The start of the legal year for courts in Taiwan is referred to as Judicial Day and marked in early January.
Europe
England
In England, the year is divided into four terms:
Michaelmas term - from October to December
Hilary term - from January to April
Easter term - from April to May
Trinity term - from June to July.
Between terms, the courts are in vacation, and no trials or appeals are heard in the High Court, Court of Appeal and Supreme Court. The legal terms apply to the High Court, Court of Appeal and Supreme Court only, and so have no application to the Crown Court, County Court, or magistrates' courts. The longest vacation period is between July and October. The dates of the terms are determined in law by a practice direction in the Civil Procedure Rules. The Hilary term was formerly from the 11th to the 31st of January, during which the superior courts of England were open.
The legal year commences at the beginning of October, with a ceremony dating back to the Middle Ages in which the judges arrive in a procession from the Temple Bar to Westminster Abbey for a religious service, followed by a reception known as the Lord Chancellor's breakfast, which is held in Westminster Hall. Although in former times the judges walked the distance from Temple to Westminster, they now mostly arrive by car. The service is held by the Dean of Westminster with the reading performed by the Lord Chancellor.
The ceremony dates back to 1897 and has been held continuously since with the exception of the years 1940 to 1946 because of the Second World War and 2020 because of the COVID-19 pandemic. In 1953 it was held in St Margaret's Church because Westminster Abbey was still decorated for the Coronation of Queen Elizabeth II.
Ireland
In Ireland, the year is divided as per the English system, with identical Michaelmas, Hilary, Easter and Trinity terms. These have a Christmas, Easter, Whit and Long Vacation between them respectively. The Michaelmas term, and legal year, is opened with a service in St. Michan's Church, Dublin attended by members of the Bar and Law Society who then adjourn to a breakfast given in the King's Inns.
France
In France, a rentrée solennelle, a ceremonial sitting of the court, is held in most courts in September to swear in new judges and in January or February, to mark the start of the legal year. New judges may also be sworn in at that event. Bar associations (barreaux), especially larger ones, may also hold a rentrée solennelle, but often at a completely different time of the year to the court-organised official ceremonies, such as in November.
French courts do not sit in a formal term structure, although the practice of vacances judiciaires (legal vacations) between July and the end of August, in late December around Christmas and New Year's and, to a lesser extent, Easter, mean that courts often do not sit to hear non-urgent business during those times, creating, de facto, three legal terms each year.
North America
Canada
Courts in Canada do not have formal terms. They are open year-round but tend to be less busy over the summer months. There is a formal opening of the courts in Ontario in September.
United States
The United States Supreme Court follows part of the legal year tradition, albeit without the elaborate ceremony. The court's year-long term commences on the first Monday in October (and is simply called "October Term"), with a Red Mass the day before. The court then alternates between "sittings" and "recesses" and goes into final recess at the end of June.
Several Midwest and East Coast states and some federal courts still use the legal year and terms of court. Like the Supreme Court, the U.S. Court of Appeals for the Second Circuit has a single year-long term with designated sittings within that term, although the Second Circuit begins its term in August instead of October (hence the name "August Term"). The U.S. Tax Court divides the year into four season-based terms starting in January.
Connecticut appellate courts divide the legal year into eight terms starting in September. New York courts divide the year into 13 terms starting in January. The Georgia Court of Appeals uses a three-term year starting in January. The Illinois Supreme Court divides the year into six terms starting in January.
Several states, like Ohio and Mississippi, do not have a uniform statewide rule for terms of court, so the number of terms varies greatly from one court to the next because every single court sets forth its own terms of court in its local rules.
However, the majority of U.S. states and most federal courts have abandoned the legal year and the related concept of terms of court. Instead, they reverse the presumption. They merely mandate that the courts are to be open year-round during business hours on every day that is not Saturday, Sunday, or a legal holiday. A typical example is Rule 77(c)(1) of the Federal Rules of Civil Procedure, which states that "The clerk's office ... must be open during business hours every day except Saturdays, Sundays, and legal holidays." Furthermore, 28 U.S.C. § 452 states: "All courts of the United States shall be deemed always open for the purpose of filing proper papers, issuing and returning process, and making motions and orders."
References
See also
Law Terms Act 1830
Fiscal year
Further reading
External links
The legal year, term dates and sitting days 2024 and 2025 | Courts and Tribunals Judiciary
Practice Direction setting out term dates
English law
Calendars | Legal year | Physics | 1,214 |
54,240,905 | https://en.wikipedia.org/wiki/Hollow%20cathode%20effect | The hollow cathode effect allows a cold-cathode gas-discharge lamp whose cathode is a conductive tube open at one end to conduct electricity at a lower voltage or with more current than a similar lamp with a flat cathode. The hollow cathode effect was recognized by Friedrich Paschen in 1916.
In a hollow cathode, the electron-emitting surface is on the inside of the tube. Several processes contribute to the enhanced performance of a hollow cathode:
The pendulum effect, where an electron oscillates back and forth in the tube, creating secondary electrons along the way
The photoionization effect, where photons emitted in the tube cause further ionization
Stepwise ionization
Sputtering
The hollow cathode effect is utilized in the electrodes for neon signs, in hollow-cathode lamps, and more.
References
Atomic physics
Electrodes
Gas discharge lamps | Hollow cathode effect | Physics,Chemistry | 187 |
77,572,591 | https://en.wikipedia.org/wiki/NGC%205630 | NGC 5630 is a barred spiral galaxy in the constellation of Boötes. Its velocity with respect to the cosmic microwave background is 2826 ± 11 km/s, which corresponds to a Hubble distance of 41.68 ± 2.92 Mpc (∼136 million light-years). It was discovered by German-British astronomer William Herschel on 9 April 1787.
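The quoted distance is Hubble's law applied to the CMB-frame velocity. The figures above imply a Hubble constant of roughly 67.8 km/s/Mpc; that value is inferred from the numbers rather than stated in the source.

```latex
d \;=\; \frac{v}{H_0}
  \;=\; \frac{2826~\mathrm{km\,s^{-1}}}{67.8~\mathrm{km\,s^{-1}\,Mpc^{-1}}}
  \;\approx\; 41.7~\mathrm{Mpc}
  \;\approx\; 1.36\times10^{8}~\mathrm{ly}.
```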
NGC 5630 is listed as a field galaxy, i.e. one that does not belong to a larger galaxy group or cluster and hence is gravitationally alone.
Supernovae
Three supernovae have been observed in NGC 5630:
SN 2005dp (type II, mag. 16) was discovered by Kōichi Itagaki on 27 August 2005.
SN 2006am (type IIn, mag. 18.5) was discovered by the Lick Observatory Supernova Search (LOSS) on 22 February 2006.
SN 2023zdx (Type II-P, mag. 17) was discovered by ATLAS on 8 December 2023.
See also
List of NGC objects (5001–6000)
References
External links
5630
051635
+07-30-014
09270
14256+4128
Boötes
17870409
Discoveries by William Herschel
Barred spiral galaxies
Field galaxies | NGC 5630 | Astronomy | 257 |
217,913 | https://en.wikipedia.org/wiki/H%E1%BA%A1%20Long%20Bay | Hạ Long Bay or Halong Bay is a UNESCO World Heritage Site and popular travel destination in Quảng Ninh province, Vietnam. The name Hạ Long means "descending dragon". Administratively, the bay belongs to Hạ Long city and Cẩm Phả city, and is a part of Vân Đồn district. The bay features thousands of limestone karsts and islets in various shapes and sizes. Hạ Long Bay is a center of a larger zone that includes Bai Tu Long Bay to the northeast and Cát Bà Island to the southwest. These larger zones share similar geological, geographical, geomorphological, climatic, and cultural characteristics.
Hạ Long Bay has an area of around 1,553 km², including 1,969 islets, most of which are limestone. The core of the bay has an area of 334 km² with a high density of 775 islets. The limestone in this bay has gone through 500 million years of formation in different conditions and environments. The evolution of the karst in this bay has taken 20 million years under the impact of the tropical wet climate. The geo-diversity of the environment in the area has created biodiversity, including a tropical evergreen ecosystem and a seashore ecosystem. Hạ Long Bay is home to 14 endemic floral species and 60 endemic faunal species.
Historical research surveys have shown the presence of prehistoric human beings in this area tens of thousands of years ago. The successive ancient cultures are the Soi Nhụ culture around 18,000–7,000 BC, the Cái Bèo culture 7,000–5,000 BC and the Hạ Long culture 5,000–3,500 years ago. Hạ Long Bay also marked some important events in Vietnamese history, with many artifacts found in Bài Thơ mountain, Đầu Gỗ cave, and Bãi Cháy.
Nguyễn Trãi praised the beauty of Hạ Long Bay 500 years ago in a verse in which he called it "a rock wonder in the sky". In 1962, the Ministry of Culture, Sports and Tourism of North Vietnam listed Hạ Long Bay in the National Relics and Landscapes publication. In 1994, the core zone of Hạ Long Bay was listed as a World Heritage Site under Criterion VII, and in 2000 it was listed for a second time under Criterion VIII.
Etymology
The name Hạ Long (chữ Hán: 下龍) means "descending dragon".
Before the 19th century, the name Hạ Long Bay had not been recorded in the old books of the country; the bay had been called by other names such as An Bang, Lục Thủy, and Vân Đồn. In the late 19th century, the name Hạ Long Bay appeared on the Maritime Map of France, and the French-language Hai Phong News reported "Dragon appears on Hạ Long Bay".
According to local legend, when Vietnam had just started to develop into a country, the Vietnamese had to fight against invaders. To assist them in defending their country, the gods sent a family of dragons as protectors. This family of dragons began spitting out jewels and jade, which turned into the islands and islets dotting the bay, linking together to form a great wall against the invaders. As if by magic, numerous rocky mountains abruptly appeared in the sea ahead of the invaders' ships; the forward ships struck the rocks and each other. After winning the battle, the dragons were interested in peaceful sightseeing of the Earth, and decided to live in this bay. The place where the mother dragon descended was named Hạ Long; the place where the dragon's children attended upon their mother was called Bái Tử Long island (Bái: attend upon, Tử: children, Long: dragon); and the place where the dragon's children wriggled their tails violently was called Bạch Long Vĩ island (Bạch: white, for the foam made when the dragon's children wriggled; Long: dragon; Vĩ: tail), the present-day Tra Co peninsula, Móng Cái.
Overview
The bay consists of a dense cluster of some 1,600 limestone monolithic islands, each topped with thick jungle vegetation, rising spectacularly from the ocean. Several of the islands are hollow, with enormous caves. Hang Dau Go (Wooden Stakes cave) is the largest grotto in the Hạ Long area. French tourists visited in the late 19th century and named the cave Grotte des Merveilles. Its three large chambers contain numerous large stalactites and stalagmites (as well as 19th-century French graffiti). Two of the bigger islands, Tuần Châu and Cát Bà, have permanent inhabitants, as well as tourist facilities including hotels and beaches. There are a number of beautiful beaches on the smaller islands.
A community of around 1,600 people live on Hạ Long Bay in four fishing villages: Cua Van, Ba Hang, Cong Tau and Vong Vieng in Hung Thang ward, Hạ Long city. They live on floating houses and are sustained through fishing and marine aquaculture (cultivating marine biota), plying the shallow waters for 200 species of fish and 450 different kinds of mollusks. Many of the islands have acquired their names as a result of their unusual shapes. Such names include Voi Islet (elephant), Ga Choi Islet (fighting cock), Khi Islet (monkey), and Mai Nha Islet (roof). 989 of the islands have been given names. Birds and land animals including bantams, antelopes, monkeys, and lizards also live on some of the islands.
Almost all these islands stand as individual towers in a classic fenglin landscape, with heights ranging from 50 to 100 m and height/width ratios of up to about six.
Another specific feature of Hạ Long Bay is the abundance of lakes inside the limestone islands. For example, Dau Be island has six enclosed lakes. All these island lakes occupy drowned dolines within fengcong karst.
Location
Hạ Long Bay is located in northeastern Vietnam, from E106°55' to E107°37' and from N20°43' to N21°09'. The bay stretches from Quang Yen town, past Hạ Long city and Cẩm Phả city, to Vân Đồn District. It is bordered on the south and southeast by Lan Ha Bay, on the north by Hạ Long city, and on the west by Bai Tu Long Bay. The bay has a 120 km coastline and is approximately 1,553 km² in size, with about 2,000 islets. The area designated by UNESCO as the World Natural Heritage Site incorporates 775 islets, of which the core zone is delimited by 69 points: Dau Go island on the west, Ba Ham lake on the south and Cong Tay island on the east. The protected area extends from the Cái Dăm petrol store to Quang Hanh ward, Cẩm Phả city, and the surrounding zone.
Climate
The climate of the bay is tropical, wet, and maritime, with two seasons: a hot and moist summer, and a dry and cold winter. The average temperature ranges from 15 °C to 25 °C, and annual rainfall is between 2,000 mm and 2,200 mm. Hạ Long Bay has a typical diurnal tide system (tide amplitude ranges from 3.5 to 4 m). The salinity is from 31 to 34.5‰ in the dry season and lower in the rainy season.
Population
Of the 1,969 islands in Hạ Long, only approximately 40 are inhabited. These islands range from tens to thousands of hectares in size, mainly in the east and southeast of Hạ Long Bay. In recent decades, thousands of villagers have settled on these formerly pristine islands, building new communities such as Sa Tô Island (Hạ Long City) and Thắng Lợi Island (Vân Đồn district).
The population of Hạ Long Bay is about 1,540, mainly in Cửa Vạn, Ba Hang and Cặp Dè fishing villages (Hùng Thắng Ward, Hạ Long City). Residents of the bay mostly live on boats and rafts buoyed by tires and plastic jugs to facilitate the fishing, cultivating and breeding of aquatic and marine species. Fish require feeding every other day for up to three years, when they are eventually sold to local seafood restaurants for up to 300,000 Vietnamese dong per kilogram. Today, the lives of Hạ Long Bay inhabitants have much improved due to new travel businesses. Residents of the floating villages around Hạ Long Bay now offer bedrooms for rent, boat tours, and fresh seafood meals to tourists. While this is an isolating, back-breaking lifestyle, floating village residents are considered wealthy compared to residents of other Hạ Long Bay islands.
At present, the Quảng Ninh provincial government has a policy of relocating the households living in the bay ashore, in order to stabilize their lives and to protect the landscape of the heritage zone. Since May 2014, more than 300 households from fishing villages in Hạ Long Bay have been resettled in the Khe Cá resettlement area, now known as Zone 8 (Hà Phong Ward, Hạ Long City). This project will continue to be implemented; the province will retain only a small number of fishing villages for sightseeing tours.
History
Soi Nhụ culture (16,000–5000 BC)
Located within Hạ Long and Bái Tử Long are archaeological sites such as Mê Cung and Thiên Long. There are remains from mounds of mountain shellfish (Cyclophorus), spring shellfish (Melania, also called Thiana), some freshwater molluscs and some rudimentary labour tools. The main way of life of Soi Nhụ's inhabitants included catching fish and shellfish, collecting fruits and digging for bulbs and roots. Their living environment was a coastal area, unlike that of other Vietnamese cultures, for example those found in Hòa Bình and Bắc Sơn.
Cái Bèo culture (5000–3000 BC)
Located in Hạ Long and on Cát Bà island, its inhabitants developed to the level of sea exploitation. The Cái Bèo culture is a link between the Soi Nhụ culture and the Hạ Long culture.
Hạ Long culture (2500–1500 BC)
Classical period
Hạ Long Bay was the setting for historical naval battles against Vietnam's coastal neighbors. On three occasions, in the labyrinth of channels in the Bạch Đằng River near the islands, the Vietnamese army stopped Chinese invaders from landing. In 1288, General Trần Hưng Đạo stopped Mongol ships from sailing up the nearby Bạch Đằng River by placing steel-tipped wooden stakes at high tide, sinking Kublai Khan's Mongol fleet.
Modern period
Hạ Long Bay was the site of the first ever raising of the new national flag of the Provisional Central Government of Vietnam on 5 June 1948 during the signing of the Halong Bay Agreements (Accords de la baie d’Along) by High Commissioner Emile Bollaert and President Nguyễn Văn Xuân.
During the Vietnam War, many of the channels between the islands were heavily mined by the United States Navy, some of which continue to pose threats to shipping routes in the present day.
Geology and geomorphology
In 2000, UNESCO's World Heritage Committee inscribed Hạ Long Bay in the World Heritage List for its outstanding examples representing major stages of the Earth's history and its original limestone karstic geomorphologic features. Hạ Long Bay and its adjacent areas consist of a part of the Sino-Vietnamese composite terrane, with a development history stretching from the pre-Cambrian up to the present day. During the Phanerozoic, terrigenous, volcanogenic and cherty-carbonate sediments were deposited, containing abundant graptolites, bivalves, brachiopods, fishes, foraminiferans, corals, radiolarians, and flora; these deposits are separated from one another by 10 stratigraphic gaps, though the boundary between the Devonian and Carboniferous is considered continuous. The limestone karstic geomorphology of the bay has developed since the Miocene, especially the cone-shaped hills (fengcong) and isolated high limestone karst towers (fenglin) with many remnants of old phreatic caves, old karstic foot caves, and marine notch caves, forming magnificent limestone karst landforms unique in the world. The Quaternary geology developed through 5 cycles with the intercalation of marine and continental environments. The present Hạ Long Bay appeared after the Middle Holocene maximum transgression, which left an ultimate zone of lateral undercutting in the limestone cliffs bearing many shells of oysters, with 14C ages from 2,280 to more than 40,000 years BP. Geological resources are abundant, including anthracite, petroleum, lignite, phosphate, oil shale, limestone and cement additives, kaolin, silica sand, dolomite, and quartzite of exogenous origin, as well as antimony and mercury of hydrothermal origin. Additionally, there is surface water, groundwater and thermal mineral water on the shore of the Hạ Long – Bái Tử Long Bays, as well as other environmental resources.
In terms of marine geology, this area is recorded as a distinctive coastal sedimentary environment. In the alkaline seawater environment, the chemical denudation of calcium carbonate proceeds rapidly, creating wide, strangely shaped marine notches.
The bottom surface sediments vary from clay mud to sand; however, silty mud and clay mud dominate in distribution. Carbonate materials originating from organisms make up 60 to 65% of the sediment content. The surface sediments of coral reefs are mainly sand and pebbles, of which the carbonate materials account for more than 90%. The intertidal zone sediments vary from clay mud to sand and gravel, depending on distinct sedimentary environments such as mangrove marshes, tidal flats, and beaches. At the small beaches, the sand sediments may be dominated by quartz or carbonate materials.
The sediment layers of the intertidal zone, the upper sea bed with a plain surface conserving ancient rivers, the systems of caves and their sediments, traces of ancient marine action forming distinctive notches, beaches and marine terraces, and mangrove swamps are important evidence of geological events and processes taking place during the Quaternary Period.
Karst geomorphology value
Due to a simultaneous combination of ideal factors, such as thick, pale grey, strong limestone layers formed from fine-grained materials, a hot and moist climate, and slow tectonic processes as a whole, Hạ Long Bay has undergone a complete karst evolution over 20 million years. There are many types of karst topography in the bay, including karst plains, fengcong (clustered cone) karst, and fenglin (tower) karst.
Hạ Long Bay is a mature karst landscape developed during a warm, wet, tropical climate. The sequence of stages in the evolution of a karst landscape over a period of 20 million years requires a combination of several distinct elements, including a massive thickness of limestone, a hot wet climate and slow overall tectonic uplift. The process of karst formation is divided into five stages, the second of which is the formation of the distinctive doline karst. This is followed by the development of fengcong karst, which can be seen in the groups of hills on Bo Hon and Dau Be Island. These cones with sloping sides average 100 m in height, with the tallest exceeding 200 m. Fenglin karst is characterized by steep separate towers. The hundreds of rocky islands that form the beautiful and famous landscape of the bay are the individual towers of a classic fenglin landscape where the intervening plains have been submerged by the sea. Most towers reach a height of between 50 and 100 m, with a height to width ratio of about 6. The karst dolines were flooded by the sea, becoming the abundance of lakes that lie within the limestone islands. For example, Dau Be island at the mouth of the bay has six enclosed lakes, including the Ba Ham lakes, lying within its fengcong karst. The bay contains examples of the landscape elements of fengcong, fenglin and karst plain. These are not separate evolutionary stages but the result of natural non-uniform processes in the denudation of a large mass of limestone. Marine erosion created the notches, which in some places have been enlarged into caves. The marine notch is a feature of limestone coastlines generally, but in Hạ Long Bay it has helped create the mature landscape.
Within Hạ Long Bay, the main accessible caves are the older passages that survive from the time when the karst was evolving through its various stages of fengcong and fenglin. Three main types of caves can be recognized in the limestone islands (Waltham, T. 1998):
Remnants of old phreatic caves
Old karstic foot caves
Marine notch caves
The first group consists of old phreatic caves, which include Sung Sot, Tam Cung, Lau Dai, Thien Cung, Dau Go, Hoang Long, and Thien Long. Nowadays, these caves lie at various heights. Sung Sot cave is on Bo Hon Island. From its truncated entrance chambers on a ledge high on the cliff, a passage more than 10 m high and wide descends to the south. Tam Cung is a large phreatic fissure cave developed along the bedding planes of the limestone, which divide the fissure cave into three chambers. Lau Dai is a cave with a complex of passages extending over 300 meters on the south side of Con Ngua Island. Thien Cung and Dau Go are remnants of the same old cave system. They both survive in the northern part of Dau Go Island, at between 20 and 50 m above sea level. Thien Cung has one large chamber more than 100 m long, blocked at its ends and almost subdivided into smaller chambers by a massive wall of stalactites and stalagmites. Dau Go is a single large tunnel descending along a major set of fractures to a massive choke.
The second group of caves consists of the old karstic foot caves, which include Trinh Nu, Bo Nau, Tien Ong and Trong caves. Foot caves are a ubiquitous feature of karst landscapes which have reached a stage of widespread lateral undercutting at base level. They may extend back into maze caves or stream caves draining from larger cave systems within the limestone. They are distinguished by the main elements of their passages being close to horizontal, and are commonly related to denuded or accumulated terraces at the old base levels. Trinh Nu, one of the larger foot caves in Hạ Long Bay, with its ceiling at about 12 m above sea level and about 80 m in length, was developed in multiple stages. Bo Nau, a horizontal cave containing old stalactite deposits, cuts across the 25° dip of the bedding plane.
The third group is the marine notch caves, which are a special feature of the karst of Hạ Long Bay. The dissolution of the limestone by sea water, together with erosion by wave action, creates notches at the base of the cliffs. In advantageous conditions, dissolution of the limestone allows the cliff notches to be steadily deepened and extended into caves. Many of these at sea level extend right through the limestone hills into drowned dolines which are now tidal lakes.
A distinguishing feature of marine notch caves is an absolutely smooth and horizontal ceiling cut through the limestone. Some marine notch caves were not formed at the present sea level, but at older sea levels related to sea-level changes during the Holocene transgression, and even to Pleistocene sea levels. Some preserved the development of old karstic foot caves in mainland environments, or preserved the remnants of older phreatic caves. One of the most unusual features of Hạ Long Bay is the Ba Ham lake group of hidden lakes and their connecting tunnel-notch caves in Dau Be Island. From the island's perimeter cliff, a cave 10 m wide at water level, curving so that it is almost completely dark, extends about 150 m to Lake 1. Luon Cave is on Bo Hon Island and extends 50 meters to an enclosed tidal lake. It has a massive stalactite hanging 2 m down, truncated at the modern tidal level, and has passed through many stages in its formation.
The karst landscape of Hạ Long Bay is of international significance and of fundamental importance to the science of geomorphology. The fenglin tower karst, which is the type present in much of Hạ Long Bay, is the most extreme form of limestone landscape development. If these karst landscapes are broadly compared in terms of the height, steepness and number of their limestone towers, Hạ Long Bay is probably second in the entire world only to Yangshuo, in China. However, Hạ Long Bay has also been invaded by the sea, so that the geomorphology of its limestone islands is, at least in part, the consequence of marine erosion. The marine invasion distinguishes Hạ Long Bay and makes it unique in the world. There are other areas of submerged karst towers which were invaded by the sea, but none is as extensive as Hạ Long Bay.
Timeline of geologic evolution
Some of the most remarkable geological events in Hạ Long Bay's history have occurred in the last 1,000 years, including the advance of the sea, the uplift of the bay area, and strong erosion that has formed coral and pure blue, heavily salted water. This process of erosion by seawater has deeply engraved the stone, contributing to its fantastic beauty. Present-day Hạ Long Bay is the result of this long process of geological evolution, influenced by many factors.
Due to all these factors, tourists visiting Hạ Long Bay are not only treated to one of the natural wonders of the world, but also to a precious geological museum that has been naturally preserved in the open air for the last 300 million years.
Ecology
Two ecosystems coexist in Ha Long Bay: a tropical, moist, evergreen rainforest ecosystem and a marine and coastal ecosystem. Livistona halongensis, Impatiens halongensis, Chirita halongensis, Chirita hiepii, Chirita modesta, Paraboea halongensis, and Alpinia calcicola are among the seven endemic species found in the bay. There is also some bioluminescent plankton.
The many islands that dot the bay are home to a great many other species, including (but likely not limited to): 477 magnoliales, 12 pteris, 20 salt marsh flora; and 4 amphibia, 10 reptilia, 40 aves, and 4 mammalia.
Common aquatic species found in the bay include: cuttlefish (mực); oyster (hào); cyclinae (ngán); prawns (penaeidea (tôm he), panulirus (tôm hùm), parapenaeopsis (tôm sắt), etc.); sipunculoideas (sá sùng); nerita (ốc đĩa); charonia tritonis (ốc tù và); and cà sáy. A new species of sponge, Cladocroce pansinii, was discovered in underwater caves attached to the bay in 2023.
Environmental damage
With an increasing tourist trade, mangroves and seagrass beds have been cleared and jetties and wharves have been built for tourist boats.
Game fishing, often near coral reefs, is threatening many endangered species of fish.
Local government and businesses are aware of the problems, and many measures have been taken to minimise the impact of tourism on the bay environment and support sustainable economic growth, such as introducing eco-friendly tours and tight waste control at resorts.
Awards and designations
In 1962, the Vietnam Ministry of Culture, Sport and Tourism designated Hạ Long Bay a 'Renowned National Landscape Monument'.
Hạ Long Bay was first listed as a UNESCO World Heritage Site in 1994, in recognition of its outstanding, universal aesthetic value. In 2000, the World Heritage Committee additionally recognised Hạ Long Bay for its outstanding geological and geomorphological value, and its World Heritage Listing was updated.
In October 2011, the World Monuments Fund included the bay on the 2012 World Monuments Watch, citing pressure from tourism and associated developments as threats to the site that must be addressed.
In 2012, the New 7 Wonders Foundation officially named Hạ Long Bay as one of the New 7 Wonders of Nature.
Hạ Long Bay is also a part of the Club of the Most Beautiful Bays of the World.
Popular culture
Literature
The following Vietnamese writers have written about Hạ Long Bay:
Nguyễn Trãi: "This wonder is ground raising up into the middle of the high sky".
Xuân Diệu: "Here is the unfinished works of the Beings...Here is the stones which the Giant played and threw away".
Nguyên Ngọc: "...to form this first-rate wonder, nature only uses: Stone and Water...There are just only two materials themselves chosen from as much as materials, in order to write, to draw, to sculpture, to create everything...It is quite possible that here is the image of the future world".
Ho Chi Minh: "It is the wonder that one cannot impart to others".
Phạm Văn Đồng: "Is it one scenery or many sceneries? Is it the scenery in the world or somewhere?".
Nguyễn Tuân: "Only mountains accept to be old, but Hạ Long sea and wave are young forever".
Huy Cận: "Night breathes, stars wave Hạ Long's water".
Chế Lan Viên:
"Hạ Long, Bái Tử Long - Dragons were hidden, only stones still remain
On the moonlight nights, stones meditate as men do..."
Lord Trịnh Cương overflowed with emotion: "Mountains are glistened by water shadow, water spills all over the sky".
Ancient tales
The inhabitants of the bay and its adjacent city have transmitted numerous ancient tales explaining names given to various isles and caves in the bay.
Đầu Gỗ cave ("the end of wooden bars" cave): the wooden bars in this cave are the remnants of sharpened wooden stakes planted below the water level by the order of commander Trần Hưng Đạo in order to sink Mongol invaders' ships in the 13th century.
Kim Quy cave ("Golden Turtle" cave): it is told that the Golden Turtle swam toward the Eastern Sea (international name: South China Sea) after returning the holy sword which had assisted King Lê Thái Tổ in the fight against Ming invaders from China. Then, with the approval of the Sea King, the Golden Turtle continued to fight monsters in this marine area. The turtle eventually became exhausted and died in a cave; consequently, the cave was named after the Golden Turtle.
Con Cóc islet (Frog islet): a frog-like isle. According to ancient tales, in a year of severe drought, a frog led all the animals to Heaven to protest to the God, demonstrating in favour of making rain. As a result, the God had to accept the frog as his uncle. Since then, whenever frogs grind their teeth, the God has had to pour water down to the ground.
Hang Trống and Hang Trinh Nữ (Male cave and Virgin cave): the tale is about a beautiful woman who had fallen in love with a fisherman who had to sail to sea not long after their engagement. A landlord saw the girl and captured her, but because she resisted him, he exiled her to a remote island. Left to starve, the girl died and turned to stone in the cave people now call Hang Trinh Nữ (Virgin cave). Her betrothed rowed to the girl's island, and when he found out what had happened, he turned to stone in a nearby cave, called Hang Trống (Male cave).
Thiên Cung cave (literally: Paradise cave): this cave is one of the places associated with the ancient dragon king. It is told that Thiên Cung cave was the place where the Dragon King's seven-day wedding took place. To congratulate the couple, many dragons and elephants came to dance and fly.
Conservation issues
Impacts of human and natural factors on the bay area
Ha Long, Hai Phong, and Hanoi are significant urban centers driving economic development in northern Vietnam. The economic growth in these urban areas, coupled with the rapid rise of the southern regions of China, including Hong Kong, has led to increasing human pressures on Ha Long Bay. The coastal areas of Quang Ninh province and Hai Phong City have experienced rapid growth in infrastructure development, particularly in transportation, shipping, coal mining, and tourism-related industries. Since 1999, the Asian Development Bank (ADB) has warned that constructing new ports in the Hạ Long Bay area could lead to increased maritime traffic in the region, posing threats to the bay's infrastructure and the social infrastructure supporting tourism. Pollution from industrial waste, overexploitation, and overfishing also pose significant threats. Some argue that development in the bay area needs cautious consideration through effective management structures, given its crucial environmental significance for the entire region.
Currently, the expansion of urban areas and population growth, construction of ports and factories, tourism and service activities, household and industrial waste, fishing and aquaculture practices, have not only become threats but have also caused alarming levels of environmental pollution and landscape changes in Ha Long Bay. Due to pollution, the once thriving coral reefs in the deep sea of Ha Long Bay are deteriorating. The formerly clear waters of the bay are increasingly becoming turbid and sedimented, prompting scientists to warn of the possibility of Ha Long Bay becoming "swamped." Additionally, as Ha Long Bay is surrounded by thousands of limestone islands, which are mostly good construction materials, they are susceptible to private exploitation, leading to landscape distortion.
On another aspect, global climate change with rising sea levels will strongly impact the landscape, island systems, caves, and biodiversity of Ha Long Bay. Vietnam currently lacks the necessary human and material resources to adequately respond to these challenges.
In terms of community culture, an issue that many international tourists have complained about is the lack of environmental awareness among both visitors and local communities. The modern, civilized, and courteous image of Ha Long tourism has not been fully established as desired. There are still instances of beggars harassing tourists, which affects the tourism environment of the heritage site. Efforts in education and propaganda to raise awareness among the local population, restrictions on resort development on islands, and the implementation of eco-tourism standards and heritage conservation regulations for the surrounding waters of the heritage site are significant challenges for the local government. The stalactites in the cave system of Ha Long Bay have been vandalized, cut, and taken away for use in decorating artificial landscapes (2016). Some caves have even been covered with concrete to serve as banquet venues. Moreover, the activities of fishing boats and tourists also generate significant amounts of waste that the authorities have yet to effectively manage.
Conservation efforts
In an effort to prevent the negative impact of human activities on the natural environment of Ha Long Bay, the authorities of Quang Ninh province have prohibited high-speed motorboats serving tourists in the bay area to protect the environment and biodiversity. Additionally, the province has relocated fishing households living on floating villages to the mainland to protect the water environment of Ha Long Bay. Furthermore, the extraction of coal and stone within the heritage area has been banned to prevent coal and mud pollution in the bay as advised by UNESCO. In the bay area, some local residents have voluntarily taken action to preserve the landscape by organizing volunteer groups to collect and handle waste. Starting from September 1, 2019, the People's Committee of Ha Long City strictly banned the use of single-use plastic products in the bay area. This is a resolute and significant step towards conserving the bay's environment.
The similarity in landscape, geology, biodiversity, as well as cultural and archaeological values of the entire region, including not only Ha Long Bay but also Cat Ba Archipelago and Bai Tu Long Bay, has led to scientific research in geology, archaeology, culture, and tourism, as well as fishing activities, extending beyond the boundaries of Ha Long Bay. Some experts suggest considering the expansion of the conservation area, not only limiting it to the small area of Ha Long Bay but also encompassing the surrounding sea area, including the areas close to the Vietnam-China border. With a length of about 300 km and a width of about 60 km, the entire area can be seen and conserved as a unique marine ecosystem of Vietnam.
See also
El Nido
Guilin
Krabi
Matsushima
References
External links
Official website
The Emerald Isles of Hạ Long Bay at NASA Earth Observatory, May 7, 2022
Environmental capacity Hạ Long Bay – Bai Tu Long. Publisher: Natural Science and Technology. Hanoi. Editor: Nguyen Khoa Son, – in Vietnamese
Vietnamese Sea and Islands – position Resources, and typical geological and ecological wonders. Publisher Science and Technology. Ha Noi, Editor: Nguyen Khoa Son, – in Vietnamese
Articles containing video clips
ASEAN heritage parks
Bays of Vietnam
Fishing communities in Vietnam
Gulf of Tonkin
Landforms of Quảng Ninh province
Places with bioluminescence
Tourist attractions in Quảng Ninh province
World Heritage Sites in Vietnam | Hạ Long Bay | Chemistry,Biology | 6,720 |
1,730,553 | https://en.wikipedia.org/wiki/Milliradian | A milliradian (SI-symbol mrad, sometimes also abbreviated mil) is an SI derived unit for angular measurement which is defined as a thousandth of a radian (0.001 radian). Milliradians are used in adjustment of firearm sights by adjusting the angle of the sight compared to the barrel (up, down, left, or right). Milliradians are also used for comparing shot groupings, or to compare the difficulty of hitting different sized shooting targets at different distances. When using a scope with both mrad adjustment and a reticle with mrad markings (called an "mrad/mrad scope"), the shooter can use the reticle as a ruler to count the number of mrads a shot was off-target, which directly translates to the sight adjustment needed to hit the target with a follow-up shot. Optics with mrad markings in the reticle can also be used to make a range estimation of a known size target, or vice versa, to determine a target size if the distance is known, a practice called "milling".
Milliradians are generally used for very small angles, which allows for very accurate mathematical approximations to more easily calculate with direct proportions, back and forth between the angular separation observed in an optic, linear subtension on target, and range. In such applications it is useful to use a unit for target size that is a thousandth of the unit for range, for instance by using the metric units millimeters for target size and meters for range. This coincides with the definition of the milliradian where the arc length is defined as 1/1000 of the radius. A common adjustment value in firearm sights is 1 cm at 100 meters, which equals 10 mm / 100 m = 0.1 mrad.
The true definition of a milliradian is based on a unit circle with a radius of one and an arc divided into 1,000 mrad per radian, hence 2,000 π or approximately 6,283.185 milliradians in one turn, and rifle scope adjustments and reticles are calibrated to this definition. There are also other definitions used for land mapping and artillery which are rounded to more easily be divided into smaller parts for use with compasses, which are then often referred to as "mils", "lines", or similar. For instance there are artillery sights and compasses with 6,400 NATO mils, 6,000 Warsaw Pact mils or 6,300 Swedish "streck" per turn instead of 360° or 2π radians, achieving higher resolution than a 360° compass while also being easier to divide into parts than if true milliradians were used.
History
The milliradian (approximately 6,283.185 in a circle) was first used in the mid-19th century by Charles-Marc Dapples (1837–1920), a Swiss engineer and professor at the University of Lausanne. Degrees and minutes were the usual units of angular measurement but others were being proposed, with "grads" (400 gradians in a circle) under various names having considerable popularity in much of northern Europe. However, Imperial Russia used a different approach, dividing a circle into equilateral triangles (60° per triangle, 6 triangles in a circle) and hence 600 units to a circle.
Around the time of the start of World War I, France was experimenting with the use of millièmes or angular mils (6400 in a circle) for use with artillery sights instead of decigrades (4000 in a circle). The United Kingdom was also trialling them to replace degrees and minutes. They were adopted by France, although decigrades also remained in use throughout World War I. Other nations also used decigrades. The United States, which copied many French artillery practices, adopted angular mils, later known as NATO mils. Before 2007 the Swedish defence forces used "streck" (6300 in a circle, streck meaning lines or marks), together with degrees for some navigation, which is closer to the milliradian, but then changed to NATO mils. After the Bolshevik Revolution and the adoption of the metric system of measurement (e.g. artillery replaced "units of base" with meters) the Red Army expanded the 600 unit circle into a 6000 mil circle. Hence the Russian mil has a somewhat different origin from those derived from French artillery practices.
In the 1950s, NATO adopted metric units of measurement for land and general use. NATO mils, meters, and kilograms became standard, although degrees remained in use for naval and air purposes, reflecting civil practices.
Mathematical principle
Use of the milliradian is practical because it is concerned with small angles, and when using radians the small angle approximation shows that the angle approximates to the sine of the angle, that is sin θ ≈ θ. This allows a user to dispense with trigonometry and use simple ratios to determine size and distance with high accuracy for rifle and short distance artillery calculations by using the handy property of subtension: one mrad approximately subtends one meter at a distance of one thousand meters.
More in detail, because tan θ ≈ θ for small angles expressed in radians, instead of finding the angular distance denoted by θ (Greek letter theta) by using the tangent function
θ = arctan(subtension / range),
one can instead make a good approximation by using the definition of a radian and the simplified formula:
θ (in mrad) = 1000 × subtension / range
Since a radian is mathematically defined as the angle formed when the length of a circular arc equals the radius of the circle, a milliradian is the angle formed when the length of a circular arc equals 1/1000 of the radius of the circle. Just like the radian, the milliradian is dimensionless, but unlike the radian where the same unit must be used for radius and arc length, the milliradian needs to have a ratio between the units where the subtension is a thousandth of the radius when using the simplified formula.
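In LaTeX form, the exact relation and its small-angle simplification read as follows (a restatement of the definitions above, with s the subtension and r the range in compatible units):

$$\theta_{\text{mrad}} = 1000 \cdot \arctan\!\left(\frac{s}{r}\right) \approx 1000 \cdot \frac{s}{r} \quad (s \ll r), \qquad \text{e.g. } 1000 \cdot \frac{1\ \text{m}}{1000\ \text{m}} = 1\ \text{mrad}$$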
Approximation error
The approximation error from using the simplified linear formula will increase as the angle increases (a short numerical check in code follows these lists). For example, there is a:
0.000000333% (about 3 parts per billion) error for an angle of 0.1 mrad, for instance by assuming 0.1 mrad equals 1 cm at 100 m
0.03% error for 30 mrad, i.e. assuming 30 mrad equals 30 m at 1 km
2.9% error for 300 mrad, i.e. assuming 300 mrad equals 300 m at 1 km
The approximation using mrad is more precise than using another common system where 1′ (minute of arc) is approximated as 1 inch at 100 yards, where comparably there is a:
4.72% error by assuming that an angle of 1′ equals 1 inch at 100 yd
4.75% error for 100′, i.e. assuming 100′ equals 100 in at 100 yd
7.36% error for 1000′, i.e. assuming 1000′ equals 1000 inches at 100 yd
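These error figures can be reproduced with a few lines of Python; a minimal sketch (not from the source) comparing the linear rule against the exact arctangent:

```python
import math

def linear_error(angle_mrad):
    """Relative error of the linear rule (1 mrad = 1 m at 1000 m) compared
    with the exact angle arctan(subtension/range), as a fraction."""
    exact_mrad = math.atan(angle_mrad / 1000.0) * 1000.0  # true angle for that subtension
    return (angle_mrad - exact_mrad) / exact_mrad

for a in (0.1, 30, 300):
    print(f"{a:>6} mrad: {linear_error(a):.10%}")
# Prints roughly 0.0000003 %, 0.03 % and 2.9 %, matching the figures above.
```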
Sight adjustment
Milliradian adjustment is commonly used as a unit for clicks in the mechanical adjustment knobs (turrets) of iron and scope sights, both in the military and in civilian shooting sports. The principle of subtension is often explained to new shooters in order to convey that a milliradian is an angular measurement. Subtension is the physical amount of space covered by an angle and varies with distance. Thus, the subtension corresponding to a mrad (either in an mrad reticle or in mrad adjustments) varies with range. Knowing subtensions at different ranges can be useful for sighting in a firearm if there is no optic with an mrad reticle available, but it involves mathematical calculations and is therefore not used very much in practical applications. Subtensions always change with distance, but an mrad (as observed through an optic) is always an mrad regardless of distance. Therefore, ballistic tables and shot corrections are given in mrads, thereby avoiding the need for mathematical calculations.
If a rifle scope has mrad markings in the reticle (or there is a spotting scope with an mrad reticle available), the reticle can be used to measure how many mrads to correct a shot even without knowing the shooting distance. For instance, assuming a precise shot fired by an experienced shooter missed the target by 0.8 mrad as seen through an optic, and the firearm sight has 0.1 mrad adjustments, the shooter must then dial 8 clicks on the scope to hit the same target under the same conditions.
Common click values
General purpose scopes Gradations (clicks) of 1/4′, 0.1 mrad and 1/2′ are used in general purpose sights for hunting, target and long range shooting at varied distances. The click values are fine enough to get dialed in for most target shooting and coarse enough to keep the number of clicks down when dialing.
Speciality scopes Gradations of 0.05 mrad, 1/8′ and 0.025 mrad are used in speciality scope sights for extreme precision at fixed target ranges such as benchrest shooting. Some specialty iron sights used in ISSF 10 m, 50 m and 300 meter rifle come with adjustments in either 0.05 mrad or 0.025 mrad. The small adjustment value means these sights can be adjusted in very small increments. These fine adjustments are however not very well suited for dialing between varied distances such as in field shooting, because of the high number of clicks required to move the line of sight, making it easier to lose track of the number of clicks than in scopes with larger click adjustments. For instance, to move the line of sight 0.4 mrad, a 0.1 mrad scope must be adjusted 4 clicks, while comparably a 0.05 mrad and a 0.025 mrad scope must be adjusted 8 and 16 clicks respectively.
Others Coarser adjustment values, such as 0.2 mrad, can be found in some short range sights, mostly with capped turrets, but are not very widely used.
Subtensions at different distances
Subtension refers to the length between two points on a target, and is usually given in either centimeters, millimeters or inches. Since an mrad is an angular measurement, the subtension covered by a given angle (angular distance or angular diameter) increases with viewing distance to the target. For instance the same angle of 0.1 mrad will subtend 10 mm at 100 meters, 20 mm at 200 meters, etc., or similarly 0.39 inches at 100 m, 0.78 inches at 200 m, etc.
Subtensions in mrad based optics are particularly useful together with target sizes and shooting distances in metric units. The most common scope adjustment increment in mrad based rifle scopes is 0.1 mrad; such clicks are sometimes called "one centimeter clicks" since 0.1 mrad equals exactly 1 cm at 100 meters, 2 cm at 200 meters, etc. Similarly, an adjustment click on a scope with 0.2 mrad adjustment will move the point of bullet impact 2 cm at 100 m and 4 cm at 200 m, etc.
When using a scope with both mrad adjustment and a reticle with mrad markings (called a mrad/mrad scope), the shooter can spot his own bullet impact and easily correct the sight if needed. If the shot was a miss, the mrad reticle can simply be used as a "ruler" to count the number of milliradians the shot was off target. The number of milliradians to correct is then multiplied by ten if the scope has 0.1 mrad adjustments. If for instance the shot was 0.6 mrad to the right of the target, 6 clicks will be needed to adjust the sight. This way there is no need for math, conversions, knowledge of target size or distance. This is true for a first focal plane scope at all magnifications, but a variable second focal plane must be set to a given magnification (usually its maximum magnification) for any mrad scales to be correct.
When using a scope with mrad adjustments, but without mrad markings in the reticle (i.e. a standard duplex cross-hair on a hunting or benchrest scope), sight correction for a known target subtension and known range can be calculated by the following formula, which utilizes the fact that an adjustment of 1 mrad changes the impact as many millimeters as there are meters:
correction (in mrad) = impact error (in mm) / range (in m)
For instance:
a 40 mm error at 100 m: 40/100 = 0.4 mrad, or 4 clicks with a 0.1 mrad adjustment scope.
a 10 mm error at 200 m: 10/200 = 0.05 mrad, or 1 click with a 0.05 mrad adjustment scope.
In firearm optics, where 0.1 mrad per click is the most common mrad based adjustment value, another common rule of thumb is that an adjustment of 0.1 mrad changes the impact as many centimeters as there are hundreds of meters. In other words, 1 cm at 100 meters, 2.25 cm at 225 meters, 0.5 cm at 50 meters, etc. (see the conversion table for firearms below, and the short sketch that follows).
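The correction formula above amounts to one division; a minimal Python sketch (the helper function and its name are illustrative, not from the source):

```python
def clicks_to_correct(miss_mm, range_m, click_mrad=0.1):
    """Clicks needed to correct a miss, using the rule that 1 mrad
    shifts the impact by as many millimeters as there are meters of range."""
    correction_mrad = miss_mm / range_m
    return correction_mrad / click_mrad

print(clicks_to_correct(40, 100))                   # 0.4 mrad -> 4.0 clicks at 0.1 mrad/click
print(clicks_to_correct(10, 200, click_mrad=0.05))  # 0.05 mrad -> 1.0 click
```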
Adjustment range and base tilt
The horizontal and vertical adjustment range of a firearm sight is often advertised by the manufacturer using mrads. For instance a rifle scope may be advertised as having a vertical adjustment range of 20 mrad, which means that by turning the turret the bullet impact can be moved a total of 20 meters at 1000 meters (or 2 m at 100 m, 4 m at 200 m, 6 m at 300 m, etc.). The horizontal and vertical adjustment ranges can be different for a particular sight; for instance a scope may have 20 mrad vertical and 10 mrad horizontal adjustment. Elevation ranges differ between models, but about 10–11 mrad is common in hunting scopes, while scopes made for long range shooting usually have an adjustment range of 20–30 mrad (70–100 moa).
Sights can either be mounted in neutral or tilted mounts. In a neutral mount (also known as "flat base" or non-tilted mount) the sight will point reasonably parallel to the barrel, and be close to a zero at 100 meters (about 1 mrad low depending on rifle and caliber). After zeroing at 100 meters the sight will thereafter always have to be adjusted upwards to compensate for bullet drop at longer ranges, and therefore the adjustment below zero will never be used. This means that when using a neutral mount only about half of the scope's total elevation will be usable for shooting at longer ranges:
usable elevation = total elevation / 2
In most regular sport and hunting rifles (except for in long range shooting), sights are usually mounted in neutral mounts. This is done because the optical quality of the scope is best in the middle of its adjustment range, and only being able to use half of the adjustment range to compensate for bullet drop is seldom a problem at short and medium range shooting.
However, in long range shooting tilted scope mounts are common since it is very important to have enough vertical adjustment to compensate for the bullet drop at longer distances. For this purpose scope mounts are sold with varying degrees of tilt, but some common values are:
3 mrad, which equals 3 m at 1000 m (or 0.3 m at 100 m)
6 mrad, which equals 6 m at 1000 m (or 0.6 m at 100 m)
9 mrad, which equals 9 m at 1000 m (or 0.9 m at 100 m)
With a tilted mount the maximum usable scope elevation can be found by:
usable elevation = total elevation / 2 + mount tilt
The adjustment range needed to shoot at a certain distance varies with firearm, caliber and load. For example, with a certain .308 load and firearm combination, the bullet may drop 13 mrad at 1000 meters (13 meters). To be able to reach out, one could either (both options are computed in the sketch after this list):
Use a scope with 26 mrad of adjustment in a neutral mount, to get a usable adjustment of 26/2 = 13 mrad
Use a scope with 14 mrad of adjustment and a 6 mrad tilted mount to achieve a maximum adjustment of 14/2 + 6 = 13 mrad
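A minimal Python sketch of the two options (illustrative helper, assuming the half-range-plus-tilt relation given above):

```python
def usable_elevation_mrad(total_adjustment_mrad, mount_tilt_mrad=0.0):
    """Maximum usable elevation after a 100 m zero: half of the total
    adjustment range, plus any forward tilt built into the mount."""
    return total_adjustment_mrad / 2 + mount_tilt_mrad

print(usable_elevation_mrad(26))      # 13.0 mrad (neutral mount)
print(usable_elevation_mrad(14, 6))   # 13.0 mrad (6 mrad tilted mount)
```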
Shot groupings
A shot grouping is the spread of multiple shots on a target, taken in one shooting session. The group size on target in milliradians can be obtained by measuring the spread of the rounds on target in millimeters with a caliper and dividing by the shooting distance in meters. This way, using milliradians, one can easily compare shot groupings or target difficulties at different shooting distances.
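Expressed as a formula (the 27 mm spread below is an illustrative value, not from the source):

$$\text{group size (mrad)} = \frac{\text{extreme spread (mm)}}{\text{distance (m)}}, \qquad \text{e.g. } \frac{27\ \text{mm}}{100\ \text{m}} = 0.27\ \text{mrad}$$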
If the firearm is attached in a fixed mount and aimed at a target, the shot grouping measures the firearm's mechanical precision and the uniformity of the ammunition. When the firearm also is held by a shooter, the shot grouping partly measures the precision of the firearm and ammunition, and partly the shooter's consistency and skill. Often the shooters' skill is the most important element towards achieving a tight shot grouping, especially when competitors are using the same match grade firearms and ammunition.
Range estimation with mrad reticles
Many telescopic sights used on rifles have reticles that are marked in mrad. This can either be accomplished with lines or dots, and the latter is generally called mil-dots. The mrad reticle serves two purposes, range estimation and trajectory correction.
With a mrad reticle-equipped scope the distance to an object can be estimated with a fair degree of accuracy by a trained user by determining how many milliradians an object of known size subtends. Once the distance is known, the drop of the bullet at that range (see external ballistics), converted back into milliradians, can be used to adjust the aiming point. Generally mrad-reticle scopes have both horizontal and vertical crosshairs marked; the horizontal and vertical marks are used for range estimation and the vertical marks for bullet drop compensation. Trained users, however, can also use the horizontal dots to compensate for bullet drift due to wind. Milliradian-reticle-equipped scopes are well suited for long shots under uncertain conditions, such as those encountered by military and law enforcement snipers, varmint hunters and other field shooters. These riflemen must be able to aim at varying targets at unknown (sometimes long) distances, so accurate compensation for bullet drop is required.
Angle can be used for either calculating target size or range if one of them is known. Where the range is known the angle will give the size, where the size is known then the range is given. When out in the field angle can be measured approximately by using calibrated optics or roughly using one's fingers and hands. With an outstretched arm one finger is approximately 30 mrad wide, a fist 150 mrad and a spread hand 300 mrad.
Milliradian reticles often have dots or marks with a spacing of 1 mrad in between, but graduations can also be finer and coarser (i.e. 0.8 or 1.2 mrad).
Units for target size and range
While a radian is defined as an angle on the unit circle where the arc and radius have equal length, a milliradian is defined as the angle where the arc length is one thousandth of the radius. Therefore, when using milliradians for range estimation, the unit used for target distance needs to be a thousand times as large as the unit used for target size. Metric units are particularly useful in conjunction with a mrad reticle because the mental arithmetic is much simpler with decimal units, thereby requiring less mental calculation in the field. Using the range estimation formula with the units meters for range and millimeters for target size, it is just a matter of moving decimals and doing the division, without the need for multiplication by additional constants, thus producing fewer rounding errors.
The same holds true for calculating target distance in kilometers using target size in meters.
Also, in general the same unit can be used for subtension and range if multiplied by a factor of one thousand, i.e. range in meters = 1000 × target size in meters / angle in mrad.
If using the imperial units yards for distance and inches for target size, one has to multiply by a factor of 1000/36 ≈ 27.78, since there are 36 inches in one yard.
If using the metric unit meters for distance and the imperial unit inches for target size, one has to multiply by a factor of 25.4, since one inch is defined as 25.4 millimeters.
Practical examples
Land Rovers are about 3 to 4 m long, "smaller tank" or APC/MICV at about 6 m (e.g. T-34 or BMP) and about 10 m for a "big tank." From the front a Land Rover is about 1.5 m wide, most tanks around 3–3.5 m. So a SWB Land Rover from the side is one finger wide at about 100 m. A modern tank would have to be at a bit over 300 m.
If, for instance, a target known to be 1.5 m in height (1500 mm) is measured at 2.8 mrad in the reticle, the range can be estimated as:
range = 1500 mm / 2.8 mrad ≈ 536 m
So if the above-mentioned 6 m long BMP (6000 mm) is viewed at 6 mrad its distance is 1000 m, and if the angle of view is twice as large (12 mrad) the distance is half as much, 500 m.
When used with some riflescopes of variable objective magnification and fixed reticle magnification (where the reticle is in the second focal plane), the formula can be modified to:
range in meters = (target size in mm × mag) / (10 × observed angle in mrad)
Where mag is the scope magnification, and the factor 10 reflects a reticle calibrated at 10× magnification. However, a user should verify this with their individual scope, since some are not calibrated at 10×. As above, target distance and target size can be given in any two units of length with a ratio of 1000:1.
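A Python sketch of both cases (the function name, argument names and the 10× calibration default are illustrative assumptions):

```python
def range_m(target_size_mm, reading_mrad, mag=None, cal_mag=10.0):
    """Estimate range in meters from target size (mm) and reticle reading (mrad).
    For a second-focal-plane scope, pass the current magnification `mag`;
    the reading is rescaled assuming the reticle is calibrated at `cal_mag`."""
    if mag is not None:
        reading_mrad *= cal_mag / mag  # convert the apparent reading to a true angle
    return target_size_mm / reading_mrad

print(range_m(1500, 2.8))           # ~535.7 m (first focal plane, as in the example above)
print(range_m(6000, 6))             # 1000.0 m (the 6 m BMP example)
print(range_m(1500, 5.6, mag=20))   # ~535.7 m at 20x on a 10x-calibrated SFP scope
```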
Mixing mrad and minutes of arc
It is possible to purchase rifle scopes with a mrad reticle and minute-of-arc turrets, but the general consensus is that such mixing should be avoided. It is preferred to have either both a mrad reticle and mrad adjustment (mrad/mrad), or a minute-of-arc reticle and minute-of-arc adjustment, to utilize the strength of each system. Then the shooter can know exactly how many clicks to correct based on what he sees in the reticle.
If using a mixed system scope that has a mrad reticle and arcminute adjustment, one way to make use of the reticle for shot corrections is to exploit the fact that 14′ approximately equals 4 mrad, and thereby multiply an observed correction in mrad by a fraction of 14/4 when adjusting the turrets.
Conversion table for firearms
In the table below, conversions from mrad to metric values are exact (e.g. 0.1 mrad equals exactly 1 cm at 100 meters), while conversions of minutes of arc to both metric and imperial values are approximate; a short derivation in code follows the list.
0.1 mrad equals exactly 1 cm at 100 m
1 mrad ≈ 3.44′, so 1/10 mrad ≈ 1/3′
1′ ≈ 0.291 mrad (or 2.91 cm at 100 m, approximately 3 cm at 100 m)
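These factors follow directly from the radian definition; a minimal Python sketch (not from the source):

```python
import math

MOA_PER_MRAD = (0.001 * 180 * 60) / math.pi   # ~3.4377 arcminutes per milliradian
CM_PER_MRAD_AT_100M = 10.0                     # exact: 1 mrad subtends 10 cm at 100 m

print(f"1 mrad = {MOA_PER_MRAD:.4f}'")                       # 3.4377
print(f"1' = {1 / MOA_PER_MRAD:.4f} mrad")                   # 0.2909
print(f"1' at 100 m = {CM_PER_MRAD_AT_100M / MOA_PER_MRAD:.2f} cm")  # ~2.91 cm
```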
Definitions for maps and artillery
Because of the definition of pi, in a circle with a diameter of one there are 2000π milliradians (approximately 6,283.185 mrad) per full turn. In other words, one real milliradian covers just under 1/6283 of the circumference of a circle, which is the definition used by telescopic rifle sight manufacturers in reticles for stadiametric rangefinding.
For maps and artillery, three rounded definitions are used which are close to the real definition, but more easily can be divided into parts. The different map and artillery definitions are sometimes referred to as "angular mils", and are:
1/6400 of a circle in NATO countries.
1/6000 of a circle in the former Soviet Union and Finland (Finland phasing out the standard in favour of the NATO standard).
1/6300 of a circle in Sweden. The Swedish term for this is streck, literally "line".
Reticles in some artillery sights are calibrated to the relevant artillery definition for that military, i.e. the Carl Zeiss OEM-2 artillery sight made in East Germany from 1969 to 1976 is calibrated for the eastern bloc 6000 mil circle.
Various symbols have been used to represent angular mils for compass use:
mil, MIL and similar abbreviations are often used by militaries in the English speaking part of the world.
‰, called "artillery per milles" (German: Artilleriepromille), a symbol used by the Swiss Army.
¯, called "artillery line" (German: artilleristische Strich), a symbol used by the German Army (not to be confused with Compass Point (German: Nautischer Strich, 32 "nautical lines" per circle) which sometimes use the same symbol. However, the DIN standard (DIN 1301 part 3) is to use ¯ for artillery lines, and " for nautical lines.)
₥, called "thousandths" (French: millièmes), a symbol used on some older French compasses.
v (Finnish: piiru, Swedish: delstreck), a symbol used by the Finnish Defence Forces for the standard Warsaw Pact mil. Sometimes just marked as v if superscript is not available.
Conversion table for compasses
Use in artillery sights
Artillery uses angular measurement in gun laying, the azimuth between the gun and its target many kilometers away and the elevation angle of the barrel. This means that artillery uses mils to graduate indirect fire azimuth sights (called dial sights or panoramic telescopes), their associated instruments (directors or aiming circles), their elevation sights (clinometers or quadrants), together with their manual plotting devices, firing tables and fire control computers.
Artillery spotters typically use their calibrated binoculars to walk fired projectiles onto a target. Here they know the approximate range to the target, and so can read off the angle and, with a quick calculation, give the left/right corrections in meters. One mil subtends about one meter at a range of one thousand meters (for example, to move the impact of an artillery round 100 meters when the gun is firing from 3 km away, it is necessary to shift the direction by 100/3 = 33.3 mils).
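A minimal Python sketch of the spotter's calculation (the helper name is illustrative, not from the source):

```python
def correction_mils(shift_m, range_m):
    """Angular correction in mils to move the impact `shift_m` meters
    at the given range, using the rule that 1 mil ~ 1 m at 1000 m."""
    return shift_m / (range_m / 1000.0)

print(correction_mils(100, 3000))  # ~33.3 mils, as in the example above
```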
Other scientific and technological uses
The milliradian (and other SI multiples) is also used in other fields of science and technology for describing small angles, i.e. measuring alignment, collimation, and beam divergence in optics, and accelerometers and gyroscopes in inertial navigation systems.
See also
Scandinavian mile, a unit of length common in Norway and Sweden, but not Denmark, today standardised as 10 kilometers.
Thousandth of an inch, an inch-based unit often called a thou or a mil.
Circular mil, a unit of area, equal to the area of a circle with a diameter of one thousandth of an inch.
Square mil, a unit of area, equal to the area of a square with sides of length of one thousandth of an inch.
Footnotes
References
External links
Units of plane angle
Decimalisation
Optical devices | Milliradian | Materials_science,Mathematics,Engineering | 5,520 |
19,450,493 | https://en.wikipedia.org/wiki/Phason | In physics, a phason is a form of collective excitation found in aperiodic crystal structures. Phasons are a type of quasiparticle: an emergent phenomenon of many-particle systems. The phason can also be seen as a degree of freedom unique to quasicrystals. Similar to phonons, phasons are quasiparticles associated with atomic motion. However, whereas phonons are related to the translation of atoms, phasons are associated with atomic rearrangement. As a result of this rearrangement, or modulation, the waves that describe the position of atoms in the crystal change phase, hence the term "phason". In the language of the superspace picture commonly employed in the description of aperiodic crystals, in which the aperiodic function is obtained via projection from a higher dimensional periodic function, the phason displacement can be seen as a displacement of the (higher-dimensional) lattice points in the perpendicular space.
Phasons can travel faster than the speed of sound within quasicrystalline materials, giving these materials a higher thermal conductivity than materials in which the transfer of heat is carried out only by phonons. Different phasonic modes can change the material properties of a quasicrystal.
In the superspace representation, aperiodic crystals can be obtained from a periodic crystal of higher dimension by projection to a lower dimensional space; this is commonly referred to as the cut-and-project method. While phonons change the position of atoms relative to the crystal structure in space, phasons change the position of atoms relative to the quasicrystal structure and the cut through superspace that defines it. Therefore, phonon modes are excitations of the "in-plane" real (also called parallel, direct, or external) space, whereas phasons are excitations of the perpendicular (also called internal or virtual) space.
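For readers who want to experiment with this picture, the sketch below (an illustration added here, not drawn from the article) builds a one-dimensional quasicrystal, the Fibonacci chain, by the cut-and-project method, and shows that shifting the acceptance strip in perpendicular space, i.e. a uniform phason displacement, rearranges the projected atoms while leaving the two tile lengths unchanged. The lattice size and shift value are arbitrary choices.

```python
import numpy as np

tau = (1 + np.sqrt(5)) / 2  # golden ratio; sets the irrational slope

def fibonacci_chain(n=20, phason_shift=0.0):
    """Project points of Z^2 lying inside a strip onto a line of slope 1/tau.

    `phason_shift` displaces the strip in perpendicular (internal) space;
    atoms then flip between the two tile arrangements: a phason rearrangement.
    """
    norm = np.hypot(tau, 1.0)
    e_par = np.array([tau, 1.0]) / norm    # parallel (physical) direction
    e_perp = np.array([-1.0, tau]) / norm  # perpendicular (internal) direction
    pts = np.array([(i, j) for i in range(-n, n) for j in range(-n, n)], float)
    perp = pts @ e_perp + phason_shift
    half_window = (abs(e_perp[0]) + abs(e_perp[1])) / 2  # unit cell's shadow
    kept = pts[np.abs(perp) <= half_window]
    return np.sort(kept @ e_par)

chain = fibonacci_chain()
shifted = fibonacci_chain(phason_shift=0.2)
print(np.round(np.diff(chain)[:10], 3))    # only two spacings occur (L and S)
print(np.round(np.diff(shifted)[:10], 3))  # same tiles, locally rearranged
```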
Phasons may be described in terms of hydrodynamic theory: when going from a homogeneous fluid to a quasicrystal, hydrodynamic theory predicts six new modes arising from the translational symmetry breaking in the parallel and perpendicular spaces. Three of these modes (corresponding to the parallel space) are acoustic phonon modes, while the remaining three are diffusive phason modes. In incommensurately modulated crystals, phasons may be constructed from a coherent superposition of phonons of the unmodulated parent structure, though this is not possible for quasicrystals. Hydrodynamic analysis of quasicrystals predicts that, while the strain relaxation of phonons is relatively rapid, relaxation of phason strain is diffusive and much slower. Therefore, metastable quasicrystals grown by rapid quenching from the melt exhibit built-in phason strain, associated with shifts and anisotropic broadenings of X-ray and electron diffraction peaks.
See also
Quasicrystal
Quasiparticle
References
Freedman, B., Lifshitz, R., Fleischer, J. et al. Phason dynamics in nonlinear photonic quasicrystals. Nature Mater 6, 776–781 (2007). https://doi.org/10.1038/nmat1981
Books
Quasiparticles
Crystallography | Phason | Physics,Chemistry,Materials_science,Engineering | 713 |
11,392,572 | https://en.wikipedia.org/wiki/Columbian%20press | The Columbian press is a type of hand-operated printing press invented in the United States by George Clymer, around 1813. Made from cast iron, it was a very successful design and many thousands were made by him and by others during the 19th century. Columbians continued to be made as late as the early 20th century, 90 years after their introduction. Despite their age, many are still used for printing, especially by artists who make prints using traditional methods.
The Columbian design is also notable for its elaborate, symbolic ornamentation.
History
The Columbian press was inspired in some measure by the earlier Stanhope press. It was designed to allow large formes, such as a broadsheet newspaper page, to be printed at a single pull. The press worked by a lever system, similar to that of the Stanhope press and quite different from the toggle action of the slightly later English Albion press.
George Clymer first began working on improvements to the printing press around 1800, and his new iron press was first advertised in April 1814. However, uptake by American printers was limited, as his presses sold for $300 to $500 while a conventional press cost around $130. The Columbians were also heavy; wooden presses that were lighter and easier to transport were more attractive to printers outside of major centres.
Despite the disadvantages, newspaper printers in large cities still bought Columbians as they could print more quickly, making them useful for newspapers with large circulations. Newspapers in New York, Philadelphia and Albany bought Columbians; one was used to print the Philadelphia Aurora. But this market was limited, and it is thought Clymer sold fewer than 25 presses in the United States.
In 1817, Clymer moved to London. He filed a patent for his invention in November of that year, and began manufacturing presses in premises at 1 Finsbury Street in 1818.
In Britain, Clymer's presses cost between £100 and £125, depending on the paper size they printed. But he later reduced prices to between £75 and £85. Among the early adopters were Andrew Strahan, the King's Printer, and Abraham John Valpy, who were both using the presses by 1818. Clymer's early advertisements describe the press as especially suitable for printing newspapers. An 1825 news item describes a Columbian press as among the items sold when a Dublin newspaper was closed and its property auctioned for failing to pay stamp duty.
In 1830, Clymer formed a partnership with Samuel Dixon. The company moved to new premises at 10 Finsbury Street and traded under the name of Clymer and Dixon. George Clymer died in 1834, but Dixon continued to make presses. He later joined with other partners, under the name of Clymer, Dixon and Co. The company was later taken over by others and continued production until it closed in 1863.
Meanwhile, other manufacturers made Columbian presses under license, with at least one company in Germany making unlicensed versions. More companies began making them after Clymer's patent expired. The presses were sold with different sizes of platen to accommodate different sizes of paper. Around 40 companies in eight countries are known to have made Columbian presses. Mostly, the design saw little modification or improvement although some makers in Continental Europe altered or simplified the ornamentation and some mounted their presses on a wooden base rather than a cast-iron one.
Production continued for many decades; surviving trade catalogues show Columbians were still available for sale in 1906, as printers still found them useful for printing proofs (initial test prints of a publication). Some were still being used in this role as late as the 1970s.
Decoration
The press is sometimes referred to as the "Eagle press" due to the characteristic cast-iron bald eagle on the top lever, which represents the United States. The eagle functions as a counterweight, acting to raise the platen from the paper after a print has been made.
The eagle clutches in one talon a cornucopia, representing prosperity and plenty. The other clutches an olive branch, representing peace. Illustrations of the earliest presses show the eagle also clutching thunderbolts of Jupiter, but these are not present on any examples that survive.
The side columns of the press are decorated with a Caduceus, the symbol of Hermes the messenger of the gods in Greek mythology. This alludes to the role of the printing press in the dissemination of knowledge. A secondary counterweight carries a figure of a woman in flowing robes with an anchor, this was an emblem known as the "Hope and Anchor".
The serpent-like creatures on the press' levers are intended to be depictions of dolphins. They may represent wisdom or knowledge. Also, the dolphin was the mark of the famous early book printer, the Aldine Press. The large main lever also carries a cartouche of flowers and fruit around an engraved, brass maker's plate. The legs of the press rest on claw-and-ball feet.
These decorative elements were altered by some manufacturers. For example, some presses sold in France had the eagle replaced with a globe or a lion as the eagle was a contentious political symbol in the post-Napoleonic era.
Surviving examples
Of the thousands made, 415 surviving presses were recorded in a worldwide census compiled between 2013 and 2017. Examples of Columbian presses can currently be found in 29 countries. Around half of the presses are in the United Kingdom. Some are still in use by artists using the linocut or woodcut methods for printmaking.
None of Clymer's earliest, American-made presses are thought to survive. There are around 40 surviving presses made during Clymer's lifetime. The majority are presses made by other companies after Clymer's patents expired.
Many museums and other institutions own a Columbian press, some of which are still used. Examples include:
Cary Graphic Arts Collection at the Rochester Institute of Technology, Rochester, New York. The collection holds a Columbian press made in England in 1876, which remains in use by the university.
Printmac Corporation: Paul Carthew holds the world's oldest known Columbian press, dated 1818 (number 10), believed to have been cast in the US and transported to the UK when George Clymer migrated in 1817.
Howard Iron Works Printing Museum, Oakville, Ontario, Canada. This museum has one of the largest collections of Columbian presses in North America, including one made in 1845.
International Printing Museum, Los Angeles County, California. This museum has three Columbian presses, including ones made in 1824 and 1838.
Leicester Print Workshop, a registered charity and art studio in the United Kingdom. Their 1838 Columbian press is among the facilities available for use by artists.
McGill University Library, Montreal. The library displays an 1821 example, the oldest Columbian in North America. The press was used until 1965.
National Museum of American History (Smithsonian), an 1860 example made by Ritchie and Sons, Edinburgh, Scotland.
National Museum of Scotland, a circa 1865 example made by D. and J. Greig of Edinburgh. This was originally bought new from the manufacturer for use by the museum's print shop. It was retired in 1964 and transferred to the museum's collection.
Museum of New Zealand Te Papa Tongarewa, a Clymer and Dixon press made in England in 1841. It was sent to New Zealand in 1842 by the Church Mission Society. It was gifted to the museum in 1974.
Museum of Printing, Haverhill, Massachusetts. The collection includes an 1886 model, which is demonstrated from time to time.
Penrith Museum of Printing, Australia, number 937 made in 1841.
Printing Museum, Tokyo, this press is still used to demonstrate printing to visitors.
Pickering Beck Isle Museum, North Yorkshire. An 1854 press that is still used for demonstrations to visitors.
Science Museum, London, number 785 made by Clymer and Dixon in 1837.
Ulster Folk Museum, Northern Ireland. The museum owns a working example that is displayed in a recreated print-shop.
Ziegenbalg House museum, Tharangambadi India; this press remains in use.
Notes
References
Citations
Bibliography
Further reading
External links
Printing Yesterday and Today
McGill Book Arts Lab | Printing demonstration using The Columbian, an iron press dating to 1821, McGill University
Letterpress printing
Printing devices | Columbian press | Physics,Technology | 1,686 |
53,075,082 | https://en.wikipedia.org/wiki/NGC%20394 | NGC 394 is a lenticular galaxy located in the constellation Pisces. It was discovered on October 26, 1854 by R. J. Mitchell. It was described by Dreyer as "faint, small, 50 arcsec northeast of II 218", with II 218 being NGC 392.
References
External links
Lenticular galaxies
Pisces (constellation) | NGC 394 | Astronomy | 84 |
3,257,112 | https://en.wikipedia.org/wiki/High-speed%20multimedia%20radio | High-speed multimedia radio (HSMM) is the implementation of high-speed wireless TCP/IP data networks over amateur radio frequency allocations using commercial off-the-shelf (COTS) hardware such as 802.11 Wi-Fi access points. This is possible because the 802.11 unlicensed frequency bands partially overlap with amateur radio bands and ISM bands in many countries. Only licensed amateur radio operators may legally use amplifiers and high-gain antennas within amateur radio frequencies to increase the power and coverage of an 802.11 signal.
Basics
The idea behind this implementation is to modify commercial 802.11 equipment for use on licensed Amateur Radio frequencies. The main frequency bands being used for these networks are: 900 MHz (33 cm), 2.4 GHz (13 cm), 3.4 GHz (9 cm), and 5.8 GHz (5 cm). Since the unlicensed 802.11 or Wi-Fi frequency bands overlap with amateur frequencies, only custom firmware is needed to access these licensed frequencies.
Such networks can be used for emergency communications for disaster relief operations as well as in everyday amateur radio communications (hobby/social).
Capabilities
HSMM can support most of the traffic that the Internet currently does, including video chat, voice, instant messaging, email, the Web (HTTP), file transfer (FTP), and forums. The only differences are that with HSMM such services are implemented by the community rather than commercially, and that the network is mostly wireless. HSMM can even be connected to the Internet and used for web surfing, although because of the FCC regulations on permitted content, this is done only when directly used for ham radio activities (under Part 97). Using high-gain directional antennas and amplifiers, reliable long-distance wireless links over many miles are possible, limited only by propagation and the radio horizon.
Bandwidths and Speeds
HSMM networks most often use professional hardware with narrower channel bandwidths, such as 5 or 10 MHz, to help increase range. It is common for networks to use channel -2 with a 5 MHz bandwidth. For long-range links extending outside of metropolitan areas, 802.11b DSSS modulations or 802.11ah (900 MHz) equipment can be used, further increasing range at the cost of speed.
- DSSS is 10 watts max PEP in USA
US / FCC Frequencies and channels
The following is a list of the 802.11 channels that overlap into an amateur radio band under the FCC in the United States. Note that the 5 cm amateur band extends from 5.65 to 5.925 GHz, so there are many frequencies outside the Part 15 ISM/UNII block used for 802.11a. Many commercial-grade 802.11a access points can also operate in between the normal channels by using 5 MHz channel spacing instead of the standard 20 MHz channel spacing. 802.11a channels 132, 136 and 140 are only available for unlicensed use in ETSI regions. Channels and frequencies marked accordingly in the table should not be used.
(Table footnotes: channels marked * must use a 5 or 10 MHz bandwidth; the table's legend marked allocations as amateur radio, ISM, or radar.)
Channels and power
FCC / United States
802.11a
The 802.11a amateur radio band consists of 30 overlapping channels in the 5.650–5.925 GHz (5 cm) band. The 802.11a standard uses OFDM or "Orthogonal Frequency Division Multiplexing" to transmit data and therefore is not classified as spread-spectrum. Because of this, 802.11a hardware is not subject to the power rules in FCC Part 97 § 97.311, and the maximum allowable output power is 1,500 watts (W) PEP.
802.11b
The 802.11b amateur radio band consists of 8 overlapping channels in the 2.390–2.450 GHz (13 cm) band. The 802.11b specification uses Direct Sequence Spread Spectrum (DSSS) to transmit data and is subject to the rules of FCC Part 97 § 97.311. Therefore, the maximum allowable power output in the USA is 10 W PEP.
802.11g
The 802.11g amateur radio band consists of 8 overlapping channels in the 2.4 GHz (13 cm) band. The 802.11g standard uses OFDM or "Orthogonal Frequency Division Multiplexing" to transmit data and therefore is not classified as spread-spectrum. Because of this, 802.11g hardware is not subject to the power rules in FCC Part 97 § 97.311, and the maximum allowable output power is 1,500 W PEP.
802.11n
The 802.11n amateur radio band consists of 8 overlapping channels in the 2.4 GHz (13 cm) band. The 802.11n standard uses OFDM or "Orthogonal Frequency Division Multiplexing" to transmit data and therefore is not classified as spread-spectrum. Because of this, 802.11n hardware is not subject to the power rules in FCC Part 97 § 97.311, and the maximum allowable output power is 1,500 W PEP.
802.11y
The 802.11y amateur radio band consists of 24 overlapping channels in the 3.4 GHz (9 cm) band. The 802.11y standard uses OFDM or "Orthogonal Frequency Division Multiplexing" to transmit data and therefore is not classified as spread-spectrum. Because of this, 802.11y hardware is not subject to the power rules in FCC Part 97 § 97.311, and the maximum allowable output power is 1,500 W PEP.
Frequency sharing
FCC / United States
802.11a
The 5 cm band is shared with the fixed-satellite service in ITU Region 1 and the radiolocation service. In ITU Region 2 (US) the primary user is military radiolocation, specifically naval radar. Amateur radio operators have secondary privileges to the Federal radiolocation service in the entire band and may not cause interference to these users. Amateur operators are allocated this band on a co-secondary basis with ISM devices and space research; amateur, space research, and ISM operators each have the "right to operate". Due to the smaller number of Part 15 users (compared to 2.4 GHz), the noise level tends to be lower in many parts of the US, but the band can be quite congested in urban centers and on mountaintops. The frequencies from 5.6 to 5.65 GHz (channel 132) should generally be avoided to prevent interfering with TDWR weather radar stations.
802.11b/g/n
The 13 cm band is shared with Part 15 users as well as the Federal radiolocation service, and ISM (industrial, scientific, medical) devices. Amateur radio operators have secondary privileges to the Federal radiolocation service in the entire band and may not cause interference to these users. Amateur radio operators have primary privileges to ISM devices from 2.390–2.417 GHz and secondary privileges from 2.417–2.450 GHz. Because of the high number of Part 15 users, the noise level in this band tends to be rather high.
802.11y
The 9 cm band is shared with fixed services and space-to-Earth communications. Amateur radio operators using this band may not cause interference to other licensed users, including government radar stations. The low number of users tends to make this band quiet.
Identification
As with any amateur radio mode, stations must identify at least once every 10 minutes. One acceptable method for doing so is to transmit one's call sign inside an ICMP echo request (commonly known as a ping). If the access point is set to "master" then the user's call sign may be set as the "SSID" and therefore will be transmitted at regular intervals.
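A minimal sketch of the ICMP method might look like the following, using the third-party scapy packet library; the call sign and destination address are placeholders, scapy must be installed separately, and sending raw packets generally requires administrative privileges. This is an illustration of the idea, not a complete station-identification solution.

```python
import time
from scapy.all import IP, ICMP, Raw, send  # third-party: pip install scapy

CALLSIGN = b"N0CALL"   # placeholder call sign; substitute your own
PEER = "10.1.2.3"      # placeholder address of a node on the HSMM network

while True:
    # Embed the call sign as the echo-request payload so that it is
    # transmitted in plain text over the RF link at regular intervals.
    send(IP(dst=PEER) / ICMP() / Raw(load=CALLSIGN), verbose=False)
    time.sleep(600)  # identify at least once every 10 minutes
```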
It is also possible to use a DDNS "push" request to automatically send an amateur call sign in plain text (ASCII) every 10 minutes. This requires that a computer's hostname be set to the call sign of the amateur operator and that the DHCP server's lease time be set to less than or equal to 10 minutes. With this method implemented, the computer will send a DNS "push" request that includes the local computer's hostname every time the DHCP lease is renewed. This method is supported by all modern operating systems, including but not limited to Windows, Mac OS X, BSD, and Linux.
802.11 hardware may transmit and receive the entire time it is powered on even if the user is not sending data.
Security
Because the meaning of amateur transmissions may not be obscured, any security measures that are implemented must be published. This does not necessarily restrict authentication or login schemes, but it does restrict fully encrypted communications, leaving the communications vulnerable to various attacks once authentication has been completed. This makes it very difficult to keep unauthorized users from accessing HSMM networks, although casual eavesdroppers can effectively be deterred. Current schemes include MAC address filtering, WEP, and WPA/WPA2. MAC address filtering and WEP are both breakable using freely available software from the Internet, making them the less secure options. Per FCC rules, the encryption keys themselves must be published in a publicly accessible place if using WEP, WPA/WPA2 or any other encryption, thereby undermining the security of the implementation. Such measures are nevertheless effective against casual or accidental wireless intrusions.
Using professional or modified hardware it is possible to operate on 802.11a channels that are outside the FCC authorized Part 15 bands but still inside the 5.8 GHz (5 cm) or 2.4 GHz (13 cm) amateur radio bands. Transverters or "frequency converters" can also be used to move HSMM 802.11b/g/n operations from the 2.4 GHz (13 cm) band to the 3.4 GHz (9 cm) amateur radio band. Such relocation provides a measure of security by operating outside the channels available to unlicensed (Part 15) 802.11 devices.
Custom frequencies
Using amateur-only frequencies provides better security and interference characteristics for amateur radio operators. In the past it was easy to use modified consumer-grade hardware to operate 802.11 on channels that are outside of the normal FCC-allocated frequencies for unlicensed users but still inside an amateur radio band. However, regulatory concerns over the non-authorized use of licensed-band frequencies are making this harder. Newer Linux drivers implement a custom regulatory database that prevents a casual user from operating outside of the country-specific operating bands, which requires the use of radio transceivers based on transverter (or frequency converter) technology.
420 MHz
Doodle Labs is a privately held manufacturing company with headquarters in Singapore that designs and manufactures a line of long range Wireless Data Transceiver devices.
The DL-435 is a mini-PCI adapter based on the Atheros wireless chipset.
XAGYL Communications is a Canadian Distributor of Ultra High-Speed, Long Range Wireless equipment.
The XAGYL Communications XC420M is a mini-PCI adapter based on the Atheros wireless chipset.
The Atheros chipset's ability to use 5 MHz transmission bandwidths could allow part 97 operation on the 420–430 MHz ATV sub-band. (Note that 420–430 MHz operation is not allowed near the Canada–US border. Refer to the "Line A" rule.)
900 MHz
Transverters, as well as older 802.11 hardware such as the original NCR WaveLAN or FHSS modems made by Aerocomm and FreeWave, make it possible to operate on this band. The Ubiquiti M9 series also provides hardware capable of operating in this band. Note that the noise floor on this band in larger cities is usually very high, which severely limits receiver performance.
2.4 GHz custom frequencies
Using professional-grade hardware or modified consumer-grade hardware, it is possible to operate 802.11b/g hardware on channels that are effectively "−1" at 2.402 GHz and "−2" at 2.397 GHz. Using these channels allows amateur operators to move away from unlicensed Part 15 operators, but may interfere with amateur radio satellite downlinks near 2.400 GHz and 2.401 GHz.
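The channel numbering above follows the standard 2.4 GHz mapping of 5 MHz steps anchored at channel 1 = 2412 MHz; extending it below channel 1 reproduces the amateur-only frequencies quoted. A quick sketch of that arithmetic (added for illustration only):

```python
def channel_center_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz 802.11 channel (5 MHz spacing,
    channel 1 anchored at 2412 MHz); negative numbers extend the scheme
    downward into the amateur-only portion of the 13 cm band."""
    return 2412 + 5 * (channel - 1)

for ch in (-2, -1, 1, 6, 11):
    print(f"channel {ch:>3}: {channel_center_mhz(ch)} MHz")
# channel -1 -> 2402 MHz and channel -2 -> 2397 MHz, as described above
```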
3.3–3.8 GHz
Frequency conversion involves the use of transverters that convert the operating frequency of the 802.11b/g device from 2.4 GHz to another band entirely. Transverter is a technical term and is rarely used to describe these products which are more commonly known as frequency converters, up/down converters, and just converters. Commercially available converters can convert a 2.4 GHz 802.11b/g signal to the 3.4 GHz (9 cm) band which is not authorized for unlicensed Part 15 users.
Ubiquiti Networks has four radios based on Atheros chipsets with transverters on board for this band: the PowerBridge M3 and M365 (for 3.5 GHz and 3.65 GHz respectively), for aesthetically low-profile PtP (point-to-point) connections; the NanoStation M3 and M365, in a molded weatherproof case with 13.7 dBi dual-polarization antennas; the Rocket M3, M365 and M365 GPS, in a rugged case using a high-power, very linear 2x2 MIMO radio with 2x RP-SMA (waterproof) connectors; and the NanoBridge M3 and M365, for long-range PtP connections. These devices use N-mode Atheros chipsets along with Ubiquiti's airMAX TDMA protocol to overcome the hidden node problem, which is commonly an issue when using point-to-multipoint wireless outdoors. UBNT currently does not allow sales to U.S. amateurs and only sells these radios under FCC license. This may be due to exclusion areas near coasts and US Navy installations. The 3.5 GHz band is currently used for DoD or Navy (shipborne and ground-based) radar operations and covers 60 percent of the U.S. population. This, however, may change due to a recent FCC NPRM & Order.
5.8 GHz custom frequencies
Using professional-grade hardware or modified consumer-grade hardware, it is possible to operate on 802.11a channels 116–140 (5.57–5.71 GHz) and channels above 165 (> 5.835 GHz). These frequencies are outside of the FCC-allocated Part 15 unlicensed band, but still inside of the 5.8 GHz (5 cm) amateur radio band. Modifying consumer hardware to operate on these expanded channels often involves installing after-market firmware and/or changing the "country code" setting of the wireless card. For professional-grade hardware, many companies will authorize the use of these expanded frequencies for a small additional fee.
Custom firmware
One popular way to access amateur-only frequencies is to modify an off-the-shelf access point with custom firmware. This custom firmware is freely available on the Internet from projects such as DD-WRT and OpenWrt. The AREDN Project provides off-the-shelf firmware that supports Part-97-only frequencies on Ubiquiti and TP-Link hardware. A popular piece of hardware to modify is the Linksys WRT54GL, because of the widespread availability of both the hardware and third-party firmware; however, the Linksys hardware is not frequency-agile, due to the closed nature of the Linksys drivers.
See also
Map of AREDN network HSMM nodes
Amateur radio emergency communications
Amateur radio frequency allocations
AMPRNet
DD-WRT
Metropolitan Area Network
Orthogonal frequency-division multiplexing
Packet Radio
Spread spectrum
Tomato Firmware
Ultra wideband
Wireless Distribution System
Wireless LAN
List of HSMM nodes
References
External links
FCC Part 97 Rules
FCC Part 15 Rules
FCC rejection of OFDM as Spread Spectrum
Using Part 15 Wireless Ethernet Devices For Amateur Radio
5.0 GHz (802.11a/h) Channels and Frequencies
BROADBAND-HAMNET.ORG Wireless Mesh: the award-winning Broadband-Hamnet, a project started in Austin, TX that has become a worldwide ham broadband standard for creating broadband-speed (>1 Mbit/s) mesh networks for ham radio use.
AREDN – Amateur Radio Emergency Data Network: This project picks up where Broadband-Hamnet leaves off and advances the Open Source software to widely available commercial devices and expands the technology beyond 2.4 GHz to the 900 MHz, 3.4 GHz, and 5.7 GHz ham bands.
Enabling Innovative Small Cell Use In 3.5 GHZ Band NPRM & Order
Packet radio | High-speed multimedia radio | Technology | 3,399 |
22,946,554 | https://en.wikipedia.org/wiki/Project%20commissioning | Project commissioning is the process of ensuring that all systems and components of a building or industrial plant are designed, installed, tested, operated, and maintained according to the owner's or final client's operational requirements. A commissioning process may be applied not only to new projects but also to existing units and systems subject to expansion, renovation or revamping.
In practice, the commissioning process is the integrated application of a set of engineering techniques and procedures to check, inspect and test every operational component of the project: from individual functions (such as instruments and equipment) up to complex amalgamations (such as modules, subsystems and systems).
Commissioning activities, in the broader sense, are applicable to all phases of the project, from basic and detailed design, procurement, construction and assembly to the final handover of the unit to the owner, sometimes including an assisted-operation phase.
Similarly, refinery commissioning is defined as "The sequential, planned, and documented process of verifying, testing, and validating the performance of each refinery unit, system, and equipment to ensure they operate safely, efficiently, and within design specifications, culminating in the successful startup and steady-state operation of the entire refinery".
Objective and impact
The main objective of commissioning is to effect the safe and orderly handover of the unit from the constructor to the owner, guaranteeing its operability in terms of performance, reliability, safety and information traceability. Additionally, when executed in a planned and effective way, commissioning normally represents an essential factor for the fulfillment of schedule, costs, safety and quality requirements of the project.
Commissioning management systems
For complex projects, the large volume and complexity of commissioning data, together with the need to guarantee adequate information traceability, normally leads to the use of powerful IT tools, known as commissioning management systems, to allow effective planning and monitoring of the commissioning activities.
Independent discipline
There is currently no formal education or university degree which addresses the training or certification of a Project Commissioning Engineer. Various short and online training courses are available, but they are designed for qualified engineers.
Large civil and industrial projects for which commissioning as an independent discipline is as important as the traditional engineering disciplines (civil, naval, chemical, mechanical, electrical, electronic, instrumentation, automation, or telecom engineering) include chemical and petrochemical plants, oil and gas platforms and pipelines, metallurgical plants, paper and cellulose plants, coal handling plants, thermoelectric and hydroelectric plants, buildings, bridges, highways, and railroads.
See also
References
Engineering disciplines
Project management | Project commissioning | Engineering | 513 |
51,873,022 | https://en.wikipedia.org/wiki/Naomi%20Climer | Naomi Wendy Climer (born 18 December 1964) is a British engineer who has worked in broadcast, media and communications technology, chiefly at the BBC and Sony Professional Solutions, and was the first female President of the Institution of Engineering and Technology (IET). Climer is the co-founder and co-chair of the Institute for the Future of Work.
Early life
Climer attended Gainsborough High School (now Queen Elizabeth High School) and Imperial College London, gaining a joint degree in Chemistry with Management Science in 1986.
Career
Climer is a non-executive director on the boards of Focusrite plc and Oxford Metrics plc, is a non-executive on the board of Sony UK Technology Centre, and is co-founder and co-chair of the Institute for the Future of Work. She is a Fellow of the Royal Academy of Engineering (elected 2013). She was a Trustee of the Institution of Engineering and Technology from 2009 to 2017, Deputy President from 2012, President from September 2015, and Immediate Past President from September 2016 to September 2017. Climer was the subject of BBC Radio 4's The Life Scientific and has promoted the importance of engineering and the need for diversity in engineering across numerous media appearances. Climer is a past chair of the Council of the International Broadcasting Convention (IBC).
In 2012, she moved to California to be President of Sony's Media Cloud Services start-up business, returning to the UK in 2015 to take up the presidency of the IET.
Climer joined Sony Professional Solutions Europe in 2002 as director of professional services and became vice president running the whole business from 2006 to 2012. During this time, she oversaw the move to new markets and the acquisition of Hawk-Eye for the sports business, pushed Sony's sustainability agenda, and started the 50:50 campaign for gender diversity.
Climer was director of technical operations at ITV Digital from 2000 to 2002. ITV Digital ceased operating in June 2002, with Freeview being created in October 2002.
Climer joined the BBC in 1987 as an engineer, training in the same cohort as Kate Bellingham. She worked in BBC Broadcasting House and BBC World Service at Bush House before becoming Controller of Technology at BBC News. From 1998 to 2000, she was also a Director of the Parliamentary Broadcasting Unit.
From 2016 to 2017, Climer chaired the DCMS Future Communications Challenge Group, and was a commissioner on the independent commission on the Future of Work. She is currently on the UK Government's Science and Technology Awards Committee.
In 2020, Climer was vice-president of the Royal Academy of Engineering.
Awards
Climer has been awarded honorary degrees from Huddersfield, Southampton Solent, Bradford and University of Wolverhampton.
In 2013, she was elected a Fellow of the Royal Academy of Engineering.
In 2014 she won the International Association of Broadcast Manufacturers (IABM) Broadcast Industry's Woman of the Year Award.
She was also named as one of the Top 50 Influential Women in Engineering in the UK by the Daily Telegraph and Women's Engineering Society (WES) 2016 and one of the top 50 most influential women in IT in the UK by Computer Weekly in 2015 and 2016.
In 2017 she presented the Higginson Lecture at Durham University.
She was appointed Commander of the Order of the British Empire (CBE) for services to the engineering profession in the 2018 Birthday Honours List.
Climer has one step-daughter.
References
External links
Independent September 2015
IET
1964 births
Living people
People from Uxbridge
21st-century British women engineers
British women engineers
Alumni of Imperial College London
BBC people
Commanders of the Order of the British Empire
Fellows of the Institution of Engineering and Technology
Fellows of the Royal Academy of Engineering
Female fellows of the Royal Academy of Engineering | Naomi Climer | Engineering | 782 |
73,262,813 | https://en.wikipedia.org/wiki/Cystolepiota%20amazonica | Cystolepiota amazonica is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was described in 1989 by the German mycologist Rolf Singer who classified it as Cystolepiota amazonica.
Description
Cystolepiota amazonica is a very small brownish mushroom with white flesh.
Cap: 3 mm wide and high, campanulate (bell-shaped). The surface is reddish-brown to light chestnut in colour; it is not hygrophanous or viscid, and is wrinkled (rugulose) or smooth with subsulcate striations at the margins. Gills: Free or narrowly adnexed, subconfluent; white but drying to pale or dirty brown. Stem: 1.2 cm tall and 0.8 mm thick, tapering slightly to a thinner apex. The surface is chestnut coloured and smooth, with white mycelium at the base. No stem ring was observed by Singer. Spores: Globose or subglobose; dextrinoid, cyanophilic, hyaline, not metachromatic; 2.5–2.8 x 2–2.2 μm. Basidia: 11–12.5 x 3.5–4.5 μm, four-spored. Smell: Indistinct.
Habitat and distribution
The specimens studied by Singer were found growing solitary on fallen, rotting leaves of Dicotyledon plants in the tropical forests of Brazil, 30 km North of Manaus.
References
Agaricaceae
Fungi described in 1989
Fungi of South America
Taxa named by Rolf Singer
Fungus species | Cystolepiota amazonica | Biology | 337 |
25,995,029 | https://en.wikipedia.org/wiki/Berberis%20%C3%97%20hortensis | Berberis × hortensis is an interspecific hybrid shrub. Its parents are Berberis oiwakensis (previously known as Mahonia lomariifolia) and Berberis japonica. It was raised in gardens during the 20th century, and has become an important garden and landscape plant.
Description
The hybrids show some variation, but are generally intermediate in most characteristics between the two parents. The following description is of the clone Charity.
These are medium to large shrubs. The plants have an upright form, becoming bare at the base. There are between 7 and 11 pairs of leaflets, plus a terminal leaflet. The flowers are in somewhat spreading racemes, often as long as in Berberis japonica. There is some scent to the flowers, but it is not as strong as in B. japonica. Flowering goes on throughout the winter.
Different clones may resemble one or the other parent more closely. It is possible that other species of Berberis have contributed to the stock ascribed to this hybrid; Berberis bealei is considered particularly likely to be one of these, as it is often confused with Berberis japonica. Many clones have an upright architectural form derived from M. oiwakensis subsp. lomariifolia, though some resemble the B. japonica parent rather more.
Plants provide viable seed, and second generation hybrids have been raised.
The plants are especially valued in the garden because of their ornamental leaves, and because they flower through the winter.
Taxonomy
The hybrid was first scientifically described and named as Mahonia × media by Christopher David Brickell in 1979. However, as part of the synonymization of Mahonia with the larger genus Berberis it was renamed Berberis × hortensis by David Mabberley in 2008.
Origin
The first recorded plant was found in a mixed batch of seedlings from Berberis oiwakensis sourced from mainland China that was raised in Northern Ireland in 1951 or earlier. This plant was given the cultivar name 'Charity' at the Savill Gardens, England, where it first flowered, and it has been widely cultivated under this name since. Other clones have since been described and distributed. The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
'Buckland'
'Lionel Fortescue'
'Winter Sun'
References
hortensis
Hybrid plants | Berberis × hortensis | Biology | 489 |
61,828,622 | https://en.wikipedia.org/wiki/Anopodium%20ampullaceum | Anopodium ampullaceum is a species of fungus first described by Nils Lundqvist in Sweden in 1964. A. ampullaceum became one of the first few fungi, along with Anopodium epile and Podospora dagonerii, to be placed in the new genus Anopodium, because their unique spores did not suit the description of the spores of the genus Podospora, to which P. dagonerii had previously belonged. The genus Anopodium deviates from other members of the class Sordariomycetes in two spore characteristics: first, the pedicels of its spores are in the apical position; second, its immature spores have spherical bodies with cylindrical apical regions. As of 1998, all three of these species are considered to be one species, under the name A. ampullaceum.
History
Anopodium ampullaceum was first discovered in 1964 by N. Lundqvist in Sweden. It was found first on blue hare dung and then on lemming dung, both in Sweden, and was later discovered on rabbit dung in Oise, France. Lundqvist originally believed A. ampullaceum to be a member of the genus Podospora. Upon further investigation of the A. ampullaceum spores, Lundqvist concluded that A. ampullaceum, along with the two other closely related species he was examining, Anopodium epile and Podospora dagonerii, all belonged in the newly formed genus Anopodium. These fungal spores begin as cylindrical or vermiform, hyaline, and non-septate; for this reason they cannot be considered Podospora spores. Lundqvist also believed that these three fungal species had evolved independently from a species closely related to the family Lasiosphaeria. Much later, in 1998, M.J. Richardson studied these three fungal species in depth, gathering samples from various regions of Sweden and France. Upon examination of spore characteristics such as spore length and width and ampullate hair presence, Richardson concluded that these three species can all be considered the same species, with the correct name being Anopodium ampullaceum.
Related Species
Anopodium ampullaceum is most closely related to, and most similar in morphology to, A. epile. A. ampullaceum and A. epile differ most in the presence or absence of ampullate hairs on the top region of the perithecium, the lengths and widths of their spores, and the pedicel shape of their spores. Despite these variations, A. ampullaceum and A. epile are still considered to be the same species under the name A. ampullaceum.
Appearance
Anopodium ampullaceum is most commonly characterized by spore pedicels that face upwards toward the apex of the ascus. Its spores are polar, with both an apical and a basal side. The A. ampullaceum perithecium, from which spores are discharged, is a non-stromatic, membrane-enclosed structure that is light in colour with a dark neck and is covered in hair. A. ampullaceum has filiform paraphyses. The ascus is uni-tunicate, with an invagination on the apical side and an apical ring that is barely visible. A. ampullaceum spores begin their cycle as a single cell with a spherical body and a cylindrical apical end. In the next stage of the cycle the spore becomes a two-celled structure, with the lower cell swelling into an ellipsoid shape and a dark brown upper cell. The pedicels of the spores have gelatinous bodies and ampullate hairs on the neck. The ampullate hairs of the pedicels were once considered determinants of A. ampullaceum identification, but after further examination of various samples it was decided that the presence of ampullate hairs is far too variable to be a marker of the species.
Ecology
Anopodium ampullaceum is commonly found on hare and rabbit dung, having initially been discovered on blue hare and lemming dung in Sweden. Prior to the merging of A. ampullaceum with A. epile and P. dagonerii, A. ampullaceum seemed to be restricted to leporid dung. By its modern name, A. ampullaceum has been found in dung samples from Sweden, France and the United Kingdom.
References
Lasiosphaeriaceae
Fungus species | Anopodium ampullaceum | Biology | 945 |
13,795,040 | https://en.wikipedia.org/wiki/Granat | The International Astrophysical Observatory "GRANAT" (usually known as Granat; , lit. pomegranate) was a Soviet (later Russian) space observatory developed in collaboration with France, Denmark and Bulgaria. It was launched on 1 December 1989 aboard a Proton rocket and placed in a highly eccentric four-day orbit, three days of which were devoted to observations. It operated for almost nine years.
In September 1994, after nearly five years of directed observations, the gas supply for its attitude control was exhausted and the observatory was placed in a non-directed survey mode. Transmissions finally ceased on 27 November 1998.
With seven different instruments on board, Granat was designed to observe the universe at energies ranging from X-ray to gamma ray. Its main instrument, SIGMA, was capable of imaging both hard X-ray and soft gamma-ray sources. The PHEBUS instrument was meant to study gamma-ray bursts and other transient X-ray sources. Other experiments such as ART-P were intended to image X-ray sources in the 35 to 100 keV range. One instrument, WATCH, was designed to monitor the sky continuously and alert the other instruments to new or interesting X-ray sources. The ART-S spectrometer covered the X-ray energy range while the KONUS-B and TOURNESOL experiments covered both the X-ray and gamma-ray spectrum.
Spacecraft
Granat was a three-axis-stabilized spacecraft and the last of the 4MV bus series produced by the Lavochkin Scientific Production Association. It was similar to the Astron observatory, which was operational from 1983 to 1989; for this reason, the spacecraft was originally known as Astron 2. It weighed 4.4 metric tons and carried almost 2.3 metric tons of international scientific instrumentation. Granat stood 6.5 m tall and had a total span of 8.5 m across its solar arrays. The power made available to the scientific instruments was approximately 400 W.
Launch and orbit
The spacecraft was launched on 1 December 1989 aboard a Proton-K from the Baikonur Cosmodrome in the Kazakh SSR. It was placed in a highly eccentric 98-hour orbit with an initial apogee/perigee of 202,480 km/1,760 km respectively and an inclination of 51.9 degrees. This meant that solar and lunar perturbations would significantly increase the orbit's inclination while reducing its eccentricity, such that the orbit had become considerably less eccentric by the time Granat completed its directed observations in September 1994. (By 1991, the perigee had increased to 20,000 km; by September 1994, the perigee/apogee was 59,025 km/144,550 km at an inclination of 86.7 degrees.)
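The quoted apogee/perigee figures can be turned into orbital eccentricities with the standard relation e = (r_a - r_p) / (r_a + r_p). The sketch below (added for illustration, not from the article) treats the quoted values as altitudes above a mean Earth radius of 6,371 km, which is an assumption; the point is the trend toward a less eccentric orbit.

```python
R_EARTH_KM = 6371.0  # mean Earth radius; assumed, treating figures as altitudes

def eccentricity(apogee_km: float, perigee_km: float) -> float:
    r_a = apogee_km + R_EARTH_KM   # altitude -> distance from Earth's center
    r_p = perigee_km + R_EARTH_KM
    return (r_a - r_p) / (r_a + r_p)

print(round(eccentricity(202_480, 1_760), 2))   # initial orbit: ~0.93
print(round(eccentricity(144_550, 59_025), 2))  # September 1994: ~0.40
```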
Three days out of the four-day orbit were devoted to observations. After over nine years in orbit, the observatory finally reentered the Earth's atmosphere on May 25, 1999.
Instrumentation
SIGMA
The hard X-ray and low-energy gamma-ray SIGMA telescope was a collaboration between CESR (Toulouse) and CEA (Saclay). It covered the energy range 35–1300 keV, with an effective area of 800 cm2 and a maximum sensitivity field of view of ~5°×5°. The maximum angular resolution was 15 arcmin. The energy resolution was 8% at 511 keV. Its imaging capabilities were derived from the association of a coded mask and a position sensitive detector based on the Anger camera principle.
ART-P
The ART-P X-ray telescope was the responsibility of the IKI in Moscow. The instrument covered the energy range 4 to 60 keV for imaging and 4 to 100 keV for spectroscopy and timing. There were four identical modules of the ART-P telescope, each consisting of a position sensitive multi-wire proportional counter (MWPC) together with a URA coded mask. Each module had an effective area of approximately 600 cm2, producing a field of view of 1.8° by 1.8°. The angular resolution was 5 arcmin; temporal and energy resolutions were 3.9 ms and 22% at 6 keV, respectively. The instrument achieved a sensitivity of 0.001 of the Crab nebula source (= 1 "mCrab") in an eight-hour exposure. The maximum time resolution was 4 ms.
ART-S
The ART-S X-ray spectrometer, also built by the IKI, covered the energy range 3 to 100 keV. Its field of view was 2° by 2°. The instrument consisted of four detectors based on spectroscopic MWPCs, making an effective area of 2,400 cm2 at 10 keV and 800 cm2 at 100 keV. The time resolution was 200 microseconds.
PHEBUS
The PHEBUS experiment was designed by CESR (Toulouse) to record high energy transient events in the range 100 keV to 100 MeV. It consisted of two independent detectors and their associated electronics. Each detector consisted of a bismuth germanate (BGO) crystal 78 mm in diameter by 120 mm thick, surrounded by a plastic anti-coincidence jacket. The two detectors were arranged on the spacecraft so as to observe 4π steradians. The burst mode was triggered when the count rate in the 0.1 to 1.5 MeV energy range exceeded the background level by 8 sigma in either 0.25 or 1.0 seconds. There were 116 energy channels.
WATCH
Starting in January 1990, four WATCH instruments, designed by the Danish Space Research Institute, were in operation on the Granat observatory. The instruments could localize bright sources in the 6 to 180 keV range to within 0.5° using a Rotation Modulation Collimator. Taken together, the instruments' three fields of view covered approximately 75% of the sky. The energy resolution was 30% FWHM at 60 keV. During quiet periods, count rates in two energy bands (6 to 15 and 15 to 180 keV) were accumulated for 4, 8, or 16 seconds, depending on onboard computer memory availability. During a burst or transient event, count rates were accumulated with a time resolution of 1 second per 36 energy channels.
KONUS-B
The KONUS-B instrument, designed by the Ioffe Physico-Technical Institute in St. Petersburg, consisted of seven detectors distributed around the spacecraft that responded to photons of 10 keV to 8 MeV energy. They consisted of NaI(Tl) scintillator crystals 200 mm in diameter by 50 mm thick behind a Be entrance window. The side surfaces were protected by a 5 mm thick lead layer. The burst detection threshold was 500 to 50 microjoules per square meter (0.5 to 0.05 erg/cm2), depending on the burst spectrum and rise time. Spectra were taken in two 31-channel pulse height analyzers (PHAs), of which the first eight channels were measured with 1/16 s time resolution and the remaining with variable time resolutions depending on the count rate. The range of resolutions covered 0.25 to 8 s.
The KONUS-B instrument operated from 11 December 1989 until 20 February 1990. Over that period, the "on" time for the experiment was 27 days. Some 60 solar flares and 19 cosmic gamma-ray bursts were detected.
TOURNESOL
The French TOURNESOL instrument consisted of four proportional counters and two optical detectors. The proportional counters detected photons between 2 keV and 20 MeV in a 6° by 6° field of view. The visible detectors had a field of view of 5° by 5°. The instrument was designed to look for optical counterparts of high-energy burst sources, as well as performing spectral analysis of the high-energy events.
Science results
Over the initial four years of directed observations, Granat observed many galactic and extra-galactic X-ray sources with emphasis on the deep imaging and spectroscopy of the Galactic Center, broad-band observations of black hole candidates, and X-ray novae. After 1994, the observatory was switched to survey mode and carried out a sensitive all-sky survey in the 40 to 200 keV energy band.
Some of the highlights included:
A very deep imaging (more than 5 million seconds duration) of the Galactic Center region.
Discovery of electron-positron annihilation lines from the galactic microquasar 1E1740-294 and the X-ray Nova Muscae.
Study of spectra and time variability of black hole candidates.
Across eight years of observations, Granat discovered some twenty new X-ray sources, i.e. candidate black holes and neutron stars. Consequently, their designations begin with "GRS" meaning "GRANAT source". Examples are GRS 1915+105 (the first microquasar discovered in our galaxy) and GRS 1124-683.
Impact of the dissolution of the Soviet Union
After the end of the Soviet Union, two problems arose for the project. The first was geopolitical in nature: the main spacecraft control center was located at the Yevpatoria facility in the Crimea region. This control center was significant in the Soviet space program, being one of only two in the country equipped with a 70 m RT-70 dish antenna. With the breakup of the Union, the Crimea region found itself part of the newly independent Ukraine and the center was put under Ukrainian national control, prompting new political hurdles.
The main and most urgent problem, however, was in finding funds to support the continued operation of the spacecraft amid the spending crunch in post-Soviet Russia. The French space agency, having already contributed significantly to the project (both scientifically and financially), took upon itself to fund the continuing operations directly.
See also
Astron, a previous space observatory based on the Venera spacecraft.
Spektr-RG
References
External links
Official GRANAT Observatory homepages: English Russian
NASA's HEASARC – Observatories – Granat
Encyclopedia Astronautica: On This Day
Global Telescope Network: Granat
Gunter's Space Page: Granat (Astron 2)
Soviet space observatories
Space telescopes
Gamma-ray telescopes
X-ray telescopes
Science and technology in the Soviet Union
1989 in the Soviet Union
Observational astronomy
France–Soviet Union relations
Bulgaria–Soviet Union relations
Spacecraft launched in 1989
4MV
Denmark–Soviet Union relations | Granat | Astronomy | 2,111 |
21,906,468 | https://en.wikipedia.org/wiki/Mirror%20%28multimedia%20project%29 | MIRROR is a multimedia project created by Canadian singer-songwriter Thomas Anselmi, former singer for Copyright and Slow. Formed in 2003, the project is based in Los Angeles, California, and Vancouver, British Columbia, Canada.
Album
The Mirror album was released in 2009. It differs from Anselmi's prior punk and alt rock styles, containing semi-orchestral and cinematic elements. Anselmi says he was influenced by the soundtracks of David Lynch films and by the aesthetics of pop music and television variety shows, particularly German Schlager.
The album was produced by former Grapes of Wrath band member, Vincent Jones. Anselmi and Jones were able to secure Depeche Mode singer Dave Gahan for the lead track, "Nostalgia". Additional performers and collaborators include singer Laure-Elaine, actress Frances Lawson, actor Joe Dallesandro, painter/actor Ronan Boyle, synthesist Phil Western, pianist Mike Garson, guitarist Knox Chandler, media artist and musician Haig Armen, and film director Sean Starke.
Video
A video for the opening track, "Nostalgia", was filmed on location in Poland and the Czech Republic. It combines archival footage with a live action performance by Gahan. The video was directed by Sean Starke.
The second video is "Greetings From Nowhere". It stars Frances Lawson performing the album's second song, "Nowhere". It was directed by Thomas Anselmi and Sean Starke, and filmed at the Salton Sea, California. The video contains a monologue not available on the album.
References
External links
Official Mirror Website
Interview with Thomas Anselmi at SideLine.com
Interview with Thomas Anselmi at SuicideGirls.com
Multimedia works | Mirror (multimedia project) | Technology | 348 |
8,207,052 | https://en.wikipedia.org/wiki/5-HT3%20receptor | {{DISPLAYTITLE:5-HT3 receptor}}The 5-HT3 receptor belongs to the Cys-loop superfamily of ligand-gated ion channels (LGICs) and therefore differs structurally and functionally from all other 5-HT receptors (5-hydroxytryptamine, or serotonin receptors), which are G protein-coupled receptors. This ion channel is cation-selective and mediates neuronal depolarization and excitation within the central and peripheral nervous systems.
As with other ligand-gated ion channels, the 5-HT3 receptor consists of five subunits arranged around a central ion-conducting pore, which is permeable to sodium (Na+), potassium (K+), and calcium (Ca2+) ions. Binding of the neurotransmitter 5-hydroxytryptamine (serotonin) to the 5-HT3 receptor opens the channel, which, in turn, leads to an excitatory response in neurons. The rapidly activating, desensitizing, inward current is predominantly carried by sodium and potassium ions. 5-HT3 receptors have a negligible permeability to anions. They are most closely related by homology to the nicotinic acetylcholine receptor.
Structure
The 5-HT3 receptor differs markedly in structure and mechanism from the other 5-HT receptor subtypes, which are all G-protein-coupled. A functional channel may be composed of five identical 5-HT3A subunits (homopentameric) or a mixture of 5-HT3A and one of the other four subunits, 5-HT3B, 5-HT3C, 5-HT3D, or 5-HT3E (heteropentameric). It appears that only the 5-HT3A subunits form functional homopentameric channels; all other subunit subtypes must heteropentamerize with 5-HT3A subunits to form functional channels. Additionally, no pharmacological difference has so far been found between the heteromeric 5-HT3AC, 5-HT3AD, and 5-HT3AE receptors and the homomeric 5-HT3A receptor. N-terminal glycosylation of receptor subunits is critical for subunit assembly and plasma membrane trafficking. The subunits surround the central ion channel in a pseudo-symmetric manner. Each subunit comprises an extracellular N-terminal domain, which contains the orthosteric ligand-binding site; a transmembrane domain consisting of four interconnected alpha helices (M1-M4), with the extracellular M2-M3 loop involved in the gating mechanism; a large cytoplasmic domain between M3 and M4, involved in receptor trafficking and regulation; and a short extracellular C-terminus. Whereas the extracellular domain is the site of action of agonists and competitive antagonists, the transmembrane domain contains the central ion pore, the receptor gate, and the principal selectivity filter that allows ions to cross the cell membrane.
Human and mouse genes
The genes encoding human 5-HT3 receptors are located on chromosomes 11 (HTR3A, HTR3B) and 3 (HTR3C, HTR3D, HTR3E), so it appears that they have arisen from gene duplications. The genes HTR3A and HTR3B encode the 5-HT3A and 5-HT3B subunits, while HTR3C, HTR3D and HTR3E encode the 5-HT3C, 5-HT3D and 5-HT3E subunits. HTR3C and HTR3E do not seem to form functional homomeric channels, but when co-expressed with HTR3A they form heteromeric complexes with decreased or increased 5-HT efficacies. The pathophysiological role of these additional subunits has yet to be identified.
The human 5-HT3A receptor gene is similar in structure to the mouse gene which has 9 exons and is spread over ~13 kb. Four of its introns are exactly in the same position as the introns in the homologous α7-acetylcholine receptor gene, clearly showing their evolutionary relationship.
Expression. The 5-HT3C, 5-HT3D and 5-HT3E genes tend to show a peripherally restricted pattern of expression, with high levels in the gut. In human duodenum and stomach, for example, 5-HT3C and 5-HT3E mRNA levels may be greater than those for 5-HT3A and 5-HT3B.
Polymorphism. In patients treated with chemotherapeutic drugs, certain polymorphisms of the HTR3B gene could predict successful antiemetic treatment. This could indicate that the 5-HT3B receptor subunit could be used as a biomarker of antiemetic drug efficacy.
Tissue distribution
The 5-HT3 receptor is expressed throughout the central and peripheral nervous systems and mediates a variety of physiological functions. On a cellular level, it has been shown that postsynaptic 5-HT3 receptors mediate fast excitatory synaptic transmission in rat neocortical interneurons, amygdala, and hippocampus, and in ferret visual cortex. 5-HT3 receptors are also present on presynaptic nerve terminals. There is some evidence for a role in the modulation of neurotransmitter release, but the evidence is inconclusive.
Effects
When agonists activate the receptor and open the ion channel, the following effects are observed:
CNS: nausea and vomiting center in brain stem, anxiety, as well as anticonvulsant and pro-nociceptive activity.
PNS: neuronal excitation (in autonomic, nociceptive neurons), emesis.
Agonists
Agonists for the receptor include:
Cereulide
2-methyl-5-HT
Alpha-Methyltryptamine
Bufotenin
Chlorophenylbiguanide
Ethanol
Ibogaine
Phenylbiguanide
Quipazine
RS-56812 (a potent and selective 5-HT3 partial agonist, with 1000-fold selectivity over other serotonin receptors)
SR-57227
Varenicline
YM-31636
S 21007 (SAR cf. CGS-12066A)
Antagonists
Antagonists for the receptor (sorted by their respective therapeutic application) include:
Antiemetics
AS-8112
Granisetron
Ondansetron
Tropisetron
Gastroprokinetics
Alosetron
Batanopride
Metoclopramide (high doses)
Renzapride
Zacopride
M1, the major active metabolite of mosapride
Antidepressants
Bupropion
Mianserin
Mirtazapine
Vortioxetine
Antipsychotics
Clozapine
Olanzapine
Quetiapine
Antimalarials
Quinine
Chloroquine
Mefloquine
Others
3-Tropanyl indole-3-carboxylate
Cannabidiol (CBD)
Delta-9-Tetrahydrocannabinol
Lamotrigine (epilepsy and bipolar disorder)
Memantine (Alzheimer's disease medication)
Menthol
Thujone
Positive Allosteric Modulators
These agents are not themselves agonists at the receptor, but increase the affinity or efficacy of the receptor for an agonist:
Indole Derivatives
5-chloroindole
Small Organic Anaesthetics
Ethanol
Chloroform
Halothane
Isoflurane
Discovery
Identification of the 5-HT3 receptor did not take place until 1986 because selective pharmacological tools were lacking. However, the discovery that the 5-HT3 receptor plays a prominent role in chemotherapy- and radiotherapy-induced vomiting, together with the concomitant development of selective 5-HT3 receptor antagonists to suppress these side effects, aroused intense interest from the pharmaceutical industry, and the identification of 5-HT3 receptors in cell lines and native tissues quickly followed.
See also
5-HT1 receptor
5-HT2 receptor
5-HT4 receptor
5-HT5 receptor
5-HT6 receptor
5-HT7 receptor
References
External links
Ion channels
Serotonin receptors | 5-HT3 receptor | Chemistry | 1,786 |
3,126,455 | https://en.wikipedia.org/wiki/Inverse%20search | Inverse search (also called "reverse search") is a feature of some non-interactive typesetting programs, such as LaTeX and GNU LilyPond. These programs read an abstract, textual definition of a document as input and convert it into a graphical format such as DVI or PDF. In a windowing system, this typically means that the source code is entered in one editor window and the resulting output is viewed in a different output window. Inverse search means that a graphical object in the output window works as a hyperlink that brings the user back to the line and column in the editor where the clicked object was defined. The inverse search feature is particularly useful during proofreading.
Implementations
In TeX and LaTeX, the package srcltx provides an inverse search feature through DVI output files (e.g., with yap or Xdvi), while vpe, pdfsync and SyncTeX, among other techniques, provide similar functionality for PDF output. The Comparison of TeX editors has a column on support of inverse search; most of them now provide it.
GNU LilyPond provides an inverse search feature through PDF output files since version 2.6. The program calls this feature point-and-click.
Many integrated development environments for programming use inverse search to display compilation error messages, and during debugging when a breakpoint is hit.
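Conceptually, each of these implementations records, at typesetting time, a map from output coordinates back to source positions; inverse search is then a reverse lookup in that map. The following Python sketch is only a toy model of the idea, with names of our own choosing; real formats such as SyncTeX are considerably richer.

```python
from bisect import bisect_right

class SyncMap:
    """Toy source-to-output map from (page, y) to a source line (illustrative only)."""

    def __init__(self) -> None:
        self.records: list[tuple[int, float, int]] = []  # kept sorted

    def record(self, page: int, y: float, source_line: int) -> None:
        """Called by the 'typesetter' whenever a source line produces output."""
        self.records.append((page, y, source_line))
        self.records.sort()

    def inverse_search(self, page: int, y: float) -> int:
        """Map a click at (page, y) back to the nearest source line at or above it."""
        i = bisect_right(self.records, (page, y, float("inf")))
        return self.records[max(i - 1, 0)][2]

m = SyncMap()
m.record(page=1, y=72.0, source_line=10)
m.record(page=1, y=96.0, source_line=14)
print(m.inverse_search(page=1, y=100.0))  # -> 14
```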
References
Bibliography
Jérôme Laurens, "Direct and reverse synchronization with SyncTeX", TUGboat 29(3), 2008, pp. 365–371, PDF (532 KB), including an overview of synchronization techniques with TeX
External links
How to set up inverse search with xdvi
Software development | Inverse search | Technology,Engineering | 354 |
1,406,274 | https://en.wikipedia.org/wiki/Pergola | A pergola is most commonly an outdoor garden feature forming a shaded walkway, passageway, or sitting area of vertical posts or pillars that usually support crossbeams and a sturdy open lattice, often upon which woody vines are trained. The origin of the word is the Late Latin pergula, referring to a projecting eave.
A pergola may also be an extension of a building, or serve as protection for an open terrace or a link between pavilions. Pergolas are different from green tunnels, a green tunnel being a type of road under a canopy of trees.
Depending on the context, the terms "pergola", "bower", and "arbor" are often used interchangeably. An "arbor" is also regarded as being a wooden bench seat with a roof, usually enclosed by lattice panels forming a framework for climbing plants; in evangelical Christianity, brush arbor revivals occur under such structures. A pergola, on the other hand, is a much larger and more open structure. Normally, a pergola does not include integral seating.
Modern pergola structures can also include architectural or engineering structures of pergola design that are not used in gardens. California High-Speed Rail, for instance, uses large concrete pergolas to support high-speed rail guideways where they cross over roadways or other rail tracks at shallow angles (unlike bridges or overcrossings, which usually cross nearly at right angles).
Description
Features and types
Pergolas may link pavilions or extend from a building's door to an open garden feature such as an isolated terrace or pool. Freestanding pergolas, those not attached to a home or other structure, provide a sitting area that allows for breeze and light sun, but offer protection from the harsh glare of direct sunlight.
Pergolas also give climbing plants a structure on which to grow.
In 1498, Leonardo da Vinci decorated the Sala delle Asse of the Castello Sforzesco in Milan to give the illusion of the great square and vaulted reception hall being within a pergola that was made up of the intertwined branches of sixteen huge mulberry trees. The novel project was commissioned by the Duke of Milan, Ludovico Sforza.
Green tunnels
Pergolas are more permanent architectural features than the green tunnels of late medieval and early Renaissance gardens, which often were formed of springy withies—easily replaced shoots of willow or hazel—bound together at the heads to form a series of arches, then loosely woven with long slats on which climbers were grown, to make a passage that was cool, shaded, and moderately dry in a shower.
At the Medici villa, La Petraia, inner and outer curving segments of such green walks, the forerunners of pergolas, give structure to the pattern that can be viewed from the long terrace above it.
History
Origin
The origin of the word is the Late Latin pergula, referring to a projecting eave. The English term was borrowed from Italian. The term was mentioned in an Italian context in 1645 by John Evelyn at the cloister of Trinità dei Monti in Rome. He used the term in an English context in 1654 when, in the company of the fifth Earl of Pembroke, Evelyn watched the coursing of hares from a "pergola" built on the downs near Salisbury for that purpose.
Historical gardens
The clearly artificial nature of the pergola made it fall from favor in the naturalistic gardening styles of the eighteenth and nineteenth centuries. Yet handsome pergolas on brick and stone pillars with powerful cross-beams were a feature of the gardens designed in the late nineteenth and early twentieth centuries by Sir Edwin Lutyens and Gertrude Jekyll and epitomize their trademark of firm structure luxuriantly planted. A particularly extensive pergola is featured at the gardens of The Hill in Hampstead (London), designed by Thomas Mawson for his client W. H. Lever. The Pergola in Wrocław was designed in 1911 and became a UNESCO World Heritage Site in 2006.
Modern pergolas
Modern pergolas are built from materials including wood, vinyl, fiberglass, aluminum, and chlorinated polyvinyl chloride (CPVC) rather than brick or stone pillars; these materials are more affordable and are increasing in popularity. Wooden pergolas are made either from a weather-resistant wood, such as western red cedar (Thuja plicata) or, formerly, coast redwood (Sequoia sempervirens). They are painted, stained, or built from wood treated with preservatives for outdoor use. For a low-maintenance alternative to wood, the contemporary materials of vinyl, fiberglass, aluminum, and CPVC can be used. These materials do not require yearly paint or stain as a wooden pergola would, and their manufacture can make them even stronger and longer-lasting than a wooden pergola. Pergolas made from these contemporary materials can also be motorized to open and close.
See also
Breezeway
Brise soleil
Latticework
Patio
Trellis (architecture)
Vine training systems
References
External links
Garden features
Gardening aids
Architectural elements
Outdoor recreation | Pergola | Technology,Engineering | 1,065 |
7,151,375 | https://en.wikipedia.org/wiki/Space-oblique%20Mercator%20projection | Space-oblique Mercator projection is a map projection devised in the 1970s for preparing maps from Earth-survey satellite data. It is a generalization of the oblique Mercator projection that incorporates the time evolution of a given satellite ground track to optimize its representation on the map. The oblique Mercator projection, on the other hand, optimizes for a given geodesic.
History
The space-oblique Mercator projection (SOM) was developed by John P. Snyder, Alden Partridge Colvocoresses and John L. Junkins in 1976. Snyder had an interest in maps dating back to his childhood; he regularly attended cartography conferences while on vacation. In 1972, the United States Geological Survey (USGS) needed to develop a system for reducing the amount of distortion caused when satellite pictures of the ellipsoidal Earth were printed on a flat page. Colvocoresses, the head of the USGS's national mapping program, asked attendees of a geodetic sciences conference for help solving the projection problem in 1976. Snyder worked on the problem with his newly purchased pocket calculator and devised the mathematical formulas needed to solve it. After submitting his calculations to Waldo Tobler for review, Snyder submitted them to the USGS at no charge. Impressed with his work, USGS officials offered Snyder a job, and he promptly accepted. His formulas were then used to produce maps from Landsat 4, which launched in the summer of 1978.
Projection description
The space-oblique Mercator projection provides continual, nearly conformal mapping of the swath sensed by a satellite. Scale is true along the ground track, varying by 0.01 percent within the normal sensing range of the satellite. Conformality is correct within a few parts per million for the sensing range. Distortion is essentially constant along lines of constant distance parallel to the ground track. The space-oblique Mercator is the only projection that takes the rotation of the Earth into account.
Equations
The forward equations for the space-oblique Mercator projection for the sphere are given in Snyder's 1981 paper (see References).
References
John Hessler, Projecting Time: John Parr Snyder and the Development of the Space Oblique Mercator Projection, Library of Congress, 2003
Snyder's 1981 Paper Detailing the Projection's Derivation
Map projections | Space-oblique Mercator projection | Mathematics | 465 |
17,975,363 | https://en.wikipedia.org/wiki/HD%2040307 | HD 40307 is an orange (K-type) main-sequence star located approximately 42 light-years away in the constellation of Pictor (the Easel), taking its primary name from its Henry Draper Catalogue designation. It is calculated to be slightly less massive than the Sun. The star has six known planets, three discovered in 2008 and three more in 2012. One of them, HD 40307 g, is a potential super-Earth in the habitable zone, with an orbital period of about 200 days. This object might be capable of supporting liquid water on its surface, although much more information must be acquired before its habitability can be assessed.
No stellar companions to HD 40307 were detected as of 2018.
History and nomenclature
HD 40307 was observed during or before 1900 as part of the Cape Photographic Durchmusterung. The designation HD 40307 is from the Henry Draper Catalogue, which is based on spectral classifications made between 1911 and 1915 by Annie Jump Cannon and her co-workers, and was published between 1918 and 1924.
Characteristics
As a K-type star, HD 40307 emits orange-tinted light. It has only about three-quarters of the Sun's radius and mass, and its measured temperature is cooler than that of the Sun, as expected for its spectral type.
The astronomers who discovered the planets orbiting HD 40307 suggested that the metallicities of stars determine whether or not the planetary bodies that orbit them will be terrestrial, like Earth, or gaseous, like Jupiter and Saturn.
Distance and visibility
Despite its relative proximity to the Sun at 42 light-years, HD 40307 is not visible to the naked eye, given its apparent magnitude of 7.17. It came within 6.4 light-years of the Sun about 413,000 years ago.
Planetary system
A planetary system around HD 40307 contains four confirmed planets and two other possible planets, all orbiting close to the star.
After spending five years observing the star, the European Organisation for Astronomical Research in the Southern Hemisphere (ESO) announced that they had discovered three super-Earths in orbit around HD 40307 in June 2008. All three planets were detected by the radial velocity method, using the HARPS spectrograph system.
In 2012, an independent analysis carried out by a team of astronomers led by Mikko Tuomi of the University of Hertfordshire confirmed the existence of these planets and found an additional three planets in the system. The planet HD 40307 f, on a 51.6-day orbit, was confirmed in 2015, with inconclusive evidence for the planets HD 40307 e and HD 40307 g.
Five of the planets orbit very close to the star; the farthest of them lies at half the distance from HD 40307 that Mercury is from the Sun. The outermost planet orbits at a distance similar to that of Venus from the Sun and is situated well within the system's liquid-water habitable zone.
The minimum masses of the planets in the system range from three to ten times the mass of the Earth, placing them somewhere between Earth and ice giants like Uranus and Neptune. Dynamical analysis of the innermost planets suggests that planet b is unstable at its age unless it is an ice giant that migrated from further away. That implies the same for the other planets, which lie even further out. The most recent discovery also indicates, via dynamical analysis, that the true planetary masses cannot be much higher than their minimum masses.
See also
List of multiplanetary systems
List of extrasolar planets
Other stars with planets discovered in June 2008:
HD 181433
HD 47186
MOA-2007-BLG-192L
Notes
References
External links
A Trio of Super-Earths: A harvest of low-mass exoplanets discovered with HARPS, press release, European Southern Observatory, ESO 19/08, June 16, 2008.
040307
027887
2046
CD-60 01303
K-type main-sequence stars
HD, 040307
Pictor
Planetary systems with four confirmed planets
J05540421-6001245 | HD 40307 | Astronomy | 832 |
76,628,545 | https://en.wikipedia.org/wiki/NGC%203647 | NGC 3647 is a small elliptical galaxy in the constellation Leo. It was discovered on March 22, 1865, by the German astronomer Albert Marth. It is approximately 747 million light-years away. Because of its close proximity to five other elliptical galaxies, there has been some confusion in identifying which object is NGC 3647.
According to SIMBAD, it is identified as PGC 34816, but in HyperLeda and the NASA/IPAC databases, NGC 3647 is identified as PGC 34815. The designation used for this article, according to sources from Wikidata, is PGC 34816. There is no evidence as to whether this galaxy has an active nucleus or not.
References
Elliptical galaxies
Galaxies discovered in 1865
3647
Leo (constellation)
34816
J11213813+0254119
Astronomical objects discovered in 1865
Discoveries by Albert Marth | NGC 3647 | Astronomy | 185 |
18,254,249 | https://en.wikipedia.org/wiki/Load%20factor%20%28electrical%29 | In electrical engineering the load factor is defined as the average load divided by the peak load in a specified time period. It is a measure of the utilization rate, or efficiency of electrical energy usage; a high load factor indicates that load is using the electric system more efficiently, whereas consumers or generators that underutilize the electric distribution will have a low load factor.
An example, using a large commercial electrical bill:
peak demand = 436 kW
use = 57,200 kWh
number of days in billing cycle = 30
Hence:
load factor = ( [ 57,200 kWh / { 30 days × 24 hours/day } ] / 436 kW ) × 100% = 18.22%
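The same calculation can be expressed as a minimal Python sketch (the function and parameter names are our own):

```python
def load_factor(use_kwh: float, peak_kw: float, days: float) -> float:
    """Average load over the billing period divided by the peak load."""
    average_kw = use_kwh / (days * 24)  # energy spread over the period's hours
    return average_kw / peak_kw

print(f"{load_factor(use_kwh=57_200, peak_kw=436, days=30):.2%}")  # 18.22%
```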
The load factor can be derived from the load profile of the specific device or system of devices. Its value is always less than one because maximum demand is never lower than average demand, since facilities seldom operate at full capacity for the duration of an entire 24-hour day. A high load factor means power usage is relatively constant; a low load factor shows that occasionally a high demand is set. To serve that peak, capacity sits idle for long periods, thereby imposing higher costs on the system. Electrical rates are designed so that customers with a high load factor are charged less overall per kWh. This process, along with others, is called load balancing or peak shaving.
The load factor is closely related to and often confused with the demand factor.
The major difference to note is that the denominator in the demand factor is fixed depending on the system. Because of this, the demand factor cannot be derived from the load profile but needs the addition of the full load of the system in question.
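For contrast, here is a sketch of the demand factor, whose denominator is the fixed full (connected) load of the installation rather than anything derived from the load profile; the 1,200 kW connected load below is purely illustrative:

```python
def demand_factor(peak_kw: float, connected_load_kw: float) -> float:
    """Peak demand divided by the total connected load of the system."""
    return peak_kw / connected_load_kw

print(f"{demand_factor(peak_kw=436, connected_load_kw=1_200):.2%}")  # 36.33%
```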
See also
Availability factor
Capacity factor
Demand factor
Diversity factor
Utilization factor
References
Power engineering | Load factor (electrical) | Engineering | 329 |
10,705,807 | https://en.wikipedia.org/wiki/Gillham%20code | Gillham code is a zero-padded 12-bit binary code using a parallel nine- to eleven-wire interface, the Gillham interface, that is used to transmit uncorrected barometric altitude between an encoding altimeter or analog air data computer and a digital transponder. It is a modified form of a Gray code and is sometimes referred to simply as a "Gray code" in avionics literature.
History
The Gillham interface and code are an outgrowth of the 12-bit IFF Mark X system, which was introduced in the 1950s. The civil transponder interrogation modes A and C were defined in air traffic control (ATC) and secondary surveillance radar (SSR) in 1960.
The code is named after Ronald Lionel Gillham, a signals officer at Air Navigational Services, Ministry of Transport and Civil Aviation, who had been appointed a civil member of the Most Excellent Order of the British Empire (MBE) in the Queen's 1955 Birthday Honours. He was the UK's representative to the International Air Transport Association (IATA) committee developing the specification for the second generation of air traffic control system, known in the UK as "Plan Ahead", and is said to have had the idea of using a modified Gray code. The final code variant was developed in late 1961 for the ICAO Communications Division meeting (VII COM) held in January/February 1962, and described in a 1962 FAA report. The exact timeframe and circumstances of the term Gillham code being coined are unclear, but by 1963 the code was already recognized under this name. By the mid-1960s the code was also known as MOA–Gillham code or ICAO–Gillham code. ARINC 572 specified the code as well in 1968.
Once recommended by the ICAO for automatic height transmission for air traffic control purposes, the interface is now discouraged and has been mostly replaced by modern serial communication in newer aircraft.
Altitude encoder
An altitude encoder takes the form of a small metal box containing a pressure sensor and signal conditioning electronics. The pressure sensor is often heated, which requires a warm-up time during which height information is either unavailable or inaccurate. Older style units can have a warm-up time of up to 10 minutes; more modern units warm up in less than 2 minutes. Some of the very latest encoders incorporate unheated 'instant on' type sensors. During the warm-up of older style units the height information may gradually increase until it settles at its final value. This is not normally a problem as the power would typically be applied before the aircraft enters the runway and so it would be transmitting correct height information soon after take-off.
The encoder has an open-collector output, compatible with 14 V or 28 V electrical systems.
Coding
The height information is represented as 11 binary digits in a parallel form using 11 separate lines designated D2 D4 A1 A2 A4 B1 B2 B4 C1 C2 C4. As a twelfth bit, the Gillham code contains a D1 bit but this is unused and consequently set to zero in practical applications.
Different classes of altitude encoder do not use all of the available bits. All use the A, B and C bits; increasing altitude limits require more of the D bits. Up to and including 30700 ft does not require any of the D bits (9-wire interface). This is suitable for most light general aviation aircraft. Up to and including 62700 ft requires D4 (10-wire interface). Up to and including 126700 ft requires D4 and D2 (11-wire interface). D1 is never used.
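These ceilings follow directly from the bit arithmetic. A quick check in Python, using the decoding relationships described under Decoding below (a sketch, not a normative table):

```python
# n Gray-code bits give a highest 500 ft count of 2**n - 1; the top 100 ft
# state (5) and the -1300 ft offset come from the decoding rules below.
for wires, bits in ((9, 6), (10, 7), (11, 8)):
    print(f"{wires}-wire: {(2**bits - 1) * 500 + 5 * 100 - 1300} ft")
# 9-wire: 30700 ft, 10-wire: 62700 ft, 11-wire: 126700 ft
```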
Decoding
Bits D2 (msbit) through B4 (lsbit) encode the pressure altitude in 500 ft increments (above a base altitude of −1000±250 ft) in a standard 8-bit reflected binary code (Gray code). The specification stops at code 10000000 (126500±250 ft), above which D1 would be needed as a most significant bit.
Bits C1, C2 and C4 use a mirrored 5-state 3-bit Gray BCD code of a Giannini Datex code type (with the first 5 states resembling O'Brien code type II) to encode the offset from the 500 ft altitude in 100 ft increments. Specifically, if the parity of the 500 ft code is even then codes 001, 011, 010, 110 and 100 encode −200, −100, 0, +100 and +200 ft relative to the 500 ft altitude. If the parity is odd, the assignments are reversed. Codes 000, 101 and 111 are not used.
The Gillham code can be decoded using various methods. Standard techniques use hardware or software solutions. The latter often uses a lookup table but an algorithmic approach can be taken.
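As an illustration of the algorithmic approach, here is a minimal Python sketch (our own illustration, not avionics-grade code): the eight D/A/B bits are Gray-decoded to a 500 ft count, the three C bits to a 100 ft sub-increment, applying the parity reversal and the −1000±250 ft base described above.

```python
def gray_to_binary(gray: int) -> int:
    """Convert a reflected-binary (Gray) coded integer to plain binary."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

def gillham_to_feet(d2, d4, a1, a2, a4, b1, b2, b4, c1, c2, c4):
    """Decode one Gillham altitude word (D1 is always zero) to feet."""
    # D2..B4 form an 8-bit Gray code counting 500 ft increments.
    n500 = gray_to_binary(
        (d2 << 7) | (d4 << 6) | (a1 << 5) | (a2 << 4)
        | (a4 << 3) | (b1 << 2) | (b2 << 1) | b4
    )
    # C1 C2 C4 form a 3-bit Gray code for the 100 ft sub-increment.
    n100 = gray_to_binary((c1 << 2) | (c2 << 1) | c4)
    if n100 in (0, 5, 6):   # raw decodes of the unused codes 000, 111, 101
        raise ValueError("invalid C-bit pattern")
    if n100 == 7:           # Gray 100 decodes to 7; it is the fifth state
        n100 = 5
    if n500 & 1:            # odd 500 ft parity: 100 ft assignments reverse
        n100 = 6 - n100
    return n500 * 500 + n100 * 100 - 1300

# D2 D4 A1 A2 A4 B1 B2 B4 C1 C2 C4 = 0 0 0 0 0 0 1 1 0 1 0 decodes to 0 ft:
print(gillham_to_feet(0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0))
```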
See also
Air traffic control radar beacon system (ATCRBS)
Selective Identification Feature (SIF)
IFF code
Flight level
ARINC 429
Notes
References
Further reading
Annex 10 - Volume IV - Surveillance Radar and Collision Avoidance Systems ; 4th Edition; ICAO; 280 pages; 2007.
DO-181E Minimum Operational Performance Standards for ATCRBS / Mode S Airborne Equipment; Rev E; RTCA; 2011.
Data transmission
Avionics | Gillham code | Technology | 1,105 |
5,422,424 | https://en.wikipedia.org/wiki/Chorobates | The chorobates, described by Vitruvius in Book VIII of the De architectura, was used to measure horizontal planes and was especially important in the construction of aqueducts.
Similar to modern spirit levels, the chorobates consisted of a wooden beam 6 m in length held by two supporting legs and equipped with plumb lines at each end. The legs were joined to the beam by two diagonal rods with carved notches. If the notches corresponding to the plumb lines matched on both sides, it showed that the beam was level. On top of the beam, a groove or channel was carved. If conditions were too windy for the plumb bobs to work effectively, the surveyor could pour water into the groove and measure the plane by checking the water level.
Isaac Moreno Gallo's interpretation of the chorobates
Isaac Moreno Gallo, a technical engineer of public works specializing in Ancient Rome's civil engineering, claims that the present-day representation of the chorobates (in a table-like shape) is mistaken, owing to a misinterpretation derived from an incorrect translation of the Latin term "ancones" used by Vitruvius: "...ea habet ancones in capitibus extremis aequali modo perfectos inque regulae capitibus ad nomam coagmentatos..." In this context, "ancones" could be translated as "limbs" or "arms", but also as "ménsulas" (brackets or corbels). According to Moreno, this vertical design is far more efficient for optical leveling and makes more sense from a topographer's point of view. Furthermore, it preserves the original length described by Vitruvius (20 feet, or 5.92 meters) that the table-like chorobates versions persistently seem to ignore.
This "vertical" chorobates was indeed the predominant interpretation of the chorobates in the oldest representations recorded: 1547's engravings included in Jean Goujon's translation of Vitrubius works into French. Or 1582's Miguel de Urrea's first edition of Vitrubius works, this time in Spanish. And few years later, when Juan de Lastanosa published “The Twenty-One Books of Engineering and Machines" of Gianello della Torre”.
All three consistently represented the chorobates in a very similar way, until Claude Perrault's translations of 1673 radically altered the vertically shaped stand with "ménsulas" and turned it into the horizontal table-like chorobates (with "legs" instead of "brackets") that has become the standard representation nowadays.
In his "Ars Mensoria" series, Isaac Moreno Gallo recreates practical demonstrations of Roman topographic instruments using his own replicas. Among them, the chorobates.
See also
Groma
Dioptra
Chorography
Odometer
References
M. J. T. Lewis, Surveying Instruments of Greece and Rome, Cambridge University Press, 2001, p. 31. Isaac Moreno Gallo, Topografía Romana, 2004, p. 43.
Surveying and engineering in Ancient Rome
Chorobates described
Measuring instruments
Surveying
Ancient Greece
Ancient Roman architecture | Chorobates | Technology,Engineering | 692 |
2,576,261 | https://en.wikipedia.org/wiki/Electron%20avalanche | An electron avalanche is a process in which a number of free electrons in a transmission medium are subjected to strong acceleration by an electric field and subsequently collide with other atoms of the medium, thereby ionizing them (impact ionization). This releases additional electrons which accelerate and collide with further atoms, releasing more electrons—a chain reaction. In a gas, this causes the affected region to become an electrically conductive plasma.
The avalanche effect was discovered by John Sealy Townsend in his work between 1897 and 1901, and is also known as the Townsend discharge.
Electron avalanches are essential to the dielectric breakdown process within gases. The process can culminate in corona discharges, streamers, leaders, or in a spark or continuous arc that completely bridges the gap between the electrical conductors that are applying the voltage. The process extends to huge sparks — streamers in lightning discharges propagate by formation of electron avalanches created in the high potential gradient ahead of the streamers' advancing tips. Once begun, avalanches are often intensified by the creation of photoelectrons as a result of ultraviolet radiation emitted by the excited medium's atoms in the aft-tip region.
The process can also be used to detect ionizing radiation by using the gas multiplication effect of the avalanche process. This is the ionisation mechanism of the Geiger–Müller tube and, to a limited extent, of the proportional counter and is also used in spark chambers and other wire chambers.
Analysis
A plasma begins with a rare natural 'background' ionization event of a neutral air molecule, perhaps as the result of photoexcitation or background radiation. If this event occurs within an area that has a high potential gradient, the positively charged ion will be strongly attracted toward, or repelled away from, an electrode depending on its polarity, whereas the electron will be accelerated in the opposite direction. Because of the huge mass difference, electrons are accelerated to a much higher velocity than ions.
High-velocity electrons often collide with neutral atoms inelastically, sometimes ionizing them. In a chain reaction, or 'electron avalanche', additional electrons, freshly separated from their positive ions by the strong potential gradient, allow a large cloud of electrons and positive ions to be momentarily generated from just a single initial electron. However, free electrons are easily captured by neutral oxygen or water vapor molecules (so-called electronegative gases), forming negative ions. In air at STP, free electrons exist for only about 11 nanoseconds before being captured. Captured electrons are effectively removed from play; they can no longer contribute to the avalanche process. If electrons are being created at a rate greater than they are being lost to capture, their number rapidly multiplies, a process characterized by exponential growth. The degree of multiplication that this process can provide is huge, up to several million-fold depending on the situation. The multiplication factor M is given by
M = e^(α(X2 − X1))
where X1 and X2 are the positions between which the multiplication is measured, and α is the ionization constant. In other words, one free electron at position X1 will result in M free electrons at position X2. Substituting the voltage gradients into this equation results in
M = 1 / (1 − (V / VBR)^n)
where V is the applied voltage, VBR is the breakdown voltage, and n is an empirically derived value between 2 and 6. As can be seen from this formula, the multiplication factor is very highly dependent on the applied voltage, and as the voltage nears the breakdown voltage of the material the multiplication factor approaches infinity, with the limiting factor becoming the availability of charge carriers.
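To see how steeply M rises as V approaches VBR, here is a minimal Python sketch of the formula above; the 500 V breakdown voltage and the choice n = 4 are illustrative only:

```python
def multiplication_factor(v: float, v_br: float, n: float = 4.0) -> float:
    """Empirical avalanche multiplication: M = 1 / (1 - (V / VBR)**n)."""
    if v >= v_br:
        raise ValueError("applied voltage at or above the breakdown voltage")
    return 1.0 / (1.0 - (v / v_br) ** n)

# M stays near 1 at low voltage and diverges near breakdown:
for v in (100.0, 300.0, 450.0, 495.0):
    print(f"V = {v:5.1f} V -> M = {multiplication_factor(v, 500.0):6.2f}")
```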
Avalanche sustenance requires a reservoir of charge to sustain the applied voltage, as well as a continual source of triggering events. A number of mechanisms can sustain this process, creating avalanche after avalanche, to create a corona current. A secondary source of plasma electrons is required as the electrons are always accelerated by the field in one direction, meaning that avalanches always proceed linearly toward or away from an electrode. The dominant mechanism for the creation of secondary electrons depends on the polarity of a plasma. In each case, the energy emitted as photons by the initial avalanche is used to ionise a nearby gas molecule creating another accelerable electron. What differs is the source of this electron. When one or more electron avalanches occur between two electrodes of sufficient size, complete avalanche breakdown can occur, culminating in an electrical spark that bridges the gap.
See also
Townsend discharge
Avalanche breakdown
Avalanche diode
Corona discharge
Multipactor
Geiger–Müller tube
Geiger counter
Spark chamber
Wire chamber
Runaway breakdown
Relativistic runaway electron avalanche
References
External links
Breakdown effects in semiconductors
Electrical breakdown | Electron avalanche | Physics | 942 |
26,764 | https://en.wikipedia.org/wiki/International%20System%20of%20Units | The International System of Units, internationally known by the abbreviation SI (from French Système international d'unités), is the modern form of the metric system and the world's most widely used system of measurement. It is the only system of measurement with official status in nearly every country in the world, employed in science, technology, industry, and everyday commerce. The SI system is coordinated by the International Bureau of Weights and Measures, which is abbreviated BIPM from the French Bureau international des poids et mesures.
The SI comprises a coherent system of units of measurement starting with seven base units, which are the second (symbol s, the unit of time), metre (m, length), kilogram (kg, mass), ampere (A, electric current), kelvin (K, thermodynamic temperature), mole (mol, amount of substance), and candela (cd, luminous intensity). The system can accommodate coherent units for an unlimited number of additional quantities. These are called coherent derived units, which can always be represented as products of powers of the base units. Twenty-two coherent derived units have been provided with special names and symbols.
The seven base units and the 22 coherent derived units with special names and symbols may be used in combination to express other coherent derived units. Since the sizes of coherent units will be convenient for only some applications and not for others, the SI provides twenty-four prefixes which, when added to the name and symbol of a coherent unit, produce twenty-four additional (non-coherent) SI units for the same quantity; these non-coherent units are always decimal (i.e. power-of-ten) multiples and sub-multiples of the coherent unit.
The current way of defining the SI is a result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology.
The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948, and is based on the metre–kilogram–second system of units (MKS) combined with ideas from the development of the CGS system.
Definition
The International System of Units consists of a set of defining constants with corresponding base units, derived units, and a set of decimal-based multipliers that are used as prefixes.
SI defining constants
The seven defining constants are the most fundamental feature of the definition of the system of units.
The magnitudes of all SI units are defined by declaring that seven constants have certain exact numerical values when expressed in terms of their SI units. These defining constants are the speed of light in vacuum c, the hyperfine transition frequency of caesium ΔνCs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant NA, and the luminous efficacy Kcd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd. The values assigned to these constants were fixed to ensure continuity with previous definitions of the base units.
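The exact numerical values fixed in the 2019 revision are easy to tabulate; the following Python snippet is merely a convenience reference (the ASCII symbol spellings are our own):

```python
# Exact values of the seven SI defining constants (2019 revision).
DEFINING_CONSTANTS = {
    "c":     (299_792_458,       "m/s"),    # speed of light in vacuum
    "Dv_Cs": (9_192_631_770,     "Hz"),     # caesium hyperfine frequency
    "h":     (6.626_070_15e-34,  "J s"),    # Planck constant
    "e":     (1.602_176_634e-19, "C"),      # elementary charge
    "k":     (1.380_649e-23,     "J/K"),    # Boltzmann constant
    "N_A":   (6.022_140_76e23,   "1/mol"),  # Avogadro constant
    "K_cd":  (683,               "lm/W"),   # luminous efficacy at 540 THz
}

for symbol, (value, unit) in DEFINING_CONSTANTS.items():
    print(f"{symbol:>5} = {value} {unit}")
```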
SI base units
The SI selects seven units to serve as base units, corresponding to seven base physical quantities. They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature); mole (mol, amount of substance); and candela (cd, luminous intensity).
The base units are defined in terms of the defining constants. For example, the kilogram is defined by taking the Planck constant to be exactly 6.62607015×10⁻³⁴ J⋅s, giving the expression in terms of the defining constants
1 kg = (h / 6.62607015×10⁻³⁴) m⁻² s
All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units. The choice of which and even how many quantities to use as base quantities is not fundamental or even unique – it is a matter of convention.
Derived units
The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit. For example, the coherent derived SI unit of velocity is the metre per second, with the symbol m/s. The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units). A useful property of a coherent system is that when the numerical values of physical quantities are expressed in terms of the units of the system, then the equations between the numerical values have exactly the same form, including numerical factors, as the corresponding equations between the physical quantities.
Twenty-two coherent derived units have been provided with special names and symbols: the radian (rad), steradian (sr), hertz (Hz), newton (N), pascal (Pa), joule (J), watt (W), coulomb (C), volt (V), farad (F), ohm (Ω), siemens (S), weber (Wb), tesla (T), henry (H), degree Celsius (°C), lumen (lm), lux (lx), becquerel (Bq), gray (Gy), sievert (Sv) and katal (kat). The radian and steradian have no base units but are treated as derived units for historical reasons.
The derived units in the SI are formed by powers, products, or quotients of the base units and are unlimited in number.
Derived units apply to some derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other. Other useful derived quantities can be specified in terms of the SI base and derived units that have no named units in the SI, such as acceleration, which has the SI unit m/s2.
A combination of base and derived units may be used to express a derived unit. For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa) – and the pascal can be defined as one newton per square metre (N/m2).
Prefixes
Like all metric systems, the SI uses metric prefixes to systematically construct, for the same physical quantity, a set of units that are decimal multiples of each other over a wide range. For example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.
The SI provides twenty-four metric prefixes that signify decimal powers ranging from 10⁻³⁰ to 10³⁰, the most recent being adopted in 2022. Most prefixes correspond to integer powers of 1000; the only ones that do not are those for 10, 1/10, 100, and 1/100.
The conversion between different SI units for one and the same physical quantity is always through a power of ten. This is why the SI (and metric systems more generally) are called decimal systems of measurement units.
The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power. It can also be combined with other unit symbols to form compound unit symbols. For example, g/cm³ is an SI unit of density, where cm³ is to be interpreted as (cm)³.
Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.
The BIPM specifies 24 prefixes for the International System of Units (SI): quetta (Q, 10³⁰), ronna (R, 10²⁷), yotta (Y, 10²⁴), zetta (Z, 10²¹), exa (E, 10¹⁸), peta (P, 10¹⁵), tera (T, 10¹²), giga (G, 10⁹), mega (M, 10⁶), kilo (k, 10³), hecto (h, 10²), deca (da, 10¹), deci (d, 10⁻¹), centi (c, 10⁻²), milli (m, 10⁻³), micro (µ, 10⁻⁶), nano (n, 10⁻⁹), pico (p, 10⁻¹²), femto (f, 10⁻¹⁵), atto (a, 10⁻¹⁸), zepto (z, 10⁻²¹), yocto (y, 10⁻²⁴), ronto (r, 10⁻²⁷), and quecto (q, 10⁻³⁰).
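Because every prefixed unit is a decimal multiple of the coherent unit, conversion between prefixes reduces to exponent arithmetic. A minimal Python sketch, with the table truncated for brevity and ASCII 'u' standing in for the micro sign:

```python
# Decimal exponents for a few SI prefixes; "" denotes the unprefixed unit.
PREFIX_EXPONENTS = {"G": 9, "M": 6, "k": 3, "": 0, "m": -3, "u": -6, "n": -9}

def rescale(value: float, from_prefix: str, to_prefix: str) -> float:
    """Re-express a value between decimal multiples of one coherent unit."""
    return value * 10.0 ** (PREFIX_EXPONENTS[from_prefix] - PREFIX_EXPONENTS[to_prefix])

print(rescale(2.5, "k", ""))     # 2.5 km in metres      -> 2500.0
print(rescale(450.0, "n", "u"))  # 450 nm in micrometres -> 0.45
```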
Coherent and non-coherent SI units
The base units and the derived units formed as the product of powers of the base units with a numerical factor of one form a coherent system of units. Every physical quantity has exactly one coherent SI unit. For example, m/s is the coherent derived unit for velocity. With the exception of the kilogram (for which the prefix kilo- is required for a coherent unit), when prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one. For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.
The kilogram is the only coherent SI unit whose name and symbol include a prefix. For historical reasons, the names and symbols for multiples and sub-multiples of the unit of mass are formed as if the gram were the base unit. Prefix names and symbols are attached to the unit name gram and the unit symbol g respectively. For example, 10⁻⁶ kg is written milligram and mg, not microkilogram and µkg.
Several different quantities may share the same coherent SI unit. For example, the joule per kelvin (symbol J/K) is the coherent SI unit for two distinct quantities: heat capacity and entropy; another example is the ampere, which is the coherent SI unit for both electric current and magnetomotive force. This illustrates why it is important not to use the unit alone to specify the quantity. As the SI Brochure states, "this applies not only to technical texts, but also, for example, to measuring instruments (i.e. the instrument read-out needs to indicate both the unit and the quantity measured)".
Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is a base unit when it is a unit of electric current, but a coherent derived unit when it is a unit of magnetomotive force.
Lexicographic conventions
Unit names
According to the SI Brochure, unit names should be treated as common nouns of the context language. This means that they should be typeset in the same character set as other common nouns (e.g. Latin alphabet in English, Cyrillic script in Russian, etc.), following the usual grammatical and orthographical rules of the context language. For example, in English and French, even when the unit is named after a person and its symbol begins with a capital letter, the unit name in running text should start with a lowercase letter (e.g., newton, hertz, pascal) and is capitalised only at the beginning of a sentence and in headings and publication titles. As a nontrivial application of this rule, the SI Brochure notes that the name of the unit with the symbol °C is correctly spelled as 'degree Celsius': the first letter of the name of the unit, 'd', is in lowercase, while the modifier 'Celsius' is capitalised because it is a proper name.
The English spelling and even names for certain SI units and metric prefixes depend on the variety of English used. US English uses the spelling deka-, meter, and liter, and International English uses deca-, metre, and litre. The name of the unit whose symbol is t and which is defined as 1 t = 1000 kg is 'metric ton' in US English and 'tonne' in International English.
Unit symbols and the values of quantities
Symbols of SI units are intended to be unique and universal, independent of the context language. The SI Brochure has specific rules for writing them.
In addition, the SI Brochure provides style conventions for among other aspects of displaying quantities units: the quantity symbols, formatting of numbers and the decimal marker, expressing measurement uncertainty, multiplication and division of quantity symbols, and the use of pure numbers and various angles.
In the United States, the guideline produced by the National Institute of Standards and Technology (NIST) clarifies language-specific details for American English that were left unclear by the SI Brochure, but is otherwise identical to the SI Brochure. For example, since 1979, the litre may exceptionally be written using either an uppercase "L" or a lowercase "l", a decision prompted by the similarity of the lowercase letter "l" to the numeral "1", especially with certain typefaces or English-style handwriting. The American NIST recommends that within the United States "L" be used rather than "l".
Realisation of units
Metrologists carefully distinguish between the definition of a unit and its realisation. The SI units are defined by declaring that seven defining constants have certain exact numerical values when expressed in terms of their SI units. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit.
For each base unit the BIPM publishes a mise en pratique (French for 'putting into practice; implementation') describing the current best practical realisations of the unit. The separation of the defining constants from the definitions of units means that improved measurements can be developed leading to changes in the mise en pratique as science and technology develop, without having to revise the definitions.
The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit". Various consultative committees of the CIPM decided in 2016 that more than one mise en pratique would be developed for determining the value of each unit. These methods include the following:
At least three separate experiments be carried out yielding values having a relative standard uncertainty in the determination of the kilogram of no more than 5×10⁻⁸, and at least one of these values should be better than 2×10⁻⁸. Both the Kibble balance and the Avogadro project should be included in the experiments and any differences between these be reconciled.
The definition of the kelvin be based on determinations of the Boltzmann constant, derived from two fundamentally different methods such as acoustic gas thermometry and dielectric constant gas thermometry, with a relative uncertainty better than one part in 10⁶, and that these values be corroborated by other measurements.
Organizational status
The International System of Units, or SI, is a decimal and metric system of units established in 1960 and periodically updated since then. The SI has an official status in most countries, including the United States, Canada, and the United Kingdom, although these three countries are among the handful of nations that, to various degrees, also continue to use their customary systems. Nevertheless, with this nearly universal level of acceptance, the SI "has been used around the world as the preferred system of units, the basic language for science, technology, industry, and trade."
The only other types of measurement system that still have widespread use across the world are the imperial and US customary measurement systems. The international yard and pound are defined in terms of the SI.
International System of Quantities
The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ).
The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear, non-contradictory equations. The ISQ defines the quantities that are measured with the SI units. The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1, and has largely been revised in 2019–2020.
Controlling authority
The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM), the International Committee for Weights and Measures (CIPM), and the International Bureau of Weights and Measures (BIPM).
All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI), which is published in French and English by the BIPM and periodically updated. The writing and maintenance of the brochure is carried out by one of the committees of the CIPM. The definitions of the terms "quantity", "unit", "dimension", etc. that are used in the SI Brochure are those given in the international vocabulary of metrology. The brochure leaves some scope for local variations, particularly regarding unit names and terms in different languages. For example, the United States' National Institute of Standards and Technology (NIST) has produced a version of the CGPM document (NIST SP 330) which clarifies usage for English-language publications that use American English.
History
CGS and MKS systems
The concept of a system of units emerged a hundred years before the SI.
In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin), and others working under the auspices of the British Association for the Advancement of Science, building on previous work of Carl Gauss, developed the centimetre–gram–second system of units, or cgs system, in 1874. This system formalised the concept of a collection of related units called a coherent system of units. In a coherent system, base units combine to define derived units without extra factors. For example, using metre per second is coherent in a system that uses metre for length and second for time, but kilometre per hour is not coherent. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity.
Metre Convention
A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called the Treaty of the Metre, by 17 nations.
The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements.
Initially the convention only covered standards for the metre and the kilogram. This became the foundation of the MKS system of units.
Giovanni Giorgi and the problem of electrical units
At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system, a CGS-based system for electromechanical units (EMU), and an International system based on units defined by the Metre Convention for electrical distribution systems.
Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties – the dimensions depended on whether one used the ESU or EMU systems. This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three base units. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.
Electric current with named unit 'ampere' was chosen as the base unit, and the other electrical quantities derived from it according to the laws of physics.
When combined with the MKS the new system, known as MKSA, was approved in 1946.
9th CGPM, the precursor to SI
In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention". This working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units: the metre, kilogram, second, ampere, degree Kelvin, and candela.
The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system when the basis of the rules as they are now known was laid down. These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.
Birth of the SI
The 10th CGPM in 1954 resolved to create an international system of units and, in 1960, the 11th CGPM adopted the International System of Units, abbreviated SI from the French name Système international d'unités, which included a specification for units of measurement.
The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system". In 1971 the mole became the seventh base unit of the SI.
2019 redefinition
After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK. During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the reliability of the entire metric system to precision measurement from small (atomic) to large (astrophysical) scales.
By avoiding the use of an artefact to define units, all issues with the loss, damage, and change of the artefact are avoided.
A proposal was made that:
In addition to the speed of light, four constants of nature – the Planck constant, an elementary charge, the Boltzmann constant, and the Avogadro constant – be defined to have exact values
The International Prototype of the Kilogram be retired
The current definitions of the kilogram, ampere, kelvin, and mole be revised
The wording of base unit definitions should change emphasis from explicit unit to explicit constant definitions.
The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019. The change was adopted by the European Union through Directive (EU) 2019/1258.
Prior to its redefinition in 2019, the SI was defined through the seven base units from which the derived units were constructed as products of powers of the base units. After the redefinition, the SI is defined by fixing the numerical values of seven defining constants. This has the effect that the distinction between the base units and derived units is, in principle, not needed, since all units, base as well as derived, may be constructed directly from the defining constants. Nevertheless, the distinction is retained because "it is useful and historically well established", and also because the ISO/IEC 80000 series of standards, which define the International System of Quantities (ISQ), specifies base and derived quantities that necessarily have the corresponding SI units.
Related units
Non-SI units accepted for use with SI
Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with SI, including the hour, minute, degree of angle, litre, and decibel.
Metric units not recognised by SI
Although the term metric system is often used as an informal alternative name for the International System of Units, other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup and the darcy that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.
Unacceptable uses
Sometimes, SI unit name variations are introduced, mixing information about the corresponding physical quantity or the conditions of its measurement; however, this practice is unacceptable with the SI. "Unacceptability of mixing information with units: When one gives the value of a quantity, any information concerning the quantity or its conditions of measurement must be presented in such a way as not to be associated with the unit."
Instances include: "watt-peak" and "watt RMS"; "geopotential metre" and "vertical metre"; "standard cubic metre"; "atomic second", "ephemeris second", and "sidereal second".
See also
Notes
Attribution
References
Further reading
Unit Systems in Electromagnetism
M. W. Keller et al., Metrology Triangle Using a Watt Balance, a Calculable Capacitor, and a Single-Electron Tunnelling Device (PDF)
"The Current SI Seen From the Perspective of the Proposed New SI" (PDF). Barry N. Taylor. Journal of Research of the National Institute of Standards and Technology, Vol. 116, No. 6, Pgs. 797–807, Nov–Dec 2011.
B. N. Taylor, Ambler Thompson, International System of Units (SI), National Institute of Standards and Technology, 2008 edition.
External links
BIPM (International Bureau of Weights and Measures) official web site
International standards
Systems of units | International System of Units | Mathematics | 5,493 |
74,091,597 | https://en.wikipedia.org/wiki/Stroke-based%20sorting | Stroke-based sorting, also called stroke-based ordering or stroke-based order, is one of the five sorting methods frequently used in modern Chinese dictionaries, the others being radical-based sorting, pinyin-based sorting, bopomofo and the four-corner method. In addition to functioning as an independent sorting method, stroke-based sorting is often employed to support the other methods. For example, in Xinhua Dictionary (新华字典), Xiandai Hanyu Cidian (现代汉语词典) and Oxford Chinese Dictionary, stroke-based sorting is used to sort homophones in Pinyin sorting, while in radical-based sorting it helps to sort the radical list, the characters under a common radical, as well as the list of characters difficult to lookup by radicals.
In stroke-based sorting, Chinese characters are ordered by different features of strokes, including stroke counts, stroke forms, stroke orders, stroke combinations, stroke positions, etc.
Stroke-count sorting
This method arranges characters according to their numbers of strokes, in ascending order. A character with fewer strokes is put before those with more strokes. For example, the different characters in "汉字笔画, 漢字筆劃" (Chinese character strokes) are sorted into "汉(5)字(6)画(8)笔(10)[筆(12)畫(12)]漢(14)", where stroke counts are put in brackets. (Note that both 筆 and 畫 have 12 strokes, so their order is not determinable by stroke-count sorting.)
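A minimal sketch of stroke-count sorting; the stroke counts are the ones quoted in the example above, while the small lookup table is only illustrative (a real implementation would use a complete stroke-count table):

```python
# Stroke counts quoted in the example above (illustrative subset only)
STROKE_COUNT = {"汉": 5, "字": 6, "画": 8, "笔": 10, "筆": 12, "畫": 12, "漢": 14}

chars = ["漢", "筆", "字", "畫", "汉", "画", "笔"]
print(sorted(chars, key=STROKE_COUNT.get))
# ['汉', '字', '画', '笔', '筆', '畫', '漢'] -- 筆 and 畫 tie at 12 strokes,
# so their relative order is not determined by stroke count alone.
```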
Stroke-count sorting was first used in Zihui to arrange the radicals and the characters under each radical when the dictionary was published in 1615.
It was also used in the Kangxi Chinese Character Dictionary when the dictionary was first compiled in the 1710s.
Stroke-count–stroke-order sorting
This is a combination of stroke-count sorting and stroke-order sorting. Characters are first arranged by stroke-counts in ascending order. Then Stroke-order sorting is employed to sort characters with the same number of strokes. The characters are firstly arranged by their first strokes according to an order of stroke form groups, such as “heng (横, ㇐), shu (竖, ㇑), pie (撇, ㇓), dian (点, ㇔), zhe (折, ㇕)”, or “dian (点), heng (横), shu (竖), pie (撇), zhe (折)”. If the first strokes of two characters belong to the same group, then sort by their second strokes in a similar way, and so on.
In our example of the previous section, both 筆 and 畫 are of 12 strokes. 筆 starts with stroke "㇓" of the pie (撇) group, and 畫 starts with "㇕" of the zhe (折) group, and pie is before zhe in the groups order, so 筆 comes before 畫. Hence the different characters in "汉字笔画, 漢字筆劃" are finally sorted into "汉(5)字(6)画(8)笔(10)筆(12㇓)畫(12㇕)漢(14)", where each character is put at its unique position.
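The same tie can be broken programmatically with a compound key of (stroke count, first-stroke form group), following the group order heng < shu < pie < dian < zhe given above; the per-character data here is limited to this example:

```python
GROUP = {"㇐": 0, "㇑": 1, "㇓": 2, "㇔": 3, "㇕": 4}  # heng, shu, pie, dian, zhe

# (character, stroke count, first stroke), taken from the example in the text
data = [("漢", 14, "㇔"), ("畫", 12, "㇕"), ("筆", 12, "㇓")]

data.sort(key=lambda c: (c[1], GROUP[c[2]]))
print([ch for ch, _, _ in data])  # ['筆', '畫', '漢']
```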
Stroke-count–stroke-order sorting was used in Xinhua Dictionary and Xiandai Hanyu Cidian before the national standard for stroke-based sorting was released in 1999.
GB stroke-based order
The Standard of GB13000.1 Character Set Chinese Character Order (Stroke-Based Order) (GB13000.1字符集汉字字序(笔画序)规范) is a standard released by the National Language Commission of China in 1999 for sorting Chinese characters by strokes. This is an enhanced version of the traditional stroke-count–stroke-order sorting.
According to this standard,
Two characters are first sorted by stroke counts.
If they are of the same stroke counts, sort by stroke order (of the five families of heng, shu, pie, dian and zhe).
If the characters are of the same stroke order, they will be sorted by the primary-secondary stroke order.
For example, 子 and 孑 each have three strokes and are written, in stroke order, ㇇㇚㇐ and ㇇㇚㇀. ㇐ and ㇀ both belong to the heng family, so there is a tie under (2). Under (3), ㇐ is considered a primary stroke and sorts before the secondary stroke ㇀. As a result, 子 sorts before 孑.
If two characters are of the same stroke count, stroke order and primary-secondary stroke, then sort them according to their modes of stroke combination. Stroke separation comes before stroke connection, and connection comes before stroke intersection.
For example, 八, 人, 乂 all have 2 strokes in the order of ㇓㇏. They sort in the order of 八, 人, 乂, because 八 has separated strokes, 人 has a simple connection, and 乂 has an intersection.
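A sketch of the first three GB13000.1 tie-breaking tiers as a compound sort key. The family assignments below (㇚ in the shu family, ㇇ in the zhe family, ㇀ as a secondary stroke of the heng family) follow the conventional five-way grouping and cover only the 子/孑 example; the standard's full tables, and the fourth tier on stroke combination, are omitted for brevity:

```python
# Family ranks: heng=0, shu=1, pie=2, dian=3, zhe=4 (partial, illustrative)
FAMILY = {"㇐": 0, "㇀": 0, "㇑": 1, "㇚": 1, "㇓": 2, "㇔": 3, "㇕": 4, "㇇": 4}
SECONDARY = {"㇀"}  # sorts after the primary strokes of its family

def gb_key(strokes):
    return (len(strokes),                          # tier 1: stroke count
            [FAMILY[s] for s in strokes],          # tier 2: five-family stroke order
            [s in SECONDARY for s in strokes])     # tier 3: primary before secondary

zi, jie = "㇇㇚㇐", "㇇㇚㇀"  # 子 and 孑, per the example above
print(gb_key(zi) < gb_key(jie))  # True: 子 sorts before 孑
```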
This standard has been employed by the new editions of Xinhua Dictionary and Xiandai Hanyu Cidian.
YES sorting
YES is a simplified stroke-based sorting method that is free of stroke counting and grouping, without compromise in accuracy. Briefly speaking, YES arranges Chinese characters according to their stroke orders and an "alphabet" of 30 strokes:
㇐ ㇕ ㇅ ㇎ ㇡ ㇋ ㇊ ㇍ ㇈ ㇆ ㇇ ㇌ ㇀ ㇑ ㇗ ㇞ ㇉ ㄣ ㇙ ㇄ ㇟ ㇚ ㇓ ㇜ ㇛ ㇢ ㇔ ㇏ ㇂
built on the basis of Unicode CJK strokes.
To compare the sort order of two characters, one expands each character into a string of strokes and compares the strings using the sort order of the 30 strokes, much as one sorts two words in a dictionary using the sort order of letters. Equivalently, one first decides whether the first strokes are sufficient to produce a sort (for example, because 汉 starts with ㇔ and 笔 starts with ㇚, 笔 sorts before 汉); if they happen to be identical, then one moves on to the second stroke (for example, 汉 expands to ㇔㇔... and 字 expands to ㇔㇑..., hence 字 sorts before 汉).
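The comparison can be written directly as lexicographic order over the stroke "alphabet" listed above (the partial stroke expansions are the ones quoted in the text; full expansions come from stroke-order tables):

```python
YES_ALPHABET = "㇐㇕㇅㇎㇡㇋㇊㇍㇈㇆㇇㇌㇀㇑㇗㇞㇉ㄣ㇙㇄㇟㇚㇓㇜㇛㇢㇔㇏㇂"
RANK = {s: i for i, s in enumerate(YES_ALPHABET)}

def yes_key(strokes: str) -> list:
    """Map a stroke string to ranks; list comparison is then lexicographic."""
    return [RANK[s] for s in strokes]

print(yes_key("㇚") < yes_key("㇔"))     # True: 笔 (starting ㇚) before 汉 (starting ㇔)
print(yes_key("㇔㇑") < yes_key("㇔㇔"))  # True: 字 (㇔㇑...) before 汉 (㇔㇔...)
```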
The YES order of the different characters in "" is "", where each character is put at its unique position.
YES sorting has been applied to the indexing of all the characters in Xinhua Zidian and Xiandai Hanyu Cidian.
Word-sorting
All of the aforementioned examples describe the sorting of single characters. To sort two words that consist of multiple characters:
Select a method for comparing two characters.
If the first character of word #1 sorts before the first character of word #2, then word #1 sorts before word #2.
Otherwise, advance character by character until a pair that sorts differently is found; if one word ends before such a pair is found, the shorter word sorts before the longer one.
This method is used in the YES-CEDICT Chinese Dictionary, using YES for character comparison.
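In Python, list comparison already implements the lexicographic rule in the list above, including the shorter-word case, so word sorting reduces to mapping each character through the chosen single-character key (stroke count is used here purely for illustration):

```python
STROKE_COUNT = {"汉": 5, "字": 6, "画": 8, "笔": 10}  # illustrative subset

def word_key(word: str) -> list:
    return [STROKE_COUNT[ch] for ch in word]

words = ["笔画", "汉字", "汉"]
print(sorted(words, key=word_key))  # ['汉', '汉字', '笔画']
```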
See also
Modern Chinese characters
References
Chinese lexicography
Chinese character collation
Chinese character components | Stroke-based sorting | Technology | 1,432 |
733,305 | https://en.wikipedia.org/wiki/Cortistatin%20%28neuropeptide%29 | Precortistatin is a protein that in humans is encoded by the CORT gene. The 105 amino acid residue human precortistatin in turn is cleaved into cortistatin-17 and cortistatin-29. Cortistatin-17 is the only active peptide derived from the precursor. Cortistatin (or more specifically cortistatin-17) is a neuropeptide that is expressed in inhibitory neurons of the cerebral cortex, and which has a strong structural similarity to somatostatin. Unlike somatostatin, when infused into the brain, it enhances slow-wave sleep. It binds to sites in the cortex, hippocampus and the amygdala.
Function
Cortistatin is a neuropeptide with strong structural similarity to somatostatin (both peptides belong to the same family). It binds to all known somatostatin receptors, and shares many pharmacological and functional properties with somatostatin, including the depression of neuronal activity. However, it also has many properties distinct from somatostatin, such as induction of slow-wave sleep, apparently by antagonism of the excitatory effects of acetylcholine on the cortex, reduction of locomotor activity, and activation of cation selective currents not responsive to somatostatin.
References
Further reading
Neuropeptides | Cortistatin (neuropeptide) | Chemistry | 295 |
34,974,335 | https://en.wikipedia.org/wiki/E.G.F. | E.G.F. (Entreprise Générale de Franceville) is a Gabonese company specialized in Civil Engineering, construction and special works in hard places.
History of E.G.F.
EGF was founded in Franceville by Benoist SENET in 2000. The company started in a difficult economic period, as the area was seeing a decline in investments and large projects. Businesses refocused on Gabon's coast, especially Port Gentil and Libreville, and EGF naturally moved to Libreville.
From then, the company developed quickly because it worked with two dynamic branches of industry: telecom and energy.
The rise in oil prices in the second half of the 2000s prompted oil companies to reinvest in old oil fields. Maurel et Prom, a French junior oil company expanding in Gabon, began to work with E.G.F.
E.G.F.'s Services and Activities
E.G.F.'s areas of focus are civil engineering, construction and special works. E.G.F. designs and builds structures such as helipads, camps, warehouses, landing stages, schools, gas pipe gutters, wellhead cellars, etc.
Notes and references
Civil engineering organizations | E.G.F. | Engineering | 259 |
162,269 | https://en.wikipedia.org/wiki/Magnesium%20oxide | Magnesium oxide (MgO), or magnesia, is a white hygroscopic solid mineral that occurs naturally as periclase and is a source of magnesium (see also oxide). It has an empirical formula of MgO and consists of a lattice of Mg2+ ions and O2− ions held together by ionic bonding. Magnesium hydroxide forms in the presence of water (MgO + H2O → Mg(OH)2), but it can be reversed by heating it to remove moisture.
Magnesium oxide was historically known as magnesia alba (literally, the white mineral from Magnesia), to differentiate it from magnesia nigra, a black mineral containing what is now known as manganese.
Related oxides
While "magnesium oxide" normally refers to MgO, the compound magnesium peroxide MgO2 is also known. According to evolutionary crystal structure prediction, MgO2 is thermodynamically stable at pressures above 116 GPa (gigapascals), and a semiconducting suboxide Mg3O2 is thermodynamically stable above 500 GPa. Because of its stability, MgO is used as a model system for investigating vibrational properties of crystals.
Electric properties
Pure MgO is not conductive and has a high resistance to electric current at room temperature. Pure MgO powder has a relative permittivity between 3.2 and 9.9, with an approximate dielectric loss of tan(δ) > 2.16×10⁻³ at 1 kHz.
Production
Magnesium oxide is produced by the calcination of magnesium carbonate or magnesium hydroxide. The latter is obtained by the treatment of magnesium chloride solutions, typically seawater, with limewater or milk of lime.
Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+
Calcining at different temperatures produces magnesium oxide of different reactivity. High temperatures (1500–2000 °C) diminish the available surface area and produce dead-burned (often called dead-burnt) magnesia, an unreactive form used as a refractory. Calcining at 1000–1500 °C produces hard-burned magnesia, which has limited reactivity, while calcining at lower temperatures (700–1000 °C) produces light-burned magnesia, a reactive form also known as caustic calcined magnesia. Although some decomposition of the carbonate to oxide occurs at temperatures below 700 °C, the resulting materials appear to reabsorb carbon dioxide from the air.
Applications
Refractory insulator
MgO is prized as a refractory material, i.e. a solid that is physically and chemically stable at high temperatures. It has the useful attributes of high thermal conductivity and low electrical conductivity. According to a 2006 reference book:
MgO is used as a refractory material for crucibles. It is also used as an insulator in heat-resistant electrical cable.
Biomedical
Among metal oxide nanoparticles, magnesium oxide nanoparticles (MgO NPs) have distinct physicochemical and biological properties, including biocompatibility, biodegradability, high bioactivity, significant antibacterial properties, and good mechanical properties, which make it a good choice as a reinforcement in composites.
Heating elements
It is used extensively as an electrical insulator in tubular construction heating elements, such as electric stove and cooktop heating elements. Several mesh sizes are available; the most commonly used are 40 and 80 mesh, per the American Foundry Society. The extensive use is due to its high dielectric strength and average thermal conductivity. MgO is usually crushed and compacted with minimal airgaps or voids.
Cement
MgO is one of the components in Portland cement in dry process plants.
Sorel cement uses MgO as the main component in combination with MgCl2 and water.
Fertilizer
MgO has an important place as a commercial plant fertilizer and as animal feed.
Fireproofing
It is a principal fireproofing ingredient in construction materials. As a construction material, magnesium oxide wallboards have several attractive characteristics: fire resistance, termite resistance, moisture resistance, mold and mildew resistance, and strength, but also a severe downside as it attracts moisture and can cause moisture damage to surrounding materials.
Medical
Magnesium oxide is used for relief of heartburn and indigestion, as an antacid, magnesium supplement, and as a short-term laxative. It is also used to improve symptoms of indigestion. Side effects of magnesium oxide may include nausea and cramping. In quantities sufficient to obtain a laxative effect, side effects of long-term use may rarely cause enteroliths to form, resulting in bowel obstruction.
Waste treatment
Magnesium oxide is used extensively in the soil and groundwater remediation, wastewater treatment, drinking water treatment, air emissions treatment, and waste treatment industries for its acid buffering capacity and related effectiveness in stabilizing dissolved heavy metal species.
Many heavy metals species, such as lead and cadmium, are least soluble in water at mildly basic conditions (pH in the range 8–11). Solubility of metals increases their undesired bioavailability and mobility in soil and groundwater. Granular MgO is often blended into metals-contaminating soil or waste material, which is also commonly of a low pH (acidic), in order to drive the pH into the 8–10 range. Metal-hydroxide complexes tend to precipitate out of aqueous solution in the pH range of 8–10.
MgO is packed in bags around transuranic waste in the disposal cells (panels) at the Waste Isolation Pilot Plant, as a getter to minimize the complexation of uranium and other actinides by carbonate ions and so to limit the solubility of radionuclides. The use of MgO is preferred over CaO since the resulting hydration product (Mg(OH)2) is less soluble and releases less hydration heat. Another advantage is that it imposes a lower pH value (about 10.5) in case of accidental water ingress into the dry salt layers, in contrast to the more soluble Ca(OH)2, which would create a higher pH of 12.5 (strongly alkaline conditions). The Mg2+ cation being the second most abundant cation in seawater and in rock salt, the potential release of magnesium ions dissolving in brines intruding the deep geological repository is also expected to minimize the geochemical disruption.
Niche uses
As a food additive, it is used as an anticaking agent. It is known to the US Food and Drug Administration for cacao products; canned peas; and frozen dessert. It has an E number of E530.
As a reagent in the installation of the carboxybenzyl (Cbz) group using benzyl chloroformate in EtOAc for the N-protection of amines and amides.
Doping MgO (about 1–5% by weight) into hydroxyapatite, a bioceramic mineral, increases the fracture toughness by migrating to grain boundaries, where it reduces grain size and changes the fracture mode from intergranular to transgranular.
Pressed MgO is used as an optical material. It is transparent from 0.3 to 7 μm. The refractive index is 1.72 at 1 μm and the Abbe number is 53.58. It is sometimes known by the Eastman Kodak trademarked name Irtran-5, although this designation is obsolete. Crystalline pure MgO is available commercially and has a small use in infrared optics.
An aerosolized solution of MgO is used in library science and collections management for the deacidification of at-risk paper items. In this process, the alkalinity of MgO (and similar compounds) neutralizes the relatively high acidity characteristic of low-quality paper, thus slowing the rate of deterioration.
Magnesium oxide is used as an oxide barrier in spin-tunneling devices. Owing to the crystalline structure of its thin films, which can be deposited by magnetron sputtering, for example, it shows characteristics superior to those of the commonly used amorphous Al2O3. In particular, spin polarization of about 85% has been achieved with MgO versus 40–60% with aluminium oxide. The value of tunnel magnetoresistance is also significantly higher for MgO (600% at room temperature and 1,100% at 4.2 K) than Al2O3 (ca. 70% at room temperature).
MgO is a common pressure transmitting medium used in high pressure apparatuses like the multi-anvil press.
Brake lining
Magnesia is used in brake linings for its heat conductivity and intermediate hardness. It helps dissipate heat from friction surfaces, preventing overheating, while minimizing wear on metal components. Its stability under high temperatures ensures reliable and durable braking performance in automotive and industrial applications.
Thin film transistors
In thin film transistors (TFTs), MgO is often used as a dielectric material or an insulator due to its high thermal stability, excellent insulating properties, and wide bandgap. Optimized IGZO/MgO TFTs demonstrated an electron mobility of 1.63 cm²/Vs, an on/off current ratio of 10⁶, and a subthreshold swing of 0.50 V/decade at −0.11 V. These TFTs are integral to low-power applications, wearable devices, and radiation-hardened electronics, contributing to enhanced efficiency and durability across diverse domains.
Historical uses
It was historically used as a reference white color in colorimetry, owing to its good diffusing and reflectivity properties. It may be smoked onto the surface of an opaque material to form an integrating sphere.
Early gas mantle designs for lighting, such as the Clamond basket, consisted mainly of magnesium oxide.
Precautions
Inhalation of magnesium oxide fumes can cause metal fume fever.
See also
Notes
References
External links
Data page at UCL
Ceramic data page at NIST
NIOSH Pocket Guide to Chemical Hazards at CDC
Magnesium minerals
Magnesium compounds
Oxides
Refractory materials
Optical materials
Ceramic materials
Antacids
E-number additives
Rock salt crystal structure | Magnesium oxide | Physics,Chemistry,Engineering | 2,119 |
63,288,122 | https://en.wikipedia.org/wiki/2-Phosphoglycolate | 2-Phosphoglycolate (chemical formula C2H2O6P3-; also known as phosphoglycolate, 2-PG, or PG) is a natural metabolic product of the oxygenase reaction mediated by the enzyme ribulose 1,5-bisphosphate carboxylase (RuBisCo).
Synthesis
RuBisCo catalyzes the fixation of atmospheric carbon dioxide in the chloroplasts of plants. It uses ribulose 1,5-bisphosphate (RuBP) as substrate and facilitates carboxylation at the C2 carbon via an endiolate intermediate. The two three-carbon products (3-phosphoglycerate) are subsequently fed into the Calvin cycle. Atmospheric oxygen competes with this reaction. In a process called photorespiration RuBisCo can also catalyze addition of atmospheric oxygen to the C2 carbon of RuBP forming a high energy hydroperoxide intermediate that decomposes into 2-phosphoglycolate and 3-phosphoglycerate. Despite a higher energy barrier for the oxygenation reaction compared to carboxylation, photorespiration accounts for up to 25% of RuBisCo turnover in C3 plants.
Biological role
Plants
In plants, 2-phosphoglycolate has a potentially toxic effect as it inhibits a number of metabolic pathways. The activities of important enzymes in the central carbon metabolism of the chloroplast such as triose-phosphate isomerase, phosphofructokinase, or sedoheptulose 1,7-bisphosphate phosphatase show a significant decrease in the presence of 2-PG. Therefore, degradation of 2-PG during photorespiration is important for cellular homeostasis.
Photorespiration is the main way for chloroplasts to rid themselves of 2-PG. However, this pathway comes at a decreased return on investment, as 2-PG is transformed to 3-phosphoglycerate in an elaborate salvage pathway at the cost of one equivalent each of NADH and ATP. In addition, this salvage pathway loses ½ equivalent of previously fixed carbon dioxide and releases ½ equivalent of toxic ammonia per molecule of 2-PG. This leads to a net loss of carbon in photorespiration, making it much less efficient than the Calvin cycle.
However, this salvage pathway can also act as a cellular energy sink, preventing the chloroplastidal electron transport chain from over reduction. It is believed that this pathway also plays a role in improving the abiotic stress response of plants.
Bacteria
2-PG is similarly a toxic product in bacteria. Bacteria remove this substance using a glycerate pathway. This shorter pathway branches off from photorespiration after the formation of glyoxylate, using glyoxylate carboligase and tartronic semialdehyde reductase to rejoin at the formation of glycerate. Some cyanobacteria can use a combination of photorespiration and glycerate pathways.
Transferring the shorter glycerate pathway into plant chloroplasts, combined with stopping chloroplastic export of glycolate, results in higher photosynthetic efficiency. In tobacco, the biomass increases by 13%, not as good a result as a designed pathway.
Animals
Although mainly produced in plants, 2-PG also plays a role in mammalian metabolism, though the source of 2-PG in mammals remains incompletely understood. It is thought that the processing of breaks in the DNA strand produces small amounts of 2-PG, but other processes may yield 2-PG as well. The phosphatase subunit of bisphosphoglycerate mutase, an enzyme found in red blood cells, shows an increase in activity of up to three orders of magnitude in the presence of 2-PG, resulting in an increase in the oxygen affinity of hemoglobin.
Agricultural significance
RuBisCo has been a potential target for bioengineers for agricultural purposes. A decrease in the oxygenation of RuBP may result in a boost in the efficiency of carbon assimilation in crops such as rice or wheat and therefore increase their net biomass production. Attempts have been made to artificially alter the protein structure of RuBisCo to enhance its catalytic turnover rate. Mutations in the L-subunit of the enzyme, for example, have been shown to increase both the catalytic turnover rate and RuBisCo's affinity for carbon dioxide.
References
Organophosphates
Organic acids | 2-Phosphoglycolate | Chemistry | 944 |
40,568,016 | https://en.wikipedia.org/wiki/Vietnamese%20numerals | Historically Vietnamese has two sets of numbers: one is etymologically native Vietnamese; the other uses Sino-Vietnamese vocabulary. In the modern language the native Vietnamese vocabulary is used for both everyday counting and mathematical purposes. The Sino-Vietnamese vocabulary is used only in fixed expressions or in Sino-Vietnamese words, in a similar way that Latin and Greek numerals are used in modern English (e.g., the bi- prefix in bicycle).
For numbers up to one million, native Vietnamese terms are used most often, whilst mixed Sino-Vietnamese and native Vietnamese words are used for units of one million or above.
Concept
For non-official purposes prior to the 20th century, Vietnamese had a writing system known as Hán-Nôm. Sino-Vietnamese numbers were written in chữ Hán and native vocabulary was written in chữ Nôm. Hence, there are two concurrent systems in Vietnamese today in the romanized script, one for native Vietnamese and one for Sino-Vietnamese.
In the modern Vietnamese writing system, numbers are written as Arabic numerals or in the romanized script chữ Quốc ngữ (một, hai, ba), each of which formerly had a chữ Nôm character. Less common for numbers under one million are the numbers of Sino-Vietnamese origin (nhất [1], nhị [2], tam [3]), which use chữ Hán (Chinese characters). Chữ Hán and chữ Nôm have all but become obsolete in the Vietnamese language, as the Latin-based reading, writing, and pronunciation of both native Vietnamese and Sino-Vietnamese became widespread after France occupied Vietnam. Chữ Hán can still be seen in traditional temples, in traditional literature, and on cultural artefacts. The Hán-Nôm Institute resides in Hanoi, Vietnam.
Basic figures
The following table is an overview of the basic Vietnamese numeric figures, provided in both native and Sino-Vietnamese counting systems. The form that is highlighted in green is the most widely used in all purposes whilst the ones highlighted in blue are seen as archaic but may still be in use. There are slight differences between the Hanoi and Saigon dialects of Vietnamese, readings between each are differentiated below.
Some other features of Vietnamese numerals include the following:
Outside of fixed Sino-Vietnamese expressions, Sino-Vietnamese words are usually used in combination with native Vietnamese words. For instance, mười triệu combines native mười and Sino-Vietnamese triệu.
Modern Vietnamese separates place values in thousands instead of myriads. For example, "123123123" is recorded in Vietnamese as một trăm hai mươi ba triệu một trăm hai mươi ba nghìn một trăm hai mươi ba, or 123 million, 123 thousand and 123 (a short sketch of this grouping follows this list). Meanwhile, in Chinese, Japanese and Korean, the same number is rendered in terms of myriads, as 1億2312萬3123 (1 hundred-million, 2312 ten-thousand and 3123).
Sino-Vietnamese numbers are not in frequent use in modern Vietnamese. Sino-Vietnamese numbers such as vạn (萬) 'ten thousand', ức (億) 'hundred-thousand' and triệu (兆) 'million' are used for figures exceeding one thousand, but with the exception of triệu are becoming less commonly used. Number values for these words follow each numeral increasing tenfold in digit value: ức is the number for 10^5, triệu for 10^6, et cetera. However, triệu in Vietnamese and 兆 in Modern Chinese now have different values.
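A minimal sketch of the thousands-based grouping described above (the scale words nghìn and triệu appear in this article; tỷ for 10⁹ is an added assumption, and the sketch does not attempt full number-to-words conversion):

```python
def group_thousands(n: int) -> list:
    """Split a number into 3-digit groups paired with Vietnamese scale words."""
    scales = ["", "nghìn", "triệu", "tỷ"]  # 10^0, 10^3, 10^6, 10^9 (tỷ assumed)
    groups = []
    while n > 0:
        groups.append((n % 1000, scales[len(groups)]))
        n //= 1000
    return list(reversed(groups))

print(group_thousands(123123123))
# [(123, 'triệu'), (123, 'nghìn'), (123, '')] -> "123 triệu 123 nghìn 123"
```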
Other figures
When the number 1 appears after 20 in the unit digit, the pronunciation changes to mốt.
When the number 4 appears after 20 in the unit digit, it is more common to use the Sino-Vietnamese tư (四/𦊛).
When the number 5 appears after 10 in the unit digit, the pronunciation changes to lăm, or in some Northern dialects, nhăm (𠄶).
When mười appears after 20, the pronunciation changes to mươi.
Ordinal numbers
Vietnamese ordinal numbers are generally preceded by the prefix thứ, a Sino-Vietnamese word corresponding to 次. For the ordinal numbers of one and four, the Sino-Vietnamese readings nhất (一) and tư (四/𦊛) are more commonly used; two is occasionally rendered using the Sino-Vietnamese nhị (二). In all other cases, the native Vietnamese number is used.
In formal cases, the ordinal number with the structure "đệ (第) + Sino-Vietnamese number" is used, especially in naming the generations of monarchs; for example, Queen Elizabeth II is called Nữ vương Elizabeth đệ nhị (女王 Elizabeth 第二), and the Second Spanish Republic is called Đệ nhị Cộng hoà Tây Ban Nha (第二共和西班牙).
Footnotes
See also
Japanese numerals, Korean numerals, Chinese numerals
Numerals
Vietnamese language | Vietnamese numerals | Mathematics | 919 |
2,662,177 | https://en.wikipedia.org/wiki/N-Oxoammonium%20salt | N-Oxoammonium salts are a class of organic compounds with the formula [R1R2=O]X−. The cation [R1R2=O] is of interest for the dehydrogenation of alcohols. Oxoammonium salts are diamagnetic, whereas the nitroxide has a doublet ground state. A prominent N-oxoammonium salt is prepared by oxidation of (2,2,6,6-tetramethylpiperidin-1-yl)oxyl, commonly referred to as [TEMPO]+. A less expensive analogue is Bobbitt's salt.
Structure and bonding
Oxoammonium cations are isoelectronic with carbonyls and structurally related to aldoximes (hydroxylamines), and aminoxyl (nitroxide) radicals, with which they can interconvert via a series of redox steps. According to X-ray crystallography, the N–O distance in [TEMPO]BF4 is 1.184 Å, 0.1 Å shorter than the N–O distance of 1.284 Å in the charge-neutral TEMPO. Similarly, the N in [TEMPO]+ is nearly planar, but the O moves 0.1765 Å out of the plane in the neutral TEMPO.
The N-oxoammonium salts are used for oxidation of alcohols to carbonyl groups, as well as other forms of oxoammonium-catalyzed oxidations. The nitroxyl TEMPO reacts via its N-oxoammonium salt.
See also
Nitrone – structurally related, the N-oxide of an imine
References
Functional groups
Oxycations | N-Oxoammonium salt | Chemistry | 354 |
27,405,777 | https://en.wikipedia.org/wiki/ClearCube | ClearCube is a computer systems manufacturer based in Austin, Texas, owned by parent company ClearCube Holdings. The company became known for its blade PC products; it has since expanded its offerings to include desktop virtualization and VDI. It was founded in 1997 by Andrew Heller (former IBM Fellow) and Barry Thornton as Vicinity Systems.
In 2005, ClearCube derived about a third of its revenue from virtual infrastructure products sold into the financial services sector, with the majority of the rest of the revenue coming from customers in the health-care and government sectors. Since 2005, ClearCube has continued to focus on virtualization-capable hardware and management software, which has led to strong revenue growth. In 2011, the company announced 50% year-over-year revenue growth due to the strong performance of its virtual desktop products.
In 2011, ClearCube acquired Dallas-based Network Elites. The acquisition brought roughly 25 additional employees to the company and expanded ClearCube's Cloud services capabilities.
Partnerships
Until 2005, IBM was a reseller of the entire product line of ClearCube. Afterwards, IBM bundled some of its own hardware with ClearCube's software, and also diversified its software offering to include Citrix and VMware products. When IBM sold its PC division to Lenovo, the latter also began reselling ClearCube blades. Other major PC manufacturers, like HP, also began to compete in the blade PC niche around this time. Other resellers of ClearCube products included Hitachi and SAIC.
In 2008, ClearCube spun off its software division as VDIworks, and while VDIworks has developed additional OEM relationships, the two companies remain closely associated in OEM partnership, and share the same investors and owners. In January 2008, ClearCube also introduced products implementing Teradici's PC-over-IP protocol, including two dual DVI thin clients, the I9420 I/Port and C7420 C/Port, which connect to the blades using copper-based and fiber-optic Ethernet, respectively.
References
Further reading
Michael Kanellos (September 20, 2002) Start-up brings 'blades' to the desktop, CNET News
Shelley Solheim (June 8, 2006) ClearCube readies new PC blade tools, InfoWorld
Oliver Rist, February 4, 2005 ClearCube makes good blade system even better, InfoWorld
Computer companies of the United States
Computer hardware companies
Computer systems companies
Centralized computing
Thin clients | ClearCube | Technology | 509 |
3,025,266 | https://en.wikipedia.org/wiki/Ternary%20computer | A ternary computer, also called trinary computer, is one that uses ternary logic (i.e., base 3) instead of the more common binary system (i.e., base 2) in its calculations. Ternary computers use trits, instead of binary bits.
Types of states
Ternary computing deals with three discrete states, but the ternary digits themselves can be defined differently:
Ternary quantum computers use qutrits rather than trits. A qutrit is a quantum state that is a complex unit vector in three dimensions, which can be written as a|0⟩ + b|1⟩ + c|2⟩ in the bra-ket notation. The labels given to the basis vectors (|0⟩, |1⟩, |2⟩) can be replaced with other labels, for example those given above.
History
One early calculating machine, built entirely from wood by Thomas Fowler in 1840, operated in balanced ternary. The first modern, electronic ternary computer, Setun, was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers that eventually replaced it, such as lower electricity consumption and lower production cost. In 1970 Brusentsov built an enhanced version of the computer, which he called Setun-70. In the United States, the ternary computing emulator Ternac working on a binary machine was developed in 1973.
The ternary computer QTC-1 was developed in Canada.
Balanced ternary
Ternary computing is commonly implemented in terms of balanced ternary, which uses the three digits −1, 0, and +1. The negative value of any balanced ternary digit can be obtained by replacing every + with a − and vice versa. It is easy to subtract a number by inverting the + and − digits and then using normal addition. Balanced ternary can express negative values as easily as positive ones, without the need for a leading negative sign as with unbalanced numbers. These advantages make some calculations more efficient in ternary than binary. Since digit signs are mandatory and nonzero digits have magnitude 1 only, a notation that drops the 1's and uses only zeros and the + and − signs is more concise than one that includes the 1's.
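A brief sketch of conversion to balanced ternary in the sign-only notation just described, together with negation by digit swap (illustrative code, not tied to any particular machine):

```python
def to_balanced_ternary(n: int) -> str:
    """Return n as balanced-ternary digits '+', '0', '-', most significant first."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3              # remainder in {0, 1, 2}
        if r == 2:             # digit value -1, carry 1 into the next place
            digits.append("-")
            n = n // 3 + 1
        else:
            digits.append("+" if r == 1 else "0")
            n //= 3
    return "".join(reversed(digits))

def negate(bt: str) -> str:
    """Negate a balanced-ternary string by swapping '+' and '-'."""
    return bt.translate(str.maketrans("+-", "-+"))

print(to_balanced_ternary(5))          # '+--'  (9 - 3 - 1)
print(negate(to_balanced_ternary(5)))  # '-++'  (-9 + 3 + 1 = -5)
```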
Unbalanced ternary
Ternary computing can be implemented in terms of unbalanced ternary, which uses the three digits 0, 1, and 2. The digits 0 and 1 work as in an ordinary binary computer, while the third state, 2, is represented using leakage current.
The world's first unbalanced ternary semiconductor design on a large wafer was implemented by the research team led by Kim Kyung-rok at Ulsan National Institute of Science and Technology in South Korea, which will help development of low power and high computing microchips in the future. This research theme was selected as one of the future projects funded by Samsung in 2017, published on July 15, 2019.
Potential future applications
With the advent of mass-produced binary components for computers, ternary computers have diminished in significance. However, Donald Knuth argues that they will be brought back into development in the future to take advantage of ternary logic's elegance and efficiency. One possible way this could happen is by combining an optical computer with the ternary logic system. A ternary computer using fiber optics could use dark as 0 and two orthogonal polarizations of light as +1 and −1.
The Josephson junction has been proposed as a balanced ternary memory cell, using circulating superconducting currents, either clockwise, counterclockwise, or off. "The advantages of the proposed memory circuit are capability of high speed computation, low power consumption and very simple construction with fewer elements due to the ternary operation."
Ternary computing shows promise for implementing fast large language models (LLMs) and potentially other AI applications, in lieu of floating point arithmetic.
In popular culture
In Robert A. Heinlein's novel Time Enough for Love, the sapient computers of Secundus, the planet on which part of the framing story is set, including Minerva, use an unbalanced ternary system. Minerva, in reporting a calculation result, says "three hundred forty one thousand six hundred forty... the original ternary readout is unit pair pair comma unit nil nil comma unit pair pair comma unit nil nil point nil".
Modern research
With the emergence of carbon nanotube transistors, many researchers have shown interest in designing ternary logic gates using them. Between 2020 and 2024, more than 1,000 papers on this subject were published on IEEE Xplore.
See also
Flip-flap-flop – ternary variant of a flip-flop
References
Further reading
External links
The ternary calculating machine of Thomas Fowler
3niti – Collaboration for Open Ternary Computer Development
Development of ternary computers at Moscow State University
Tunguska – Ternary Operating System emulator
Triador: a ternary computer with 600 ternary multiplexers
5500FP - modern ternary CPU
Classes of computers
Russian inventions
Soviet inventions | Ternary computer | Technology | 1,021 |
77,395,174 | https://en.wikipedia.org/wiki/Pellet%20%28steel%20industry%29 | Pellets are a processed form of iron ore utilized in the steel industry, specifically designed for direct application in blast furnaces or direct reduction plants. These pellets are spherical in shape, with diameters ranging from 8 to 18 millimeters.
The production of iron ore pellets involves several steps, including grinding the ore, mixing it with binders, and then forming and heating the pellets. The iron content of the pellets generally ranges from 62% to 66%. This enrichment process improves the iron concentration and imparts specific chemical and mechanical properties that enhance the efficiency of steel production.
History
The pelletizing of powdered iron ores was first introduced at the end of the nineteenth century, utilizing tar as a binding agent, comprising 1% by weight. This method involved firing the mixture in a rotating drum to create pellets suitable for blast furnaces, while also facilitating the removal of undesirable elements such as sulfur and arsenic through the emitted fumes.
During this period, pellet sintering developed alongside grate sintering as an alternative process to address the agglomeration challenges faced by high-quality iron ore products. The concept of pellet agglomeration was initially patented by A. Anderson in Sweden in 1912, followed by a similar patent in Germany in 1913. The resultant product was named "GEROELL", derived from the German word for "rolling." Pellets produced through this method demonstrated faster reduction rates compared to calibrated ores and agglomerates made from the same feedstock. In 1926, an industrial pilot plant was constructed by Krupp in Rheinhausen to explore the potential of this pelletizing technology. However, the plant was later dismantled to make way for the installation of a large-scale grate sintering line, which emerged as a competing process in the industry.
Pellet sintering has remained a viable method for processing iron ore. In the United States, this technique was employed to process fine concentrates from the Mesabi Range during World War II. This was necessary as naturally rich iron ores (containing over 50% iron) were being depleted. The development of pelletizing fine magnetite ores, typically less than 44 μm in size and around 85% iron, began around 1943 with support from the University of Minnesota. The process was later adopted in Europe, particularly in Sweden, to facilitate the production of pre-reduced iron ore.
Pellet production saw substantial growth between 1960 and 1980 but eventually plateaued at approximately 300 million tons annually. The following data illustrates pellet production over several years:
In 1984, global pellet production reached 189 million tons, with North America producing 90 million tons, the USSR 63 million tons, and other regions 36 million tons.
By 1992, production had increased to 264 million tons.
In 2008, production further rose to 313 million tons.
However, in 2009, production decreased to 215 million tons due to the economic crisis.
In 2010, production rebounded to 388 million tons.
Production
Pellets are produced directly at the extraction site by mining companies and are marketed as a distinct product, unlike agglomerates which are typically manufactured at blast furnace sites through the mixing of iron ores from various sources. Pellets are generally more robust and better suited to handling compared to agglomerates, which are relatively fragile. The production process for pellets can vary significantly depending on the local characteristics of the iron ore, and some facilities may include additional stages, such as arsenic removal. The pellet production process involves several key stages:
Crushing: The iron ore is first finely crushed to separate the valuable iron ore from non-valuable gangue materials.
Enrichment: Depending on the ore's characteristics, enrichment is achieved through grinding (which can be conducted in multiple phases and may use either dry or wet methods) and by employing magnetic separation and flotation techniques.
Blending: The ore concentrate may be mixed with additives to achieve the desired chemical composition. Common additives include dolomite, olivine, and quartzite, which typically account for 3 to 3.5% of the pellet's weight.
Binding: To ensure cohesion during the pelletizing process, an additional binder, usually wet bentonite combined with maize flour or polyacrylamide, is added.
These processes ensure that the pellets are produced to meet specific quality standards and can withstand the demands of handling and transportation.
The ore concentrate is formed into pellets through a compaction process. This can be performed using various types of mixing equipment, though saucers are the most commonly employed tool. Before being subjected to sintering, the pellets are referred to as "green" or "raw" pellets, and their typical diameter ranges from 5 to 20 mm.
Following pellet formation, they are either sent to a consumption plant or directed to a cooking oven. Due to their inherent fragility, which persists despite the binder used, pellets are generally more suitable for processing in a cooking oven rather than a consumption plant. After cooking, the pellets are cooled.
The cooking process involves passing the pellets through a chain of contiguous ovens, where they are heated to temperatures of up to 1,200°C. This can be achieved using different methods: a straight grate process for a single, uninterrupted chain or a grate kiln process that includes a rotating cooling tray at the end of the chain. The required heat for this process is supplied by burners, which can either add fuel to the ore concentrate or facilitate the oxidation of the ore, depending on the specific type of ore being processed.
Benefits and limitations
Benefits
Pelletizing ore enhances the efficiency of blast furnaces and direct reduction plants by providing several advantages over raw iron ore:
Handling Resistance: Pellets are more resilient to handling, including in wet conditions, and do not cause clogging in storage hoppers.
Uniform Composition: The consistent and known composition of pellets facilitates a more streamlined process for converting them into iron.
Optimal Porosity: The porosity of pellets enables effective gas-solid chemical reactions within the furnace. This porosity helps maintain the material’s mechanical strength and chemical reactivity, even in the furnace’s highest temperature zones.
Efficient Reduction: The controlled oxidation state of iron oxides in pellets allows carbon monoxide to more effectively reduce Fe2O3 compared to less oxidized compounds like Fe3O4.
Pellets generally contain a higher iron content than agglomerated ore, leading to increased plant productivity and reduced fuel consumption. They are also more durable and capable of withstanding repeated handling. Despite their higher cost—typically about 70% more than raw ore—the benefits they offer in terms of efficiency and performance justify the expense. In steelmaking, pellets are often mixed with sinter in varying proportions to optimize the process.
Similar to sinter, the high-temperature roasting and sintering of pellets effectively eliminate undesirable elements such as sulfur. It is also an efficient method for removing zinc, which can otherwise hinder the operation of blast furnaces. With a vaporization temperature of 907°C, zinc is effectively removed during the roasting process, making pelletizing a suitable method for this application.
Limitations
Pellets are vulnerable to sulfur-induced damage during the reduction process in blast furnaces. Even low levels of sulfur dioxide (SO₂) can interfere with furnace operations, with effects observed at concentrations as low as 5 to 50 parts per million (ppm) in the reduction gas. The detailed mechanism behind this issue was only fully understood towards the end of the 20th century. Initially, sulfur accelerates the extraction of oxygen from the iron oxide, but this effect reverses once metallic iron begins to form, significantly slowing the oxygen extraction process. This unusual behavior is attributed to sulfur's strong affinity for the metallic iron that forms on the pellet surface, which inhibits the penetration of carbon.
Furthermore, the reaction between wustite (FeO) and carbon monoxide (CO) occurs not only on the surface of FeO but also beneath the surface of the reduced iron. Due to iron's superior absorption characteristics, a substantial portion of gas transport happens at the iron/iron oxide phase boundary. This process depends on the iron's ability to absorb sufficient carbon (carburization). If sulfur obstructs carbon absorption, reduction is limited to the surface of the iron oxide. This restriction results in the formation of elongated, fibrous iron crystals, as iron crystallization can only proceed in the direction of the reducing iron oxide. Consequently, the structure of the granules becomes reinforced and can expand to two or three times their original volume. This expansion, or "swelling," of the granules can lead to blockage or significant damage to the blast furnace, highlighting the challenges associated with using pellets in blast furnace operations.
Composition
Pellets, similar to agglomerates, are classified based on their chemical properties as either acidic or basic. To determine the complete basicity index (ic), the following ratio of mass concentrations is used:

i_c = \frac{CaO + MgO}{SiO_2 + Al_2O_3}
This ratio helps in assessing the relative basicity of the pellets, which is important for optimizing their use in blast furnaces and other metallurgical processes.
In practice, a simplified basicity index (i) is commonly used to classify pellets based on their chemical properties. This index is calculated using the ratio of calcium oxide (CaO) to silicon dioxide (SiO2):
i = \frac{CaO}{SiO_2}
Pellets with an index (i) less than 1 are classified as acidic.
Pellets with an index (i) greater than 1 are categorized as basic.
Pellets with an index (i) equal to 1 are referred to as self-melting.
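As a small illustration of this classification (a sketch; the thresholds come from the definitions above, while the sample figures are the 1990s US acid-pellet data quoted in the Acid pellets section below):

```python
def classify_pellet(cao: float, sio2: float) -> str:
    """Classify a pellet by the simplified basicity index i = CaO / SiO2 (mass %)."""
    i = cao / sio2
    if i < 1:
        return "acidic"
    if i > 1:
        return "basic"
    return "self-melting"

# 1990s US acid pellets: 0.2% CaO, 4.8% SiO2 -> CaO/SiO2 ratio of about 0.04
print(classify_pellet(0.2, 4.8))  # -> 'acidic'
```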
Pellets can contain high levels of hematite, but the proportion must be controlled. Excessive hematite can weaken the pellet structure during reduction, leading to the pellets breaking down into dust under the weight of stacked charges. This is due to the fact that a high hematite content can cause the pellets to disintegrate, compromising their integrity and usability in the reduction process.
Acid pellets
Acid pellets are produced without the addition of additives, resulting in a specific chemical composition. Typically, the composition of acid pellets is as follows: 2.2% SiO2 and 0.2% CaO. In the United States during the 1990s, the typical characteristics of acid pellets were:
Chemical Composition: 66% Fe, 4.8% SiO2, 0.2% MgO, and a CaO/SiO2 ratio of 0.04.
Compressive Strength: 250 kg.
ISO Reducibility: 1.0.
Swelling Ratio: 16%.
Softening Temperature: 1290°C, with a difference of 230°C between the softening and melting temperatures.
Unlike agglomerated ores, which may include basic fluxes like silicates in the binder during pelletizing, acid pellets maintain their acidic composition due to their solid spherical shape. This design helps preserve their mechanical properties and reduces the risk of disintegration.
Acid pellets exhibit notable mechanical strength with a crush resistance exceeding 250 kg per pellet. However, their reducibility could be improved. Additionally, they are prone to swelling when exposed to lime, especially when the basicity index (i = CaO / SiO2) exceeds 0.25, which may potentially cause issues in a blast furnace.
Self-melting pellets
Self-melting pellets, also known as basic pellets, are a type of iron ore pellet that was developed in the United States in the 1990s. These pellets are designed for use in blast furnaces and are produced by adding lime (calcium oxide) and magnesia (magnesium oxide) to iron ore concentrate, enhancing their metallurgical properties. Self-melting pellets typically have the following properties:
Iron (Fe) content: 63%
Silicon dioxide (SiO2) content: 4.2%
Magnesium oxide (MgO) content: 1.6%
Calcium oxide to silicon dioxide ratio (CaO/SiO2): 1.10
Compressive strength: 240 kg per pellet
ISO reducibility: 1.2
Expansion ratio: 15%
Softening temperature: 1,440°C, with a difference of 80°C between the softening and melting temperatures
These pellets are recognized for their high compressive strength and ease of reduction, making them well-suited for blast furnace operations. The production process of self-melting pellets involves incorporating limestone into the iron ore concentrate. This inclusion reduces the productivity of pellet plants because the calcination of limestone is endothermic. As a result, the overall productivity of the pellet plant can decrease by approximately 10 to 15% compared to the production of acid pellets, which do not include lime. Self-melting pellets are appreciated for their enhanced performance in blast furnaces but require consideration of the trade-offs in production efficiency.
Pellets with low silica content
These pellets are designed for use in direct reduction plants. The typical composition of the pellets includes: 67.8% iron (Fe), 1.7% silicon dioxide (SiO2 ), 0.40% aluminum oxide (Al2O3), 0.50% calcium oxide (CaO), 0.30% magnesium oxide (MgO), and 0.01% phosphorus (P).
Low-silica pellets, when doped with lime, can self-fuse. A typical composition for these self-fusing pellets is: 65.1% iron (Fe), 2.5% silicon dioxide (SiO2), 0.45% aluminum oxide (Al2O3 ), 2.25% calcium oxide (CaO), 1.50% magnesium oxide (MgO), and 0.01% phosphorus (P).
Other types of pellets
To cater to specific customer needs, manufacturers have developed alternative pellet types that offer distinct properties and performance characteristics:
Self-Reducing Pellets: Self-reducing pellets are composed of iron ore and coal, which serve as an internal reducing agent during smelting. This design allows the pellets to undergo reduction without the need for additional reducing materials, enhancing efficiency in certain metallurgical processes.
Magnesian Pellets: Magnesian pellets are created by adding minerals such as olivine or serpentine, which increase the magnesia (MgO) content to approximately 1.5%. These pellets are characterized by their balanced performance in blast furnaces, with an average cold crush resistance of around 180 kg per pellet. The added magnesia helps improve the metallurgical properties of the pellets, making them suitable for specific reduction conditions.
These alternative pellet types are designed to address different operational requirements and enhance the flexibility of iron-making processes.
Notes
References
Bibliography
Related articles
Agglomerate (steel industry)
Metallurgy
Minerals
Steel industry | Pellet (steel industry) | Chemistry,Materials_science,Engineering | 3,151 |
930,128 | https://en.wikipedia.org/wiki/Directive%20%28programming%29 | In computer programming, a directive or pragma (from "pragmatic") is a language construct that specifies how a compiler (or other translator) should process its input. Depending on the programming language, directives may or may not be part of the grammar of the language and may vary from compiler to compiler. They can be processed by a preprocessor to specify compiler behavior, or function as a form of in-band parameterization.
In some cases directives specify global behavior, while in other cases they only affect a local section, such as a block of programming code. In some cases, such as some C programs, directives are optional compiler hints and may be ignored, but normally they are prescriptive and must be followed. However, a directive does not perform any action in the language itself, but rather only a change in the behavior of the compiler.
This term could be used to refer to proprietary third-party tags and commands (or markup) embedded in code that result in additional executable processing that extend the existing compiler, assembler and language constructs present in the development environment. The term "directive" is also applied in a variety of ways that are similar to the term command.
The C preprocessor
In C and C++, the language supports a simple macro preprocessor. Source lines that should be handled by the preprocessor, such as #define and #include are referred to as preprocessor directives.
Syntactic constructs similar to C's preprocessor directives, such as C#'s #if, are also typically called "directives", although in these cases there may not be any real preprocessing phase involved.
All preprocessor commands begin with a hash symbol (#) with the exception of the import and module directives in C++.
History
Directives date to JOVIAL.
COBOL had a COPY directive.
In ALGOL 68, directives are known as pragmats (from "pragmatic"), and denoted pragmat or pr; in newer languages, notably C, this has been abbreviated to "pragma" (no 't').
A common use of pragmats in ALGOL 68 is in specifying a stropping regime, meaning "how keywords are indicated". Various such directives follow, specifying the POINT, UPPER, RES (reserved), or quote regimes. Note the use of stropping for the pragmat keyword itself (abbreviated pr), either in the POINT or quote regimes:
.PR POINT .PR
.PR UPPER .PR
.PR RES .PR
'pr' quote 'pr'
Today directives are best known in the C language, of early 1970s vintage, and continued through the current C99 standard, where they are either instructions to the C preprocessor, or, in the form of #pragma, directives to the compiler itself. They are also used to some degree in more modern languages; see below.
Other languages
In Ada, compiler directives are called pragmas (short for "pragmatic information").
In Common Lisp, directives are called declarations, and are specified using the declare construct (also proclaim or declaim). With one exception, declarations are optional, and do not affect the semantics of the program. The one exception is special, which must be specified where appropriate.
In Turbo Pascal, directives are called significant comments, because in the language grammar they follow the same syntax as comments. In Turbo Pascal, a significant comment is a comment whose first character is a dollar sign and whose second character is a letter; for example, the equivalent of C's #include "file" directive is the significant comment {$I "file"}.
In Perl, the keyword "use", which imports modules, can also be used to specify directives, such as use strict; or use utf8;.
Haskell pragmas are specified using a specialized comment syntax, e.g. {-# INLINE foo #-}.
PHP uses the directive declare(strict_types=1).
Python has two directives – from __future__ import feature (defined in PEP 236 -- Back to the __future__), which changes language features (and uses the existing module import syntax, as in Perl), and the coding directive (in a comment) to specify the encoding of a source code file (defined in PEP 263 -- Defining Python Source Code Encodings). A more general directive statement was proposed and rejected in PEP 244 -- The `directive' statement; these all date to 2001.
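A minimal runnable sketch combining the two Python directive forms just described (the module body itself is an illustrative assumption, not taken from the PEPs):

```python
# coding: utf-8
# The coding comment above is the PEP 263 encoding directive; it must appear
# on the first or second line of the file. The __future__ import below uses
# the mechanism described in PEP 236 and must precede all other statements.
from __future__ import annotations  # enables postponed evaluation of annotations

def greet(name: str) -> str:
    return "héllo, " + name  # non-ASCII literal, hence the encoding directive

print(greet("world"))
```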
ECMAScript also adopts the use syntax for directives, with the difference that pragmas are declared as string literals (e.g. "use strict";, or "use asm";), rather than a function call.
In Visual Basic, the keyword "Option" is used for directives:
Option Explicit On|Off - When on disallows implicit declaration of variables at first use requiring explicit declaration beforehand.
Option Compare Binary - Results in string comparisons based on a sort order derived from the internal binary representations of the characters - e.g. for the English/European code page (ANSI 1252) A < B < E < Z < a < b < e < z < À < Ê < Ø < à < ê < ø. Affects intrinsic operators (e.g. =, <>, <, >), the Select Case block, and VB runtime library string functions (e.g. InStr).
Option Compare Text - Results in string comparisons based on a case-insensitive text sort order determined by your system's locale - e.g. for the English/European code page (ANSI 1252) (A=a) < (À = à) < (B=b) < (E=e) < (Ê = ê) < (Z=z) < (Ø = ø). Affects intrinsic operators (e.g. =, <>, <, >), the Select Case block, and VB runtime library string functions (e.g. InStr).
Option Strict On|Off - When on disallows:
typeless programming - where declarations which lack an explicit type are implicitly typed as Object.
late-binding (i.e. dynamic dispatch to CLR, DLR, and COM objects) on values statically typed as Object.
implicit narrowing conversions - requiring all conversions to narrower types (e.g. from Long to Integer, Object to String, Control to TextBox) be explicit in code using conversion operators (e.g. CInt, DirectCast, CType).
Option Infer On|Off - When on enables the compiler to infer the type of local variables from their initializers.
In Ruby, interpreter directives are referred to as pragmas and are specified by top-of-file comments that follow a key: value notation. For example, coding: UTF-8 indicates that the file is encoded via the UTF-8 character encoding.
In C#, compiler directives are called pre-processing directives. There are a number of different compiler directives including #pragma, which is specifically used to control compiler warnings and debugger checksums.
The SQLite DBMS includes a PRAGMA directive that is used to introduce commands specific to SQLite that are not compatible with other DBMSs.
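For example, a PRAGMA statement is issued through the ordinary query interface of SQLite's C API (a minimal sketch; the cache_size value is an arbitrary illustration and error handling is omitted):

#include <sqlite3.h>

/* A PRAGMA configures the SQLite engine itself rather than
   querying or modifying table data. */
int set_cache(sqlite3 *db) {
    return sqlite3_exec(db, "PRAGMA cache_size = 2000;", 0, 0, 0);
}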
In Solidity, compiler directives are called pragmas, and are specified using the pragma keyword.
Assembly language
In assembly language, directives, also referred to as pseudo-operations or "pseudo-ops", generally specify such information as the target machine, mark separations between code sections, define and change assembly-time variables, define macros, designate conditional and repeated code, define reserved memory areas, and so on. Some, but not all, assemblers use a specific syntax to differentiate pseudo-ops from instruction mnemonics, such as prefacing the pseudo-op with a period; for example, the pseudo-op .END might direct the assembler to stop assembling code.
PL/SQL
Oracle Corporation's PL/SQL procedural language includes a set of compiler directives, known as "pragmas".
See also
Footnotes
References
External links
OpenMP Website
OpenACC Website
OpenHMPP Website
Computer programming | Directive (programming) | Technology,Engineering | 1,766 |
12,671,968 | https://en.wikipedia.org/wiki/Xibornol | Xibornol is a lipophilic substance with antiseptic properties, mainly used in Italy and Spain. It is primarily administered to the throat as a spray mouthwash. As of 2007, all approved forms are water-based suspensions.
The drug was discovered in the 1970s.
References
Antibiotics
Phenols | Xibornol | Biology | 65 |
47,393,255 | https://en.wikipedia.org/wiki/Ring%20vaccination | Ring vaccination is a strategy to inhibit the spread of a disease by vaccinating those who are most likely to be infected.
This strategy vaccinates the contacts of confirmed patients, and people who are in close contact with those contacts. This way, everyone who has been, or could have been, exposed to a patient receives the vaccine, creating a 'ring' of protection that can limit the spread of a pathogen.
Ring vaccination requires thorough and rapid surveillance and epidemiologic case investigation. The Intensified Smallpox Eradication Program used this strategy with great success in its efforts to eradicate smallpox in the latter half of the 20th century.
Medical use
When someone falls ill, people they might have infected should be vaccinated. Contacts who might have been infected typically include family, neighbours, and friends. Several layers of contacts may be vaccinated (the contacts, the contacts' contacts, the contacts' contacts' contacts, etc.).
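In graph terms, the ring is the set of people within a small contact-network distance of a confirmed case. The following C sketch illustrates the idea on a toy network (the contact matrix, the case index, and the ring depth of 2 are hypothetical illustrations, not epidemiological data):

#include <stdio.h>

#define N 6  /* people in this toy contact network */

/* contact[i][j] = 1 if persons i and j have been in direct contact */
static const int contact[N][N] = {
    {0,1,1,0,0,0},
    {1,0,0,1,0,0},
    {1,0,0,0,1,0},
    {0,1,0,0,0,0},
    {0,0,1,0,0,1},
    {0,0,0,0,1,0},
};

int main(void) {
    int dist[N];              /* contact-network distance from the case */
    int queue[N], head = 0, tail = 0;
    const int patient = 0;    /* confirmed case */
    const int ring_depth = 2; /* contacts and contacts-of-contacts */

    for (int i = 0; i < N; i++) dist[i] = -1;
    dist[patient] = 0;
    queue[tail++] = patient;
    while (head < tail) {     /* breadth-first search, capped at ring_depth */
        int u = queue[head++];
        if (dist[u] == ring_depth) continue;
        for (int v = 0; v < N; v++)
            if (contact[u][v] && dist[v] == -1) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
    }
    for (int i = 0; i < N; i++)
        if (dist[i] > 0)      /* everyone in the ring, excluding the case */
            printf("vaccinate person %d (distance %d)\n", i, dist[i]);
    return 0;
}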
Ring vaccination relies on contact tracing to determine possible infections. However, this can be difficult. In some cases, it is preferable to vaccinate as many people as possible within the geographic area of known infection (geographically-targeted reactive vaccination). If the infections occur within a defined geographic boundary, it may be preferable to vaccinate the entire community in which the illness has appeared, rather than explicitly tracing contacts.
Many vaccines take several weeks to induce immunity, and thus do not provide immediate protection. However, even if some of the ill person's contacts are already infected, ring vaccination can prevent the virus from being transmitted onward, to those contacts' own contacts. A few vaccines can protect even if they are given just after infection; ring vaccination is somewhat more effective for vaccines providing this post-exposure prophylaxis.
Advantages
When responding to a possible outbreak, health officials should consider which is best, ring vaccination or mass vaccination. In some outbreaks, it might be better to only vaccinate those directly exposed; variable factors (such as demographics and the vaccine that is available) can make one method or the other safer, with fewer people experiencing side-effects when the same number are protected from the disease.
History
Ring vaccination was used in Leicester, England, in the late 19th century.
It was also used in the mid-20th century in the eradication of smallpox.
It was used experimentally in the Ebola virus epidemic in West Africa.
In 2018, health authorities used a ring vaccination strategy to try to suppress the 2018 Équateur province Ebola outbreak. This involved vaccinating only those most likely to be infected; direct contacts of infected individuals, and contacts of those contacts. The vaccine used was rVSV-ZEBOV.
Ring vaccination has been used extensively in the 2018 Kivu Ebola outbreak, with over 90,000 people vaccinated. In April 2019, the WHO published the preliminary results of its research, conducted in association with the DRC's Institut National pour la Recherche Biomedicale, into the effectiveness of the ring vaccination program, stating that the rVSV-ZEBOV-GP vaccine had been 97.5% effective at stopping Ebola transmission, relative to no vaccination.
See also
Cocooning (immunization)
Herd immunity
Pulse vaccination strategy
Targeted immunization strategies
Vaccination
References
Further reading
Vaccination | Ring vaccination | Biology | 709 |
54,135,786 | https://en.wikipedia.org/wiki/Petr%20Hor%C3%A1lek | Petr Horálek (born July 21, 1986) is a Czech astrophotographer, popularizer of astronomy and an artist.
Astronomy and Astrophotography
Early life
He worked as a volunteer at the Pardubice observatory in 1999–2010 and studied theoretical physics and astrophysics at the Masaryk University in Brno (2007–2011). During this period, in 2011, he took up astrophotography.
Astrophotography
Petr specializes in photographing "rare night-sky phenomena" and in TWAN-style photographs of the night sky over a recognizable foreground. His images capture unique moments and views in the sky to highlight the importance of the fight against global light pollution.
Unique image from Aitutaki Island
Such distant places were, however, beyond his financial means, so in 2014 he obtained a working holiday visa and travelled to New Zealand. After working on fruit orchards for four months, he had earned enough money for his one goal: to travel to the Cook Islands and capture what is probably the first nightscape photograph of the Milky Way over the popular Aitutaki Island.
ESO Photo Ambassador
In January 2015, Petr Horálek became the 22nd Photo Ambassador of the European Southern Observatory. He focuses mostly on ultra-HD fulldome and panoramic images of the night sky over ESO observatories, which can be freely used in digital planetariums. With this work, he participates in the program of the ESO Supernova Planetarium & Visitor Centre.
Cooperation with other organizations
As an astrophotographer he cooperates with the NSF's NOIRLab organization, where he participated in a virtual tour of all of the NOIRLab sites (Kitt Peak National Observatory, Gemini Observatory, Vera C. Rubin Observatory and Cerro Tololo Inter-American Observatory). He is the Czech delegate of the International Dark Sky Association and an active TWAN guest photographer. Since 2020 he has worked at the Institute of Physics in Opava.
Scientific impact
Several of his images have had scientific impact; some examples:
Record of a unique bright Perseid fireball on August 12, 2012, analyzed by P. Spurný et al. and published in the journal Astronomy & Astrophysics
Data of the hybrid solar eclipse of 2013 in Uganda were post-processed by the mathematician and computer scientist Prof. Miloslav Druckmüller
Sophisticated imaging and post-processing of the zodiacal light over the ESO La Silla Observatory in April 2016 revealed structures in the zodiacal band, image was published in the ESO Messenger 164 (June 2016) as well as the ESO Picture of the Week
Determining the properties of an asteroid that hit the Moon during the January 21, 2019 total lunar eclipse
Awards
Images of Petr Horálek were chosen 40 times as NASA's Astronomy Picture of the Day, ESO Picture of the Week and Czech Astrophotography of the Month. In October 2015 the International Astronomical Union named the asteroid (6822) 1986 UO after him (see Asteroid 6822 Horálek).
Asteroid 6822 Horálek
Asteroid 6822 Horálek or (6822) 1986 UO was discovered by Zdeňka Vavrová on October 28, 1986, at Kleť Observatory, the Czech Republic. It is an asteroid about 5 km wide, orbiting the Sun in the main asteroid belt with a perihelion of 1.943 AU and an aphelion of 3.235 AU.
MPC Citation for 6822 Horálek
Petr Horálek (b. 1986) is a Czech astronomer, astronomy popularizer, passionate photographer, and one of the ESO Photo Ambassadors. He travels the globe to observe solar and lunar eclipses. His breathtaking photographs capture the beauty of the night sky and its harmony with the landscape. [Ref: Minor Planet Circ. 95803]
Art
Petr Horálek is also an artist. He focuses on sketches, especially female portraits and emotionally expressive drawings.
References
External links
Petr Horálek's images in ESO archive
Petr Horálek's official website
Space photography and videography
1986 births
Czech photographers
Living people | Petr Horálek | Astronomy | 844 |
4,118,466 | https://en.wikipedia.org/wiki/139P/V%C3%A4is%C3%A4l%C3%A4%E2%80%93Oterma | 139P/Väisälä–Oterma is a periodic comet in the Solar System. When it was discovered in 1939 it was not recognized as a comet and designated as asteroid 1939 TN.
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
139P/Vaisala-Oterma – Seiichi Yoshida @ aerith.net
139P at Kronk's Cometography
Periodic comets
Discoveries by Liisi Oterma | 139P/Väisälä–Oterma | Astronomy | 106 |
5,335,894 | https://en.wikipedia.org/wiki/Methyllithium | Methyllithium is the simplest organolithium reagent, with the empirical formula CH3Li. This s-block organometallic compound adopts an oligomeric structure both in solution and in the solid state. This highly reactive compound, invariably used in solution with an ether as the solvent, is a reagent in organic synthesis as well as organometallic chemistry. Operations involving methyllithium require anhydrous conditions, because the compound is highly reactive towards water. Oxygen and carbon dioxide are also incompatible with MeLi. Methyllithium is usually not prepared, but purchased as a solution in various ethers.
Synthesis
In the direct synthesis, methyl bromide is treated with a suspension of lithium in diethyl ether.
2 Li + MeBr → LiMe + LiBr
The lithium bromide forms a complex with the methyllithium. Most commercially available methyllithium consists of this complex. "Low-halide" methyllithium is prepared from methyl chloride. Lithium chloride precipitates from the diethyl ether since it does not form a strong complex with methyllithium. The filtrate consists of fairly pure methyllithium. Alternatively, commercial methyllithium can be treated with dioxane to precipitate LiBr(dioxane), which can be removed by filtration. The choice between halide-free MeLi and the LiBr-MeLi complex has a decisive effect on some syntheses.
Reactivity
Methyllithium is both strongly basic and highly nucleophilic due to the partial negative charge on carbon and is therefore particularly reactive towards electron acceptors and proton donors. In contrast to n-BuLi, MeLi reacts only very slowly with THF at room temperature, and solutions in ether are indefinitely stable. Water and alcohols react violently. Most reactions involving methyllithium are conducted below room temperature. Although MeLi can be used for deprotonations, n-butyllithium is more commonly employed since it is less expensive and more reactive.
Methyllithium is mainly used as the synthetic equivalent of the methyl anion synthon. For example, ketones react to give tertiary alcohols in a two-step process:
Ph2CO + MeLi → Ph2C(Me)OLi
Ph2C(Me)OLi + H+ → Ph2C(Me)OH + Li+
Nonmetal halides are converted to methyl compounds with methyllithium:
PCl3 + 3 MeLi → PMe3 + 3 LiCl
Such reactions more commonly employ the Grignard reagents methylmagnesium halides, which are often equally effective, and less expensive or more easily prepared in situ.
It also reacts with carbon dioxide to give lithium acetate:
CH3Li + CO2 → CH3CO2−Li+
Transition metal methyl compounds can be prepared by reaction of MeLi with metal halides. Especially important are the formation of organocopper compounds (Gilman reagents), of which the most useful is lithium dimethylcuprate. This reagent is widely used for nucleophilic substitutions of epoxides, alkyl halides and alkyl sulfonates, as well as for conjugate additions to α,β-unsaturated carbonyl compounds by methyl anion. Many other transition metal methyl compounds have been prepared.
ZrCl4 + 6 MeLi → Li2ZrMe6 + 4 LiCl
Structure
Two structures have been verified by single crystal X-ray crystallography as well as by 6Li, 7Li, and 13C NMR spectroscopy. The tetrameric structure is a distorted cubane-type cluster, with carbon and lithium atoms at alternate corners. The Li-Li distances are 2.68 Å, almost identical with the Li-Li bond in gaseous dilithium. The C-Li distances are 2.31 Å. Carbon is bonded to three hydrogen atoms and three Li atoms. The nonvolatility of (MeLi)4 and its insolubility in alkanes result from the fact that the clusters interact via further inter-cluster agostic interactions. In contrast, the bulkier cluster (t-BuLi)4, where intercluster interactions are precluded by steric effects, is volatile as well as soluble in alkanes.
Colour code: Li purple, C black, H white.
The hexameric form features hexagonal prisms with Li and C atoms again at alternate corners.
Colour code: Li purple, C black, H white.
The degree of aggregation, "n" for (MeLi)n, depends upon the solvent and the presence of additives (such as lithium bromide). Hydrocarbon solvents such as benzene favour formation of the hexamer, whereas ethereal solvents favour the tetramer.
Bonding
These clusters are considered "electron-deficient"; that is, they do not follow the octet rule because the molecules lack sufficient electrons to form four 2-centered, 2-electron bonds around each carbon atom, in contrast to most organic compounds. The hexamer is a 30-electron compound (30 valence electrons). If one allocates 18 electrons for the strong C-H bonds, 12 electrons remain for Li-C and Li-Li bonding. There are six electrons for six metal-metal bonds and one electron per methyl-η3 lithium interaction.
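The allocation can be made explicit (a sketch of the counting implied above, in which hydrogen's electrons are not counted, so each of the 18 C-H bonds consumes one electron from carbon's pool):

6 \times 1\,(\text{Li}) + 6 \times 4\,(\text{C}) = 30
30 - 18\,(\text{C-H}) = 12 = 6\,(\text{metal-metal}) + 6\,(\text{CH}_3\text{-Li})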
The strength of the C-Li bond has been estimated at around 57 kcal/mol from IR spectroscopic measurements.
References
Organolithium compounds
Methylating agents
Methyl complexes
Pyrophoric materials | Methyllithium | Chemistry,Technology | 1,176 |
23,265,764 | https://en.wikipedia.org/wiki/IEEE%20Kiyo%20Tomiyasu%20Award | The IEEE Kiyo Tomiyasu Award is a Technical Field Award established by the IEEE Board of Directors in 2001. It is an institute level award, not a society level award. It is presented for outstanding early to mid-career contributions to technologies holding the promise of innovative applications. The prize is sponsored by Dr. Kiyo Tomiyasu, the IEEE Geoscience and Remote Sensing Society, and the IEEE Microwave Theory and Techniques Society (MTT).
This award may be presented to an individual, multiple recipients or team of up to three people. Candidates must have graduated within the last fifteen years and must be no more than forty-five years old when nominated. Recipients of this award receive a bronze medal, certificate, and honorarium.
Recipients & citations
2022: Pieter Abbeel
- for contributions to deep learning.
2021: Zhu Han
- for contributions to game theory and distributed management of autonomous communication networks.
2020: Andrea Alù
2019: Robert W. Heath Jr. and Jeffrey Andrews
- for contributions to wireless communication systems
2018: Nicholas Laneman
- for cooperative communications and relaying techniques in wireless communications
2017: Emilio Frazzoli
- for developing planning and control algorithms of autonomous vehicles
2016: Yonina Eldar
- "for development of theory and implementation of sub-Nyquist sampling with applications to radar, communications, and ultrasound."
2015: Kaustav Banerjee and Vivek Subramanian
- "for contributions to nano-materials, devices, circuits, and CAD, enabling lowpower and low-cost electronics."
2014: George Chrisikos
- "for contributions to heterogeneous network architectures with ubiquitous wireless access."
2013: Carlos A. Coello Coello
-"for pioneering contributions to single- and multiobjective optimization techniques using bioinspired metaheuristics."
2012: Mung Chiang
- "for demonstrating the practicability of a new theoretical foundation for the analysis and design of communication networks."
2011: Moe Z. Win
- "for fundamental contributions to high speed reliable communications using optical and wireless channels"
2010: Tsu-Jae King
- "for contributions to nanoscale MOS transistors, memory devices, and MEMs devices."
2009: Shih-Fu Chang
- "for contributions to Automated Image Classification"
2008: George V. Eleftheriades
- "for pioneering contributions to the science and the technological applications of negative-refraction electromagnetic materials"
2007: Alberto Moreira
- "for development of synthetic aperture radar concepts."
2006: Muhammad A. Alam
- "for contributions to device technology for communication systems"
2005: Chai K. Toh
- "for pioneering contributions to communication protocols for ad hoc mobile wireless networks"
2004: David B. Fogel
- ""For outstanding contributions to the science and technology of computational intelligence and to the development and expansion of that field"
2003: Keshab K. Parhi
- "For pioneering contributions to high-speed and low-power digital signal processing architectures for broadband communications systems"
2002: Casimer DeCusatis
- "for contributions to optical technologies and fiber optic communications, holding the promise of innovative applications for computer networks.'
References
External links
IEEE Kiyo Tomiyasu Award page at IEEE
List of recipients of the IEEE Kiyo Tomiyasu Award
Kiyo Tomiyasu Award
Awards established in 2001 | IEEE Kiyo Tomiyasu Award | Technology | 684 |
4,332,851 | https://en.wikipedia.org/wiki/N%C3%A9gone | Négone was a Spanish proprietary augmented reality role playing experience played at a facility in Madrid.
Description
In a physical indoor space, an adventure was played out in themed rooms and scenes. The player moved through the scenes, interacting with the environment to accomplish the goals of the game through physical tasks and puzzle-solving, earning points towards the success of the mission. Time for tasks was limited.
Before entering the game, each player was given a small wristband computer which allowed them to interact with the environment. It also allowed the system to gather information about user actions using RFID and infrared technologies.
Users could continue the game on the internet at home.
History
Négone was created by the Spanish company DifferendGames, S.A., founded in 2002. The first incarnation, "La Maquina" (The Machine), opened in July 2003 at a shopping mall in the south of Madrid. A second version opened in downtown Madrid in October 2005, near the Santiago Bernabéu Stadium. This version was entitled "La Fuga" (The Escape), and replicated a futuristic 31st-century prison named Mazzinia.
In 2008, the company was working to integrate the first interactive comic book.
Although there were plans to expand Négone to the United States, by mid-2009 Négone had closed.
See also
Escape room
References
Single-player online games
Human–computer interaction | Négone | Engineering | 288 |
20,780,721 | https://en.wikipedia.org/wiki/Yes%20and%20no | Yes and no, or similar word pairs, are expressions of the affirmative and the negative, respectively, in several languages, including English. Some languages make a distinction between answers to affirmative versus negative questions and may have three-form or four-form systems. English originally used a four-form system up to and including Early Middle English. Modern English uses a two-form system consisting of yes and no. It exists in many facets of communication, such as: eye blink communication, head movements, Morse code, and sign language. Some languages, such as Latin, do not have yes-no word systems.
Answering a "yes or no" question with single words meaning yes or no is by no means universal. About half the world's languages typically employ an echo response: repeating the verb in the question in an affirmative or a negative form. Some of these also have optional words for yes and no, like Hungarian, Russian, and Portuguese. Others simply do not have designated yes and no words, like Welsh, Irish, Latin, Thai, and Chinese. Echo responses avoid the issue of what an unadorned yes means in response to a negative question. Yes and no can be used as a response to a variety of situationsbut are better suited in response to simple questions. While a yes response to the question "You don't like strawberries?" is ambiguous in English, the Welsh response (I am) has no ambiguity.
The words yes and no are not easily classified into any of the conventional parts of speech. Sometimes they are classified as interjections. They are sometimes classified as a part of speech in their own right, sentence words, or pro-sentences, although that category contains more than yes and no, and not all linguists include them in their lists of sentence words. Yes and no are usually considered adverbs in dictionaries, though some uses qualify as nouns. Sentences consisting solely of one of these two words are classified as minor sentences.
In English
Classification
Although sometimes classified as interjections, these words do not express emotion or act as calls for attention; they are not adverbs because they do not qualify any verb, adjective, or adverb. They are sometimes classified as a part of speech in their own right: sentence words or word sentences.
This is the position of Otto Jespersen, who states that "'Yes' and 'No' ... are to all intents and purposes sentences just as much as the most delicately balanced sentences ever uttered by Demosthenes or penned by Samuel Johnson."
Georg von der Gabelentz, Henry Sweet, and Philipp Wegener have all written on the subject of sentence words. Both Sweet and Wegener include yes and no in this category, with Sweet treating them separately from both imperatives and interjections, although Gabelentz does not.
Watts classifies yes and no as grammatical particles, in particular response particles. He also notes their relationship to the interjections oh and ah, which is that the interjections can precede yes and no but not follow them. Oh as an interjection expresses surprise, but in the combined forms oh yes and oh no merely acts as an intensifier; but ah in the combined forms ah yes and ah no retains its stand-alone meaning, of focusing upon the previous speaker's or writer's last statement. The forms *yes oh, *yes ah, *no oh, and *no ah are grammatically ill-formed. Aijmer similarly categorizes the yes and no as response signals or reaction signals.
Felix Ameka classifies these two words in different ways according to the context. When used as back-channel items, he classifies them as interjections; but when they are used as the responses to a yes–no question, he classifies them as formulaic words. The distinction between an interjection and a formula is, in Ameka's view, that the former does not have an addressee (although it may be directed at a person), whereas the latter does. The yes or no in response to the question is addressed at the interrogator, whereas yes or no used as a back-channel item is a feedback usage, an utterance that is said to oneself. However, Sorjonen criticizes this analysis as lacking empirical work on the other usages of these words, in addition to interjections and feedback uses.
Bloomfield and Hockett classify the words, when used to answer yes–no questions, as special completive interjections. They classify sentences comprising solely one of these two words as minor sentences.
Sweet classifies the words in several ways. They are sentence-modifying adverbs, adverbs that act as modifiers to an entire sentence. They are also sentence words, when standing alone. They may, as question responses, also be absolute forms that correspond to what would otherwise be the not in a negated echo response. For example, a "No." in response to the question "Is he here?" is equivalent to the echo response "He is not here." Sweet observes that there is no correspondence with a simple yes in the latter situation, although the sentence-word "Certainly." provides an absolute form of an emphatic echo response "He is certainly here." Many other adverbs can also be used as sentence words in this way.
Unlike yes, no can also be an adverb of degree, applying to adjectives solely in the comparative (e.g., no greater, no sooner, but not no soon or no soonest), and an adjective when applied to nouns (e.g., "He is no fool." and Dyer's "No clouds, no vapours intervene.").
Grammarians of other languages have created further, similar, special classifications for these types of words. Tesnière classifies the French and as (along with ). Fonagy observes that such a classification may be partly justified for the former two, but suggests that pragmatic holophrases is more appropriate.
The Early English four-form system
While Modern English has a two-form system of yes and no for affirmatives and negatives, earlier forms of English had a four-form system, comprising the words yea, nay, yes, and no. Yes contradicts a negatively formulated question, No affirms it; Yea affirms a positively formulated question, Nay contradicts it.
Will they not go? — Yes, they will.
Will they not go? — No, they will not.
Will they go? — Yea, they will.
Will they go? — Nay, they will not.
This is illustrated by the following passage from Much Ado about Nothing:
Benedick's answer of yea is a correct application of the rule, but as observed by W. A. Wright "Shakespeare does not always observe this rule, and even in the earliest times the usage appears not to have been consistent." Furness gives as an example the following, where Hermia's answer should, in following the rule, have been yes:
This subtle grammatical feature of Early Modern English is recorded by Sir Thomas More in his critique of William Tyndale's translation of the New Testament into Early Modern English, which was then quoted as an authority by later scholars:
In fact, More's exemplification of the rule actually contradicts his statement of what the rule is. This went unnoticed by scholars such as Horne Tooke, Robert Gordon Latham, and Trench, and was first pointed out by George Perkins Marsh in his Century Dictionary, where he corrects More's incorrect statement of the first rule, "No aunswereth the question framed by the affirmative.", to read nay. That even More got the rule wrong, even while himself dressing down Tyndale for getting it wrong, is seen by Furness as evidence that the four word system was "too subtle a distinction for practice".
Marsh found no evidence of a four-form system in Mœso-Gothic, although he reported finding "traces" in Old English. He observed that in the Anglo-Saxon Gospels,
positively phrased questions are answered positively with (John 21:15,16, King James Version: "Jesus saith to Simon Peter, Simon, son of Jonas, lovest thou me more than these? He saith unto him, Yea, Lord; thou knowest that I love thee" etc.)
and negatively with (Luke 12:51, KJV: "Suppose ye that I am come to give peace on earth? I tell you, Nay; but rather division"; 13:4,5, KJV: "Or those eighteen, upon whom the tower in Siloam fell, and slew them, think ye that they were sinners above all men that dwelt in Jerusalem? I tell you, Nay: but, except ye repent, ye shall all likewise perish."), (John 21:5 "Then Jesus saith unto them, Children, have ye any meat? They answered him, No."; Matthew 13:28,29, KJV: "The servants said unto him, Wilt thou then that we go and gather them up? But he said, Nay; lest while ye gather up the tares, ye root up also the wheat with them."), and meaning 'not I' (John 18:17, KJV: "Then saith the damsel that kept the door unto Peter, Art not thou also one of this man's disciples? He saith, I am not.");
while negatively phrased questions are answered positively with (Matthew 17:25, KJV: "they that received tribute money came to Peter, and said, Doth not your master pay tribute? He saith, Yes.")
and negatively for example with , meaning 'no one' (John 8:10,11, "he said unto her, Woman, where are those thine accusers? hath no man condemned thee? She said, No man, Lord.").
Marsh calls this four-form system of Early Modern English a "needless subtlety". Tooke called it a "ridiculous distinction", with Marsh concluding that Tooke believed Thomas More to have simply made this rule up and observing that Tooke is not alone in his disbelief of More. Marsh, however, points out (having himself analyzed the works of John Wycliffe, Geoffrey Chaucer, John Gower, John Skelton, and Robert of Gloucester, and Piers Plowman and Le Morte d'Arthur) that the distinction both existed and was generally and fairly uniformly observed in Early Modern English from the time of Chaucer to the time of Tyndale. But after the time of Tyndale, the four-form system was rapidly replaced by the modern two-form system. The Oxford English Dictionary says the four-form system "was usually considered to be... proper..." until about 1600, with citations from Old English (mostly for yes and yea) and without any indication that the system had not yet started then.
Colloquial forms
Non-verbal
Linguist James R. Hurford notes that in many English dialects "there are colloquial equivalents of Yes and No made with nasal sounds interrupted by a voiceless, breathy h-like interval (for Yes) or by a glottal stop (for No)" and that these interjections are transcribed into writing as or . These forms are particularly useful for speakers who are at a given time unable to articulate the actual words yes and no. The use of short vocalizations like uh-huh, mm-hmm, and yeah are examples of non-verbal communication, and in particular the practice of backchanneling.
Art historian Robert Farris Thompson has posited that mm-hmm may be a loanword from a West African language that entered the English vernacular from the speech of enslaved Africans; linguist Lev Michael, however, says that this proposed origin is implausible, and linguist Roslyn Burns states that the origin of the term is difficult to confirm.
Aye and variants
The word aye () as a synonym for yes in response to a question dates to the 1570s. According to the Online Etymology Dictionary, it is of unknown origin. It may derive from the word I (in the context of "I assent"); as an alteration of the Middle English ("yes"); or the adverb aye (meaning "always, ever"), which comes from the Old Norse . Using aye to mean yes is archaic, having disappeared from most of the English-speaking world, but is notably still used by people from parts of Wales, Scotland, Northern Ireland and Northern England in the UK, and in other parts of Ulster in Ireland.
In December 1993, a witness in a court in Stirlingshire, Scotland, answered "aye" to confirm he was the person summoned, but was told by a sheriff judge that he must answer either yes or no, or else be held in contempt of court. When he was asked if he understood, he answered "aye" again, and was imprisoned for 90 minutes for contempt of court. On his release he said, "I genuinely thought I was answering him."
Aye is also a common word in parliamentary procedure, where the phrase the ayes have it means that a motion has passed. In the House of Commons of the British Parliament, MPs vote orally by saying "aye" or "no" to indicate they approve or disapprove of the measure or piece of legislation. (In the House of Lords, by contrast, members say "content" or "not content" when voting).
The term has also historically been used in nautical usage, often phrased as "aye, aye, sir" duplicating the word "aye". Fowler's Dictionary of Modern English Usage (1926) explained that the nautical phrase was at that time usually written ay, ay, sir.
The informal, affirmative phrase why-aye (also rendered whey-aye or way-eye) is used in the dialect of northeast England, most notably by Geordies.
In New England English, chiefly in Maine, ayuh is used; also variants such as eyah, ayeh or ayup. It is believed to be derived from either the nautical or Scottish use of aye.
Other
Other variants of "yes" include acha in informal Indian English and historically righto or righty-ho in upper-class British English, although these fell out of use during the early 20th century.
Three-form systems
Several languages have a three-form system, with two affirmative words and one negative. In a three-form system, the affirmative response to a positively phrased question is the unmarked affirmative, the affirmative response to a negatively phrased question is the marked affirmative, and the negative response to both forms of question is the (single) negative. For example, in Norwegian the affirmative answer to "Snakker du norsk?" ("Do you speak Norwegian?") is "Ja", and the affirmative answer to "Snakker du ikke norsk?" ("Do you not speak Norwegian?") is "Jo", while the negative answer to both questions is "Nei".
Danish, Swedish, Norwegian, Icelandic, Faroese, Hungarian, German, Dutch, French and Malayalam all have three-form systems.
Swedish, and to some extent Danish and Norwegian, also have additional forms javisst and jovisst, analogous to ja and jo, to indicate a strong affirmative response. Swedish (and Danish and Norwegian slang) also have the forms joho and nehej, which both indicate stronger response than jo or nej. Jo can also be used as an emphatic contradiction of a negative statement.
Malayalam has the additional forms , and which act like question words, question tags or to strengthen the affirmative or negative response, indicating stronger meaning than , and . The words , , , , , and work in the same ways. These words are considered more polite than a curt "No!" or "Yes!". means "it is there" and the word behaves as an affirmative response like . The usage of to simply mean "No" or "No way!" is informal and may be casual or sarcastic, while is the more formal way of saying "false", "incorrect" or that "it is not" and is a negative response for questions. The word has a stronger meaning than . is used to mean "OK" or "correct", with the opposite meaning "not OK" or "not correct". It is used to answer affirmatively to questions to confirm any action by the asker, but to answer negatively one says . and both mean to "want" and to "not want".
Other languages with four-form systems
Like Early Modern English, the Romanian language has a four-form system. The affirmative and negative responses to positively phrased questions are da and nu, respectively. But in responses to negatively phrased questions they are prefixed with ba (i.e. ba da and ba nu). nu is also used as a negation adverb, infixed between subject and verb. Thus, for example, the affirmative response to the negatively phrased question "N-ai plătit?" ("Didn't you pay?") is "Ba da." ("Yes."—i.e. "I did pay."), and the negative response to a positively phrased question beginning "Se poate să ...?" ("Is it possible to ...?") is "Nu, nu se poate." ("No, it is not possible."—note the use of nu for both no and negation of the verb.)
Related words in other languages and translation problems
Finnish
Finnish does not generally answer yes–no questions with either adverbs or interjections but answers them with a repetition of the verb in the question, negating it if the answer is the negative. (This is an echo response.) The answer to "Tuletteko kaupungista?" ("Are you coming from town?") is the verb form itself, "Tulemme" ("We are coming."). However, in spoken Finnish, a simple "Yes" answer is somewhat more common.
Negative questions are answered similarly. Negative answers are just the negated verb form. The answer to "Tunnetteko herra Lehdon?" ("Do you know Mr Lehto?") is "En tunne" ("I don't know.") or simply "En" ("I don't."). However, Finnish also has particle words for "yes": kyllä (formal) and joo (colloquial). A yes–no question can be answered "yes" with either kyllä or joo, which are not conjugated according to the person and plurality of the verb. Ei, however, is always conjugated and means "no".
Latvian
Up until the 16th century Latvian did not have a word for "yes" and the common way of responding affirmatively to a question was by repeating the question's verb, just as in Finnish. The modern-day jā was borrowed from Middle High German ja and first appeared in 16th-century religious texts, especially catechisms, in answers to questions about faith. At that time such works were usually translated from German by non-Latvians who had learned Latvian as a foreign language. By the 17th century, jā was being used by some Latvian speakers who lived near the cities, and more frequently when speaking to non-Latvians, but they would revert to agreeing by repeating the question verb when talking among themselves. By the 18th century the use of jā was still of low frequency, and in Northern Vidzeme the word was almost non-existent until the 18th and early 19th century. Only in the mid-19th century did jā really become usual everywhere.
Welsh
It is often assumed that Welsh has no words at all for yes and no. It has ie and nage, and do and naddo. However, these are used only in specialized circumstances and are some of the ways in Welsh of saying yes or no. Ie and nage are used to respond to sentences of simple identification, while do and naddo are used to respond to questions specifically in the past tense. As in Finnish, the main way to state yes or no, in answer to yes–no questions, is to echo the verb of the question. The answers to "Ydy Ffred yn dod?" ('Is Ffred coming?') are either "Ydy" ('He is (coming).') or "Nac ydy" ('He is not (coming)'). In general, the negative answer is the positive answer combined with nac. For more information on yes and no answers to yes–no questions in Welsh, see Jones, listed in further reading.
Latin
Latin has no single words for yes and no. Their functions as word sentence responses to yes–no questions are taken up by sentence adverbs, single adverbs that are sentence modifiers and also used as word sentences. There are several such adverbs classed as truth-value adverbs—including , , , , , , , , and (negative). They express the speaker's/writer's feelings about the truth value of a proposition. They, in conjunction with the negator , are used as responses to yes–no questions. For example:
Latin also employs echo responses.
Galician and Portuguese
These languages have words for yes and no, namely si and non in Galician and sim and não in Portuguese. However, answering a question with them is less idiomatic than answering with the verb in the proper conjugation.
Spanish
In Spanish, the words sí ('yes') and no ('no') are unambiguously classified as adverbs: serving as answers to questions and also modifying verbs. The affirmative sí can replace the verb after a negation (Yo no tengo coche, pero él sí = I don't own a car, but he does) or intensify it (I don't believe he owns a car. / He does own one! = ¡Sí tiene coche!). The word no is the standard adverb placed next to a verb to negate it (No tengo coche = I don't own a car). Double negation is normal and valid in Spanish, and it is interpreted as reinforcing the negation (No tengo ningún coche = I own no car).
Nepali
In Nepali, there is no one word for 'yes' and 'no' as it depends upon the verb used in the question. The words most commonly translated as equivalents are 'हो' (ho; ) and 'होइन' (hoina; ) are in fact the affirmative and negative forms of the same verb 'हो' (ho; ) and hence is only used when the question asked contains said verb. In other contexts, one must repeat the affirmative or negative forms of the verb being asked, for instance "तिमीले खाना खायौँ?" (timīle khānā khāyau?; ) would be answered by "खाएँ" (khāe˜; ), which is the verb "to eat" conjugated for the past tense first person singular. In certain contexts, the word "नाई" (nāī) can be used to deny something that is stated, for instance politely passing up an offer.
Chinese
Speakers of Chinese use echo responses. In all Sinitic/Chinese languages, yes–no questions are often posed in A-not-A form, and the replies to such questions are echo answers that echo either A or not A. In Standard Mandarin Chinese, the closest equivalents to yes and no are to state "是" (shì; lit. "is") and "不是" (bù shì; lit. "not is"). The phrase () may also be used for the interjection "no", and 嗯 (ǹg) may be used for "yes". Similarly, in Cantonese, the preceding are 係 hai6 (lit: "is") and 唔係 m4 hai6 (lit: "not is"), respectively. One can also answer 冇錯 mou5 co3 (lit: "not wrong") for the affirmative, although there is no corresponding negative to this.
Japanese
Japanese lacks words for yes and no. The words "はい" (hai) and "いいえ" (iie) are mistaken by English speakers for equivalents to yes and no, but they actually signify agreement or disagreement with the proposition put by the question: "That's right." or "That's not right." For example: if asked, "Are you not going?", answering with the affirmative "はい" would mean "Right, I am not going"; whereas in English, answering "yes" would be to contradict the negative question. Echo responses are typical in Japanese.
Complications
These differences between languages make translation difficult. No two languages are isomorphic at the most elementary level of words for yes and no. Translation from two-form to three-form systems is equivalent to what English-speaking school children learning French or German encounter. The mapping becomes complex when converting two-form to three-form systems. There are many idioms, such as reduplication (in French, German, and Italian) of affirmatives for emphasis (the Dutch and German ).
The mappings are one-to-many in both directions. The German ja has no fewer than 13 English equivalents that vary according to context and usage (yes, yeah, and no when used as an answer; well, all right, so, and now, when used for segmentation; oh, ah, uh, and eh when used as an interjection; and do you, will you, and their various inflections when used as a marker for tag questions), for example. Moreover, both ja and doch are frequently used as additional particles for conveying nuanced meaning where, in English, no such particle exists. Straightforward, non-idiomatic translations from German to English and then back to German can often result in the loss of all of the modal particles such as ja and doch from a text.
Translation from languages that have word systems to those that do not, such as Latin, is similarly problematic. As Calvert says, "Saying yes or no takes a little thought in Latin".
See also
Affirmation and negation
Thumb signal
Translation
Untranslatability
References
Further reading
—Jones' analysis of how to answer questions with "yes" or "no" in the Welsh language, broken down into a typology of echo and non-echo responsives, polarity and truth-value responses, and numbers of forms
English grammar
English words
History of the English language
Parts of speech | Yes and no | Technology | 5,515 |
23,581,310 | https://en.wikipedia.org/wiki/C16H34 | {{DISPLAYTITLE:C16H34}}
The molecular formula C16H34 (molar mass: 226.44 g/mol, exact mass: 226.2661 u) may refer to:
Hexadecane (cetane)
Isocetane | C16H34 | Chemistry | 61 |
442,205 | https://en.wikipedia.org/wiki/1%20Giant%20Leap | 1 Giant Leap is a British electronic music duo consisting of the two principal artists, Jamie Catto (Faithless founding member) and Duncan Bridgeman.
Career
Based in the UK, the two musicians set out to create a multimedia project that would encompass a CD, DVD and cinematic presentation that would offer a complete artistic statement. The project offers music, digital video footage shot over the course of six months by Catto and Bridgeman, images, rhythms and spoken word content.
The band was signed to the Palm record label and its eponymous CD was released on 9 April 2002. It features contributions from Dennis Hopper, Kurt Vonnegut, Michael Stipe, Robbie Williams, Eddi Reader, Tom Robbins, Brian Eno, Baaba Maal, Speech, Asha Bhosle, Neneh Cherry, Anita Roddick, Michael Franti, Zap Mama, and other artists and authors. The band's theme for the project is "Unity Through Diversity". A making-of was also shown on the Discovery Channel, which featured some of the effort involved in finding and working with the musicians and other people involved in the project.
1 Giant Leap's "My Culture" video for their first top ten single, featuring Robbie Williams and Maxi Jazz from Faithless, received extensive airplay.
In 2004, they moved on to a deal with Simon Fuller and 19 Entertainment to make a second film and album titled What About Me? The concept was the same as their initial CD and DVD: travelling the world interviewing artists and sampling music, though the second time around their journey was longer (four years) and the number of contributing artists doubled from their debut release.
Reception
In a review for NPR's All Things Considered, Charles deLedesma said that the album and DVD had "an uphill marketing struggle ... because it isn't easily pigeonholed. But that's its real strength, too. This production presents a luscious, cohesive collection of images, ideas and extremely beautiful world music, which flows here with the voices of three continents."
Discography
Albums
Video albums
Singles
References
External links
What About Me? website
1 Giant Leap website
1 Giant Leap at Palm Pictures
Film/Music Project inspired by 1 Giant Leap
1 Giant Leap Discography
Full History at Sound On Sound
1 Giant Leap at Palmpictures.co.uk
English electronic music duos
Multimedia works
British world music groups
Trip hop groups
Musical groups established in 2001
Electronic music groups from London
2001 establishments in England
Palm Pictures artists | 1 Giant Leap | Technology | 504 |
28,766,498 | https://en.wikipedia.org/wiki/DOTA-TATE | DOTA-TATE (DOTATATE, DOTA-octreotate, oxodotreotide, DOTA-(Tyr3)-octreotate, and DOTA-0-Tyr3-Octreotate) is an eight amino acid long peptide, with a covalently bonded DOTA bifunctional chelator.
DOTA-TATE can be reacted with the radionuclides gallium-68 (T1/2 = 68 min), lutetium-177 (T1/2 = 6.65 d) and copper-64 (T1/2 = 12.7 h) to form radiopharmaceuticals for positron emission tomography (PET) imaging or radionuclide therapy. 177Lu DOTA-TATE therapy is a form of peptide receptor radionuclide therapy (PRRT) which targets somatostatin receptors (SSR). In that form of application it is a form of targeted drug delivery.
Chemistry and mechanism of action
DOTA-TATE is a compound containing tyrosine3-octreotate, an SSR agonist, and the bifunctional chelator DOTA (tetraxetan). SSRs are found with high density in numerous malignancies, including CNS, breast, lung, and lymphatics. The role of SSR agonists (i.e. somatostatin and its analogs such as octreotide, somatuline and vapreotide) in neuroendocrine tumours (NETs) is well established, and massive SSR overexpression is present in several NETs. (Tyr3)-octreotate binds the transmembrane receptors of NETs with highest activity for SSR2 and is actively transported into the cell via endocytosis, allowing trapping of the radioactivity and increasing the probability of the desired double-strand DNA breakage (for tumour control). Trapping improves the probability of this kind of effect due to the relatively short range of the beta particles emitted by 177Lu, which have a maximum range in tissue of <2 mm. Bystander effects include cellular damage by free radical formation.
Clinical applications
Gallium-68 DOTA-TATE
68Ga DOTA-TATE (gallium-68 dotatate, GaTate) is used to measure tumor SSR density and whole-body bio-distribution via PET imaging. 68Ga DOTA-TATE imagery has a much higher sensitivity and resolution compared to 111In octreotide gamma camera or SPECT scans, due to intrinsic modality differences. It is commonly used to confirm the presence of paragangliomas and pheochromocytomas.
Copper-64 DOTA-TATE
Copper (64Cu) oxodotreotide or copper Cu 64 dotatate, sold under the brand name Detectnet, is a radioactive diagnostic agent indicated for use with positron emission tomography (PET) for localization of somatostatin receptor positive neuroendocrine tumors (NETs) in adults. It was FDA-approved in September 2020. These are the same indications as for the gallium DOTA-TATE scans, but Cu-64 has an advantage over Ga-68 in having a 12.7-hour half-life rather than the much shorter one-hour half-life of Ga-68, making it easier to transport from central production locations.
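The transport advantage follows from the exponential decay law (a back-of-the-envelope sketch; the six-hour shipping time is an arbitrary illustration, not a clinical figure):

A(t)/A_0 = (1/2)^{t/T_{1/2}}
\text{Ga-68: } (1/2)^{(6 \times 60)/68} \approx 0.026 \qquad \text{Cu-64: } (1/2)^{6/12.7} \approx 0.72

After six hours, roughly 72% of the copper-64 activity remains, versus under 3% for gallium-68.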
Lutetium-177 DOTA-TATE
The combination of the beta emitter 177Lu with DOTA-TATE can be used in the treatment of cancers expressing the relevant somatostatin receptors. The U.S. Food and Drug Administration (FDA) considers 177Lu-dotatate to be a first-in-class medication.
Alternatives to 177Lu-DOTA-TATE include 90Y (T1/2 = 64.6 h) DOTA-TATE. The longer penetration range in the target tissues of the more energetic beta particles emitted by 90Y (high average beta energy of 0.9336 MeV) could make it more suitable for large tumors while 177Lu would be preferred for smaller volume tumors.
See also
Lutetium
64Cu-dotatate
References
Chelating agents
Macrocycles
Orphan drugs
Radiopharmaceuticals
DOTA (chelator) derivatives | DOTA-TATE | Chemistry | 887 |
53,302,354 | https://en.wikipedia.org/wiki/Polymerization-induced%20phase%20separation | Polymerization-induced phase separation (PIPS) is the occurrence of phase separation in a multicomponent mixture induced by the polymerization of one or more components. The increase in molecular weight of the reactive component renders one or more components to be mutually immiscible in one another, resulting in spontaneous phase segregation.
Types
Polymerization-induced phase separation can be initiated either through thermally induced polymerization or through photopolymerization. The process generally occurs through spinodal decomposition, commonly resulting in the formation of co-continuous phases.
Control over morphology
The morphology of the final phase-separated structures is generally random owing to the stochastic nature of the onset and progress of phase separation. Several approaches have been investigated to control morphology. Tran-Cong-Miyata and co-workers used periodic irradiation of photoreactive polymer blends to control morphology, specifically the width of the resultant spinodal modes in the phase-separated morphology. Li and co-workers employed holographic polymerization in order to direct the phase-separated structure to adopt the same pattern as the holographic field. Recently, Hosein and co-workers demonstrated that the nonlinear optical pattern formations that occur in photopolymer systems may be used to direct the organization of blends to have the same morphology as the light pattern.
Applications
The process is commonly used in control of the morphology of polymer blends, for applications in thermoelectrics, solid-state lighting, polymer electrolytes, composites, membrane formation, and surface pattern formations.
References
Polymer chemistry | Polymerization-induced phase separation | Chemistry,Materials_science,Engineering | 331 |
153,688 | https://en.wikipedia.org/wiki/Well%20dressing | Well dressing, also known as well flowering, is a tradition practised in some parts of rural England in which wells, springs and other water sources are decorated with designs created from flower petals. The custom is most closely associated with the Peak District of Derbyshire and Staffordshire. James Murray Mackinlay, writing in 1893, noted that the tradition was not observed in Scotland; W. S. Cordner, in 1946, similarly noted its absence in Ireland. Both Scotland and Ireland do have a long history of the veneration of wells, however, dating from at least the 6th century.
The custom of well dressing in its present form probably began in the late 18th century, and evolved from "the more widespread, but less picturesque" decoration of wells with ribbons and simple floral garlands.
History
The location identified most closely with well dressing is Tissington, Derbyshire, though the origins of the tradition are obscure. It has been speculated that it began as a pagan custom of offering thanks to gods for a reliable water supply; other suggested explanations include villagers celebrating the purity of their water supply after surviving the Black Death in 1348, or alternatively celebrating their water's constancy during a prolonged drought in 1615. The practice of well dressing using clay boards at Tissington is not recorded before 1818, however, and the earliest record for the wells being adorned by simple garlands occurs in 1758.
Well dressing was celebrated in at least 12 villages in Derbyshire by the late 19th century, and was introduced in Buxton in 1840, "to commemorate the beneficence of the Duke of Devonshire who, at his own expense, made arrangements for supplying the Upper Town, which had been much inconvenienced by the distance to St Anne's well on the Wye, with a fountain of excellent water within easy reach of all". Similarly, well dressing was revived at this time in Youlgreave, to celebrate the supplying of water to the village "from a hill at some distance, by means of pipes laid under the stream of an intervening valley". With the arrival of piped water the tradition was adapted to include public taps, although the resulting creations were still described as well dressings.
The custom waxed and waned over the years, but has seen revivals in Derbyshire, Staffordshire, South Yorkshire, Cheshire, Shropshire, Worcestershire and Kent.
Process
Wooden frames are constructed and covered with clay, mixed with water and salt. A design is sketched on paper, often of a religious theme, and this is traced onto the clay. The picture is then filled in with natural materials, predominantly flower petals and mosses, but also beans, seeds and small cones. Each group uses its own technique, with some areas mandating that only natural materials be used while others feel free to use modern materials to simplify production. Wirksworth and Barlow are two of the very few dressings where the strict use of only natural materials is still observed.
In literature
John Brunner's story "In the Season of the Dressing of the Wells" describes the revival of the custom in an English village of the West Country after World War I, and its connection to the Goddess.
Jon McGregor's novel Reservoir 13 is set in a village where well dressing is an annual event.
See also
Clootie well
Osterbrunnen
References
Footnotes
Bibliography
External links
welldressing.com Listing of dates and sites, with galleries of photos and historical information
Official website for the Stoney Middleton Well Dressing Committee
Official website of the Buxton Wells Dressing Festival
Short history of well dressing
Tissington Hall's guide to producing welldressings
Well dressings in Wirksworth Derbyshire
Community site for Wirksworth Derbyshire
Well Dressings in Barlow, Derbyshire. Dressed year on year for at least 150 years
A history of well dressing in Wormhill
Well dressings in Brackenfield
Culture in Derbyshire
English folklore
Tourist attractions of the Peak District
Water wells
English traditions | Well dressing | Chemistry,Engineering,Environmental_science | 799 |
63,915,831 | https://en.wikipedia.org/wiki/Ester%20H.%20Segal | Ester H. Segal is an Israeli nanotechnology researcher and professor in the Department of Biotechnology and Food Engineering at the Technion - Israel Institute of Technology, where she heads the Laboratory for Multifunctional Nanomaterials. She is also affiliated with the Russell Berrie Nanotechnology Institute at the Technion - Israel Institute of Technology. Segal is a specialist in porous silicon nanomaterials, as well as nanocomposite materials for active packaging technologies to extend the shelf life of food.
Education
Segal received her bachelor of science degree in chemical engineering from the Technion - Israel Institute of Technology in 1997. She earned her master of science degree and PhD from the Technion in polymer science.
Research and career
Segal completed her graduate research with Moshe Narkis at the Technion - Israel Institute of Technology, where she developed electrically conductive polymer systems and their application as sensors for volatile organic compounds. After completing her PhD in 2004, Segal was awarded the Rothschild Postdoctoral Fellowship and joined the group of Michael J. Sailor at the Department of Chemistry and Biochemistry at the University of California, San Diego from 2004 to 2007. There, she developed porous silicon nanomaterials for drug delivery and optical biosensing purposes. In 2007, she returned to Israel and joined the Department of Biotechnology and Food Engineering at the Technion - Israel Institute of Technology to begin her own research lab. She was promoted to full professor in 2020.
Her research lab focuses on coupling materials science with chemistry and biotechnology to address problems in food technology and medicine. Specific areas include optical biosensing, silicon-based therapeutics, silicon-polymer hybrids, and food packaging technologies.
Optical biosensors
Fabry-Perot interferometers
Using electrochemically etched mesoporous silicon, Segal's research group has developed label-free optical sensors by means of Fabry-Perot interferometry. These sensors, containing pores between 10 and 100 nm, detect analytes such as proteins, DNA, whole bacteria cells, amphipathic molecules on lipid bilayers, organophosphorus compounds, heavy metal ions, and proteolytic products from enzymatic activity. Some of these sensors have been integrated with isotachophoresis and/or engineered with specific surface functions (e.g. attached proteins, enzymes, aptamers, and antimicrobial peptides) to enhance the limits of detection for analytes. She has helped engineer hybrid porous silicon materials for sensing purposes, including carbon dot-infused silicon transducers, hydrogel-confined silicon substrates, and polymer-silicon hybrids.
Diffraction gratings
Segal's research group engineered microstructured silicon optical sensors for the detection of microorganisms, including bacteria and fungi, in clinical samples and food. The microstructured substrates serve as reflective diffraction gratings for label-free measurements of refractive index. Her group (in collaboration with the Department of Urology at the Bnai Zion hospital and Ha'Emek Medical Center) developed a means of rapid antimicrobial susceptibility testing for clinical samples.
Porous silicon therapeutics
Segal and her research team engineered porous silicon carriers containing nerve growth factor for delivery to the brain in Alzheimer's models, in addition to carriers of anti-cancer drugs to diseased tissue and bone morphogenetic protein 2. She also demonstrated the delivery of anti-cancer drugs captured in silicon microparticles with a pneumatic capillary gene gun. She has studied the kinetics and degradation of porous silicon therapeutics in disease models, finding that porous silicon materials tend to degrade at faster rates in diseased tissue environments compared to healthy tissue.
Food packaging technologies
Some of Segal's research focuses on the development of technologies for active packaging of food, usually through the incorporation of polymers, nanomaterials, and essential oils. These materials have antimicrobial properties, allowing them to preserve food for longer and reduce food waste.
Professional activities
2019 ACS Advances in Measurement Science Lectureship Award for her work on photonic crystal sensing.
2019 Lady Globes named her one of Israel's top 50 most influential women.
2017 Discovery Award for Team Prismatix (part of UK Longitude Prize Contest) antimicrobial resistance testing technology
2016 Hershel Rich Innovation Award
2016 Daniel Shiran Memorial Research Prize for outstanding research in biomedicine
2015 Yanai Prize for Excellence in Academic Education
2014 Henry Taub Award for Academic Excellence
Entrepreneurship
Segal serves as the CTO of BactuSense Technologies Ltd and was the project coordinator of Nanopak, an EU-funded project that developed food packaging products to extend the shelf life of food.
Personal life
Segal is a cancer survivor, married, and has two children.
References
Year of birth missing (living people)
Living people
Technion – Israel Institute of Technology alumni
Academic staff of Technion – Israel Institute of Technology
Israeli nanotechnologists
Israeli women engineers
Israeli chemical engineers
Polymer scientists and engineers
21st-century Israeli women scientists | Ester H. Segal | Chemistry,Materials_science | 1,047 |
12,638,277 | https://en.wikipedia.org/wiki/LRRTM1 | LRRTM1 is a brain-expressed imprinted gene that encodes a leucine-rich repeat transmembrane protein that interacts with neurexins and neuroligins to modulate synaptic cell adhesion in neurons. As the name implies, its protein product is a transmembrane protein that contains many leucine rich repeats. It is expressed during the development of specific forebrain structures and shows a variable pattern of maternal downregulation (genomic imprinting).
Clinical significance
LRRTM1 is the first gene linked to increased odds of being left-handed, when inherited from the father's side. Possessing one particular variant of the LRRTM1 gene slightly raises the risk of psychotic mental illnesses such as schizophrenia, again only if inherited from the father's side. LRRTM1 has also been associated with measures of schizotypy in non-clinical populations, indicating that the gene may have shared effects on neurodevelopment in both healthy individuals and individuals with schizophrenia.
LRRTM1 is also critically involved in synapse formation within the dorsal lateral geniculate nucleus (dLGN) of mice. LRRTM1 aids in the assembly of complex retinogeniculate synapses in mice, which are believed to help process complex visual signals. Mice lacking this gene show decreased performance in complex visual tasks.
See also
Handedness
References
Further reading
Genes on human chromosome 2
Motor skills | LRRTM1 | Chemistry,Biology | 306 |
10,844 | https://en.wikipedia.org/wiki/French%20materialism | French materialism is the name given to a handful of French 18th-century philosophers during the Age of Enlightenment, many of them clustered around the salon of Baron d'Holbach. Although there are important differences between them, all of them were materialists who believed that the world was made up of a single substance, matter, the motions and properties of which could be used to explain all phenomena.
Prominent French materialists of the 18th century include:
Julien Offray de La Mettrie
Denis Diderot
Baron d'Holbach
Claude Adrien Helvétius
Pierre Jean Georges Cabanis
Jacques-André Naigeon
See also
Atheism during the Age of Enlightenment
German materialism
Mechanism (philosophy)
Metaphysical naturalism
External links
Marx's essay on French Materialism on WikiSource
Materialism
Philosophical schools and traditions
French philosophy | French materialism | Physics | 172 |
33,308,777 | https://en.wikipedia.org/wiki/Good%E2%80%93deal%20bounds | Good–deal bounds are price bounds for a financial portfolio which depend on an individual trader's preferences. Mathematically, if $A$ is a set of portfolios with future outcomes which are "acceptable" to the trader, then define the function $\rho: L^p \to \mathbb{R}$ by
$$\rho(X) = \inf\{t \in \mathbb{R} : \exists V_T \in M_T \text{ such that } t + V_T + X \in A\},$$
where $M_T$ is the set of final values for self-financing trading strategies. Then any price in the range $(-\rho(X), \rho(-X))$ does not provide a good deal for this trader, and this range is called the "no good-deal price bounds."
If $A = \{Z \in L^0 : Z \geq 0 \text{ a.s.}\}$, then the good-deal price bounds are the no-arbitrage price bounds, and correspond to the subhedging and superhedging prices. The no-arbitrage bounds are the greatest extremes that good-deal bounds can take.
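For concreteness, here is a short sketch (consistent with the definition above, not verbatim from the source) of how the upper bound reduces to the superhedging price in this no-arbitrage case:

```latex
% With A = {Z : Z >= 0 a.s.}, the upper good-deal bound rho(-X) is the
% least initial capital that, combined with some self-financing
% strategy with terminal value V_T, dominates the claim X:
\[
  \rho(-X) = \inf\bigl\{\, t \in \mathbb{R} :
      \exists\, V_T \in M_T \ \text{s.t.}\ t + V_T - X \ge 0 \ \text{a.s.} \,\bigr\}
\]
```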
If $A = \{Z \in L^0 : \mathbb{E}[u(Z)] \geq \mathbb{E}[u(0)]\}$, where $u$ is a utility function, then the good-deal price bounds correspond to the indifference price bounds.
References
Mathematical finance
Pricing | Good–deal bounds | Mathematics | 170 |
2,303,438 | https://en.wikipedia.org/wiki/Draco%20Dwarf | The Draco Dwarf is a dwarf spheroidal galaxy which was discovered by Albert George Wilson of Lowell Observatory in 1954 on photographic plates of the National Geographic Society's Palomar Observatory Sky Survey (POSS). It is part of the Local Group and a satellite galaxy of the Milky Way galaxy. The Draco Dwarf is situated in the direction of the constellation Draco at 34.6° above the galactic plane.
Characteristics
Paul W. Hodge analyzed the distribution of its stars in 1964 and concluded that its ellipticity was 0.29 ± 0.04.
Recent studies have indicated that the galaxy may potentially hold large amounts of dark matter. Having an absolute magnitude of −8.6 and a correspondingly low total luminosity, it is one of the faintest companions to our Milky Way.
Draco Dwarf contains many red giant branch (RGB) stars; five carbon stars have been identified in Draco Dwarf and four likely asymptotic giant branch (AGB) stars have been detected.
The Draco Dwarf is estimated to be 80 ± 10 kpc from Earth and span a distance of 830 ± 100 × 570 ± 70 pc.
RR Lyrae
In 1961, Walter Baade and Henrietta H. Swope studied the Draco Dwarf and discovered over 260 variables; of the 138 in the galaxy's center, all but five were determined to be RR Lyrae variables. From this work an RR Lyrae derived distance modulus of 19.55 is found, which implies a distance of 81 kpc.
Metallicity
The Draco Dwarf contains primarily an old population of stars and insignificant amounts of interstellar matter (being essentially dust-free). From 75% to 90% of its stars formed more than ~10 Gyr ago, followed by a low rate of formation with a small burst of star formation around 2–3 Gyr ago. It has a single Gaussian metallicity distribution with an average of [Fe/H] = −1.74 dex, a standard deviation (σ) of 0.24 dex, and a small tail of metal-rich stars. The central region of the Draco Dwarf exhibits a concentration of more metal-rich stars, there being more centrally concentrated red horizontal branch stars than blue horizontal branch stars.
Dark matter
Recently, dwarf spheroidal galaxies have become key objects for the study of dark matter, and the Draco Dwarf is one which has received specific attention. Radial velocity computations of Draco have revealed a large internal velocity dispersion, giving a very high mass-to-luminosity ratio and suggesting large amounts of dark matter. It has been hypothesized that large velocity dispersions could be explained if such systems are tidal dwarfs (virtually unbound stellar streams from dwarf galaxies tidally disrupted in the Milky Way potential). However, Draco Dwarf's narrow horizontal branch width does not support this model. This only leaves the dark matter explanation and makes Draco Dwarf the most dark matter dominated object known as of 2007. The dark matter distribution within Draco Dwarf is at least nearly isothermal.
At large radii, the radial velocity dispersion exhibits unusual behavior. One possible explanation would be the presence of more than one stellar population. This suggests the need for further study of the metallicity and ages of Draco Dwarf's populations, and of dwarf spheroidals in general.
In 2024, a group of scientists using the Hubble Space Telescope measured proper motions of Draco with 18 years of data, making it the first dwarf galaxy to have its 3D velocity dispersion profile radially resolved. The group of astronomers showed that Draco's dark matter distribution is in better agreement with the LCDM model, helping to alleviate the cusp-core problem.
Notes
Assuming an absolute magnitude of +0.5 (V band) for RR Lyrae stars, the apparent distance modulus (m−M) of the Draco Dwarf is 19.58. Using a reddening value towards the Draco Dwarf of 0.03 ± 0.01, we get a true distance modulus of 19.55.
Using the distance modulus formula $d = 10^{(m - M + 5)/5}$ pc, we get an RR Lyrae estimated distance of 81 kpc.
Apparent Magnitude of 10.9 – distance modulus of 19.52 (80 kpc) = −8.6
distance 80 ± 10 kpc × tan(angular size = 35.5′ × 24.5′) = 830 ± 100 × 570 ± 70 pc diameter
The quoted total mass is the cumulative dark matter mass (much higher than the luminous mass) at 900 pc from the galaxy's center.
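As a sanity check, here is a minimal Python sketch (not part of the source) reproducing the arithmetic in the notes above: the distance from the true distance modulus, and the physical size from the angular extent at the adopted distance.

```python
import math

# Distance from the RR Lyrae true distance modulus (m - M = 19.55).
mu = 19.55
d_pc = 10 ** ((mu + 5) / 5)          # distance modulus formula
print(f"distance ~ {d_pc / 1000:.0f} kpc")   # ~81 kpc

# Physical extent from the angular size at the adopted 80 kpc distance.
d = 80_000                            # pc
for arcmin in (35.5, 24.5):           # angular extent of each axis
    size_pc = d * math.tan(math.radians(arcmin / 60))
    print(f"{arcmin} arcmin -> {size_pc:.0f} pc")   # ~830 and ~570 pc
```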
References
External links
Dwarf spheroidal galaxies
Peculiar galaxies
Local Group
Milky Way Subgroup
Draco (constellation)
10822
60095
| Draco Dwarf | Astronomy | 950 |
21,115,465 | https://en.wikipedia.org/wiki/Unrequited%20love | Unrequited love or one-sided love is love that is not openly reciprocated or understood as such by the beloved. The beloved may not be aware of the admirer's deep affection, or may consciously reject it knowing that the admirer admires them. Merriam-Webster defines unrequited as "not reciprocated or returned in kind".
Psychiatrist Eric Berne states in his book Sex in Human Loving that "Some say that one-sided love is better than none, but like half a loaf of bread, it is likely to grow hard and moldy sooner." The philosopher Friedrich Nietzsche contends that "indispensable...to the lover is his unrequited love, which he would at no price relinquish for a state of indifference". Unrequited love stands in contrast to redamancy, the act of reciprocal love, which is the tendency for people to like others who express liking for them.
Analysis
Route to unrequited love
According to Dr. Roy Baumeister, what makes a person desirable is a complex and highly personal mix of many qualities and traits. People who fall for someone much more desirable than themselves, whether because of physical beauty or attributes like charm, intelligence, wit or status, are in Baumeister's phrase "prone to find their love unrequited"; he describes such relationships as "falling upward".
"Platonic friendships provide a fertile soil for unrequited love." Thus the object of unrequited love is often a friend or acquaintance, someone regularly encountered in the workplace, during the course of work, school or other activities involving large groups of people. This creates an awkward situation in which the admirer has difficulty in expressing their true feelings, a fear that revelation of feelings might invite rejection, cause embarrassment or might end all access to the beloved, as a romantic relationship may be inconsistent with the existing association.
Rejectors
"There are two bad sides to unrequited love, but only one is made familiar by our culture" – that of the lover, not the rejector. In fact, research suggests that the object of unrequited affection experiences a variety of negative emotions exceeding those of the suitor, including anxiety, frustration, and guilt. As Freud pointed out, "when a woman sues for love, to reject and refuse is a distressing part for a man to play".
Advantages
Unrequited love has long been depicted as noble, an unselfish and stoic willingness to accept suffering. Literary and artistic depictions of unrequited love may depend on assumptions of social distance that have less relevance in western, democratic societies with relatively high social mobility and less rigid codes of sexual fidelity. Nonetheless, the literary record suggests a degree of euphoria in the feelings associated with unrequited love, which has the advantage as well of carrying none of the responsibilities of mutual relationships: certainly, "rejection, apparent or real, may be the catalyst for inspired literary creation... 'the poetry of frustration'."
Eric Berne considered that "the man who is loved by a woman is lucky indeed, but the one to be envied is he who loves, however little he gets in return. How much greater is Dante gazing at Beatrice than Beatrice walking by him in apparent disdain."
"Remedies"
Roman poet Ovid in his Remedia Amoris "provides advice on how to overcome inappropriate or unrequited love. The solutions offered include travel, teetotalism, bucolic pursuits, and ironically, avoidance of love poets".
Cultural examples
Western
In the wake of his real-life experiences with Maud Gonne, W. B. Yeats wrote of those who "had read/All I had rhymed of that monstrous thing/Returned and yet unrequited love".
According to Robert B. Pippin, Proust claimed that "the only successful (sustainable) love is unrequited love", something which according to Pippin, "has been invoked as a figure for the condition of modernity itself".
Eastern
The medieval Japanese poet Saigyō may have turned from samurai to monk because of unrequited love, one of his waka asking: "What turned me to wanting/to break with the world-bound life?/Maybe the one whose love/turned to loathing and who now joins with me in a different joy". In other poems he wrote: "Alas, I'm foreordained to suffer, loving deep a heartless lass....Would I could know if there be such in far-off China!"
In China, passion tends to be associated not with happiness, but with sorrow and unrequited love.
See also
References
Further reading
Robert Burton, The Anatomy of Melancholy (New York 1951) THE THIRD PARTITION: LOVE-MELANCHOLY
J. Reid Meloy, Violent Attachments (1997)
Peabody, Susan 1989, 1994, 2005, "Addiction to Love: Overcoming Obsession and Dependency in Relationships."
External links
Grief
Love
Non-sexuality
Philosophy of love
Asymmetry
| Unrequited love | Physics | 1,062 |
4,449,103 | https://en.wikipedia.org/wiki/Vibration%20isolation | Vibration isolation is the prevention of transmission of vibration from one component of a system to other parts of the same system, as in buildings or mechanical systems. Vibration is undesirable in many domains, primarily engineered systems and habitable spaces, and methods have been developed to prevent the transfer of vibration to such systems. Vibrations propagate via mechanical waves and certain mechanical linkages conduct vibrations more efficiently than others. Passive vibration isolation makes use of materials and mechanical linkages that absorb and damp these mechanical waves. Active vibration isolation involves sensors and actuators that produce disruptive interference that cancels out incoming vibration.
Passive isolation
"Passive vibration isolation" refers to vibration isolation or mitigation of vibrations by passive techniques such as rubber pads or mechanical springs, as opposed to "active vibration isolation" or "electronic force cancellation" employing electric power, sensors, actuators, and control systems.
Passive vibration isolation is a vast subject, since there are many types of passive vibration isolators used for many different applications. A few of these applications are for industrial equipment such as pumps, motors, HVAC systems, or washing machines; isolation of civil engineering structures from earthquakes (base isolation), sensitive laboratory equipment, valuable statuary, and high-end audio.
The following subsections give a basic understanding of how passive isolation works, describe the more common types of passive isolators, and outline the main factors that influence the selection of passive isolators.
Common passive isolation systems
Pneumatic or air isolators
These are bladders or canisters of compressed air. A source of compressed air is required to maintain them. Air springs are rubber bladders which provide damping as well as isolation and are used in large trucks. Some pneumatic isolators can attain low resonant frequencies and are used for isolating large industrial equipment. Air tables consist of a working surface or optical surface mounted on air legs. These tables provide enough isolation for laboratory instruments under some conditions. Air systems may leak under vacuum conditions. The air container can interfere with isolation of low-amplitude vibration.
Mechanical springs and spring-dampers
These are heavy-duty isolators used for building systems and industry. Sometimes they serve as mounts for a concrete block, which provides further isolation.
Pads or sheets of flexible materials such as elastomers, rubber, cork, dense foam and laminate materials.
Elastomer pads, dense closed cell foams and laminate materials are often used under heavy machinery, under common household items, in vehicles and even under higher performing audio systems.
Molded and bonded rubber and elastomeric isolators and mounts
These are often used as machinery (such as engines) mounts or in vehicles. They absorb shock and attenuate some vibration.
Negative-stiffness isolators
Negative-stiffness isolators are less common than other types and have generally been developed for high-level research applications such as gravity wave detection. Lee, Goverdovskiy, and Temnikov (2007) proposed a negative-stiffness system for isolating vehicle seats.
The focus on negative-stiffness isolators has been on developing systems with very low resonant frequencies (below 1 Hz), so that low frequencies can be adequately isolated, which is critical for sensitive instrumentation. All higher frequencies are also isolated. Negative-stiffness systems can be made with low stiction, so that they are effective in isolating low-amplitude vibrations.
Negative-stiffness mechanisms are purely mechanical and typically involve the configuration and loading of components such as beams or inverted pendulums. Greater loading of the negative-stiffness mechanism, within the range of its operability, decreases the natural frequency.
Wire rope isolators
These isolators are durable and can withstand extreme environments. They are often used in military applications.
Base isolators for seismic isolation of buildings, bridges, etc.
Base isolators made of layers of neoprene and steel with a low horizontal stiffness are used to lower the natural frequency of the building. Some other base isolators are designed to slide, preventing the transfer of energy from the ground to the building.
Tuned mass dampers
Tuned mass dampers reduce the effects of harmonic vibration in buildings or other structures. A relatively small mass is attached in such a way that it can dampen out a very narrow band of vibration of the structure.
Do-it-yourself isolators
In less sophisticated solutions, bungee cords can be used as a cheap isolation system which may be effective enough for some applications. The item to be isolated is suspended from the bungee cords. This is difficult to implement without a danger of the isolated item falling. Tennis balls cut in half have been used under washing machines and other items with some success. Tennis balls became the de facto standard suspension technique in DIY rave/DJ culture, placed under the feet of each record turntable, where they provide enough damping to keep the vibrations of high-powered sound systems from affecting the delicate, high-sensitivity mechanisms of the turntable needles.
How passive isolation works
A passive isolation system, such as a shock mount, in general contains mass, spring, and damping elements and moves as a harmonic oscillator. The mass and spring stiffness dictate a natural frequency of the system. Damping causes energy dissipation and has a secondary effect on natural frequency.
Every object on a flexible support has a fundamental natural frequency. When vibration is applied, energy is transferred most efficiently at the natural frequency, somewhat efficiently below the natural frequency, and with increasing inefficiency (decreasing efficiency) above the natural frequency. This can be seen in the transmissibility curve, which is a plot of transmissibility vs. frequency.
Transmissibility is the ratio of vibration of the isolated surface to that of the source. Vibrations are never eliminated, but they can be greatly reduced. For a typical passive, negative-stiffness isolation system with a natural frequency of 0.5 Hz, the transmissibility curve has the general shape typical of passive systems. Below the natural frequency, transmissibility hovers near 1. A value of 1 means that vibration is going through the system without being amplified or reduced. At the resonant frequency, energy is transmitted efficiently, and the incoming vibration is amplified. Damping in the system limits the level of amplification. Above the resonant frequency, little energy can be transmitted, and the curve rolls off to a low value. A passive isolator can be seen as a mechanical low-pass filter for vibrations.
In general, for any given frequency above the natural frequency, an isolator with a lower natural frequency will show greater isolation than one with a higher natural frequency. The best isolation system for a given situation depends on the frequency, direction, and magnitude of vibrations present and the desired level of attenuation of those frequencies.
All mechanical systems in the real world contain some amount of damping. Damping dissipates energy in the system, which reduces the vibration level which is transmitted at the natural frequency. The fluid in automotive shock absorbers is a kind of damper, as is the inherent damping in elastomeric (rubber) engine mounts.
Damping is used in passive isolators to reduce the amount of amplification at the natural frequency. However, increasing damping tends to reduce isolation at the higher frequencies: as damping is increased, the transmissibility roll-off above resonance becomes slower, as the sketch below illustrates.
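As a minimal sketch (not from the source), the standard transmissibility formula for a single-degree-of-freedom mass-spring-damper isolator shows both effects: higher damping lowers the resonance peak but slows the roll-off above resonance. The 0.5 Hz natural frequency matches the example above; the damping ratios are illustrative assumptions.

```python
import math

def transmissibility(f, f_n=0.5, zeta=0.1):
    """T(r) = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
    with frequency ratio r = f / f_n and damping ratio zeta."""
    r = f / f_n
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

for zeta in (0.05, 0.2, 0.5):                # more damping ...
    peak = transmissibility(0.5, zeta=zeta)  # ... lowers the resonance peak
    high = transmissibility(5.0, zeta=zeta)  # ... but worsens roll-off at 10*f_n
    print(f"zeta={zeta}: T(f_n)={peak:.2f}, T(10*f_n)={high:.3f}")
```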
Passive isolation operates in both directions, isolating the payload from vibrations originating in the support, and also isolating the support from vibrations originating in the payload. Large machines such as washers, pumps, and generators, which would cause vibrations in the building or room, are often isolated from the floor. However, there are a multitude of sources of vibration in buildings, and it is often not possible to isolate each source. In many cases, it is most efficient to isolate each sensitive instrument from the floor. Sometimes it is necessary to implement both approaches.
In superyachts, the engines and alternators produce noise and vibrations. A common solution is a double elastic suspension, in which the engine and alternator are mounted with vibration dampers on a common frame, and this set is then mounted elastically between the common frame and the hull.
Factors influencing the selection of passive vibration isolators
Characteristics of item to be isolated
Size: The dimensions of the item to be isolated help determine the type of isolation which is available and appropriate. Small objects may use only one isolator, while larger items might use a multiple-isolator system.
Weight: The weight of the object to be isolated is an important factor in choosing the correct passive isolation product. Individual passive isolators are designed to be used with a specific range of loading.
Movement: Machines or instruments with moving parts may affect isolation systems. It is important to know the mass, speed, and distance traveled of the moving parts.
Operating Environment
Industrial: This generally entails strong vibrations over a wide band of frequencies and some amount of dust.
Laboratory: Labs are sometimes troubled by specific building vibrations from adjacent machinery, foot traffic, or HVAC airflow.
Indoor or outdoor: Isolators are generally designed for one environment or the other.
Corrosive/non-corrosive: Some indoor environments may present a corrosive danger to isolator components due to the presence of corrosive chemicals. Outdoors, water and salt environments need to be considered.
Clean room: Some isolators can be made appropriate for clean room.
Temperature: In general, isolators are designed to be used in the range of temperatures normal for human environments. If a larger range of temperatures is required, the isolator design may need to be modified.
Vacuum: Some isolators can be used in a vacuum environment. Air isolators may have leakage problems. Vacuum requirements typically include some level of clean room requirement and may also have a large temperature range.
Magnetism: Some experimentation which requires vibration isolation also requires a low-magnetism environment. Some isolators can be designed with low-magnetism components.
Acoustic noise: Some instruments are sensitive to acoustic vibration. In addition, some isolation systems can be excited by acoustic noise. It may be necessary to use an acoustic shield. Air compressors can create problematic acoustic noise, heat, and airflow.
Static or dynamic loads: This distinction is quite important as isolators are designed for a certain type and level of loading.
Static loading is basically the weight of the isolated object with low-amplitude vibration input. This is the environment of apparently stationary objects such as buildings (under normal conditions) or laboratory instruments.
Dynamic loading involves accelerations and larger-amplitude shock and vibration. This environment is present in vehicles, heavy machinery, and structures with significant movement.
Cost:
Cost of providing isolation: Costs include the isolation system itself, whether it is a standard or custom product; a compressed air source if required; shipping from manufacturer to destination; installation; maintenance; and an initial vibration site survey to determine the need for isolation.
Relative costs of different isolation systems: Inexpensive shock mounts may need to be replaced due to dynamic loading cycles. A higher level of isolation which is effective at lower vibration frequencies and magnitudes generally costs more. Prices can range from a few dollars for bungee cords to millions of dollars for some space applications.
Adjustment: Some isolation systems require manual adjustment to compensate for changes in weight load, weight distribution, temperature, and air pressure, whereas other systems are designed to automatically compensate for some or all of these factors.
Maintenance: Some isolation systems are quite durable and require little or no maintenance. Others may require periodic replacement due to mechanical fatigue of parts or aging of materials.
Size Constraints: The isolation system may have to fit in a restricted space in a laboratory or vacuum chamber, or within a machine housing.
Nature of vibrations to be isolated or mitigated
Frequencies: If possible, it is important to know the frequencies of ambient vibrations. This can be determined with a site survey or accelerometer data processed through FFT analysis.
Amplitudes: The amplitudes of the vibration frequencies present can be compared with required levels to determine whether isolation is needed. In addition, isolators are designed for ranges of vibration amplitudes. Some isolators are not effective for very small amplitudes.
Direction: Knowing whether vibrations are horizontal or vertical can help to target isolation where it is needed and save money.
Vibration specifications of item to be isolated: Many instruments or machines have manufacturer-specified levels of vibration for the operating environment. The manufacturer may not guarantee the proper operation of the instrument if vibration exceeds the spec.
Not-for-profit organizations such as ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) and VISCMA (Vibration Isolation and Seismic Control Manufacturers Association) provide specifications and standards for isolator types and spring deflection requirements that cover a wide array of industries, including electrical, mechanical, plumbing, and HVAC.
Comparison of passive isolators
Negative-stiffness vibration isolator
Negative-Stiffness-Mechanism (NSM) vibration isolation systems offer a unique passive approach for achieving low vibration environments and isolation against sub-Hertz vibrations. "Snap-through" or "over-center" NSM devices are used to reduce the stiffness of elastic suspensions and create compact six-degree-of-freedom systems with low natural frequencies. Practical systems with vertical and horizontal natural frequencies as low as 0.2 to 0.5 Hz are possible. Electro-mechanical auto-adjust mechanisms compensate for varying weight loads and provide automatic leveling in multiple-isolator systems, similar to the function of leveling valves in pneumatic systems. All-metal systems can be configured which are compatible with high vacuums and other adverse environments such as high temperatures.
These isolation systems enable vibration-sensitive instruments such as scanning probe microscopes, micro-hardness testers and scanning electron microscopes to operate in severe vibration environments sometimes encountered, for example, on upper floors of buildings and in clean rooms. Such operation would not be practical with pneumatic isolation systems. Similarly, they enable vibration-sensitive instruments to produce better images and data than those achievable with pneumatic isolators.
The theory of operation of NSM vibration isolation systems is explained in detail in the first two references below and is summarized briefly here for convenience, together with descriptions of some typical systems and applications and data on measured performance.
Vertical-motion isolation
A vertical-motion isolator uses a conventional spring connected to an NSM consisting of two bars hinged at the center, supported at their outer ends on pivots, and loaded in compression by forces P. The spring is compressed by the weight W to the operating position of the isolator. The stiffness of the isolator is K = KS − KN, where KS is the spring stiffness and KN is the magnitude of the negative stiffness, which is a function of the length of the bars and the load P. The isolator stiffness can be made to approach zero while the spring still supports the weight W.
Horizontal-motion isolation
A horizontal-motion isolator consists of two beam-columns. Each beam-column behaves like two fixed-free beam columns loaded axially by a weight load W. Without the weight load the beam-columns have horizontal stiffness KS. With the weight load the lateral bending stiffness is reduced by the "beam-column" effect. This behavior is equivalent to a horizontal spring combined with an NSM, so that the horizontal stiffness is K = KS − KN, where KN is the magnitude of the beam-column effect. The horizontal stiffness can be made to approach zero by loading the beam-columns to approach their critical buckling load.
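As an illustration only (assumed payload mass and spring stiffness, not values from the source), the effect of the negative-stiffness mechanism on the natural frequency follows directly from f_n = (1/2π)·√(K/m) with net stiffness K = KS − KN:

```python
import math

def natural_frequency(k_s, k_n, mass):
    """Natural frequency [Hz] for net stiffness K = K_S - K_N."""
    return math.sqrt((k_s - k_n) / mass) / (2 * math.pi)

mass = 100.0                        # kg, assumed payload
k_s = 4000.0                        # N/m, assumed spring stiffness
for k_n in (0.0, 3000.0, 3900.0):   # increasing negative-stiffness magnitude
    f_n = natural_frequency(k_s, k_n, mass)
    print(f"K_N = {k_n:6.0f} N/m -> f_n = {f_n:.2f} Hz")
# As K_N approaches K_S, the net stiffness and hence f_n approach zero,
# which is how NSM systems reach the 0.2-0.5 Hz range noted above.
```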
Six-degree-of-freedom (six-DOF) isolation
A six-DOF NSM isolator typically uses three isolators stacked in series: a tilt-motion isolator on top of a horizontal-motion isolator on top of a vertical-motion isolator. A typical vibration isolation system consists of a weighted platform supported by a single six-DOF isolator that incorporates the vertical- and horizontal-motion mechanisms described above. Flexures are used in place of the hinged bars of the vertical-motion isolator, and a tilt flexure serves as the tilt-motion isolator. A vertical-stiffness adjustment screw is used to adjust the compression force on the negative-stiffness flexures, thereby changing the vertical stiffness. A vertical load adjustment screw is used to adjust for varying weight loads by raising or lowering the base of the support spring to keep the flexures in their straight, unbent operating positions.
Vibration isolation of supporting joint
Equipment and other mechanical components are necessarily linked to surrounding objects, through supporting joints (with the support or foundation) and unsupporting joints (such as pipe ducts or cables), presenting opportunities for unwanted transmission of vibrations. Using a suitably designed vibration isolator (absorber), vibration isolation of the supporting joint is realized. Measurements of vibration levels before and after mounting working machinery on a vibration isolator show attenuation across a wide range of frequencies.
The vibration isolator
This is defined as a device that reflects and absorbs waves of oscillatory energy extending from a piece of working machinery or electrical equipment, with the desired effect being vibration insulation. The goal is to establish vibration isolation between a body transferring mechanical fluctuations and a supporting body (for example, between a machine and its foundation). An example is the «ВИ» ("VI") series of vibration isolators used in shipbuilding in Russia, for example on the submarine "St. Petersburg" (Lada). The «ВИ» devices are rated for loads of 5, 40 and 300 kg. They differ in their physical sizes, but all share the same fundamental design: a rubber envelope that is internally reinforced by a spring. During manufacture, the rubber and the spring are intimately and permanently connected as a result of the vulcanization process that is integral to the processing of the crude rubber material. Under the weight loading of the machine, the rubber envelope deforms and the spring is compressed or stretched, so that twisting of the enveloping rubber occurs across the spring's cross section. The resulting elastic deformation of the rubber envelope absorbs vibration very effectively. This absorption is crucial to reliable vibration insulation, because it averts the potential for resonance effects. The amount of elastic deformation of the rubber largely dictates the magnitude of vibration absorption that can be attained; the entire device (including the spring itself) must be designed with this in mind. The design of the vibration isolator must also take into account potential exposure to shock loadings, in addition to routine everyday vibrations. Lastly, the vibration isolator must be designed for long-term durability as well as convenient integration into the environment in which it is to be used. Sleeves and flanges are typically employed to enable the vibration isolator to be securely fastened to the equipment and the supporting foundation.
Vibration isolation of unsupporting joint
Vibration isolation of an unsupporting joint is realized with a device called a vibration-isolating branch pipe.
Vibration-isolating branch pipe
A vibration-isolating branch pipe is a section of tube with elastic walls that reflects and absorbs waves of oscillatory energy extending from a working pump along the wall of the pipe duct. It is installed between the pump and the pipe duct. An example is the «ВИПБ» series of vibration-isolating branch pipes. The structure uses a rubber envelope reinforced by a spring, with properties similar to those of the envelope of the vibration isolator described above. The device also includes a mechanism that reduces the axial force arising from internal pressure to nearly zero.
Subframe isolation
Another technique used to increase isolation is to use an isolated subframe. This splits the system with an additional mass/spring/damper system, which doubles the high-frequency attenuation roll-off, at the cost of introducing additional low-frequency modes which may cause the low-frequency behaviour to deteriorate. This is commonly used in the rear suspensions of cars with independent rear suspension (IRS), and in the front subframes of some cars. In one measured comparison, the force transmitted into the body by a rigidly bolted subframe and by a compliantly mounted subframe crosses over at about 42 Hz: above 42 Hz the compliantly mounted subframe is superior, but below that frequency the bolted-in subframe is better.
Semi-active isolation
Semi-active vibration isolators have received attention because they consume less power than active devices and offer greater controllability than passive systems.
Active isolation
Active vibration isolation systems contain, along with the spring, a feedback circuit which consists of a sensor (for example a piezoelectric accelerometer or a geophone), a controller, and an actuator. The acceleration (vibration) signal is processed by a control circuit and amplifier, which then feeds the electromagnetic actuator. As a result of such a feedback system, a considerably stronger suppression of vibrations is achieved compared to ordinary damping. Active isolation today is used for applications where structures smaller than a micrometer have to be produced or measured. A couple of companies produce active isolation products on an OEM basis for research, metrology, lithography and medical systems. Another important application is the semiconductor industry: in microchip production, the smallest structures today are below 20 nm, so the machines which produce and inspect them have to oscillate much less.
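For illustration only (a toy model, not any specific commercial system), here is a minimal sketch of velocity-feedback active damping: a sensor measures payload velocity, and the actuator applies an opposing force proportional to it, adding damping electronically.

```python
# 1-DOF isolated mass with 'skyhook' velocity feedback F = -g * v.
m, k, g = 10.0, 4000.0, 200.0    # mass [kg], stiffness [N/m], feedback gain [N*s/m]
x, v, dt = 0.001, 0.0, 1e-4      # initial 1 mm offset; time step [s]

for _ in range(20_000):          # 2 s of simulated time
    f_spring = -k * x            # passive spring restoring force
    f_actuator = -g * v          # actuator force from velocity-sensor feedback
    a = (f_spring + f_actuator) / m
    v += a * dt                  # semi-implicit Euler integration
    x += v * dt

print(f"residual displacement after 2 s: {x:.2e} m")  # decays toward zero
```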
Sensors for active isolation
Piezoelectric accelerometers and force sensors
MEMS accelerometers
Geophones
Proximity sensors
Interferometers
Actuators for active isolation
Linear motors
Pneumatic actuators
Piezoelectric motors
See also
Active vibration control
Base isolation
Bushing (isolator)
Damped wave
Damping ratio
Noise, vibration, and harshness
Noise and vibration on maritime vessels
Oscillation
Package cushioning
Passive heave compensation
Shock absorber
Shock mount
Sorbothane
Soundproofing
Vibration
Vibration control
References
Platus PhD, David L., SPIE International Society of Optical Engineering - July 1999, Optomechanical Engineering and Vibration Control Negative-Stiffness-Mechanism Vibration Isolation Systems
Harris, C., Piersol, A., Harris Shock and Vibration Handbook, Fifth Edition, McGraw-Hill, (2002),
A. Kolesnikov, «Noise and Vibration», Leningrad: Shipbuilding, 1988.
External links
White Paper on Active Vibration Isolation for Lithography and Imaging
Passive Isolation of Harmonic Excitation
Vibration Control for Microscopy
Mechanical engineering
Mechanical vibrations
| Vibration isolation | Physics,Engineering | 4,753 |
2,322,153 | https://en.wikipedia.org/wiki/Carbon%20offsets%20and%20credits | Carbon offsetting is a carbon trading mechanism that enables entities to compensate for their greenhouse gas emissions by investing in projects that reduce, avoid, or remove emissions elsewhere. When an entity invests in a carbon offsetting program, it receives carbon credits or offset credits, which account for the net climate benefits that one entity brings to another. After certification by a government or independent certification body, credits can be traded between entities. One carbon credit represents a reduction, avoidance or removal of one metric tonne of carbon dioxide or its carbon dioxide equivalent (CO2e).
A variety of greenhouse gas reduction projects can qualify for offsets and credits depending on the scheme. Some include forestry projects that avoid logging and plant saplings, renewable energy projects such as wind farms, biomass energy, biogas digesters, hydroelectric dams, as well as energy efficiency projects. Further projects include carbon dioxide removal projects, carbon capture and storage projects, and the elimination of methane emissions in various settings such as landfills. Many projects that give credits for carbon sequestration have received criticism as greenwashing because they overstated their ability to sequester carbon, with some projects being shown to actually increase overall emissions.
Carbon offset and credit programs provide a mechanism for countries to meet their Nationally Determined Contributions (NDC) commitments to achieve the goals of the Paris Agreement. Article 6 of the Paris Agreement includes three mechanisms for "voluntary cooperation" between countries towards climate goals, including carbon markets. Article 6.2 enabled countries to directly trade carbon credits and units of renewable power with each other. Article 6.4 established a new international carbon market allowing countries or companies to use carbon credits generated in other countries to help meet their climate targets.
Carbon offset and credit programs are coming under increased scrutiny because their claimed emissions reductions may be inflated compared to the actual reductions achieved. The Australia Institute highlights 23 instances where carbon offset schemes were found to have significant shortcomings. These include claims of overestimated carbon sequestration, double-counting of credits, and the failure of projects to provide additional environmental benefits beyond what would have occurred naturally.
To be credible, the reduction in emissions must meet three criteria: they must last indefinitely, be additional to emission reductions that were going to happen anyway, and must be measured, monitored and verified by independent third parties to ensure that the amount of reduction promised has in fact been attained.
Overview
A carbon offset or carbon credit is a way of compensating for emissions of carbon dioxide or other greenhouse gases. It is a reduction, avoidance, or removal of emissions to compensate for emissions released elsewhere. One carbon credit represents an emission reduction or removal of one metric tonne of carbon dioxide or the equivalent amount of greenhouse gases that contribute equally to global warming (CO2e). Carbon credits are a form of carbon pricing, along with carbon taxes and subsidies. Credits can move among the various markets they are traded in.
There are several labels for one-tonne emission reductions, including "Verified Emission Reduction" or "Certified Emission Reduction". The label depends on the particular program that certifies a reduction project. At COP27, negotiators agreed to define offsets and credits issued under Article 6 of the Paris Agreement as "mitigation contributions" in order to discourage carbon neutrality claims by buyers. Certification organizations such as the Gold Standard also have detailed guidance on what descriptive terms are appropriate for buyers of offsets and credits.
Offsets from past projects have to be additional to what would have happened without the project. For future projects, forward crediting is a process where credits are issued for projected emissions reductions, which can be claimed by buyers even before the reduction activities have occurred. When credit holders claim the GHG reductions, they must retire the carbon credits so that they cannot be transferred and used again. Carbon offsets can be tracked and reported within an offset certification registry, which may contain project information such as project status, project documents, credits generated, ownership, sale, and retirement.
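As a purely illustrative sketch (hypothetical names and methods, not any real registry's API), the retirement mechanism can be thought of as a small ledger in which a credit is issued once, may change owners, and becomes unusable after retirement:

```python
# Toy carbon-credit registry: issue, transfer, and one-time retirement.
class Registry:
    def __init__(self):
        self.owner = {}        # credit id -> current holder
        self.retired = set()   # credit ids that can no longer be used

    def issue(self, credit_id, project):
        self.owner[credit_id] = project

    def transfer(self, credit_id, buyer):
        if credit_id in self.retired:
            raise ValueError("retired credits cannot be transferred")
        self.owner[credit_id] = buyer

    def retire(self, credit_id, claimant):
        # Retirement makes the one-tonne reduction claimable exactly once.
        if self.owner.get(credit_id) != claimant:
            raise ValueError("only the current holder may retire a credit")
        self.retired.add(credit_id)

reg = Registry()
reg.issue("CR-0001", "reforestation-project")  # hypothetical identifiers
reg.transfer("CR-0001", "buyer-co")
reg.retire("CR-0001", "buyer-co")
```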
The year in which a carbon emissions reduction project — usually the year in which a third party verifies the project — generates the carbon offset credit is known as the vintage.
History
In 1977, major amendments to the US Clean Air Act created one of the first tradable emission offset mechanisms, allowing permitted facilities to increase emissions in exchange for paying another company to reduce its emissions of the same pollutant by a greater amount. The 1990 amendments to that same law established the Acid Rain Trading Program, which introduced the concept of a cap and trade system, which allowed companies to buy and sell offsets created by other companies that invested in emission reduction projects subject to an overall limit on emissions. In the 1990s, regulatory frameworks for the US Clean Water Act enabled mitigation banking and wetlands offsetting, which set the procedural and conceptual precedent for carbon offsetting.
In 1997, the original international compliance carbon markets emerged from the Kyoto Protocol, which established three mechanisms that enable countries or operators in developed countries to acquire offset credits. One mechanism was the Clean Development Mechanism (CDM), which expanded the concept of carbon emissions trading to a global scale, focusing on the major greenhouse gases that cause climate change: carbon dioxide (CO2), methane, nitrous oxide (N2O), perfluorocarbons, hydrofluorocarbons, and sulfur hexafluoride. The Kyoto Protocol was set to expire in 2020, to be superseded by the Paris Agreement. Countries are still determining the role of carbon offsets in the Paris Agreement through international negotiations on the agreement's Article 6.
In November 2024, after years of deadlock, governments attending the COP29 conference in Baku, Azerbaijan agreed to rules on creating, trading and registering emission reductions and removals as carbon credits that higher-emission countries can buy, thus providing funding for low-emission technologies.
Economics
The economic rationale behind programs such as the Kyoto Protocol was that the marginal cost of reducing emissions differs among countries. Studies suggested that the flexibility mechanisms could reduce the overall cost of meeting the targets. Offset and credit programs have been identified as a way for countries to meet their NDC commitments and achieve the goals of the Paris Agreement at a lower cost. They may also help close the emissions gap identified in annual UNEP reports.
There is a diverse range of sources of supply and demand as well as trading frameworks that drive offset and credit markets. Demand for offsets and credits derives from a range of compliance obligations, arising from international agreements, national laws, as well as voluntary commitments that companies and governments have adopted. Voluntary carbon markets usually consist of private entities purchasing carbon offset credits to meet voluntary greenhouse gas reduction commitments. In some cases, non-covered participants in an ETS may purchase credits as an alternative to purchasing offsets in a voluntary market.
These programs also have other positive externalities, or co-benefits, which include better air quality, increased biodiversity, and water and soil protection; community employment opportunities, energy access, and gender equality; and job creation, education opportunities, and technology transfer. Some certification programs have tools and research products to help quantify these benefits.
Prices for offsets and credits vary widely, reflecting the uncertainty associated with verifying the indirect value of carbon offsets. At the same time, this uncertainty has caused some companies to become more skeptical about buying offsets.
Emissions trading systems
Emissions trading is now an important element of regulatory programs to control pollution, including GHG emissions. GHG emission trading programs exist at the sub-national, national, and international level. Under these programs, there is a cap on emissions. Sources of emissions have the flexibility to find and apply the lowest-cost methods for reducing pollution. A central authority or government body usually allocates or sells a limited number (a "cap") of permits. These permit a discharge of a specific quantity of a specific pollutant over a set time period. Polluters are required to hold permits in amounts equal to their emissions. Those that want to increase their emissions must buy permits from others willing to sell them. These programs have been applied to greenhouse gases for several reasons: the warming effects of GHGs are the same regardless of where they are emitted, the costs of reducing emissions vary widely by source, and the cap ensures that the environmental goal is attained.
Regulations and schemes
As of 2022, 68 carbon pricing programs were in place or scheduled to be created globally. International programs include the Clean Development Mechanism, Article 6 of the Paris Agreement, and CORSIA. National programs include ETS systems such as the European Union Emissions Trading System (EU-ETS) and the California Cap and Trade Program. Eligible credits in these programs may include credits that international or independent crediting systems have issued. There are also standards and crediting mechanisms that independent, nongovernmental entities such as Verra and Gold Standard manage.
Kyoto Protocol
Under the Clean Development Mechanism, a developed country can sponsor a greenhouse gas reduction project in a developing country, where the costs of greenhouse gas reduction activities are usually much lower. The developed country receives credits for meeting its emission reduction targets known as Certified Emission Reductions (CERs), while the developing country receives capital investment and clean technology or beneficial change in land use. Under Joint Implementation, a developed country with relatively high domestic costs of emission reduction would set up a project in another developed country. Offset credits under this program are designated as Emission Reduction Units.
The International Emissions Trading program enables countries to trade in the international carbon credit market to cover their shortfall in assigned amount units. Countries with surplus units can sell them to countries that are exceeding their emission targets under Annex B of the Kyoto Protocol.
Nuclear energy projects are not eligible for credits under these programs. Country-specific designated national authorities approve projects under the CDM.
Paris Agreement Article 6 mechanisms
Article 6 of the Paris Agreement continues to support offset and credit programs between countries, including CDM projects from the Kyoto Protocol. Programs now occur to help achieve emission reduction targets set out in each country's nationally determined contribution (NDC).
The system of internationally transferred mitigation outcomes (ITMOs) under Article 6.2 requires "corresponding adjustments" to avoid double counting of emission reductions. Double counting occurs if both the host country and the purchasing country count the reduction towards their targets. If the receiving country uses ITMOs towards its NDC, the host country must discount those reductions from its emissions budget by adding and reporting that higher total in its biennial reporting. Otherwise, Article 6.2 gives countries a lot of flexibility in how they can create trading agreements.
The supervisory board under Article 6.4 is responsible for approving methodologies, setting guidance, and implementing procedures. The preparation work for this is expected to last until the end of 2023. ER credits issued will fall by 2% to ensure that the program as a whole results in an overall Mitigation of Global Emissions. An additional 5% reduction of ERs will go to a fund to finance adaptation. Administrative fees for program management are still under discussion.
CDM projects may transition to the Article 6.4 program subject to approval by the country hosting the project, and if the project meets the new rules, with certain exceptions for rules on methodologies. Projects can generally continue to use the same CDM methodologies through 2025. From 2026 on, they must meet all Article 6 requirements. Up to 2.8 billion credits could potentially become eligible for issuance under Article 6.4 if all CDM projects transition.
Article 6 does not directly regulate the voluntary carbon markets. In principle, it is possible to issue and purchase carbon offsets without reference to Article 6. It is possible that a multi-tier system could emerge with different types of offsets and credits available for investors. Companies may be able to purchase 'adjusted credits' that eliminate the risk of double counting. These may be seen as more valuable if they support science-based targets and net-zero emissions. Other non-adjusted offsets and credits could support claims for other environmental or social indicators. They could also support emission reductions that are seen as less valuable in terms of these goals. Uncertainty remains around Article 6's effects on future voluntary carbon markets. There is also uncertainty about what investors could claim by purchasing various types of carbon credits.
REDD+
REDD+ is a UNFCCC framework, largely addressed at tropical regions in developing countries, that is designed to compensate countries for not clearing or degrading their forests, or for enhancing forest carbon stocks. It aims to create financial value for carbon stored in forests, using the concept of results-based payments. REDD+ also promotes co-benefits from reducing deforestation such as biodiversity. It was introduced in its basic form at COP11 in 2005 and has grown into a broad policy initiative to address deforestation and forest degradation.
In 2015, REDD+ was incorporated into Article 5 of the Paris Agreement. REDD+ initiatives typically compensate developing countries or their regional administrations for reducing their emissions from deforestation and forest degradation. It consists of several stages: One, achieving REDD+ readiness; two, formalizing an agreement for financing; three, measuring, reporting, and verifying results; and four, receiving results-based payments.
Over 50 countries have national REDD+ initiatives. REDD+ is also taking place through provincial and district governments and at the local level through private landowners. As of 2020, there were over 400 ongoing REDD+ projects globally. Brazil and Colombia account for the largest amount of REDD+ project land area.
CORSIA
The Carbon Offsetting and Reduction Scheme for International Aviation (CORSIA) is a global, market-based program to reduce emissions from international aviation. It aims to allow credits and offsets for emissions that cannot be reduced by technology and operational improvements or sustainable aviation fuels. To ensure the environmental integrity of these offsets, the program has developed a list of eligible offsets that can be used. Operating principles are similar to those under existing trading mechanisms and carbon offset certification standards. CORSIA has applied to international aviation since January 2019, from which point all airlines have been required to report their CO2 emissions annually. International flights have been subject to offsetting requirements under CORSIA since January 2021.
Markets
Compliance market credits account for most of the offset and credit market today. Trading on voluntary carbon markets was 300 MtCO2e in 2021. By comparison, the compliance carbon market trading volume was 12 GtCO2e, and global greenhouse gas emissions in 2019 were 59 GtCO2e.
Currently several exchanges trade in carbon credits and allowances covering both spot and futures markets. These include the Chicago Mercantile Exchange, CTX Global, the European Energy Exchange, Global Carbon Credit Exchange gCCEx, Intercontinental Exchange, MexiCO2, NASDAQ OMX Commodities Europe and Xpansiv. Many companies now engage in emissions abatement, offsetting, and sequestration programs, which generate credits that can be sold on an exchange.
At the start of 2022 there were 25 operational emissions trading systems around the world. They are in jurisdictions representing 55% of global GDP. These systems cover 17% of global emissions. The European Union Emissions Trading System (EU-ETS) is the second largest trading system in the world after the Chinese national carbon trading scheme. It covers over 40% of European GHG emissions. California's cap-and-trade program covers about 85% of statewide GHG emissions.
Voluntary carbon markets and certification programs
Voluntary carbon markets (VCM) are largely unregulated markets where carbon offsets are traded by corporations, individuals and organizations that are under no legal obligation to make emission cuts. In voluntary carbon markets, companies or individuals use carbon offsets to meet the goals they set themselves for reducing emissions. Credits are issued under independent crediting standards. Some entities also purchase them under international or domestic crediting mechanisms. National and subnational programs have been increasing in popularity.
Many different groups exist within the voluntary carbon market, including developers, brokers, auditors, and buyers. Certification programs for VCMs establish accounting standards, project eligibility requirements, and monitoring, reporting and verification (MRV) procedures for credit and offset projects. They include the Verified Carbon Standard issued by Verra, the Gold Standard, the Climate Action Reserve, the American Carbon Registry, and Plan Vivo. Puro Standard, the first standard for engineered carbon removal, is verified by DNV GL. There are also some additional standards for validating co-benefits, including the Climate, Community and Biodiversity Standard (CCB Standard), also issued by Verra, and the Social Carbon Standard, issued by the Ecologica Institute.
The voluntary carbon markets currently represent less than 1% of the reductions pledged in country NDCs by 2030, and an even smaller portion of the reductions needed to stay on a pathway consistent with the 1.5°C Paris temperature goal in 2030. However, the VCM is growing significantly. Between 2017 and 2021, both the issuance and retirement of VCM carbon offsets more than tripled. Some predictions call for global VCM demand to increase 15-fold between 2021 and 2030, and 100-fold by 2050. Carbon removal projects such as forestry and carbon capture and storage are expected to take a larger share of this market in the future compared to renewable energy projects. However, there is evidence that large companies are becoming more reluctant to use VCM offsets and credits because of a complex web of standards, despite an increased focus on net-zero emissions goals.
Determining value
In 2022, voluntary carbon market (VCM) prices ranged from $8 to $30 per tonne of CO2e for the most common types of offset project. Several factors affect these prices. The cost of developing a project is a significant factor. Credits tied to projects that can sequester carbon (often called nature-based solutions) have recently sold at a premium compared to other projects such as renewable energy or energy efficiency. Projects with additional social and environmental benefits can command a higher price, reflecting the value of the co-benefits and the perceived value of association with these projects. Credits from a reputable organization may also command a higher price, and some credits located in developed countries may be priced higher, possibly because companies prefer to back projects closer to their business sites. Conversely, carbon credits with older vintages tend to be valued lower on the market.
Prices on the compliance market are generally higher and vary by geography, with EU and UK ETS credits trading at higher prices than those in the US in 2022. Lower prices on the VCM are in part due to an excess of supply relative to demand: some types of offsets can be created at very low cost under present standards. Without this surplus, current VCM prices could be at least $10/tCO2e higher.
Some pricing forecasts predict VCM prices could increase to as much as $47–$210 per tonne by 2050. There could be an even higher spike in the short term in certain scenarios. A major factor in future price models is the extent to which programs that support more permanent removals can influence future global climate policy. This could limit the supply of approvable offsets, and thereby raise prices.
Demand for VCM offsets is expected to increase five to ten-fold over the next decade as more companies adopt Net Zero climate commitments. This could benefit both markets and progress on reducing GHG emissions. If carbon offset prices remain significantly below these forecast levels, companies could be open to criticisms of greenwashing. This is because some might claim credit for emission reduction projects that would have been undertaken anyway. At prices of $100/tCO2e, a variety of carbon removal technologies could deliver around 2 GtCO2e per year of annual emission reductions between now and 2050. These technologies include reducing deforestation, forest restoration, CCS, BECCs and renewables in least developed countries. In addition, as the cost of using offsets and credits rises, investments in reducing supply chain emissions will become more attractive.
Verified Carbon Standard by Verra
The Verified Carbon Standard (VCS), administered by Verra, was developed in 2005. It is a widely used voluntary carbon standard that also offers specific methodologies for REDD+ projects. As of 2020, there had been over 1,500 certified VCS projects covering energy, transport, waste, forestry, and other sectors. In 2021, Verra issued 300 MtCO2e worth of offset credits for 110 projects. Verra is the program of choice for most of the forest credits in the voluntary market, and for almost all REDD+ projects.
Gold Standard
The Gold Standard was developed in 2003 by the World Wide Fund for Nature (WWF) in consultation with an independent standards advisory board. Projects are open to any non-governmental, community-based organization. Allowable categories include renewable energy supply, energy efficiency, afforestation, reforestation, and agriculture. The program also promotes the Sustainable Development Goals: besides reducing GHG emissions, projects must meet at least three of those goals and make a net-positive contribution to the economic, environmental and social welfare of the local population, as determined through the program's monitoring requirements.
Types of offset projects
A variety of projects can be used to reduce GHG emissions and thus to generate carbon offsets and credits. These can include land use improvement, methane capture, biomass sequestration, renewable energy, or industrial energy efficiency. They also include reducing methane, reforestation and switching fuel, for example to carbon-neutral and carbon-negative fuels. The CDM identifies over 200 types of projects suitable for generating carbon offsets and credits. An example of land use improvement is better forest management.
Offset certification and carbon trading programs vary in which specific projects they consider eligible for offsets or credits. The European Union Emission Trading System considers nuclear energy projects, afforestation or reforestation activities, and projects involving the destruction of industrial gases such as HFC-23 and N2O ineligible.
Renewable energy
Renewable energy projects can include hydroelectric, wind, photovoltaic solar, solar hot water, biomass power, and heat production. These types of projects help societies move from fossil-fuel-based electricity and heating towards less carbon-intensive forms of energy. However, they may not qualify as offset projects, because it is difficult or impossible to determine their additionality: they usually generate revenue and often involve subsidies or other complex financial arrangements, which can make them ineligible under many offset and credit programs.
Methane collection and combustion
Methane is a potent greenhouse gas, most often emitted from landfills, livestock, and coal mining. Methane projects can produce carbon offsets through the capture of methane for energy production. Examples include the combustion or containment of methane generated by farm animals (by use of an anaerobic digester), in landfills, or from other industrial waste.
Energy efficiency
Carbon offsets that fund renewable energy projects help lower the carbon intensity of energy supply. Energy conservation projects seek to reduce the overall demand for energy. Carbon offsets in this category fund projects of three main types.
Cogeneration plants generate both electricity and heat from the same power source, improving on the energy efficiency of most power plants, which waste the heat they generate. Fuel efficiency projects replace a combustion device with one that uses less fuel per unit of energy provided. They can do this by optimizing industrial processes, reducing energy costs per unit, or by optimizing individual actions, for example by making it easier to cycle to work instead of driving.
Destruction of industrial pollutants
Industrial pollutants such as hydrofluorocarbons (HFCs) and perfluorocarbons (PFCs) have a much greater global warming potential than carbon dioxide on a per-tonne basis. Because these pollutants are relatively easy to capture and destroy at their source, they present a large, low-cost source of carbon offsets. As a category, HFC, PFC, and N2O reductions represent 71 percent of offsets issued under the CDM. Since many of these substances are now banned under an amendment to the Montreal Protocol, they are often no longer eligible for offsets or credits.
Land use, land-use change and forestry
Land use, land-use change and forestry have the collective label LULUCF. LULUCF projects focus on natural carbon sinks such as forests and soil. There are a number of different types of LULUCF projects. Forestry-related projects focus on protecting existing forests to avoid deforestation, restoring forests on land that was once forested, and creating forests on land that previously had none, typically for more than a generation. Soil management projects attempt to preserve or increase the amount of carbon sequestered in soil.
Deforestation, which is particularly significant in Brazil, Indonesia, and parts of Africa, accounts for about 20 percent of greenhouse gas emissions. Carbon offsets allow firms to fund avoided deforestation by paying directly for forest preservation or by providing substitutes for forest-based products. Offset schemes using reforestation, such as REDD, are available in developing countries, and are becoming increasingly available in developed countries including the US and the UK.
China has a policy of forestry carbon credits. Forestry carbon credits are based on the measurement of forest growth, which is converted into carbon emission reduction measurements by government ecological and forestry offices. Owners of forests (who are typically rural families or rural villages) receive carbon tickets (碳票; tan piao) which are tradeable securities.
Processes
Creation
An offset project is designed by project developers, financed by investors, validated by an independent verifier, and registered with a carbon offset program. Official registration indicates that a program has approved the project and that the project is eligible to generate carbon offset credits once it begins operating. Most carbon offset programs have a library of approved methodologies covering a range of project types. After a project has begun, programs will often verify it periodically to determine the quantity of emission reductions generated. The length of time between verifications can vary, but is typically one year. After a program approves verification reports, it issues carbon offset credits, which are deposited in the project developer's account in a registry system administered by the offset program.
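The lifecycle described above can be summarized in a short Python sketch. The project name, methodology codes, and figures below are hypothetical illustrations; real registries are far more elaborate:

from dataclasses import dataclass, field

@dataclass
class OffsetProject:
    name: str
    methodology: str
    registered: bool = False
    credits_issued: float = 0.0
    verifications: list = field(default_factory=list)

def register(project: OffsetProject, approved_methodologies: set) -> None:
    # A program registers a project only if it uses an approved methodology.
    project.registered = project.methodology in approved_methodologies

def verify_and_issue(project: OffsetProject, reductions_tco2e: float) -> None:
    # Each periodic (typically annual) verification, once approved, leads to
    # issuance of credits into the developer's registry account.
    if project.registered:
        project.verifications.append(reductions_tco2e)
        project.credits_issued += reductions_tco2e

project = OffsetProject(name="cookstoves-demo", methodology="AMS-II.G")
register(project, approved_methodologies={"AMS-II.G", "VM0007"})
verify_and_issue(project, 12_000)  # year-1 verification report approved
print(project.credits_issued)      # 12000.0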
Criteria for assessing quality
Criteria for assessing the quality of offsets and credits usually cover the following areas (a schematic screening sketch follows the list):
Baseline and Measurement
Additionality
Leakage
Permanence
Double counting
Co-benefits
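Purely as an illustration, these criteria can be arranged as a screening checklist. The criterion keys and pass/fail inputs below are invented; real assessments are qualitative and evidence-based rather than boolean:

CRITERIA = ["baseline_and_measurement", "additionality", "leakage",
            "permanence", "double_counting", "co_benefits"]

def failed_criteria(assessment: dict) -> list:
    # Return the criteria a candidate credit fails; empty means it passes.
    return [c for c in CRITERIA if not assessment.get(c, False)]

candidate = {"baseline_and_measurement": True, "additionality": False,
             "leakage": True, "permanence": True,
             "double_counting": True, "co_benefits": True}
print(failed_criteria(candidate))  # ['additionality']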
Approaches for increasing integrity
Besides the certification programs mentioned above, industry groups have been working since the 2000s to promote the quality of these projects. The International Carbon Reduction and Offset Alliance (ICROA) was founded in 2008 and promotes best practice across the voluntary carbon market. ICROA's membership consists of carbon offset providers based in the United States, European, and Asia-Pacific markets that commit to the ICROA Code of Best Practice.
Other groups are now advocating for new approaches to ensure that offsets and credits have integrity. The Oxford Offsetting Principles state that traditional carbon offsetting schemes are "unlikely to deliver the types of offsetting needed to ultimately reach net zero emissions." These principles focus instead on cutting emissions as a first priority. In terms of offsets, they advocate for shifting to carbon removal offset projects that involve long-term storage. The principles also support the development of offsetting aligned with net zero. The Science Based Targets initiative's net-zero criteria argue that it is important to move beyond offsets based on reduced or avoided emissions. Instead projects should base offsets on carbon that has been sequestered from the atmosphere, such as CO2 Removal Certificates.
Some initiatives focus on improving the quality of current carbon offset and credit projects. The Integrity Council for the Voluntary Carbon Market (ICVCM) has published a draft set of principles for determining a high integrity carbon credit. These are known as the Core Carbon Principles. Final guidelines for this program are expected in late 2023. The Voluntary Carbon Markets Integrity Initiative has developed a code of practice that was published in 2022. The UK government partly funds this initiative.
Limitations and drawbacks
The use of offsets and credits faces a variety of criticisms. Some argue that they promote a "business-as-usual" mindset, allowing companies to use carbon offsetting to avoid making larger changes to reduce carbon emissions at source.
Research from The Australia Institute has suggested that at least 25% of carbon offsets may lack integrity, describing them as "hot air." Additionally, some reports have raised concerns that carbon offsets could be used to justify the continuation or expansion of fossil fuel projects, potentially delaying direct efforts to reduce emissions.
Using projects in this way is called "greenwashing". Pope Francis noted in his 2015 encyclical letter Laudato si' the risk that countries and sectors may use carbon credits as "a ploy which permits maintaining [their] excessive consumption".
Many projects that give credits for carbon sequestration have received criticism as greenwashing because they overstated their ability to sequester carbon, with some projects being shown to actually increase overall emissions.
In 2023 a civil suit was brought against Delta Airlines based on its use of carbon credits to support claims of carbon neutrality. In 2016 the Öko-Institut analyzed a series of CDM projects. It found that 85% had a low likelihood of being truly additional or were likely to over-estimate emission reductions. In 2023, the University of California all but dropped the purchase of offsets in favor of direct reductions in emissions. An additional challenge is that carbon pricing and existing policies are still inadequate to meet Paris goals. However, there is evidence that companies that invest in offsets and credits tend to make more ambitious emissions cuts compared with companies that do not.
Researchers have raised the concern that carbon offsets (such as those based on maintaining forests, reforestation, or carbon capture) and renewable energy certificates allow polluting companies to continue a business-as-usual approach to releasing greenhouse gases, and that some rest on untried techno-fixes that are trusted inappropriately.
Oversight issues
Several certification standards exist, with different ways of measuring emissions baselines, reductions, additionality, and other key criteria. However, no single standard governs the industry. Some offset providers have faced criticism that their carbon reduction claims are exaggerated or misleading. For example, carbon credits issued by the California Air Resources Board were found to use a formula that established fixed boundaries around forest regions, creating simplified regional averages for the carbon stored in a wide mix of tree species.
Some experts have estimated that California's cap-and-trade program has generated between 20 million and 39 million forestry credits that do not achieve real climate benefits, amounting to nearly one in three credits issued through that program. The Australia Institute has reported that while Australia's carbon offset system appears to be regulated, it lacks independent verification and transparency, and the government does not release the data that would allow independent scrutiny of offset projects. Without reliable data or oversight, there is no way to verify the effectiveness of these projects, which can lead to misleading claims and potentially increase emissions, especially when offsets are used to justify new fossil fuel projects.
Determining additionality can be difficult. This may present risks for buyers of offsets or credits. Carbon projects that yield strong financial returns even in the absence of revenue from carbon credits are usually not considered additional. Another example is projects that are compelled by regulations. Projects representing common practice in an industry are also usually not considered additional. A full determination of additionality requires a careful investigation of proposed carbon offset projects.
Offsets provide a revenue stream for the reduction of some types of emissions, so they can create perverse incentives: entities may emit more in order to later be credited for reducing emissions from an artificially high baseline. Regulatory agencies could address such situations by setting specific standards for verifiability, uniqueness, and transparency.
Concerns with forestry projects
Forestry projects have faced increasing criticism over their integrity as offset or credit programs. A number of news stories from 2021 to 2023 criticized nature-based carbon offsets, the REDD+ program, and certification organizations. In one case it was estimated that around 90% of the rainforest offset credits of the Verified Carbon Standard are likely to be "phantom credits".
Tree planting projects in particular have been problematic, and critics point to a number of concerns. Trees reach maturity over the course of many decades, and it is difficult to guarantee how long a forest will last; it may suffer clearing, burning, or mismanagement. Some tree-planting projects introduce fast-growing invasive species that end up damaging native forests and reducing biodiversity. In response, some certification standards such as the Climate, Community and Biodiversity Standard require multiple-species plantings. Tree planting in high-latitude forests may have a net warming effect on the Earth's climate, because the dark tree cover absorbs more sunlight than the land it replaces, which can outweigh the cooling from the carbon dioxide the trees absorb. Tree-planting projects can also cause conflicts with local communities and Indigenous people if the project displaces them or otherwise curtails their use of forest resources.
Lack of impact on the company’s own operations
Offsetting, while a widely used tool for addressing greenhouse gas emissions, has inherent limitations when it comes to directly reducing carbon emissions at the source. By purchasing carbon credits, companies invest in external projects, often reforestation or renewable energy initiatives, to counterbalance their emissions. However, offsetting has faced significant scrutiny because of its potential for exploitation: there are many cases of large companies purchasing carbon credits to offset their emissions without taking meaningful action to reduce their own emissions directly.
In 2009 the term "insetting" was publicly described as similar to the concept of carbon offsetting, but with an emphasis on taking responsibility and ownership of climate change along a company's own value chain. Insetting refers to activities taken by companies to reduce GHG emissions and generate carbon storage while simultaneously creating a positive impact on communities and ecosystems through interventions in their own value chains. It allows businesses to invest directly in their value chain, for example through carbon financing and in close collaboration with partners on the ground.
See also
African carbon market
Cap and dividend
Cap and Share
Carbon Border Adjustment Mechanism
Carbon tax
Chinese national carbon trading scheme
External links
Timeline of carbon offsets from Carbon Brief
References
Carbon finance
Renewable energy
Greenhouse gas emissions
Greenwashing | Carbon offsets and credits | Chemistry | 6,945 |
1,516,575 | https://en.wikipedia.org/wiki/Institute%20of%20Astronomy%2C%20Cambridge | The Institute of Astronomy (IoA) is the largest of the three astronomy departments in the University of Cambridge, and one of the largest astronomy sites in the United Kingdom. Around 180 academics, postdocs, visitors and assistant staff work at the department.
Research at the department is made in a number of scientific areas, including exoplanets, stars, star clusters, cosmology, gravitational-wave astronomy, the high-redshift universe, AGN, galaxies and galaxy clusters. This is a mixture of observational astronomy, over the entire electromagnetic spectrum, computational theoretical astronomy, and analytic theoretical research.
The Kavli Institute for Cosmology is also located on the department site. This institute has an emphasis on The Universe at High Redshifts. The Cavendish Astrophysics Group are based in the Battcock Centre, a building in the same grounds.
History
The institute was formed in 1972 from the amalgamation of earlier institutions:
The University Observatory, founded in 1823. Its Cambridge Observatory building now houses offices and the department library.
The Solar Physics Observatory, which started in Cambridge in 1912. The building was partly demolished in 2008 to make way for the Kavli Institute for Cosmology.
The Institute of Theoretical Astronomy, which was created by Fred Hoyle in 1967. Its building is the main departmental site (the Hoyle Building), with a lecture theatre added in 1999, and a second two-storey wing built in 2002.
From 1990 to 1998, the Royal Greenwich Observatory was based in Cambridge, where it occupied Greenwich House on a site adjacent to the Institute of Astronomy.
Teaching
The department teaches 3rd and 4th year undergraduates as part of the Natural Sciences Tripos or Mathematical Tripos. Around 30 students normally take the master's course, which consists of a substantial research project (around one-third of the course) alongside courses such as General Relativity, Cosmology, Black Holes, Extrasolar Planets, Astrophysical Fluid Dynamics, Structure and Evolution of Stars, and Formation of Galaxies. In addition, around 12 to 18 graduate PhD students join the department per year, mainly funded by the STFC. The graduate programme is particularly unusual in the UK in that students are free to choose their own PhD supervisor or adviser from the staff at the department, and this choice is often made as late as the end of their first term.
Notable current staff
An incomplete list of notable current members of the department.
Cathie Clarke
Carolin Crawford
George Efstathiou
Andrew Fabian
Paul Hewett
Mike Irwin
Gerry Gilmore
Douglas Gough
Nikku Madhusudhan
Richard McMahon
Hiranya Peiris
Max Pettini
James E. Pringle
Martin Rees
Christopher Tout
Anna Zytkow
Notable past members and students
Here are some notable members of the department and its former institutes.
Sverre Aarseth
Suzanne Aigrain
George Airy
Robert Stawell Ball
James Challis
Donald Clayton
John Couch Adams
Arthur Eddington
Richard Ellis
Roger Griffin
Stephen Hawking
Cyril Hazard
Fred Hoyle
Ofer Lahav
Mike Irwin
Jamal Nazrul Islam
Harold Jeffreys
Robert Kennicutt
Donald Lynden-Bell
Jayant Narlikar
Jeremiah Ostriker
Christopher S. Reynolds
Robert Woodhouse
Telescopes
The Institute houses several telescopes on its site. Although some scientific work is done with the telescopes, they are mostly used for public observing and by astronomical societies. The poor weather and light pollution in Cambridge make most modern astronomy difficult. The telescopes on the site include:
The Northumberland Telescope, donated by the Duke of Northumberland in 1833. It is a refractor on an English mount.
The smaller Thorrowgood Telescope, a refractor on extended loan from the Royal Astronomical Society.
The 36-inch Telescope, built in 1951.
The Three-Mirror Telescope, a prototype with a design intended to combine a wide field of view, sharp images, and all-reflecting optics.
The institute's former 24" Schmidt Camera was donated to the Spaceguard Centre in Knighton, Powys in Wales in June 2009.
The Cambridge University Astronomical Society (CUAS) and Cambridge Astronomical Association (CAA) both regularly observe. The Institute holds public observing evenings on Wednesdays from October to March.
Public activities
The department holds a number of events involving the general public in astronomy. These include or have included:
Open evenings on Wednesdays during the winter, with a talk given by a member of the institute followed by observing in clear weather
Hosting the Astroblast conference
Annual sculpture exhibition showing work of Anglia Ruskin University
Annual open day during the Cambridge Science Festival
A monthly podcast, the 'Astropod', aimed at the general public (the Astropod originally ran from 2009 to 2011, and was relaunched in 2020)
Extra observing nights for special events such as IYA Moonwatch and BBC stargazing live
Library
The institute library is housed in the old Cambridge Observatory building. It is a specialist library concentrating on the subjects of astronomy, astrophysics and cosmology. The collection has approximately 17,000 books and subscribes to about 80 current journals. The library also has a collection of rare astronomical books, many of which belonged to John Couch Adams.
Achievements
Among the significant contributions to astronomy made by the institute, the now decommissioned Automatic Plate Measuring (APM) machine was used to create a major catalogue of astronomical objects in the northern sky.
References
External links
Institute of Astronomy at the University of Cambridge
Kavli Institute of Cosmology, Cambridge
Images from the Institute of Astronomy Library
Astronomy institutes and departments
Astronomy, Institute of
Research institutes in the United Kingdom
Astronomy in the United Kingdom
Research institutes established in 1972 | Institute of Astronomy, Cambridge | Astronomy | 1,138 |
23,820,202 | https://en.wikipedia.org/wiki/Gymnopilus%20areolatus | Gymnopilus areolatus is a species of mushroom-forming fungus in the family Hymenogastraceae. It was first formally described by American mycologist William Alphonso Murrill, from specimens collected in Cuba.
Description
The cap is in diameter.
Habitat and distribution
Gymnopilus areolatus typically grows clumped together on stumps, and logs of hardwoods and palms. It is found in Cuba in May and September.
See also
List of Gymnopilus species
References
areolatus
Fungi of North America
Taxa named by William Alphonso Murrill
Fungi described in 1913
Fungus species | Gymnopilus areolatus | Biology | 129 |
69,735,469 | https://en.wikipedia.org/wiki/Chentsov%27s%20theorem | In information geometry, Chentsov's theorem states that the Fisher information metric is, up to rescaling, the unique Riemannian metric on a statistical manifold that is invariant under sufficient statistics.
The theorem is named after Nikolai Chentsov, who proved it.
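For reference, on a statistical manifold whose points are probability densities $p(x;\theta)$ parametrized by $\theta = (\theta^1, \dots, \theta^n)$, the Fisher information metric is

$$g_{ij}(\theta) = \mathbb{E}_\theta\!\left[\frac{\partial \log p(x;\theta)}{\partial \theta^i}\,\frac{\partial \log p(x;\theta)}{\partial \theta^j}\right] = \int \frac{\partial \log p(x;\theta)}{\partial \theta^i}\,\frac{\partial \log p(x;\theta)}{\partial \theta^j}\, p(x;\theta)\, dx,$$

and Chentsov's theorem states that any Riemannian metric invariant under sufficient statistics (equivalently, monotone under Markov morphisms) equals $c\, g_{ij}$ for some constant $c > 0$.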
See also
Fisher information
Sufficient statistic
Information geometry
References
N. N. Čencov (1981), Statistical Decision Rules and Optimal Inference, Translations of mathematical monographs; v. 53, American Mathematical Society, http://www.ams.org/books/mmono/053/
Shun'ichi Amari, Hiroshi Nagaoka (2000) Methods of information geometry, Translations of mathematical monographs; v. 191, American Mathematical Society, http://www.ams.org/books/mmono/191/ (Theorem 2.6)
Differential geometry
Information geometry
Statistical distance | Chentsov's theorem | Physics,Mathematics | 178 |
17,591,554 | https://en.wikipedia.org/wiki/KCNC3 | Potassium voltage-gated channel, Shaw-related subfamily, member 3 also known as KCNC3 or Kv3.3 is a protein that in humans is encoded by the KCNC3.
Function
The Shaker gene family of Drosophila encodes components of voltage-gated potassium channels and comprises four subfamilies. Based on sequence similarity, this gene belongs to one of these subfamilies, namely the Shaw subfamily. The protein encoded by this gene belongs to the delayed rectifier class of channel proteins and is an integral membrane protein that mediates the voltage-dependent potassium ion permeability of excitable membranes.
Clinical significance
KCNC3 is associated with spinocerebellar ataxia type 13.
See also
Voltage-gated potassium channel
References
External links
GeneReviews/NCBI/NIH/UW entry on Spinocerebellar Ataxia Type 13
Further reading
Ion channels | KCNC3 | Chemistry | 191 |
55,205,467 | https://en.wikipedia.org/wiki/NGC%20472 | NGC 472 is a spiral galaxy located roughly 220 million lightyears from earth in the constellation Pisces. It was discovered on August 29, 1862 by Heinrich Louis d'Arrest.
See also
List of galaxies
List of spiral galaxies
References
External links
Deep Sky Catalog
SEDS
0472
18620829
Pisces (constellation)
Spiral galaxies
Discoveries by Heinrich Louis d'Arrest
004833 | NGC 472 | Astronomy | 82 |
5,740,174 | https://en.wikipedia.org/wiki/Relationship%20between%20string%20theory%20and%20quantum%20field%20theory | Many first principles in quantum field theory are explained, or get further insight, in string theory.
From quantum field theory to string theory
Emission and absorption: one of the most basic building blocks of quantum field theory is the notion that particles (such as electrons) can emit and absorb other particles (such as photons). Thus, an electron may just "split" into an electron plus a photon, with a certain probability (which is roughly the coupling constant). This is described in string theory as one string splitting into two. This process is an integral part of the theory. The mode on the original string also "splits" between its two parts, resulting in two strings which possibly have different modes, representing two different particles.
Coupling constant: in quantum field theory this is, roughly, the probability for one particle to emit or absorb another particle, the latter typically being a gauge boson (a particle carrying a force). In string theory, the coupling constant is no longer a constant, but is rather determined by the abundance of strings in a particular mode, the dilaton. Strings in this mode couple to the worldsheet curvature of other strings, so their abundance through space-time determines the measure by which an average string worldsheet will be curved. This determines its probability to split or connect to other strings: the more a worldsheet is curved, the higher a chance of splitting and reconnecting it has.
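Schematically, if $\langle\phi\rangle$ denotes the background (expectation) value of the dilaton field, the closed-string coupling is commonly written as

$$g_s = e^{\langle\phi\rangle},$$

so a shift in the dilaton background rescales the probability for a worldsheet to split or join.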
Spin: each particle in quantum field theory has a particular spin s, which is an internal angular momentum. Classically, the particle rotates at a fixed frequency, but this cannot be understood if particles are point-like. In string theory, spin is understood as the rotation of the string; for example, a photon with well-defined spin components (i.e. in circular polarization) looks like a tiny straight line revolving around its center.
Gauge symmetry: in quantum field theory, the mathematical description of physical fields include non-physical states. In order to omit these states from the description of every physical process, a mechanism called gauge symmetry is used. This is true for string theory as well, but in string theory it is often more intuitive to understand why the non-physical states should be disposed of. The simplest example is the photon: a photon is a vector particle (it has an inner "arrow" which points to some direction, its polarization). Mathematically, it can point towards any direction in space-time. Suppose the photon is moving in the z direction; then it may either point towards the x, y or z spatial directions, or towards the t (time) direction (or any diagonal direction). Physically, however, the photon may not point towards the z or t directions (longitudinal polarization), but only in the x-y plane (transverse polarization). A gauge symmetry is used to dispose of the non-physical states. In string theory, a photon is described by a tiny oscillating line, with the axis of the line being the direction of the polarization (i.e. the inner direction of the photon is the axis of the string which the photon is made of). If we look at the worldsheet, the photon will look like a long strip which stretches along the time direction with an angle towards the z-direction (because it is moving along the z-direction as time goes by); its short dimension is therefore in the x-y plane. The short dimension of this strip is precisely the direction of the photon (its polarization) in a certain moment in time. Thus the photon cannot point towards the z or t directions, and its polarization must be transverse.
Note: formally, gauge symmetries in string theory are (at least in most cases) a result of the existence of a global symmetry together with the profound gauge symmetry of string theory, which is the symmetry of the worldsheet under a local change of coordinates and scales.
Renormalization: in particle physics the behaviour of particles at the smallest scales is largely unknown. To avoid this difficulty, the particles are treated as fields behaving according to an "effective field theory" at low energy scales, and a mathematical tool known as renormalization is used to describe the unknown aspects of this effective theory using only a few parameters. These parameters can be adjusted so that calculations give adequate results. In string theory, this is unnecessary, since the behaviour of the strings is presumed to be known at every scale.
Fermions: in the bosonic string, a string can be described as an elastic one-dimensional object (i.e. a line) "living" in spacetime. In superstring theory, every point of the string is not only located at some point in spacetime, but it may also have a small arrow "drawn" on it, pointing at some direction in spacetime. These arrows are described by a field "living" on the string. This is a fermionic field, because at each point of the string there is only one arrow; thus one cannot bring two arrows to the same point. This fermionic field (which is a field on the worldsheet) is ultimately responsible for the appearance of fermions in spacetime: roughly, two strings with arrows drawn on them cannot coexist at the same point in spacetime, because then one would effectively have one string with two sets of arrows at the same point, which is not allowed, as explained above. Therefore two such strings are fermions in spacetime.
Notes
String theory
Quantum field theory | Relationship between string theory and quantum field theory | Physics,Astronomy | 1,139 |
2,710,620 | https://en.wikipedia.org/wiki/Self-regulated%20learning | Self-regulated learning (SRL) is one of the domains of self-regulation, and is aligned most closely with educational aims. Broadly speaking, it refers to learning that is guided by metacognition (thinking about one's thinking), strategic action (planning, monitoring, and evaluating personal progress against a standard), and motivation to learn.
A self-regulated learner "monitors, directs, and regulates actions toward goals of information acquisition, expanding expertise, and self-improvement”. In particular, self-regulated learners are cognizant of their academic strengths and weaknesses, and they have a repertoire of strategies they appropriately apply to tackle the day-to-day challenges of academic tasks. These learners hold incremental beliefs about intelligence (as opposed to entity, or fixed views of intelligence) and attribute their successes or failures to factors (e.g., effort expended on a task, effective use of strategies) within their control.
Finally, self-regulated learners take on challenging tasks, practice their learning, develop a deep understanding of subject matter, and exert effort towards academic success. In part, these characteristics may help to explain why self-regulated learners usually exhibit a high sense of self-efficacy. In the educational psychology literature, researchers have linked these characteristics to success in and beyond school.
Self-regulated learners are successful because they control their learning environment. They exert this control by directing and regulating their own actions toward their learning goals. Self-regulated learning should be used in three different phases of learning. The first phase is during the initial learning, the second phase is when troubleshooting a problem encountered during learning and the third phase is when they are trying to teach others.
Self-regulated learning in college students
Self-regulation is an important construct in student success within environments that allow learner choice, such as online courses. One line of research examines differences between first- and second-generation college students' ability to self-regulate their online learning: using comfort with computers as a control, first-generation students report significantly lower levels of self-regulation for online learning than their second-generation counterparts. Several strategies can support self-regulation. Private writing techniques such as freewriting and journaling are underexploited in academic writing instruction; they are often seen merely as a form of prewriting and criticized as under-theorized, but they can be framed within a conception of writing as a social practice that supports learning development. First-year students in particular must discover new strategies, because the transition from secondary to tertiary education requires adapting to the independent nature of university learning, and learning strategies are rarely taught explicitly at universities. Small-group peer discussion boards have been evaluated as an avenue for sharing learning strategies between students in a first-year anatomy and physiology course. Studies of outlining in business communication courses surveyed students on whether, how, and why they outline; students whose outlining process included both organization and content exploration found it more useful than students whose process included only one of the two. Work drawing on systemic functional linguistics has examined critical analysis in student writing, comparing trainee teachers' descriptive writing with students' attempts at critical analysis; descriptive writing has an important role in learning, but critical analysis is also realized in student writing. Overall, these findings underline the importance of self-regulation in online learning, particularly for first-generation college students, who tend to report lower levels of self-regulation and comfort with computers. Strategies such as private writing and peer discussion boards can help students develop effective learning approaches, especially in the transition from secondary to tertiary education, and fostering self-regulation and supporting diverse learning strategies are essential for students' success in online courses.
Phases of self-regulation
According to Winne and Hadwin, self-regulation unfolds over “four flexibly sequenced phases of recursive cognition.” These phases are task perception, goal setting and planning, enacting, and adaptation.
During the task perception phase, students gather information about the task at hand and personalize their perception of it. This stage involves determining motivational states, self-efficacy, and information about the environment around them.
Next, students set goals and plan how to accomplish the task. Several goals may be set concerning explicit behaviors, cognitive engagement, and motivation changes. The goals that are set depend on how the students perceive the task at hand.
The students will then enact the plan they have developed by using study skills and other useful tactics they have in their repertoire of learning strategies.
The last phase is adaptation, wherein students evaluate their performance and determine how to modify their strategy in order to achieve higher performance in the future. They may change their goals or their plan; they may also choose not to attempt that particular task again. Winne and Hadwin state that all academic tasks encompass these four phases.
Zimmerman suggested that the self-regulated learning process has three stages:
Forethought, in which learners prepare for a task before performance in their studying;
Volitional control, which is also called "performance control", occurs in the learning process. It involves learners' attention and willpower;
Self-reflection happens in the final stage when learners review their performance toward final goals. Focusing on one's learning strategies during the process also helps towards achieving the learning outcomes.
Baba and Nitta (2015) demonstrated that Zimmerman's cyclical self-regulatory processes can be extended to longer periods of time and self-reflection has a close connection to second language writing development. From a Complex Dynamic Systems Theory perspective, Wind and Harding (2020) found that attractor states might negatively affect the cyclicality of self-regulatory processes.
Sources of self-regulated learning
According to Iran-Nejhad and Chissom, there are three sources of self-regulated learning: active/executive, dynamic, and interest-creating discovery model (1992).
Active/executive self-regulation is regulated by the person and is intentional, deliberate, conscious, voluntary, and strategic. The individual is aware and effortful in using self-regulation strategies. Under this source of SRL, learning happens best in a habitual mode of functioning.
Dynamic self-regulation is also known as unintentional learning because it is regulated by internal subsystems other than the “central executive.” The learner is not consciously aware they are learning because it occurs “outside the direct influence of deliberate internal control.”
The third source of self-regulated learning is the interest-creating discovery module, which is described as “bifunctional” as it is developed from both the active and dynamic models of self-regulation. In this model, learning takes place best in a creative mode of functioning and is neither completely person-driven nor unconscious, but a combination of both.
Social cognitive perspective
Self-regulation from the social cognitive perspective looks at the triadic interaction between the person (e.g., beliefs about success), their behavior (e.g., engaging in a task), and the environment (e.g., feedback from a teacher). Zimmerman et al. specified three important characteristics of self-regulated learning:
self-observation (monitoring one's activities); seen as the most important of these processes
self-judgment (self-evaluation of one's performance) and
self-reactions (reactions to performance outcomes).
To the extent that one accurately reflects on one's progress towards a learning goal, and appropriately adjusts one's actions to maximize performance and the foreseeable outcome, one's self has become self-regulated. During a student's school career, the primary goal of teachers is to produce self-regulated learners by using such theories as the Information Processing Model (IPM). By storing information in long-term memory (or in a living document such as a runbook), the learner can retrieve it on demand and apply meta-learning to tasks, thereby becoming a self-regulated learner.
Information processing perspective
Winne and Marx posited that motivational thoughts and beliefs are governed by the basic principles of cognitive psychology, which should be conceived in information-processing terms. Motivation plays a major role in self-regulated learning. Motivation is needed to apply effort and continue on when faced with difficulty. Control also plays a role in self-regulated learning as it helps the learner to stay on track in reaching their learning goal and avoid being distracted from things that stand in the way of the learning goal.
Student performance perspective
Lovett, Meyer and Thille observed comparable student performance between instructor-led and self-regulated learning environments. In a subsequent study, self-regulated learning was shown to enable accelerated learning while maintaining long-term retention rates.
Cassandra B. Whyte noted the importance of internal locus of control tendencies on successful academic performance, also compatible with self-regulated learning. Whyte recognized and appreciated external factors, to include the benefit of working with a good teacher, while encouraging self-regulated hard work, skill-building, and a positive attitude to perform better in academic situations.
To increase positive attitudes and academic performance, expert learners should be developed. Expert learners use self-regulated learning strategies. One of these strategies is the ability to develop and ask questions, using them to expand on one's own prior knowledge. This technique allows learners to test their true understanding and to correct misunderstandings in particular content areas. When learners engage in questioning, it forces them to be more actively engaged in their learning; it also allows them to self-analyze and determine their level of comprehension.
This active engagement allows the learner to organize concepts into existing schemas. Through the use of questions, learners can accommodate and then assimilate their new knowledge with existing schema. This process allows the learner to solve novel problems and when the existing schema does not work on the novel problem the learner must reevaluate and assess their level of understanding.
Application in practice
There are many practical applications for self-regulated learning in schools and classrooms. Paris and Paris state there are three main areas of direct application in classrooms: literacy instruction, cognitive engagement, and self-assessment. In the area of literacy instruction, educators can teach students the skills necessary to lead them to become self-regulated learners by using strategies such as reciprocal teaching, open-ended tasks, and project-based learning.
Other tasks that promote self-regulated learning are authentic assessments, autonomy-based assignments, and portfolios. These strategies are student-centered and inquiry-based, which cause students to gradually become more autonomous, creating an environment of self-regulated learning. However, students do not simply need to know the strategies, but they need to realize the importance of utilizing them in order to experience academic success.
According to Dweck and Master, "Students' use of learning strategies – and their continued use of them in the face of difficulty – is based on the beliefs that these strategies are necessary for learning, and that they are effective ways of overcoming obstacles." Students who are not self-regulated learners may daydream, rarely complete assignments, or forget assignments completely. Those who do practice self-regulation ask questions, take notes, allocate their time effectively, and use resources available to them. Pajares lists several practices of successful students that Zimmerman and his colleagues developed in his chapter of Motivation and Self-Regulated Learning: Theory, Research, and Applications.
These behaviors include, but are not limited to: finishing homework assignments by deadlines, studying when there are other interesting things to do, concentrating on school subjects, taking useful class notes of class instruction, using the library for information for class assignments, effectively planning schoolwork, effectively organizing schoolwork, remembering information presented in class and textbooks, arranging a place to study at home without distractions, motivating oneself to do schoolwork, and participating in class discussions.
Examples of self-regulated learning strategies in practice:
Self-Assessment: fosters planning, assess what skills the learner has and what skills are needed. Allows students to internalize standards of learning so they can regulate their own learning.
Wrapper Activity: an activity based on a pre-existing learning or assessment task, which can be done as a homework assignment. It consists of self-assessment questions to complete before the homework and again after completing it, allowing the learner to draw their own conclusions about the learning process.
Think Aloud: This involves the teacher describing their thought process in solving a problem.
Questioning: Following new material, student develops questions about the material.
Reciprocal Teaching: the learner teaches new material to fellow learners.
Help-seeking
Self-regulation has recently been studied in relation to certain age and socioeconomic groups. Programs such as CSRP target different groups in order to increase effortful control in the classroom to enhance early learning.
Measurement
There are two perspectives on how to measure students' self-regulation behaviour. The first perspective sees SRL as an aptitude, and measures regulation behaviour based on students' perceptions of their own regulation behaviour; the instrument most frequently used in this perspective is a questionnaire. The second perspective sees SRL as an event that can be measured by observing the actual behaviour of the student; the most commonly used methods in this perspective are the think-aloud protocol and direct observation.
Evaluation
A qualitative study reported that learners use SRL effectively when provided with enhanced guided notes (EGN) instead of standard guided notes (SGN) by the instructor. Moreover, students tend to use shallow level processing strategies such as rote memorization, rehearsal, and reviewing notes which are largely related to the learning cultures that they have been exposed to. However, other learning contexts encourage social influences such as group work and social assistance as ways of developing SRL through reciprocal interaction which facilitates self-reflection. Therefore, it is a challenge for researchers to develop a suitable framework to evaluate SRL, as learners tend to use some strategies over others with specific focus on SRL in different contexts.
See also
Corrective feedback
Educational psychology
Learning by teaching
Meta learning
Reflective practice
Self (psychology)
Self-efficacy
References
Educational psychology
Motivation | Self-regulated learning | Biology | 3,034 |
41,234,142 | https://en.wikipedia.org/wiki/Readout%20integrated%20circuit | A readout integrated circuit (ROIC) is an integrated circuit (IC) specifically used for reading detectors of a particular type. They are compatible with different types of detectors such as infrared and ultraviolet. The primary purpose for ROICs is to accumulate the photocurrent from each pixel and then transfer the resultant signal onto output taps for readout. Conventional ROIC technology stores the signal charge at each pixel and then routes the signal onto output taps for readout. This requires storing large signal charge at each pixel site and maintaining signal-to-noise ratio (or dynamic range) as the signal is read out and digitized.
A ROIC has high-speed analog outputs to transmit pixel data outside of the integrated circuit. If digital outputs are implemented, the IC is referred to as a Digital Readout Integrated Circuit (DROIC).
A digital readout integrated circuit (DROIC) is a class of ROIC that uses on-chip analog-to-digital conversion (ADC) to digitize the accumulated photocurrent in each pixel of the imaging array. DROICs are easier to integrate into a system compared to ROICs as the package size and complexity are reduced, they are less sensitive to noise and have higher bandwidth compared to analog outputs.
A digital pixel readout integrated circuit (DPROIC) is a ROIC that uses on-chip analog-to-digital conversion (ADC) within each pixel (or small group of pixels) to digitize the accumulated photocurrent within the imaging array. DPROICs have an even higher bandwidth than DROICs and can significantly increase the well capacity and dynamic range of the device.
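As a rough illustration of these trade-offs, the following Python sketch integrates a pixel's photocurrent, clips it at the well capacity, quantizes it, and estimates the dynamic range. The well capacity, ADC resolution, and read-noise figures are assumed for illustration only and describe no particular device:

import numpy as np

WELL_CAPACITY_E = 2_000_000  # electrons the in-pixel integrator can hold (assumed)
ADC_BITS = 15                # per-pixel ADC resolution (assumed)
READ_NOISE_E = 200           # read noise in electrons (assumed)

def digitize(photocurrent_e_per_s, integration_time_s):
    # Integrate photocurrent in a pixel, clip at the well, and quantize.
    charge = min(photocurrent_e_per_s * integration_time_s, WELL_CAPACITY_E)
    electrons_per_count = WELL_CAPACITY_E / 2**ADC_BITS
    return int(charge / electrons_per_count)

# Dynamic range is bounded by well capacity relative to read noise, in dB.
dynamic_range_db = 20 * np.log10(WELL_CAPACITY_E / READ_NOISE_E)
print(digitize(5e7, 0.02), round(dynamic_range_db, 1))  # 16384 80.0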
References
Digital Converters for Image Sensors, Kenton T. Veeder, SPIE Press, 2015.
A 25μm pitch LWIR focal plane array with pixel-level 15-bit ADC providing high well capacity and targeting 2mK NETD, Fabrice Guellec et al, Proceedings Volume 7660, Infrared Technology and Applications XXXVI, 2010.
A high-resolution, compact and low-power ADC suitable for array implementation in standard CMOS, Christer Jansson, IEEE Transactions on circuits and systems - I: Fundamental theory and applications, Vol. 42, No. 11, November 1995.
Digital Pixel Readout Integrated Circuit for High Dynamic Range Infrared Imaging Applications, Phase I SBIR, Technology report, NASA Jet Propulsion Laboratory, July 2018.
Digital pixel readout integrated circuit architectures for LWIR, Shafique, A., Yaziki, M., Kayahan, H., Ceylan, O., Gurbuz, Y., Proceedings Volume 9451, Infrared Technology and Applications XLI; 94510V, 2015.
Digital-Pixel Focal Plane Array Technology, Schultz, K., et al., Lincoln Laboratory Journal, Vol. 20, No. 2, 2014.
Sensors, Space Probes and Wi-Fi Cybersecurity, Oh My!, Maxfield, Max, Electronic Engineering Journal, February 2020.
Digital Pixel Infrared Imaging Boosts Camera Speed and Performance, Bannatyne, R., Vision Systems Design, June 2020.
Integrated circuits
Detectors | Readout integrated circuit | Technology,Engineering | 651 |
48,291,822 | https://en.wikipedia.org/wiki/Naenara%20%28browser%29 | Naenara is a North Korean intranet web browser software developed by the Korea Computer Center for use of the national Kwangmyong intranet. It is developed from a version of Mozilla Firefox and is distributed with the Linux-based operating system Red Star OS that North Korea developed due to licensing and security issues with Microsoft Windows.
Design
Naenara is a modified version of Mozilla Firefox. Red Star OS and Naenara were developed by the Korea Computer Center, which states on its web page that it seeks to develop Linux-based software.
Naenara can be used to browse approximately 1,000 to 5,500 websites in the national Kwangmyong intranet.
When Naenara is launched, it attempts to contact an internal intranet address, http://10.76.1.11/. The browser's default search engine is Google Korea.
See also
Samjiyon tablet computer
References
Computing and society
Internet in North Korea
Linux web browsers | Naenara (browser) | Technology | 206 |
75,405,291 | https://en.wikipedia.org/wiki/ADB-5%27Br-BUTINACA | ADB-5'Br-BUTINACA (ADB-B-5Br-INACA) is an indazole-3-carboxamide based synthetic cannabinoid receptor agonist which has been sold as a designer drug, first detected in Philadelphia in the US in May 2022, and subsequently found in South Korea, Portugal and Sweden. It is specifically listed as an illegal drug in Italy, South Korea and several states in the US, and controlled under analogue legislation in various other jurisdictions.
See also
ADB-BUTINACA
ADB-5'Br-PINACA
ADB-5'F-BUTINACA
ADSB-FUB-187
MDMB-5'Br-BUTINACA
MDMB-BINACA
References
Cannabinoids
Designer drugs
Bromoarenes
Amides
Tert-butyl compounds
Indazolecarboxamides | ADB-5'Br-BUTINACA | Chemistry | 182 |
5,045,613 | https://en.wikipedia.org/wiki/Pauli%20group | In physics and mathematics, the Pauli group $G_1$ on 1 qubit is the 16-element matrix group consisting of the 2 × 2 identity matrix $I$ and all of the Pauli matrices
$X = \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$
together with the products of these matrices with the factors $\pm 1$ and $\pm i$:
$G_1 = \{\pm I, \pm iI, \pm X, \pm iX, \pm Y, \pm iY, \pm Z, \pm iZ\}.$
The Pauli group is generated by the Pauli matrices, and like them it is named after Wolfgang Pauli.
The Pauli group on $n$ qubits, $G_n$, is the group generated by the operators described above applied to each of the $n$ qubits in the tensor product Hilbert space $(\mathbb{C}^2)^{\otimes n}$. That is,
$G_n = \{\, i^c \, \sigma_{j_1} \otimes \cdots \otimes \sigma_{j_n} \mid c \in \{0,1,2,3\},\ j_k \in \{0,1,2,3\} \,\}, \qquad \sigma_0 = I.$
The order of $G_n$ is $|G_n| = 4^{n+1}$, since a scalar factor of $\pm 1$ or $\pm i$ in any tensor position can be moved to any other position.
As an abstract group, $G_1$ is the central product of a cyclic group of order 4 and the dihedral group of order 8.
The Pauli group is a representation of the gamma group in three-dimensional Euclidean space. It is not isomorphic to the gamma group; it is less free, in that its chiral element is $\sigma_1 \sigma_2 \sigma_3 = iI$, whereas there is no such relationship for the gamma group.
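As a sanity check, the order formula for $n = 1$ and the chiral-element identity above can be verified numerically. The following sketch is an illustrative aside (not drawn from the article's sources) that enumerates $G_1$ with NumPy.

```python
import itertools
import numpy as np

# Pauli matrices (sigma_0 is the identity).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# G_1 = { i^c * sigma_j : c in 0..3, j in 0..3 }
group = [(1j**c) * s for c in range(4) for s in (I, X, Y, Z)]

def key(m: np.ndarray) -> tuple:
    """Hashable key for a matrix, for set membership tests."""
    return tuple(np.round(m.flatten(), 12))

elements = {key(m) for m in group}
print(len(elements))  # 16 == 4**(1+1), matching |G_n| = 4**(n+1) for n = 1

# Closure check: the product of any two elements stays in the group.
assert all(key(a @ b) in elements for a, b in itertools.product(group, repeat=2))

# The chiral element: sigma_1 sigma_2 sigma_3 = i * I
assert np.allclose(X @ Y @ Z, 1j * I)
```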
References
https://arxiv.org/abs/quant-ph/9807006
External links
Finite groups
Quantum information science | Pauli group | Physics,Mathematics | 239 |
1,842,710 | https://en.wikipedia.org/wiki/Etorphine | Etorphine (M99) is a semi-synthetic opioid possessing an analgesic potency approximately 1,000–3,000 times that of morphine. It was first prepared in 1960 from oripavine, which does not generally occur in opium poppy extract but rather the related plants Papaver orientale and Papaver bracteatum. It was reproduced in 1963 by a research group at MacFarlan Smith in Gorgie, Edinburgh, led by Kenneth Bentley. It can be produced from thebaine.
Veterinary use
Etorphine is available legally only for veterinary use and is strictly governed by law. It is often used to immobilize elephants and other large mammals. Diprenorphine (Revivon) is an opioid receptor antagonist that can be administered in proportion to the amount of etorphine used (1.3 times) to reverse its effects. Veterinary-strength etorphine is fatal to humans. For this reason the package as supplied to vets always includes the human antidote along with the etorphine.
The human antidote is generally naloxone, not diprenorphine; it is always prepared before the etorphine itself so that it can be administered immediately after accidental human exposure. The human LD50 of 3 μg led to the requirement that the medicine include an equal dose of an antidote, diprenorphine or naloxone.
One of its main advantages is its speed of operation, and more importantly, the speed that diprenorphine reverses its effects. The high incidence of side effects, including severe cardiopulmonary depression, has caused etorphine to fall into disfavor in general veterinary practice. However, its high potency, combined with the rapid action of both etorphine and its antagonist, diprenorphine, means that it has found a place for use in the capture of large mammals, such as rhinoceroses and elephants, where rapid onset and rapid recovery are both very important. The high potency of etorphine means that sufficient etorphine can be administered to large wild mammals by projectile syringe (dart).
Large Animal Immobilon is a combination of etorphine plus acepromazine maleate. The matching antidote, Large Animal Revivon, contains mainly diprenorphine and is intended for animals; a separate human-specific naloxone-based antidote should be prepared before the etorphine is handled. A 5–15 mg dose is enough to immobilize an African elephant, and a 2–4 mg dose is enough to immobilize a black rhinoceros.
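Since the reversal is dosed by simple proportion (diprenorphine at 1.3 times the etorphine dose, as noted above), the arithmetic can be shown directly. The following Python fragment is purely illustrative, not veterinary guidance; the species dose ranges are the figures quoted in this article.

```python
DIPRENORPHINE_RATIO = 1.3  # reversal dose multiplier quoted above

# Immobilization dose ranges (mg) quoted in this article; illustrative only.
ETORPHINE_DOSE_MG = {
    "african_elephant": (5.0, 15.0),
    "black_rhinoceros": (2.0, 4.0),
}

def reversal_dose_mg(etorphine_mg: float) -> float:
    """Diprenorphine dose needed to reverse a given etorphine dose."""
    return DIPRENORPHINE_RATIO * etorphine_mg

low, high = ETORPHINE_DOSE_MG["african_elephant"]
print(f"Elephant reversal: {reversal_dose_mg(low):.1f}-{reversal_dose_mg(high):.1f} mg")
# -> Elephant reversal: 6.5-19.5 mg
```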
Pharmacology
Etorphine is a potent, non-selective full agonist of the μ-, δ-, and κ-opioid receptors. It has a weak affinity for the nociceptin receptor. Etorphine has an LD50 of 3 μg in humans.
Legal status
In Hong Kong, etorphine is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It can be used legally only by health professionals and for university research purposes. The substance can be given by pharmacists under a prescription. Anyone who supplies the substance without prescription can be fined $10,000 (HKD). The penalty for trafficking or manufacturing the substance is a $5,000,000 (HKD) fine and life imprisonment. Possession of the substance for consumption without license from the Department of Health is illegal with a $1,000,000 (HKD) fine and/or 7 years of jail time.
In the Netherlands, etorphine is a Schedule I drug of the Opium Law. It is used only for veterinary purposes in zoos to immobilize large animals.
In the US, etorphine is listed as a Schedule I drug with an ACSCN of 9056, although its hydrochloride salt is classified as Schedule II with an ACSCN of 9059.
In the UK, under the Misuse of Drugs Act 1971, etorphine is controlled as a Class A substance.
In Italy, etorphine is illegal, as are the related compounds dihydroetorphine and acetorphine (data as of 2022).
In popular culture
The fictional character Dexter Morgan uses etorphine (M99) to capture and sedate his victims in the TV shows Dexter and Dexter: Original Sin.
The fictional character Dwight Schrute uses etorphine to sedate a coworker in the TV show The Office.
See also
6,14-Endoethenotetrahydrooripavine - the central nucleus of all Bentley compound opioids under which class etorphine falls
7-PET
Dihydroetorphine – a close analog of etorphine that has been used as an opioid painkiller for human usage in China
Thienorphine
Opioid potency comparison
References
External links
Opioids.com page on etorphine
Etorphine: Molecule of the Month
Analgesics
Delta-opioid receptor agonists
4,5-Epoxymorphinans
Ethers
Semisynthetic opioids
Kappa-opioid receptor agonists
Mu-opioid receptor agonists
Nociceptin receptor agonists
Hydroxyarenes
Tertiary alcohols | Etorphine | Chemistry | 1,102 |
1,895,094 | https://en.wikipedia.org/wiki/Opportunistic%20infection | An opportunistic infection is an infection caused by pathogens (bacteria, fungi, parasites or viruses) that take advantage of an opportunity not normally available. These opportunities can stem from a variety of sources, such as a weakened immune system (as can occur in acquired immunodeficiency syndrome or when being treated with immunosuppressive drugs, as in cancer treatment), an altered microbiome (such as a disruption in gut microbiota), or breached integumentary barriers (as in penetrating trauma). Many of these pathogens do not necessarily cause disease in a healthy host that has a non-compromised immune system, and can, in some cases, act as commensals until the balance of the immune system is disrupted. Opportunistic infections can also be attributed to pathogens which cause mild illness in healthy individuals but lead to more serious illness when given the opportunity to take advantage of an immunocompromised host.
Types of opportunistic infections
A wide variety of pathogens are involved in opportunistic infection and can cause a similarly wide range in pathologies. A partial list of opportunistic pathogens and their associated presentations includes:
Bacteria
Clostridioides difficile (formerly known as Clostridium difficile) is a species of bacteria that is known to cause gastrointestinal infection and is typically associated with the hospital setting.
Legionella pneumophila is a bacterium that causes Legionnaire's disease, a respiratory infection.
Mycobacterium avium complex (MAC) is a group of two bacteria, M. avium and M. intracellulare, that typically co-infect, leading to a lung infection called mycobacterium avium-intracellulare infection.
Mycobacterium tuberculosis is a species of bacteria that causes tuberculosis, a respiratory infection.
Pseudomonas aeruginosa is a bacterium that can cause respiratory infections. It is frequently associated with cystic fibrosis and hospital-acquired infections.
Salmonella is a genus of bacteria, known to cause gastrointestinal infections.
Staphylococcus aureus is a bacterium known to cause skin infections and sepsis, among other pathologies. Notably, S. aureus has evolved several drug-resistant strains, including MRSA.
Streptococcus pneumoniae is a bacterium that causes respiratory infections.
Streptococcus pyogenes (also known as group A Streptococcus) is a bacterium that can cause a variety of pathologies, including impetigo and strep throat, as well as other, more serious, illnesses.
Fungi
Aspergillus is a fungus, commonly associated with respiratory infection.
Candida albicans is a species of fungus that is associated with oral thrush and gastrointestinal infection.
Coccidioides immitis is a fungus known for causing coccidioidomycosis, more commonly known as Valley Fever.
Cryptococcus neoformans is a fungus that causes cryptococcosis, which can lead to pulmonary infection as well as nervous system infections, like meningitis.
Histoplasma capsulatum is a species of fungus known to cause histoplasmosis, which can present with an array of symptoms, but often involves respiratory infection.
Pseudogymnoascus destructans (formerly known as Geomyces destructans) is a fungus that causes white-nose syndrome in bats.
Microsporidia is a group of fungi that infect species across the animal kingdom, one species of which can cause microsporidiosis in immunocompromised human hosts.
Pneumocystis jirovecii (formerly known as Pneumocystis carinii) is a fungus that causes pneumocystis pneumonia, a respiratory infection.
Parasites
Cryptosporidium is a protozoan that infects the gastrointestinal tract.
Toxoplasma gondii is a protozoan, known for causing toxoplasmosis.
Viruses
Cytomegalovirus is a family of opportunistic viruses, most frequently associated with respiratory infection.
Human polyomavirus 2 (also known as JC virus) is known to cause progressive multifocal leukoencephalopathy (PML).
Human herpesvirus 8 (also known as Kaposi sarcoma-associated herpesvirus) is a virus associated with Kaposi sarcoma, a type of cancer.
Causes
Immunodeficiency or immunosuppression are characterized by the absence of or disruption in components of the immune system, leading to lower-than-normal levels of immune function and immunity against pathogens. They can be caused by a variety of factors, including:
Malnutrition
Fatigue
Recurrent infections
Immunosuppressing agents for organ transplant recipients
Advanced HIV infection
Chemotherapy for cancer
Genetic predisposition
Skin damage
Antibiotic treatment leading to disruption of the physiological microbiome, thus allowing some microorganisms to outcompete others and become pathogenic (e.g. disruption of intestinal microbiota may lead to Clostridium difficile infection)
Medical procedures
Pregnancy
Aging
Leukopenia (i.e. neutropenia and lymphocytopenia)
Burns
The lack of or the disruption of normal vaginal microbiota allows the proliferation of opportunistic microorganisms and will cause the opportunistic infection bacterial vaginosis.
Opportunistic Infection and HIV/AIDS
HIV is a virus that targets T cells of the immune system and, as a result, HIV infection can lead to progressively worsening immunodeficiency, a condition ideal for the development of opportunistic infection. Because of this, respiratory and central nervous system opportunistic infections, including tuberculosis and meningitis, respectively, are associated with later-stage HIV infection, as are numerous other infectious pathologies. Kaposi's sarcoma, a virally-associated cancer, has higher incidence rates in HIV-positive patients than in the general population. As immune function declines and HIV-infection progresses to AIDS, individuals are at an increased risk of opportunistic infections that their immune systems are no longer capable of responding properly to. Because of this, opportunistic infections are a leading cause of HIV/AIDS-related deaths.
Prevention
Since opportunistic infections can cause severe disease, much emphasis is placed on measures to prevent infection. Such a strategy usually includes restoration of the immune system as soon as possible, avoiding exposures to infectious agents, and using antimicrobial medications ("prophylactic medications") directed against specific infections.
Restoration of immune system
In patients with HIV, starting antiretroviral therapy is especially important for restoring the immune system and reducing the incidence of opportunistic infections.
In patients undergoing chemotherapy, completion of and recovery from treatment is the primary method for immune system restoration. In a select subset of high risk patients, granulocyte colony stimulating factors (G-CSF) can be used to aid immune system recovery.
Avoidance of infectious exposure
The following may be avoided as a preventative measure to reduce the risk of infection:
Eating undercooked meat or eggs, unpasteurized dairy products or juices.
Potential sources of tuberculosis (high-risk healthcare facilities, regions with high rates of tuberculosis, patients with known tuberculosis).
Any oral exposure to feces.
Contact with farm animals, especially those with diarrhea: source of Toxoplasma gondii, Cryptosporidium parvum.
Cat feces (e.g. cat litter): source of Toxoplasma gondii, Bartonella spp.
Soil/dust in areas where there is known histoplasmosis, coccidioidomycosis.
Reptiles, chicks, and ducklings are a common source of Salmonella.
Unprotected sexual intercourse with individuals with known sexually transmitted infections.
Prophylactic medications
Individuals at higher risk are often prescribed prophylactic medication to prevent an infection from occurring. A person's risk level for developing an opportunistic infection is approximated using the person's CD4 T-cell count and other indications. The table below provides information regarding the treatment management of common opportunistic infections.
Alternative agents can be used in place of the preferred agents, whether because of allergies, availability, or clinical presentation; these alternatives are listed in the table below.
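The CD4-based risk stratification described above can be sketched as a simple threshold lookup. The cutoffs used below (200, 100, and 50 cells/µL for Pneumocystis pneumonia, toxoplasmosis, and MAC prophylaxis, respectively) are widely cited guideline values, but the code is an illustrative sketch only, not a reconstruction of this article's table and not clinical guidance.

```python
# Widely cited CD4 thresholds (cells/uL) below which prophylaxis is
# typically considered; illustrative sketch, not clinical guidance.
PROPHYLAXIS_THRESHOLDS = {
    "Pneumocystis pneumonia (PCP)": 200,
    "Toxoplasmosis": 100,
    "Mycobacterium avium complex (MAC)": 50,
}

def indicated_prophylaxis(cd4_count: int) -> list[str]:
    """Return infections for which prophylaxis is typically considered."""
    return [name for name, cutoff in PROPHYLAXIS_THRESHOLDS.items()
            if cd4_count < cutoff]

print(indicated_prophylaxis(180))  # ['Pneumocystis pneumonia (PCP)']
print(indicated_prophylaxis(40))   # all three
```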
Treatment
Treatment depends on the type of opportunistic infection, but usually involves different antibiotics.
Veterinary treatment
Opportunistic infections caused by feline leukemia virus and feline immunodeficiency virus retroviral infections can be treated with lymphocyte T-cell immunomodulator.
References
External links
Infectious diseases
Immunology
Immune system disorders | Opportunistic infection | Biology | 1,844 |
9,420,905 | https://en.wikipedia.org/wiki/Mobile%20Brigade%20Corps | The Mobile Brigade Corps () abbreviated Brimob is the special operations, paramilitary, and tactical unit of the Indonesian National Police (Polri). It is one of the oldest existing units within Polri. Some of its main duties are counter-terrorism, riot control, high-risk law enforcement where the use of firearms are present, search and rescue, hostage rescue, and bomb disposal operations. The Mobile Brigade Corps is a large component of the Indonesian National Police trained for counter-separatist and counter-insurgency duties, often in conjunction with military operations.
The Mobile Brigade Corps consists of two branches, Gegana and Pelopor. Gegana is tasked with the more specialized police operations, such as bomb disposal, CBR (chemical, biological and radiological) handling, counter-terrorism, and intelligence. The Pelopor carry out broader paramilitary operations, such as riot control, search and rescue (SAR), security of vital installations, and guerrilla operations. Brimob is classified as a Police Tactical Unit (PTU) and operationally serves as a police Special Weapons and Tactics (SWAT) force (including Densus 88 and Brimob's Gegana) in support of other general police units. Each regional police force (Polda) in Indonesia has its own Brimob unit.
History
The corps was formed in late 1945 as a special police corps named Pasukan Polisi Istimewa (Special Police Troops), tasked with disarming remnants of the Japanese Imperial Army and protecting the chief of state and the capital city. Under the Japanese occupation it had been known as the Tokubetsu Keisatsutai. It fought in the revolution and was the first military unit to engage in the Battle of Surabaya under the command of Police Inspector Moehammad Jasin.
On 14 November 1946, Prime Minister Sutan Sjahrir reorganised all Polisi Istimewa, Barisan Polisi Istimewa and Pasukan Polisi Istimewa, merged into the Mobile Brigade (Mobrig). This day is celebrated as the anniversary of this Blue Beret Corps. This Corps was reconstituted to suppress military and police conflicts and even coups d'etat.
On 1 December 1947, Mobrig was militarized and later deployed in various conflicts and confrontations like the PKI rebellion in Madiun, the Darul Islam rebellion (1947), the APRA coup d'état and proclamation of the Republic of South Maluku (1950), the PRRI rebellion (1953), and the Permesta (1958).
As of 14 November 1961, the Mobrig changed its name to Korps Brigade Mobil (Brimob), and its troops took part in the military confrontation with Malaysia in the early 1960s and in the conflict in East Timor in the mid-1970s. After that, Brimob was placed under the command of the Indonesian National Police.
The Mobile Brigade, which began forming in late 1946 and was used during the anti-Dutch Revolution, started sending students for US Army SF training on Okinawa in January 1959. In April 1960 a second contingent arrived for two months of Ranger training. By the mid-1960s the three-battalion Mobile Brigade, commonly known as Brimob, had been converted into an elite shock force. A Brimob airborne training centre was established in Bandung. Following the 1965 coup attempt, one Brimob battalion was used during anti-Communist operations in West Kalimantan. In December 1975 a Brimob battalion was used during the East Timor operation. During the late 1970s, Brimob assumed VIP security and urban anti-terrorist duties. In 1989, Brimob still contained airborne-qualified elements. Pelopor ('Ranger') and airborne training takes place in Bandung and at a training camp outside Jakarta. Historically, Brimob wore the Indonesian spot camouflage pattern during the early 1960s as their uniform.
In 1981, the Mobile Brigade spawned a new unit called the "Jihandak" (Penjinak Bahan Peledak), an explosive ordnance disposal (EOD) unit.
Task
The Brimob Corps is deployed and mobilized to cope with high-intensity public disturbances, chiefly mass riots, armed organized crime, search and rescue, and threats involving explosives or chemical, biological and radioactive materials, working alongside other police operational elements to maintain legal order and public peace throughout the jurisdiction of Indonesia, together with any other tasks assigned to the corps.
Qualifications
The Pelopor qualifications, which are the basic capabilities of every Brimob member, are the following basic skills:
Ability to navigate with map and compass
Intelligence
Anti-terror (counter-terrorism)
Riot control
Guerrilla war, Close / Urban war tactics
Bomb disposal
Handle high intensity crimes where the use of firearms is present
Search and rescue
Surveillance, disguise and prosecution.
Other individual and unit capabilities.
Function
The Mobile Brigade Corps functions as Polri's principal operating unit with specific capabilities (riot control, combat countermeasures, mobile detective work, counter-terrorism, bomb disposal, and search and rescue) for high-intensity domestic security and community-supported search and rescue, fielding well-trained personnel with solid leadership and modern equipment and supplies.
Role
The role of Brimob, together with other police functions, is to act against high-intensity threats, mainly mass riots, organized crime involving firearms, and bomb, chemical, biological and radioactive threats, in order to maintain legal order and public peace in all jurisdictional areas of Indonesia. Its roles include:
Helping other police functions,
Complementing territorial police operations carried out in conjunction with other police functions,
Protecting members of other police units, as well as civilians, who are under threat,
Reinforcing other police functions in the execution of regional operational tasks,
Replacing and taking over territorial police duties when the situation or task objective has escalated to high-intensity crime.
Organisation
In 1992 the Mobile Brigade was essentially a paramilitary organisation trained and organised along military lines. It had a strength of about 12,000. The brigade was used primarily as an elite unit for emergencies and supporting police operations as a rapid response unit.
The unit was mainly deployed for domestic security and defense operations, but has since gained many specialties within the scope of policing duties, such as SWAT operations, search and rescue operations, riot control, and CBR (chemical, biological and radiological) defense. Brimob is also regularly sent on domestic security operations alongside the TNI.
Since the May 1998 upheaval, the PHH (Pasukan Anti Huru-Hara, Anti-Riot Unit) has received special anti-riot training, and elements of the unit are cross-trained for airborne and search and rescue operations. Each provincial police headquarters (POLDA) in Indonesia has an organized BRIMOB force consisting of a command headquarters, several detachments of Pelopor personnel organized into a regiment, and usually one or two detachments of GEGANA.
The Chief of the Indonesian National Police (KAPOLRI) holds the highest command over every police operation, including those of BRIMOB; orders are issued by the police chief, executed by his operational assistant, and then passed on to the Corps Commandant and the relevant regional commanders.
National Level Units
Corps HQ and HQ Services
Brimob Corps Training School
Gegana Battalion
HHC
Pelopor Brigade
Brigade HQ and HQ Company
I (1st) Pelopor Regiment
II (2nd) Pelopor Regiment
III (3rd) Pelopor Regiment
IV (4th) Pelopor Regiment
Intelligence Unit
Training Command
Pelopor
Pelopor (lit."Pioneer") is the main reaction force of the Mobile Brigade Corps, it acts as a troop formation and has the roles of mainly riot control and conducting paramilitary operations assigned to the corps to cope with high-level threat of society disturbance. It also specializes in the field of Guerrilla, and Search and Rescue (SAR) operations. There are today 4 national regiments of Pelopor in the Brimob corps which are:
I Pelopor Regiment
II Pelopor Regiment
III Pelopor Regiment
IV Pelopor Regiment
Historically, the unit was known as the "Brimob Rangers" during the post-independence era. In 1959, shortly after its formation, the Brimob Rangers conducted a test mission in the area of Cibeber, Ciawi and Cikatomas on the Tasikmalaya–Garut border in West Java. This was the Rangers' baptism of fire, in which their newly acquired skills in guerrilla warfare and counter-insurgency operations were applied against remnants of Darul Islam in these communities. The actions against Islamic Army of Indonesia (TII) units in the province weakened the DI even further, leading to the total collapse of the local DI provincial chapter in 1962 and ending a decade-long campaign of violence there.
The first official forward deployment of the Brimob Rangers was the Fourth Military Operations Movement in South Sumatra, West Sumatra and North Sumatra (in response to the Permesta rebellion of 1958). The Brimob Rangers became part of the Bangka Belitung Infantry Battalion led by Lieutenant Colonel (Inf) Dani Effendi, and were tasked with capturing the PRRI remnants led by Major Malik in Sumatra's forests, an area then in rebel hands.
In 1961, under the express orders of then Chief of Police General Soekarno Djoyonegoro, Brimob Rangers troops were officially renamed Pelopor Troops of the Mobile Brigade. This is in accordance with the wishes of President Sukarno who wanted Indonesian names for units within both the TNI (Indonesian National Armed Forces) and POLRI (Indonesian National Police). At this time also the Pelopor constables and NCOs received many brand new weapons for police and counter-insurgency operations, including the more famous AR-15 assault rifles. The subsequent assignment of this force was to infiltrate West Irian in Fak-Fak in May 1962 and engage in combat with the servicemen of the Royal Netherlands Army during Operation Trikora. The troops were also involved in the Confrontation of Malaysia in 1964 and at that time the Brimob Rangers troops (now Pelopor) in Indonesia faced the British Special Air Service.
The Pelopor troops serve as a troop formation unit and remain active in Brimob's operational system. Aside from the national regiments, each police region has a Pelopor regiment of two to four battalions.
Gegana
Gegana is a special branch detachment within the Brimob corps with special capabilities mainly in bomb disposal and counter-terrorist operations; it also specializes in hostage rescue, intelligence, and CBR (chemical, biological and radiological) defense. The national Gegana unit is organized into a battalion headquarters company and five (5) detachments:
Intelligence Detachment
Bomb Disposal Detachment
Anti-Terror Detachment
Anti-Anarchist Detachment
CBR (Chemical, biological and radiological) Detachment
The unit was formed in 1976 as a detachment, originally meant to deal with aircraft hijackings. In 1995, with the expansion of Brimob, the Gegana Detachment was enlarged to become the 2nd BRIMOB Regiment, though only a select few specialists are highly skilled in its specialties. Gegana has no battalions or companies: the regiment is broken down into several detachments, each detachment is split into sub-detachments (sub-dens), and each sub-den is further subdivided into several units. A unit usually consists of 10 personnel, a sub-den of 40, and a detachment of about 280.
One operation is usually assigned to one unit. Therefore, from the 10 people in that unit, six are required to have special skills: two for EOD (Explosives and Ordnance Disposal), two for search and rescue operations, and two for counter-terrorist operations. In any operation, two experts are designated Operators One and Two while the rest of the unit members become the Support Team.
For example, in counter-terrorism operations the designated operators must have sharpshooting skills, the ability to negotiate, and expertise in storm-and-arrest procedures. These skills and operations are not meant to be lethal, because the main goal of every Gegana operation is to arrest suspects and bring them to court; unless the situation is compromised, Gegana avoids the use of lethal force.
In search and rescue operations, personnel are required to have the basic capabilities of diving, rappelling, shooting, and first aid. In bomb-disposal operations, the operators must be experts in their respective fields. Each Gegana member has been introduced to the various types of bombs, including the risks of handling them; there are specific procedures, with required timings, for handling each type of bomb.
Currently, Gegana's national battalion has three Explosive Ordnance Disposal (EOD) tactical vehicles.
Gegana battalions or companies are present in each provincial police unit.
Unit composition
Alongside the national units, regional formations of the Mobile Brigade are present in all 38 Regional Police Forces (Polda) in Indonesia, each of which represents a province. Each provincial Brimob unit comprises several detachments of Pelopor (organized into a regiment) and usually one or two Gegana detachments (small battalions or companies).
A Brimob unit of a regional police headquarters consists of the following:
Regional Mobile Brigade HQ Section (Si-yanma)
Planning and Administration Section (Subbagrenmin)
Intelligence Section (Si-intel)
Operational Section (Si-ops)
Provost (Internal affairs) Section (Si-provos)
Communications Technology Section (Si-tekkom)
Medical and Fitness Section (Si-kesjas)
Search and Rescue (SAR) Unit
Pelopor Regiment composed of:
Regional Pelopor Regimental HQ
A Detachment (Den-A)
B Detachment (Den-B)
C Detachment (Den-C)
D Detachment (Den-D) (large departments only)
Support units
Gegana Detachment (Den Gegana)
Detachment HQ
1-3 Subdetachments/Platoons
In some regional police headquarters, the Pelopor regiment runs only up to "C" Detachment (three battalion-sized detachments), while larger regional police HQs, such as the Jakarta Regional Metropolitan Police (Polda Metro Jaya), field up to "D" Detachment, for a total of four (4) detachments. Each Pelopor detachment consists of four (4) companies, and each company of three (3) platoons. The Gegana detachment is organized as a company in most police regions, but in larger ones it is a full battalion of two detachments and a headquarters company.
In the 2020s, the regional organization was amended with regional divisional commands (Pasukan Brigade Mobil), to which each provincial brigade reports directly. The BRIMOB divisions are led by police brigadier generals.
Controversies
2020s
During the Kanjuruhan Stadium disaster on 1 October 2022, the police, and in particular Brimob as the crowd control unit, deployed tear gas, which triggered a stampede as spectators tried to escape its effects. A crush formed at an exit, and fans were asphyxiated. The disaster claimed 135 lives, including two police officers and dozens of children under the age of 17. Several officers who operated the tear gas launchers were questioned, but only three officers, two high-ranking non-Brimob officers and one Brimob commander, were charged at trial. Of the three, only the Mobile Brigade commander was sentenced: one year and six months for violating Article 359, Article 360 paragraph 1, and Article 360 paragraph 2 of the Criminal Code (KUHP), namely negligence causing the death of or injury to another person. On 16 January 2023, Mobile Brigade members intimidated participants and disrupted the trial by chanting in the courtroom.
Gallery
See also
Detachment 88 or Densus 88, Indonesian special counter-terrorism squad
Mobile Police Command, the Vietnamese equivalent of BRIMOB
References
External links
Video about Brimob
51 Tahun Si Baret Biru
February 1962 – Summer 1963: In to Action
Gegana Operators In Action on Instagram
1946 establishments in Indonesia
Specialist law enforcement agencies of Indonesia
Non-military counterterrorist organizations
Bomb disposal
Non-military counterinsurgency organizations
Military units and formations of Indonesia in the Indonesian War of Independence | Mobile Brigade Corps | Chemistry | 3,428 |