id | url | text | source | categories | token_count
|---|---|---|---|---|---|
11,003,420 | https://en.wikipedia.org/wiki/Brown%20energy | Brown energy or brown power are terms that have been coined to describe energy produced from polluting sources as a contrast to green energy from renewable, non-polluting sources. The term "grey energy" or "gray energy" has been used instead, including by the United Nations.
See also
Brownout (electricity)
References
Energy | Brown energy | Physics | 67 |
27,422,394 | https://en.wikipedia.org/wiki/Urrao%20antpitta | The Urrao antpitta (Grallaria urraoensis), also known as Fenwick's antpitta (Grallaria fenwickorum), is a highly threatened species of bird found in the understory of cloud forest in the Andean highlands of Colombia. The first published description used the scientific name Grallaria fenwickorum (and English name Fenwick's antpitta); shortly afterward, a second description using the name Grallaria urraoensis was published. The editors of the latter recognized that the name likely was a junior synonym, but others have questioned the validity of the first description, and various authorities, including the International Ornithological Congress, have adopted G. urraoensis. Antioquia antpitta has been suggested as an English-language name compromise.
Discovery
The new species was discovered during banding sessions in September 2007 and February and March 2008 when Diego Carantón, then working as a researcher for a Colombian NGO, Fundación ProAves de Colombia, caught an unfamiliar Grallaria antpitta. It was also sound-recorded in late 2008. The population was thought to be a new species and was added to the Colombian checklist as "Grallaria sp." in 2009. Since 2008 many ornithologists and birders have seen, photographed, recorded and studied the new bird at the reserve, where a family party is seen daily at a feeding station alongside chestnut-naped antpittas. Luis Felipe Barrera and Avery Bartels, the authors of the description under the name Grallaria fenwickorum, based it on holotypic material from a living bird, but also included information based on two specimens that Carantón had collected earlier.
Their holotype comprises 14 feathers, taken from the wing, tail and body of a living bird which was banded, photographed, sound-recorded and measured in the field before being released, on 11 January 2010. In the description it was stated that the holotype material had been deposited, as tissue collection No. 699, at the José Celestino Mutis Natural History Museum of the Faculty of Sciences of the University of Pamplona. This was denied by people associated with the museum, which has neither a tissue collection nor anything deposited under No. 699. An associate of the museum did receive an envelope with the feathers, but he was not informed about its great significance and it was not moved to the collection until after the description of the new species. The museum does not have an ornithological curator or the means to preserve such an important sample. Consequently, they have forwarded the material to the relevant authorities to allow them to take charge in its depositing and preservation.
Besides this holotype, two specimens were previously collected by Carantón. He has stated that the second was not deliberately collected, but died in the mist net where it was caught, which is not an exceptional occurrence. According to Fundación ProAves these specimens were collected without their knowledge and without the necessary permit from the local government, and consequently neither was used as a holotype in their description, but one could possibly be designated as a neotype if the legal status was resolved. In 2011, the collector and ProAves (the collector was employed by them when the specimens were collected) were fined for breach of reporting requirements. ProAves maintain that the collection itself was irregular, but there was no such finding by the local government. One of the specimens was used as a holotype in the second description of the species, by Diego Carantón-Ayala and Katherine Certuche-Cubillos, where they coined the name Grallaria urraoensis.
Taxonomy and etymology
Within its genus, the bird is a typical member of the plain-coloured group due to its relatively small wings, fairly uniform upperparts and underparts without strong markings, relatively high tail / wing ratio, a convolute inner edge of the tarsus, and 12 rectrices. It is evidently most closely related to the brown-banded antpitta, G. milleri, because of similarities in voice and measurements and its generally plain plumage. Barrera and Bartels and other ornithologists have suggested that it is most closely related to the probably extinct subspecies G. m. gilesi, but Carantón and Certuche say that it may resemble G. m. milleri more closely than it does gilesi. They suggest that the present species, the brown-banded antpitta, and the Cundinamarca antpitta form a clade.
The genus name Grallaria is derived from the Latin word grallae, meaning "stilts", referring to the bird's relatively long legs. The specific name fenwickorum recognises George Fenwick, President of the American Bird Conservancy (ABC), and his family, who assisted Fundación ProAves (ABC's partner organization in Colombia) in the purchase of land, now the Colibrí del Sol Bird Reserve. Based on present knowledge, the antpitta is restricted to the reserve and its immediate surroundings. ProAves's suggested English name also honours Fenwick, while the Spanish common name Tororoi de Urrao is given after the municipality of Urrao, where the bird is found. Tororoi is a general Spanish name used for most antpitta species. The creation of a type specimen without killing an individual follows the policy of the ABC.
Description
The bird most closely resembles the brown-banded antpitta, which is endemic to the Cordillera Central of Colombia, but it has a slate-grey breast and lacks the brown flanks and breast band of the other species. Weights, flat wing chords, tail lengths, and tarsus lengths have been recorded for the living bird from which Barrera and Bartels' holotype material was derived, as well as for the two collected specimens. The sexes are similar in appearance, as with most other antpittas.
A captured fledgling was covered with dark grey down with brown edges above and was buff below. Its feet were dark pink; its bill was black above and orange below, with conspicuous red-orange edges. A captured juvenile looked scaled, with patches of chestnut-edged black down intermixed with grey feathers on much of its body, and a buff belly. Its bill resembled that of the fledgling.
The song comprises three notes of increasing length and frequency. The birds sing more early in the year. The call is a single note, higher-pitched than the song, which rises, falls, and rises again. The birds often give it in response to loud noises and playbacks of its vocalisations. They call more later in the year. Both song and call resemble those of the brown-banded antpitta, but Fenwick's antpitta's notes are shorter and lower-pitched, and those of its song are separated by wider intervals.
Distribution and habitat
The known distribution of the bird is limited to the Urrao municipality in and near the Colibrí del Sol Bird Reserve, a reserve on the south-eastern slope of the Páramo del Sol massif at the northern end of the Cordillera Occidental of Colombia, west of Medellín, Colombia's second-largest city. The massif holds more relatively intact páramo and Polylepis woodland than all the other páramos in the region combined. There the bird is restricted to upper montane cloud forest dominated by Colombian oak, where most territories contain Chusquea bamboo thickets. It is suspected that its range may be larger than currently known, but so far surveys have failed to confirm this.
Behaviour
The species exhibits behaviour typical of other members of its genus; it is a shy, terrestrial forager for insects (especially beetles) in the leaf-litter within the forest understorey. It ascends to higher perches (up to 1.5 m above the ground) to sing, and is most active and vocal in the hours following dawn and prior to dusk.
It usually occurs in pairs, less often singly, and one group of three has been observed.
Reproduction
The males captured in February and March had enlarged testes, typical of breeding birds. The fledgling and an adult with old brood patches were observed in June. These data and song activity from February to April (a dry season) suggest that the breeding season begins early in the year, possibly as early as January, and extends for several months.
As in other Grallaria species, the fledgling was less developed than those of most passerines, and both parents fed it earthworms.
Conservation status
The bird has a very restricted known range, limited to the Colibrí del Sol reserve and its immediate vicinity, while previous surveys in similar habitat in the region have failed to record the species. Moreover, habitat used by the bird has been extensively cleared for pasture, and the area is rich in minerals. Based on the small area occupied by the 24 known territories, the conservative global population estimate is 57–156 territories. Both articles on the new species propose that the IUCN classify Fenwick's antpitta as critically endangered, and this will be followed in the forthcoming 2011 edition of the BirdLife International list, which is the authority used for birds by the IUCN. Although it is protected in the Colibrí del Sol reserve, it needs further protective measures. The single bird or pair that was known from outside the reserve has not been recorded since mid-2010 and appears to have disappeared.
Controversy over the discovery
The first description was published in Conservación Colombiana, the journal of Fundación ProAves. It was accompanied by an editorial giving the reasons that Diego Carantón, who discovered the bird, was not among the authors of the paper. The editorial accused Carantón of taking specimens illegally, as well as of violating his contract by omitting mention of his discovery from his monthly reports to Fundación ProAves and by trying to deprive the foundation of its intellectual property in the discovery. Specifically, it said that the Fundación had learned of the discovery through third parties in October 2008. Attempts to agree on a publication authored by Carantón and members of Fundación ProAves failed, and then Carantón and others tried to publish a description of the species in the journal The Condor without notifying the Fundación. The Condor rejected the manuscript pending resolution of the dispute. Staff members of Fundación ProAves went to the Colibrí del Sol reserve and in January 2010 caught a bird whose feathers they collected and used as the basis of their publication (May 2010), issued without Carantón.
In June 2010 (though dated May 2010), a second description of the new species by Carantón and another biologist in Colombia, Katherine Certuche, appeared in Ornitología Colombiana, the journal of the Asociación Colombiana de la Ornitología, edited by the ornithologists Carlos Daniel Cadena and F. Gary Stiles. It was accompanied by an editorial describing Stiles's and Cadena's involvement with Carantón and Certuche's paper starting shortly after Fundación ProAves found out about the work. In this account, Cadena attempted to mediate but withdrew because of conflicts with Fundación ProAves. The editorial adds a reason that Carantón's collection of specimens may have been lawful, and notes that in any case, none of the legal accusations against him had been decided by a court. Further, the attempt at joint publication by Carantón, Certuche, and Fundación ProAves scientists failed because Fundación ProAves insisted that Carantón could not be the corresponding author and that Fundación ProAves had to have full control over the final text.
After The Condor rejected Carantón and Certuche's manuscript, they submitted it to Ornitología Colombiana, which decided to publish it despite the previous description of the species. Cadena and Stiles noted that ProAves had not given Carantón the possibility to answer their accusations before they were published, and said the description by Barrera and Bartels could be a violation of Carantón's moral rights, which are protected under Colombian law. They also stated that the description by Barrera and Bartels violated the ICZN Code of Ethics, which Barrera and Bartels denied; in any case, the Code of Ethics is part of a section that zoologists are merely urged to follow (unlike most other sections of the ICZN Code, which zoologists are obliged to follow).
Subsequently, the editor-in-chief of The Condor voiced his strong discontent with the actions of ProAves, suggested the description by Barrera and Bartels conflicted with the very spirit of the ICZN Code, and stated that he felt ProAves had "maneuvered to trick the Condor out of considering your [Carantón's] manuscript so that ProAves could publish its own type description of the antpitta."
In 2011, the local government fined Carantón and ProAves (Carantón was employed by them when the specimens were collected) for breach of reporting requirements. ProAves maintain that the collection itself was irregular, but there was no such finding by the local government.
Another controversy pertains to determining the valid scientific name for the new species. The G. fenwickorum description was published before the G. urraoensis description (18 May 2010 vs 24 June 2010); thus, all else being equal, the Principle of Priority dictates that G. fenwickorum would be the valid name and G. urraoensis would be considered a junior synonym. However, the description of G. fenwickorum was unconventional in several respects. Instead of clearly designating a single specimen as the type, it designated, in a complex statement, both a sample of feathers and the photographed bird as the type specimen.
The rest of the description is also ambiguous regarding the nature of the type specimen. This ambiguity, by itself, may be considered problematic since the ICZN Code requires type specimens to be designated unambiguously in modern descriptions (ICZN Art. 16.4). The ambiguity also affects determining whether the type specimen was preserved (if it is the sample of feathers) or not (if it is the bird that was released). To complicate matters, there is evidence that the sample of feathers does not belong to the bird depicted in the article.
Because of these problems and other issues, it has been argued that Barrera and Bartels failed to comply with minimum requirements stipulated in the ICZN Code and thus that the name fenwickorum is not available (i.e. not valid). Based on these arguments, the American Ornithologists' Union's South American Classification Committee has accepted the species as G. urraoensis. (Its members include Cadena, who abstained from the vote on the name, and Stiles, who voted for urraoensis.)
However, in 2018, the International Commission on Zoological Nomenclature rejected a petition to suppress the fenwickorum name, and ruled, in ICZN Opinion 2414, that "The available specific name Grallaria fenwickorum Barrera & Bartels in Barrera, Bartels & Fundación ProAves de Colombia, 2010 remains valid for the species of antpitta involved. The issue is left open for subsequent workers to make new proposals to the Commission." Setting aside a ruling given by the Commission without its consent is explicitly forbidden by the Code (ICZN Art. 80.9). As Opinion 2414 clearly indicates that Grallaria fenwickorum is available and valid, this is how it is to be treated, and the continued use of Grallaria urraoensis as the name of the species actually violates the Code.
Finally, the two descriptions also proposed different English names for the bird. Time will tell whether Fundación ProAves' English name, Fenwick's antpitta, or Carantón and Certuche's English name, Urrao antpitta, will prove more popular, but the only completely uninvolved authority that has taken a stance on this matter has avoided taking sides by coining a new name, Antioquia antpitta; the bird's known range lies entirely within the Antioquia Department. The two articles that described the species proposed the same Spanish name, tororoi de Urrao.
References
Notes
Sources
External links
Photographs of Grallaria fenwickorum
Sound recordings by the describing teams
as Grallaria fenwickorum:
Song
and as Grallaria urraoensis:
Song
Call
Urrao antpitta
Birds of the Colombian Andes
Endemic birds of Colombia
Critically endangered animals
Critically endangered biota of South America
Controversial bird taxa
Naming controversies | Urrao antpitta | Biology | 3,439 |
49,241,355 | https://en.wikipedia.org/wiki/Gastruloid | Gastruloids are three-dimensional aggregates of embryonic stem cells (ESCs) that, when cultured in specific conditions, exhibit an organization resembling that of an embryo. They develop with three orthogonal axes and contain the primordial cells for various tissues derived from the three germ layers, without the presence of extraembryonic tissues. Notably, they do not possess forebrain, midbrain, and hindbrain structures. Gastruloids serve as a valuable model system, an embryonic organoid, for studying mammalian development (including human development) and associated disease.
Background
The Gastruloid model system draws its origins from work by Marikawa et al. In that study, small numbers of mouse P19 embryonal carcinoma (EC) cells were aggregated as embryoid bodies (EBs) and used to model and investigate the processes involved in anteroposterior polarity and the formation of a primitive streak region. In this work, the EBs were able to organise themselves into structures with polarised gene expression, axial elongation/organisation and up-regulation of posterior mesodermal markers. This was in stark contrast to work using EBs from mouse ESCs, which had shown some polarisation of gene expression in a small number of cases but no further development of the multicellular system.
Following this study, the Martinez Arias laboratory in the Department of Genetics at the University of Cambridge demonstrated how aggregates of mouse embryonic stem cells (ESCs) were able to generate structures that exhibited collective behaviours with striking similarity to those during early development, such as symmetry-breaking (in terms of gene expression), axial elongation and germ-layer specification. To quote from the original paper: "Altogether, these observations further emphasize the similarity between the processes that we have uncovered here and the events in the embryo. The movements are related to those of cells in gastrulating embryos and for this reason we term these aggregates ‘gastruloids’". As noted by the authors of this protocol, a crucial difference between this culture method and previous work with mouse EBs was the use of small numbers of cells, which may be important for generating the correct length scale for patterning, and the use of culture conditions derived from directed differentiation of ESCs in adherent culture.
Brachyury (T/Bra), a gene which marks the primitive streak and the site of gastrulation, is up-regulated in the Gastruloids following a pulse of the Wnt/β-Catenin agonist CHIR99021 (Chi; other factors have also been tested) and becomes regionalised to the elongating tip of the Gastruloid. From or near the region expressing T/Bra, cells expressing the mesodermal marker Tbx6 are extruded, similar to cells in the gastrulating embryo; it is for this reason that these structures are called Gastruloids.
Further studies revealed that the events that specify T/Bra expression in gastruloids mimic those in the embryo. After seven days, gastruloids exhibit an organization very similar to that of a midgestation embryo, with spatially organized primordia for all mesodermal (axial, paraxial, intermediate, cardiac, cranial and hematopoietic) and endodermal derivatives as well as the spinal cord. They also implement Hox gene expression with the same spatiotemporal coordinates as the embryo. Gastruloids lack a brain as well as extraembryonic tissues, but characterisation of the cellular complexity of gastruloids at the level of single-cell and spatial transcriptomics reveals that they contain representatives of the three germ layers, including neural crest, primordial germ cells and placodal primordia.
A feature of gastruloids is a disconnect between their transcriptional programs and their morphogenesis. However, changes in the culture conditions can elicit morphogenesis; most significantly, gastruloids have been shown to form somites and early cardiac structures. In addition, interactions between gastruloids and extraembryonic tissues promote an anterior, brain-like polarised tissue.
Gastruloids have recently been obtained from human ESCs, which gives developmental biologists the ability to study early human development without needing human embryos. Importantly though, the human gastruloid model is not able to form a human embryo, meaning that it is non-intact, non-viable and not equivalent to in vivo human embryos.
The term Gastruloid has been expanded to include self-organised arrangements of human embryonic stem cells on micropatterned surfaces that mimic early patterning events in development; these arrangements should be referred to as 2D gastruloids.
References
Stem cells
Tissue engineering
Animal developmental biology | Gastruloid | Chemistry,Engineering,Biology | 1,015 |
21,716,689 | https://en.wikipedia.org/wiki/Kelasuri%20Wall | The Kelasuri Wall () or Great Abkhazian Wall () is a stone wall located to the east of Sokhumi in Abkhazia, Georgia. The exact time of its construction is not known; several dates ranging from antiquity to the 17th century were suggested, although more recent works have provisionally favoured construction in the 6th century AD. The wall featured about 300 towers, most of them now entirely or largely ruined.
Location
The wall begins near the mouth of the Kelasuri River, where the ruins of a large tower remain. It runs to the east, crossing the Kodori River near Tsebelda fortress, then passes near Tkvarcheli and terminates near the village of Lekukhona on the right bank of the Enguri.
Most of the fortifications are located in the western part of the wall between Kelasuri and Mokvi Rivers. Kelasuri's left bank and mountain passes were most heavily fortified. On the other hand, only four towers were found between Tkvarcheli and Enguri.
Towers
The wall was not continuous, as its builders made use of natural obstacles such as steep slopes and gorges. 279 towers belonging to the wall have been identified, about a hundred of which are extant. The usual distance between towers is 40–120 m; where there was no continuous wall, some towers were 300, 500 and 1,000 m apart.
All the towers are rectangular (7 by 8 or 8 by 9 meters), 4–6 m high and have shallow foundations. Each tower had a door in its southern wall framed by massive stone beams, sometimes a narrow staircase was also added. Embrasures were usually located in the towers' northern and western walls on the second floor.
History of construction
Since the wall was first examined scientifically in the early 19th century, many hypotheses about who built it, and when, have been published. For example, the Swiss traveller Frédéric Dubois de Montpéreux asserted that the wall was built by Greeks in the last centuries BC to protect their colony of Dioscurias (which he erroneously placed near the Kodori cape).
According to Mikhail Ivashchenko, the wall was built by Byzantines in the 4th century to protect their possessions and control mountain passes. He connected the name of the river Kelasuri with Byzantine Greek kleisoura, a Byzantine territorial unit smaller than a theme. Several other historians supported this date although they could not agree on the length and orientation of the wall.
Yuri Voronov, a well-known Abkhazian historian and archaeologist, examined the Abkhazian wall in 1966–1971 and proposed a new date for its construction. According to Voronov, the Prince of Mingrelia, Levan II Dadiani, built the Kelasuri Wall between 1628 and 1653 to protect his fiefdom from Abkhaz invasions (though at that time the Principality of Abkhazia was a nominal vassal of Mingrelia). Per Voronov's work, the embrasures in the wall were made for firearms; he also quoted the Georgian historian Vakhushti and the Italian missionary Arcangelo Lamberti, who both wrote about the wall built by Megrelian princes for protection from the Abkhaz.
References
Fortifications in Abkhazia
Ruins in Georgia (country)
Border barriers
Fortification lines | Kelasuri Wall | Engineering | 671 |
318,052 | https://en.wikipedia.org/wiki/Gaussian%20rational | In mathematics, a Gaussian rational number is a complex number of the form p + qi, where p and q are both rational numbers.
The set of all Gaussian rationals forms the Gaussian rational field, denoted Q(i), obtained by adjoining the imaginary number i to the field of rationals Q.
Properties of the field
The field of Gaussian rationals provides an example of an algebraic number field that is both a quadratic field and a cyclotomic field (since i is a 4th root of unity). Like all quadratic fields it is a Galois extension of Q with Galois group cyclic of order two, in this case generated by complex conjugation, and is thus an abelian extension of Q, with conductor 4.
As with cyclotomic fields more generally, the field of Gaussian rationals is neither ordered nor complete (as a metric space). The Gaussian integers Z[i] form the ring of integers of Q(i). The set of all Gaussian rationals is countably infinite.
The field of Gaussian rationals is also a two-dimensional vector space over Q, with natural basis {1, i}.
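To make the field operations concrete, here is a minimal Python sketch; the GaussianRational class and its methods are illustrative names, not part of any standard library. It represents elements of Q(i) with exact rational real and imaginary parts and checks closure under inversion via the norm p² + q²:

```python
from fractions import Fraction

class GaussianRational:
    """An element p + qi of Q(i), with p and q exact rationals."""
    def __init__(self, re, im=0):
        self.re, self.im = Fraction(re), Fraction(im)

    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return GaussianRational(self.re * other.re - self.im * other.im,
                                self.re * other.im + self.im * other.re)

    def inverse(self):
        # (p + qi)^(-1) = (p - qi) / (p^2 + q^2); the norm p^2 + q^2 is a
        # nonzero rational for any nonzero element, so Q(i) is a field
        n = self.re**2 + self.im**2
        return GaussianRational(self.re / n, -self.im / n)

    def __repr__(self):
        return f"{self.re} + ({self.im})i"

z = GaussianRational(Fraction(3, 4), Fraction(-2, 5))
print(z * z.inverse())  # 1 + (0)i -- z times its inverse is the identity
```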
Ford spheres
The concept of Ford circles can be generalized from the rational numbers to the Gaussian rationals, giving Ford spheres. In this construction, the complex numbers are embedded as a plane in a three-dimensional Euclidean space, and for each Gaussian rational point in this plane one constructs a sphere tangent to the plane at that point. For a Gaussian rational represented in lowest terms as p/q (i.e. p and q are relatively prime Gaussian integers), the radius of this sphere should be 1/(2|q|²), where |q|² = qq̄ is the squared modulus of q, and q̄ is the complex conjugate of q. The resulting spheres are tangent for pairs of Gaussian rationals p/q and r/s with |ps − qr| = 1, and otherwise they do not intersect each other.
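A short Python sketch of these formulas (the helper names are illustrative) computes the sphere radius 1/(2|q|²) and tests the tangency criterion |ps − qr| = 1:

```python
from fractions import Fraction

def norm_sq(g: complex) -> int:
    """|g|^2 = g * conj(g) for a Gaussian integer g (integral parts)."""
    return int(g.real) ** 2 + int(g.imag) ** 2

def ford_sphere_radius(p: complex, q: complex) -> Fraction:
    """Radius 1/(2|q|^2) of the Ford sphere tangent to the plane at p/q."""
    return Fraction(1, 2 * norm_sq(q))

def tangent(p: complex, q: complex, r: complex, s: complex) -> bool:
    """Spheres at p/q and r/s touch exactly when |ps - qr| = 1."""
    return norm_sq(p * s - q * r) == 1

# Example: the sphere at 0/1 and the sphere at i/(1+i)
p, q = 0 + 0j, 1 + 0j
r, s = 1j, 1 + 1j
print(ford_sphere_radius(r, s))  # 1/4, since |1+i|^2 = 2
print(tangent(p, q, r, s))       # True: |0*(1+i) - 1*i| = 1
```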
References
Cyclotomic fields | Gaussian rational | Mathematics | 404 |
398,580 | https://en.wikipedia.org/wiki/Dyssomnia | Dyssomnias are a broad classification of sleeping disorders involving difficulty getting to sleep, remaining asleep, or of excessive sleepiness.
Dyssomnias are primary disorders of initiating or maintaining sleep or of excessive sleepiness and are characterized by a disturbance in the amount, quality, or timing of sleep.
Patients may complain of difficulty getting to sleep or staying asleep, intermittent wakefulness during the night, early morning awakening, or combinations of any of these. Transient episodes are usually of little significance. Stress, caffeine, physical discomfort, daytime napping, and early bedtimes are common factors.
Types
There are over 31 recognized kinds of dyssomnias. The three major groups, along with types within each group, include:
Intrinsic sleep disorders
idiopathic hypersomnia
narcolepsy
periodic limb movement disorder
restless legs syndrome
obstructive sleep apnea
central sleep apnea syndrome
sleep state misperception
psychophysiologic insomnia
recurrent hypersomnia
post-traumatic hypersomnia
central alveolar hypoventilation syndrome
Extrinsic sleep disorders – 13 disorders recognized, including
alcohol-dependent sleep disorder
food allergy insomnia
inadequate sleep routine
Circadian rhythm sleep disorders, both intrinsic and extrinsic – 6 disorders recognized, including
advanced sleep phase syndrome
delayed sleep phase syndrome
jet lag
shift work sleep disorder
See also
Parasomnia
Sleep problems in women
Somnolence
References
External links
Sleep disorders | Dyssomnia | Biology | 311 |
680,229 | https://en.wikipedia.org/wiki/List%20of%20national%20and%20international%20statistical%20services | The following is a list of national and international statistical services.
Central national statistical services
Nearly every country in the world has set up a central public sector unit entirely devoted to the production, harmonisation and dissemination of the official statistics that the public sector and the national community need to run, monitor and evaluate their operations and policies. This central statistical organisation does not produce every official statistic, as other public sector organisations, such as the national central bank or the ministries in charge of agriculture, education or health, may be charged with producing and disseminating sector-policy-oriented statistical data. Statistical legislation and regulation generally attribute responsibilities and authorities according to statistical domains or functions, in addition to those of the central unit.
The table below lists these central statistical organisations by country. The United States has no central producing unit, but several units (also listed below) have been given responsibility over various federal statistics domains (see also: Federal Statistical System of the United States).
Africa
Americas
Asia
Europe
(Institutions from countries marked with * are members of Eurostat's European Statistical System (ESS).)
Oceania
Autonomous statistical services at sub-national level
Some countries are politically organised as federations of states or of autonomous regions; a specific territory might also have been given partial autonomy. Several of these sub-national regional units have set up their own quasi-independent statistical departments. A list is presented in Sub-national autonomous statistical services.
International statistical services
United Nations organisations
Intergovernmental Development and Central Banks
Regional intergovernmental organisations
Other organisations
See also
Official statistics
Statistics
List of statistical topics
List of academic statistical associations
National agencies responsible for GDP measurement
External links
World Bank directory of national statistic sites
OECD Worldwide statistical sources
UN Statistics Division Information on National Statistical Systems
Demography
Lists of government agencies
Lists of scientific organizations
Official statistics
Statistics-related lists
Government institutions | List of national and international statistical services | Environmental_science | 363 |
4,341,065 | https://en.wikipedia.org/wiki/Tcelna | Tcelna (formerly known as Tovaxin) is an anti-T cell vaccine being studied in multiple sclerosis (MS). As of 2016 it is in phase II trials.
History
The company announced in late 2005 that the U.S. Food and Drug Administration had approved the protocol for the Phase IIb clinical trial of Tcelna.
The multicenter, randomized, double-blind, placebo-controlled Phase IIb clinical study of 150 patients was designed to evaluate the efficacy, safety and tolerability of the therapy in patients with clinically isolated syndrome (CIS) and early relapsing-remitting MS (RR-MS).
The first phase of the trial finished in March 2008. All patients who completed the trial were to be eligible for an optional one-year extension study, OLTERMS, to receive Tcelna open-label without a placebo group; however, that program was terminated suddenly for lack of funding.
After several financial troubles, the trials were restarted in 2011 and Opexa rebranded the therapy, previously called Tovaxin, with the new name Tcelna.
References
External links
BBC Article 3/8/06
UPI Health Brief
Opexa Therapeutics Tcelna Page
The US clinical trial registry NCT01684761 Tcelna phase II trial for SPMS.
Vaccines | Tcelna | Biology | 278 |
12,207,850 | https://en.wikipedia.org/wiki/Comparison%20of%20software%20for%20molecular%20mechanics%20modeling | This is a list of computer programs that are predominantly used for molecular mechanics calculations.
See also
Car–Parrinello molecular dynamics
Comparison of force-field implementations
Comparison of nucleic acid simulation software
List of molecular graphics systems
List of protein structure prediction software
List of quantum chemistry and solid-state physics software
List of software for Monte Carlo molecular modeling
List of software for nanostructures modeling
Molecular design software
Molecular dynamics
Molecular modeling on GPUs
Molecule editor
Notes and references
External links
SINCRIS
Linux4Chemistry
Collaborative Computational Project
World Index of Molecular Visualization Resources
Short list of Molecular Modeling resources
OpenScience
Biological Magnetic Resonance Data Bank
Materials modelling and computer simulation codes
A few tips on molecular dynamics
atomistic.software - atomistic simulation engines and their citation trends
Computational chemistry software
Computational chemistry
Software comparisons
Molecular dynamics software
Molecular modelling software
Science software | Comparison of software for molecular mechanics modeling | Chemistry,Technology | 167 |
49,838,589 | https://en.wikipedia.org/wiki/Track%20hub | A track hub is a structured directory of genomic data, such as gene expression or epigenetic data, viewable over the web with a genome browser. Track hubs are defined by the track hub standard. Originally developed as part of the UCSC genome browser, they are now supported by Ensembl and BioDalliance.
Track hubs are a useful and efficient tool for visualizing large data sets. Collections of wiggle plots produced by a transcriptomics study can be organized hierarchically into so-called composite and super-tracks.
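As a concrete illustration, a minimal hub consists of three plain-text stanza files. The sketch below follows the track hub standard's stanza layout, but every hub name, label, path and URL is a placeholder; it shows a composite track grouping two bigWig (wiggle) coverage tracks:

```text
# hub.txt -- top-level hub metadata
hub exampleRnaSeqHub
shortLabel RNA-seq hub
longLabel Example RNA-seq coverage tracks (placeholder labels)
genomesFile genomes.txt
email maintainer@example.org

# genomes.txt -- one stanza per genome assembly the hub covers
genome hg38
trackDb hg38/trackDb.txt

# hg38/trackDb.txt -- a composite grouping two coverage tracks
track rnaSeqCoverage
compositeTrack on
shortLabel RNA-seq coverage
longLabel RNA-seq read coverage, grouped as one composite
type bigWig

    track sample1
    parent rnaSeqCoverage on
    bigDataUrl https://example.org/hub/sample1.bw
    shortLabel Sample 1
    longLabel Coverage for sample 1 (placeholder)
    type bigWig

    track sample2
    parent rnaSeqCoverage on
    bigDataUrl https://example.org/hub/sample2.bw
    shortLabel Sample 2
    longLabel Coverage for sample 2 (placeholder)
    type bigWig
```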
References
External links
https://genome.ucsc.edu/goldenpath/help/hgTrackHubHelp.html
Bioinformatics
Computer file formats | Track hub | Chemistry,Engineering,Biology | 148 |
70,819,323 | https://en.wikipedia.org/wiki/Mary%20Lynn%20Realff | Mary Lynn Realff (born 1965) is an American mechanical engineer and materials scientist specializing in researching the mechanical properties of textiles. She is an associate professor in the School of Materials Science and Engineering at Georgia Tech, and co-director of the Georgia Tech Center for Women, Science, and Technology. Beyond her research on textiles, she is also known for her explorations of group work in engineering education.
Education and career
Realff graduated in 1987 from Georgia Tech with a bachelor's degree in textile engineering. She earned a Ph.D. in Mechanical Engineering and Polymer Science and Technology from the Massachusetts Institute of Technology in 1992. She then became an associate professor at Georgia Tech, where she teaches undergraduate and graduate courses on textile structures and polymer science. She was elected a Fellow of the American Society of Mechanical Engineers in 2007, at which time she was a program director for Materials Processing and Manufacturing at the National Science Foundation.
Research/publications
Publications
Looking Ahead: Fostering Effective Team Dynamics in the Engineering Classroom and Beyond (2021)
Fatigue Response and Constitutive Behavior Modeling of Poly(ethylene terephthalate) Unreinforced and Nanocomposite Fibers Using Genetic Neural Networks (2012)
Advanced Fabrics (2003)
Objective and Subjective Analysis of Knitted Fabric Bagging (2002)
References
External links
1965 births
Living people
American mechanical engineers
American materials scientists
American women engineers
Textile engineers
Engineering educators
Georgia Tech alumni
Massachusetts Institute of Technology faculty
Fellows of the American Society of Mechanical Engineers | Mary Lynn Realff | Engineering | 306 |
18,977,704 | https://en.wikipedia.org/wiki/Motorola%20e380 | The Motorola E380 is a small phone made by Motorola. It was released in the third quarter of 2003. In 2006 Motorola released a Dolly magazine edition in Australia; the phone was discontinued after the Dolly version.
External links
Motorola E380 review
E380 | Motorola e380 | Technology | 55 |
1,582,785 | https://en.wikipedia.org/wiki/Flexiviridae | Flexiviridae was a family of viruses, named for their filamentous and highly flexible virions. Members of the family infect plants. In 2009, the family was dissolved and replaced with four families, each of which still contains "flexiviridae" in its name:
Alphaflexiviridae
Betaflexiviridae
Gammaflexiviridae
Deltaflexiviridae
Flexiviridae was incertae sedis, but the new families are placed in the order Tymovirales.
References
Obsolete virus taxa
Unaccepted virus taxa | Flexiviridae | Biology | 106 |
39,531,188 | https://en.wikipedia.org/wiki/MYRRHA | The MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) is a design project of a nuclear reactor coupled to a proton accelerator. This makes it an accelerator-driven system (ADS). MYRRHA will be a lead-bismuth cooled fast reactor with two possible configurations: sub-critical or critical.
The project is managed by SCK CEN, the Belgian Centre for Nuclear Research. Its design will be adapted as a function of the experience gained from a first research project with a small proton accelerator and a lead-bismuth eutectic target: GUINEVERE.
MYRRHA is anticipated to be fully constructed by 2036; a first phase (the 100 MeV linac section of the accelerator) is expected to be completed in 2026 if successfully demonstrated.
Concept
In a traditional power-generating nuclear reactor, the nuclear fuel is arranged in such a way that the two or three neutrons released from a fission event will induce one other atom in the fuel to fission. This is known as criticality. To maintain this precise balance, a number of control systems are used like control rods and neutron poisons. In most such designs, a loss of control can lead to a runaway reaction, heating the fuel until it melts. Various feedback systems and active controls prevent this.
The concept behind a number of advanced reactor designs is to arrange the fuel so it is always below criticality. Under normal conditions, this would lead to the reaction rapidly "turning off" as the neutron counts continue to fall. In order to produce power, some other source of neutrons has to be provided. In most designs, these are provided from a second, much smaller reactor running on a neutron-rich fuel, like highly enriched uranium. This is the basis for the fast breeder reactor and similar designs. In order for this to work, the reactor generally has to use a coolant that has a low neutron cross-section; water would slow the neutrons down too much. Typical coolants for fast reactors are sodium or lead-bismuth.
In the accelerator driven reactor, these extra neutrons are instead provided by a particle accelerator. These produce protons which are shot into a target, normally a heavy metal. The energy of the protons causes neutrons to be knocked off the atoms in the target, a process known as neutron spallation. These neutrons are then fed into the reactor, making up the number needed to bring the reactor back to criticality. The MYRRHA design uses the lead-bismuth cooling fuel as the target, shooting the protons directly into the reactor core.
Components
MYRRHA is a research reactor project presently under development, aiming to demonstrate the feasibility of the ADS and lead-cooled fast reactor concepts, with various research applications from spent-fuel irradiation to material irradiation testing. A linear accelerator is under development to provide a beam of fast protons that hit a spallation target, producing neutrons. These neutrons are necessary to keep the nuclear reactor running when operated in sub-critical mode, but to increase its versatility the reactor is also designed to operate in critical mode, with fast neutron and thermal neutron zones.
Accelerator
The accelerator will accelerate protons to an energy of 600 MeV with a beam current of up to 4 mA. In subcritical mode, if the accelerator stops, the reactor power drops immediately. To avoid thermal cycles, the accelerator needs to be extremely reliable: MYRRHA aims at no more than 10 outages longer than three seconds per 100 days. A first prototype stage of the accelerator was started in 2020.
The accelerator and two targets are called Minerva, and construction was started in 2024.
ISOL@MYRRHA
The high reliability and intense beam current required for operating such a machine makes the proton accelerator potentially interesting for online isotope separation. Phase I of the project therefore also includes the design and feasibility study of ISOL@MYRRHA to investigate exotic isotopes.
Spallation target
The protons collide with a liquid lead-bismuth eutectic. The high atomic number of the target leads to a large number of neutrons via spallation.
Reactor
The reactor, of either pool or loop type, will be cooled by a lead-bismuth eutectic. Separated into a fast neutron zone and a thermal neutron zone, the reactor is planned to use a mixed oxide of uranium and plutonium (with 35 wt. %).
Two operating modes are foreseen: critical and sub-critical.
In sub-critical mode, the reactor is planned to run with a criticality under 0.95: on average, a fission reaction will induce less than one additional fission reaction, so the reactor does not have enough fissile material to sustain a chain reaction on its own and relies on the neutrons from the spallation target. As an additional safety feature, the reactor can be passively cooled when the accelerator is switched off.
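To see why a core at a criticality of 0.95 still produces substantial power, note that each external (spallation) neutron starts a fission chain whose expected total yield is the geometric series 1 + k + k² + … = 1/(1 − k_eff). The following minimal Python sketch illustrates this standard point-kinetics estimate; the k_eff values are illustrative, with only 0.95 taken from the text above:

```python
# Source multiplication in a subcritical, accelerator-driven core: each
# source neutron is amplified by the geometric series 1/(1 - k_eff).
# If the accelerator (the source) stops, the chains die out geometrically.

def source_multiplication(k_eff: float) -> float:
    """Expected neutrons per external source neutron, valid for k_eff < 1."""
    assert k_eff < 1.0, "formula applies only to a subcritical core"
    return 1.0 / (1.0 - k_eff)

for k in (0.90, 0.95, 0.98):
    print(f"k_eff = {k:.2f}: ~{source_multiplication(k):.0f}x amplification")
# k_eff = 0.95 gives ~20x, so the reactor power directly tracks the beam current.
```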
See also
ASTRID
Fast breeder reactor
Fast neutron reactor
Gas-cooled fast reactor
Generation IV reactor
Integral Fast Reactor
Sodium-cooled fast reactor
References
External links
Liquid metal fast reactors
Neutron sources
Nuclear reactors
Nuclear research reactors
Nuclear technology
Pressure vessels | MYRRHA | Physics,Chemistry,Engineering | 1,039 |
41,839,863 | https://en.wikipedia.org/wiki/INtime | The INtime Real-Time Operating System (RTOS) family is based on a 32-bit RTOS conceived to run time-critical operations with cycle times as low as 50 μs. The INtime RTOS runs on single-core, hyper-threaded, and multi-core x86 PC platforms from Intel and AMD. It supports two binary-compatible usage configurations: INtime for Windows, where the INtime RTOS runs alongside Microsoft Windows®, and INtime Distributed RTOS, where INtime runs alone.
Like its iRMX predecessors, INtime is a real-time operating system, and like DOSRMX and iRMX for Windows, it runs concurrently with a general-purpose operating system on a single hardware platform.
History
Initial Release
INtime 1.0 was originally introduced in 1997 in conjunction with the Windows NT operating system. Since then it has been upgraded to include support for all subsequent protected-mode Microsoft Windows platforms, Windows XP to Windows 10.
INtime can also be used as a stand-alone RTOS. INtime binaries are able to run unchanged when running on a stand-alone node of the INtime RTOS. Unlike Windows, INtime can run on an Intel 80386 or equivalent processor. Current versions of the Windows operating system generally require at least a Pentium level processor in order to boot and execute.
Version 2.2
After spinning off from Radisys in 2000 development work on INtime continued at TenAsys Corporation. In 2003 TenAsys released version 2.2 of INtime.
Notable features of version 2.2 include:
Real-time Shared Libraries, or RSLs, which are the functional equivalent of the Windows Dynamically Loaded Libraries, or DLLs.
Support for the development of USB clients, and USB host control drivers for OHCI, UHCI and EHCI (USB 2.0) devices.
A new timing acquisition and display application called "INscope" was released.
Notes
Real-time operating systems
Embedded operating systems | INtime | Technology | 405 |
23,030,775 | https://en.wikipedia.org/wiki/Mechanical%20systems%20drawing | Mechanical systems drawing is a type of technical drawing that shows information about heating, ventilating, air conditioning and transportation around the building (Elevators or Lifts and Escalator). It is a powerful tool that helps analyze complex systems. These drawings are often a set of detailed drawings used for construction projects; it is a requirement for all HVAC work. They are based on the floor and reflected ceiling plans of the architect. After the mechanical drawings are complete, they become part of the construction drawings, which is then used to apply for a building permit. They are also used to determine the price of the project.
Sets of drawings
Arrangement drawing
Arrangement drawings include information about the self-contained units that make up the system: table of parts, fabrication and detail drawing, overall dimension, weight/mass, lifting points, and information needed to construct, test, lift, transport, and install the equipment. These drawings should show at least three different orthographic views and clear details of all the components and how they are assembled.
Assembly drawing
The assembly drawing typically includes three orthographic views of the system: overall dimensions, weight and mass, identification of all the components, quantities of material, supply details, list of reference drawings, and notes. Assembly drawings detail how certain component parts are assembled.
An assembly drawing shows the order in which the product is put together, showing all the parts as if they were stretched out. This helps a welder understand how the product goes together and get an idea of where welds are needed.
Detail drawing
In detail drawings, components used to build the mechanical system are described in some detail to show that the designer's specifications are met: relevant codes, standards, geometry, weight, mass, material, heat treatment requirements, surface texture, size tolerances, and geometric tolerances.
Fabrication drawings
A fabrication is made up of many different parts. A fabrication drawing has a list of the parts that make up the fabrication. In the list, parts are identified (with balloons and leader lines) and complex details are included: welding details, material standards, codes and tolerances, and details about heat/stress treatments.
United Kingdom
Tender drawings
Special detailed drawing
Line diagrams and layouts indicating basic proposals, location of main items of plant, routes of main pipes, air ducts and cable runs in such detail as to illustrate the incorporation of the engineering services within the project as a whole.
Schematic drawing
The schematic is a line diagram, not necessarily to scale, that describes interconnection of components in a system. The main features of a schematic drawing show:
A two dimensional layout with divisions that show distribution of the system between building levels, or an isometric-style layout that shows distribution of systems across individual floor levels
All functional components that make up the system, i.e., plant items, pumps, fans, valves, strainers, terminals, electrical switchgear, distribution and components
Symbols and line conventions, in accordance with industry standard guidance
Labels for pipe, duct, and cable sizes where not shown elsewhere
Components that have a sensing and control function, and links between them—building management systems, fire alarms and HV controls
Major components, so their whereabouts in specifications and other drawings can be easily determined
Detailed design drawing
A drawing showing the intended locations of plant items and service routes in such detail as to indicate the design intent. The main features of detailed design drawings should be as follows:
Plan layouts to a scale of at least 1:100.
Plant areas to a scale of at least 1:50 and accompanied by cross-sections.
The drawings do not indicate precise positions of services, but it should be feasible to install the services within the general routes indicated. It should be possible to produce co-ordination drawings or installation drawings without major re-routing of the services.
Represent pipework by single line layouts.
Represent ductwork by either double or single line layouts as required to ensure that the routes indicated are feasible.
Indicate on the drawing the space available for major service routing in both horizontal and vertical planes.
Installation drawing
A drawing, based on the detailed design drawing or a co-ordination (interface) drawing, whose primary purpose is to define the information needed by the tradesmen on site to install the works, or to co-ordinate concurrent work among the various engineering trades. The main features of typical installation drawings are:
Plan layouts to a scale of at least 1:50, accompanied by cross-sections to a scale of at least 1:20 for all congested areas
A spatially coordinated drawing, i.e., show no physical location clashes between the system components
Allowance for inclusion of all supports and fixtures necessary to install the works
Allowance for the service at its widest point for spaces between pipe and duct runs, for insulation, standard fitting dimensions, and joint widths
Installation details provided from shop drawings
Installation working space; space to facilitate commissioning and space to allow on-going operation and maintenance in accordance with the relevant health and safety requirements
Plant and equipment including alternatives and options
Dimensions where the positioning of services is too important to be left to the installers
Plant room layouts to a scale of at least 1:20, accompanied by cross-sections and elevations to a scale of at least 1:20
Record (as installed, as-built) drawing
A drawing showing the building and services installations as installed at the date of practical completion. Generally the record drawing is a development of the installation drawing. The main features of the record drawings should be as follows.
Provide a record of the locations of all the systems and components installed including pumps, fans, valves, strainers, terminals, electrical switchgear, distribution and components.
Use a scale not less than that of the installation drawings.
Have marked on the drawings the positions of access points for operating and maintenance purposes.
The drawings should not be dimensioned unless the inclusion of a dimension is considered necessary for location.
Builder's work Drawing
Design stage
These drawings show the provisions required to accommodate the services that significantly affect the design of the building structure, fabric, and external works. This includes drawings (and schedules) of work that the building trade carries out, or that must be cost-estimated at the design stage, e.g., plant bases.
Installation stage
These drawings show requirements for building works necessary to facilitate installing the engineering services (other than where it is appropriate to mark out on site). Information on these drawings includes details of all:
Bases for plant formed in concrete, brickwork or blockwork, to a scale of not less than 1:20
Attendant builders work, holes, chases, etc. for conduits, cables and trunking etc. and any item where access for a function of the installation is required to a scale of not less than 1:100
Purpose made brackets for supporting service or plant/equipment to a scale of not less than 1:50
Accesses into ceilings, ducts, etc. at a scale of not less than 1:50
Special fixings, inserts, brackets, anchors, suspensions, supports etc. at a scale of not less than 1:20
Sleeves, puddle flanges, access chambers at a scale not less than 1:20
Details to include
Size, type, and layout of ducting
Diffusers, heat registers, return air grilles, dampers
Turning vanes, ductwork insulation
HVAC unit
Thermostats
Electrical, water, and/or gas connections
Ventilation
Exhaust fans
Symbol legend, general notes and specific key notes
Heating and/or cooling load summary
Connection to existing systems
Demolition of part or all of existing systems
Smoke detector and firestat re-ducting
Thermostat programming
Heat loss and heat gain calculations
Special condition
Job outlook
Mechanical drafters held about 80,000 jobs in the United States during 2008. From 2008 to 2018, the number of mechanical drafting jobs is expected to neither increase nor decrease. Prospective drafters are encouraged either to take two additional years of training in drafting school after high school or to attend a four-year college/university to develop better technical skills and gain more experience with CAD (computer-aided design).
Income of mechanical drafters in 2008
Lowest 10% made $29,390.
Highest 10% made $71,340.
Middle 50% made between $36,490 to $59,010.
Median: $46,640.
ADDA certification
The American Design Drafting Association (ADDA) has developed a Drafter Certification Test. The test assesses the drafter's skill in basic drafting concepts: geometric construction, working drawings, and architectural terms and standards. The test is administered periodically at ADDA-authorized sites.
Regulations in Canada
Mechanical system drawings must abide by all of the following regulations: the National Building Code of Canada, the National Fire Code, and Model National Energy Code of Canada for Buildings. For residential projects, The National Housing Code of Canada and the Model National Energy Code of Canada for Houses must also be followed. These drawings must also adhere to local and provincial codes and bylaws.
See also
Architectural drawing
Electrical drawing
Engineering drawing
Plumbing drawing
Structural drawing
Plan (drawing)
References
External links
Examples of mechanical drawings
Mechanical Drafters Occupational Employment and Wages, May 2011
Technical drawing | Mechanical systems drawing | Engineering | 1,887 |
12,202,917 | https://en.wikipedia.org/wiki/Taylor%20dispersion | Taylor dispersion or Taylor diffusion is an apparent or effective diffusion of some scalar field arising on the large scale due to the presence of a strong, confined, zero-mean shear flow on the small scale. Essentially, the shear acts to smear out the concentration distribution in the direction of the flow, enhancing the rate at which it spreads in that direction. The effect is named after the British fluid dynamicist G. I. Taylor, who described the shear-induced dispersion for large Peclet numbers. The analysis was later generalized by Rutherford Aris for arbitrary values of the Peclet number. The dispersion process is sometimes also referred to as the Taylor-Aris dispersion.
The canonical example is that of a simple diffusing species in uniform Poiseuille flow through a uniform circular pipe with no-flux boundary conditions.
Description
We use $z$ as an axial coordinate and $r$ as the radial coordinate, and assume axisymmetry. The pipe has radius $a$, and the fluid velocity is

$w = w_0 \left(1 - \frac{r^2}{a^2}\right)$

where $w_0$ is the maximum velocity, attained on the pipe axis; the cross-sectionally averaged velocity is $\bar{w} = w_0/2$.
The concentration of the diffusing species is denoted $c$ and its diffusivity is $D$. The concentration is assumed to be governed by the linear advection–diffusion equation

$\frac{\partial c}{\partial t} + w\,\frac{\partial c}{\partial z} = D\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\,\frac{\partial c}{\partial r}\right) + \frac{\partial^2 c}{\partial z^2}\right]$
The concentration and velocity are written as the sum of a cross-sectional average (indicated by an overbar) and a deviation (indicated by a prime), thus:

$c = \bar{c} + c', \qquad w = \bar{w} + w'$
Under some assumptions (see below), it is possible to derive an equation just involving the average quantities:

$\frac{\partial \bar{c}}{\partial t} + \bar{w}\,\frac{\partial \bar{c}}{\partial z} = D\left(1 + \frac{a^2 \bar{w}^2}{48 D^2}\right)\frac{\partial^2 \bar{c}}{\partial z^2}$
Observe how the effective diffusivity multiplying the derivative on the right-hand side is greater than the original value of the diffusion coefficient, $D$. The effective diffusivity is often written as

$D_{\mathrm{eff}} = D\left(1 + \frac{\mathrm{Pe}^2}{48}\right)$

where $\mathrm{Pe} = a\bar{w}/D$ is the Péclet number, based on the channel radius $a$. The interesting result is that for large values of the Péclet number, the effective diffusivity is inversely proportional to the molecular diffusivity. The effect of Taylor dispersion is therefore more pronounced at higher Péclet numbers.
In a frame moving with the mean velocity, i.e., by introducing $\xi = z - \bar{w}t$, the dispersion process becomes a purely diffusive process, with diffusivity given by the effective diffusivity.
The assumption is that $c' \ll \bar{c}$ for given $z$, which is the case if the length scale in the $z$ direction is long enough to smooth out the gradient in the $r$ direction. This can be translated into the requirement that the length scale $L$ in the $z$ direction satisfies

$L \gg \frac{a^2}{D}\,\bar{w} = a\,\mathrm{Pe}.$
Dispersion is also a function of channel geometry. For example, the dispersion of a flow between two infinite flat plates differs from that in an infinitely thin rectangular channel by a factor of approximately 8.75; here the very short side walls of the rectangular channel have an enormous influence on the dispersion.
While the exact formula will not hold in more general circumstances, the mechanism still applies, and the effect is stronger at higher Péclet numbers. Taylor dispersion is of particular relevance for flows in porous media modelled by Darcy's law.
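Before turning to the derivation, the result above can be sanity-checked by direct simulation. The following Monte Carlo sketch (hypothetical, non-dimensional parameter values; reflection at the pipe wall is handled only approximately) tracks random walkers in a Poiseuille flow and compares the measured growth of the axial variance with the Taylor-Aris prediction:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Non-dimensional parameters: radius a, diffusivity D, mean velocity
    # u_bar, giving Pe = u_bar * a / D = 10.
    a, D, u_bar = 1.0, 1.0, 10.0
    dt, n_steps, n_part = 1e-3, 20_000, 4_000  # total time = 20 radial diffusion times

    # Start all particles at z = 0, spread uniformly over the cross-section.
    r0 = a * np.sqrt(rng.uniform(0.0, 1.0, n_part))
    th = rng.uniform(0.0, 2.0 * np.pi, n_part)
    x, y, z = r0 * np.cos(th), r0 * np.sin(th), np.zeros(n_part)

    sigma = np.sqrt(2.0 * D * dt)  # Brownian step size per axis
    for _ in range(n_steps):
        r2 = x**2 + y**2
        z += 2.0 * u_bar * (1.0 - r2 / a**2) * dt + sigma * rng.standard_normal(n_part)
        x += sigma * rng.standard_normal(n_part)
        y += sigma * rng.standard_normal(n_part)
        # Approximate reflection of walkers that stepped outside the wall.
        r = np.hypot(x, y)
        out = r > a
        scale = (2.0 * a - r[out]) / r[out]
        x[out] *= scale
        y[out] *= scale

    t_total = n_steps * dt
    pe = u_bar * a / D
    print("measured  D_eff ~", np.var(z) / (2.0 * t_total))
    print("predicted D_eff =", D * (1.0 + pe**2 / 48.0))  # 3.083...

The measured value agrees with the prediction to within Monte Carlo noise and the early-time transient, which becomes negligible once the run is long compared with the radial diffusion time.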
Derivation
One may derive the Taylor equation using the method of averages, first introduced by Aris. The result can also be derived from large-time asymptotics, which is more intuitively clear. In the dimensional coordinate system $(x', r')$, consider the fully developed Poiseuille flow $u = 2U(1 - r'^2/a^2)$ flowing inside a pipe of radius $a$, where $U$ is the average velocity of the fluid. A species of concentration $c$ with some arbitrary distribution is released somewhere inside the pipe at time $t' = 0$. As long as this initial distribution is compact (for instance, the solute is not released everywhere with a finite concentration level), the species will be convected along the pipe with the mean velocity $U$. In a frame moving with the mean velocity and scaled with the following non-dimensional scales
$t = \frac{t'}{a^2/D}, \qquad r = \frac{r'}{a}, \qquad x = \frac{x' - Ut'}{a},$
where $a^2/D$ is the time required for the species to diffuse in the radial direction, $D$ is the diffusion coefficient of the species and $\mathit{Pe} = Ua/D$ is the Péclet number, the governing equation is given by
$\frac{\partial c}{\partial t} + \mathit{Pe}\,(1 - 2r^2)\frac{\partial c}{\partial x} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial c}{\partial r}\right) + \frac{\partial^2 c}{\partial x^2}.$
Thus in this moving frame, at times $t \sim 1$ (in dimensional variables, $t' \sim a^2/D$), the species will diffuse radially. It is clear then that when $t \gg 1$ (in dimensional variables, $t' \gg a^2/D$), diffusion in the radial direction will make the concentration uniform across the pipe, although the species is still diffusing in the $x$ direction. Taylor dispersion quantifies this axial diffusion process for large $t$.
Suppose $t \sim 1/\varepsilon^2$ (i.e., times large in comparison with the radial diffusion time $a^2/D$), where $\varepsilon \ll 1$ is a small number. Then at these times, the concentration will have spread to an axial extent $x \sim \mathit{Pe}/\varepsilon$. To quantify large-time behavior, the following rescalings
$\tau = \varepsilon^2 t, \qquad \eta = \frac{\varepsilon x}{\mathit{Pe}}$
can be introduced. The equation then becomes
$\varepsilon^2\frac{\partial c}{\partial \tau} + \varepsilon\,(1 - 2r^2)\frac{\partial c}{\partial \eta} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial c}{\partial r}\right) + \frac{\varepsilon^2}{\mathit{Pe}^2}\frac{\partial^2 c}{\partial \eta^2}.$
If the pipe walls do not absorb or react with the species, then the boundary condition $\partial c/\partial r = 0$ must be satisfied at $r = 1$. Due to symmetry, $\partial c/\partial r = 0$ at $r = 0$.
Since $\varepsilon \ll 1$, the solution can be expanded in an asymptotic series, $c = c_0 + \varepsilon c_1 + \varepsilon^2 c_2 + \cdots$. Substituting this series into the governing equation and collecting terms of different orders will lead to a series of equations. At leading order, the equation obtained is
$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial c_0}{\partial r}\right) = 0.$
Integrating this equation with the boundary conditions defined before, one finds $c_0 = c_0(\eta, \tau)$. At this order, $c_0$ is still an unknown function. The fact that $c_0$ is independent of $r$ is an expected result since, as already said, at times $t \gg 1$ the radial diffusion will dominate first and make the concentration uniform across the pipe.
Terms of order $\varepsilon$ lead to the equation
$(1 - 2r^2)\frac{\partial c_0}{\partial \eta} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial c_1}{\partial r}\right).$
Integrating this equation with respect to $r$ using the boundary conditions leads to
$c_1 = \left(\frac{r^2}{4} - \frac{r^4}{8}\right)\frac{\partial c_0}{\partial \eta} + c_1^{(0)}(\eta, \tau),$
where $c_1^{(0)}$ is the value of $c_1$ at $r = 0$, an unknown function at this order.
Terms of order $\varepsilon^2$ lead to the equation
$\frac{\partial c_0}{\partial \tau} + (1 - 2r^2)\frac{\partial c_1}{\partial \eta} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial c_2}{\partial r}\right) + \frac{1}{\mathit{Pe}^2}\frac{\partial^2 c_0}{\partial \eta^2}.$
This equation can also be integrated with respect to $r$, but what is required is the solvability condition of the above equation. The solvability condition is obtained by multiplying the above equation by $2r$ and integrating the whole equation from $r = 0$ to $r = 1$. This is also the same as averaging the above equation over the radial direction. Using the boundary conditions and results obtained in the previous two orders, the solvability condition leads to
$\frac{\partial c_0}{\partial \tau} = \left(\frac{1}{48} + \frac{1}{\mathit{Pe}^2}\right)\frac{\partial^2 c_0}{\partial \eta^2}.$
This is the required diffusion equation. Going back to the laboratory frame and dimensional variables, the equation becomes
$\frac{\partial \bar{c}}{\partial t'} + U\frac{\partial \bar{c}}{\partial x'} = D\left(1 + \frac{\mathit{Pe}^2}{48}\right)\frac{\partial^2 \bar{c}}{\partial x'^2}.$
By the way in which this equation is derived, it can be seen that it is valid for times at which $\bar{c}$ changes significantly over an axial length scale of order $a\,\mathit{Pe}/\varepsilon$, which is much larger than $a\,\mathit{Pe}$. At the same time scale, at any small length scale about some location that moves with the mean flow, say $x' - Ut' = x_0'$, the concentration is no longer independent of $r$, but is given by
$c \simeq c_0 + \varepsilon c_1 = c_0 + \varepsilon\left[\left(\frac{r^2}{4} - \frac{r^4}{8}\right)\frac{\partial c_0}{\partial \eta} + c_1^{(0)}\right].$
Higher order asymptotics
Integrating the equations obtained at the second order, we find
$c_2 = \left(\frac{r^2}{192} + \frac{r^4}{64} - \frac{5r^6}{288} + \frac{r^8}{256}\right)\frac{\partial^2 c_0}{\partial \eta^2} + \left(\frac{r^2}{4} - \frac{r^4}{8}\right)\frac{\partial c_1^{(0)}}{\partial \eta} + c_2^{(0)}(\eta, \tau),$
where $c_2^{(0)}$ is an unknown at this order.
Now collecting terms of order $\varepsilon^3$, we find
$\frac{\partial c_1}{\partial \tau} + (1 - 2r^2)\frac{\partial c_2}{\partial \eta} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial c_3}{\partial r}\right) + \frac{1}{\mathit{Pe}^2}\frac{\partial^2 c_1}{\partial \eta^2}.$
The solvability condition of the above equation yields the governing equation for $c_1^{(0)}$ as follows:
$\frac{\partial c_1^{(0)}}{\partial \tau} = \left(\frac{1}{48} + \frac{1}{\mathit{Pe}^2}\right)\frac{\partial^2 c_1^{(0)}}{\partial \eta^2} - \frac{1}{2880}\frac{\partial^3 c_0}{\partial \eta^3}.$
References
Other sources
Mestel, J. Taylor dispersion — shear augmented diffusion, Lecture Handout for Course M4A33, Imperial College.
Fluid mechanics
Fluid dynamics | Taylor dispersion | Chemistry,Engineering | 1,369 |
4,869,852 | https://en.wikipedia.org/wiki/Manganese%20heptoxide | Manganese(VII) oxide (manganese heptoxide) is an inorganic compound with the formula Mn2O7. Manganese heptoxide is a volatile liquid with an oily consistency. It is a highly reactive and powerful oxidizer that reacts explosively with nearly any organic compound. It was first described in 1860. It is the acid anhydride of permanganic acid.
Properties
The crystalline form of this chemical compound is dark green. The liquid is green by reflected light and red by transmitted light. It is soluble in carbon tetrachloride, and decomposes when in contact with water.
Structure
Its solubility properties indicate a nonpolar molecular species, which is confirmed by its structure. The molecules consist of a pair of tetrahedra that share a common vertex. The vertices are occupied by oxygen atoms and at the centers of the tetrahedra are the Mn(VII) centers. The connectivity is indicated by the formula O3Mn−O−MnO3. The terminal Mn−O distances are 1.585 Å and the bridging oxygen is 1.77 Å distant from the two Mn atoms. The Mn−O−Mn angle is 120.7°.
Pyrosulfate, pyrophosphate, and dichromate adopt structures similar to that of Mn2O7. Probably the most similar main group species is Cl2O7. Focusing on comparisons within the transition metal series, Mn2O7 and Tc2O7 are structurally similar, but the Tc−O−Tc angle is 180°. Solid Re2O7 is not molecular but consists of crosslinked Re centers with both tetrahedral and octahedral sites; in the vapour phase it is molecular with a similar structure to Tc2O7.
Synthesis and reactions
Mn2O7 arises as a dark green oil by the addition of cold concentrated H2SO4 to solid KMnO4. The reaction initially produces permanganic acid, HMnO4 (structurally, HOMnO3), which is dehydrated by cold sulfuric acid to form its anhydride, Mn2O7:
2 KMnO4 + 2 H2SO4 → Mn2O7 + 2 KHSO4 + H2O
Mn2O7 can react further with sulfuric acid to give the remarkable manganyl(VII) cation MnO3+, which is isoelectronic with CrO3:
Mn2O7 + 2 H2SO4 → 2 [MnO3]+[HSO4]− + H2O
Mn2O7 decomposes near room temperature, explosively so above about 55 °C. The explosion can be initiated by striking the sample or by its exposure to oxidizable organic compounds. The products are MnO2 and O2:
2 Mn2O7 → 4 MnO2 + 3 O2
Ozone is also produced, giving a strong smell to the substance. The ozone can spontaneously ignite a piece of paper impregnated with an alcohol solution.
Manganese heptoxide reacts with hydrogen peroxide in the presence of sulfuric acid, liberating oxygen and ozone.
References
Explosive chemicals
Manganese(VII) compounds
Acid anhydrides
Acidic oxides
Substances discovered in the 19th century | Manganese heptoxide | Chemistry | 550 |
3,469,088 | https://en.wikipedia.org/wiki/Disc%20rot | Disc rot is the tendency of CD, DVD, or other optical discs to become unreadable because of chemical deterioration. The causes include oxidation of the reflective layer, reactions with contaminants, ultra-violet light damage, and de-bonding of the adhesive used to adhere the layers of the disc together.
Causes
In CDs, the reflective layer is immediately beneath a thin protective layer of lacquer, and is also exposed at the edge of the disc. The lacquer protecting the edge of an optical disc can usually be seen without magnification. It is rarely uniformly thick; thickness variations are usually visible. The reflective layer is typically aluminium, which reacts easily with several commonly encountered chemicals such as oxygen, sulfur, and certain ions carried by liquid water. In ordinary use, a surface layer of aluminium oxide is formed quickly when an aluminium surface is exposed to the atmosphere; it serves as passivation for the bulk aluminium with regard to many, but not all, contaminants. CD reflective layers are so thin that this passivation is less effective. In the case of CD-R and CD-RW media, the materials used in the reflecting layer are more complex than a simple aluminium layer, but can also present problems if contaminated; the thin (0.25-0.5 mm) protective lacquer layer is equivalent to that of pressed CDs.
DVDs have a different structure from CDs, using a plastic disc over the reflecting layer. This means that a scratch on either surface of a DVD is not as likely to reach the reflective layer and expose it to environmental contamination, which could cause corrosion, perhaps progressive corrosion. Since disc rot is often caused by corrosion of the aluminium layer, DVDs are generally more resistant to disc rot. Each type of optical disc thus has different susceptibility to contamination and corrosion of its reflecting layer; furthermore, the writable and re-writable versions of each optical disc type are somewhat different as well. Finally, discs made with gold as the reflecting layer are considerably less vulnerable to chemical corrosion problems. Because aluminium reflecting layers are less expensive, the industry has adopted them as the standard for factory-pressed optical discs.
Blu-rays, used to distribute movies (often as merchandise) and games, usually use a silver alloy layer instead of aluminum.
Signs of disc rot
On CDs, the rot becomes visually noticeable in two ways:
When the CD is held up to a strong light, light shines through several pin-prick-sized holes.
Discoloration of the disc, which looks like a coffee stain on the disc. See also CD bronzing.
In audio CDs, the rot leads to scrambled or skipped audio or even the inability to play the disc.
Using surface error scanning to check the data integrity allows discovering loss of data integrity before uncorrectable errors occur.
Variants
Laser rot
Laser rot is the appearance of video and audio artifacts during the playback of LaserDiscs, and their progressive worsening over time. It is most commonly attributed to oxidation of the aluminum layers, enabled by poor-quality adhesives used to bond the disc halves together. Poor adhesives separate over time, which allows oxygen in the air to corrode the thin aluminum layer into aluminum oxide, visible as transparent patches or small dots in the disc. Corrosion is possible due to the thinness of the layer; in bulk form, aluminum does not normally corrode because it is coated in a thin oxide layer that forms on contact with oxygen. Single-sided video discs did not appear to suffer from laser rot, while double-sided discs did. The name "laser rot" is not entirely a misnomer: although the degradation does not involve the player's laser, the "rot" aptly describes what happens to the LaserDisc itself.
Laser rot was indicated by the appearance of multi-colored speckles in the video output of a LaserDisc during playback. The speckles increased in number and frequency as the disc continued to degrade. Much of the early production run of MCA DiscoVision discs had severe laser rot. Also, in the 1990s, LaserDiscs manufactured by Sony's DADC plant in Terre Haute, Indiana, were plagued by laser rot.
HD-DVD rot
Many HD-DVDs, especially those produced by Warner Bros. between 2006 and 2008 developed disc rot not long after production. Disc rot was also more common on double-sided HD-DVDs than on single-sided HD-DVDs.
See also
CD bronzing, a type of disc rot, affecting a subset of CDs and DVDs, causing a bronze-colored darkening of the playable surface and eventual loss of readability.
Conservation and restoration of plastic objects
Data rot, a similar concept
DVD-D and Flexplay, disposable optical disc formats designed to become unplayable after a limited time
M-Disc, an optical disc format claimed to have a reduced rate of rot compared to conventional DVDs
Panchiko, a band who gained popularity in 2016 when a demo EP from 2000, notably distorted due to CD-R disc rot, was found in a charity shop.
References
External links
Mac Observer article
"CD Bronzing" article, with PDO replacement information, at Classical.net - How a manufacturing problem can cause disc quality degradation
"Using CDs for Data Storage" article, with extensive footnoting.
(Wayback Machine copy)
A bad case of DVD rot eats into movie collections
Blu-ray rot on copies of The Prestige at AVS Forums
Compact disc
DVD
80 mm discs
120 mm discs
Materials degradation
LaserDisc
HD DVD
Product expiration
Preservation (library and archival science) | Disc rot | Materials_science,Engineering | 1,127 |
36,549,964 | https://en.wikipedia.org/wiki/Road-effect%20zone | The road-effect zone is the area in which effects of a road on the natural environment extend outward from the road. Such effects include emissions of substances such as carbon monoxide, carbon dioxide, particulate matter, nitrogen oxide, volatile organic compounds, biological matter, rubber, or salt; intangible emissions such as noise or light; and changes to the microclimate such as alterations of wind, water flows, temperature or moisture.
Range
There is ongoing debate on the width of the road-effect zone, sometimes also called the buffer area. Most studies find an extent ranging from as little as 20 meters up to two or more kilometers. Jordaan et al. find an area of 100 meters significantly impacted, Biglin & Dupigny-Giroux estimate 71.25% of the total impact to lie within the first 300 meters, Deblinger & Forman suggest an average affected area of 600 meters, and Eigenbrod et al. found a considerable negative effect on differing anuran species over distances of 250 to 1,000 meters.
Species affected
Determining the size of the road-effect zone is additionally complicated by varying local spatial patterns and the result significantly depends on the particular species under investigation; for instance there are large differences between moose, mice, plants, bats or anurans, and even among anuran species themselves.
Challenges
The road-effect zone is a topic that has gained increased interest in the recent past, owing to drastically increased mobility-intensive lifestyles and to growing recognition of their adverse effects on the non-human environment and on biological diversity as a whole. Its impact on, and integration into, spatial planning are challenges that will have to be dealt with to a greater degree today and tomorrow.
See also
Line source
Low-emission zone
References
Transport and the environment | Road-effect zone | Physics | 364 |
26,855,865 | https://en.wikipedia.org/wiki/Antimicrobial%20polymer | Polymers with the ability to kill or inhibit the growth of microorganisms such as bacteria, fungi, or viruses are classified as antimicrobial agents. This class of polymers consists of natural polymers with inherent antimicrobial activity and polymers modified to exhibit antimicrobial activity. Polymers are generally nonvolatile, chemically stable, and can be chemically and physically modified to display desired characteristics and antimicrobial activity. Antimicrobial polymers are a prime candidate for use in the food industry to prevent bacterial contamination and in water sanitation to inhibit the growth of microorganisms in drinking water.
Mechanism of Action
Antimicrobial polymers inhibit cell growth and initiate cell death through two primary mechanisms. The first mechanism is utilized by contact-active polymers. Contact-active polymers utilize electrostatic interactions, the hydrophobic effect, and the chelate effect. Electrostatic attraction is a common initial interaction of an antimicrobial polymer with a microbe. The chelating and hydrophobic effects are common secondary interactions of antimicrobial polymers with microbes.
Cationically charged antimicrobial polymers are attracted to the anionically charged bacterial cell walls. The outer wall of bacterial cells possesses a net negative charge. The cytoplasmic membrane of bacterial cells has a negative charge and contains essential proteins. The secondary interaction, the chelating effect, involves the bonding of the antimicrobial polymer to the microbial cell. These interactions lead to membrane disruption and ultimately inhibited cell growth or death.
The cytoplasmic membrane of a cell is a semi-permeable membrane, which controls the transport of solutes into the cell. The phospholipid bilayer is an important component of the cell membrane, which is composed of hydrophilic heads and a hydrophobic tail. The hydrophilic heads form the inner and outer linings of the cell membrane, while the hydrophobic tails compose the interior of the membrane. The secondary interaction, the hydrophobic effect, involves the accumulation of nonpolar compounds away from water. Nonpolar components of antimicrobial polymers insert themselves into the nonpolar interior of the cell membrane.
High molecular weight polymers commonly induce cell death or inhibition through contact-active interactions with the surface of cells. Cell death and inhibition result from impairment of normal cellular function. Positive residues on the polymer electrostatically interact with negative charges on the cell and induce secondary cellular effects. Cellular membrane penetration is common in low molecular weight polymers. The initial electrostatic and hydrophobic interaction of an antimicrobial polymer and biomimetic polymer causes membrane disruption and cell death. The hydrophobic tail of the polymer penetrates the phospholipid bilayer into the hydrophobic region, resulting in membrane disruption and denaturing of proteins and enzymes, as well as other secondary effects. Secondary effects include disruption of solute and electron transport as well as disturbances to energy production pathways, which leads to cell death.
The second mechanism is characterized by the release of low molecular weight antimicrobial agents from polymers. Antimicrobial agents that are released from polymers induce cell death through binding to or penetrating the cell wall. When antimicrobial agents bind to proteins, structural changes occur in the cell membrane, resulting in cell death. The penetration of nanoparticle antimicrobial agents into the cell wall enables the antimicrobial agents to interact with cell DNA. Microbe death results from the effects on DNA transcription and mRNA synthesis when polymer nanoparticles combine with DNA.
Primary Characteristics of Antimicrobial Polymers
There are different primary characteristics of antimicrobial polymers, dependent upon the mechanism of action. The two primary characteristics of contact-active antimicrobial polymers are cationic charge and hydrophobicity. Cationic residues are necessary to induce the interaction with the microbial cell wall. Polycations such as quaternary ammonium, quaternary phosphonium, and guanidinium are frequently found in antimicrobial polymers. Hydrophobic residues improve binding to the lipid bilayer and are utilized for insertion into the microbial cell wall. Non-contact-active antimicrobial polymers require the addition of antimicrobial agents to induce activity. Common agents added include N-halamine compounds, nitric oxide, and copper and silver nanoparticles.
Classes of Antimicrobial Polymers
Antimicrobial polymers are generally classified into two categories based on how antimicrobial activity is conferred. The first are polymers with inherent antimicrobial which do not require any modifications to incite antimicrobial behavior. The other class requires modification to enable antimicrobial activity and can be differentiated by the type of modification. Polymers may be chemically modified to induce antimicrobial behavior or they may be used as a backbone for the addition of organic or inorganic compounds.
Inherent Antimicrobial Activity
Polymers with inherent antimicrobial activity include chitosan, poly-ε-lysine, quaternary ammonium compounds, polyethylenimine, and polyguanidines. Chitosan is a nontoxic polymer that has displayed broad-spectrum antimicrobial activity. The mechanism of action for chitosan includes electrostatic interaction, the chelate effect, and the hydrophobic effect. Electrostatic interaction is the primary initial interaction when the pH is lower, while the chelating and hydrophobic effects are the primary initial interactions when the pH is higher. Growth inhibition and death of fungi, bacteria, and yeasts have been seen from chitosan. The antimicrobial effect of chitosan is greater on fungi than yeasts and more effective on gram-negative bacteria than gram-positive bacteria.
Poly-ε-lysine is a biodegradable, nontoxic, edible antimicrobial polymer. This polymer utilizes electrostatic interactions to attach to the cell wall, therefore disrupting the integrity of the cell wall. Poly-ε-lysine penetrates the cell wall, causing physiological damage to the cell and death. In comparison to a similar synthetic polymer, poly-ε-lysine is more effective against gram-positive than gram-negative bacteria. Poly-ε-lysine is also effective against Bacillus coagulans, Bacillus stearothermophilus, and Bacillus subtilis.
Benzalkonium chloride, stearalkonium chloride, and cetrimonium are all quaternary ammonium compounds containing nitrogen. The antibacterial activity of these compounds is affected by the number of carbon atoms and the length of the nitrogen-containing chain. Optimal antimicrobial activity is generally seen in quaternary ammonium compounds with a long chain length, containing 8-18 carbon atoms. Increased activity is seen against gram-positive bacteria in polymers with a chain length of 12-14 carbon atoms, while improved activity against gram-negative bacteria is seen in polymers with a chain length of 14-16 carbon atoms. Polymer quaternary ammonium compounds containing nitrogen induce cell death through electrostatic interactions and the hydrophobic effect. This group of polymers displays limited hemolytic activity, making them advantageous for use in cosmetics and healthcare.
Polyethylenimine is a synthetic, nonbiodegradable polymer containing nitrogen. This polymer induces cell death through cell membrane rupture. When attached to immobilized surfaces including glass and plastic, N-alkyl-polyethylenimine caused cell inactivation in almost 100% of airborne and waterborne bacteria and fungi. A benefit of this polymer is that it is nontoxic to mammalian cells. Polyethylenimine has been applied in the medical industry for use in prostheses. Bacteria growth was reduced by 92% when polyethylenimine was tested as a coating surface for medical devices. The activity of polyethylenimine is affected by the molecular weight of the polymer; low molecular weight polyethylenimine displays negligible activity, while displaying great antimicrobial activity in its high molecular weight form.
Polyguanidines are another class of antimicrobial polymers containing nitrogen. This class of antimicrobial polymers is nontoxic and exhibits high water solubility. Polyguanidines display broad-spectrum antimicrobial activity and initially interact with microbes using electrostatic forces. Greater activity against gram-positive bacteria has been seen with polyguanidines than against gram-negative bacteria. The difference in activity is attributed to the different structures of gram-positive and gram-negative bacteria. Gram-negative bacteria have a thinner peptidoglycan layer than gram-positive bacteria. In addition, gram-negative bacteria have an outer lipid membrane, which gram-positive bacteria do not. High molecular weight polymers are able to penetrate gram-positive bacteria.
Antimicrobial Activity Through Chemical Modification
This class of polymers does not have any inherent antimicrobial activity. To induce antimicrobial activity, polymers are chemically modified to include active agents. Active side groups are attached to the polymer backbone to generate antimicrobial activity. Pendant groups, antibiotic drugs, or inorganic particles can be adjoined to the polymer.
Pendant groups that are attached to the polymer backbone include quaternary ammonium, hydroxyl groups with an organic acid, and others. Antimicrobial polymers containing quaternary ammonium as a side group are commonly synthesized from methacrylic monomers. The benefit of these monomers is that the hydrophobicity, molecular weight, and surface charge can all be manipulated. Hydrophobicity of the polymer has a strong effect on antimicrobial activity. Polysiloxanes, which have a quaternary ammonium pendant group, have demonstrated activity against several strains of bacteria including Enterococcus hirae, E. coli, and P. aeruginosa. The flexibility and amphiphilic nature of this polymer enhances the antimicrobial activity. When benzaldehyde, a hydroxyl group containing organic acid, is used as a side group with Methyl methacrylate polymers, growth inhibition five times that of control surfaces has been shown. Benzaldehyde has inherent antimicrobial activity and has been incorporated into polymers to improve activity. Polymers with quaternary ammonium or hydroxyl groups with an organic acid as a pendant group have demonstrated activity against many types of bacteria, fungi, and algae.
Antimicrobial activity can also be induced through the addition of inorganic particles such as silver, copper, and titanium dioxide nanoparticles to a polymer. Metal nanoparticles are incorporated into the polymer to form polymeric nanocomposites. Silver is utilized in antimicrobial polymers because of its stability as well as broad-spectrum antimicrobial activity. Positive silver ions are produced in environments beneficial for the growth of bacteria. These positive silver ions physically interact with cell wall proteins resulting in membrane disruption and cell death. Silver nanoparticles embedded into a cationic polymer have displayed activity against E. coli and S. aureus. Copper and titanium dioxide nanoparticles are less commonly employed in antimicrobial polymers than silver nanoparticles. Copper nanoparticles embedded into polypropylene nanocomposites have demonstrated the ability to kill 99.9% of bacteria. Titanium dioxide is a nontoxic material with antimicrobial activity that is photo-activated. Titanium dioxide has been embedded in polypropylene to create photoactive antimicrobial polymers. The antimicrobial activity of the polymer composite is initiated by a light source. The light source causes the titanium dioxide to be oxidized, which results in the release of highly reactive hydroxyl species that disrupt bacteria. The effectiveness of the photoactive antimicrobial polymer has been demonstrated against the bacteria E. coli.
Another class of antibacterial polymers includes those whose activity is introduced through the incorporation of antibiotics into the polymer matrix. The chemical triclosan is commonly utilized for its antibacterial properties. Triclosan mixed with the copolymer styrene-acrylate exhibits antibacterial activity against E. faecalis. In addition, triclosan combined with the polymer polyvinyl alcohol has increased antibacterial activity compared to triclosan not incorporated in a polymer. The polymer polyethylenimine has also been modified to include antibiotics. Polyethylenimine is used to make bacterial cell walls more permeable, therefore increasing the sensitivity of bacteria to antibiotics. Polyethylenimine increases the effectiveness of the antibiotics including ampicillin, rifampin, cefotaxime, as well as others.
Protein-Mimicking Polymers
Magainin and defensin are natural peptides, short polymers composed of amino acids, which display exceptional antimicrobial activity. The antimicrobial activity is a product of the peptides' structure, including its highly rigid backbone. These peptides have organized pendant groups, making one side of the polymer hydrophobic and the other side cationic. This group of polymers efficiently induces cell death through cell wall penetration. Polymer mimics of these antimicrobial peptides have been developed. Protein-mimicking polymers emulate the structure of magainin and defensin. Examples of protein mimicking polymers include poly(phenylene ethynylene)-based and N-carboxyanhydride-based polymers. Poly(phenylene ethynylene) polymers with amino acid pendant groups were manufactured to have positively charged side groups and a stiff backbone. The synthetic polymer had low toxicity and strong antimicrobial activity. In addition, N-carboxyanhydride-based polymers with the hydrophilic amino acid lysine and different hydrophobic amino acids were developed. The polymers displayed antimicrobial activity against E. coli, C. albicans, and others.
Factors that Affect Antimicrobial Activity
Molecular Weight
The molecular weight of the polymer is perhaps one of the most important properties to consider when determining antimicrobial properties because antimicrobial activity is markedly dependent on the molecular weight. It has been determined that optimal activity is achieved when polymers have a molecular weight in the range of 1.4×10⁴ Da to 9.4×10⁴ Da. Weights larger than this range show a decrease in activity. This dependence on weight can be attributed to the sequence of steps necessary for biocidal action. Extremely large molecular weight polymers will have trouble diffusing through the bacterial cell wall and cytoplasm. Thus much effort has been directed towards controlling the molecular weight of the polymer.
Counter Ion
Most bacterial cell walls are negatively charged, therefore most antimicrobial polymers must be positively charged to facilitate the adsorption process. The structure of the counter ion, or the ion associated with the polymer to balance charge, also affects the antimicrobial activity. Counter anions that form a strong ion-pair with the polymer impede the antimicrobial activity because the counter ion will prevent the polymer from interacting with the bacteria. However, ions that form a loose ion-pair or readily dissociate from the polymer, exhibit a positive influence on the activity because it allows the polymer to interact freely with the bacteria.
Spacer Length/Alkyl Chain Length
The spacer length or alkyl chain length refers to the length of the carbon chain that composes the polymer backbone. The length of this chain has been investigated to see if it affects the antimicrobial activity of the polymer. Results have generally shown that longer alkyl chains have resulted in higher activity. There are two primary explanations for this effect. Firstly, longer chains have more active sites available for adsorption with the bacteria cell wall and cytoplasmic membrane. Secondly, longer chains aggregate differently than shorter chains, which again may provide a better means for adsorption. However, shorter chain lengths diffuse more easily.
Disadvantages
A major disadvantage of antimicrobial polymers is that macromolecules are very large and thus may not act as fast as small molecule agents. Biocidal polymers that require contact times on the order of hours to provide substantial reductions in pathogens, really have no practical value. Seconds, or minutes at most, should be the contact time goal for a real application. Furthermore, if the structural modification to the polymer caused by biocidal functionalization adversely affects the intended use, the polymer will be of no practical value. For example, if a fiber that must be exposed to aqueous bleach to render it antimicrobial (an N-halamine polymer) is weakened by that exposure, or its dye is bleached, it will have limited use.
Synthetic Methods
Synthesis from Antimicrobial Monomers
This synthetic method involves covalently linking antimicrobial agents that contain functional groups with high antimicrobial activity, such as hydroxyl, carboxyl, or amino groups to a variety of polymerizable derivatives, or monomers before polymerization. The antimicrobial activity of the active agent may be either reduced or enhanced by polymerization. This depends on how the agent kills bacteria, either by depleting the bacterial food supply or through bacterial membrane disruption and the kind of monomer used. Differences have been reported when homo-polymers are compared to copolymers. Examples of antimicrobial polymers synthesized from antimicrobial monomers are included in Table 2:
Table 2: Polymers Synthesized from Antimicrobial Monomers and their Antimicrobial Properties
Synthesis by Adding Antimicrobial Agents to Preformed Polymers
This synthetic method involves first synthesizing the polymer, followed by modification with an active species. The following kinds of monomers are usually used to form the backbone of homopolymers or copolymers: vinylbenzyl chloride, methyl methacrylate, 2-chloroethyl vinyl ether, vinyl alcohol, maleic anhydride. The polymers are then activated by anchoring antimicrobial species, such as phosphonium salts, ammonium salts, or phenol groups via quaternization, substitution of chloride, or hydrolysis of anhydride. Examples of polymers synthesized from this method are provided in Table 3:
Table 3: Antimicrobial Polymers Synthesized from Preformed Polymers and Antimicrobial Properties
Synthesis by Adding Antimicrobial Agents to Naturally Occurring Polymers
Chitin is the second-most abundant biopolymer in nature. The deacetylated product of chitin, chitosan, has been found to have antimicrobial activity without toxicity to humans. This synthetic technique involves making chitosan derivatives to obtain better antimicrobial activity. Currently, work has involved the introduction of alkyl groups to the amine groups to make quaternized N-alkyl chitosan derivatives, introduction of extra quaternary ammonium grafts to the chitosan, and modification with phenolic hydroxyl moieties.
Synthesis by insertion of antimicrobial agents into polymer backbone
This method involves using chemical reactions to incorporate antimicrobial agents into the polymeric backbones. Polymers with biologically active groups, such as polyamides, polyesters, and polyurethanes are desirable as they may be hydrolyzed to active drugs and small innocuous molecules. For example, a series of polyketones have been synthesized and studied, which show an inhibitory effect on the growth of B. subtilis and P. fluorescens as well as fungi, A. niger and T. viride. There are also studies which incorporate antibiotics into the backbone of the polymer.
Requirements of an antimicrobial polymer
In order for an antimicrobial polymer to be a viable option for large-scale distribution and use there are several basic requirements that must be first fulfilled:
The synthesis of the polymer should be easy and relatively inexpensive. To be produced on an industrial scale the synthetic route should ideally utilize techniques that have already been well developed.
The polymer should have a long shelf life, or be stable over long periods of time. It should be able to be stored at the temperature for which it is intended for use.
If the polymer is to be used for the disinfection of water, then it should be insoluble in water to prevent toxicity issues (as is the case with some current small molecule antimicrobial agents).
The polymer should not decompose during use, or emit toxic residues.
The polymer should not be toxic or irritating to those handling it.
Antimicrobial activity should be able to be regenerated upon loss of activity.
Antimicrobial polymers should be biocidal to a broad range of pathogenic microorganisms in brief times of contact.
Applications
Water treatment
Polymeric disinfectants are ideal for applications in hand-held water filters, surface coatings, and fibrous disinfectants, because they can be fabricated by various techniques and can be made insoluble in water. The design of insoluble contact disinfectants that can inactivate, kill, or remove target microorganisms by mere contact, without releasing any reactive agents to the bulk phase being disinfected, is desired. Chlorine or water-soluble disinfectants have problems with residual toxicity, even if minimal amounts of the substance are used. Toxic residues can become concentrated in food, water, and in the environment. In addition, because free chlorine ions and other related chemicals can react with organic substances in water to yield trihalomethane analogues that are suspected of being carcinogenic, their use should be avoided. These drawbacks can be solved by the removal of microorganisms from water with insoluble substances.
Food applications
Antimicrobial substances that are incorporated into packaging materials can control microbial contamination by reducing the growth rate and the maximum growth population. This is done by extending the lag phase of the target microorganism or by inactivating the microorganisms on contact. One of these applications is to extend the shelf life of food and promote safety by reducing the rate of growth of microorganisms when the package is in contact with the surfaces of solid foods, for example, meat, cheese, etc. Second, antimicrobial packaging materials greatly reduce the potential for recontamination of processed products and simplify the treatment of materials to eliminate product contamination. For example, self-sterilizing packaging might eliminate the need for peroxide treatment in aseptic packaging. Antimicrobial polymers can also be used to cover surfaces of food processing equipment as self-sanitizers. Examples include filter gaskets, conveyors, gloves, garments, and other personal hygiene equipment.
Some polymers are inherently antimicrobial and have been used in films and coatings. Cationic polymers such as chitosan promote cell adhesion. This is because charged amines interact with negative charges on the cell membrane, and can cause leakage of intracellular constituents. Chitosan has been used as a coating and appears to protect fresh vegetables and fruits from fungal degradation. Although the antimicrobial effect is attributed to antifungal properties of chitosan, it may be possible that the chitosan acts as a barrier between the nutrients contained in the produce and microorganisms.
Medicine and healthcare
Antimicrobial polymers are powerful candidates for controlled delivery systems and implants in dental restorative materials because of their high activities. This can be ascribed to their characteristic nature of carrying a high local charge density of active groups in the vicinity of the polymer chains. For example, electrospun fibers containing tetracycline hydrochloride based on poly(ethylene-co-vinyl acetate), poly(lactic acid), and blending were prepared to use as an antimicrobial wound dressing.
Cellulose derivatives are commonly used in cosmetics as skin and hair conditioners. Quaternary ammonium cellulose derivatives are of particular interest as conditioners in hair and skin products.
Future work in this field
The field of antimicrobial polymers has progressed steadily, but slowly, over the past years, and appears to be on the verge of rapid expansion. This is evidenced by a broad variety of new classes of compounds that have been prepared and studied in the past few years.
Modification of polymers and fibrous surfaces, and changing the porosity, wettability, and other characteristics of the polymeric substrates, should produce implants and biomedical devices with greater resistance to microbial adhesion and biofilm formation. A number of polymers have been developed that can be incorporated into cellulose and other materials, which should provide significant advances in many fields such as food packaging, textiles, wound dressing, coating of catheter tubes, and necessarily sterile surfaces. The greater need for materials that fight infection will give incentive for discovery and use of antimicrobial polymers.
References
Bibliography
Cowie, J.M.G. Polymers: Chemistry and Physics of Modern Materials, 3rd edition, Chapman and Hall (2007).
United States Congress, Office of Technology Assessment. Biopolymers: Making Materials Nature's Way, Washington, DC: The Office (1993).
Marsh, J. Antimicrobial Peptides, J. Wiley (1994).
Wool, R.P. Bio-based Polymers and Composites, Elsevier Academic Press (2005).
External links
Antimicrobial Polymer Technologies for Food Application
Antimicrobial Materials
Antimicrobial Polymer Surfaces
Polymers
Antimicrobials | Antimicrobial polymer | Chemistry,Materials_science,Biology | 5,310 |
40,254 | https://en.wikipedia.org/wiki/Genetic%20algorithm | In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, and causal inference.
Methodology
Optimization problems
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, organisms, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.
The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires:
a genetic representation of the solution domain,
a fitness function to evaluate the solution domain.
A standard representation of each candidate solution is as an array of bits (also called bit set or bit string). Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
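As an illustration, the whole loop fits in a few lines of Python. This is a minimal sketch, not taken from any library: the representation is a bit string, the fitness is the toy "count the ones" objective, and the operator choices (tournament selection, one-point crossover, bit-flip mutation) and parameter values are arbitrary:

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, P_MUT = 32, 50, 100, 0.02

    def fitness(bits):                          # toy objective: number of 1-bits
        return sum(bits)

    def select(pop):                            # two-way tournament selection
        return max(random.sample(pop, 2), key=fitness)

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        new_pop = []
        while len(new_pop) < POP_SIZE:
            p1, p2 = select(pop), select(pop)
            cut = random.randrange(1, GENOME_LEN)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < P_MUT) for b in child]   # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    print(max(map(fitness, pop)))               # best fitness in the final generation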
Initialization
The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found or the distribution of the sampling probability tuned to focus in those areas of greater interest.
Selection
During each successive generation, a portion of the existing population is selected to reproduce for a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem-dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
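A sketch of this fitness function can be written directly from the description; the item values, weights, and capacity below are hypothetical:

    def knapsack_fitness(bits, values, weights, capacity):
        """Sum of the values of the selected items, or 0 if the selection
        exceeds the capacity, i.e. the representation is invalid."""
        total_weight = sum(w for b, w in zip(bits, weights) if b)
        if total_weight > capacity:
            return 0
        return sum(v for b, v in zip(bits, values) if b)

    # Hypothetical instance with three objects and capacity 10:
    vals, wts = [6, 4, 5], [5, 4, 6]
    print(knapsack_fitness([1, 0, 1], vals, wts, capacity=10))  # 0  (weight 11 > 10)
    print(knapsack_fitness([1, 1, 0], vals, wts, capacity=10))  # 10 (weight 9 fits)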
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.
Genetic operators
The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination), and mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes.
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.
It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.
Heuristics
In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.
Termination
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
A solution is found that satisfies minimum criteria
Fixed number of generations reached
Allocated budget (computation time/money) reached
The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
Manual inspection
Combinations of the above
The building block hypothesis
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness.
A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.
Goldberg describes the heuristic as follows:
"Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.
"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."
Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.
Limitations
The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms:
Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. It is apparent that amalgamation of approximate models may be one of the most promising approaches to convincingly use GA to solve complex real life problems.
Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large, there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or a plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
The "better" solution is only in comparison to other solutions. As a result, the stopping criterion is not clear in every problem.
In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein, any group of individuals of sufficient similarity (niche radius) have a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.
Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
GAs cannot effectively solve problems in which the only fitness measure is a binary pass/fail outcome (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches.
Variants
Chromosome representation
The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered, but is really a misnomer because it does not truly reflect the building block theory that was proposed by John Henry Holland in the 1970s. This theory is not without support though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
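The binary-reflected Gray code conversions are short enough to show in full; under this encoding, adjacent integers differ in exactly one bit, which is what lets a single-bit mutation make a small integer step instead of hitting a Hamming wall.

```python
def to_gray(n):
    """Convert a non-negative integer to binary-reflected Gray code."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray coding by cumulative XOR of right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, 3 and 4 differ in three bits in plain binary (011 vs 100) but in only one bit after Gray coding (010 vs 110).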
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained by noting that the set of real values in a finite population of chromosomes forms a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.
The problem domain accessible to genetic algorithms can be expanded through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome. This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.
Elitism
A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.
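A sketch of elitist replacement, with an assumed elite of two and with the fitness function and breeding operator supplied by the surrounding (hypothetical) GA loop.

```python
def next_generation(population, fitness, breed, elite_size=2):
    """Carry the best elite_size individuals over unchanged.

    fitness -- maps an individual to a score (higher is better)
    breed   -- produces one offspring from the current population
    """
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:elite_size]
    offspring = [breed(population) for _ in range(len(population) - elite_size)]
    return elites + offspring
```

Because the elites bypass crossover and mutation entirely, the best fitness in the population is monotonically non-decreasing across generations.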
Parallel implementations
Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction.
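A sequential sketch of the coarse-grained ("island") idea: each node's population evolves independently, and a periodic migration step ships each island's best individual to a neighbour. The ring topology and replace-the-worst policy are illustrative assumptions.

```python
def migrate_ring(islands):
    """Move each island's best individual to the next island in a ring.

    islands -- list of populations; each population is a list of
               (fitness, genome) pairs. The migrant replaces the
               destination island's worst individual.
    """
    migrants = [max(island, key=lambda p: p[0]) for island in islands]
    for i, migrant in enumerate(migrants):
        dest = islands[(i + 1) % len(islands)]
        worst = min(range(len(dest)), key=lambda j: dest[j][0])
        dest[worst] = migrant
```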
Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.
Adaptive GAs
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have also analyzed GA convergence analytically.
Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust pc and pm in order to maintain population diversity as well as to sustain convergence capacity. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. Further AGA variants exist: the successive zooming method is an early example of improving convergence. In CAGA (clustering-based adaptive genetic algorithm), clustering analysis is used to judge the optimization state of the population, and the adjustment of pc and pm depends on that state. More recent approaches use more abstract variables for deciding pc and pm; examples are dominance and co-dominance principles and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search-space anisotropy.
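A hedged sketch in the spirit of fitness-based AGA: individuals at or above the average fitness receive proportionally reduced rates (to protect good building blocks), while below-average individuals keep the full rates. The linear form and the constants k1, k2 are illustrative assumptions.

```python
def adaptive_rates(f, f_max, f_avg, k1=1.0, k2=0.5):
    """Return (pc, pm) adapted to an individual's fitness f.

    f_max, f_avg -- best and mean fitness of the current generation
    k1, k2       -- ceiling crossover/mutation rates (assumed values)
    """
    if f_max == f_avg:            # population fully converged: keep searching
        return k1, k2
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k1, k2
```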
It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at making the last few refinements needed to reach the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of the GA while overcoming the lack of robustness of hill climbing.
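A sketch of that hybrid: after the GA locates a promising region, its best individual is polished by greedy bit-flip hill climbing. The step rule and iteration budget are placeholders.

```python
def polish_with_hill_climbing(genome, fitness, max_sweeps=100):
    """Greedy bit-flip hill climbing from a GA-produced start point."""
    best, best_fit = list(genome), fitness(genome)
    for _ in range(max_sweeps):
        improved = False
        for i in range(len(best)):
            best[i] ^= 1                    # tentatively flip one bit
            trial_fit = fitness(best)
            if trial_fit > best_fit:
                best_fit, improved = trial_fit, True
            else:
                best[i] ^= 1                # revert the flip
        if not improved:                    # local optimum reached
            break
    return best, best_fit
```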
In the natural case, the rules of genetic variation may take on a different meaning. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA and a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order, or any other suitable order, in favour of survival or efficiency.
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.
A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA and LLGA.
Problem domains
Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.
As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, walking methods for computer figures, and optimal design of aerodynamic bodies in complex flowfields.
In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task.
History
In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as in 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).
Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game, artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.
Commercial products
In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.
In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version. Since the 1990s, MATLAB has built in three derivative-free optimization heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).
Related techniques
Parent fields
Genetic algorithms are a sub-field of:
Evolutionary algorithms
Evolutionary computing
Metaheuristics
Stochastic optimization
Optimization
Related fields
Evolutionary algorithms
Evolutionary algorithms are a sub-field of evolutionary computing.
Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
Evolutionary programming (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
Estimation of Distribution Algorithm (EDA) substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled or generated from guided-crossover.
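The simplest member of this family, a univariate EDA in the style of UMDA, fits independent per-bit marginals and samples offspring from them; the univariate model is a deliberate simplification of the richer graphical models mentioned above.

```python
import random

def umda_step(selected, offspring_count):
    """One model-guided reproduction step on bit-list genomes.

    selected -- equal-length bit lists chosen by the selection phase
    """
    length = len(selected[0])
    # Estimate the marginal probability of a 1 at each position.
    p = [sum(g[i] for g in selected) / len(selected) for i in range(length)]
    # Sample offspring from the fitted model instead of doing crossover.
    return [[1 if random.random() < p[i] else 0 for i in range(length)]
            for _ in range(offspring_count)]
```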
Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms. There are many variants of Genetic Programming, including Cartesian genetic programming, Gene expression programming, grammatical evolution, Linear genetic programming, Multi expression programming etc.
Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, as in classical GAs, to groups or subsets of items. The idea behind this GA evolution, proposed by Emanuel Falkenauer, is that solving some complex problems, a.k.a. clustering or partitioning problems, where a set of items must be split into disjoint groups of items in an optimal way, is better achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth is arguably the best technique to date.
Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.
Swarm intelligence
Swarm intelligence is a sub-field of evolutionary computing.
Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas.
Although sometimes considered an estimation of distribution algorithm, particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and by the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. On some problems PSO is more computationally efficient than GAs, especially in unconstrained problems with continuous variables.
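A sketch of the particle update the paragraph describes; the inertia weight and the two attraction coefficients carry commonly used but assumed values.

```python
import random

def pso_update(position, velocity, personal_best, global_best,
               w=0.7, c1=1.5, c2=1.5):
    """One PSO step: inertia plus random pulls toward the particle's
    own best position and the swarm's best position (assumed constants)."""
    new_velocity = [
        w * v
        + c1 * random.random() * (pb - x)
        + c2 * random.random() * (gb - x)
        for x, v, pb, gb in zip(position, velocity, personal_best, global_best)
    ]
    new_position = [x + v for x, v in zip(position, new_velocity)]
    return new_position, new_velocity
```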
Other evolutionary computing algorithms
Evolutionary computation is a sub-field of metaheuristic methods.
Memetic algorithm (MA), often called hybrid genetic algorithm among other names, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which, unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, no single individual fits the whole environment, so one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.
Cultural algorithm (CA) consists of the population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
Differential evolution (DE) is inspired by the migration of superorganisms.
Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information. Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore, it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian while simultaneously keeping the mean fitness constant.
Other metaheuristic methods
Metaheuristic methods broadly fall within stochastic optimisation methods.
Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
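The Metropolis acceptance rule at the core of SA, sketched for a minimization ("energy") setting; the cooling schedule hinted at in the comment is one common choice, not a canonical one.

```python
import math
import random

def sa_accept(current_energy, candidate_energy, temperature):
    """Always accept improvements; accept worsening moves with
    probability exp(-delta / T), the Metropolis criterion."""
    delta = candidate_energy - current_energy
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

# A typical (assumed) geometric cooling schedule:
#   temperature *= 0.995 after every iteration
```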
Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
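A minimal tabu search loop matching the description above; the neighbourhood generator, the iteration budget, and the tabu-list length are placeholders, and the tabu list here stores whole solutions rather than partial moves for simplicity.

```python
from collections import deque

def tabu_search(start, neighbours, energy, max_iters=500, tabu_len=50):
    """Move to the best non-tabu neighbour each step, even if worse."""
    current, best = start, start
    tabu = deque([start], maxlen=tabu_len)
    for _ in range(max_iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=energy)   # lowest-energy neighbour
        tabu.append(current)
        if energy(current) < energy(best):
            best = current
    return best
```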
Extremal optimization (EO) differs from GAs, which work with a population of candidate solutions: EO evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.
Other stochastic optimisation methods
The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics.
See also
Genetic programming
List of genetic algorithm applications
Genetic algorithms in signal processing (a.k.a. particle filters)
Propagation of schema
Universal Darwinism
Metaheuristics
Learning classifier system
Rule-based machine learning
References
Bibliography
Rechenberg, Ingo (1994): Evolutionsstrategie '94, Stuttgart: Fromman-Holzboog.
Schwefel, Hans-Paul (1974): Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
External links
Resources
Provides a list of resources in the genetic algorithms field
An Overview of the History and Flavors of Evolutionary Algorithms
Tutorials
Genetic Algorithms - Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand. An excellent introduction to GAs by John Holland, with an application to the Prisoner's Dilemma.
An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works: Learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints.
A Genetic Algorithm Tutorial by Darrell Whitley, Computer Science Department, Colorado State University. An excellent tutorial with much theory.
"Essentials of Metaheuristics", 2009 (225 p). Free open text by Sean Luke.
Global Optimization Algorithms – Theory and Application
Genetic Algorithms in Python Tutorial with the intuition behind GAs and Python implementation.
A genetic algorithm evolves strategies for the prisoner's dilemma. Written by Robert Axelrod.
Search algorithms
Cybernetics
| Genetic algorithm | Biology | 6,894
54,508,765 | https://en.wikipedia.org/wiki/NGC%207068 | NGC 7068 is a spiral galaxy located about 215 million light-years away in the constellation of Pegasus. NGC 7068 was discovered by astronomer Albert Marth on November 7, 1863.
On June 26, 2013 a Type Ia supernova designated as SN 2013ei was discovered in NGC 7068.
References
External links
Spiral galaxies
Pegasus (constellation)
7068
66765
Astronomical objects discovered in 1863 | NGC 7068 | Astronomy | 85 |
38,246,962 | https://en.wikipedia.org/wiki/Railway%20Tie%20Association | The Railway Tie Association (RTA) is a trade association in the railroad and rail transit industry. The purpose of the RTA is to promote the economical and environmentally sound use of wood crossties. The RTA is involved in research into crosstie design and ongoing activities dealing with sound forest management, conservation of timber resources, timber processing, wood preservation, environmentally sound used tie disposal, and safety of industry workers. The Association's mission statement is: "Our mission since 1919 has been to ensure that the engineered wood crosstie system continues to evolve and improve in order to remain cost-effective and to meet the ever-changing requirements of track systems around the world."
History
Early railroads
The first railroads in the United States were constructed in the early 1830s. These railroads mounted track made of strips of iron secured to wood stringers onto stone blocks. The first recorded use of wood railroad ties is in 1832, when Robert Stevens, president of the Camden and Amboy Railroad in New Jersey, substituted wood ties for stone due to slow deliveries of stone ties from Sing Sing prison in New York. Hewn wood crossties caught on quickly and were cut from trees along the railroad right-of-way. Tie hackers used a crosscut saw and a broadaxe to hand-hew railroad ties until they were phased out by sawmills by the early 1940s. The advent of steam power and then gasoline engines allowed sawmills to operate efficiently and on site as needed, making tie hacking obsolete over time.
Pressure treatment
The crosstie industry began to employ pressure treating as a means of prolonging tie life beginning in the mid-1800s. The first crossties were treated in 1838 with an infusion of bichloride of mercury and laid on the Northern Central railroad in Maryland. The first permanent treating facility began operations in 1848 in Lowell, Massachusetts using alternately bichloride of mercury and chloride of zinc. Tie treatments continue to evolve today with new research and methods. Railroad and aviation engineer Octave Chanute, who was instrumental in the use of preservatives for ties, is credited with the introduction of the date nail to keep track of the life of treated railroad ties in track.
The Tie Industry
Railroad development kept pace with the expanding frontier in the United States after the American Civil War, creating a burgeoning need for new railroad ties. Every mile of track required about 2,500-3,500 crossties. Trains became heavier and faster, and the railroads found it was less expensive to add more ties per mile than to buy heavier rail. During World War I, President Woodrow Wilson created the United States Railroad Administration to support the financially weak railroad industry. The USRA took control of pricing and standardization of crosstie sizes. Following World War I, tie demand contracted as railroads consolidated lines and used more preserved wood ties, and the Great Depression drove many railroads into bankruptcy. During the Great Depression, the Federal Government of the United States again stepped in to control the price of forest products, including wood ties. During World War II, tie demand rose again as the war effort created a need for more track and thus more ties. Tie demand has seen its share of ebb and flow, but remains fairly constant as ties come to the end of their useful life in track and are retired to other uses.
Foundation of the Association
The RTA was founded in St. Louis, Missouri in 1919 as the National Association of Railroad Tie Producers. The first annual meeting for the association was held at the Hotel Statler in St. Louis on January 30 and 31, 1919. John W. Fristoe was the first president. In his inaugural speech he stated, "I hope that you gentlemen will succeed in forming your organization; that when it is formed, the first effort you make will be to develop economies; not how to get more for your stock, but how to produce it for less. How to pay your labor well, how they may derive a part of it...We have got to get down to the simplest form of business in which the producer comes closest to the consumer, and in that way wipe out unnecessary expenses." The Association still values these same ideals. The name was changed on July 26, 1932 to The Railway Tie Association.
Committees
At the beginning of 2013 the RTA had the following active committees:
Committee for Legislative & Environmental Affairs Response (CLEAR)
Education Committee
Executive Committee
Manufacturing Safety & Material Handling
Research and Development
Tie Disposal Task Force (Subcommittee to R&D)
Activities
In 2012, the RTA added a members only section to its website where members can interact with each other during the year. Members use this area to discuss market trends, manufacturing and safety, Legislative action, news, education, environmental and recycling concerns, product questions, and offer things for sale or trade.
Meetings
In or around October, the RTA holds a Symposium and Technical Conference. Around 300 people attend the annual Conference.
In the spring The RTA conducts a Field Trip to various industry-specific sites to help members better understand different aspects of the wood crosstie industry.
In the summer The RTA holds a Tie Grading Seminar dedicated to member education about wood crosstie specifications.
In March, the RTA participates in the annual Railroad Day on Capitol Hill event with the American Short Line and Regional Railroad Association, the Association of American Railroads, and the National Railroad Construction and Maintenance Association.
Scholarships
The RTA offers two scholarships to college students enrolled in Forestry programs accredited by The Society of American Foresters. The submission deadline is 30 June each year.
Industry recommended practices and publications
The RTA publishes industry specifications and news on its website and in hard copies available to the public and in various railroad industry publications.
The RTA publishes the bi-monthly magazine titled Crossties, which covers all aspects of the wood crosstie industry; The Tie Guide, in English and Spanish, which offers specification and treatment information; and The Tie Specifications booklet for quick reference.
Ongoing research
The RTA partners with organizations such as the Association of American Railroads' subsidiary Transportation Technology Center, Inc. (TTCI), University of Delaware, University of Illinois at Urbana-Champaign, Mississippi State University, American Wood Protection Association, and others to conduct ongoing research into processes intended to extend the life of wood railroad ties.
References
External links
Official website
Railway associations
Transportation organizations based in the United States
Transport industry associations
Construction organizations | Railway Tie Association | Engineering | 1,271 |
143,338 | https://en.wikipedia.org/wiki/Land%20%28economics%29 | In economics, land comprises all naturally occurring resources as well as geographic land. Examples include particular geographical locations, mineral deposits, forests, fish stocks, atmospheric quality, geostationary orbits, and portions of the electromagnetic spectrum. Supply of these resources is fixed.
Factor of production
Land is considered one of the three factors of production (also sometimes called the three producer goods) along with capital, and labor. Natural resources are fundamental to the production of all goods, including capital goods. While the particular role of land in the economy was extensively debated in classical economics it played a minor role in the neoclassical economics dominant in the 20th century. Income derived from ownership or control of natural resources is referred to as rent.
Ownership
Because no man created the land, it does not have a definite original proprietor, owner or user.
Consequently, conflicting claims on geographic locations and mineral deposits have historically led to disputes over their economic rent and contributed to many civil wars and revolutions.
In the context of geographic locations the resulting conflict is regularly understood as the land question (see e.g. United Kingdom, South Africa, Canada).
Addressing the land question
Land reform
Land reform programs are designed to redistribute possession and/or use of geographic land.
Georgism
Georgists hold that the fixed supply of land implies a perfectly inelastic supply curve (i.e., zero elasticity), suggesting that a land value tax that recovers the rent of land for public purposes would not affect the opportunity cost of using land, but would instead only decrease the value of owning it. This view is supported by evidence that although land can come on and off the market, market inventories of land show, if anything, an inverse relationship to price (i.e., negative elasticity).
Significance
Land plays a vital role in advanced economies. In the UK, the "non-produced asset of land" accounts for 51% of the country's total net worth, implying that it plays a more critical role in the economy than capital.
Academic
Some United Kingdom and commonwealth universities offer courses in land economy, where economics is studied alongside law, business regulation, surveying, and the built and natural environments. This mode of study at Cambridge dates back to 1917 when William Cecil Dampier suggested the creation of a school of rural economy at the university.
Accounting
As a tangible asset, land is represented in accounting as a fixed asset or a capital asset.
Sustainability
The sustainable use of land is the focus of some economic theories.
See also
References
Further reading
Anthony C. Fisher (1987). "Natural resources," The New Palgrave: A Dictionary of Economics, v. 3, pp. 612–14.
João Pedro Galhano Alves (2009). "The artificial simulacrum world. The geopolitical elimination of land use and its effects on our present global condition", Eloquent Books, New York, USA, 71 pp.
Pierre Coulomb (1994). "De la terre à l’état: Eléments pour un cours de politique agricole", ENGREF, INRA-ESR Laboratoire d’Economie des Transitions, Montpellier, France, 47 pp.
Environmental economics
Urban economics
Factors of production
Georgism
Land value taxation
Natural resources
Production economics | Land (economics) | Environmental_science | 662 |
66,540,143 | https://en.wikipedia.org/wiki/Glossary%20of%20industrial%20automation | This glossary of industrial automation is a list of definitions of terms and illustrations related specifically to the field of industrial automation. For a more general view on electric engineering, see Glossary of electrical and electronics engineering. For terms related to engineering in general, see Glossary of engineering.
A
See also
Glossary of engineering
Glossary of power electronics
Glossary of civil engineering
Glossary of mechanical engineering
Glossary of structural engineering
Notes
References
Attribution
External links
Websites
Glossary of Industrial Automation
Automation Glossary of terms
Glossary of technical terms commonly used by ABB
An automation glossary
Glossary - Industrial Electronic/Electrical Terms
Robotics Glossary: a Guide to Terms and Technologies
PDFs
Glossary of Terms used in Programmable Controller-based Systems
Glossary of Terms for Process Control
INDUSTRY 4.0: Glossary of terms/buzzwords/jargon
Electrical engineering
Electronic engineering
Industrial automation
Wikipedia glossaries using description lists | Glossary of industrial automation | Technology,Engineering | 187 |
1,339,615 | https://en.wikipedia.org/wiki/Stopping%20power | Stopping power is the ability of a weapon – typically a ranged weapon such as a firearm – to cause a target (human or animal) to be incapacitated or immobilized. Stopping power contrasts with lethality in that it pertains only to a weapon's ability to make the target cease action, regardless of whether or not death ultimately occurs. Which ammunition cartridges have the greatest stopping power is a much-debated topic.
Stopping power is related to the physical properties and terminal behavior of the projectile (bullet, shot, or slug), the biology of the target, and the wound location, but the issue is complicated and not easily studied. Although higher-caliber ammunition usually has greater muzzle energy and momentum, and has thus traditionally been widely associated with higher stopping power, the physics involved is multifactorial, with caliber, muzzle velocity, bullet mass, bullet shape and bullet material all contributing to the ballistics.
Despite much disagreement, the most popular theory of stopping power is that it is usually caused not by the force of the bullet but by the wounding effects of the bullet, which are typically a rapid loss of blood causing a circulatory failure, which leads to impaired motor function and/or unconsciousness. The "Big Hole School" and the principles of penetration and permanent tissue damage are in line with this way of thinking. The other prevailing theories focus more on the energy of the bullet and its effects on the nervous system, including hydrostatic shock and energy transfer, which is similar to kinetic energy deposit.
History
The concept of stopping power appeared at the tail end of the 19th century, when colonial troops (including American troops in the Philippines during the Moro Rebellion, and British soldiers during the New Zealand Wars) at close quarters found that their pistols were not able to stop charging native tribesmen. This led to the introduction or reintroduction of larger caliber weapons (such as the older .45 Colt and the newly developed .45 ACP) capable of stopping opponents with a single round.
During the Seymour Expedition in China, at one of the battles at Langfang, Chinese Boxers, armed with swords and spears, conducted a massed infantry charge against the forces of the Eight-Nation Alliance, who were equipped with rifles. At point-blank range, a British soldier had to fire four .303 Lee-Metford bullets into a Boxer before he stopped charging. U.S. Army officer Bowman McCalla reported that single rifle shots were not enough: multiple rifle shots were needed to halt a Boxer. Only machine guns were effective in immediately stopping the Boxers.
In the Moro Rebellion, Moro Muslim Juramentados in suicide attacks continued to charge against American soldiers even after being shot. Panglima Hassan in the Hassan uprising had to be shot dozens of times before he died. This forced the Americans to phase out .38 Long Colt revolvers and start using .45 Colt against the Moros.
British troops used expanding bullets during various conflicts in the Northwest Frontier in India, and the Mahdist War in Sudan. The British government voted against a prohibition on their use at the Hague Convention of 1899, although the prohibition only applied to international warfare.
In response to addressing stopping power issues, the Mozambique Drill was developed to maximize the likelihood of a target's quick incapacitation.
"Manstopper" is an informal term used to refer to any combination of firearm and ammunition that can reliably incapacitate, or "stop", a human target immediately. For example, the .45 ACP round and the .357 Magnum round both have firm reputations as "manstoppers". Historically, one type of ammunition has had the specific tradename "Manstopper". Officially known as the Mk III cartridge, these were made to suit the British Webley .455 service revolver in the early 20th century. The ammunition used a cylindrical bullet with hemispherical depressions at both ends. The front acted as a hollow point deforming on impact while the base opened to seal the round in the barrel. It was introduced in 1898 for use against "savage foes", but fell quickly from favor due to concerns of breaching the Hague Convention's international laws on military ammunition, and was replaced in 1900 by re-issued Mk II pointed-bullet ammunition.
Some sporting arms are also referred to as "stoppers" or "stopping rifles". These powerful arms are often used by game hunters (or their guides) for stopping a suddenly charging animal, like a buffalo or an elephant.
Dynamics of bullets
A bullet will destroy or damage any tissues which it penetrates, creating a wound channel. It will also cause nearby tissue to stretch and expand as it passes through tissue. These two effects are typically referred to as permanent cavity (the track left by the bullet as it penetrates flesh) and temporary cavity, which, as the name implies, is the temporary (instantaneous) displacement caused as the bullet travels through flesh, and is many times larger than the actual diameter of the bullet. These phenomena are unrelated to low-pressure cavitation in liquids.
The degree to which permanent and temporary cavitation occur is dependent on the mass, diameter, material, design and velocity of the bullet. This is because bullets crush tissue, and do not cut it. A bullet constructed with a half diameter ogive designed meplat and hard, solid copper alloy material may crush only the tissue directly in front of the bullet. This type of bullet (monolithic-solid rifle bullet) is conducive to causing more temporary cavitation as the tissue flows around the bullet, resulting in a deep and narrow wound channel. A bullet constructed with a two diameter, hollow point ogive designed meplat and low-antimony lead-alloy core with a thin gilding metal jacket material will crush tissue in front and to the sides as the bullet expands. Due to the energy expended in bullet expansion, velocity is lost more quickly. This type of bullet (hollow-point hand gun bullet) is conducive to causing more permanent cavitation as the tissue is crushed and accelerated into other tissues by the bullet, causing a shorter and wider wound channel. The exception to this general rule is non-expanding bullets which are long relative to their diameter. These tend to destabilize and yaw (tumble) soon after impact, increasing both temporary and permanent cavitation.
Bullets are constructed to behave in different ways, depending on the intended target. Different bullets are constructed variously to: not expand upon impact, expand upon impact at high velocity, expand upon impact, expand across a broad range of velocities, expand upon impact at low velocity, tumble upon impact, fragment upon impact, or disintegrate upon impact.
To control the expansion of a bullet, meplat design and materials are engineered. The meplat designs are: flat; round to pointed depending on the ogive; hollow pointed which can be large in diameter and shallow or narrow in diameter and deep and truncated which is a long narrow punched hole in the end of a monolithic-solid type bullet. The materials used to make bullets are: pure lead; alloyed lead for hardness; gilding metal jacket which is a copper alloy of nickel and zinc to promote higher velocities; pure copper; copper alloy of bronze with tungsten steel alloy inserts to promote weight.
Some bullets are constructed by bonding the lead core to the jacket to promote higher weight retention upon impact, causing a larger and deeper wound channel. Some bullets have a web in the center of the bullet to limit the expansion of the bullet while promoting penetration. Some bullets have dual cores to promote penetration.
Bullets that might be considered to have stopping power for dangerous large game animals are usually 11.63 mm (.458 caliber) and larger, including 12-gauge shotgun slugs. These bullets are monolithic-solids; full metal jacketed and tungsten steel insert. They are constructed to hold up during close range, high velocity impacts. These bullets are expected to impact and penetrate, and transfer energy to the surrounding tissues and vital organs through the entire length of a game animal's body if need be.
The stopping power of firearms when used against humans is a more complex subject, in part because many persons voluntarily cease hostile actions when shot; they either flee, surrender, or fall immediately. This is sometimes referred to as "psychological incapacitation".
Physical incapacitation is primarily a matter of shot location; most persons who are shot in the head are immediately incapacitated, and most who are shot in the extremities are not, regardless of the firearm or ammunition involved. Shotguns will usually incapacitate with one shot to the torso, but rifles and especially handguns are less reliable, particularly those which do not meet the FBI's penetration standard, such as .25ACP, .32 S&W, and rimfire models. More powerful handguns may or may not meet the standard, or may even overpenetrate, depending on what ammunition is used.
Fully jacketed bullets penetrate deeply without much expansion, while soft or hollow point bullets create a wider, shallower wound channel. Pre-fragmented bullets such as Glaser Safety Slugs and MagSafe ammunition are designed to fragment into birdshot on impact with the target. This fragmentation is intended to create more trauma to the target, and also to reduce collateral damage caused by ricochets or overpenetration of the target and the surrounding environment, such as walls. Fragmenting rounds have been shown to be unlikely to achieve the deep penetration necessary to disrupt vital organs located at the back of a hostile human.
Wounding effects
Physical
Permanent and temporary cavitation cause very different biological effects. A hole through the heart will cause loss of pumping efficiency, loss of blood, and eventual cardiac arrest. A hole through the liver or lung will be similar, with the lung shot having the added effect of reducing blood oxygenation; these effects however are generally slower to arise than damage to the heart. A hole through the brain can cause instant unconsciousness and will likely kill the recipient. A hole through the spinal cord will instantly interrupt the nerve signals to and from some or all extremities, disabling the target and in many cases also resulting in death (as the nerve signals to and from the heart and lungs are interrupted by a shot high in the chest or to the neck). By contrast, a hole through an arm or leg which hits only muscle will cause a great deal of pain but is unlikely to be fatal, unless one of the large blood vessels (femoral or brachial arteries, for example) is also severed in the process.
The effects of temporary cavitation are less well understood, due to a lack of a test material identical to living tissue. Studies on the effects of bullets typically are based on experiments using ballistic gelatin, in which temporary cavitation causes radial tears where the gelatin was stretched. Although such tears are visually engaging, some animal tissues (but not bone or liver) are more elastic than gelatin. In most cases, temporary cavitation is unlikely to cause anything more than a bruise. Some speculation states that nerve bundles can be damaged by temporary cavitation, creating a stun effect, but this has not been confirmed.
One exception to this is when a very powerful temporary cavity intersects with the spine. In this case, the resulting blunt trauma can slam the vertebrae together hard enough to either sever the spinal cord, or damage it enough to knock out, stun, or paralyze the target. For instance, in the shootout between eight FBI agents and two bank robbers in the 1986 FBI Miami shootout, Special Agent Gordon McNeill was struck in the neck by a high-velocity .223 bullet fired by Michael Platt. While the bullet did not directly contact the spine, and the wound incurred was not ultimately fatal, the temporary cavitation was sufficient to render SA McNeill paralyzed for several hours. Temporary cavitation may similarly fracture the femur if it is narrowly missed by a bullet.
Temporary cavitation can also cause the tearing of tissues if a very large amount of force is involved. The tensile strength of muscle ranges roughly from 1 to 4 MPa (145 to 580 lbf/in2), and minimal damage will result if the pressure exerted by the temporary cavitation is below this. Gelatin and other less elastic media have much lower tensile strengths, thus they exhibit more damage after being struck with the same amount of force. At typical handgun velocities, bullets will create temporary cavities with much less than 1 MPa of pressure, and thus are incapable of causing damage to elastic tissues that they do not directly contact.
Rifle bullets that strike a major bone (such as a femur) can expend their entire energy into the surrounding tissue. The struck bone is commonly shattered at the point of impact.
High velocity fragmentation can also increase the effect of temporary cavitation. The fragments sheared from the bullet cause many small permanent cavities around the main entry point. The main mass of the bullet can then cause a truly massive amount of tearing as the perforated tissue is stretched.
Whether a person or animal will be incapacitated (i.e. "stopped") when shot, depends on a large number of factors, including physical, physiological, and psychological effects.
Neurological
The only way to immediately incapacitate a person or animal is to damage or disrupt their central nervous system (CNS) to the point of paralysis, unconsciousness, or death. Bullets can achieve this directly or indirectly. If a bullet causes sufficient damage to the brain or spinal cord, immediate loss of consciousness or paralysis, respectively, can result. However, these targets are relatively small and mobile, making them extremely difficult to hit even under optimal circumstances.
Bullets can indirectly disrupt the CNS by damaging the cardiovascular system so that it can no longer provide enough oxygen to the brain to sustain consciousness. This can be the result of bleeding from a perforation of a large blood vessel or blood-bearing organ, or the result of damage to the lungs or airway. If blood flow is completely cut off from the brain, a human still has enough oxygenated blood in their brain for 10–15 seconds of wilful action, though with rapidly decreasing effectiveness as the victim begins to lose consciousness.
Unless a bullet directly damages or disrupts the central nervous system, a person or animal will not be instantly and completely incapacitated by physiological damage. However, bullets can cause other disabling injuries that prevent specific actions (a person shot in the femur cannot run) and the physiological pain response from severe injuries will temporarily disable most individuals.
Several scientific papers reveal ballistic pressure wave effects on wounding and incapacitation, including central nervous system injuries from hits to the thorax and extremities. These papers document remote wounding effects for both rifle and pistol levels of energy transfer.
Recent work by Courtney and Courtney provides compelling support for the role of a ballistic pressure wave in creating remote neural effects leading to incapacitation and injury. This work builds upon the earlier works of Suneson et al. where the researchers implanted high-speed pressure transducers into the brain of pigs and demonstrated that a significant pressure wave reaches the brain of pigs shot in the thigh. These scientists observed neural damage in the brain caused by the distant effects of the ballistic pressure wave originating in the thigh. The results of Suneson et al. were confirmed and expanded upon by a later experiment in dogs which "confirmed that distant effect exists in the central nervous system after a high-energy missile impact to an extremity. A high-frequency oscillating pressure wave with large amplitude and short duration was found in the brain after the extremity impact of a high-energy missile ..." Wang et al. observed significant damage in both the hypothalamus and hippocampus regions of the brain due to remote effects of the ballistic pressure wave.
Psychological
Emotional shock, terror, or surprise can cause a person to faint, surrender, or flee when shot or shot at. There are many documented instances where people have instantly dropped unconscious when the bullet only hit an extremity, or even completely missed. Additionally, the muzzle blast and flash from many firearms are substantial and can cause disorientation, dazzling, and stunning effects. Flashbangs (stun grenades) and other less-lethal "distraction devices" rely exclusively on these effects.
Pain is another psychological factor, and can be enough to dissuade a person from continuing their actions.
Temporary cavitation can emphasize the impact of a bullet, since the resulting tissue compression is identical to simple blunt force trauma. It is easier for someone to feel when they have been shot if there is considerable temporary cavitation, and this can contribute to either psychological factor of incapacitation.
However, if a person is sufficiently enraged, determined, or intoxicated, they can simply shrug off the psychological effects of being shot. During the colonial era, when native tribesmen came into contact with firearms for the first time, there was no psychological conditioning that being shot could be fatal, and most colonial powers eventually sought to create more effective manstoppers.
Therefore, such effects are not as reliable as physiological effects at stopping people. Animals will not faint or surrender if injured, though they may become frightened by the loud noise and pain of being shot, so psychological mechanisms are generally less effective against non-humans.
Penetration
According to Dr. Martin Fackler and the International Wound Ballistics Association (IWBA), between of penetration in calibrated tissue simulant is optimal performance for a bullet which is meant to be used defensively, against a human adversary. They also believe that penetration is one of the most important factors when choosing a bullet (and that the number one factor is shot placement). If the bullet penetrates less than their guidelines, it is inadequate, and if it penetrates more, it is still satisfactory though not optimal. The FBI's penetration requirement is very similar at .
A penetration depth of may seem excessive, but a bullet sheds velocity—and crushes a narrower hole—as it penetrates deeper, so the bullet might be crushing a very small amount of tissue (simulating an "ice pick" injury) during its last two or three inches of travel, giving only between of effective wide-area penetration. Also, skin is elastic and tough enough to cause a bullet to be retained in the body, even if the bullet had a relatively high velocity when it hit the skin. About velocity is required for an expanded hollow point bullet to puncture skin 50% of the time.
The IWBA's and FBI's penetration guidelines are to ensure that the bullet can reach a vital structure from most angles, while retaining enough velocity to generate a large diameter hole through tissue. An extreme example where penetration would be important is if the bullet first had to enter and then exit an outstretched arm before impacting the torso. A bullet with low penetration might embed itself in the arm whereas a higher penetrating bullet would penetrate the arm then enter the thorax where it would have a chance of hitting a vital organ.
Overpenetration
Excessive penetration or overpenetration occurs when a bullet passes through its intended target and out of the other side, with enough residual kinetic energy to continue flying as a stray projectile and risk causing unintended collateral damage to objects or persons beyond. This happens because the bullet has not released all its energy within the target, according to the energy transfer hypothesis.
Other hypotheses
These hypotheses are a matter of some debate among scientists in the field:
Energy transfer
The energy transfer hypothesis states that for small arms in general, the more energy transferred to the target, the greater the stopping power. It postulates that the pressure wave exerted on soft tissues by the bullet's temporary cavity hits the nervous system with a jolt of shock and pain and thereby forces incapacitation.
Proponents of this theory contend that the incapacitation effect is similar to that seen in non-concussive blunt-force trauma events, such as a knock-out punch to the body, a football player "shaken up" as result of a hard tackle, or a hitter being struck by a fastball. Pain in general has an inhibitory and weakening effect on the body, causing a person under physical stress to take a seat or even collapse. The force put on the body by the temporary cavity is supersonic compression, like the lash of a whip. While the lash only affects a short line of tissue across the back of the victim, the temporary cavity affects a volume of tissue roughly the size and shape of a football. Giving further credence to this theory is the support from the aforementioned effects of drugs on incapacitation. Pain killers, alcohol, and PCP have all been known to decrease the effects of nociception and increase a person's resistance to incapacitation, all while having no effect on blood loss.
Kinetic energy is a function of the bullet's mass and the square of its velocity. Generally speaking, it is the intention of the shooter to deliver an adequate amount of energy to the target via the projectiles. All else held equal, bullets that are light and fast tend to have more energy than those that are heavy and slow.
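In symbols (a standard identity, not specific to any cartridge), kinetic energy grows with the square of velocity while momentum grows only linearly:

```latex
E_k = \tfrac{1}{2} m v^2, \qquad p = m v
```

Doubling a bullet's velocity therefore doubles its momentum but quadruples its energy, which is why light, fast bullets can carry more energy than heavy, slow ones.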
Over-penetration is detrimental to stopping power in regards to energy. This is because a bullet that passes through the target does not transfer all of its energy to the target. Lighter bullets tend to have less penetration in soft tissue and therefore are less likely to over-penetrate. Expanding bullets and other tip variations can increase the friction of the bullet through soft tissue, and/or allow internal ricochets off bone, therefore helping prevent over-penetration.
Non-penetrating projectiles can also possess stopping power and give support to the energy transfer hypothesis. Notable examples of projectiles designed to deliver stopping power without target penetration are Flexible baton rounds (commonly known as "beanbag bullets") and the rubber bullet, types of reduced-lethality ammunition.
The force exerted by a projectile upon tissue is equal to the bullet's local rate of kinetic energy loss with distance (the first derivative of the bullet's kinetic energy with respect to position). The ballistic pressure wave is proportional to this retarding force (Courtney and Courtney), and this retarding force is also the origin of both temporary cavitation and prompt damage (CE Peters).
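Written out, the relation above states that the retarding force equals the spatial rate of kinetic-energy loss along the wound track:

```latex
F(x) = -\frac{dE_k}{dx}
```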
Hydrostatic shock
Hydrostatic shock is a controversial theory of terminal ballistics that states a penetrating projectile (such as a bullet) can produce a sonic pressure wave that causes "remote neural damage", "subtle damage in neural tissues" and/or "rapid incapacitating effects" in living targets. Proponents of the theory contend that damage to the brain from hydrostatic shock from a shot to the chest occurs in humans with most rifle cartridges and some higher-velocity handgun cartridges. Hydrostatic shock is not the shock from the temporary cavity itself, but rather the sonic pressure wave that radiates away from its edges through static soft tissue.
Knockback
The idea of "knockback" implies that a bullet can have enough force to stop the forward motion of an attacker and physically knock them backwards or downwards. It follows from the law of conservation of momentum that any "knockback" could never exceed the recoil felt by the shooter, and it is therefore of no practical use as a weapon effect. The myth of "knockback" has been spread through its confusion with the phrase "stopping power" as well as by many films, which show bodies flying backward after being shot.
The idea of knockback was first widely expounded in ballistics discussions during American involvement in Philippine insurrections and, simultaneously, in British conflicts in its colonial empire, when front-line reports stated that the .38 Long Colt caliber revolvers carried by U.S. and British soldiers were incapable of bringing down a charging warrior. Thus, in the early 1900s, the U.S. reverted to the .45 Colt in single action revolvers, and later adopted the .45 ACP cartridge in what was to become the M1911A1 pistol, and the British adopted the .455 Webley caliber cartridge in the Webley Revolver. The larger cartridges were chosen largely due to the Big Hole Theory (a larger hole does more damage), but the common interpretation was that these were changes from a light, deeply penetrating bullet to a larger, heavier "manstopper" bullet.
Though popularized in television and movies, and commonly referred to as "true stopping power" by uneducated proponents of large powerful calibers such as .44 Magnum, the knockback effect from a handgun, and indeed from most personal weapons, is largely a myth. The momentum of the so-called "manstopper" .45 ACP bullet is roughly that of a hard-thrown baseball, a force simply incapable of arresting a running target's forward momentum. In addition, bullets are designed to penetrate rather than strike a blunt-force blow, because penetration causes more severe tissue damage. A bullet with sufficient energy to knock down an assailant, such as a high-speed rifle bullet, would be more likely to pass straight through, transferring only a very small percentage of its energy to the victim. Most energy from a fully stopped rifle round instead goes into forming the temporary cavity and destroying the round itself, the wound channel, and some of the surrounding tissue. There is no physical principle preventing a hypervelocity round from causing a splash injury in which the ejecta create a rocket-like impulse on their way out, producing knockback, and no principle preventing a similar effect at an exit wound producing "knockforward"; however, this is still nowhere near the impulse required to stop a sprinting person or knock them over through pure momentum.
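A back-of-the-envelope momentum comparison (the figures are illustrative assumptions, not values from this article) makes the scale of the effect concrete:

```latex
p_{\mathrm{bullet}} = m v \approx (0.0149\,\mathrm{kg})(260\,\mathrm{m/s}) \approx 3.9\,\mathrm{kg\,m/s}
p_{\mathrm{runner}} \approx (80\,\mathrm{kg})(5\,\mathrm{m/s}) = 400\,\mathrm{kg\,m/s}
```

Even if the bullet transferred all of its momentum, it would change the runner's speed by only about 0.05 m/s, roughly one percent of the runner's speed.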
Sometimes "knockdown power" is a phrase used interchangeably with "knockback", while other times it's used interchangeably with "stopping power". The misuse and fluid meaning of these phrases have done their part in confusing the issue of stopping power. The ability of a bullet to "knock down" a metal or otherwise inanimate target falls under the category of momentum, as explained above, and has little correlation with stopping power.
One-shot stop
This hypothesis, promoted by Evan P. Marshall, is based on statistical analysis of actual shooting incidents from various reporting sources (typically police agencies). It is intended to be used as a unit of measurement and not as a tactical philosophy, as mistakenly believed by some. It considers the history of shooting incidents for a given factory ammunition load and compiles the percentage of "one-shot-stops" achieved with each specific ammunition load. That percentage is then intended to be used with other information to help predict the effectiveness of that load getting a "one-shot-stop". For example, if an ammunition load is used in 10 torso shootings, incapacitating all but two with one shot, the "one-shot-stop" percentage for the total sample would be 80%.
Some argue that this hypothesis ignores any inherent selection bias. For example, high-velocity 9×19mm Parabellum hollow point rounds appear to have the highest percentage of one-shot stops. Rather than identifying this as an inherent property of the firearm/bullet combination, the situations where these have occurred need to be considered. The 9mm has been the predominantly used caliber of many police departments, so many of these one-shot-stops were probably made by well-trained police officers, where accurate placement would be a contributory factor. However, Marshall's database of "one-shot-stops" does include shootings from law enforcement agencies, private citizens, and criminals alike.
Critics of this theory point out that bullet placement is a very significant factor but is only coarsely accounted for in such one-shot-stop statistics, which typically lump together all shots to the torso. Others contend that the importance of "one-shot stop" statistics is overstated, pointing out that most gun encounters do not involve a "shoot once and see how the target reacts" situation. Proponents counter that studying one-shot situations is the best way to compare cartridges, as comparing a person shot once to a person shot twice does not maintain a control and so has little value.
Big hole school
This school of thought says that the bigger the hole in the target, the higher the rate of bleed-out and thus the higher the rate of the aforementioned "one-shot stop". Because the bullet ideally does not pass entirely through the body, this theory incorporates both the energy-transfer and over-penetration considerations. Those who support this theory cite the .40 S&W round, arguing that it has a better ballistic profile than the .45 ACP and more stopping power than the 9mm.
The theory centers on the "permanent cavitation" element of a handgun wound. A big hole damages more tissue. It is therefore valid to a point, but penetration is also important, as a large bullet that does not penetrate will be less likely to strike vital blood vessels and blood-carrying organs such as the heart and liver, while a smaller bullet that penetrates deep enough to strike these organs or vessels will cause faster bleed-out through a smaller hole. The ideal may therefore be a combination: a large bullet that penetrates deeply, which can be achieved with a larger, slower non-expanding bullet, or a smaller, faster expanding bullet such as a hollow point.
In the extreme, a heavier bullet (which retains more momentum than a lighter bullet of the same caliber) may "over-penetrate", passing completely through the target without expending all of its kinetic energy. So-called "over-penetration" is not an important consideration when it comes to wounding incapacitation or "stopping power" because: (a) while a lower proportion of the bullet's energy is transferred to the target, the absolute amount of energy shed can be higher than in a partial penetration, and (b) over-penetration creates an exit wound.
Other contributing factors
As mentioned earlier, there are many factors, such as drug and alcohol levels within the body, body mass index, mental illness, motivation levels, and gunshot location on the body which may determine which round will kill or at least catastrophically affect a target during any given situation.
See also
Table of handgun and rifle cartridges
Taylor knock-out factor
References
Notes
External links
What We Didn't Know Hurt Us (PDF)
One Shot Drops – Surviving the Myth
Ballistics | Stopping power | Physics | 6,265 |
29,413,418 | https://en.wikipedia.org/wiki/Bumping%20%28chemistry%29 | Bumping is a phenomenon in chemistry where homogeneous liquids boiled in a test tube or other container will superheat and, upon nucleation, rapid boiling will expel the liquid from the container. In extreme cases, the container may be broken.
Cause
Bumping occurs when a liquid is heated or has its pressure reduced very rapidly, typically in smooth, clean glassware. The hardest part of bubble formation is the initial formation of the bubble; once a bubble has formed, it can grow quickly. Because the liquid is typically above its boiling point, when the liquid finally starts to boil, a large vapor bubble is formed that pushes the liquid out of the test tube, typically at high speed. This rapid expulsion of boiling liquid poses a serious hazard to others and oneself in the lab. Furthermore, if a liquid is boiled and cooled back down, the chance of bumping increases on each subsequent boil, because each heating cycle progressively de-gasses the liquid, reducing the number of remaining nucleation sites.
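The difficulty of forming the initial bubble can be quantified with the Young–Laplace relation (a standard result, added here for illustration with assumed values): the vapor pressure inside a small spherical bubble must exceed the surrounding pressure by 2γ/r, which is enormous for microscopic bubbles.

```latex
\Delta P = \frac{2\gamma}{r}
% Water at 100 °C (γ ≈ 0.059 N/m), assumed bubble radius r = 100 nm:
\Delta P \approx \frac{2(0.059\,\mathrm{N/m})}{1\times10^{-7}\,\mathrm{m}} \approx 1.2\times10^{6}\,\mathrm{Pa} \approx 12\,\mathrm{atm}
```

This is why a smooth, degassed liquid can superheat well past its nominal boiling point: without pre-existing nucleation sites, only substantial superheating can sustain a newly formed microscopic bubble.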
Prevention
The most common way of preventing bumping is to add one or two boiling chips to the reaction vessel. However, these alone may not prevent bumping, and for this reason it is advisable to boil liquids in a boiling tube, a boiling flask, or an Erlenmeyer flask. In addition, test tubes being heated should never be pointed towards any person, in case bumping does occur. Whenever a liquid is cooled below its boiling point and re-heated to a boil, a new boiling chip will be needed, as the pores in the old boiling chip tend to fill with solvent, rendering it ineffective.
A sealed capillary tube can also be placed in a boiling solution to provide a nucleation site, reducing the bumping risk and allowing its easy removal from a system.
Stirring a liquid also lessens the chances of bumping, as the resulting vortex breaks up any large bubbles that might form, and the stirring itself creates bubbles.
References
Phase transitions
Chemical safety | Bumping (chemistry) | Physics,Chemistry | 404 |
1,357,593 | https://en.wikipedia.org/wiki/Sensitive%20high-resolution%20ion%20microprobe | The sensitive high-resolution ion microprobe (also sensitive high mass-resolution ion microprobe or SHRIMP) is a large-diameter, double-focusing secondary ion mass spectrometer (SIMS) sector instrument that was produced by Australian Scientific Instruments in Canberra, Australia and now has been taken over by Chinese company Dunyi Technology Development Co. (DTDC) in Beijing. Similar to the IMS 1270-1280-1300 large-geometry ion microprobes produced by CAMECA, Gennevilliers, France and like other SIMS instruments, the SHRIMP microprobe bombards a sample under vacuum with a beam of primary ions that sputters secondary ions that are focused, filtered, and measured according to their energy and mass.
The SHRIMP is primarily used for geological and geochemical applications. It can measure the isotopic and elemental abundances in minerals at a 10 to 30 μm-diameter scale and with a depth resolution of 1–5 μm. The SIMS method is thus well-suited to the analysis of complex minerals, as often found in metamorphic terrains and some igneous rocks, and to relatively rapid analysis of statistically valid sets of detrital minerals from sedimentary rocks. The most common application of the instrument is in uranium-thorium-lead geochronology, although the SHRIMP can also be used for other isotope-ratio measurements (e.g., δ7Li or δ11B) and for trace element abundances.
History and scientific impact
The SHRIMP originated in 1973 with a proposal by Prof. Bill Compston to build an ion microprobe at the Research School of Earth Sciences of the Australian National University that would exceed the sensitivity and resolution of ion probes then available, in order to analyse individual mineral grains. Optic designer Steve Clement based the prototype instrument (now referred to as 'SHRIMP-I') on a design by Matsuda which minimised aberrations in transmitting ions through the various sectors. The instrument was built between 1975 and 1977, with testing and redesign from 1978. The first successful geological applications occurred in 1980.
The first major scientific impact was the discovery of Hadean (>4000 million year old) zircon grains at Mt. Narryer in Western Australia and then later at the nearby Jack Hills. These results and the SHRIMP analytical method itself were initially questioned, but subsequent conventional analyses partially confirmed them. SHRIMP-I also pioneered ion microprobe studies of titanium, hafnium and sulfur isotopic systems.
Growing interest from commercial companies and other academic research groups, notably that of Prof. John de Laeter of Curtin University (Perth, Western Australia), led to a project in 1989 to build a commercial version of the instrument, the SHRIMP-II, in association with ANUTECH, the Australian National University's commercial arm. Refined ion optic designs in the mid-1990s prompted development and construction of the SHRIMP-RG (Reverse Geometry) with improved mass resolution. Further advances in design have also led to multiple ion collection systems (already introduced to the market by a French company years earlier), negative-ion stable isotope measurements, and ongoing work on a dedicated instrument for light stable isotopes.
Fifteen SHRIMP instruments have now been installed around the world, and SHRIMP results have been reported in more than 2000 peer-reviewed scientific papers. SHRIMP is an important tool for understanding early Earth history, having analysed some of the oldest terrestrial material, including the Acasta Gneiss, and further extended the age of zircons from the Jack Hills and the oldest impact crater on the planet. Other significant milestones include the first U/Pb ages for lunar zircon and Martian apatite dating. More recent uses include the determination of Ordovician sea surface temperature, the timing of snowball Earth events, and the development of stable isotope techniques.
Design and operation
Primary column
In a typical U-Pb geochronology analytical mode, a beam of (O2)1− primary ions is produced from a high-purity oxygen gas discharge in the hollow Ni cathode of a duoplasmatron. The ions are extracted from the plasma and accelerated at 10 kV. The primary column uses Köhler illumination to produce a uniform ion density across the target spot. The spot diameter can vary from ~5 μm to over 30 μm as required. Typical ion beam density on the sample is ~10 pA/μm2, and an analysis of 15–20 minutes creates an ablation pit of less than 1 μm.
Sample chamber
The primary beam is 45° incident to the plane of the sample surface, with secondary ions extracted at 90° and accelerated at 10 kV. Three quadrupole lenses focus the secondary ions onto a source slit; unlike other ion probe designs, the design aims to maximise transmission of ions rather than preserving an ion image. A Schwarzschild objective lens provides reflected-light direct microscopic viewing of the sample during analysis.
Electrostatic analyzer
The secondary ions are filtered and focussed according to their kinetic energy by a 1272 mm radius 90° electrostatic sector. A mechanically-operated slit provides fine-tuning of the energy spectrum transmitted into the magnetic sector and an electrostatic quadrupole lens is used to reduce aberrations in transmitting the ions to the magnetic sector.
Magnetic sector
The electromagnet has a 1000 mm radius through 72.5° and focuses the secondary ions according to their mass-to-charge ratio, following the principles of the Lorentz force. Essentially, the path of a less massive ion will have a greater curvature through the magnetic field than the path of a more massive ion. Thus, altering the current in the electromagnet focuses a particular mass species at the detector.
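The underlying relations are the standard ones for a magnetic sector (included here for clarity, not taken from a SHRIMP-specific source). Equating the Lorentz force to the centripetal force, and noting that all secondary ions are accelerated through the same potential V, the radius of curvature depends only on the mass-to-charge ratio:

```latex
qvB = \frac{mv^2}{r} \;\Rightarrow\; r = \frac{mv}{qB};
\qquad qV = \tfrac{1}{2}mv^2 \;\Rightarrow\; r = \frac{1}{B}\sqrt{\frac{2Vm}{q}}
```

With r fixed by the instrument geometry, sweeping the field B selects which m/q is brought to focus at the detector.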
Detectors
The ions pass through a collector slit in the focal plane of the magnetic sector and the collector assembly can be moved along an axis to optimise the focus of a given isotopic species. In typical U-Pb zircon analysis, a single secondary electron multiplier is used for ion counting.
Vacuum system
Turbomolecular pumps evacuate the entire beam path of the SHRIMP to maximise transmission and reduce contamination. The sample chamber also employs a cryopump to trap contaminants, especially water. Typical pressures inside the SHRIMP are between ~7 x 10−9 mbar in the detector and ~1 x 10−6 mbar in the primary column (with oxygen duoplasmatron source).
Mass resolution and sensitivity
In normal operations, the SHRIMP achieves mass resolution of 5000 with sensitivity >20 counts/sec/ppm/nA for lead from zircon.
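Using the conventional definition of mass resolution, R = m/Δm, this figure implies that near mass 206 the instrument can separate peaks differing by a few hundredths of an atomic mass unit (a worked illustration, not a specification from this article):

```latex
R = \frac{m}{\Delta m} \;\Rightarrow\; \Delta m = \frac{206}{5000} \approx 0.04\,\mathrm{u}
```

This is sufficient to resolve, for example, 206Pb+ from molecular interferences of the same nominal mass.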
Applications
Isotope dating
For U-Th-Pb geochronology, a beam of (O2)1− primary ions is accelerated and collimated towards the target, where it sputters "secondary" ions from the sample. These secondary ions are accelerated along the instrument, where the various isotopes of uranium, lead and thorium are measured successively, along with reference peaks for Zr2O+, ThO+ and UO+. Since the sputtering yield differs between ion species, and the relative sputtering yield increases or decreases with time depending on the ion species (due to increasing crater depth, charging effects and other factors), the measured relative isotopic abundances do not directly reflect the true relative isotopic abundances in the target. Corrections are determined by analysing unknowns together with reference material (matrix-matched material of known isotopic composition) and deriving a calibration factor specific to the analytical session.
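A minimal sketch of such a session calibration is shown below, assuming the common approach in which ln(Pb+/U+) is fitted as a linear function of ln(UO+/U+) on a reference zircon of known age; all variable names and numerical values are hypothetical illustrations, not data from this article:

```python
import numpy as np

# Hypothetical measured ion-count ratios on reference-zircon spots.
ref_pb_u = np.array([0.118, 0.125, 0.121, 0.130])  # 206Pb+/238U+ per spot
ref_uo_u = np.array([5.1, 5.6, 5.3, 5.9])          # UO+/U+ per spot
ref_true_pb_u = 0.0905  # radiogenic 206Pb/238U implied by the reference age

# Empirical calibration line: ln(Pb/U) vs. ln(UO/U), fitted per session.
slope, intercept = np.polyfit(np.log(ref_uo_u), np.log(ref_pb_u), 1)

def calibrated_pb_u(meas_pb_u: float, meas_uo_u: float) -> float:
    """Correct an unknown's measured Pb/U using the reference calibration."""
    # Pb/U that the reference material would show at this spot's UO/U.
    predicted_ref = np.exp(intercept + slope * np.log(meas_uo_u))
    return meas_pb_u * ref_true_pb_u / predicted_ref

# Example unknown spot:
print(calibrated_pb_u(0.140, 5.4))
```

The calibrated 206Pb/238U ratio can then be converted to an age using the 238U decay constant.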
SHRIMP instruments around the world
References
External links
Founding SHRIMP Lab at Australian National University
Australian Scientific Instruments
Geochronological dating methods
Mass spectrometry | Sensitive high-resolution ion microprobe | Physics,Chemistry | 1,561 |
4,562,380 | https://en.wikipedia.org/wiki/Hydrogen%20telluride | Hydrogen telluride is the inorganic compound with the formula H2Te. A hydrogen chalcogenide and the simplest hydride of tellurium, it is a colorless gas. Although unstable in ambient air, the gas can exist long enough to be readily detected by the odour of rotting garlic at extremely low concentrations, or by the revolting odour of rotting leeks at somewhat higher concentrations. Most compounds with Te–H bonds (tellurols) are unstable with respect to loss of H2. H2Te is chemically and structurally similar to hydrogen selenide; both are acidic. The H–Te–H angle is about 90°. Volatile tellurium compounds often have unpleasant odours, reminiscent of decayed leeks or garlic.
Synthesis
Electrolytic methods have been developed.
H2Te can also be prepared by hydrolysis of the telluride derivatives of electropositive metals. The typical hydrolysis is that of aluminium telluride:
Al2Te3 + 6 H2O → 2 Al(OH)3 + 3 H2Te
Other salts of Te2− such as MgTe and sodium telluride can also be used. Na2Te can be made by the reaction of Na and Te in anhydrous ammonia. The intermediate in the hydrolysis, the hydrogen telluride anion (HTe−), can be isolated as salts as well. NaHTe can be made by reducing tellurium with sodium borohydride (NaBH4).
Hydrogen telluride cannot be efficiently prepared from its constituent elements, in contrast to H2Se.
Properties
H2Te is an endothermic compound, degrading to the elements at room temperature:
H2Te → H2 + Te
Light accelerates the decomposition. It is unstable in air, being oxidized to water and elemental tellurium:
2 H2Te + O2 → 2 H2O + 2 Te
It is almost as acidic as phosphoric acid (Ka = 8.1×10−3), having a Ka value of about 2.3×10−3. It reacts with many metals to form tellurides.
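For comparison on the more familiar pKa scale (a routine conversion, shown here for clarity):

```latex
\mathrm{p}K_a = -\log_{10} K_a:
\qquad \mathrm{H_2Te}: -\log_{10}(2.3\times10^{-3}) \approx 2.6;
\qquad \mathrm{H_3PO_4}: -\log_{10}(8.1\times10^{-3}) \approx 2.1
```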
See also
Dimethyl telluride
References
Hydrogen compounds
Triatomic molecules
Tellurides | Hydrogen telluride | Physics,Chemistry | 433 |
69,003,912 | https://en.wikipedia.org/wiki/ASKAP%20J173608.2%E2%80%93321635 | ASKAP J173608.2–321635 is an unidentified astronomical radio source that sends radio signals from the direction of the Galactic Centre. It is nicknamed "Andy's Object" after its discoverer, Ziteng (Andy) Wang, of the University of Sydney in Australia. The object was detected using the Australian Square Kilometre Array Pathfinder and MeerKAT radio telescopes. It is not visible to "the most powerful non-radio telescopes" and was detected six times between January and September 2020. It may represent a new class of object, because no counterpart has been detected at multiple wavelengths, which "rules out flaring stars, binary systems, NSs, GRBs, or supernovae as its source". The radio emissions exhibit a high level of polarization, suggesting scattering as a result of a black hole.
References
Astronomical radio sources
Scorpius
University of Sydney | ASKAP J173608.2–321635 | Astronomy | 187 |
53,537,827 | https://en.wikipedia.org/wiki/Theta1%20Orionis%20E |
θ1 Orionis E (Latinised as Theta1 Orionis E) is a double-lined spectroscopic binary located about 4″ north of θ1 Orionis A in the Trapezium Cluster. The two components are almost identical pre-main-sequence stars in a close circular orbit, and they show shallow eclipses that produce brightness variations of a few tenths of a magnitude.
Each component of the binary system has a mass slightly under three times that of the Sun. Although they have a subgiant spectral classification, they are still contracting towards the main sequence and are estimated to be only about 500,000 years old. It is estimated that they will reach the main sequence as smaller, hotter late-B stars.
The variability was first reported in 1954 and confirmed as an eclipsing binary in 2012. It has not been assigned a variable star designation but is listed in the New Catalogue of Suspected Variable Stars.
References
Orion (constellation)
G-type subgiants
5
J05351577-0523100
Orionis, Theta1E
Orionis, 41 E | Theta1 Orionis E | Astronomy | 223 |
75,974,965 | https://en.wikipedia.org/wiki/List%20of%20French%20departments%20by%20life%20expectancy |
INSEE (2023)
The official statistics of France, available on the INSEE website, do not include a total life expectancy figure for the population as a whole. Because life expectancy differs between men and women, a column with the arithmetic mean of the two indicators has been added to the tables to allow a more meaningful comparison of regions. By default, the tables are sorted by the arithmetic mean for 2023.
Statistics by region
Metropolitan France
Overseas regions
Data source: INSEE
Statistics by department
The table is compiled only for departments in metropolitan France. Data source: INSEE
Eurostat (2019—2022)
By default the table is sorted by 2022.
Data source: Eurostat
Global Data Lab (2019–2022)
Data source: Global Data Lab
Charts
See also
List of countries by life expectancy
List of European countries by life expectancy
Administrative divisions of France
Demographics of France
References
Health in France
Demographics of France
France, life expectancy
France
Departments by life expectancy
France | List of French departments by life expectancy | Biology | 194 |
11,128,619 | https://en.wikipedia.org/wiki/Stemphylium%20alfalfae | Stemphylium alfalfae is a plant pathogen infecting alfalfa.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporaceae
Fungus species | Stemphylium alfalfae | Biology | 44 |
2,781,715 | https://en.wikipedia.org/wiki/Project%20for%20a%20metropole | Project for a metropole is an architectural plan by Étienne-Louis Boullée, designed around 1781. Like many of his designs, it treats light as an important element: light is a metaphor for enlightenment, as darkness is for ignorance. The plan features columns spaced closer together than the canon of classical architecture would place them, as well as oversized pendentives.
References
1780s architecture
Neoclassical architecture in France
Proposed buildings and structures in France | Project for a metropole | Engineering | 89 |
59,283,665 | https://en.wikipedia.org/wiki/Calcium%205%27-ribonucleotides | Calcium 5'-ribonucleotides is a mixture used as a flavor-enhancing food additive. It is listed under E number E634. This food additive is banned in Australia and New Zealand.
References
E-number additives
Ribosides | Calcium 5'-ribonucleotides | Chemistry,Biology | 54 |
11,436,521 | https://en.wikipedia.org/wiki/Monster%20Pig | Monster Pig was the subject of a controversial 2007 story that initially ran in the news media as a report (and a series of accompanying photographs) of an 11-year-old boy shooting a massive feral pig. The pig was claimed to have been shot during a hunt on May 3, 2007, by an 11-year-old boy named Jamison Stone. The location of the shooting was the Lost Creek Plantation, a commercial hunting preserve outside Anniston, Alabama, US. According to the hunters (there were no independent witnesses), the pig weighed 1,051 pounds and measured 9 feet 4 inches in length.
The story quickly ran into veracity problems with news organizations backing off on their coverage when inconsistencies in the story were revealed, including NBC, who canceled their interview with the Stone family when they suspected the story was a hoax. It was pointed out right away that the photographs of the pig released to the media seemed to be purposely posed and doctored to exaggerate scale. It was later also revealed that the "giant feral hog" was actually a large domestic farm-raised pig named "Fred" that had been purchased by the hunting preserve's owner four days before the hunt in an apparent publicity stunt. A scheduled 2008 grand jury investigation of the event, based on charges of animal cruelty, was later canceled.
Claim
The story, as told by the Stones to the news media, was that on May 3, 2007, 11-year-old Jamison Stone was hunting a huge feral hog with his father Mike Stone along with Keith O'Neal and Charles Williams, owners of Southeastern Trophy Hunters, on a farm outside of Anniston, Alabama. Jamison told the media they were invited there by a "friend" who told them about a "big hog he had that was tearing up land". The Stones and the other hunters tracked and chased the hog through the woods for over three hours, and Jamison Stone fired 16 shots with a .50 caliber Smith & Wesson Model 500 revolver equipped with a red dot sight, shooting 350-grain Hornady cartridges and hitting the hog nine times before killing it with a head shot. They hauled the pig by truck to the Clay County Farmers Exchange in Lineville, where they weighed it on a scale that read 1,051 pounds. The hunters also claimed that the pig was 9 feet 4 inches in length from the tip of its snout to the base of its tail; however, this has not been corroborated by any source.
Controversies
It was soon revealed that the hunt took place in a low fence enclosure within the larger commercial hunting preserve called Lost Creek Plantation. A later claim said that Mr Stone had paid $1,500 to Eddy Borden, the owner of Lost Creek Plantation, so that his son could shoot a trophy wild hog in the commercial hunting preserve. Other facts were revealed expanding the controversy around this story.
Images
Several days after the story broke, suspicion mounted over the authenticity of the photographic evidence. Retired New York University physicist Dr. Richard Brandt used perspective geometry to demonstrate that either the pig was far longer than claimed or the boy in the photo was standing several meters behind the pig, using forced perspective to create the optical illusion that the animal was larger than its actual size. Others claimed the photographs were digitally altered.
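The geometry is that of simple perspective: an object's apparent (angular) size scales inversely with its distance from the camera, so placing the boy a few meters behind the pig inflates the pig's relative size. The distances below are hypothetical, chosen only to illustrate the effect:

```latex
\theta \approx \frac{h}{d}
% Pig at d = 2 m, boy at d = 5 m (hypothetical distances):
\frac{\theta_{\mathrm{pig}} / h_{\mathrm{pig}}}{\theta_{\mathrm{boy}} / h_{\mathrm{boy}}} = \frac{d_{\mathrm{boy}}}{d_{\mathrm{pig}}} = 2.5
```

In this arrangement the pig would appear two and a half times larger, relative to the boy, than their true proportions.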
It has been shown that most of the pictures distributed to the media were altered through digital enhancement and forced perspective to make the pig look much larger than it really was. There were also claims that other photographs (since removed) from Monsterpig.com (a website owned by the Stone family) showed a more normal size and scale for the pig.
Despite the evidence that the images were altered or produced by trick photography, a message was kept on the Stone family website denying that the images had been modified to make the pig look larger than it was.
The Associated Press (AP) continues to keep the monster pig image in their archives with no disclosure of the forced perspective trick; the AP's archive caption presents it as if it were a legitimate photograph, stating: "In this photo released by Melynne Stone, Jamison Stone, 11, poses with a wild pig he killed near Delta, Ala., May 3, 2007. Stone's father says the hog weighed a staggering 1,051 pounds and measured 9 feet 4 inches from the tip of its snout to the base of its tail. If claims of the animal's size are true, it would be larger than Hogzilla, the huge hog killed in Georgia in 2004."
Domestic versus feral
Shortly after the story hit the press, the truth about the origins of the pig was revealed. "Monster Pig" was a domestic hog named "Fred", partly of the Duroc breed, and until four days earlier had lived on a nearby farm. The farm's owners, Rhonda and Phil Blissitt, stated that the pig loved to play with their grandchildren and his favorite treat was canned sweet potatoes. Previous stories reported that the pig had escaped domestication; however, the Blissitts in fact sold the pet pig to the game preserve. According to The Anniston Star report, the Blissitts had raised him from a piglet as a pet, but were selling all of the pigs on their farm, and came forward as they were concerned that Fred was being passed off as a wild pig.
Weight verification
As reported by the Associated Press, the problem with the claimed weight was that, according to Jeff Kinder (the man who gave the keys to the scale to the plantation's owner), the scale at the Clay County Coop only weighs in 10-pound increments. Thus, the number 1,051 is not a possible reading from that scale, making the whole measurement, on its face, incorrect or in part an estimate. When asked about this, the father said he had misunderstood the reading on the scale and believed the true measurement had been 1,050 pounds.
Subsequent allegations
Later news reports brought forward allegations that the entire story was the result of a canned hunt scheme cooked up by Eddy Borden, the owner of Lost Creek Plantation, and Keith O'Neal of Southeastern Trophy Hunters, to build up business for the then four-months-old Lost Creek hunting plantation, trying to create their own news event along the lines of the 2004 "Hogzilla" event. Borden purchased "Fred" from the Blissitts for $250, released him into the enclosure, and passed him off as a wild hog to the unsuspecting Stones. It was also reported that they were told by a local TV station that it would only be a news worthy story if the boy shot the pig.
Stinkyjournalism.org also archived this notice from the Southeastern Trophy Hunters website:
LCP is offering a once in a lifetime opportunity to harvest a truly giant boar...Eddy Borden, owner and operator of LCP, has just trapped another boar... this monster will weigh at least a thousand ponds – that is a half ton of pork! The beast is now roaming the worlds of Lost Creek Plantation. We are offering this hunt on a no kill = a no pay basis. The total cost of this hunt is fifteen hundred dollars and includes everything but the processing of the meat. The boar is jet black and has huge tusks. Keith O' Neal and Chris Williams will be on hand to help guide and video this hunt. if you ever wanted to take an animal of this magnitude, now is your chance! this beast will not last long, so if you're interested call us ASAP. Yours in hunting/fishing, Keith O' Neal Southeastern Trophy Hunters, April 28, 2007.
On January 29, 2008, reports emerged that an Alabama grand jury was investigating Keith O'Neal, Charles Williams, and Lost Creek Plantation owner Eddy Borden over the killing of the pig, on grounds that since there was no "kill shot" delivered by Jamison Stone, it was animal cruelty to allow a pig to be chased and continually shot by an 11-year-old until it bled out when there were experienced marksmen present who could have dispatched it. Clay County District Attorney Fred Thompson later cancelled the grand jury without a public explanation, and the case was not reviewed within the one-year statute of limitations.
See also
List of individual pigs
Hogzilla
References
External links
Video: CNN Interview of the Stone family
2007 animal deaths
Anniston, Alabama
Hunting records
Individual pigs
Individual animals in the United States
Optical illusions | Monster Pig | Physics | 1,692 |
33,513,081 | https://en.wikipedia.org/wiki/Open-source%20car | An open-source car is a car with open design: designed as open-source hardware, using open-source principles.
Automobiles
Open-source cars include:
Completed and available to build, with link to CAD files and build instructions:
LifeTrac tractor from Open Source Ecology has build instructions for most revisions
Concept stage:
Rally Fighter, an all-terrain vehicle by Local Motors uses a design released under a CC BY-NC-SA license. The design was made piece by piece by an open community in a forum. Several units have been manufactured and sold.
SGT01 from Wikispeed
OScar: started in 1999, still in concept phase as of 2013.
OSVehicle Tabby: Tabby is the first OSVehicle: an industrializable, production ready, versatile, universal chassis.
Riversimple Urban Car: The CAD models for the Riversimple Hyrban technology demonstrator have been released under a CC BY-NC-SA license
Common, Dutch electric car (2009)
eCorolla, an electric vehicle conversion
FOSSHW Category L7e Hybrid EV
Luka EV, an electric car production platform whose first car is the Luka EV. Only Mrk I & II are open source; the source was closed in July 2016 to allow commercial production of Mrk III
Google Community Vehicle, a multi-purpose mode of transport. It can be used as a farm vehicle that attaches to farming equipment or as a means to transport the produce. This car was created by an Indian team for the 2016 Michelin Challenge Design, "Mobility for All International Design Competition"
Self-driving car prototypes have collected petabytes of data. Some companies, including Daimler, Baidu, Aptiv, Lyft, Waymo, Argo AI, Ford and Audi have publicly released datasets under more-or-less open licenses.
Other open-source vehicles
Many open-source vehicles come in the form of velomobiles, like the PUUNK, the Hypertrike, the evovelo mö or the Atomic Duck velomobile.
Other open-source vehicles include the Xtracycle cargo bicycles.
See also
Modular design, subdivision of a system into smaller parts which can be independently changed
Kit car, an automobile sold as a set of parts
Right to repair, legal right to freely modify and repair products
Velomobile, enclosed human-powered vehicle "bicycle car"
References
Open hardware vehicles
Open design | Open-source car | Engineering | 497 |
32,707,853 | https://en.wikipedia.org/wiki/Textual%20entailment | In natural language processing, textual entailment (TE), also known as natural language inference (NLI), is a directional relation between text fragments. The relation holds whenever the truth of one text fragment follows from another text.
Definition
In the TE framework, the entailing and entailed texts are termed text (t) and hypothesis (h), respectively. Textual entailment is not the same as pure logical entailment – it has a more relaxed definition: "t entails h" (t ⇒ h) if, typically, a human reading t would infer that h is most likely true. (Alternatively: t ⇒ h if and only if, typically, a human reading t would be justified in inferring the proposition expressed by h from the proposition expressed by t.) The relation is directional because even if "t entails h", the reverse "h entails t" is much less certain.
Determining whether this relationship holds is an informal task, one which sometimes overlaps with the formal tasks of formal semantics (satisfying a strict condition will usually imply satisfaction of a less strict condition); additionally, textual entailment partially subsumes word entailment.
Examples
Textual entailment can be illustrated with examples of three different relations:
An example of a positive TE (text entails hypothesis) is:
text: If you help the needy, God will reward you.
hypothesis: Giving money to a poor man has good consequences.
An example of a negative TE (text contradicts hypothesis) is:
text: If you help the needy, God will reward you.
hypothesis: Giving money to a poor man has no consequences.
An example of a non-TE (text does not entail nor contradict) is:
text: If you help the needy, God will reward you.
hypothesis: Giving money to a poor man will make you a better person.
Ambiguity of natural language
A characteristic of natural language is that there are many different ways to state what one wants to say: several meanings can be contained in a single text and the same meaning can be expressed by different texts. This variability of semantic expression can be seen as the dual problem of language ambiguity. Together, they result in a many-to-many mapping between language expressions and meanings. The task of paraphrasing involves recognizing when two texts have the same meaning and creating a similar or shorter text that conveys almost the same information. Textual entailment is similar but weakens the relationship to be unidirectional. Mathematical solutions to establish textual entailment can be based on the directional property of this relation, by making a comparison between some directional similarities of the texts involved.
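A minimal sketch of one such directional comparison is a naive lexical-coverage baseline (an illustrative toy, not a description of any particular published system): the fraction of hypothesis tokens that also occur in the text is asymmetric in exactly the way entailment is.

```python
def coverage(text: str, hypothesis: str) -> float:
    """Fraction of hypothesis tokens that also appear in the text.

    Directional by construction: coverage(t, h) != coverage(h, t)
    in general, mirroring the asymmetry of entailment.
    """
    text_tokens = set(text.lower().split())
    hyp_tokens = hypothesis.lower().split()
    if not hyp_tokens:
        return 0.0
    return sum(tok in text_tokens for tok in hyp_tokens) / len(hyp_tokens)

t = "If you help the needy, God will reward you."
h = "God will reward you."
print(coverage(t, h))  # 1.0  -- every token of h is covered by t
print(coverage(h, t))  # ~0.44 -- h covers far less of t
```

Real systems replace raw token overlap with richer representations, but the directional character of the comparison is the same.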
Approaches
Textual entailment measures natural language understanding, as it asks for a semantic interpretation of the text, and due to its generality it remains an active area of research. Many approaches and refinements of approaches have been considered, such as word embedding, logical models, graphical models, rule systems, contextual focusing, and machine learning. Practical or large-scale solutions avoid these complex methods and instead use only surface syntax or lexical relationships, but are correspondingly less accurate. As of 2016, state-of-the-art systems were still far from human performance; a study found humans to agree with one another on the dataset 95.25% of the time, while algorithms from 2016 had not yet achieved 90%.
Applications
Many natural language processing applications, like question answering, information extraction, summarization, multi-document summarization, and evaluation of machine translation systems, need to recognize that a particular target meaning can be inferred from different text variants. Typically entailment is used as part of a larger system, for example in a prediction system to filter out trivial or obvious predictions. Textual entailment also has applications in adversarial stylometry, which has the objective of removing textual style without changing the overall meaning of communication.
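As a concrete example of such use, pretrained NLI models are commonly applied through zero-shot classification, where each candidate label is turned into a hypothesis and scored against the input text. A minimal sketch using the Hugging Face transformers library (assumed installed; facebook/bart-large-mnli is a publicly available NLI model):

```python
from transformers import pipeline

# The zero-shot pipeline wraps an NLI model: for each label it forms a
# hypothesis such as "This example is about charity." and scores whether
# the input text entails it.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

text = "If you help the needy, God will reward you."
print(classifier(text, candidate_labels=["charity", "sports", "finance"]))
```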
Datasets
Some of available English NLI datasets include:
SNLI
MultiNLI
SciTail
SICK
MedNLI
QA-NLI
In addition, there are several non-English NLI datasets, as follows:
XNLI
FarsTail
OCNLI
SICK-NL
IndoNLI
See also
Entailment (linguistics)
Inference engine
Semantic reasoner
Fuzzy logic
References
Bibliography
External links
Textual Entailment Resource Pool
Tasks of natural language processing
Logical consequence
Text mining
Natural language processing | Textual entailment | Technology | 899 |
62,404,586 | https://en.wikipedia.org/wiki/List%20of%20construction%20methods | The list of construction methods covers the processes and techniques used in the construction process. Construction methods are essential knowledge for civil engineers, and applying them appropriately helps to achieve the desired results. Construction refers to the creation of physical structures such as buildings, bridges or railways. One of the four basic types of structure is residential, and building methods are easiest to study in these structures.
Background
Construction involves the creation of physical structures such as buildings, bridges or railways.
Bricks are small rectangular blocks that can be used to form parts of buildings, typically walls. Before 7,000 BC, bricks were formed from hand-molded mud and dried by the sun. During the Industrial Revolution, mass-produced bricks became a common alternative to stone. Stone was typically more expensive, less predictable and more difficult to handle. Bricks remain in common use. They are small and easy to handle, strong in compression, durable and low maintenance. They can be formed into complex shapes, providing ample opportunity for the construction of aesthetic designs.
The four basic types of structure are residential, institutional and commercial, industrial, and infrastructure/heavy.
Residential
Residential buildings go through five main stages, including foundations, formwork, scaffolding, concrete work and reinforcement.
Foundation
Foundations provide support for structures, transferring their load to layers of soil or rock that have sufficient bearing capacity and suitable settlement characteristics to support them. There are four types of foundation depending on the bearing capacity. Civil engineers will often determine what type of foundation is suitable for the respective bearing capacity.
The foundation construction method depends on considerations such as:
The nature of the load requiring support
Ground conditions
The presence of water
Space availability
Accessibility
Sensitivity to noise and vibration
Shallow foundation
Shallow foundations are used where the loads imposed by a structure are low relative to the bearing capacity of the surface soils. Deep foundations are needed where the bearing capacity of the surface soils is insufficient; their loads need to be transferred to deeper layers with higher bearing capacity.
Raft or mat foundation
Raft foundations are slabs that cover a wide area, often the entire building footprint. They are suitable where ground conditions are too poor to create individual strip or pad foundations for a large number of individual loads. Raft foundations may incorporate beams to add support for specific loads.
Pad foundation
Pad foundations are rectangular or circular pads used to support localised loads such as columns.
Strip foundation
Strip foundations provide a continuous line of support to a linear structure such as a wall. Trench fill foundations are a variation of strip foundations. The trench excavation is almost completely filled with concrete. Rubble trench foundations are a further variation of trench fill foundations and are a traditional construction method that uses loose stone or rubble to minimise the use of concrete and improve drainage.
Formwork
Formwork is used to create a mold into which concrete is poured and allowed to solidify. Traditional formwork is fabricated from wood, but it can also employ steel, glass fibre, reinforced plastics and other materials.
Formwork for beams takes the form of a box that is supported and propped in the correct position and level. The removal time for the formwork will vary with air temperature, humidity and consequent curing rate. Typical striking times are as follows (using air temperature of 7-16 °C):
Beam sides: 9–12 hours.
Beam soffits: 8–14 days.
Beam props: 15–21 days.
Column formwork consists of a vertical mold of the desired shape and size, matching the column to be poured. To keep the material thickness to a minimum, horizontal steel or timber clamps (or yokes) are used for batch filling, and at varying centers for filling that is completed in one pour.
The head of the column can provide support for the beam formwork. Even though this gives good top lateral restraint, it can make the formwork complex. The column can be cast to the underside of the beams. A collar of formwork can be held around the cast column to complete the casting and support the incoming beam.
Falsework
Falsework consists of temporary structures used to support a permanent structure while it is being built. Falsework requires accurate structural calculation.
Bar bending
Rebar is a steel bar or mesh of steel wires used in reinforced concrete and masonry structures to strengthen the concrete and hold it in tension. The surface of rebar is often patterned to improve the quality of the bond with the concrete. Rebar is necessary because concrete, while strong in compression, is weak in tension; casting rebar into concrete allows the member to support tensile loads and increases its overall strength.
Concrete
Concrete is typically used in commercial buildings and civil engineering projects for its strength and durability. Concrete is a mix of cement and water plus an aggregate such as sand or stone. Its compressive strength means it can support heavy weights.
Insulating concrete forms (ICFs) can be used for home construction. They are made by pouring concrete between rigid panels, often made of polystyrene foam. Rebar can provide additional strength internally, and the exterior panels can remain in place once the concrete sets. It is essential to check the level of the foundation before pouring.
Brick
Bricks are laid with a mortar joint bonding them. The profile of the mortar joint can be varied depending on exposure or to create a specific visual effect. The most common profiles are flush (rag joint), bucket handle, weather struck, weather struck and cut, and recessed.
The bonding pattern describes the alignment of the bricks. Many standard bond patterns have been defined, including:
Stretcher bond: each stretcher (brick laid lengthwise) is offset by half a brick relative to the courses above and below.
English bond: stretchers and headers are laid in alternating courses, aligned to one another.
American common bond: similar to the English bond, but with one course of headers for every six stretcher courses.
English cross bond: alternating courses of stretchers and headers, with the stretcher courses offset from one another by half a brick.
Flemish bond: alternating stretchers and headers in each course.
Header bond: courses of headers offset by half a brick.
Stack bond: bricks laid directly on top of one another with joints aligned; this is a weak bond and is likely to require reinforcement.
Garden wall bond: three courses of stretchers, then one course of headers.
Sussex bond: three stretchers and one header in each course.
References
Construction | List of construction methods | Engineering | 1,262 |
1,512,810 | https://en.wikipedia.org/wiki/Landmark%20%28hotel%20and%20casino%29 | The Landmark was a hotel and casino located in Winchester, Nevada, east of the Las Vegas Strip and across from the Las Vegas Convention Center. Frank Caroll, the project's original owner, purchased the property in 1961. Fremont Construction began work on the tower that September, while Caroll opened the adjacent Landmark Plaza shopping center and Landmark Apartments by the end of the year. The tower's completion was expected for early 1963, but because of a lack of financing, construction was stopped in 1962, with the resort approximately 80 percent complete. The topped-off tower was the tallest building in Nevada until the completion of the International Hotel across the street in 1969.
In 1966, the Central Teamsters Pension Fund provided a $5.5 million construction loan to finish the project, with ownership transferred to a group of investors that included Caroll and his wife. The Landmark's completion and opening was delayed several more times. In April 1968, Caroll withdrew his request for a gaming license after he was charged with assault and battery against the project's interior designer. The Landmark was put up for sale that month.
Billionaire Howard Hughes, through Hughes Tool Company, purchased the Landmark in 1969 at a cost of $17.3 million. Hughes spent approximately $3 million to add his own touches to the resort before opening it on July 1, 1969, with 400 slot machines and 503 hotel rooms. In addition to a ground-floor casino, the resort also had a second, smaller casino on the 29th floor; it was the first high-rise casino in Nevada. Aside from the second casino, the five-story cupola dome at the top of the tower also featured restaurants, lounges, and a night club.
During the 1970s, the Landmark became known for its performances by country music artists. The resort also played host to celebrities such as Danny Thomas and Frank Sinatra. However, the resort suffered financial problems after its opening and underwent several ownership changes, none of which resulted in success. The Landmark entered bankruptcy in 1985, and ultimately closed on August 8, 1990, unable to compete with new megaresorts. The Las Vegas Convention and Visitors Authority purchased the property in September 1993, and demolished the resort in November 1995, to add a 2,200-space parking lot for its convention center. In 2019, work was underway on a convention center expansion which includes the former site of the Landmark. The Las Vegas Convention Center's West Hall expansion opened on the site in June 2021.
History
Frank Caroll, also known as Frank Caracciolo, was a building developer from Kansas City. In 1960, he and his wife Susan decided to construct a hotel-casino and shopping center in Las Vegas. Frank Caroll received a gaming license that year. In 1961, the Carolls purchased a parcel of land at the northwest corner of Convention Center Drive and Paradise Road in Winchester, Nevada, approximately half a mile east of the Las Vegas Strip and across from the Las Vegas Convention Center. Aside from a gas station, the property was vacant.
Construction (1961–1968)
Commencement
The Landmark was initially planned as a 14-story hotel with a casino, although the floor count increased as the project progressed. Fremont Construction, owned by Louis P. Scherer of Redlands, California, began construction of the tower at the end of September 1961, under a $1.5 million contract. Frank Caroll's company, Caroll Construction Company, also worked on the tower. At the start of construction, the tower was to include 20 stories, while completion was planned for early 1963. The tower was built on a five-foot-thick base of concrete and steel, measuring 80 feet in diameter and resting on a base of caliche that descended 30 feet into the ground. Consolidated Construction Company was the concrete subcontractor for the tower.
By December 1961, Caroll had opened the two-story Landmark Plaza shopping center, built out in an L-shape at the base of the tower. The Landmark Apartments, with 120 units, were also built near the tower and operational by the end of 1961. In 1962, a bar known as Shannon's Saloon and a western music radio station, KVEG, began operating in the Landmark Plaza. In addition to studios, KVEG also had its offices in the shopping center.
By February 1962, the tower was planned to include 31 floors, making it the tallest building in Nevada. While plans for a separate hotel structure were being made, work began on the tower by pouring concrete on a continuous 24-hour schedule. The concrete pour was done with a slip forming method. With 21 floors expected to be added to the tower over a 12-day period, it was expected to reach the 24th floor by the end of the month. In March 1962, at the request of Caroll, Clark County Commissioners removed a restriction which specified that gaming licenses could only be issued for ground-level casinos, as Caroll wanted to open a casino on the second floor of the Landmark's shopping center. That month, Caroll received a $450,000 loan from Appliance Buyers Credit Corporation (ABCC), a subsidiary of RCA-Whirlpool.
Construction had reached the 26th floor by the end of April 1962. Upon completion of the floor, work was to begin on the tower's bubble dome. By June 1962, ABCC loaned an additional $300,000 to Caroll, who reached his $3 million loan limit with the company. Caroll ultimately owed ABCC a total of $3.5 million. In August 1962, the Landmark tower was designated as a civilian fallout shelter, with the capacity to hold 3,500 people after its completion. That month, work was underway on the steel framework base for the tower's glass bubble dome.
By September 1962, the Landmark tower was nearing completion and had become the tallest building in Las Vegas and the state, being visible from 20 miles away. By that time, many stores in the Landmark Plaza had closed due to falling debris that included welding sparks, steel, tools, rivets, and cement. A construction delay occurred in September 1962, when shipments of steel for the tower's dome were deemed inadequate and crews had to wait for new shipments. Construction was progressing rapidly on the tower's dome during October 1962, with steel and concrete still being added to the tower. Completion was still scheduled for early 1963. The Aluminium Division of Apex Steel Corporation Limited was contracted to install a $40,000 aluminum undershine on the tower's dome, to provide a maintenance-free and clean-looking appearance for viewers on the ground. Crews used scaffolding and hoists to reach the area where aluminum sheets needed to be placed. Each day, it took crews 18 minutes to be lifted up. Due to delays arising from strong winds, it took the crews two months to attach the aluminum.
Delay
In December 1962, construction of the tower was stopped when ABCC denied further funding and alleged that the Carolls had defaulted on payments. The 31-story tower had been topped off and the resort was approximately 80 percent complete, with $5 million already spent on the project. The tower's planned opening was delayed until April 1963, but it did not occur as scheduled. In May 1963, ABCC was planning a sale of the apartments, shopping center, and unfinished tower for the following month. The Carolls sought to halt the sale, and filed a $2.1 million damage suit against ABCC, alleging that the company stopped construction and refused to pay the contractors. An injunction against foreclosure was granted in June 1963, but was dissolved the following year. In October 1964, a sale of the tower was approved for later that month, after being requested by ABCC, which was still owed $3.5 million by Landmark Plaza Corporation. Up to that time, the tower had been appraised several times and was valued between $8 million and $9 million. Ownership subsequently changed, as did the resort's design plans.
In August 1965, Maury Friedman was working on a deal with RCA Victor to convert the Landmark's tower and apartment buildings into office space. By the following month, Inter-Nation Tower, Inc. – a Beverly Hills-based corporation – was negotiating with RCA-Whirlpool to develop the tower and adjacent land as an international market place, an idea that was supported by local retailers and resorts. In December 1965, architect Gerald Moffitt said the Landmark's design had gone through many revisions and that his design plans had been impounded by a court; a spokesman said there were no plans to resume construction in the near future. It was estimated that an additional six months were needed to complete the tower.
The unfinished tower became an eyesore for visitors to the nearby convention center. During its vacancy, people noted that the building appeared to be tilted, similar to the Leaning Tower of Pisa; experts stated that this was an illusion caused when the building was viewed with nearby power poles, which were tilted rather than the building itself. Local residents nicknamed it the "Leaning Tower of Plaza", the "Leaning Tower of Las Vegas", and "Frank's Folly." Moffitt said, "It doesn't tilt. There is only three-eighths of an inch difference in diameter from top to bottom." In May 1966, early negotiations were being held with a prospective buyer of the Landmark.
Resumption
In July 1966, new design plans were filed with the county for the completion of the tower. Scherer planned to acquire additional property for use as a parking lot to accommodate the redesigned project. In August 1966, the Central Teamsters Pension Fund provided a $5.5 million construction loan for the project. By that time, ownership had been transferred to Plaza Tower, Inc., made up of several investors, including the Carolls and Scherer, whose construction company was awarded a $2.5 million contract to finish the Landmark tower. Because of legal problems involved with the project, the acquisition of title required over 5,000 hours of legal work and the settlement of more than 40 lawsuits. Construction was underway again in early September 1966, with completion expected in early 1967. The shops and taverns in the Landmark Plaza were closed, and the shopping center and gas station were demolished, so the land around the tower could be used to construct a casino, a hotel lobby, offices, and new shops. The adjacent Landmark apartments were to be converted into hotel rooms for the new resort.
In November 1966, Caroll planned to install two slot machines inside the Landmark Coffee Shop, which sold food to construction workers from inside a temporary structure that was to become the site of a permanent building eventually. Caroll's plans were denied as his gaming license did not apply to the coffee shop. At the time, Caroll was also accused by sheriff Ralph Lamb of being uncooperative with police officers who were searching for a hoodlum at the Landmark Apartments.
The Landmark had been scheduled to open on September 15, 1967, but its opening was further delayed because of construction problems. A new opening date of November 15 was announced, with an official grand opening to be held on December 31, 1967. In early November 1967, Scherer was awarded a $2.2 million contract for the final construction phase of the Landmark. Construction crews worked 24 hours a day for each day of the week during the final phase to have the 650-seat dinner showroom theater ready for the planned New Year's Eve opening. Also included in the final phase were clothing and jewelry shops, as well as a recreation area with swimming pools and a 20-foot waterfall.
By the time of its planned New Year's Eve opening, the tower was nearly complete, with an opening now scheduled for mid-January 1968. Two groups – Plaza Tower Inc., the property's landlord group; and Plaza Tower Operating Corporation, the casino operating group – submitted a request for a gaming license to the Nevada Gaming Control Board, which investigates licensees and top casino employees prior to issuing gaming licenses. The Landmark's opening did not occur as scheduled.
During February and March 1968, the Landmark was declared complete, although it was stated the following year that some construction work remained unfinished. At the time of its stated completion in 1968, a total of 200,000 hours had been spent working on the project, which used 100,000 yards of concrete and 100 tons of steel. The tower occupied of the property, and remained the tallest building in the state.
Further developments (1968–1969)
Gaming license
In February 1968, an updated list of top casino employees was submitted to the gaming control board, which had up to 90 days to make a decision regarding the issuance of a gaming license. An opening date of mid-April 1968 was considered possible. In March 1968, the Nevada Gaming Control Board recommended against the issuance of a gaming license due to "inadequate financial capabilities and resources of the operating corporation and of its principal investor", referring to Caroll. However, the Nevada Gaming Commission had the Gaming Control Board reevaluate the license application.
On April 5, 1968, the Las Vegas media was given a tour of the Landmark. During the event, Caroll beat the Landmark's interior designer, Leonard Edward England, for allegedly flirting with Caroll's wife. Caroll was arrested on April 17, 1968, on charges of assault and battery against England. On April 22, 1968, Caroll withdrew his request for a gaming license, a decision that was approved two days later. The company then planned to receive new financing and to eventually submit a new gaming application. Approximately 600 people were expected to be employed at the Landmark upon its opening. The Landmark was put up for sale in April 1968, and the charges against Caroll were dropped two months later on the condition that he not renew his gaming license application.
Financial problems
In May 1968, the Teamsters Pension Fund filed a notice of breach on the trust deed, alleging that Caroll, Plaza Tower Inc. and Plaza Tower Operating had been defaulting on loan payments since October 1967. In late August 1968, the Las Vegas-based Supreme Mattress Company filed a lawsuit stating that it had only received $4,250 in payments for $25,505 worth of bedding material that was sold to the Landmark in December 1967.
On August 29, 1968, a joint petition was filed to declare the Landmark bankrupt. The petition was filed by Vegas Valley Electric, Inc.; a plumbing contractor; and Landmark architects George Tate and Thomas Dobrusky. By that time, the Teamsters Union Pension Fund had agreed to delay its foreclosure until the property was sold. Simultaneously, Sylvania Electric Company had intended to foreclose on the property because of an unpaid $3.7 million bill relating to electronic equipment installed in the Landmark. The joint petition prevented Sylvania from taking over ownership of the property.
Plane crash
On the night of August 2, 1968, Everett Wayne Shaw, a 39-year-old mechanic depressed by the break-up of his month-long marriage, stole a Cessna 180 plane as part of an apparent suicide attempt. Shaw flew the plane toward the Landmark tower and pulled up just before hitting it. The plane brushed the top of the tower before crashing into the Las Vegas Convention Center across the street, approximately away. Shaw was killed in the crash, which did not harm anyone else. Plane debris was found on the Landmark's roof and at its base, but the crash was not believed to have caused any damage to the building.
Sale negotiations and Howard Hughes
In July 1968, there were five firms interested in purchasing the Landmark, which was expected to sell for $16 million to $17 million. One of the firms, Olla Corporation, withdrew consideration of a purchase later that month, while an announcement of the resort's sale was expected within several days. Multiple companies made purchase offers that were ultimately rejected, including Rosco Industries Inc., based in Los Angeles. On October 12, 1968, Caroll denied a report that the Landmark would be leased to Royal Inns of America, Inc. and operated without a casino. At the time, negotiations were underway with three corporations interested in purchasing the resort.
On October 23, 1968, billionaire Howard Hughes reached an agreement to purchase the Landmark through Hughes Tool Company for $17.3 million, after denying reports earlier in the year that he was interested in purchasing the project. As part of the sale agreement, Hughes' Hotel Properties, Inc. would accept responsibility for approximately $8.9 million owed to the Teamster Union, as well as approximately $5.9 million in other debts and a balance of $2.4 million to Plaza Tower, Inc. At the time of the agreement, Hughes also owned five other hotel-casinos in Las Vegas. The United States Department of Justice launched an antitrust investigation into Hughes' proposed purchase, after previously investigating his attempt to purchase the Stardust Resort and Casino. As part of the investigation, the Department of Justice tried to determine whether there were other prospective buyers for the Landmark. By December 1968, negotiations were underway with several interested firms, including a $20 million offer from Tanger Industries, a holding company based in El Monte, California.
Hughes purchase and opening preparations
On January 17, 1969, the Department of Justice approved Hughes' plan to purchase the Landmark as his sixth Las Vegas resort. Later that month, a $1.5 million lawsuit was filed against Hughes Tool Company by Pennsylvania resident James U. Meiler and New York brokerage firm John R. Roake and Son, Inc. Meiler and the brokerage firm stated that they were entitled to a $500,000 brokerage fee for previously arranging a sale of the Landmark to Republic Investors Holding Company, before Hughes Tool Company agreed to purchase it. The lawsuit alleged that Hughes Tool Company "purposely and intentionally caused a restraining of interstate commerce".
At the end of January 1969, Hughes spokesmen stated that some construction on the resort was never finished; that some maintenance systems had not yet been installed; and that some repairs were needed. Hughes also planned to have some of the hotel rooms refurbished. Because of the additional work, the resort was not expected to open until at least July 1, 1969. Approximately 1,000 to 1,100 people were expected to be employed at the Landmark. The Landmark was the only casino that Hughes had taken over before it was opened. As a result, Hughes was heavily involved in details regarding the project. Hughes spent approximately $3 million to give the interior a lavish design and to add other touches to the resort, while the exterior of the Landmark buildings was left unchanged.
In March 1969, Hughes applied for approval to operate the Landmark's gambling operations, with a tentative opening date of July 1, 1969. Hughes planned to operate the casino through his Nevada company, Hughes Properties Inc., which was overseen by Hughes executive Edward H. Nigro. Hughes planned for the resort to include 26 table games and 401 slot machines. Hughes' purchase of the Landmark was not complete at that time, and his representatives stated that the sale would not be completed unless gambling and liquor licenses were issued by the state. In April 1969, Hughes received approval from the Gaming Control Board and from the state.
Hughes planned to personally oversee planning for the Landmark's grand opening; Robert Maheu, who had worked for Hughes since the 1950s, said "I knew from that point on that I was in trouble. He was completely incapable of making decisions." Hughes and Maheu never met each other in person due to Hughes' reclusive lifestyle. Instead, they communicated by telephone and through written messages. For months, they had intense arguments regarding the Landmark's opening date. Maheu believed the Landmark should open on July 1, 1969, but Hughes did not want to commit to an exact date for various reasons. Across the street from the Landmark, Kirk Kerkorian was planning to open his International Hotel on July 2, 1969. Hughes had wanted the Landmark's grand opening event to be better than Kerkorian's, but was concerned that the opening night would not go as planned. Hughes also did not want the opening date to be publicly announced too soon in the event that it should be delayed; Hughes wrote to Maheu: "With my reputation for unreliability in the keeping of engagements, I dont [sic] want this event announced until the date is absolutely firmly established."
Additionally, Hughes wrote to Maheu: "I would hate to see the Landmark open on the 1st of July and then watch the International open a few days later and make the Landmark opening look like small potatoes by comparison." Maheu became concerned, as it was difficult to plan the grand opening without knowing the date. As the tentative opening date approached, Hughes became concerned about other events scheduled for July 1969 – such as the Apollo 11 Moon landing – which might distract from the publicity of the Landmark's opening. By mid-June 1969, Hughes had still not given a definite opening date, which was still tentatively scheduled for July 1, although Hughes had wanted the Landmark to open sometime after the International Hotel. Weeks before the tentative opening, Hughes obsessively made repeated changes to the guest list for the resort's opening night. Regarding who should be invited, Hughes had complex specifications for Maheu to follow. Maheu ultimately had to decide the guest list himself.
On June 16, 1969, Sun Realty filed a claim against Plaza Tower, Inc., thus delaying Hughes' purchase of the Landmark and threatening its planned opening. Sun Realty alleged that it was owed a $500,000 finder's fee for locating Hughes as a buyer. The case was dismissed on June 25, 1969. On June 30, 1969, Sun Realty appealed the decision but was denied that day, as it was unable to post a bond covering the $5.8 million in claims filed by approximately 120 other creditors after Plaza Tower, Inc. entered bankruptcy. Hughes' $17.3 million acquisition of the Landmark, through Hughes Tool Company, was completed on July 1, 1969, a day after Hughes issued checks to three different entities to complete the purchase: $2.5 million to Plaza Tower, Inc.; $5.8 million to fully pay unsecured creditors; and $9 million to pay off the Teamsters Union.
Opening and operation (1969–1990)
The Landmark opened on the night of July 1, 1969, a day before the International Hotel. The resort was first unveiled to 480 VIP guests prior to the public opening, which was scheduled for after 9:00 p.m. Apollo 10 astronauts Thomas P. Stafford and Eugene Cernan attended the grand opening, and were the first people to enter the new resort. Other guests included Cary Grant, Dean Martin, Jimmy Webb, Phil Harris, Tony Bennett, Sammy Cahn, Steve and Eydie, and Wilt Chamberlain. Nevada governor Paul Laxalt, as well as senators Alan Bible and Howard Cannon, were also at the opening. Three members of the Los Angeles Rams were also in attendance: Jack Snow, Lamar Lundy, and Roger Brown.
Local, national and international media were also present for the grand opening, which was described by the Las Vegas Sun as resembling a Hollywood premiere. A closed-circuit television camera filmed the festivities in the Landmark on opening night, with the footage being shown live to guests at Hughes' other hotels, the Sands and the Frontier. Hughes – who lived in a secluded penthouse at his nearby Desert Inn hotel-casino – did not attend the grand opening. For opening night, comedian Danny Thomas was the first to perform in the Landmark's theater-restaurant showroom. Hughes had earlier suggested a Rat Pack reunion or a Bob Hope-Bing Crosby reunion as the opening act, both of which were considered unlikely to happen.
Television advertisements for the resort stated: "In France, it's the Eiffel Tower. In India, it's the Taj Mahal. In Las Vegas, it's the Landmark." Dick Parker, executive vice president for the Landmark, had stated during the previous year that the International and the nearby Las Vegas Convention Center would not harm the Landmark's business. The Landmark reportedly lost $5 million in its first week of operations, and despite its close proximity to the convention center, the resort failed to make a profit during the subsequent years of its operation. In October 1969, Sun Realty filed a damages lawsuit against Hughes Tool Company and Plaza Tower, Inc., alleging that the two companies conspired to avoid paying the realty company its $500,000 finder's fee. Aside from the finder's fee, Sun Realty also sought an additional $5 million in punitive damages. In February 1971, the Nevada Supreme Court rejected the lawsuit, which had sought $3 million by that time. In December 1971, Hughes paid a little over $1 million to purchase of adjacent land located west of the Landmark. Hughes had previously leased the property, which he had been using as a parking lot for the resort.
In January 1973, ownership of the Landmark was transferred to Hughes' Summa Corporation, formerly Hughes Tool Company. That year, the Landmark was valued at $25 million in a property appraisal. By 1974, William Bennett and William Pennington made an offer to buy the Landmark, but Hughes raised the price several times, from $15 million to $20 million; they bought the Circus Circus resort instead. In January 1976, the Landmark began offering foreign-language gaming video tapes to its German, Japanese, and Spanish hotel guests, who frequently limited themselves to playing slot machines rather than table games because of language barriers. Summa general manager E. H. Milligan said, "As far as we know, we are the first hotel in Las Vegas to present this service in this manner." The hotel and casino briefly closed in March 1976, as part of a hotel worker strike consisting of nearly 25,000 employees, affecting 15 Las Vegas resorts. The strike lasted two weeks before ending in late March. Hughes died of kidney failure the following month.
By May 1977, Summa was financially struggling; that month, the brokerage firm of Merrill, Lynch, Pierce, Fenner & Smith recommended that Summa sell its various holdings, including the Landmark. According to the brokerage firm, the Landmark "has proven highly inefficient for hotel/casino operations and, in the opinion of Summa Corporation's management, does not warrant further investment."
Gas leak and fire
On July 15, 1977, shortly after 4:00 a.m., a water pipe burst in the tower's subbasement, two floors below ground level. Two feet of water flooded the basement room and shorted out the main power panel, cutting off electricity to the resort shortly before 5:00 a.m. An auxiliary power generator provided lighting for the resort. However, telephones, air conditioning, and four of the tower's five elevators were left non-functional because of the main power failure. Carbon monoxide, freon and methane, all originating from the auxiliary generator, infiltrated the tower through ventilation ducts, forcing an evacuation of the building. Between 9:00 a.m. and 11:00 a.m., crews from the Southwest Gas Corporation inspected the building with firemen and found no further traces of gas, allowing guests and employees to re-enter the building.
A second evacuation was ordered at 2:30 p.m. after another power failure, which rendered the elevators inoperable once again. During the outage, 21 table games remained open with the use of emergency lights, while a bar gave away free drinks. Power was restored at 6:45 p.m., although telephones remained inoperable. Guests were given the option to stay at one of Summa's other hotel properties. Despite the incident, hotel executives stated that the resort maintained 95-percent occupancy. An investigation into the cause of the gas leaks could not begin that day due to the presence of fumes in the basement.
During the incident, a news reporter and a cameraman for the local KLAS-TV news channel – also owned by Summa – were beaten and forced out of the hotel lobby by Landmark guards who were armed with clubs and flashlights. Damaged in the altercation was the recording unit for a $37,000 camera owned by KLAS. Other local news crews were allowed to stay at the property to cover the incident. Orders to remove KLAS were given to the guards by hotel management, which had been irritated by recent KLAS news stories that related to Summa's properties, including a story stating that negotiations were underway to sell the Landmark to an Arabian investor.
A total of 138 people were hospitalized after inhaling the poisonous gases; they were treated at four local hospitals. Among the hospitalized were nearly 100 hotel guests, and several firemen and ambulance drivers; most of the patients were released from the hospitals within three days of the incident. A 55-year-old man was the incident's sole fatality. An investigation into the cause of the gas leaks concluded on July 19, 1977, and found that a defective exhaust line on one of the emergency generators was responsible. The line had been installed during the hotel's construction. John Pisciotta, director of the Clark County Building Department, did not believe that he or anyone else would be able to determine how the line became damaged. Summa brought in the company that installed the system to have it repaired.
On October 23, 1977, at 3:44 p.m., a two-alarm fire was reported in a hotel room on the 22nd floor, after a bartender in the 27th-floor lounge smelled smoke. The entire room had caught fire from a cigarette. The fire was extinguished with help from 45 firefighters, who put it out within five minutes of their arrival. However, the fire led to heavy smoke infiltrating the entire hotel, including the ground floor, through elevator shafts. The Landmark was evacuated, and hundreds of guests and employees were allowed to return inside at approximately 5:15 p.m., after smoke had been cleared from the resort's interior. The 22nd through 27th floors had moderate smoke damage. Five hotel guests were treated for smoke inhalation, but none required hospitalization.
Prospective buyers
During October 1977, Summa was in negotiations with several prospective buyers for the Landmark, which had approximately 1,200 employees at the time. One interested buyer was a group of Chicago investors led by an attorney. Summa was also in negotiations to sell the Landmark for $12 million to Nick Lardakis, a tavern owner who lived in Akron, Ohio. Simultaneously, Summa was holding discussions with the Scott Corporation – a group of downtown Las Vegas entrepreneurs led by Frank Scott – which wanted to purchase the resort at a price of nearly $10 million. Lardakis' acquisition of the Landmark was rejected that month as he was unable to raise the necessary funds to make the purchase; according to Summa, Lardakis' terms were "unrealistic." The Chicago group made a $12 million offer, but Summa's board of directors favored the offer by Scott Corporation, which had no down payment and included a 20-year payout period, while the Chicago group was opposed to a long-term mortgage arrangement with Summa. The Chicago group noted that Summa officials repeatedly declined to let the group examine the Landmark's 1973 property appraisal. Other $12 million offers came from Las Vegas heiress JoAnn Seigal and Beverly Hills management consultant Charles Fink. Seigal also complained that Summa would not provide her with a property appraisal on which to base her negotiations.
The Beverly Hills-based Acro Management Consultants offered $16 million for the Landmark, the highest of five bids up to that time. Summa spokesman Fred Lewis said that Acro's bid was considered "more of an inquiry" than a serious offer, a belief that was disputed by Leonard Gale, vice president of Acro. Gale acknowledged that the Landmark was "the biggest lemon in Las Vegas", but was confident it could become a successful property under Acro's ownership. After weeks of negotiations, Summa announced that no decision had been made on a sale of the Landmark, reportedly due to disagreements within the company. William Lummis, a cousin of Hughes, had been named chairman of the Summa board earlier in the year. Lummis wanted to sell all of Summa's non-profitable properties, while chief operating officer Frank William Gay, citing the purported desires of Hughes, wanted to expand and modernize such properties. The Landmark was considered the weakest of Summa's six gaming and hotel properties in Nevada, as it had never made a profit up to that time.
Summa officials held a meeting on November 3, 1977, but the company made no decision on selling the Landmark, which lost an average of $500,000 per month. By that time, the Scott Corporation stated that it would likely withdraw its offer to purchase the Landmark because of its inability to obtain long-term financing. In January 1978, Summa announced that the Landmark would be sold to the Scott Corporation, with the sale price reportedly ranging between $10 million and $12 million. Up to that time, the resort had reportedly lost $15 million since its opening, despite numerous attempts to increase business. Experts believed that the Landmark suffered financially as a result of its low room count (486 guest rooms at the time) and its location across the street from the Las Vegas Hilton (formerly the International), which was the world's largest hotel at the time. Frank Scott owned downtown Las Vegas' Union Plaza Hotel, which had become one of the city's most successful casinos, and he said the same management principles used at the Union Plaza would be applied to the Landmark.
Scott intended to change the name of the resort, with "The Plaza Tower" as the favorite among several names under consideration. Scott planned to take over operations once the sale received approval from Summa, county and state gaming officials, and courts that were handling Hughes' estate. Because higher offers were subsequently made for the Landmark, the Scott Corporation's offer was rejected by a judge who was monitoring the Hughes estate.
Wolfram/Tickel ownership
A group of midwestern investors purchased the Landmark from the Summa Corporation in February 1978, at a cost of $12.5 million. The group was led by Lou Tickel and Zula Wolfram, and it included Gary Yelverton. The purchase was financed using money that Wolfram's husband, Ed Wolfram, embezzled from his brokerage firm, Bell & Beckwith. Faye Todd, the Landmark's entertainment director and a corporate executive assistant, primarily oversaw the Landmark's operations for the Wolframs, who lived in Ohio. The Wolframs were high rollers who frequently stayed at the Desert Inn resort when visiting Las Vegas. Todd met the Wolframs while working for the Desert Inn as special events coordinator, and she became close friends with Zula Wolfram, who had been planning to purchase a Las Vegas hotel with her husband. Tickel, a former magistrate judge and a resident of Salina, Kansas, previously owned several other hotels. The group was confident that the Landmark would overcome its financial problems, and they planned to add a 750-room hotel tower to the property within two years.
The sale was completed on March 31, 1978, under the new ownership of Zula Wolfram and Lou and Jo Ann Tickel. However, the new owners were unable to find someone with a gaming license and sufficient funds to continue operating the casino ahead of the sale's completion. The investment group had yet to apply for gaming and liquor licenses, and the Summa Corporation declined to continue operating the casino, citing a lack of interest. The Landmark's casino, which had 272 employees, was closed on April 1, 1978, due to the lack of gaming licenses. The owners began a search for a suitable licensed individual who could temporarily operate the casino until they could receive their own gaming license. The hotel, restaurants, and shops remained open, with 700 other employees. The casino reopened on June 2, 1978, after a one-year gaming license had been granted to Frank Modica, a Las Vegas gaming figure who would temporarily operate the casino on the owners' behalf. The casino's bingo parlor remained closed as it was undergoing renovations.
In October 1978, Tickel, Wolfram, and Yelverton were approved by the state to be licensed as the landlords of the Landmark. At the time, Ed Wolfram was listed as a financial adviser on the licensing plan. In 1979, Jesse Jackson Jr. was the Landmark's hotel manager, the only black hotel manager in the Las Vegas hotel industry at the time. The Tickels remained as co-owners of the Landmark until 1980, following Zula Wolfram's approval to purchase their interest in the resort. In 1982, architect Martin Stern Jr. was hired to design a large expansion of the Landmark. Revenue for the Landmark exceeded $26 million that year, although the resort lost $500,000 during the month of November 1982. Up to that time, the Landmark had lost an average of $3 million every year since its opening.
Federal investigators shut down Wolfram's firm on February 7, 1983, after they discovered $36 million missing from six accounts that were managed by him and his wife, ultimately leading to the discovery of his embezzlement. Lawyer Patrick McGraw, trustee for Bell & Beckwith, was approved later that month to operate the Landmark until it could be liquidated. The expansion designed by Stern was cancelled, and Ed Wolfram was convicted of embezzlement later that year, after admitting to using money from his firm to pay for various business ventures, with the Landmark being the most expensive. Zula Wolfram, who had owed $5 million to Summa since her purchase of the Landmark, was forced to sell her majority share in the resort.
Morris ownership
The Landmark was entangled in a Toledo bankruptcy court in July 1983, at which point Bill Morris, a Las Vegas lawyer, made plans to purchase the resort. Morris, also a member of the Las Vegas Convention and Visitors Authority (LVCVA), had previously owned the Holiday Inn Center Strip hotel-casino, as well as the Riverside Resort in nearby Laughlin. Morris had also previously represented Plaza Tower, Inc. at the time that Hughes completed his purchase of the resort. Morris intended to eventually expand the resort to 1,100 hotel rooms.
Yelverton and his wife stated that they had been sold a five-percent interest in the Landmark in 1979, but that the document was never filed with the county recorder's office. In August 1983, the Yelvertons filed a state suit to prevent the sale to Morris, stating that they would not be compensated for their interest if the sale proceeded. At the time, Gary Yelverton was the Landmark's casino manager. The Nevada Gaming Control Board delayed approval of Morris' purchase until his offer could be updated to include what Zula Wolfram owed to Summa. Morris purchased the Landmark for $18.7 million, and took over ownership on October 30, 1983. The struggling resort had a profitable first month under its new management. Morris worked 18 hours a day to ensure the Landmark's success. He said the Landmark had "never really been given a fair chance," citing the absence of "on-hands management on a day-in, day-out basis" as one reason for its lack of success. Morris also believed that previous operators tried to make the Landmark "do something it was not meant to do" by competing with "superstar productions," whereas he believed the resort's location made it more ideal for serving attendees of the Las Vegas Convention Center.
The Landmark remained open while Morris spent nearly $3.5 million on a renovation, which was underway in late 1983. Morris said the Landmark would compete against rivals with its "budget prices and good service." He intended to capitalize on the resort's location with a planned expansion that would feature three 15-story towers with 1,500 hotel rooms, accompanied by a large domed family entertainment center. The expansion was to be built west of the Landmark on of vacant land that Morris had purchased along with the resort. The expansion did not occur, and the Landmark struggled throughout the 1980s.
By the middle of 1985, Morris was negotiating a $28 million loan to pay for improvements and fire safety updates for the Landmark. Clark County officials considered taking action against the resort because of its failure to comply with fire safety standards. On July 29, 1985, the Internal Revenue Service (IRS) filed a $2.1 million lien against the property, because of Morris' failure to pay withholding and payroll taxes for the resort's employees for the previous six months. Two days after the lien was filed, the Landmark filed for Chapter 11 bankruptcy to prevent the IRS from seizing assets such as casino cage money. The resort remained open despite the bankruptcy filing, and the casino had enough money to remain operational. The Landmark had debts totaling $30.6 million, equal to its $30.6 million in assets. Morris blamed the bankruptcy on McGraw, alleging that he derailed a $28.8 million refinancing of the Landmark 24 hours prior to the finalization of the loan. Morris said operations would continue as normal despite the bankruptcy filing.
The Nevada National Bank requested in early 1986 that the bankruptcy be converted to a liquidation proceeding to pay off creditors, stating that the Landmark's bankruptcy reorganization plan could not succeed. Morris said he would have to cancel his reorganization plan and lay off 700 to 800 Landmark employees if a bankruptcy court did not allow the resort to abandon its union labor contracts. Part of Morris' reorganization plan involved cutting employee wages by 15 percent, including his own yearly salary of $145,000. The pay cut would give the Landmark an additional $6,500 per month, which would allow the resort to make its mortgage payments. Morris hoped to increase the hotel's room count after the resort's eventual emergence from bankruptcy, with additional financing from a national franchise hotel chain. He hoped that the Landmark would be out of Chapter 11 bankruptcy by March 1, 1986, although it would ultimately remain in bankruptcy for the rest of its operation.
In January 1987, a small fire broke out in the resort's showroom, located next to the casino. Five employees were evacuated, and there were no injuries. Customers in the casino were unaware of the fire, which was quickly extinguished by the local fire department. The fire was determined to have likely been caused by an arsonist. In July 1987, the Landmark began offering poker tournaments in its Nightcap Lounge each weekday night. To help bring in customers, two cash drawings were held during each tournament.
Morris and bank company Drexel Burnham Lambert began a search in 1989 for a new owner to take over the Landmark. At the end of the year, a U.S. bankruptcy court judge gave Morris until 1990 to find a buyer or refinancing. Otherwise, the Landmark would be liquidated to pay off creditors, in accordance with a court order. On January 2, 1990, the Landmark was ordered into Chapter 7 bankruptcy after a judge ruled that the creditors would not be able to receive compensation under the reorganization plan. Between $43 million and $46 million was owed to various creditors. Morris' gaming license expired that month after the resort failed to pay $500,000 in taxes and penalties. Richard Davis, a Las Vegas-based real estate agent, was appointed by the bankruptcy court that month to temporarily operate the resort. On February 21, 1990, the Nevada Gaming Commission extended the gaming license and allowed the resort to stay open for at least two additional weeks while its financial problems were analyzed by state experts. At that time, the hotel had $562,000 in cash, including $175,000 in revenue that had accumulated in the prior six weeks.
The Landmark continued to struggle, although the introduction of various casino programs helped improve revenue. A U.S. bankruptcy court judge approved a request for the Landmark to be sold at a public auction scheduled seven weeks later, on August 6, 1990. The request was made by Davis, who cited numerous failed attempts to sell the resort. More than 200 prospective buyers had inquired about the Landmark, but only five to ten of them were considered to have serious interest in the resort. In July 1990, two Denver businessmen, David M. Droubay and Martin Heckmaster, offered $35.5 million to purchase the bankrupt resort. Morris was dissatisfied with the offer, stating that the property had been appraised at as much as $70 million.
Closure (1990–1995)
On August 6, 1990, the bankruptcy hearing failed to attract a buyer for the Landmark. Ralph Engelstad and Charles Frias, who both held substantial interest in the resort, had made $100,000 deposits which allowed them to bid at the hearing, but they did not do so and left the hearing without commenting. Droubay and Heckmaster were ineligible to bid as they did not make a deposit. At the request of Davis' attorney, a U.S. bankruptcy judge granted permission to close the Landmark. Gaming operations began shutting down that afternoon, within an hour of the failed hearing. Slot machine and hotel operations were scheduled to shut down later in the week. With 498 rooms at the time, the Landmark was unable to compete with new megaresorts, and was fully closed on August 8, 1990.
Morris, upset about the failed auction, said, "Sometimes it comes down to good luck and bad luck. I had nothing but bad luck. Someone is going to come in and run the Landmark and look like a genius." Forrest Woodward, who managed the casino for Davis, said, "This is just an obsolete gaming property that no one's interested in, considering the debt," which totaled $48 million, including $10 million in unsecured claims. Davis' attorney predicted the Landmark would be closed for 100 days or more while creditors pursued a foreclosure sale. A week after the closure, Davis received permission from the U.S. bankruptcy court to abandon the property as trustee, due to the cost of maintaining security at the closed resort. Davis' attorney said it would cost between $60,000 and $200,000 each month to maintain the property. Creditors would be left to pay bills relating to the property until a foreclosure sale could take place. In December 1990, the property was purchased through a foreclosure sale by Lloyds Bank of London for $20 million. Lloyds Bank made the purchase in order to protect a $25 million loan it had made to Morris in 1988. By March 1993, the Landmark's contents had been liquidated through a sale conducted by National Content Liquidators.
By July 1993, representatives of Lloyds Bank had approached the LVCVA about the possibility of purchasing the Landmark. LVCVA was interested in the proposal, with plans to use the Landmark's 21-acre property either for a parking lot or for expansion. LVCVA purchased the Landmark in September 1993, at a cost of $15.1 million. During 1994, board members of LVCVA debated whether to restore the Landmark or demolish it, ultimately deciding on the latter. Only three LVCVA board members voted to save the building. Among those voting in support was Lorraine Hunt, who later said that the Landmark "was iconic and part of the history of Las Vegas. Had they kept it, it could have been the office for the Las Vegas Convention and Visitors Authority."
Demolition
LVCVA paid $800,000 for asbestos removal in the tower. Central Environmental Inc. was hired to remove the asbestos, while AB-Haz Environmental, Inc. was the asbestos removal consultant. In mid-1994, AB-Haz Environmental began removing asbestos insulation from the Landmark. The removal, scheduled for completion in August 1994, took nearly six months. In October 1994, it was announced that the Landmark would be demolished the following month to make way for a 21-acre parking lot, to be used by the Las Vegas Convention Center. Demolition of the tower was delayed several times, to allow for the removal of additional asbestos. The Clark County Health District proposed penalties against the asbestos companies.
By February 1995, AB-Haz had twice declared the Landmark to be asbestos-free and safe for demolition, although Clark County officials discovered that some hotel floors still contained 90 percent of the asbestos. Up to that time, LVCVA had already paid a total of $1 million to the asbestos companies to have the asbestos removed from the hotel and an adjacent apartment complex, allowing for their demolition. The Clark County Air Pollution Control Division recommended a $450,000 fine against AB-Haz for failure to remove the asbestos, while LVCVA would have to spend an additional $1 million for further asbestos removal. AB-Haz was ultimately cited for violating air emission standards during the asbestos removal, and signed a settlement in which the company agreed to pay an $18,000 fine. Central Environmental was removing asbestos from the tower as of August 1995. Because of previous delays, officials for LVCVA had given up on setting a demolition date until all the asbestos was removed. In October 1995, LVCVA paid Iconco Inc. $740,000 to remove remaining asbestos from the resort, hoping to have it demolished in time for ConExpo to be held on the property's new parking lot in March 1996.
Controlled Demolition, Inc. (CDI) was hired to implode the tower. No blueprints could be found for the tower, which CDI president Mark Loizeaux considered unusual. Demolition crews discovered secret stairwells in the tower, and Loizeaux said, "We have learned everything as we have gone in. It was a very strange structure, very unique." A week before the Landmark tower was demolished, crews removed the remaining asbestos from the low-rise structures and subsequently tore them down. Crews then spent the final days of demolition by drilling in the tower to weaken and prepare it ahead of its planned implosion. Less than 100 pounds of dynamite was placed in certain locations throughout the tower's first four floors.
At 5:37 a.m. on November 7, 1995, the Landmark tower was demolished through implosion. An estimated 7,000 people arrived to witness the implosion. Upon detonation, the tower's northwest half was brought down first; the second half then caved in on itself, sending a black cloud of dust 150 feet into the air. Most of the material from the demolished structure was to be recycled and used in other construction projects. The 31-story tower was the tallest reinforced concrete building ever demolished in North America, and the second-tallest building in the world to be demolished. Demolition and related expenses cost $3 million. Frank Wright, curator of the Nevada State Museum and Historical Society, said, "I kind of hate to see it come down," stating that the Landmark tower still represented what the then-upcoming Stratosphere tower represented: "the biggest and the tallest." The property was to become occupied by 2,200 parking spaces, expected to be ready by March 1996.
One of the Landmark's ground-level signs, with gold and blue cursive neon lettering, was restored by the Neon Museum and installed at the parking lot. As of 2017, the property contains 2,948 parking spaces for the Las Vegas Convention Center. In 2019, work was underway on an expansion of the convention center, to be built on the former sites of the Landmark and the nearby Riviera. The sign was removed from the site and temporarily put into storage by the Neon Museum. The convention center's West Hall expansion opened on the site in June 2021.
Architecture
The Landmark tower was designed by architects Gerald Moffitt and Ed Hendricks. The tower's design, inspired by the Space Needle in Seattle, Washington, was the first of its kind to be built in Nevada. When construction stopped in 1962, the project consisted of of floor space, and included two basements that were 30 feet deep. The tower's height measured 297 feet, while its diameter measured 60 feet. The tower's dome measured 141 feet in diameter. In 1966 – the year that construction resumed – architects George Tate and Thomas Dobrusky were hired to design new portions of the resort, including the ground-floor casino.
Height
The Landmark tower was billed as having 31 floors, although it skipped floors 13 and 28. It was the tallest building in the state from 1962 to 1969. In 1967, a revolving letter "L" neon sign was installed at the top of the tower. Excluding its rooftop sign, the tower stood , seven feet taller than the Mint hotel in downtown Las Vegas.
Conflicting numbers have been given for the tower's total height. According to Scherer, the sign measured , and the tower measured , including the sign. At the time of opening, the Landmark tower was billed as having a height of . By that time, the new 30-story International Hotel had become the tallest building in the state at . When it was demolished, the tower reportedly stood . According to Emporis, the tower stood from the ground to its roof, while the tip raised the height to a total of .
Features
When the Landmark opened, it had a total of 400 slot machines. The ground-floor casino was , while a second casino, consisting of , was located in the dome on the 29th floor; it was the first high-rise casino in the state. At the time of opening, the ground-floor casino featured red and black colors, while the upper casino used orange coloring and wood. The hotel contained 476 rooms and 27 suites for a total of 503, a small number in comparison to other Las Vegas resorts, which commonly had 1,000 rooms. The tower included 157 hotel rooms, while the remaining units were located on ground level. The tower used an octagonal floorplan, and its rooms were shaped like pie slices. By 1977, the room count had increased to 524, before ultimately being lowered to 498 at the time of the Landmark's closure in 1990.
The Landmark's interior designer was Las Vegas resident Leonard Edward England, who designed the ground floor to include a colorful and primitive Incan theme, which gradually changed to a Space Age theme on subsequent floors. The interior included $200,000 light fixtures, glowing, red-colored Incan masks, and a burnished metal wall sculpture representing a Cape Kennedy launch. The interior also included 65 tons of black and white polished marble, and carved mahogany woodwork from Mexico. In addition, the interior featured murals depicting the eight Wonders of the World, which included the Landmark tower.
After Hughes agreed to purchase the resort, he had an island built in the middle of the hotel's 240-foot swimming pool; the pool cost $200,000 and was the longest in the world. The Landmark's pool included waterfalls and three carpeted bridges leading to its center island, which featured palm trees. For the hotel, Hughes replaced 72-inch beds with 80-inch beds and had color televisions built into the walls of each room ahead of the resort's opening.
The Landmark's second floor was used for offices. The tower's dome included five floors, although floors 26 and 30 were used by employees for maintenance equipment, elevator equipment, and dressing rooms. The shape and strength of the tower's bubble dome was maintained by perlite concrete and steel girders. The Landmark included a high-speed exterior glass elevator, which took people up to the five-story cupola dome. The elevator was located on the tower's west side, facing the Las Vegas Strip. It was capable of moving 1,000 feet per minute, allowing people to go from the ground floor to the 31st floor in 20 seconds. It was the fastest elevator in the Western United States. Hughes biographer Michael Drosnin stated that the elevator was prone to constant malfunctions, and that the Landmark's air-conditioning system "never really worked." The dome provided wraparound views of the city, and was capable of holding over 2,000 people. The dome included lounges and a night club, as well as the high-rise casino on the 29th floor. At the time of the Landmark's opening, the showroom and the Cascade Terrace coffee shop were located on the first floor, while a steak and seafood gourmet restaurant known as Towers Restaurant was located on the 27th floor and a Chinese restaurant known as the Mandarin Room was located on the 29th floor.
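As a rough consistency check (an illustration, not a figure from the source), the quoted elevator speed and trip time agree with the tower's stated 297-foot height: 1,000 ft/min × (20 s ÷ 60 s/min) ≈ 333 ft, leaving roughly 36 feet of margin for acceleration and braking at either end of the run.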
In April 1971, plans were announced for a $750,000 expansion that would include luxury suites on the 29th floor, the highest in Las Vegas at the time. Also planned was the remodeling of the casino and lobby, and the expansion of a coffee shop. The Skytop Rendezvous, a piano bar and dance floor on the top floor of the tower, was reopened as a discotheque on February 3, 1975, specializing in middle-of-the-road music. The Landmark was the only major hotel in the state to have a discotheque. When Morris' renovation began in December 1983, the tower contained 150 rooms, a number that was expected to be reduced as the rooms would be enlarged and upgraded to first-class standards. Other plans included changes to the coffee shop, new casino carpeting, and redesigning and renaming the 27th-floor restaurant as Anthony's Seafood and Prime Rib Room. The renovation was financed by Valley Bank of Nevada.
The Love Song Lounge operated on the top floor during the mid-1980s, before and after Morris' renovation, and offered dancing. During 1985 through 1987, the resort also operated the Sunset Room on the 27th floor, offering piano-bar music and fine dining, with an emphasis on steaks and seafood. The Poolside Room operated on the ground level. The Nightcap Lounge opened at the Landmark in 1986, and offered comedy acts.
Reception
In 1962, the Los Angeles Times called the $6 million Landmark "by far the most spectacular project" of several Las Vegas resorts then under construction; the newspaper further wrote that the Landmark was "destined to become the Mark Hopkins of Las Vegas." The following year, the Reno Evening Gazette opined that the Landmark had "the most unusual exterior architecture in Nevada." In 1966, Billboard wrote that the mushroom-shaped Landmark tower had "the most spectacular design" of all recent high-rise structures in the city. In 1993, architecture critic Alan Hess noted the simplicity of the Landmark and the nearby International Hotel when compared with previous Las Vegas casinos, writing, "As singular, self-contained forms, they showed none of the complexity of the different pieces and sequential additions that made the original Strip visually and urbanistically richer." In 2002, Geoff Carter of Las Vegas Weekly wrote that the demolished Landmark was "Vegas' coolest building and a veritable shrine to 1960s 'Googie' architecture."
Performances
Peggy Lee performed at the Landmark during the year of its opening. In its early years, the Landmark became well known for its performances by country singers, including Kay Starr, Jimmy Dean, Patti Page, Bobbie Gentry, and Danny Davis with his Nashville Brass band, as well as a four-week show starring Ferlin Husky and Archie Campbell. Frank Sinatra also performed at the Landmark, and Bobby Darin made one of his final appearances there. In 1974, the Landmark launched Red McIlvaine's Star Search, a variety show featuring people from across the United States.
The following year, The Jim Halsey Company began Country Music USA, a show at the Landmark that featured a different country music headliner every two to three weeks. The show was usually sold out. Roy Clark and Mel Tillis made their debuts in Country Music USA, as did Freddy Fender. The Oak Ridge Boys made their Las Vegas debut in Country Music USA. Leroy Van Dyke performed in the show, with Fender as his opening act. Van Dyke performed again at the Landmark later in the 1970s, with Sons of the Pioneers as his opening act. Other artists who performed in Country Music USA included Barbara Fairchild, Johnny Paycheck and Tommy Overstreet, as well as Jody Miller, Roy Head, and Hank Thompson. Country Music USA ran for two years, until 1977.
Spellcaster, an 80-minute family-oriented show featuring country-western singer Roy Clayborne, debuted at the Landmark in 1982. The production show, with dancers and showgirls, featured Clayborne singing 15 songs. Spellcaster was named after one of the Wolframs' racing horses, and was produced through Zula Wolfram's Las Vegas production company, Zula Productions. The show was designed and directed by Larry Hart, a 1979 Grammy Award winner, and it ran for approximately eight months. At the time of Spellcaster's debut, Danny Hein and Terri Dancer also began performing in the resort's Galaxy Lounge. Hein and Dancer had four different shows consisting of various costumes and set decorations, and were backed by a five-person band.
In the late 1980s, the Landmark's showroom hosted minor acts and was considered small in comparison to other Las Vegas resorts. The Landmark hosted magician Melinda Saxe in a family-friendly magic show, which was initially known as 88 Follies Revue and was renamed Follies Revue '89 the following year before concluding its run. In 1990, the main showroom featured Spellbound, a magic show consisting of two illusionist teams. Dick Foster was the show's director and producer.
In popular culture
The unfinished tower briefly appears in the 1964 film, Viva Las Vegas. In 1971, Sean Connery and stuntmen rode atop the Landmark's exterior elevator as part of filming for scenes in the James Bond film Diamonds Are Forever; the tower was among other Las Vegas resorts that stood in as the fictional Whyte House hotel-casino. In the 1980s, the Landmark appeared in the television series Vega$ and Crime Story. In October 1994, the exterior entrance of the Landmark was lit up for one night so it could be used for outdoor shots as the fictional Tangiers casino, featured in the 1995 film, Casino.
The Landmark's implosion was filmed for use in director Tim Burton's 1996 film, Mars Attacks!. In the film, the Landmark is portrayed as the fictional Galaxy Hotel, which is destroyed by an alien spaceship. Burton had stayed at the hotel a few times and was upset by the decision to demolish it, so he wanted to immortalize it in his film. A scale model of the Landmark tower was also made for the production of Mars Attacks!. The demolition of the Landmark also appears during the closing credits of the 2003 film, The Cooler. The Lucky 38, a fictional tower casino featured in the 2010 video game Fallout: New Vegas, partially resembles the Landmark.
A near-exact replica of the Landmark called the Bikini Atoll Casino can be seen in the Saints Row (2022) reboot, in the El Dorado district (which is based on the Las Vegas Strip) of Santo Ileso. It is portrayed as an abandoned casino.
See also
Fontainebleau Las Vegas, tallest building in Nevada since 2008; opened in 2023 after a construction delay
Stratosphere Las Vegas, still-extant hotel with a similarly-designed tower
Notes
References
External links
Slideshow of Landmark photos
Landmark demolition video
Footage of the Landmark's implosion used for Mars Attacks!
Eyewitness News Las Vegas news coverage
KLAS-TV news coverage
KSNV news coverage
KTNV-TV news coverage
Tribute to the Landmark
Casinos completed in 1969
Hotel buildings completed in 1969
Hotels established in 1969
1990 disestablishments in Nevada
Defunct casinos in the Las Vegas Valley
Defunct hotels in the Las Vegas Valley
Skyscraper hotels in Winchester, Nevada
Former skyscraper hotels
Demolished hotels in Clark County, Nevada
Buildings and structures demolished by controlled implosion
Buildings and structures demolished in 1995
1969 establishments in Nevada
Casino hotels | Landmark (hotel and casino) | Engineering | 13,140 |
23,435,704 | https://en.wikipedia.org/wiki/Official%20Handbook%20of%20Stations | The Official Handbook of Stations was a large book (, 494 pages) listing all the passenger and goods stations, as well as private sidings, on the railways of Great Britain and Ireland. It was published in 1956 by the British Transport Commission (under the Railway Clearing House name) and provides an historical snapshot of the railways of the time.
Each station or depot was shown against its county, railway region (including its pre-grouping company), and parent station. If the station had a crane then its weight limit was also shown in tons & cwt.
Classes of traffic
In six columns, the classes of traffic handled at each station were shown as follows (a brief decoding sketch follows the list):
Column 1 - G = Goods Traffic
Column 1 - G* = Coal Class, Mineral and Station-to-Station Traffic in Truck Loads.
Column 2 - P = Passenger, Parcels & Miscellaneous Traffic.
Column 2 - P* = Passenger, but not Parcels & Miscellaneous Traffic.
Column 2 - P† = Parcels & Miscellaneous Traffic (i.e. not Passengers).
Column 3 - F = Furniture Vans, Carriages, Motor Cars, Portable Engines and Machines on Wheels.
Column 4 - L = Livestock
Column 5 - H = Horse Boxes and Prize Cattle Vans.
Column 6 - C = Carriages and Motor Cars by Passenger or Parcels Train.
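These single-letter codes form a small fixed vocabulary, so a station's entry can be decoded mechanically from its six columns. The following Python sketch is an illustration only, not part of the Handbook, and the station record in the example is hypothetical.

# Decode the Handbook's six traffic-class columns into readable descriptions.
TRAFFIC_CODES = {
    "G": "Goods traffic",
    "G*": "Coal class, mineral and station-to-station traffic in truck loads",
    "P": "Passenger, parcels & miscellaneous traffic",
    "P*": "Passenger, but not parcels & miscellaneous traffic",
    "P†": "Parcels & miscellaneous traffic (not passengers)",
    "F": "Furniture vans, carriages, motor cars, portable engines and machines on wheels",
    "L": "Livestock",
    "H": "Horse boxes and prize cattle vans",
    "C": "Carriages and motor cars by passenger or parcels train",
}

def decode(columns):
    """Translate one station's six column entries into traffic-class descriptions."""
    return [TRAFFIC_CODES[code] for code in columns if code]

# Hypothetical station handling goods, full passenger traffic, and livestock:
print(decode(["G", "P", "", "L", "", ""]))

A blank column simply means the station did not handle that class of traffic, which is why the decoder skips empty entries.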
References | Official Handbook of Stations | Physics | 263 |
38,650,474 | https://en.wikipedia.org/wiki/Poll%20Everywhere | Poll Everywhere is a privately held company headquartered in San Francisco, California. The company, founded in April 2007, is an online service for classroom response and audience response systems. Poll Everywhere's product allows audiences and classrooms in over 100 countries to use mobile phones, thereby "plotting the obsolescence" of proprietary hardware response devices otherwise known as clickers.
The company raised $20,000 in venture funding from Y Combinator in 2008.
Origins
Jeff Vyduna, Brad Gessler, and Sean Eby were coworkers at Deloitte Consulting charged with giving internal presentations. One day, to keep the audience awake, they decided to pull a text message "out of a hat". Code was started in April 2007, and the online site launched in September 2007. During that time, Vyduna matriculated at the Sloan School of Management at MIT. On May 14, 2008, the company placed as a semifinalist in the MIT $100K Entrepreneurship Challenge. Vyduna subsequently took a leave of absence when Poll Everywhere was accepted into Y Combinator in mid-2008. In 2010, the company moved to San Francisco.
Reception
Mashable noted in 2009, "The usefulness of Poll Everywhere is apparent." http://mashable.com/2009/06/17/poll-everywhere/
See also
Audience Response Systems
References
Mobile technology companies
Y Combinator companies
American companies established in 2007
Companies based in San Francisco | Poll Everywhere | Technology | 294 |
5,499,083 | https://en.wikipedia.org/wiki/Masterminds%20%281997%20film%29 | Masterminds is a 1997 American action comedy film directed by Roger Christian, written by Floyd Byers and starring Patrick Stewart, Vincent Kartheiser, Brenda Fricker, Bradley Whitford, and Matt Craven. It tells the story of a computer engineering prodigy who matches wits with a security consultant who has taken over his stepsister's school, which he himself once attended, and demands a ransom for the students' release.
Plot
Oswald "Ozzie" Paxton is a computer engineering prodigy and expert hacker whose actions often have his father Jake threatening to send him to military school if he does not shape up. One day, he begins an unauthorized download of a soon-to-be-released movie. His download is interrupted when his younger stepsister Melissa Randall enters his room without permission. The resulting squabble between them results in Jake and Melissa's mother Helen intervening. In the process, Jake discovers the illicit download and Helen punishes Ozzie, making him take Melissa to her private school Shady Glen.
He takes her there by skateboard, and they run into Principal Claire Maloney and security consultant Rafe Bentley. It is revealed that Maloney had previously expelled Ozzie, and she explains to Bentley that security measures were adopted after a "science room burnout" that Ozzie caused. Before he can get out of the school, Bentley and his crew of "security guards" use a variety of firearms and tranquilizer dart guns to subdue several staff members, lock down the school, and hold the children hostage. Bentley has planned stages of a ransom scheme involving their parents' corporations. Ozzie attempts to alert Melissa to the danger. She does not believe him, and he is subsequently chased by one of the gunmen. Using a bunsen burner and a vial of acid, he is able to subdue his pursuer. He then begins wreaking havoc with Bentley's computerized security system.
The police make several attempts to breach the school's perimeter only to run into automatic gunfire, rocket launchers, and mines. As a concession, Bentley releases most of the children, but keeps the richest ones, including Melissa, and demands a very large ransom for their return. Ozzie locates the remaining hostages and rescues them, but Melissa has been taken by Bentley. He then places an improvised time bomb at the bottom of the school's indoor pool. He attempts to stop the ransom payment, but finds out too late that Foster Deroy, the man designated to deliver it, is actually Bentley's confederate. Bentley ties Ozzie to a chair and leaves with his men, keeping Melissa as an insurance policy. They intend to escape through the sewer pipes using ATVs.
While Ozzie is struggling to free himself, the bomb explodes, flooding the school's lower levels and neutralizing nearly everyone there. Ozzie and his friend K-Dog seize an abandoned ATV and pursue Bentley. They rescue Melissa, but Bentley escapes with the ransom. However, Ozzie is able to blow the whistle on Deroy with a little help from Maloney, who also witnessed Rafe's actions. Through his cellphone, the police trace Rafe's employer: Larry Millard, the CEO of a rival corporation, who masterminded the plot so that the money set aside for a bidding war would instead be paid to the hostage-takers, allowing him to win the bid against the corporation run by Miles Lawrence, Jake's employer.
Soon afterward, Bentley sees a light at the end of the tunnel only to discover that the light leads to a sewage reclamation plant. The money begins to sink into the sewage as police officers arrive to arrest him.
Cast
Patrick Stewart as Rafe Bentley, a security consultant who takes over Shady Glen.
Vincent Kartheiser as Oswald "Ozzie" Paxton, a computer-engineering prodigy and hacker who matches wits with Rafe.
Brenda Fricker as Claire Maloney, the principal of Shady Glen.
Bradley Whitford as Miles Lawrence, the CEO of a company that Rafe demands a ransom from after he previously fired Rafe for embezzlement.
Matt Craven as Jake Paxton, a businessman and the father of Ozzie.
Annabelle Gurwitch as Helen Randall, Ozzie's stepmother.
Jon Abrahams as "K-Dog", Ozzie's friend.
Katie Stuart as Melissa Randall, Ozzie's stepsister.
Michael MacRae as Foster Deroy, the CFO of Miles' company and an ally of Rafe.
Callum Keith Rennie as Ollie, one of Rafe's minions.
Earl Pastko as Captain Jankel
Jason Schombing as Marvin
Michael David Simms as Colonel Duke
David Paul Grove as "Ferret", one of Rafe's minions.
Pamela Martin as TV Reporter
Teryl Rothery as Ms. Saunders
Vanessa Morley as Gabby Lawrence, the daughter of Miles who attends Shady Glen.
Jay Brazeau as Eliot, the gate guard at Shady Glen.
Michael Benyaer as Taxi Driver
Jim Byrnes as Larry Millard (uncredited), the CEO of a company that is the rival of Miles' company.
Production
On site locations included Hatley Castle in Colwood, British Columbia, as well as locations in Victoria and Vancouver. While on-site filming took place in British Columbia, Canada, studio filming took place in Shepperton Studios in England.
Performance
In a release from Studio Briefing, Masterminds was listed as a box office flop for the Labor Day box office weekend, grossing only $1.8 million.
Reception
On Rotten Tomatoes the film has an approval rating of 19% based on reviews from 16 critics.
Roger Ebert of the Chicago Sun-Times panned the film, saying "all of the pieces have been assembled from better films, but then there are few worse films to borrow from", but had some praise for Stewart: "the sole remaining interest comes from the presence of Stewart."
References
External links
1997 films
American action comedy films
Films about computing
Films directed by Roger Christian
Films set in schools
Films shot in Vancouver
Films scored by Anthony Marinelli
Columbia Pictures films
1997 action comedy films
1997 comedy films
1990s English-language films
1990s American films
English-language action comedy films | Masterminds (1997 film) | Technology | 1,230 |
34,733,019 | https://en.wikipedia.org/wiki/Jacobson%E2%80%93Morozov%20theorem | In mathematics, the Jacobson–Morozov theorem is the assertion that nilpotent elements in a semi-simple Lie algebra can be extended to sl2-triples. The theorem is named after Nathan Jacobson and Vladimir Morozov.
Statement
The statement of Jacobson–Morozov relies on the following preliminary notions: an sl2-triple in a semi-simple Lie algebra g (throughout in this article, over a field of characteristic zero) is a homomorphism of Lie algebras sl2 → g. Equivalently, it is a triple (e, h, f) of elements in g satisfying the relations
[h, e] = 2e, [h, f] = −2f, [e, f] = h.
An element x of g is called nilpotent if the endomorphism ad(x) = [x, −] (known as the adjoint representation) is a nilpotent endomorphism. It is an elementary fact that for any sl2-triple (e, h, f), e must be nilpotent. The Jacobson–Morozov theorem states that, conversely, any nilpotent non-zero element e can be extended to an sl2-triple (e, h, f). For the special linear Lie algebra sln, the sl2-triples obtained in this way can be made explicit in terms of the Jordan normal form of e.
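As a minimal worked example (standard textbook material, not taken from this article), take g = sl2 itself: the basic nilpotent element e extends to the sl2-triple

\[
e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\]

and a direct matrix computation verifies [h, e] = 2e, [h, f] = −2f and [e, f] = h.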
The theorem can also be stated for linear algebraic groups (again over a field k of characteristic zero): any morphism (of algebraic groups) from the additive group Ga to a reductive group H factors through the embedding of Ga into SL2 as the upper-triangular unipotent subgroup, t ↦ (1, t; 0, 1); that is, it extends to a morphism SL2 → H. Furthermore, any two such factorizations
SL2 → H
are conjugate by a k-point of H.
Generalization
A far-reaching generalization of the theorem as formulated above can be stated as follows: the inclusion of pro-reductive groups into all linear algebraic groups, where morphisms in both categories are taken up to conjugation by elements of the target group, admits a left adjoint, the so-called pro-reductive envelope. This left adjoint sends the additive group Ga to SL2 (which happens to be semi-simple, as opposed to pro-reductive), thereby recovering the above form of Jacobson–Morozov.
This generalized Jacobson–Morozov theorem was proven by André and Kahn by appealing to methods related to Tannakian categories, and by O'Sullivan by more geometric methods.
References
Lie algebras
Algebraic groups | Jacobson–Morozov theorem | Mathematics | 429 |
5,127,933 | https://en.wikipedia.org/wiki/Nerve%20%28category%20theory%29 | In category theory, a discipline within mathematics, the nerve N(C) of a small category C is a simplicial set constructed from the objects and morphisms of C. The geometric realization of this simplicial set is a topological space, called the classifying space of the category C. These closely related objects can provide information about some familiar and useful categories using algebraic topology, most often homotopy theory.
Motivation
The nerve of a category is often used to construct topological versions of moduli spaces. If X is an object of C, its moduli space should somehow encode all objects isomorphic to X and keep track of the various isomorphisms between all of these objects in that category. This can become rather complicated, especially if the objects have many non-identity automorphisms. The nerve provides a combinatorial way of organizing this data. Since simplicial sets have a good homotopy theory, one can ask questions about the meaning of the various homotopy groups πn(N(C)). One hopes that the answers to such questions provide interesting information about the original category C, or about related categories.
The notion of nerve is a direct generalization of the classical notion of classifying space of a discrete group; see below for details.
Construction
Let C be a small category. There is a 0-simplex of N(C) for each object of C. There is a 1-simplex for each morphism f : x → y in C. Now suppose that f : x → y and g : y → z are morphisms in C. Then we also have their composition gf : x → z. This suggests our course of action: add a 2-simplex for this commutative triangle. Every 2-simplex of N(C) comes from a pair of composable morphisms in this way. The addition of these 2-simplices does not erase or otherwise disregard morphisms obtained by composition, it merely remembers that this is how they arise.
In general, N(C)k consists of the k-tuples of composable morphisms
A0 → A1 → ⋯ → Ak
of C. To complete the definition of N(C) as a simplicial set, we must also specify the face and degeneracy maps. These are also provided to us by the structure of C as a category. The face maps
di : N(C)k → N(C)k−1
are given by composition of morphisms at the ith object (or removing the ith object from the sequence, when i is 0 or k). This means that di sends the k-tuple
A0 → ⋯ → Ai−1 → Ai → Ai+1 → ⋯ → Ak
to the (k − 1)-tuple
A0 → ⋯ → Ai−1 → Ai+1 → ⋯ → Ak.
That is, the map di composes the morphisms Ai−1 → Ai and Ai → Ai+1 into the morphism Ai−1 → Ai+1, yielding a (k − 1)-tuple for every k-tuple.
Similarly, the degeneracy maps
si : N(C)k → N(C)k+1
are given by inserting an identity morphism at the object Ai.
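To make the combinatorics concrete, the following minimal Python sketch (illustrative only; the encoding of a finite category as a list of (source, target, name) triples, including identities, is an assumption of this sketch) enumerates the k-simplices of N(C):

```python
from itertools import product

def nerve_simplices(morphisms, k):
    """Enumerate the k-simplices of N(C) for a finite category C,
    given as a list of (source, target, name) triples (with identities).
    A k-simplex is a chain of k composable morphisms A0 -> ... -> Ak."""
    if k == 0:
        # 0-simplices are the objects of C.
        return sorted({m[0] for m in morphisms} | {m[1] for m in morphisms})
    chains = []
    for chain in product(morphisms, repeat=k):
        # Composability: each arrow ends where the next one starts.
        if all(chain[i][1] == chain[i + 1][0] for i in range(k - 1)):
            chains.append(chain)
    return chains

# Example: the poset 0 <= 1 <= 2 regarded as a category.
mors = [(0, 0, "id0"), (1, 1, "id1"), (2, 2, "id2"),
        (0, 1, "f"), (1, 2, "g"), (0, 2, "gf")]
print(len(nerve_simplices(mors, 2)))  # number of composable pairs
```

On such chains the face map di composes the two arrows meeting at the ith object (or drops an end object), and the degeneracy map si inserts an identity arrow, exactly as described above.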
Simplicial sets may also be regarded as functors Δop → Set, where Δ is the category of nonempty totally ordered finite sets and order-preserving morphisms. Every partially ordered set P yields a (small) category i(P) with objects the elements of P and with a unique morphism from p to q whenever p ≤ q in P. We thus obtain a functor i from the category Δ to the category of small categories. We can now describe the nerve of the category C as the functor Δop → Set
[n] ↦ HomCat(i([n]), C),
where [n] denotes the totally ordered set {0, 1, ..., n}.
This description of the nerve makes functoriality transparent; for example, a functor between small categories C and D induces a map of simplicial sets N(C) → N(D). Moreover, a natural transformation between two such functors induces a homotopy between the induced maps. This observation can be regarded as the beginning of one of the principles of higher category theory. It follows that adjoint functors induce homotopy equivalences. In particular, if C has an initial or final object, its nerve is contractible.
Examples
The primordial example is the classifying space of a discrete group G. We regard G as a category with one object whose endomorphisms are the elements of G. Then the k-simplices of N(G) are just k-tuples of elements of G. The face maps act by multiplication, and the degeneracy maps act by insertion of the identity element. If G is the group with two elements, then there is exactly one nondegenerate k-simplex for each nonnegative integer k, corresponding to the unique k-tuple of elements of G containing no identities. After passing to the geometric realization, this k-tuple can be identified with the unique k-cell in the usual CW structure on infinite-dimensional real projective space. The latter is the most popular model for the classifying space of the group with two elements. See (Segal 1968) for further details and the relationship of the above to Milnor's join construction of BG.
Most spaces are classifying spaces
Every "reasonable" topological space is homeomorphic to the classifying space of a small category. Here, "reasonable" means that the space in question is the geometric realization of a simplicial set. This is obviously a necessary condition; it is also sufficient. Indeed, let X be the geometric realization of a simplicial set K. The set of simplices in K is partially ordered, by the relation x ≤ y if and only if x is a face of y. We may consider this partially ordered set as a category with the relations as morphisms. The nerve of this category is the barycentric subdivision of K, and thus its realization is homeomorphic to X, because X is the realization of K by hypothesis and barycentric subdivision does not change the homeomorphism type of the realization.
The nerve of an open covering
If X is a topological space with open cover Ui, the nerve of the cover is obtained from the above definitions by replacing the cover with the category obtained by regarding the cover as a partially ordered set with set inclusions as relations (and hence morphisms). Note that the realization of this nerve is not generally homeomorphic to X (or even homotopy equivalent): homotopy equivalence will usually hold only for a good cover by contractible sets having contractible intersections.
A moduli example
One can use the nerve construction to recover mapping spaces, and even get "higher-homotopical" information about maps. Let D be a category, and let X and Y be objects of D. One is often interested in computing the set of morphisms X → Y. We can use a nerve construction to recover this set. Let C = C(X,Y) be the category whose objects are diagrams
X ← U → V ← Y
such that the morphisms U → X and Y → V are isomorphisms in D. Morphisms in C(X, Y) are commutative diagrams of such spans, given by maps U → U′ and V → V′ compatible with the maps to X and from Y; here, the indicated maps are to be isomorphisms or identities. The nerve of C(X, Y) is the moduli space of maps X → Y. In the appropriate model category setting, this moduli space is weak homotopy equivalent to the simplicial set of morphisms of D from X to Y.
Nerve theorem
The next theorem is due to Grothendieck: the nerve functor N : Cat → sSet is fully faithful, and a simplicial set is isomorphic to the nerve of a small category if and only if it satisfies the Segal condition, that is, if and only if every inner horn has a unique filler.
See also: Segal space.
References
Blanc, D., W. G. Dwyer, and P.G. Goerss. "The realization space of a -algebra: a moduli problem in algebraic topology." Topology 43 (2004), no. 4, 857–892.
Goerss, P. G., and M. J. Hopkins. "Moduli spaces of commutative ring spectra." Structured ring spectra, 151–200, London Math. Soc. Lecture Note Ser., 315, Cambridge Univ. Press, Cambridge, 2004.
Segal, Graeme. "Classifying spaces and spectral sequences." Inst. Hautes Études Sci. Publ. Math. No. 34 (1968) 105–112.
Category theory
Simplicial sets | Nerve (category theory) | Mathematics | 1,708 |
24,264,072 | https://en.wikipedia.org/wiki/Carpuject | The Carpuject is a syringe device for the administration of injectable fluid medication. It was patented after World War II by the Sterling Drug Company, which later became Sterling Winthrop. It is designed with a luer-lock device to accept a sterile hypodermic needle or to be linked directly to an intravenous tubing line. The product can deliver an intravenous or intramuscular injection by means of a holder that attaches to the barrel and a plunger that attaches to the barrel plug. Medication is prefilled into the syringe barrel. When the plug at the end of the barrel is advanced to the head of the barrel, it discharges and releases the contents through the needle or into the lumen of the tubing.
The Carpuject competed with the Tubex injection system developed by Wyeth. It has been redesigned several times to comply with sterility and infection control standards.
In 1974, Sterling opened a manufacturing plant in McPherson, Kansas. In 1988 Kodak purchased Winthrop Labs, and in 1994 it sold the injectable drug division and all intellectual property rights to Sanofi, a French pharmaceutical company, now Sanofi Aventis. In 1997 Sanofi sold the injectable Carpuject line of business to Abbott Laboratories of Abbott Park, IL for US$200 million; Abbott added generic injectable drugs to the injectable line. In about 2004 Abbott spun off all of its hospital products, including the Carpuject, into a separate hospital supply company, Hospira. In 2015, Hospira, including the Carpuject device, was purchased by Pfizer.
References
Medical equipment | Carpuject | Biology | 349 |
13,501,019 | https://en.wikipedia.org/wiki/Lower%20flammability%20limit | The lower flammability limit (LFL), usually expressed in volume per cent, is the lower end of the concentration range over which a flammable mixture of gas or vapour in air can be ignited at a given temperature and pressure. The flammability range is delineated by the upper and lower flammability limits. Outside this range of air/vapor mixtures, the mixture cannot be ignited at that temperature and pressure. The LFL decreases with increasing temperature; thus, a mixture that is below its LFL at a given temperature may be ignitable if heated sufficiently.
For liquids, the LFL is typically close to the saturated vapor concentration at the flash point. However, due to differences in the liquid properties, the relationship of LFL to flash point (which is also dependent on the test apparatus) is not fixed, and some spread in the data usually exists.
The LFL of a mixture can be evaluated using the Le Chatelier mixing rule if the LFLs of the components are known:
LFL_mix = 1 / Σ (x_i / LFL_i)
where LFL_mix is the lower flammability limit of the mixture, LFL_i is the lower flammability limit of the i-th component of the mixture, and x_i is the molar fraction of the i-th component of the mixture.
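A minimal numerical sketch of the rule (hypothetical function name; the LFL values of roughly 5.0 vol% for methane and 3.0 vol% for ethane are typical handbook figures used purely for illustration):

```python
def lfl_mixture(fractions, lfls):
    """Le Chatelier mixing rule: LFL_mix = 1 / sum(x_i / LFL_i).

    fractions: molar fractions of the fuel components (must sum to 1)
    lfls: the components' lower flammability limits, in vol%
    """
    assert abs(sum(fractions) - 1.0) < 1e-9, "molar fractions must sum to 1"
    return 1.0 / sum(x / lfl for x, lfl in zip(fractions, lfls))

# 60/40 methane-ethane mixture with typical LFLs of ~5.0 and ~3.0 vol%
print(round(lfl_mixture([0.6, 0.4], [5.0, 3.0]), 2))  # about 3.95 vol%
```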
See also
Flash point
Minimum ignition energy
Stoichiometry
References
Chemical properties
Fire | Lower flammability limit | Chemistry | 267 |
27,937,678 | https://en.wikipedia.org/wiki/Zultanite | Zultanite is a gem variety of the mineral diaspore, mined in the İlbir Mountains of southwest Turkey at an elevation of over 4,000 feet. "Zultanite" is a trade name, equivalent to the trade name "Csarite".
Turkey is the only place where the gem quality material has been found, and at the Ilbir Dağ deposit it has been "formed in open spaces by hydrothermal remobilization of bauxite components". The gem quality material was first discovered in the early 1980s.
Zultanite has a hardness of 6.5 to 7. Depending on its light source, zultanite's color varies between a yellowish green, light gold, and purplish pink. Its color can be pastel green in outdoor light and beige pink in incandescent light.
References
Gemstones
Trademarks | Zultanite | Physics | 177 |
725,742 | https://en.wikipedia.org/wiki/Stora%20Karls%C3%B6 | Stora Karlsö is a small Swedish island in the Baltic Sea, situated west of the island of Gotland and part of Eksta socken.
Environment
Stora Karlsö is mainly a limestone plateau bordered by steep cliffs along the shore, and is mostly covered with alvar. It is a nature reserve, the second oldest in the world after Yellowstone National Park.
Flora and fauna
The island is mostly known for its rich flora and birdlife. There are many juniper bushes and some small groves of deciduous trees. In spring, there is an extraordinary number of orchids, mostly elder-flowered orchid and early purple orchid. There are also several very rare plants for Sweden such as Adonis vernalis, Lactuca quercina (called Karlsösallat in Swedish), hart's-tongue fern and Corydalis gotlandica (the only endemic plant of Gotland).
Stora Karlsö also has large colonies of common guillemots (about 15,700 breeding pairs) and razorbills (12,000 pairs, in 2014–2015). Along with neighbouring Lilla Karlsö, it has been designated an Important Bird Area (IBA) by BirdLife International.
History
There is evidence that Stora Karlsö has been inhabited since the Stone Age. During the Middle Ages there was a marble quarry, which provided the material for many of Gotland's churches. From May to August tour boats run from the village of Klintehamn.
The Stora Karlsö Lighthouse was built in 1887. A house for the lighthouse keeper was added in the 1930s, which resulted in the island getting its first permanent residents in modern times. The lighthouse has been automated since 1974, and there are no permanent residents on the island. The lighthouse and the surrounding buildings are now listed.
Gallery
See also
Lilla Karlsö
Svenska Turistföreningen
References
External links
Gotland
Swedish islands in the Baltic
Lighthouses in Sweden
Biota of Sweden
Geography of Gotland County
Islands of Gotland County
Tourist attractions in Gotland County
Nature reserves in Sweden
Important Bird Areas of Sweden
Important Bird Areas of Baltic islands
1970 establishments in Sweden | Stora Karlsö | Biology | 471 |
17,602,635 | https://en.wikipedia.org/wiki/Suaeda%20australis | Suaeda australis, the austral seablite, is a species of plant in the family Amaranthaceae, native to Australia. It has a spreading habit, with branching occurring from the base. The leaves are up to 40 mm in length and are succulent, linear and flattened. They are light green to purplish-red in colour.
The species occurs on shorelines in coastal or estuarine areas or in salt marshes. It is native across Australia including the states of Queensland, New South Wales, Victoria, Tasmania, South Australia and the south-west of Western Australia.
In irrigated areas, the species is known as a salinity indicator plant and is referred to as redweed.
References
External links
Online Field guide to Common Saltmarsh Plants of Queensland
Suaeda australis occurrence data from Australasian Virtual Herbarium
australis
Caryophyllales of Australia
Halophytes
Flora of New South Wales
Flora of Queensland
Flora of South Australia
Flora of Tasmania
Flora of Victoria (state)
Eudicots of Western Australia | Suaeda australis | Chemistry | 224 |
3,200,942 | https://en.wikipedia.org/wiki/Lichenin | Lichenin, also known as lichenan or moss starch, is a complex glucan occurring in certain species of lichens. It can be extracted from Cetraria islandica (Iceland moss). It has been studied since about 1957.
Structure
Chemically, lichenin is a mixed-linkage glucan, consisting of repeating glucose units linked by β-1,3 and β-1,4 glycosidic bonds.
Uses
It is an important carbohydrate for reindeer and northern flying squirrels, which eat the lichen Bryoria fremontii.
It can be extracted by digesting Iceland moss in a cold, weak solution of carbonate of soda for some time, and then boiling. By this process the lichenin is dissolved and on cooling separates as a colorless jelly. Iodine imparts no color to it.
Other uses of the name
In his 1960 novel Trouble with Lichen, John Wyndham gives the name Lichenin to a biochemical extract of lichen used to extend life expectancy beyond 300 years.
References
Polysaccharides
Lichen products | Lichenin | Chemistry,Biology | 232 |
49,631,066 | https://en.wikipedia.org/wiki/I%C3%B1aki%20Pi%C3%B1uel | Iñaki Piñuel y Zabala (Madrid, 1965) is a Spanish psychologist, essayist, researcher and professor of Organization and Human Resources at the Faculty of Business and Labour Sciences in the University of Alcalá, Madrid. He is an expert in management and human resources and one of the leading European specialists in research and public dissemination on mobbing, or psychological harassment, in the workplace and in education.
He was director of human resources at various companies in the technology sector. He currently works as a psychotherapist specializing in this field and as a consultant and trainer for several agencies, notably the Instituto Nacional de la Seguridad Social (National Institute of Social Security, INSS) and the Consejo General del Poder Judicial (General Council of the Judiciary, CGPJ), on psychological violence at work and in education.
He also holds an Executive MBA from the Instituto de Empresa of Madrid and is director of the "Barómetros Cisneros sobre Acoso laboral y Violencia psicológica en el trabajo y Acoso escolar en el entorno educativo" ("Cisneros Barometers on Mobbing and Psychological Violence at Work and Bullying in the Educational Environment").
He was the author of the first book in Spanish on Mobbing: Mobbing: Cómo sobrevivir al acoso psicológico en el trabajo (Ed. Sal Terrae, 2001).
In 2008 he received the Everis Award on Business Essay for the work: Liderazgo Zero: el liderazgo más allá del poder, la rivalidad y la violencia.
Bibliography
Las 100 claves del Mobbing: Detectar y Salir del Acoso Psicológico en el trabajo. Ed. EOS 2018.
Tratamiento EMDR del Mobbing y el Bullying: una guía para psicoterapeutas. Ed. EOS 2017.
Las 5 Trampas del Amor: Por qué fracasan las relaciones y cómo evitarlo. Ed. La Esfera de los Libros 2017.
Cómo Prevenir el Acoso Escolar: Implantación de protocolos Anti Bullying en los centros escolares, una visión práctica y aplicada. Ed CEU 2017.
Amor Zero: Cómo sobrevivir a los amores psicopáticos. Ed. La Esfera de los Libros. España 2016.
Amor Zero: Cómo sobrevivir a los amores psicopáticos. Ed SB. Buenos Aires, 2015.
Por si acaso te acosan: 100 cosas que debes saber para salir del acoso psicológico en el trabajo. Ed Códice. Buenos Aires, 2013.
Liderazgo Zero: el liderazgo más allá del poder, la rivalidad y la violencia Ed Lid. Madrid, 2009. (Premio EVERIS al mejor ensayo empresarial).
Mobbing, el estado de la cuestión: Todo lo que usted siempre quiso saber sobre el acoso psicológico y nadie le explicó. Ed. Gestión 2000. Barcelona, 2008.
La dimisión interior: del síndrome posvacacional a los riesgos psicosociales en el trabajo. Ed. Pirámide. Madrid, 2008.
Mi jefe es un psicópata: por qué la gente normal se vuelve perversa al alcanzar el poder. Ed. Alienta. Barcelona, 2008.
Mobbing escolar: Violencia y acoso psicológico contra los niños.Ed CEAC.Barcelona, 2007.
Neomanagement: jefes tóxicos y sus víctimas. Ed. Aguilar. Madrid, 2004.
Mobbing: manual de autoayuda. Ed. Aguilar. Madrid, 2003.
Mobbing: cómo sobrevivir al acoso psicológico en el trabajo. Ed. Punto de Lectura. Madrid, 2003.
Mobbing: cómo sobrevivir al acoso psicológico en el trabajo. Ed. Sal Terrae. Santander, 2001.
References
External links
Web page of the Author.
Bibliography in Dialnet.
Biography, articles, interviews.
1965 births
Psychology writers
Spanish psychologists
Academic staff of the University of Alcalá
Living people
Academics and writers on bullying
Harassment and bullying | Iñaki Piñuel | Biology | 953 |
67,548,881 | https://en.wikipedia.org/wiki/Kawaratsuka%20Kiln | The Kawaratsuka kiln site is an archaeological site with the ruins of a Nara to Heian period factory for the production of earthenware, located in what is now the city of Ishioka in Ibaraki Prefecture in the northern Kantō region of Japan. It received protection as a National Historic Site in 2017.
Overview
Kawara, roof tiles made of fired clay, were introduced to Japan from Baekche during the 6th century along with Buddhism. During the 570s, under the reign of Emperor Bidatsu, the king of Baekche sent six people skilled in various aspects of Buddhism to Japan, including a temple architect. Initially, tiled roofs were a sign of great wealth and prestige, used for temples and government buildings. The material had the advantages of great strength and durability, and could be made at locations around the country wherever clay was available.
The Kawaratsuka kiln ruins were discovered during the clearing of an adjacent forest in 1968. An archaeological excavation found a total of 35 kilns across an area 130 meters north-to-south and 80 meters east-to-west. These kilns were made by hollowing out a clay hillside to form anagama-style staged kilns. Subsequent investigations found that one kiln was used for the production of Sue ware, whereas 34 kilns were specialized for roof tile production. An iron-making furnace was also found in the area. From pottery shards found at the site, it is likely that the kilns were the official kilns of Hitachi Province and that tiles manufactured at the site were used in the construction of the provincial capital, Hitachi Kokubun-ji, Hitachi Kokubun-niji and the Baraki temple ruins, all of which date from around the Tenpyō era, or approximately 741 AD. The kilns remained in use through the first half of the 10th century.
The site was designated a Prefectural Historic Site in 1937 and was raised to national status in 2017.
See also
List of Historic Sites of Japan (Ibaraki)
References
External links
Prefectural Board of Education
Ishioka City official site
Ishioka Tourist Information site
History of Ibaraki Prefecture
Ishioka, Ibaraki
Historic Sites of Japan
Hitachi Province
Japanese pottery kiln sites | Kawaratsuka Kiln | Chemistry,Engineering | 462 |
44,308,703 | https://en.wikipedia.org/wiki/Flajolet%E2%80%93Martin%20algorithm | The Flajolet–Martin algorithm is an algorithm for approximating the number of distinct elements in a stream with a single pass and space-consumption logarithmic in the maximal number of possible distinct elements in the stream (the count-distinct problem). The algorithm was introduced by Philippe Flajolet and G. Nigel Martin in their 1984 article "Probabilistic Counting Algorithms for Data Base Applications". It was later refined in "LogLog counting of large cardinalities" by Marianne Durand and Philippe Flajolet, and "HyperLogLog: The analysis of a near-optimal cardinality estimation algorithm" by Philippe Flajolet et al.
In their 2010 article "An optimal algorithm for the distinct elements problem", Daniel M. Kane, Jelani Nelson and David P. Woodruff give an improved algorithm, which uses nearly optimal space and has optimal O(1) update and reporting times.
The algorithm
Assume that we are given a hash function hash(x) that maps input x to integers in the range [0; 2^L − 1], and where the outputs are sufficiently uniformly distributed. Note that the set of integers from 0 to 2^L − 1 corresponds to the set of binary strings of length L. For any non-negative integer y, define bit(y, k) to be the k-th bit in the binary representation of y, such that:
y = Σ_{k≥0} bit(y, k)·2^k.
We then define a function ρ(y) that outputs the position of the least-significant set bit in the binary representation of y, and L if no such set bit can be found as all bits are zero:
ρ(y) = min{k ≥ 0 : bit(y, k) ≠ 0}, with ρ(0) = L.
Note that with the above definition we are using 0-indexing for the positions, starting from the least significant bit. For example, ρ(13) = 0, since the binary representation of 13 is 1101 and its least significant bit is a 1 (0th position), and ρ(8) = 3, since 8 is 1000 in binary and its least significant set bit is at the 3rd position. At this point, note that under the assumption that the output of our hash function is uniformly distributed, the probability of observing a hash output ending with 1 followed by k zeroes is 2^−(k+1), since this corresponds to flipping k heads and then a tail with a fair coin.
Now the Flajolet–Martin algorithm for estimating the cardinality of a multiset M is as follows:
Initialize a bit-vector BITMAP to be of length L and contain all 0s.
For each element x in M:
Calculate the index i = ρ(hash(x)).
Set BITMAP[i] = 1.
Let R denote the smallest index i such that BITMAP[i] = 0.
Estimate the cardinality of M as 2^R/φ, where φ ≈ 0.77351.
The idea is that if n is the number of distinct elements in the multiset M, then BITMAP[0] is accessed approximately n/2 times, BITMAP[1] is accessed approximately n/4 times and so on. Consequently, if i ≫ log2 n, then BITMAP[i] is almost certainly 0, and if i ≪ log2 n, then BITMAP[i] is almost certainly 1. If i ≈ log2 n, then BITMAP[i] can be expected to be either 1 or 0.
The correction factor φ ≈ 0.77351 is found by calculations, which can be found in the original article.
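The procedure above translates almost line-for-line into code. Below is a minimal Python sketch (illustrative only; the choice of SHA-1 as the hash function, the bitmap length L = 32, and the function names are assumptions of this sketch, with φ ≈ 0.77351 as above):

```python
import hashlib

L = 32          # bitmap length; supports cardinalities up to about 2^32
PHI = 0.77351   # Flajolet-Martin correction factor

def fm_hash(x):
    """Map x to an integer in [0, 2^L - 1], assumed roughly uniform."""
    digest = hashlib.sha1(str(x).encode()).digest()
    return int.from_bytes(digest[:4], "big")

def rho(y):
    """Position of the least-significant set bit of y; L if y == 0."""
    if y == 0:
        return L
    return (y & -y).bit_length() - 1   # isolate lowest set bit

def fm_estimate(stream):
    bitmap = [0] * L
    for x in stream:
        bitmap[rho(fm_hash(x))] = 1
    r = bitmap.index(0) if 0 in bitmap else L  # smallest index holding a 0
    return (2 ** r) / PHI

print(fm_estimate(range(10000)))  # rough estimate of 10,000 distinct items
```

Note that a single run can only return values of the form 2^R/φ, which is precisely the coarseness addressed in the next section.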
Improving accuracy
A problem with the Flajolet–Martin algorithm in the above form is that the results vary significantly. A common solution has been to run the algorithm multiple times with different hash functions and combine the results from the different runs. One idea is to take the mean of the results from each hash function, obtaining a single estimate of the cardinality. The problem with this is that averaging is very susceptible to outliers (which are likely here). A different idea is to use the median, which is less prone to be influenced by outliers. The problem with this is that the results can only take the form 2^R/φ, where R is an integer. A common solution is to combine both the mean and the median: create k·l hash functions and split them into k distinct groups (each of size l). Within each group use the mean for aggregating the results, and finally take the median of the k group estimates as the final estimate.
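A short sketch of this mean-then-median combination (hypothetical helper; it assumes a list of single-run estimates has already been produced with independent hash functions, e.g. by a routine like fm_estimate above):

```python
from statistics import mean, median

def combine_estimates(estimates, group_size):
    """Split per-hash-function estimates into groups, average within
    each group, then take the median of the group means; the median
    step makes the final figure robust to outlier runs."""
    groups = [estimates[i:i + group_size]
              for i in range(0, len(estimates), group_size)]
    return median(mean(g) for g in groups)
```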
The 2007 HyperLogLog algorithm splits the multiset into subsets and estimates their cardinalities, then it uses the harmonic mean to combine them into an estimate for the original cardinality.
See also
Streaming algorithm
HyperLogLog
References
Additional sources
Algorithms | Flajolet–Martin algorithm | Mathematics | 795 |
36,935,574 | https://en.wikipedia.org/wiki/Gook%20%28headgear%29 |
A gook was a piece of protective headgear worn by bal maidens (female manual labourers in the mining industries of Cornwall and Devon). The gook was a bonnet which covered the head and projected forward over the face, to protect the wearer's head and face from sunlight and flying debris. Bal maidens often worked outdoors or in very crude surface-level shelters, and the gook also gave protection from extreme weather conditions. By covering the ears, gooks protected the ears from the noisy industrial environment.
While there was some regional variation in style, gooks would generally be tied under the chin and around the neck, and fall loose from the neck over the shoulders to protect the shoulders and upper arms. In bright sunlight, the wearer would sometimes pin the gook across her face, leaving only the eyes exposed. Gooks for use in winter were made of felt or padded cotton with cardboard stiffening to allow the top to project forward over the face, and in summer of cotton. Although gooks were traditionally white in colour, the lightweight summer gooks were sometimes made of bright cotton prints.
In the 19th century bal maidens began to wear straw hats in summer instead of cotton gooks. By the end of the 19th century, these straw bonnets had largely replaced the gook year-round. By this time the Cornish mining industry was in terminal decline, and very few bal maidens remained in employment.
When some bal maidens were re-hired to work in a temporarily expanded mining industry during the First World War (1914–18), traditional clothing was abandoned and gooks were largely replaced by more practical wool or fur hats. Gooks did not die out completely, and records exist of at least some bal maidens continuing to wear the gook until the early 1920s.
In 1921 Dolcoath, the last mine in Cornwall to employ female manual labourers, was closed, and the use of bal maidens ceased. Although some female manual labourers were employed by the mines in the 1940s and early 1950s owing to labour shortages caused by the Second World War, and a very limited number of female workers were employed after the Sex Discrimination Act 1975 ended the policy of recruiting only men for underground work in the few surviving mines, these women wore practical clothing similar to those of male workers. In 1998 Cornwall's last surviving tin mine at South Crofty closed, bringing mining in Devon and Cornwall to an end.
See also
Mining helmet
Notes
References
Bibliography
(1st edition published 2004 by The Hypatia Trust, Penzance as Balmaidens)
Further reading
External links
Bal Maidens & Mining Women Information, publications and resources on bal maidens and other female mineworkers
Bonnets (headgear)
Miners' clothing
Mining in Cornwall
Mining in Devon
Women in England | Gook (headgear) | Engineering | 563 |
4,071,245 | https://en.wikipedia.org/wiki/Liesegang%20rings | Liesegang rings () are a phenomenon seen in many, if not most, chemical systems undergoing a precipitation reaction under certain conditions of concentration and in the absence of convection. Rings are formed when weakly soluble salts are produced from reaction of two soluble substances, one of which is dissolved in a gel medium. The phenomenon is most commonly seen as rings in a Petri dish or bands in a test tube; however, more complex patterns have been observed, such as dislocations of the ring structure in a Petri dish, helices, and "Saturn rings" in a test tube. Despite continuous investigation since rediscovery of the rings in 1896, the mechanism for the formation of Liesegang rings is still unclear.
History
The phenomenon was first noticed in 1855 by the German chemist Friedlieb Ferdinand Runge. He observed them in the course of experiments on the precipitation of reagents in blotting paper. In 1896 the German chemist Raphael E. Liesegang noted the phenomenon when he dropped a solution of silver nitrate onto a thin layer of gel containing potassium dichromate. After a few hours, sharp concentric rings of insoluble silver dichromate formed. It has aroused the curiosity of chemists for many years. When formed in a test tube by diffusing one component from the top, layers or bands of precipitate form, rather than rings.
Silver nitrate–potassium dichromate reaction
The reactions are most usually carried out in test tubes into which a gel is formed that contains a dilute solution of one of the reactants.
If a hot solution of agar gel also containing a dilute solution of potassium dichromate is poured into a test tube, and after the gel solidifies a more concentrated solution of silver nitrate is poured on top of the gel, the silver nitrate will begin to diffuse into the gel. It will then encounter the potassium dichromate and will form a continuous region of precipitate at the top of the tube.
After some hours, the continuous region of precipitation is followed by a clear region with no sensible precipitate, followed by a short region of precipitate further down the tube. This process continues down the tube forming several, up to perhaps a couple dozen, alternating regions of clear gel and precipitate rings.
Some general observations
Over the decades a huge number of precipitation reactions have been used to study the phenomenon, and it seems quite general. Chromates, metal hydroxides, carbonates, and sulfides, formed with lead, copper, silver, mercury and cobalt salts, are sometimes favored by investigators, perhaps because of the pretty, colored precipitates formed.
The gels used are usually gelatin, agar or silicic acid gel.
The concentration ranges over which the rings form in a given gel for a precipitating system can usually be found for any system by a little systematic empirical experimentation in a few hours. Often the concentration of the component in the agar gel should be substantially less concentrated (perhaps an order of magnitude or more) than the one placed on top of the gel.
The first feature usually noted is that the bands which form farther away from the liquid-gel interface are generally farther apart. Some investigators measure this distance and report, in some systems at least, a systematic formula for the positions at which they form. The most frequent observation is that the distance between successive rings is proportional to the distance from the liquid-gel interface. This is by no means universal, however, and sometimes they form at essentially random, irreproducible distances.
Another feature often noted is that the bands themselves do not move with time, but rather form in place and stay there.
For very many systems the precipitate that forms is not the fine coagulant or flocs seen on mixing the two solutions in the absence of the gel, but rather coarse, crystalline dispersions. Sometimes the crystals are well separated from one another, and only a few form in each band.
The precipitate that forms a band is not always a binary insoluble compound, but may be even a pure metal. Water glass of density 1.06 made acidic by sufficient acetic acid to make it gel, with 0.05 N copper sulfate in it, covered by a 1 percent solution of hydroxylamine hydrochloride produces large tetrahedrons of metallic copper in the bands.
It is not possible to make any general statement of the effect of the composition of the gel. A system that forms nicely for one set of components might fail altogether and require a different set of conditions if the gel is switched, say, from agar to gelatin. The essential feature of the gel required is that thermal convection in the tube be prevented altogether.
Most systems will form rings in the absence of the gelling system if the experiment is carried out in a capillary, where convection does not disturb their formation. In fact, the system does not have to even be liquid. A tube plugged with cotton with a little ammonium hydroxide at one end, and a solution of hydrochloric acid at the other will show rings of deposited ammonium chloride where the two gases meet, if the conditions are chosen correctly. Ring formation has also been observed in solid glasses containing a reducible species. For example, bands of silver have been generated by immersing silicate glass in molten AgNO3 for extended periods of time (Pask and Parmelee, 1943).
Theories
Several different theories have been proposed to explain the formation of Liesegang rings. The chemist Wilhelm Ostwald in 1897 proposed a theory based on the idea that a precipitate is not formed immediately upon the concentration of the ions exceeding a solubility product, but that a region of supersaturation occurs first. When the limit of stability of the supersaturation is reached, the precipitate forms, and a clear region forms ahead of the diffusion front because material at concentrations below the supersaturation limit diffuses toward, and deposits onto, the precipitate already formed. This was argued to be a critically flawed theory when it was shown that seeding the gel with a colloidal dispersion of the precipitate (which would arguably prevent any significant region of supersaturation) did not prevent the formation of the rings.
Another theory focuses on the adsorption of one or the other of the precipitating ions onto the colloidal particles of the precipitate which forms. If the particles are small, the absorption is large, diffusion is "hindered" and this somehow results in the formation of the rings.
Still another proposal, the "coagulation theory" states that the precipitate first forms as a fine colloidal dispersion, which then undergoes coagulation by an excess of the diffusing electrolyte and this somehow results in the formation of the rings.
Some more recent theories invoke an auto-catalytic step in the reaction that results in the formation of the precipitate. This would seem to be at odds with the notion that auto-catalytic reactions are actually quite rare in nature.
The solution of the diffusion equation with proper boundary conditions, together with a set of good assumptions on supersaturation, adsorption, auto-catalysis, and coagulation, alone or in some combination, has apparently not yet been carried out in a way that makes a quantitative comparison with experiment possible. However, a theoretical approach to the Matalon–Packter law, predicting the position of the precipitate bands when the experiments are performed in a test tube, has been provided.
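For reference, the spacing law and the Matalon–Packter law mentioned above are usually written as follows (standard forms from the literature, supplied here because the article's own formulas did not survive extraction):

\[
\frac{x_{n+1}}{x_n} \longrightarrow 1 + p \quad (n \to \infty),
\qquad
1 + p = F(b_0) + \frac{G(b_0)}{a_0},
\]

where xn is the position of the nth band measured from the liquid-gel interface, p > 0 is the spacing coefficient, a0 and b0 are the initial concentrations of the outer and inner electrolytes, and F and G are decreasing functions of b0.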
A general theory based on Ostwald's 1897 theory has recently been proposed. It can account for several important features sometimes seen, such as revert and helical banding.
References
Liesegang, R. E.,"Ueber einige Eigenschaften von Gallerten", Naturwissenschaftliche Wochenschrift, Vol. 11, Nr. 30, 353-362 (1896).
J.A. Pask and C.W. Parmelee, "Study of Diffusion in Glass," Journal of the American Ceramic Society, Vol. 26, Nr. 8, 267-277 (1943).
K. H. Stern, The Liesegang Phenomenon Chem. Rev. 54, 79-99 (1954).
Ernest S. Hedges, Liesegang Rings and other Periodic Structures Chapman and Hall (1932).
External links
Liesegang rings
Tout ce que la nature ne peut pas faire VI : Liesegang Rings
A Thesis having a summary on reaction-diffusion processes and Liesegang banding (pp. 1-36)
Chemical reactions
Diffusion
Petrology
Physical chemistry
Thermodynamics
Articles containing video clips | Liesegang rings | Physics,Chemistry,Mathematics | 1,822 |
2,850,124 | https://en.wikipedia.org/wiki/Fleming%27s%20left-hand%20rule%20for%20motors | Fleming's left-hand rule for electric motors is one of a pair of visual mnemonics, the other being Fleming's right-hand rule for generators. They were originated by John Ambrose Fleming, in the late 19th century, as a simple way of working out the direction of motion in an electric motor, or the direction of electric current in an electric generator.
When current flows through a conducting wire, and an external magnetic field is applied across that flow, the conducting wire experiences a force perpendicular both to that field and to the direction of the current flow (i.e. they are mutually perpendicular). A left hand can be held, as shown in the illustration, so as to represent three mutually orthogonal axes on the thumb, fore finger and middle finger. Each finger is then assigned to a quantity (mechanical force, magnetic field and electric current). The right and left hand are used for generators and motors respectively.
The direction of the electric current is that of conventional current: from positive to negative.
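The rule encodes the vector cross product in the force law F = I L × B for a conductor of length L carrying current I in field B. A minimal Python sketch (the particular vectors are arbitrary choices for illustration) shows the force direction falling out of the cross product:

```python
def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Conventional current along +y (second finger), field along +x (first
# finger): the force F ~ I L x B comes out along -z, which is exactly
# where the left-hand thumb points.
current_dir = (0.0, 1.0, 0.0)
field_dir = (1.0, 0.0, 0.0)
print(cross(current_dir, field_dir))  # (0.0, 0.0, -1.0)
```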
First variant
The Thumb represents the direction of the Motion of the Conductor.
The Fore finger represents the direction of the magnetic Field.
The Centre finger represents the direction of the Current.
Second variant
The Thumb represents the direction of Motion resulting from the force on the conductor
The First finger represents the direction of the magnetic Field
The Second finger represents the direction of the Current.
Third variant
Van de Graaff's translation of Fleming's rules is the FBI rule, easily remembered because these are the initials of the Federal Bureau of Investigation.
Fourth variant
The F (Thumb) represents the direction of Force of the conductor
The B (Forefinger) represents the direction of the Magnetic field
The I (Centre finger) represents the direction of the Current.
This uses the conventional symbolic parameters of F (for Lorentz force), B (for magnetic flux density) and I (for electric current), attributing them in that order (FBI) respectively to the thumb, first finger and second finger.
The thumb is the force, F
The first finger is the magnetic flux density, B
The second finger is the electric current, I.
Of course, if the mnemonic is taught (and remembered) with a different arrangement of the parameters to the fingers, it could end up as a mnemonic that also reverses the roles of the two hands (instead of the standard left hand for motors, right hand for generators). These variants are catalogued more fully on the FBI mnemonics page.
Fifth variant
(Fire the field, feel the force and kill the current)
This approach to remembering which finger represents which quantity uses some actions. First of all you need to point your fingers like a pretend gun, with the index finger acting as the barrel of the gun and the thumb acting as the hammer. Then go through the following actions:
"Fire the field" out through your index finger
"Feel the force" of the gun recoil up through your thumb
Finally you display your middle finger as you "kill the current"
Distinction between the right-hand and left-hand rule
Fleming's left-hand rule is used for electric motors, while Fleming's right-hand rule is used for electric generators. In other words, Fleming's left hand rule should be used if one were to create motion, while Fleming's right hand rule should be used if one were to create electricity.
Different hands need to be used for motors and generators because of the differences between cause and effect.
In an electric motor, the electric current and magnetic field exist (which are the causes), and they lead to the force that creates the motion (which is the effect), and so the left-hand rule is used. In an electric generator, the motion and magnetic field exist (causes), and they lead to the creation of the electric current (effect), and so the right-hand rule is used.
To illustrate why, consider that many types of electric motors can also be used as electric generators. A vehicle powered by such a motor can be accelerated up to high speed by connecting the motor to a fully charged battery. If the motor is then disconnected from the fully charged battery, and connected instead to a completely flat battery, the vehicle will decelerate. The motor will act as a generator and convert the vehicle's kinetic energy back to electrical energy, which is then stored in the battery. Since neither the direction of motion nor the direction of the magnetic field (inside the motor/generator) has changed, the direction of the electric current in the motor/generator has reversed. This is consistent with Lenz's law: the induced (generator) current opposes the driving (motor) current, and energy flows from the more energetic source to the less energetic one.
Physical basis for the rules
When electrons, or any charged particles, flow in the same direction (for example, as an electric current in an electrical conductor, such as a metal wire) they generate a cylindrical magnetic field that wraps round the conductor (as discovered by Hans Christian Ørsted).
The direction of the induced magnetic field can be remembered by Maxwell's corkscrew rule. That is, if the conventional current is flowing away from the viewer, the magnetic field runs clockwise round the conductor, in the same direction that a corkscrew would have to turn in order to move away from the viewer. The direction of the induced magnetic field is also sometimes remembered by the right-hand grip rule, as depicted in the illustration, with the thumb showing the direction of the conventional current, and the fingers showing the direction of the magnetic field. The existence of this magnetic field can be confirmed by placing magnetic compasses at various points round the periphery of an electrical conductor that is carrying a relatively large electric current.
(Illustration caption: for the right-hand generator rule, the thumb shows the direction of motion, the first finger the field lines, and the second finger the induced current.)
If an external magnetic field is applied horizontally, so that it crosses the flow of electrons (in the wire conductor, or in the electron beam), the two magnetic fields will interact. Michael Faraday introduced a visual analogy for this, in the form of imaginary magnetic lines of force: those in the conductor form concentric circles round the conductor; those in the externally applied magnetic field run in parallel lines. If those on one side of the conductor are running (from the north to south magnetic pole) in the opposite direction to those surrounding the conductor, they will be deflected so that they pass on the other side of the conductor (because magnetic lines of force cannot cross or run contrary to each other). Consequently, there will be a large number of magnetic field lines in a small space on that side of the conductor, and a dearth of them on the original side of the conductor.

Since the magnetic field lines are no longer straight lines, but curve to run around the electrical conductor, they are under tension (like stretched elastic bands), with energy bound up in the magnetic field. Since this energetic field is now mostly unopposed, its build-up or expulsion in one direction creates, in a manner analogous to Newton's third law of motion, a force in the opposite direction. Since there is only one moveable object in this system (the electrical conductor) for this force to work upon, the net effect is a physical force working to expel the electrical conductor out of the externally applied magnetic field, in the direction opposite to that in which the magnetic flux is being redirected. In this case (motors), if the conductor is carrying conventional current upwards, and the external magnetic field is moving away from the viewer, the physical force will work to push the conductor to the left. This is the reason for torque in an electric motor. (The electric motor is then constructed so that the expulsion of the conductor out of one magnetic field causes it to be placed inside the next magnetic field, and for this switching to be continued indefinitely.)
Faraday's law states that the induced electromotive force in a conductor is directly proportional to the rate of change of the magnetic flux in the conductor.
Pop Culture
In the 2013 video game Metal Gear Rising: Revengeance, the character Monsoon mentions and uses Fleming's Left-Hand Rule multiple times during his fight.
See also
Fleming's right-hand rule for generators
References
External links
Diagrams at magnet.fsu.edu
Rules
Science mnemonics
Electromagnetism
Electric motors | Fleming's left-hand rule for motors | Physics,Technology,Engineering | 1,756 |
15,648,814 | https://en.wikipedia.org/wiki/Quinolinic%20acid | Quinolinic acid (abbreviated QUIN or QA), also known as pyridine-2,3-dicarboxylic acid, is a dicarboxylic acid with a pyridine backbone. It is a colorless solid. It is the biosynthetic precursor to niacin.
Quinolinic acid is a downstream product of the kynurenine pathway, which metabolizes the amino acid tryptophan. It acts as an NMDA receptor agonist.
Quinolinic acid has a potent neurotoxic effect. Studies have demonstrated that quinolinic acid may be involved in many psychiatric disorders, neurodegenerative processes in the brain, as well as other disorders. Within the brain, quinolinic acid is only produced by activated microglia and macrophages.
History
In 1949 L. Henderson was one of the earliest to describe quinolinic acid. Lapin followed up this research by demonstrating that quinolinic acid could induce convulsions when injected into mice brain ventricles. However, it was not until 1981 that Stone and Perkins showed that quinolinic acid activates the N-methyl-D-aspartate receptor (NMDAR). After this, Schwarcz demonstrated that elevated quinolinic acid levels could lead to axonal neurodegeneration.
Synthesis
One of the earliest reported syntheses of this quinolinic acid was by Zdenko Hans Skraup, who found that methyl-substituted quinolines could be oxidized to quinolinic acid by potassium permanganate.
This compound is commercially available. It is generally obtained by the oxidation of quinoline. Oxidants such as ozone, hydrogen peroxide, and potassium permanganate have been used. Electrolysis is able to perform the transformation as well.
Quinolinic acid may undergo further decarboxylation to nicotinic acid (niacin):
C7H5NO4 → C6H5NO2 + CO2
Biosynthesis
From aspartate
Oxidation of aspartate by the enzyme aspartate oxidase gives iminosuccinate, containing the two carboxylic acid groups that are found in quinolinic acid. Condensation of iminosuccinate with glyceraldehyde-3-phosphate, mediated by quinolinate synthase, affords quinolinic acid.
Catabolism of tryptophan
Quinolinic acid is a byproduct of the kynurenine pathway, which is responsible for catabolism of tryptophan in mammals. This pathway is important for its production of the coenzyme nicotinamide adenine dinucleotide (NAD+) and produces several neuroactive intermediates including quinolinic acid, kynurenine (KYN), kynurenic acid (KYNA), 3-hydroxykynurenine (3-HK), and 3-hydroxyanthranilic acid (3-HANA). Quinolinic acid's neuroactive and excitatory properties are a result of NMDA receptor agonism in the brain. It also acts as a neurotoxin, gliotoxin, proinflammatory mediator, and pro-oxidant molecule.
While quinolinic acid cannot pass the blood–brain barrier (BBB), kynurenine, tryptophan and 3-hydroxykynurenine do, and subsequently act as precursors to the production of quinolinic acid in the brain. The quinolinic acid produced in microglia is then released and stimulates NMDA receptors, resulting in excitatory neurotoxicity. While astrocytes do not produce quinolinic acid directly, they do produce KYNA, which when released from the astrocytes can be taken in by microglia that can in turn increase quinolinic acid production.
Microglia and macrophages produce the vast majority of quinolinic acid present in the body. This production increases during an immune response. It is suspected that this is a result of activation of indoleamine dioxygenases (to be specific, IDO-1 and IDO-2) as well as tryptophan 2,3-dioxygenase (TDO) stimulation by inflammatory cytokines (mainly IFN-gamma, but also IFN-beta and IFN-alpha).
IDO-1, IDO-2 and TDO are present in microglia and macrophages. Under inflammatory conditions and conditions of T cell activation, leukocytes are retained in the brain by cytokine and chemokine production, which can lead to the breakdown of the BBB, thus increasing the quinolinic acid that enters the brain. Furthermore, quinolinic acid has been shown to play a role in destabilization of the cytoskeleton within astrocytes and brain endothelial cells, contributing to the degradation of the BBB, which results in higher concentrations of quinolinic acid in the brain.
Toxicity
Quinolinic acid is an excitotoxin in the CNS. It reaches pathological levels in response to inflammation in the brain, which activates resident microglia and macrophages. High levels of quinolinic acid can lead to hindered neuronal function or even apoptotic death. Quinolinic acid produces its toxic effect through several mechanisms, primarily as its function as an NMDA receptor agonist, which triggers a chain of deleterious effects, but also through lipid peroxidation, and cytoskeletal destabilization. The gliotoxic effects of quinolinic acid further amplify the inflammatory response. Quinolinic acid affects neurons located mainly in the hippocampus, striatum, and neocortex, due to the selectivity toward quinolinic acid by the specific NMDA receptors residing in those regions.
When inflammation occurs, quinolinic acid is produced in excessive levels through the kynurenine pathway. This leads to over excitation of the NMDA receptor, which results in an influx of Ca2+ into the neuron. High levels of Ca2+ in the neuron trigger an activation of destructive enzymatic pathways including protein kinases, phospholipases, NO synthase, and proteases. These enzymes will degenerate crucial proteins in the cell and increase NO levels, leading to an apoptotic response by the cell, which results in cell death.
Under normal conditions, astrocytes neighboring neurons maintain the glutamate–glutamine cycle, in which glutamate is taken up from the synapse into the pre-synaptic cell to be recycled, keeping glutamate from accumulating to lethal levels inside the synapse. At high concentrations, quinolinic acid inhibits glutamine synthetase, a critical enzyme in the glutamate–glutamine cycle. It can also promote glutamate release and block its reuptake by astrocytes. All three of these actions result in increased levels of glutamate activity that could be neurotoxic.
This leads to a loss of function of the cycle and an accumulation of glutamate. This glutamate further stimulates the NMDA receptors, acting synergistically with quinolinic acid to increase its neurotoxic effect by raising glutamate levels as well as inhibiting glutamate uptake. In this way, quinolinic acid self-potentiates its own toxicity. Furthermore, quinolinic acid causes changes in the biochemistry and structure of the astrocytes themselves, resulting in an apoptotic response. A loss of astrocytes has a pro-inflammatory effect, further increasing the initial inflammatory response that initiates quinolinic acid production.
Quinolinic acid can also exert neurotoxicity through lipid peroxidation, as a result of its pro-oxidant properties. Quinolinic acid can interact with Fe(II) to form a complex that induces the formation of reactive oxygen and nitrogen species (ROS/RNS), notably the hydroxyl radical •OH. This free radical causes oxidative stress by further increasing glutamate release and inhibiting its reuptake, and results in the breakdown of DNA in addition to lipid peroxidation.
Quinolinic acid has also been noted to increase phosphorylation of proteins involved in cell structure, leading to destabilization of the cytoskeleton.
Clinical implications
Psychiatric disorders
Mood disorders
The prefrontal cortices in the post-mortem brains of patients with major depression and bipolar depression contain increased quinolinic acid immunoreactivity compared to the brains of patients never having had depression. The fact that NMDA receptor antagonists possess antidepressant properties suggests that increased levels of quinolinic acid in patients with depression may overactivate NMDA receptors. By inducing increased levels of quinolinic acid in the cerebrospinal fluid with interferon-α, researchers have demonstrated that increased quinolinic acid levels correlate with increased depressive symptoms.
Increased levels of quinolinic acid might contribute to the apoptosis of astrocytes and certain neurons, resulting in decreased synthesis of neurotrophic factors. With less neurotrophic factors, the astrocyte-microglia-neuronal network is weaker and thus is more likely to be affected by environmental factors such as stress. In addition, increased levels of quinolinic acid could play a role in impairment of the glial-neuronal network, which could be associated with the recurrent and chronic nature of depression.
Furthermore, studies have shown that unpredictable chronic mild stress (UCMS) can increase quinolinic acid metabolism in the amygdala and striatum and reduce quinolinic acid pathway activity in the cingulate cortex. Experiments with mice demonstrate how quinolinic acid can affect behavior and act as an endogenous anxiogen. For instance, when quinolinic acid levels are increased, mice socialize and groom for shorter periods of time. There is also evidence that increased concentrations of quinolinic acid can play a role in adolescent depression.
Schizophrenia
Quinolinic acid may be involved in schizophrenia; however, no research has examined the specific effects of quinolinic acid in schizophrenia. Many studies show that kynurenic acid (KYNA) plays a role in the positive symptoms of schizophrenia, and some research suggests that 3-hydroxykynurenine (OHK) plays a role in the disease as well. Because quinolinic acid is strongly associated with KYNA and OHK, it too may play a role in schizophrenia.
Conditions related to neuronal death
The cytotoxic effects of quinolinic acid elaborated upon in the toxicity section amplify cell death in neurodegenerative conditions.
Amyotrophic lateral sclerosis (ALS)
Quinolinic acid may contribute to the causes of amyotrophic lateral sclerosis (ALS). Researchers have found elevated levels of quinolinic acid in the cerebrospinal fluid (CSF), motor cortex, and spinal cord of ALS patients. These increased concentrations of quinolinic acid could lead to neurotoxicity. In addition, quinolinic acid is associated with overstimulating NMDA receptors on motor neurons. Studies in rats have demonstrated that quinolinic acid leads to depolarization of spinal motor neurons by interacting with the NMDA receptors on those cells. Also, quinolinic acid plays a role in mitochondrial dysfunction in neurons. All of these effects could contribute to ALS symptoms.
Alzheimer's disease
Researchers have found a correlation between quinolinic acid and Alzheimer's disease. For example, studies have found higher neuronal quinolinic acid levels in the post-mortem brains of Alzheimer's disease patients, and have shown that quinolinic acid can associate with tau protein. Furthermore, researchers have demonstrated that quinolinic acid increases tau phosphorylation in vitro in human fetal neurons and induces ten neuronal genes, including some known to correlate with Alzheimer's disease. In immunoreactivity studies, researchers have found that quinolinic acid immunoreactivity is strongest in glial cells located close to amyloid plaques, and that there is immunoreactivity with neurofibrillary tangles.
Brain ischemia
Brain ischemia is characterized by insufficient blood flow to the brain. Studies with ischemic gerbils indicate that, after a delay, levels of quinolinic acid significantly increase, which correlates with increased neuronal damage. In addition, researchers have found that, after transient global ischemia, there are microglia containing quinolinic acid within the brain. Following cerebral ischemia, delayed neuronal death may occur in part because of central microglia and macrophages, which possess and secrete quinolinic acid. This delayed neurodegeneration could be associated with chronic brain damage that follows a stroke.
Human immunodeficiency virus (HIV) and Acquired immunodeficiency syndrome (AIDS)
Studies have found a correlation between levels of quinolinic acid in cerebrospinal fluid (CSF) and the severity of HIV-associated neurocognitive disorder (HAND). About 20% of HIV patients have this disorder. Concentrations of quinolinic acid in the CSF are associated with different stages of HAND. For example, raised levels of quinolinic acid after infection correlate with perceptual-motor slowing in patients. In later stages of HIV, increased concentrations of quinolinic acid in the CSF of HAND patients correlate with HIV encephalitis and cerebral atrophy.
Quinolinic acid has also been found in HAND patients' brains. In fact, the amount of quinolinic acid found in the brain of HAND patients can be up to 300 times greater than that found in the CSF. Neurons exposed to quinolinic acid for long periods of time can develop cytoskeletal abnormalities, vacuolization, and cell death. HAND patients' brains contain many of these defects. Furthermore, studies in rats have demonstrated that quinolinic acid can lead to neuronal death in brains structures that are affected by HAND, including the striatum, hippocampus, the substantia nigra, and non-limbic cortex.
Levels of quinolinic acid in the CSF of AIDS patients with AIDS dementia can be up to twenty times higher than normal. Similar to HIV patients, this increased quinolinic acid concentration correlates with cognitive and motor dysfunction. When patients were treated with zidovudine to decrease quinolinic acid levels, the degree of neurological improvement was related to the decrease in quinolinic acid.
Huntington's disease
In the initial stages of Huntington's disease, patients have substantially increased quinolinic acid levels, in particular in the neostriatum and cortex, the areas of the brain that suffer the most damage at these stages. The increase in quinolinic acid correlates with the early activation of microglia and increased cerebral 3-hydroxykynurenine (3-HK) levels. Furthermore, these increased levels of quinolinic acid are great enough to produce excitotoxic neuronal damage. Studies have demonstrated that activation of NMDA receptors by quinolinic acid leads to neuronal dysfunction and death of striatal GABAergic medium spiny neurons (MSN).
Researchers utilize quinolinic acid in order to study Huntington's disease in many model organisms. Because injection of quinolinic acid into the striatum of rodents induces electrophysiological, neuropathological, and behavioral changes similar to those found in Huntington's disease, this is the most common method researchers use to produce a Huntington's disease phenotype. Neurological changes produced by quinolinic acid injections include altered levels of glutamate, GABA, and other amino acids. Lesions in the pallidum can suppress effects of quinolinic acid in monkeys injected with quinolinic acid into their striatum. In humans, such lesions can also diminish some of the effects of Huntington's disease and Parkinson's disease.
Parkinson's disease
Quinolinic acid neurotoxicity is thought to play a role in Parkinson's disease. Studies show that quinolinic acid is involved in the degeneration of the dopaminergic neurons in the substantia nigra (SN) of Parkinson's disease patients. SN degeneration is one of the key characteristics of Parkinson's disease. Microglia associated with dopaminergic cells in the SN produce quinolinic acid at this location when scientists induce Parkinson's disease symptoms in macaques. Quinolinic acid levels are too high at these sites to be controlled by KYNA, causing neurotoxicity to occur.
Other
Quinolinic acid levels are increased in the brains of children with a range of bacterial infections of the central nervous system (CNS), of poliovirus patients, and of patients with Lyme disease involving the CNS. In addition, raised quinolinic acid levels have been found in patients with traumatic CNS injury, cognitive decline with ageing, hyperammonaemia, hypoglycaemia, and systemic lupus erythematosus. Also, people with malaria and patients with olivopontocerebellar atrophy have been found to have raised quinolinic acid metabolism.
Treatment focus
Reduction of the excitotoxic effects of quinolinic acid is the subject of ongoing research. NMDA receptor antagonists have been shown to protect motor neurons from excitotoxicity resulting from quinolinic acid production. Kynurenic acid, another product of the kynurenine pathway, acts as an NMDA receptor antagonist.
Kynurenic acid thus acts as a neuroprotectant, by reducing the dangerous over-activation of the NMDA receptors. Manipulation of the kynurenine pathway away from quinolinic acid and toward kynurenic acid is therefore a major therapeutic focus. Nicotinylalanine has been shown to be an inhibitor of kynurenine hydroxylase, which results in a decreased production of quinolinic acid, thus favoring kynurenic acid production. This change in balance has the potential to reduce hyperexcitability, and thus excitotoxic damage produced from elevated levels of quinolinic acid.
Therapeutic efforts are also focusing on antioxidants, which have been shown to provide protection against the pro-oxidant properties of quinolinic acid.
Norharmane suppresses the production of quinolinic acid, 3-hydroxykynurenine and nitric oxide synthase, thereby acting as a neuroprotectant. Natural phenols such as catechin hydrate, curcumin, and epigallocatechin gallate reduce the neurotoxicity of quinolinic acid, via anti-oxidant and possibly calcium influx mechanisms. COX-2 inhibitors, such as licofelone have also demonstrated protective properties against the neurotoxic effects of quinolinic acid. COX-2 is upregulated in many neurotoxic disorders and is associated with increased ROS production. Inhibitors have demonstrated some evidence of efficacy in mental health disorders such as major depressive disorder, schizophrenia, and Huntington's disease.
See also
Homoquinolinic acid
References
Pyridines
Dicarboxylic acids
Aromatic acids
NMDA receptor agonists
Neurotransmitters
Neurotoxins
Enones | Quinolinic acid | Chemistry | 4,253 |
42,543,708 | https://en.wikipedia.org/wiki/Fill%20factor%20%28image%20sensor%29 | The fill factor of an image sensor array is the ratio of a pixel's light sensitive area to its total area.
For pixels without microlenses, the fill factor is the ratio of photodiode area to total pixel area,
but the use of microlenses increases the effective fill factor, often to nearly 100%, by converging light from the whole pixel area into the photodiode.
The fill factor is also reduced when additional memory is added beside each pixel, for example to achieve a global shutter on CMOS sensors.
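A minimal formulation of the ratios described above, with symbols chosen here for illustration (they are not from the article):

```latex
% fill factor of a pixel without a microlens
\mathrm{FF} = \frac{A_{\mathrm{photodiode}}}{A_{\mathrm{pixel}}},
\qquad 0 < \mathrm{FF} \le 1 ,
% an ideal microlens converges light from the whole pixel area,
% so the effective fill factor approaches unity
\qquad \mathrm{FF}_{\mathrm{eff}} \approx 1 .
```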
References
Image sensors
Ratios | Fill factor (image sensor) | Mathematics | 119 |
14,199,008 | https://en.wikipedia.org/wiki/Emotion%20Markup%20Language | An Emotion Markup Language (EML or EmotionML) was first defined by the W3C Emotion Incubator Group (EmoXG) as a general-purpose emotion annotation and representation language, which should be usable in a large variety of technological contexts where emotions need to be represented. Emotion-oriented computing (or "affective computing") is gaining importance as interactive technological systems become more sophisticated. Representing the emotional states of a user or the emotional states to be simulated by a user interface requires a suitable representation format; in this case a markup language is used.
EmotionML version 1.0 was published by the group in May 2014.
History
In 2006, a first W3C Incubator Group, the Emotion Incubator Group (EmoXG), was set up "to investigate a language to represent the emotional states of users and the emotional states simulated by user interfaces" with the final Report published on 10 July 2007.
In 2007, the Emotion Markup Language Incubator Group (EmotionML XG) was set up as a follow-up to the Emotion Incubator Group, "to propose a specification draft for an Emotion Markup Language, to document it in a way accessible to non-experts, and to illustrate its use in conjunction with a number of existing markups." The final report of the Emotion Markup Language Incubator Group, Elements of an EmotionML 1.0, was published on 20 November 2008.
The work was then continued in 2009 in the frame of the W3C's Multimodal Interaction Activity, with the First Public Working Draft of "Emotion Markup Language (EmotionML) 1.0" being published on 29 October 2009. The Last Call Working Draft of "Emotion Markup Language 1.0" was published on 7 April 2011. The Last Call Working Draft addressed all open issues that arose from community feedback on the First Public Working Draft, as well as results of a workshop held in Paris in October 2010. Along with the Last Call Working Draft, a list of vocabularies for EmotionML has been published to aid developers using common vocabularies for annotating or representing emotions.
Annual draft updates were published until the 1.0 version was finished in 2014.
Reasons for defining an emotion markup language
A standard for an emotion markup language would be useful for the following purposes:
To enhance computer-mediated human-human or human-machine communication. Emotions are a basic part of human communication and should therefore be taken into account, e.g. in emotional chat systems or empathic voice boxes. This involves the specification, analysis and display of emotion-related states.
To enhance systems' processing efficiency. Emotion and intelligence are strongly interconnected. The modeling of human emotions in computer processing can help to build more efficient systems, e.g. using emotional models for time-critical decision enforcement.
To allow the analysis of non-verbal behavior, emotions, and mental states, which can be provided via web services to enable data collection, analysis, and reporting.
Concrete examples of existing technology that could apply EmotionML include:
Opinion mining / sentiment analysis in Web 2.0, to automatically track customer's attitude regarding a product across blogs;
Affective monitoring, such as ambient assisted living applications, fear detection for surveillance purposes, or using wearable sensors to test customer satisfaction;
Wellness technologies that provide assistance according to a person's emotional state with the goal to improve the person's well-being;
Character design and control for games and virtual worlds;
Building web services to capture, analyse, and report data on the non-verbal behavior, emotions and mental states of an individual or group across the internet using standard web technologies such as HTML5 and JSON.
Social robots, such as guide robots engaging with visitors;
Expressive speech synthesis, generating synthetic speech with different emotions, such as happy or sad, friendly or apologetic; expressive synthetic speech would for example make more information available to blind and partially sighted people, and enrich their experience of the content;
Emotion recognition (e.g., for spotting angry customers in speech dialog systems, to improve computer games or e-Learning applications);
Support for people with disabilities, such as educational programs for people with autism. EmotionML can be used to make the emotional intent of content explicit. This would enable people with learning disabilities (such as Asperger syndrome) to realise the emotional context of the content;
EmotionML can be used for media transcripts and captions. Where emotions are marked up to help deaf or hearing impaired people who cannot hear the soundtrack, more information is made available to enrich their experience of the content.
The Emotion Incubator Group has listed 39 individual use cases for an Emotion markup language.
A standardised way to mark up the data needed by such "emotion-oriented systems" has the potential to boost development primarily because data that was annotated in a standardised way can be interchanged between systems more easily, thereby simplifying a market for emotional databases, and the standard can be used to ease a market of providers for sub-modules of emotion processing systems, e.g. a web service for the recognition of emotion from text, speech or multi-modal input.
The challenge of defining a generally usable emotion markup language
Any attempt to standardize the description of emotions using a finite set of fixed descriptors is doomed to failure, as there is no consensus on the number of relevant emotions, on the names that should be given to them, or on how else best to describe them. For example, the difference between ":)" and "(:" is small, but a standardized markup with a fixed descriptor set would make one of them invalid. Even more basically, the list of emotion-related states that should be distinguished varies depending on the application domain and the aspect of emotions to be focused on. Basically, the vocabulary needed depends on the context of use.
On the other hand, the basic structure of concepts is less controversial: it is generally agreed that emotions involve triggers, appraisals, feelings, expressive behavior including physiological changes, and action tendencies; emotions in their entirety can be described in terms of categories or a small number of dimensions; emotions have an intensity, and so on. For details, see the Scientific Descriptions of Emotions in the Final Report of the Emotion Incubator Group.
Given this lack of agreement on descriptors in the field, the only practical way of defining an emotion markup language is the definition of possible structural elements and to allow users to "plug in" vocabularies that they consider appropriate for their work.
An additional challenge lies in the aim to provide a markup language that is generally usable. The requirements that arise from different use cases are rather different. Whereas manual annotation tends to require all the fine-grained distinctions considered in the scientific literature, automatic recognition systems can usually distinguish only a very small number of different states and affective avatars need yet another level of detail for expressing emotions in an appropriate way.
For the reasons outlined here, it is clear that there is an inevitable tension between flexibility and interoperability, which need to be weighed in the formulation of an EmotionML. The guiding principle in the following specification has been to provide a choice only where it is needed, and to propose reasonable default options for every choice.
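As a concrete illustration of such a "plugged-in" vocabulary, the following sketch builds a minimal EmotionML document with Python's standard library. The namespace and the "big6" category-set URL follow the published W3C drafts, but treat the exact values here as illustrative rather than normative:

```python
# A minimal EmotionML 1.0 document built with Python's standard library.
# The namespace and category-set URL below follow the W3C drafts discussed
# above; treat the exact values as illustrative, not normative.
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/2009/10/emotionml"          # EmotionML namespace (assumed)
BIG6 = "http://www.w3.org/TR/emotion-voc/xml#big6"  # example category vocabulary

ET.register_namespace("", NS)  # serialize without a namespace prefix

root = ET.Element(f"{{{NS}}}emotionml", {"category-set": BIG6})
emotion = ET.SubElement(root, f"{{{NS}}}emotion")
# One category descriptor from the plugged-in vocabulary, with a confidence.
ET.SubElement(emotion, f"{{{NS}}}category", {"name": "happiness", "confidence": "0.8"})

print(ET.tostring(root, encoding="unicode"))
```

Running this prints a single emotionml element containing one emotion annotated with the category "happiness" and a confidence of 0.8; a different application could swap in another category-set URL without changing the document structure.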
Applications and web services benefiting from an emotion markup language
There are a range of existing projects and applications for which an emotion markup language will enable the building of web services to capture data on individuals' non-verbal behavior, mental states, and emotions, and allow results to be reported and rendered in a standardized format using standard web technologies such as JSON and HTML5. One such project is measuring affect data across the Internet using EyesWeb.
See also
Affect display
Autism friendly
Human Markup Language
References
Markup languages
XML-based standards
Affective computing | Emotion Markup Language | Technology | 1,601 |
42,793,721 | https://en.wikipedia.org/wiki/Sodium%20cyanate | Sodium cyanate is the inorganic compound with the formula NaOCN. A white solid, it is the sodium salt of the cyanate anion.
Structure
The anion is described by two resonance structures, with the negative charge residing on oxygen or on nitrogen: ⁻O–C≡N ↔ O=C=N⁻.
The salt adopts a body centered rhombohedral crystal lattice structure (trigonal crystal system) at room temperature.
Preparation
Sodium cyanate is prepared industrially by the reaction of urea with sodium carbonate at elevated temperature.
2OC(NH2)2 + Na2CO3 → 2Na(NCO) + CO2 + 2NH3 + H2O
Sodium allophanate, H2NC(O)NHCO2Na, is observed as an intermediate.
It can also be prepared in the laboratory by oxidation of a cyanide in aqueous solution by a mild oxidizing agent such as lead oxide.
Uses and reactions
The main use of sodium cyanate is for steel hardening.
Sodium cyanate is used to produce cyanic acid, often generated in situ by treatment with an acid:

NaOCN + HCl → HNCO + NaCl

This approach is exploited for condensation with amines to give unsymmetrical ureas:

HNCO + RNH2 → RNHC(O)NH2

Such urea derivatives have a range of biological activity.
See also
Cyanate
References
Cyanates
Sodium compounds | Sodium cyanate | Chemistry | 242 |
573,054 | https://en.wikipedia.org/wiki/Niter | Niter or nitre is the mineral form of potassium nitrate, KNO3. It is a soft, white, highly soluble mineral found primarily in arid climates or cave deposits.
Historically, the term niter was not well differentiated from natron, both of which have been very vaguely defined but generally refer to compounds of sodium or potassium joined with carbonate or nitrate ions.
Characteristics
Niter is a colorless to white mineral crystallizing in the orthorhombic crystal system. It is the mineral form of potassium nitrate, KNO3, and is soft (Mohs hardness 2), highly soluble in water, and easily fusible. Its crystal structure resembles that of aragonite, with potassium replacing calcium and nitrate replacing carbonate. It occurs in the soils of arid regions and as massive encrustations and efflorescent growths on cavern walls and ceilings where solutions containing alkali potassium and nitrate seep into the openings. It occasionally occurs as prismatic acicular crystal groups, and individual crystals commonly show pseudohexagonal twinning on [110]. Niter and other nitrates can also form in association with deposits of guano and similar organic materials.
History and etymology
Niter as a term has been known since ancient times, although there is much historical confusion with natron (an impure sodium carbonate/bicarbonate), and not all of the ancient salts known by this name or similar names in the ancient world contained nitrate. The name is from the Ancient Greek νίτρον (nitron), from Ancient Egyptian nṯrj, related to the Hebrew neter, for salt-derived ashes (their interrelationship is not clear).
The Hebrew neter may have been used as, or in conjunction with, soap, as implied by Jeremiah 2:22: "For though thou wash thee with niter, and take thee much soap..." However, it is not certain which substance (or substances) the Biblical "neter" refers to, with some suggesting sodium carbonate.
The Neo-Latin word for sodium, natrium, is derived from this same class of desert minerals called natron (French), through Spanish from Greek νίτρον (nitron), derived from Ancient Egyptian nṯrj, referring to the sodium carbonate salts occurring in the deserts of Egypt, not the nitratine (nitrated sodium salts) typically occurring in the deserts of Chile (classically known as "Chilean saltpeter" and variants of this term).
A term, ἀφρόνιτρον (aphronitron, or aphronitre), which translates as "foam of niter", was a regular purchase in a fourth-century AD series of financial accounts, and since it was expressed as being "for the baths" was probably used as soap.
Niter was used to refer specifically to nitrated salts known as various types of saltpeter (only nitrated salts were good for making gunpowder) by the time niter and its derivative nitric acid were first used to name the element nitrogen, in 1790.
Availability
Because of its ready solubility in water, niter is most often found in arid environments and often in conjunction with other soluble minerals like halides, iodates, borates, gypsum, and rarer carbonates and sulphates. Potassium and other nitrates are of great importance for use in fertilizers and, historically, gunpowder. Much of the world's demand is now met by synthetically produced nitrates, though the natural mineral is still mined and is still of significant commercial value.
Niter occurs naturally in certain places like the "Caves of Salnitre" (Collbató) known since the Neolithic. In the "Cova del Rat Penat", guano (bat excrements) deposited over thousands of years became saltpeter after being leached by the action of rainwater.
In 1783, Giuseppe Maria Giovene and Alberto Fortis together discovered a "natural nitrary" in a doline close to Molfetta, Italy, named Pulo di Molfetta. The two scientists discovered that niter formed inside the walls of the caves of the doline, under certain conditions of humidity and temperature. After the discovery, it was suggested that manure could be used for agriculture, in order to increase the production, rather than to make gunpowder. The discovery was challenged by scholars until chemist Giuseppe Vairo and his pupil Antonio Pitaro confirmed the discovery. Naturalists sent by academies from all Europe came in large number to visit the site; since niter is a fundamental ingredient in the production of gunpowder, these deposits were of considerable strategic interest. The government started extraction. Shortly thereafter, Giovene discovered niter in other caves of Apulia. The remnants of the extraction plant is a site of industrial archaeology, although currently not open to tourists.
Similar minerals
Related minerals are soda niter (sodium nitrate), ammonia niter or gwihabaite (ammonium nitrate), nitrostrontianite (strontium nitrate), nitrocalcite (calcium nitrate), nitromagnesite (magnesium nitrate), nitrobarite (barium nitrate) and two copper nitrates, gerhardtite and buttgenbachite; in fact all of the natural elements in the first three columns of the periodic table and numerous other cations form nitrates which are uncommonly found for the reasons given, but have been described.
See also
Nitratine - Sodium based fertilizer
References
External links
Etymology of "niter"
Poe's The Cask of Amontillado
Nitrate minerals
Potassium minerals
Orthorhombic minerals
Minerals in space group 36
Potash
History of mining
Nitrogen | Niter | Chemistry | 1,174 |
3,174,646 | https://en.wikipedia.org/wiki/Free%20Lie%20algebra | In mathematics, a free Lie algebra over a field K is a Lie algebra generated by a set X, without any imposed relations other than the defining relations of alternating K-bilinearity and the Jacobi identity.
Definition
The definition of the free Lie algebra generated by a set X is as follows:
Let X be a set and $i\colon X \to L$ a morphism of sets (function) from X into a Lie algebra L. The Lie algebra L is called free on X if $i$ is the universal morphism; that is, if for any Lie algebra A with a morphism of sets $f\colon X \to A$, there is a unique Lie algebra morphism $g\colon L \to A$ such that $f = g \circ i$.
Given a set X, one can show that there exists a unique free Lie algebra generated by X.
In the language of category theory, the functor sending a set X to the Lie algebra generated by X is the free functor from the category of sets to the category of Lie algebras. That is, it is left adjoint to the forgetful functor.
The free Lie algebra on a set X is naturally graded. The 1-graded component of the free Lie algebra is just the free vector space on that set.
One can alternatively define a free Lie algebra on a vector space V as left adjoint to the forgetful functor from Lie algebras over a field K to vector spaces over the field K – forgetting the Lie algebra structure, but remembering the vector space structure.
Universal enveloping algebra
The universal enveloping algebra of a free Lie algebra on a set X is the free associative algebra generated by X. By the Poincaré–Birkhoff–Witt theorem it is the "same size" as the symmetric algebra of the free Lie algebra (meaning that if both sides are graded by giving elements of X degree 1 then they are isomorphic as graded vector spaces). This can be used to describe the dimension of the piece of the free Lie algebra of any given degree.
Ernst Witt showed that the number of basic commutators of degree k in the free Lie algebra on an m-element set is given by the necklace polynomial:

$$\frac{1}{k}\sum_{d \mid k} \mu(d)\, m^{k/d},$$

where $\mu$ is the Möbius function.
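As a worked check of the formula (our example: a two-letter alphabet, m = 2, in degree k = 6):

```latex
\frac{1}{6}\sum_{d \mid 6} \mu(d)\, 2^{6/d}
= \frac{1}{6}\bigl(\mu(1)\,2^{6} + \mu(2)\,2^{3} + \mu(3)\,2^{2} + \mu(6)\,2\bigr)
= \frac{64 - 8 - 4 + 2}{6}
= 9 ,
```

so the degree-6 graded component of the free Lie algebra on two generators is 9-dimensional.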
The graded dual of the universal enveloping algebra of a free Lie algebra on a finite set is the shuffle algebra. This essentially follows because universal enveloping algebras have the structure of a Hopf algebra, and the shuffle product describes the action of comultiplication in this algebra. See tensor algebra for a detailed exposition of the inter-relation between the shuffle product and comultiplication.
Hall sets
An explicit basis of the free Lie algebra can be given in terms of a Hall set, which is a particular kind of subset inside the free magma on X. Elements of the free magma are binary trees, with their leaves labelled by elements of X. Hall sets were introduced by Marshall Hall based on work of Philip Hall on groups. Subsequently, Wilhelm Magnus showed that they arise as the graded Lie algebra associated with the filtration on a free group given by the lower central series. This correspondence was motivated by commutator identities in group theory due to Philip Hall and Witt.
Lyndon basis
The Lyndon words are a special case of the Hall words, and so in particular there is a basis of the free Lie algebra corresponding to Lyndon words. This is called the Lyndon basis, named after Roger Lyndon. (This is also called the Chen–Fox–Lyndon basis or the Lyndon–Shirshov basis, and is essentially the same as the Shirshov basis.)
There is a bijection γ from the Lyndon words in an ordered alphabet to a basis of the free Lie algebra on this alphabet defined as follows:
If a word w has length 1, then $\gamma(w) = w$ (considered as a generator of the free Lie algebra).
If w has length at least 2, then write $w = uv$ for Lyndon words u, v with v as long as possible (the "standard factorization"). Then $\gamma(w) = [\gamma(u), \gamma(v)]$.
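A small sketch of this construction in Python (function names are ours): Lyndon words are generated with Duval's algorithm, and the bracketing map γ is applied recursively via the standard factorization just described, with nested lists standing for Lie brackets:

```python
# Sketch of the Lyndon-basis construction: Duval's algorithm generates the
# Lyndon words; gamma applies the bracketing via the standard factorization.
def lyndon_words(alphabet: str, max_len: int):
    """Yield all Lyndon words over `alphabet` of length <= max_len (Duval's algorithm)."""
    k = len(alphabet)
    w = [0]                        # current word, as a list of letter indices
    while w:
        yield "".join(alphabet[i] for i in w)
        m = len(w)
        while len(w) < max_len:    # extend periodically to the maximum length
            w.append(w[len(w) - m])
        while w and w[-1] == k - 1:
            w.pop()                # drop trailing maximal letters
        if w:
            w[-1] += 1             # increment the last letter

def is_lyndon(v: str) -> bool:
    """A word is Lyndon iff it is strictly smaller than all of its proper suffixes."""
    return all(v < v[j:] for j in range(1, len(v)))

def gamma(w: str):
    """Map a Lyndon word to a bracketed basis element of the free Lie algebra."""
    if len(w) == 1:
        return w
    # Standard factorization w = u.v: v is the longest proper Lyndon suffix.
    # The loop always returns, since a single final letter is always Lyndon.
    for i in range(1, len(w)):
        if is_lyndon(w[i:]):
            return [gamma(w[:i]), gamma(w[i:])]

print(list(lyndon_words("ab", 3)))  # ['a', 'aab', 'ab', 'abb', 'b']
print(gamma("aab"))                 # ['a', ['a', 'b']], i.e. [a, [a, b]]
```

The printed counts per degree (2, 1, 2 for lengths 1, 2, 3) agree with the necklace-polynomial values above.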
Shirshov–Witt theorem
Anatoly Shirshov and Ernst Witt showed that any Lie subalgebra of a free Lie algebra is itself a free Lie algebra.
Applications
Serre's theorem on a semisimple Lie algebra uses a free Lie algebra to construct a semisimple algebra out of generators and relations.
The Milnor invariants of a link group are related to the free Lie algebra on the components of the link, as discussed in that article.
See also Lie operad for the use of a free Lie algebra in the construction of the operad.
See also
Free object
Free algebra
Free group
References
Properties of Lie algebras
Free algebraic structures | Free Lie algebra | Mathematics | 929 |
70,322,792 | https://en.wikipedia.org/wiki/Phlebiarubrone | Phlebiarubrone is an antibiotic with the molecular formula C19H12O4 which is produced by the fungi Punctularia strigosozonata.
References
Further reading
antibiotics
Benzodioxoles | Phlebiarubrone | Chemistry,Biology | 48 |
12,401,224 | https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Maclane%20spectrum | In mathematics, specifically algebraic topology, there is a distinguished class of spectra called Eilenberg–Maclane spectra $HA$ for any abelian group $A$. Note, this construction can be generalized to commutative rings as well from its underlying abelian group. These are an important class of spectra because they model ordinary integral cohomology and cohomology with coefficients in an abelian group. In addition, they are a lift of the homological structure in the derived category of abelian groups to the homotopy category of spectra. In addition, these spectra can be used to construct resolutions of spectra, called Adams resolutions, which are used in the construction of the Adams spectral sequence.
Definition
For a fixed abelian group $A$, let $HA$ denote the set of Eilenberg–MacLane spaces $\{K(A,n)\}_{n \ge 0}$, with the adjunction map coming from the property of loop spaces of Eilenberg–Maclane spaces: namely, because there is a homotopy equivalence $K(A,n) \simeq \Omega K(A,n+1)$, we can construct maps $\Sigma K(A,n) \to K(A,n+1)$ from the adjunction, giving the desired structure maps of the set to get a spectrum. This collection is called the Eilenberg–Maclane spectrum of $A$.
Properties
Using the Eilenberg–Maclane spectrum $H\mathbb{Z}$ we can define the notion of cohomology of a spectrum $X$ and the homology of a spectrum $X$. Using the functor $[-, H\mathbb{Z}]$ we can define cohomology simply as $H^k(X) = [X, \Sigma^k H\mathbb{Z}]$. Note that for a CW complex $X$, the cohomology of the suspension spectrum $\Sigma^\infty X$ recovers the cohomology of the original space $X$. Note that we can define the dual notion of homology as $H_k(X) = \pi_k(H\mathbb{Z} \wedge X)$, which can be interpreted as a "dual" to the usual hom-tensor adjunction in spectra. Note that if instead of $H\mathbb{Z}$ we take $HA$ for some abelian group $A$, we recover the usual (co)homology with coefficients in the abelian group $A$ and denote it by $H^*(X; A)$.
Mod-p spectra and the Steenrod algebra
For the Eilenberg–Maclane spectrum $H\mathbb{F}_p$ there is an isomorphism $H^*(H\mathbb{F}_p; \mathbb{F}_p) \cong \mathcal{A}_p$ for the p-Steenrod algebra $\mathcal{A}_p$.
Tools for computing Adams resolutions
One of the quintessential tools for computing stable homotopy groups is the Adams spectral sequence. In order to make this construction, Adams resolutions are employed. These depend on the following properties of Eilenberg–Maclane spectra. We define a generalized Eilenberg–Maclane spectrum as a finite wedge of suspensions of Eilenberg–Maclane spectra $HA_i$, so

$$K := \Sigma^{n_1}HA_1 \vee \cdots \vee \Sigma^{n_k}HA_k.$$

Note that for an integer $n$ and a spectrum $X$, $[X, \Sigma^n HA] = H^n(X; A)$, so suspension shifts the degree of cohomology classes. For the rest of the article, $HA$ denotes the Eilenberg–Maclane spectrum for some fixed abelian group $A$.
Equivalence of maps to K
Note that a homotopy class $f \in [X, K]$ represents a finite collection of elements in the cohomology groups $H^{n_i}(X; A_i)$. Conversely, any finite collection of elements in these cohomology groups is represented by some homotopy class $f \in [X, K]$.
Constructing a surjection
For a locally finite collection of elements in $H^*(X; A)$ generating it as an abelian group, the associated map $f\colon X \to K$ to a generalized Eilenberg–Maclane spectrum $K$ induces a surjection on cohomology, meaning there is always a surjection

$$f^*\colon H^*(K; A) \to H^*(X; A)$$

of abelian groups.
Steenrod-module structure on cohomology of spectra
For a spectrum $X$, taking the smash product $X \wedge H\mathbb{F}_p$ constructs a spectrum which is homotopy equivalent to a generalized Eilenberg–Maclane spectrum with one wedge summand for each generator of $H_*(X; \mathbb{F}_p)$. In particular, it gives $H^*(X; \mathbb{F}_p)$ the structure of a module over the Steenrod algebra $\mathcal{A}_p$. This is because the equivalence stated before can be read as

$$X \wedge H\mathbb{F}_p \simeq \bigvee_i \Sigma^{n_i} H\mathbb{F}_p,$$

and composition with the self-maps $[H\mathbb{F}_p, H\mathbb{F}_p]^* \cong \mathcal{A}_p$ induces the $\mathcal{A}_p$-structure.
See also
Adams spectral sequence
Spectrum (topology)
Homotopy groups of spheres
References
External links
Complex cobordism and stable homotopy groups of spheres
The Adams Spectral Sequence
Algebraic topology
Homological algebra
Spectra (topology) | Eilenberg–Maclane spectrum | Mathematics | 737 |
1,444,981 | https://en.wikipedia.org/wiki/Micro%20process%20engineering | Micro process engineering is the science of conducting chemical or physical processes (unit operations) inside small volumes, typically inside channels with diameters of less than 1 mm (microchannels) or other structures with sub-millimeter dimensions. These processes are usually carried out in continuous flow mode, as opposed to batch production, allowing a throughput high enough to make micro process engineering a tool for chemical production. Micro process engineering is therefore not to be confused with microchemistry, which deals with very small overall quantities of matter.
The subfield of micro process engineering that deals with chemical reactions, carried out in microstructured reactors or "microreactors", is also known as microreaction technology.
The unique advantages of microstructured reactors or microreactors are enhanced heat transfer due to the large surface area-to-volume ratio, and enhanced mass transfer. For example, the length scale of diffusion processes is comparable to that of microchannels or even shorter, and efficient mixing of reactants can be achieved during very short times (typically milliseconds). The good heat transfer properties allow a precise temperature control of reactions. For example, highly exothermic reactions can be conducted almost isothermally when the microstructured reactor contains a second set of microchannels ("cooling passage"), fluidically separated from the reaction channels ("reaction passage"), through which a flow of cold fluid with sufficiently high heat capacity is maintained. It is also possible to change the temperature of microstructured reactors very rapidly to intentionally achieve a non-isothermal behaviour.
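To make the surface-area-to-volume argument above concrete, the following sketch compares a circular microchannel with a conventional tube; the diameters are illustrative values chosen here, not figures from the literature:

```python
# For a circular channel of diameter d and length L:
#   lateral surface area / volume = (pi * d * L) / (pi * d**2 * L / 4) = 4 / d,
# so halving the diameter doubles the ratio.
def surface_to_volume_ratio(diameter_m: float) -> float:
    """Lateral-surface-area-to-volume ratio (in 1/m) of a circular channel."""
    return 4.0 / diameter_m

microchannel = surface_to_volume_ratio(100e-6)  # 100 micrometre microchannel
conventional = surface_to_volume_ratio(10e-3)   # 10 mm conventional tube

print(f"microchannel:      {microchannel:,.0f} 1/m")   # 40,000 1/m
print(f"conventional tube: {conventional:,.0f} 1/m")   # 400 1/m
print(f"enhancement:       {microchannel / conventional:.0f}x")  # 100x
```

The hundredfold larger specific surface is what allows the cooling passages described above to remove reaction heat quickly enough for near-isothermal operation.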
Process intensification
While the dimensions of the individual channels are small, a micro process engineering device ("microstructured reactor") can contain many thousands of such channels, and the overall size of a microstructured reactor can be on the scale of meters. The objective of micro process engineering is not primarily to miniaturize production plants, but to increase yields and selectivities of chemical reactions, thus reducing the cost of chemical production. This goal can be achieved by either using chemical reactions that cannot be conducted in larger volumes, or by running chemical reactions at parameters (temperatures, pressures, concentrations) that are inaccessible in larger volumes due to safety constraints. For example, the detonation of a stoichiometric mixture of two volume units of hydrogen gas and one volume unit of oxygen gas does not propagate in microchannels with a sufficiently small diameter. This property is referred to as the "intrinsic safety" of microstructured reactors. The improvement of yields and selectivities by using novel reactions or running reactions at more extreme parameters is known as "process intensification".
History
Historically, micro process engineering originated around the 1980s, when mechanical micromachining methods developed for the fabrication of uranium isotope separation nozzles were first applied to the manufacturing of compact heat exchangers at the Karlsruhe (Nuclear) Research Center.
See also
Flow chemistry
Microreactor
Chemical process engineering
Microtechnology | Micro process engineering | Chemistry,Materials_science,Engineering | 611 |
12,287,781 | https://en.wikipedia.org/wiki/Routing%20domain | In computer networking, a routing domain is a collection of networked systems that operate common routing protocols and are under the control of a single administration. For example, this might be a set of routers under the control of a single organization, some of them operating a corporate network, some others a branch office network, and the rest the data center network.
A given autonomous system can contain multiple routing domains, or a set of routing domains can be coordinated without being an Internet-participating autonomous system.
References
Computer networking | Routing domain | Technology,Engineering | 104 |
52,744,500 | https://en.wikipedia.org/wiki/Tetrahedron%20Computer%20Methodology | The Tetrahedron Computer Methodology was a short-lived journal that was published by Pergamon Press (now Elsevier) to experiment with electronic submission of articles in the ChemText format, and with the sharing of source code to enable reproducibility. It was the first chemical journal to be published electronically, with issues distributed in print and on floppy disks. It was likely also the first journal to accept submissions in a non-paper format (on floppy disks). The journal ceased publication owing to technical and non-technical reasons, and may have lacked sufficient institutional support. The last issue appeared in 1992 but was dated 1990.
References
External links
Computer science journals
Cheminformatics
Chemistry journals
Academic journals established in 1988
English-language journals
Elsevier academic journals
Publications disestablished in 1990 | Tetrahedron Computer Methodology | Chemistry | 161 |
61,288,762 | https://en.wikipedia.org/wiki/U-CARE | u-CARE, otherwise known as the user-friendly Comprehensive Antibiotic resistance Repository of Escherichia coli, is a database focused on the documentation of multi-drug resistant Escherichia coli (E. coli). This database aims to provide a tool that is easily accessible to researchers unfamiliar with bioinformatics and to medical practitioners as a reference for which antibiotic to use or not use in the treatment of an E. coli infection. u-CARE is manually curated with 52 antibiotics, 107 genes, transcription factors, and SNPs. Information provided includes the resistance mechanism for each gene and a summary, chemical description, and structural description for each antibiotic. On the antibiotic page, there are external links to public databases such as GO, CDD, Ecocyc, DEG, KEGG, DrugBank, Pubchem and Uniprot. u-CARE can be accessed at http://www.e-bioinformatics.net/ucare.
See also
Antimicrobial Resistance databases
References
Antimicrobial resistance organizations
Biological databases | U-CARE | Biology | 217 |
168,865 | https://en.wikipedia.org/wiki/Corollary | In mathematics and logic, a corollary ( , ) is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could, for instance, be a proposition which is incidentally proved while proving another proposition; it might also be used more casually to refer to something which naturally or incidentally accompanies something else.
Overview
In mathematics, a corollary is a theorem connected by a short proof to an existing theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. More formally, proposition B is a corollary of proposition A, if B can be readily deduced from A or is self-evident from its proof.
In many cases, a corollary corresponds to a special case of a larger theorem, which makes the theorem easier to use and apply, even though its importance is generally considered to be secondary to that of the theorem. In particular, B is unlikely to be termed a corollary if its mathematical consequences are as significant as those of A. A corollary might have a proof that explains its derivation, even though such a derivation might be considered rather self-evident in some occasions (e.g., the Pythagorean theorem as a corollary of law of cosines).
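As a one-line worked instance of the parenthetical example above (standard notation: a triangle with sides a, b, c and angle γ opposite c):

```latex
c^{2} = a^{2} + b^{2} - 2ab\cos\gamma
\quad\xrightarrow{\;\gamma = 90^{\circ},\ \cos\gamma = 0\;}\quad
c^{2} = a^{2} + b^{2} .
```

Setting γ to a right angle makes the correction term vanish, so the Pythagorean theorem is readily deduced from the law of cosines, which is exactly what qualifies it as a corollary.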
Peirce's theory of deductive reasoning
Charles Sanders Peirce held that the most important division of kinds of deductive reasoning is that between corollarial and theorematic. He argued that while all deduction ultimately depends in one way or another on mental experimentation on schemata or diagrams, in corollarial deduction:
"It is only necessary to imagine any case in which the premises are true in order to perceive immediately that the conclusion holds in that case"
while in theorematic deduction:
"It is necessary to experiment in the imagination upon the image of the premise in order from the result of such experiment to make corollarial deductions to the truth of the conclusion."
Peirce also held that corollarial deduction matches Aristotle's conception of direct demonstration, which Aristotle regarded as the only thoroughly satisfactory demonstration, while theorematic deduction is:
The kind more prized by mathematicians
Peculiar to mathematics
Involves in its course the introduction of a lemma or at least a definition uncontemplated in the thesis (the proposition that is to be proved), in remarkable cases that definition is of an abstraction that "ought to be supported by a proper postulate."
See also
Lemma (mathematics)
Porism
Proposition
Lodge Corollary to the Monroe Doctrine
Roosevelt Corollary to the Monroe Doctrine
References
Further reading
Cut the knot: Sample corollaries of the Pythagorean theorem
Geeks for geeks: Corollaries of binomial theorem
Mathematical terminology
Theorems
Statements | Corollary | Mathematics | 582 |
24,154,760 | https://en.wikipedia.org/wiki/C17H26N2O | The molecular formula C17H26N2O (molar mass: 274.40 g/mol) may refer to:
5-MeO-DPT, a hallucinogenic drug
5-Methoxy-diisopropyltryptamine
Phenampromide
Ropivacaine
Molecular formulas | C17H26N2O | Physics,Chemistry | 83 |
386,369 | https://en.wikipedia.org/wiki/Lines%20of%20Torres%20Vedras | The Lines of Torres Vedras were lines of forts and other military defences built in secrecy to defend Lisbon during the Peninsular War. Named after the nearby town of Torres Vedras, they were ordered by Arthur Wellesley, Viscount Wellington, constructed by Colonel Richard Fletcher and his Portuguese workers between November 1809 and September 1810, and used to stop Marshal Masséna's 1810 offensive. The Lines were declared a National Heritage by the Portuguese Government in March 2019.
Development
At the beginning of the Peninsular War (1807–14) France and Spain signed the Treaty of Fontainebleau in October 1807. This provided for the invasion and subsequent division of Portuguese territory into three kingdoms. Subsequently, French troops under the command of General Junot entered Portugal, which requested support from the British. In July 1808 troops commanded by Sir Arthur Wellesley, the later Duke of Wellington, landed in Portugal and defeated French troops at the Battles of Roliça and Vimeiro. This forced Junot to negotiate the Convention of Cintra, which led to the evacuation of the French army from Portugal. In March 1809, Marshal Soult led a new French expedition that advanced south to the city of Porto before being repulsed by Portuguese-British troops and forced to withdraw. After this retreat, Wellesley's forces advanced into Spain to join 33,000 Spanish troops under General Cuesta. At Talavera, some southwest of Madrid, they encountered and defeated 46,000 French soldiers under Marshal Claude Victor. After the Battle of Talavera, Wellington realised that he was seriously outnumbered by the French army, giving rise to the possibility that he could be forced to retreat to Portugal and possibly evacuate. He decided to strengthen the proposed evacuation area around the Fort of São Julião da Barra on the estuary of the River Tagus, near Lisbon.
Planning
In October 1809, Wellington, drawing on topographical maps prepared by José Maria das Neves Costa, and making use of a report that was prepared for General Junot in 1807, surveyed the area north of Lisbon with Lieutenant-Colonel Richard Fletcher. Eventually they chose the terrain from Torres Vedras to Lisbon because of its mountainous characteristics. From north to south, great undulations created peaks that straddled deep valleys, great gullies and wide ravines. The rugged and inhospitable area offered numerous possibilities for a stubborn rearguard fight from forts on many of the peaks.
Following the decision on the location, Lieutenant-Colonel Richard Fletcher ordered the work to begin on a network of interlocking fortifications, redoubts, escarpments, dams that flooded large areas, and other defences. Roads were also built to enable troops to move rapidly between forts. The work was supervised by Fletcher, assisted by Major John Thomas Jones, and 11 other British Officers, four Portuguese Army Engineers, and two KGL officers. The cost was less than £200,000 according to the Royal Engineers, one of the least expensive but most productive military investments in history.
When the results of the surveys by the Royal Engineers were completed, it was possible, in February 1810, to begin work on 150 smaller interlinking defensive positions, using, wherever possible, the natural features of the landscape. The work received a boost after the loss to the French of the fortress at the Siege of Almeida in August 1810 led to the public conscription of Portuguese labourers. The works were sufficiently complete to halt the advance of the French troops, who arrived in October of the same year. Even after the French had retreated from Portugal, construction of the lines continued in expectation of their return, and in 1812 34,000 men were still working on them. On completion there were 152 fortifications with a total of 648 cannon.
Construction
The work began on the main defensive works on 3 November 1809, initially at the Fort of São Julião da Barra and almost immediately afterwards at the Fort of São Vicente (St. Vincent) overlooking the town of Torres Vedras and at the Fort of Alqueidão on top of Monte Agraço. The entire construction was carried out in great secrecy and the French never became aware of it. Only one report appeared in the London newspapers, a major source of information for Napoleon. It is said that the British government did not know about the forts and was stunned when Wellington first said in dispatches that he had retreated to them. Even the British Ambassador in Lisbon appears to have been unaware of what was happening. These defences were accompanied by a scorched earth policy to their north in which the inhabitants were told to leave their farms, destroying all food they could not take and anything else that may be useful to the French. Although ultimately contributing to the success of the defence, this policy led to high rates of mortality among the Portuguese who had retreated south of the lines. By some estimates 40,000 died.
Labour for construction of the forts was supplied by Portuguese regiments from Lisbon, by hired Portuguese and, ultimately, through conscription of the whole district. The 152 works were supervised by just 18 engineers. The Lines were not continuous, as in the case of a defensive wall, but consisted of a series of mutually supporting forts and other defences that both guarded roads that the French could take and also covered each other’s flanks. The majority of the defences were redoubts holding 200 to 300 troops and three to six cannon, normally 12-pounders, which could fire canister shot or cannonballs. Each redoubt was protected by a ditch or dry moat, with parapets, and was palisaded. By the time the French reached the First Line in October 1810, 126 works had been completed and were manned by 29,750 men with 247 heavy guns. Wellington did not use his front-line troops to man the forts: instead, manpower was mainly provided by the Portuguese. Construction continued after the withdrawal of the French and was not fully completed until 1812.
Originally the Second Line was intended to be the main line of defence, north of Lisbon. The First Line, or Outer Line, was approximately to north of the Second Line. The original purpose of the First Line was to only delay the French. In fact, the First Line was not the original plan, the work was only carried out because the defenders were given extra time due to the slow advance of the French Army. In the end, the First Line succeeded in holding the French and the Second Line was never required. A Third Line, surrounding the Fort of São Julião da Barra near Lisbon, was built to protect Wellington’s evacuation by sea from the fort. A fourth line, of which little remains, was built south of the Tagus opposite Lisbon to prevent a French invasion of the city by boat.
First line
Wellington's first idea had been to construct the first line from Alhandra on the banks of the Tagus to Rio São Lourenço on the Atlantic coast, with advanced works at Torres Vedras, Sobral de Monte Agraço, and other commanding points. The delays to the French arrival, however, enabled him to strengthen the first line sufficiently to warrant aiming to hold it permanently rather than just using it for delaying purposes. Surveying this line from east to west, the first section from Alhandra to Arruda was about long, of which towards the Tagus had been inundated; another or more had been scarped into a precipice, and the most vulnerable point had been obstructed by a huge abatis. The additional defences included 23 redoubts mounting 96 guns, besides a flotilla of gunboats to guard the right flank on the Tagus. This area was under the command of Hill's division. Defences still visible in this section include the Fort of Subserra.
The second section extended from Arruda to the west of Monte Agraço, which was crowned by the very large fort now known as the Fort of Alqueidão, mounting twenty-five guns, with three smaller forts to support it. Monte Agraço itself was held by Pack's brigade with Anglo-Portuguese 5th Division (Leith's) in reserve behind it, while the less completely fortified country to the east was entrusted to the British Light Division.
The third section stretched from the west of Monte Agraço for nearly to the gorge of the river Sizandro, a little to south of Torres Vedras. This was strengthened by two redoubts which commanded the road from Sobral to Montachique. Here, therefore, were concentrated the 1st, 4th, and 6th divisions, under the eye of Wellington himself, who established his headquarters at Pero Negro, where he remained from approximately 16 October 1810 to 15 November 1810.
The last and most westerly section of the first line ran from the gorge of the Sizandro to the sea, a distance of nearly , more than half of which, however, on the western side had been rendered impassable by the damming of the Sizandro and by the conversion of its lower reaches into one huge inundation. The chief defence consisted of the entrenched camp of the Fort of São Vicente, a little to the north of Torres Vedras, which dominated the paved road leading from Leiria to Lisbon. The force assigned to this part of the Line was Picton's division.
Second line
The second line of defence was still more formidable. It can broadly be divided into three sections, from the Fort of Casa on the Tagus to Bucelas, from Bucelas to Mafra, and from Mafra to the sea. The main forts along this line that remain identifiable are three forts on the Serra da Aguieira that served to support the Fort of Casa in its defence of the River Tagus as well as covering the Bucelas Gorge. They also exchanged crossfire with the Fort of Arpim to their north, which was a link between the first and second lines as it was close to three other forts designed to protect the road from Bucelas to Alverca do Ribatejo. To the west of Bucelas was a line of hill-top forts dominated by the Montachique mountain. The mountain, at an altitude of 408 metres, was not fortified but was defended by what are today known as the Fort of Mosqueiro, the Fort of Ribas and others. Closer to Mafra, overlooking the town of Malveira, was the Fort of Feira, which was at the centre of a complex of 19 strongholds in the second line. Mafra was one of the principal positions on the second line, with its defences being centred around the Tapada or royal park.
Third and fourth lines
In the event of failure even in the face of all these precautions, a very powerful line, long, was thrown up around the Fort of São Julião da Barra on the Tagus estuary to cover a retreat and any embarkation if it became necessary. This was considered to be the third line.
British ships dominated the Portuguese coast and the Tagus estuary, so a waterborne invasion by the French was unlikely. However, to guard against the possibility that the French would try to bypass the lines to the north of Lisbon by heading south along the left bank of the Tagus and then approaching Lisbon by boat, a fourth line was built south of the Tagus in the Almada area. The line was 7.3 kilometres (4.5 mi) long. It had 17 redoubts and covered trenches, 86 pieces of artillery, and was defended by marines and orderlies from Lisbon, with a total of 7,500 men.
Holding the Lines
The Anglo-Portuguese Army was forced to retreat to the first line after winning the Battle of Buçaco on 27 September 1810. The French army under Marshal Masséna discovered a barren land (stripped under the scorched earth policy) and an enemy behind an almost impenetrable defensive position. Masséna's forces arrived at the lines on 11 October and took Sobral de Monte Agraço the following day. On 14 October the French VIII Corps tried to push forward, but its assault on a strong British outpost was repelled at the Battle of Sobral. After attempting to wait out the enemy, Masséna was forced by the lack of food and fodder in the area north of the lines to order a French retreat northwards, starting on the night of 14/15 November 1810, to find an area that had not been subjected to the scorched earth policy.
In December 1810, fearing a French attempt on the left of the Tagus, a chain of 17 redoubts was constructed from Almada to Trafaria. However, the French made no movement, and after holding out through February, when starvation really set in, Marshal Masséna ordered a retreat at the beginning of March 1811, taking a month to get to Spain.
Marshal Masséna had begun his campaign with his 65,000-strong army (l'Armée de Portugal). After losing 4,000 men at the Battle of Buçaco, he arrived at Torres Vedras with 61,000 men in October 1810, facing attrition warfare. When he eventually returned to Spain in April 1811, he had lost a further 21,000 men, mostly to starvation and disease. The losses were worsened by one of the coldest winters the Iberian Peninsula had ever known.
When the Allies renewed their offensive in 1811, they were reinforced with fresh British troops. The advance started from the Lines of Torres Vedras shortly after the French retreat. Although work continued on certain sections of the lines, they saw no further action during the rest of the Peninsular War.
Garrisons
The lines were divided up into districts by Wellington in a letter dated 6 October 1810. Each district was allocated one Captain and one Lieutenant of Engineers:
From Torres Vedras to the sea. HQ at Torres Vedras.
From Sobral de Monte Agraço to the valley of Calhandriz. HQ at Sobral de Monte Agraço.
From Alhandra to the valley of Calhandriz. HQ at Alhandra.
From the banks of the Tagus, near Alverca, to the Pass of Bucelas, inclusive. HQ at Bucelas.
From the Pass of Freixal, near Bucelas to the right of the Pass of Mafra. HQ at Montachique.
From the Pass of Mafra to the sea. HQ at Mafra.
The total number of troops available to Wellington amounted, exclusive of two battalions of marines around the Fort of São Julião, to 42,000 British, of whom 35,000 were combat ready; over 27,000 Portuguese regulars, of whom 24,000 were combat ready; about 12,000 Portuguese militia; and 20,000–30,000 ordenanças, a Portuguese militia force used mainly for guerrilla warfare. Lastly, the Marquis of la Romana contributed 8,000 Spanish troops to the lines around Mafra. Altogether, therefore, Wellington had some 60,000 regular frontline troops whom he could depend upon, and 20,000 more who could be trusted to man the lines.
The redoubts of the First Line did not require more than 20,000 men to defend them, which left the whole of the true field-army free not only to reinforce any threatened point but also to make counter-attacks. To facilitate such movements a chain of five signal stations was established from one end of the First Line to the other, which allowed a message to be sent along the lines in 7 minutes, or from the HQ to any point in 4 minutes. The signal stations on the First Line were:
Redoubt n.30 close to the ocean (Ponte do Rol)
Fort of São Vicente at Torres Vedras
Monte do Socorro close to Pêro Negro, Wellington's headquarters.
Monte Agraço
Sobralinho, by the River Tagus.
while on the Second Line, five stations have been identified at:
Forts of Serra da Aguieira
Fort of Sunivel
Montachique mountain (Cabeço de Montachique)
Fort of Chipre
Fort of São Julião at Ericeira
Memorial
A monument commemorating the victory of the Anglo-Portuguese troops over the French armies and the construction of the Torres Vedras Lines was approved in 1874 and finished in 1883. Somewhat reminiscent of Nelson’s Column in London, the column is topped by a statue of the Classical Greek figure of Hercules. This was executed by the sculptor Simões de Almeida who was also responsible for the Monument to the Restorers in Lisbon. The column used marble from the parish of Pêro Pinheiro in Sintra municipality.
The monument was constructed near the village of Alhandra in the municipality of Vila Franca de Xira, on the site of the Boavista redoubt (originally numbered as work Number 3). It is close to work Number 114, the Fort of Subserra (also known as the Fort of Alhandra), which can be visited. In 1911, two plaques were added to acknowledge the contributions of Richard Fletcher and of José Maria das Neves Costa, on whose original topographic maps Wellington based his plans for the Lines.
Preservation and restoration
Substantial portions of the Lines survive today, albeit in most cases in a heavily decayed condition due to past removal of stones. Apart from some limited restoration of the Fort of São Vicente in the 1960s, the Lines had effectively lain abandoned from the end of the Peninsular War to the beginning of this millennium. In 2001 the six municipalities covered by the Lines (Torres Vedras, Mafra, Sobral de Monte Agraço, Arruda dos Vinhos, Loures and Vila Franca de Xira), together with agencies of what is now the Direção-Geral do Património Cultural (Directorate-General for Cultural Heritage - DGPC) and the Direção dos Serviços de Engenharia (Directorate of Military Engineering), signed a protocol to protect, restore and sustain the Lines. However, initial work was limited due to lack of resources. With the bicentennial of the Lines fast approaching, the six municipalities set up an inter-municipal platform to move things forward and decided to apply for funding through the EEA and Norway Grants programme. Funding was granted in 2007.
EEA grants met the costs of 110 projects, while the municipalities funded the work at another 140 sites. Work involved included removal of excess vegetation, creation or restoration of access, archaeological studies, setting up of information boards, establishment of walking routes, and a Visitors' Centre in each municipality. This conservation work was awarded the European Union Prize for Cultural Heritage / Europa Nostra Awards in 2014.
The Leonel Trindade Municipal Museum, in the centre of Torres Vedras, has a room dedicated to "The Lines" with a display of information boards and artefacts. A short distance from the museum, just outside the town, the Fort of São Vicente and the Fort of Olheiros have been well conserved, with the former having a visitors' centre open Tuesday to Sunday, 10 am–1 pm and 2–6 pm. The visitors' centre has well-produced historical wall displays and a 20-minute video. Other information centres along the lines are:
Lines of Torres Interpretation Centre at Bucelas Wine museum.
Fort of Casa
Interpretation Centre at Sobral de Monte Agraço
Centro Cultural do Morgado, Arruda dos Vinhos
Centro de Interpretação das Linhas de Torres de Mafra
In fiction
Death to the French, novel by C. S. Forester
Sharpe's Gold, novel by Bernard Cornwell
Sharpe's Escape, novel by Bernard Cornwell
Lines of Wellington, film by Raúl Ruiz and Valeria Sarmiento
How the Brigadier Saved An Army, short story by Arthur Conan Doyle
Beyond the Sunrise, novel by Mary Balogh
References
Sources
Attribution:
Further reading
External links
Lines 1 and 2 mapped on Google Maps
Friends of the Lines of Torres Vedras
Complete list of the different military works (In Portuguese)
British Historical Society of Portugal, which organizes regular guided visits to the forts.
Photographs and map of fort locations
5 December – Sobral de Monte Agraço walk – Fort of Alqueidão
More details on the military fortifications
The construction of São Vicente
Semaphore Tower Monte Socorro – historical reconstruction of the semaphore tower
Lines of Torres Vedras Historical Trail: Guide.
1810 establishments in Portugal
Military installations established in 1810
19th-century fortifications
Peninsular War
Military history of Lisbon
Torres Vedras
Buildings and structures in Torres Vedras
National redoubts
National monuments in Lisbon District
Arthur Wellesley, 1st Duke of Wellington
Scorched earth operations | Lines of Torres Vedras | Engineering | 4,211 |
5,074,109 | https://en.wikipedia.org/wiki/Venezuelan%20equine%20encephalitis%20virus | Venezuelan equine encephalitis virus is a mosquito-borne viral pathogen that causes Venezuelan equine encephalitis or encephalomyelitis (VEE). VEE can affect all equine species, such as horses, donkeys, and zebras. After infection, equines may suddenly die or show progressive central nervous system disorders. Humans also can contract this disease. Healthy adults who become infected by the virus may experience flu-like symptoms, such as high fevers and headaches. People with weakened immune systems and the young and the elderly can become severely ill or die from this disease.
The virus that causes VEE is transmitted primarily by mosquitoes that bite an infected animal and then bite and feed on another animal or human. The speed with which the disease spreads depends on the subtype of the VEE virus and the density of mosquito populations. Enzootic subtypes of VEE are endemic to certain areas; generally these serotypes do not spread to other localities. Enzootic subtypes are associated with the rodent-mosquito transmission cycle. These forms of the virus can cause human illness but generally do not affect equine health.
Epizootic subtypes, on the other hand, can spread rapidly through large populations. These forms of the virus are highly pathogenic to equines and can also affect human health. Equines, rather than rodents, are the primary animal species that carry and spread the disease. Infected equines develop an enormous quantity of virus in their circulatory system. When a blood-feeding insect feeds on such animals, it picks up this virus and transmits it to other animals or humans. Although other animals, such as cattle, swine, and dogs, can become infected, they generally do not show signs of the disease or contribute to its spread.
The virion is spherical and approximately 70 nm in diameter. It has a lipid membrane with glycoprotein surface proteins spread around the outside. Surrounding the genomic RNA is a nucleocapsid that has an icosahedral symmetry of T = 4, and is approximately 40 nm in diameter.
Viral subtypes
Serology testing performed on this virus has shown the presence of six different subtypes (classified I to VI). These have been given names, including the Mucambo, Tonate, and Pixuna subtypes. There are seven different variants in subtype I, and three of these variants, A, B, and C, are the epizootic strains.
The Mucambo virus (subtype III) appears to have evolved ~1807 AD (95% credible interval: 1559–1944). In Venezuela the Mucambo subtype was identified in 1975 by Jose Esparza and J. Sánchez using cultured mosquito cells.
Epidemiology
In the Americas, there have been 21 reported outbreaks of Venezuelan equine encephalitis virus. Outbreaks occurred in Central American and South American countries. This virus was isolated in 1938, and outbreaks have been reported in many different countries since then. Mexico, Colombia, Venezuela, and the United States are just some of the countries that have reported outbreaks. Outbreaks of VEE generally occur after periods of heavy precipitation that cause mosquito populations to thrive.
Between December 1992 and January 1993, the Venezuelan state of Trujillo experienced an outbreak of this virus. Overall, 28 cases of the disease were reported along with 12 deaths.
June 1993 saw a bigger outbreak in the Venezuelan state of Zulia, with 55 human deaths as well as 66 equine deaths.
A much larger outbreak in Venezuela and Colombia occurred in 1995. On May 23, 1995, equine encephalitis-like cases were reported in northwestern Venezuela. Eventually, the outbreak spread towards the north as well as to the south. The outbreak caused about 11,390 febrile cases in humans as well as 16 deaths. About 500 equine cases were reported with 475 deaths.
An outbreak of this disease occurred in Colombia in September 1995. This outbreak resulted in 14,156 human cases attributable to Venezuelan equine encephalitis virus, with 26 human deaths. A possible explanation for the serious outbreaks was the particularly heavy rain that had fallen, which could have increased the numbers of mosquitoes available to serve as vectors for the disease. A more likely explanation is that deforestation caused a change in mosquito species: Culex taeniopus mosquitoes, which prefer rodents, were replaced by Aedes taeniorhynchus mosquitoes, which are more likely to bite humans and large equines.
Though the majority of VEE outbreaks occur in Central and South America, the virus has the potential to break out again in the United States. It has been shown that the invasive mosquito species Aedes albopictus is a viable carrier of VEE virus.
Treatment
Oxatomide has shown antiviral activity against VEE virus in cell culture. Oxatomide is an over-the-counter drug and an H1 antihistamine. H1 antihistamines characteristically cause drowsiness (e.g., Benadryl) and cross the blood-brain barrier. To date, oxatomide has not been tested in humans or animals for the treatment of VEE. Oxatomide is still sold in Japan (Sawai Pharmaceutical).
Vaccine
There is an inactivated vaccine containing the C-84 strain for VEE virus that is used to immunize horses. Another vaccine, containing the TC-83 strain, is used on humans in military and laboratory positions who risk contracting the virus. The human vaccine can result in side effects and does not fully immunize the patient. The TC-83 strain was generated by passing the virus 83 times through a guinea pig heart cell culture; C-84 is a derivative of TC-83. Alphaviral genomes lacking the full set of structural proteins are currently being used to produce self-amplifying mRNA vaccines and may be useful for delivering therapeutic enzymes and proteins in the future.
Society and culture
In April 2009, the U.S. Army Medical Research Institute of Infectious Diseases at Fort Detrick reported that samples of Venezuelan equine encephalitis virus were discovered missing during an inventory of a group of samples left by a departed researcher. The report stated the samples were likely among those destroyed when a freezer malfunctioned.
Biological weapon
During the Cold War, both the United States biological weapons program and the Soviet biological weapons program researched and weaponized VEE. In his book Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World, author Stephen Handelman details the weaponization of VEE and other biologicals including plague, anthrax, and smallpox, by Dr. Ken Alibek in the Cold War Soviet weapons programs.
References
Notes
APHIS. 1996. Venezuelan Equine Encephalomyelitis
External links
Disease card on World Organisation for Animal Health
Arthropod-borne viral fevers and viral haemorrhagic fevers
Alphaviruses
Horse diseases
Animal viral diseases
Biological agents
Viral encephalitis
Health in Venezuela
Biological anti-agriculture weapons | Venezuelan equine encephalitis virus | Biology,Environmental_science | 1,484 |
675,277 | https://en.wikipedia.org/wiki/Levonorgestrel | Levonorgestrel is a hormonal medication which is used in a number of birth control methods. It is combined with an estrogen to make combination birth control pills. As an emergency birth control, sold under the brand names Plan B One-Step and Julie, among others, it is useful within 72 hours of unprotected sex. The more time that has passed since sex, the less effective the medication becomes, and it does not work after pregnancy (implantation) has occurred. Levonorgestrel works by preventing ovulation or fertilization from occurring. It decreases the chances of pregnancy by 57–93%. In an intrauterine device (IUD), such as Mirena among others, it is effective for the long-term prevention of pregnancy. A levonorgestrel-releasing implant is also available in some countries.
Common side effects include nausea, breast tenderness, headaches, and increased, decreased, or irregular menstrual bleeding. When used as an emergency contraceptive, if pregnancy occurs, there is no evidence that its use harms the fetus. It is safe to use during breastfeeding. Birth control that contains levonorgestrel will not change the risk of sexually transmitted infections. It is a progestin and has effects similar to those of the hormone progesterone. It works primarily by preventing ovulation and closing off the cervix to prevent the passage of sperm.
Levonorgestrel was patented in 1960 and introduced for medical use together with ethinylestradiol in 1970. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In the United States, levonorgestrel-containing emergency contraceptives are available over the counter (OTC) for all ages. In 2020, it was the 323rd most commonly prescribed medication in the United States, with more than 800,000 prescriptions.
Medical uses
Birth control
At low doses, levonorgestrel is used in monophasic and triphasic formulations of combined oral contraceptive pills, with available monophasic doses ranging from 100 to 250 μg, and triphasic doses of 50 μg, 75 μg, and 125 μg. It is combined with the estrogen ethinylestradiol in these formulations.
At a very low daily dose of 30 μg, levonorgestrel is used in some progestogen-only pill formulations.
Levonorgestrel is the active ingredient in a number of intrauterine devices including Mirena and Skyla. It is also the active ingredient in the birth control implants Norplant and Jadelle.
One of the more common forms of contraception that contains only levonorgestrel is an IUD. One IUD, the Mirena, is a small hollow cylinder containing levonorgestrel and polydimethylsiloxane and covered with a release rate-controlling membrane.
Emergency birth control
Levonorgestrel is used in emergency contraceptive pills (ECPs), both in a combined Yuzpe regimen which includes estrogen, and as a levonorgestrel-only method. The levonorgestrel-only method uses levonorgestrel 1.5 mg (as a single dose or as two 0.75 mg doses 12 hours apart) taken within three days of unprotected sex. One study indicated that beginning as late as 120 hours (5 days) after intercourse could be effective. However, taking more than one dose of emergency contraception does not further reduce the chance of pregnancy. Planned Parenthood asserts "Taking the morning-after pill (also known as emergency contraception) multiple times doesn't change its effectiveness, and won't cause any long-term side effects."
The primary mechanism of action of levonorgestrel as a progestogen-only emergency contraceptive pill is, according to the International Federation of Gynecology and Obstetrics (FIGO), to prevent fertilization by inhibition of ovulation and thickening of cervical mucus. FIGO has stated that: "review of the evidence suggests that LNG [levonorgestrel] ECPs cannot prevent implantation of a fertilized egg. Language on implantation should not be included in LNG ECP product labeling." In November 2013, the European Medicines Agency (EMA) approved a change to the label saying it cannot prevent implantation of a fertilized egg.
Other studies still find the evidence to be unclear. While it is unlikely that emergency contraception affects implantation it is impossible to completely exclude the possibility of post-fertilization effect.
In November 2013, the EMA also approved a change to the label for HRA Pharma's NorLevo saying: "In clinical trials, contraceptive efficacy was reduced in women weighing 75 kg [165 pounds] or more, and levonorgestrel was not effective in women who weighed more than 80 kg [176 pounds]." In November 2013 and January 2014, the FDA and the EMA said they were reviewing whether increased weight and body mass index (BMI) reduce the efficacy of emergency contraceptives.
An analysis of four WHO randomised clinical trials, published in January 2017, showed pregnancy rates of 1.25% (68/5428) in women with BMI under 25, 0.61% (7/1140) in women with BMI between 25 and 30, and 2.03% (6/295) in women with BMI over 30. These values correspond to a markedly higher pregnancy rate, and hence reduced efficacy, for women with a BMI over 30 compared to women with a BMI under 25. However, emergency contraceptives remain effective regardless of BMI.
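The quoted percentages follow directly from the raw counts. A minimal sketch that just reproduces the arithmetic (the group labels and the relative-risk line are illustrative additions, not part of the cited analysis):

```python
# Illustrative check of the quoted rates; the counts come from the text above.
groups = {
    "BMI < 25": (68, 5428),    # (pregnancies, women)
    "BMI 25-30": (7, 1140),
    "BMI > 30": (6, 295),
}

rates = {name: 100.0 * preg / n for name, (preg, n) in groups.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.2f}% pregnancy rate")

# Relative risk of pregnancy, BMI > 30 versus BMI < 25:
print(f"relative risk: {rates['BMI > 30'] / rates['BMI < 25']:.2f}")  # about 1.6
```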
Hormone therapy
Levonorgestrel is used in combination with an estrogen in menopausal hormone therapy. It is used under the brand name Klimonorm as a combined oral tablet with estradiol valerate and under the brand name Climara Pro as a combined transdermal patch with estradiol.
Available forms
As a type of emergency contraception, levonorgestrel is used after unprotected intercourse to reduce the risk of pregnancy. However, it can serve different hormonal purposes in its different methods of delivery. It is available for use in a variety of forms:
By mouth
Levonorgestrel can be taken by mouth as a form of emergency birth control. The typical dosage is either 1.5 mg taken once or two 0.75 mg doses taken 12–24 hours apart. The effectiveness of both methods is similar. The most widely used form of oral emergency contraception is the progestin-only pill, which contains a 1.5 mg dose of levonorgestrel. Levonorgestrel-only emergency contraceptive pills are reported to have an 89% effectiveness rate if taken within the recommended 72 hours after sex. The efficacy of the drug decreases by 50% for each 12-hour delay in taking the dose after the emergency contraceptive regimen has been started.
Skin patch
Estradiol with levonorgestrel in the form of a skin patch is used under the brand name Climara Pro for hormone replacement therapy in postmenopausal women, treating symptoms such as hot flashes or osteoporosis. The simultaneous delivery of a progestogen such as levonorgestrel is necessary for the protection of the endometrium.
Intrauterine device
The levonorgestrel intrauterine system (LNG-IUS) is a type of long-term birth control that releases the progestin into the uterine cavity. Levonorgestrel is released at a constant, gradual rate of 0.02 mg per day by the polydimethylsiloxane membrane of the device, which renders it effective for up to five years. Because it is inserted directly into the uterus, levonorgestrel is present in the endometrium in much higher concentrations than would result from an LNG-containing oral pill; the LNG-IUS delivers 391 ng of levonorgestrel to the inner uterine region while a comparable oral contraceptive delivers only 1.35 ng. This high level of levonorgestrel impedes the function of the endometrium, making it hostile for sperm transport, fertilization, and implantation of an ovum.
Implant
Subcutaneous implants of levonorgestrel have been marketed as birth control implants under the brand names Norplant and Jadelle and are available for use in some countries.
Contraindications
Known or suspected pregnancy is a contraindication of levonorgestrel as an emergency contraceptive.
Side effects
After an intake of 1.5 mg levonorgestrel in clinical trials, very common side effects (reported by 10% or more) included: hives, dizziness, hair loss, headache, nausea, abdominal pain, uterine pain, delayed menstruation, heavy menstruation, uterine bleeding, and fatigue; common side effects (reported by 1% to 10%) included diarrhea, vomiting, and painful menstruation; these side effects usually disappeared within 48 hours. However, the long term side effects common with oral contraceptives such as arterial disease are lower with levonorgestrel than in combination pills.
Levonorgestrel as a contraceptive intrauterine device is associated with a higher risk of breast cancer than with non-use.
Overdose
Overdose of levonorgestrel as an emergency contraceptive has not been described. Nausea and vomiting might be expected.
Interactions
If taken together with drugs that induce the CYP3A4 cytochrome P450 liver enzyme, levonorgestrel may be metabolized faster and may have lower effectiveness.
These include, but are not limited to barbiturates, bosentan, carbamazepine, felbamate, griseofulvin, oxcarbazepine, phenytoin, rifampin, St. John's wort and topiramate.
Pharmacology
Pharmacodynamics
Levonorgestrel is a progestogen with weak androgenic activity. It has no other important hormonal activity, including no estrogenic, glucocorticoid, or antimineralocorticoid activity. The lack of significant mineralocorticoid or antimineralocorticoid activity with levonorgestrel is in spite of it having relatively high affinity for the mineralocorticoid receptor, which is as much as 75% of that of aldosterone.
Progestogenic activity
Levonorgestrel is a progestogen; that is, an agonist of the progesterone receptor (PR), the main biological target of the progestogen sex hormone progesterone. It has effects similar to those of the hormone progesterone. As a contraceptive, it works primarily by preventing ovulation and closing off the cervix to prevent the passage of sperm. The endometrial transformation dose of levonorgestrel is 150 to 250 μg/day or 2.5 to 6 mg per cycle.
Antigonadotropic effects
Due to its progestogenic activity, levonorgestrel has antigonadotropic effects and is able to suppress the secretion of the gonadotropins, luteinizing hormone and follicle-stimulating hormone, from the pituitary gland. This in turn results in suppression of gonadal activity, including reduction of fertility and gonadal sex hormone production in both women and men. The ovulation-inhibiting dose of levonorgestrel in premenopausal women is 50 to 60 μg/day.
In men, levonorgestrel causes marked suppression of circulating testosterone levels secondary to its antigonadotropic effects. In healthy young men, levonorgestrel alone at a dose of 120 to 240 μg/day orally for 2 weeks suppressed testosterone levels from ~450 ng/dL to ~248 ng/dL (–45%). Because of its effects on testosterone levels, and due to its androgenic activity being only weak and hence insufficient for purposes of androgen replacement in males, levonorgestrel has potent functional antiandrogenic effects in men. Consequently, it can produce adverse effects like decreased libido and erectile dysfunction, among others. Levonorgestrel has been combined with an androgen like testosterone or dihydrotestosterone when it has been studied as a hormonal contraceptive in men.
Androgenic activity
Levonorgestrel is a weak agonist of the androgen receptor (AR), the main biological target of the androgen sex hormone testosterone. It is a weakly androgenic progestin and in women may cause androgenic biochemical changes and side effects such as decreased sex hormone-binding globulin (SHBG) levels, decreased cholesterol levels, weight gain, and acne.
In combination with a potent estrogen like ethinylestradiol however, all contraceptives containing androgenic progestins are negligibly androgenic in practice and in fact can be used to treat androgen-dependent conditions like acne and hirsutism in women. This is because ethinylestradiol causes a marked increase in SHBG levels and thereby decreases levels of free and hence bioactive testosterone, acting as a functional antiandrogen. Nonetheless, contraceptives containing progestins that are less androgenic increase SHBG levels to a greater extent and may be more effective for such indications. Levonorgestrel is currently the most androgenic progestin that is used in contraceptives, and contraceptives containing levonorgestrel may be less effective for androgen-dependent conditions relative to those containing other progestins that are less androgenic.
Other activity
Levonorgestrel stimulates the proliferation of MCF-7 breast cancer cells in vitro, an action that is independent of the classical PRs and is instead mediated via the progesterone receptor membrane component-1 (PGRMC1). Certain other progestins act similarly in this assay, whereas progesterone acts neutrally. It is unclear if these findings may explain the different risks of breast cancer observed with progesterone and progestins in clinical studies.
Pharmacokinetics
The bioavailability of levonorgestrel is approximately 95% (range 85 to 100%). The plasma protein binding of levonorgestrel is about 98%. It is bound 50% to albumin and 48% to SHBG. Levonorgestrel is metabolized in the liver, via reduction, hydroxylation, and conjugation (specifically glucuronidation and sulfation). Oxidation occurs primarily at the C2α and C16β positions, while reduction occurs in the A ring. 5α-Dihydrolevonorgestrel is produced as an active metabolite of levonorgestrel by 5α-reductase. The elimination half-life of levonorgestrel is 24 to 32 hours, although values as short as 8 hours and as great as 45 hours have been reported. About 20 to 67% of a single oral dose of levonorgestrel is eliminated in urine and 21 to 34% in feces.
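As a worked example of what the quoted half-life implies, the sketch below assumes simple first-order (one-compartment) elimination and a representative 28-hour half-life from the middle of the 24 to 32 hour range; it is illustrative only, not a dosing model:

```python
import math

# Illustrative one-compartment elimination; 28 h is a representative value
# taken from the 24-32 h half-life range quoted above.
HALF_LIFE_H = 28.0
k_el = math.log(2) / HALF_LIFE_H  # first-order elimination rate constant (1/h)

def fraction_remaining(t_hours: float) -> float:
    """Fraction of the absorbed dose still circulating after t hours."""
    return math.exp(-k_el * t_hours)

for t in (12, 24, 48, 72):
    print(f"after {t:2d} h: {fraction_remaining(t):.0%} remaining")
# After 24 h roughly 55% of the dose remains; after 72 h roughly 17%.
```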
Chemistry
Levonorgestrel, also known as 17α-ethynyl-18-methyl-19-nortestosterone or as 17α-ethynyl-18-methylestr-4-en-17β-ol-3-one, is a synthetic estrane steroid and a derivative of testosterone. It is the C13β or levorotatory stereoisomer and enantiopure form of norgestrel, the C13α or dextrorotatory isomer being inactive. Levonorgestrel is more specifically a derivative of norethisterone (17α-ethynyl-19-nortestosterone) and is the parent compound of the gonane (18-methylestrane or 13β-ethylgonane) subgroup of the 19-nortestosterone family of progestins. Besides levonorgestrel itself, this group includes desogestrel, dienogest, etonogestrel, gestodene, norelgestromin, norgestimate, and norgestrel. Levonorgestrel acetate and levonorgestrel butanoate are C17β esters of levonorgestrel. Levonorgestrel has a molecular weight of 312.45 g/mol and a partition coefficient (log P) of 3.8.
History
Norgestrel (rac-13-ethyl-17α-ethynyl-19-nortestosterone), the racemic mixture containing levonorgestrel and dextronorgestrel, was discovered by Hughes and colleagues at Wyeth in 1963 via structural modification of norethisterone (17α-ethynyl-19-nortestosterone). It was the first progestogen to be manufactured via total chemical synthesis. Norgestrel was introduced for medical use as a combined birth control pill with ethinylestradiol under the brand name Eugynon in Germany in 1966 and under the brand name Ovral in the United States in 1968, and as a progestogen-only pill under the brand name Ovrette in the United States in 1973. Following its discovery, norgestrel had been licensed by Wyeth to Schering AG, which separated the racemic mixture into its two optical isomers and identified levonorgestrel (13β-ethyl-17α-ethynyl-19-nortestosterone) as the active component of the mixture. Levonorgestrel was first studied in humans by 1970, and was introduced for medical use in Germany as a combined birth control pill with ethinylestradiol under the brand name Neogynon in August 1970. A more widely used formulation, containing lower doses of ethinylestradiol and levonorgestrel, was introduced under the brand name Microgynon by 1973. In addition to combined formulations, levonorgestrel was introduced as a progestogen-only pill under the brand names Microlut by 1972 and Microval by 1974. Many other formulations and brand names of levonorgestrel-containing birth control pills have also been marketed.
Levonorgestrel, taken alone in a single high dose, was first evaluated as a form of emergency contraception in 1973. It was the second progestin to be evaluated for such purposes, following a study of quingestanol acetate in 1970. In 1974, the Yuzpe regimen, which consisted of high doses of a combined birth control pill containing ethinylestradiol and norgestrel, was described as a method of emergency contraception by A. Albert Yuzpe and colleagues, and saw widespread interest. Levonorgestrel-only emergency contraception was introduced under the brand name Postinor by 1978. Ho and Kwan published the first study comparing levonorgestrel only and the Yuzpe regimen as methods of emergency contraception in 1993 and found that they had similar effectiveness but that levonorgestrel alone was better-tolerated. In relation to this, the Yuzpe regimen has largely been replaced as a method of emergency contraception by levonorgrestrel-only preparations. Levonorgestrel-only emergency contraception was approved in the United States under the brand name Plan B in 1999, and has also been marketed widely elsewhere throughout the world under other brand names such as Levonelle and NorLevo in addition to Postinor. In 2013, the Food and Drug Administration approved Plan B One-Step for sale over-the-counter in the United States without a prescription or age restriction.
Levonorgestrel has also been introduced for use as a progestogen-only intrauterine device under the brand names Mirena and Skyla among others, as a progestogen-only birth control implant under the brand names Norplant and Jadelle, as a combined oral tablet with estradiol valerate for menopausal hormone therapy under the brand name Klimonorm, and as a combined transdermal patch with estradiol for menopausal hormone therapy under the brand name Climara Pro. Ester prodrugs of levonorgestrel such as levonorgestrel acetate and levonorgestrel butanoate have been developed and studied as other forms of birth control such as long-acting progestogen-only injectable contraceptives and contraceptive vaginal rings, but have not been marketed for medical use.
Society and culture
Generic names
Levonorgestrel is the generic name of the drug under the major international nonproprietary naming systems, while lévonorgestrel is its French spelling. It is also known as d-norgestrel, d(–)-norgestrel, or D-norgestrel, as well as by its developmental code names WY-5104 (Wyeth) and SH-90999 (Schering AG).
Brand names
Levonorgestrel is marketed alone or in combination with an estrogen (specifically ethinylestradiol, estradiol, or estradiol valerate) under a multitude of brand names throughout the world, including Alesse, Altavera, Alysena, Amethia, Amethyst, Ashlyna, Aviane, Camrese, Chateal, Climara Pro, Cycle 21, Daysee, Emerres, Enpresse, Erlibelle, Escapelle, Falmina, Introvale, Isteranda, Jadelle, Jaydess, Jolessa, Klimonorm, Kurvelo, Kyleena, Lessina, Levlen, Levodonna, Levonelle, Levonest, Levosert, Levora, Liletta, Loette, Logynon, LoSeasonique, Lutera, Lybrel, Marlissa, Microgynon, Microlut, Microvlar, Min-Ovral, Miranova, Mirena, My Way, Myzilra, Next Choice, Nordette, Norgeston, NorLevo, Norplant, One Pill, Option 2, Orsythia, Ovima, Ovranette, Plan B, Plan B One-Step, Portia, Postinor, Postinor-2, Preventeza, Ramonna, Rigevidon, Quartette, Quasense, Seasonale, Seasonique, Skyla, Sronyx, Tri-Levlen, Trinordiol, Triphasil, Triquilar, Tri-Regol, Trivora, and Upostelle, among many others. These formulations are used as emergency contraceptives, normal contraceptives, or in menopausal hormone therapy for the treatment of menopausal symptoms.
As an emergency contraceptive, levonorgestrel is often referred to colloquially as the "morning-after pill".
Availability
Levonorgestrel is very widely marketed throughout the world and is available in almost every country.
Accessibility
Levonorgestrel-containing emergency contraception is available over-the-counter in some countries, such as the United States. On some college campuses, Plan B is available from vending machines.
A policy update in 2015 required all pharmacies, clinics, and emergency departments run by Indian Health Services (for Native Americans) to have Plan B One-Step in stock; to distribute it to any woman (or her representative) who asked for it without a prescription, age verification, registration or any other requirement; to provide orientation training to all staff regarding the medication; to provide unbiased and medically accurate information about emergency contraception; and to make someone available at all times to distribute the pill in case the primary staffer objected to providing it on religious or moral grounds.
Research
Levonorgestrel has been studied in combination with androgens such as testosterone and dihydrotestosterone as a hormonal contraceptive for men.
References
External links
Ethynyl compounds
Drugs developed by Bayer
Drugs developed by AbbVie
Anabolic–androgenic steroids
Antigonadotropins
Enantiopure drugs
Enones
Estranes
Hormonal contraception
Human drug metabolites
Progestogens
Wikipedia medicine articles ready to translate
Tertiary alcohols
World Health Organization essential medicines | Levonorgestrel | Chemistry | 5,115 |
19,101,222 | https://en.wikipedia.org/wiki/Easington%20Gas%20Terminal | The Easington Gas Terminal is one of six main gas terminals in the UK, and is situated on the North Sea coast at Easington, East Riding of Yorkshire and Dimlington. The other main gas terminals are at St Fergus, Aberdeenshire; Bacton, Norfolk; Teesside; Theddlethorpe, Lincolnshire; and the Rampside gas terminal, Barrow, Cumbria. The whole site consists of four plants: two run by Perenco, one by Centrica and one by Gassco. The Easington Gas Terminals are protected by Ministry of Defence Police officers and are provided with resources by the Centre for the Protection of National Infrastructure.
History
BP Easington Terminal opened in March 1967. This was the first time that North Sea gas had been brought ashore in the UK, from the West Sole field. In 1980 British Gas purchased the Rough field and in 1983 began its conversion to a storage field. BP Dimlington opened in October 1988. BP's Ravenspurn North field was added in 1990 and the Johnston field was added in 1994. The Easington Catchment Area was added in 2000, and the Juno development in 2003. Up to 20% of the winter peak demand for gas is exported from Easington via Feeder 9 through the Humber Gas Tunnel.
Discovery of gas in the North Sea
Britain's first oil rig, the Sea Gem, made the first discovery of gas in the North Sea on 20 August 1965. The field was not large enough to be commercially significant, and at the time it was not yet known that there was a large amount of gas under the North Sea. The rig capsized and sank in December of that year. The Forties and Brent oilfields were discovered later, in 1970 and 1971 respectively.
Langeled pipeline
Since October 2006, gas has been brought into the UK directly from the Norwegian Sleipner gas field via the Langeled pipeline, which was the world's longest subsea pipeline before the completion of the Nord Stream pipeline. The pipeline is owned by Gassco, which is itself owned by the Kingdom of Norway.
Operation
The sites are run by, and gas is produced by, Perenco (after BP sold its operations to them in 2012), Gassco and Centrica Storage Ltd. Gas can be transferred to and from the Centrica Storage plant at Easington depending on grid demand. Control of the Perenco sites takes place at the Dimlington site, where conditioning of the gas also takes place. The Perenco Easington site provides the connection to the National Transmission System. Gas flows from the Easington terminal via a 24-inch diameter pipeline known as Feeder No 1 across the Humber to Totley near Sheffield. Perenco Easington used to compress gas as well, but the construction from 2007 to 2009 of the £125 million Onshore Compression and Terminal Integration Project (OCTIP) consolidated all compression and processing of gas from the fields at the Dimlington site. As part of the facility, two RB211-GT61 gas turbines, built by Rolls-Royce Energy Systems in Mount Vernon, Ohio, were installed in a £12.7 million contract.
Centrica Rough Terminal
Rough is a partially depleted offshore gas field that was converted for storage by British Gas. It is currently operated by Centrica Storage Ltd (a subsidiary of Centrica). The Rough Terminal used to receive gas from the Amethyst gasfield, which was owned by Britoil until 1988; this gas is now processed by Perenco. Since 2013 the Rough Terminal has also processed gas from the newly developed York field on behalf of Centrica Energy.
Langeled Receiving Facilities
The Langeled pipeline, which is controlled at the UK end by Gassco (Centrica Storage Ltd before 2011), can transfer up to 2,500 million cubic feet of gas per day from Nyhamna in Norway.
Perenco Easington
The gas is collected from the Hyde, Hoton, Newsham and West Sole natural gas fields. The site can process up to 300 million cubic feet of gas per day. A gas turbine power generator is used to compress the gas.
Perenco Dimlington
Dimlington is the largest of the four sites. The natural gas condensate is transferred to the Dimlington terminal. Dimlington also processes dry gas from the (former) Cleeton, Ravenspurn South, Ravenspurn North, Johnston, the Easington Catchment Area (Neptune and Mercury), and the Juno development (Whittle, Wollaston, Minerva and Apollo) gas fields. The Dimlington site has the control room for all of Perenco's gas fields that ship gas to the Easington site. Dimlington can handle up to 950 million cubic feet of gas per day.
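For comparison with metric figures, the daily throughputs quoted above can be converted to cubic metres. A minimal sketch, assuming the quoted figures are millions of standard cubic feet per day (the dictionary of plants is taken from the text; the conversion factor is the standard 1 cubic foot = 0.0283168 cubic metres):

```python
# Throughput figures from the text, read as millions of standard cubic
# feet per day (MMscf/d); 1 ft3 = 0.0283168 m3.
FT3_TO_M3 = 0.0283168

plants_mmscfd = {
    "Langeled Receiving Facilities": 2500,
    "Perenco Easington": 300,
    "Perenco Dimlington": 950,
}

for name, mmscfd in plants_mmscfd.items():
    m3_per_day = mmscfd * 1e6 * FT3_TO_M3
    print(f"{name}: {mmscfd} MMscf/d ~ {m3_per_day / 1e6:.1f} million m3/day")
# Langeled at 2,500 MMscf/d corresponds to roughly 70.8 million m3/day.
```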
Fire risk
All sites are a considerable fire hazard, so each has large water reservoirs for fire fighting, containing about one million and three million litres of water.
Dimlington gas fields
Cleeton
Cleeton and Ravenspurn South form part of the Villages Complex. Both were discovered in 1976. Gas production began in April 1987. Production stopped in 1999. Now used as a hub for the Easington Catchment Area. Named after the scientist Claud E. Cleeton.
Ravenspurn South
Discovered in April 1983, off the East Riding of Yorkshire coast. Gas production began in October 1989. Gas via Cleeton to Dimlington. Named after Ravenspurn, the former coastal town. Owned and operated by Perenco.
Ravenspurn North
Discovered in October 1984 and developed in April 1988 by Hamilton Brothers. First gas produced in October 1989, and BP took over the operatorship of the field from BHP on 12 January 1998. Gas via Cleeton to Dimlington. Operated by Perenco and owned mostly by them, with smaller parts owned by Centrica Resources Ltd and E.ON Ruhrgas UK EU Ltd.
Johnston
Operated by E.ON Ruhrgas, and previously to them, Caledonia EU, and also by Consort EU Ltd. Discovered in April 1990. Gas first produced in October 1994. Pipeline to Dimlington via Ravenspurn North and Cleeton. Owned 50% by Dana Petroleum (E&P) Ltd and E.ON Ruhrgas UK EU Ltd.
Babbage
Discovered in 1989 with the first gas being brought ashore in August 2010. Gas will be transported via West Sole to Dimlington. Owned 40% by Dana Petroleum (E&P) Ltd, 47% by E.ON Ruhrgas UK EU Ltd and 13% by Centrica Resources Ltd. Named after the mathematician, Charles Babbage.
Easington Catchment Area
Consists of Neptune and Mercury fields. Operated by BG Group. Transported to Dimlington via BP's Cleeton.
Mercury discovered in February 1983 and production started in November 1999. Named after the planet Mercury. 73% owned by BG Group.
Neptune discovered in November 1985 and production started in November 1999. Named after the planet Neptune. 79% owned by BG Group.
Juno development
These are the most recent of the Dimlington gas fields. Named after Juno, the Roman goddess.
BG Group operates the Minerva, Apollo and Artemis fields, and owns 65% of these fields. Production started in 2003.
Artemis was discovered in August 1974, and named after Artemis, the Greek goddess of the hunt.
Apollo was discovered in July 1987, and named after Apollo, the Greek sun god and brother of Artemis.
Minerva was discovered in January 1969, named after the Roman goddess Minerva.
BP operates the Whittle and Wollaston fields. They are 30% owned by BG Group. Production started in 2002.
Wollaston was discovered in April 1989, and named after William Hyde Wollaston, the Norfolk chemist.
Whittle was discovered in July 1990, and named after Frank Whittle.
Easington gas fields
These fields are around off the East Riding of Yorkshire coast. They are connected to the national grid by the BP and Rough Terminals. Some of these were among the 'Villages' gas fields, named after villages lost to the sea along the Holderness coast. These villages include Cleeton, Dimlington, Hoton, Hyde, Newsham and Ravenspurn.
West Sole
Discovered in December 1965, east of the Humber. It is a faulted dome whose maximum dimensions are about wide, lying at a depth of . The reservoir comprises about of Permian Rotliegendes sandstone, and the gas has a high methane content and low nitrogen (1.3%). Gas first produced in March 1967. It had initial recoverable reserves of 61 billion m3. Owned and operated by BP until it was acquired by Perenco in 2012.
Hyde
Discovered in May 1982. Gas first produced in August 1993. Was owned 55% by BP and 45% by Statoil. BP took control in January 1997, in exchange for its Jupiter gas field.
Newsham
Discovered in October 1989. Production began March 1996. Enters the West Sole pipeline. Owned and operated by BP.
Hoton
Discovered in February 1977. Gas first produced in December 2001. Owned and operated by BP. Named after Hoton, one of the East Riding of Yorkshire lost villages that fell into the sea due to coastal erosion.
Amethyst East and West
Amethyst East was discovered in October 1972 and Amethyst West in April 1970. Owned 59.5% by BP, 24% by BG Group, 9% by Centrica, and 7.5% by Murphy. Production from Amethyst East began in October 1990 and from Amethyst West in July 1992. Control of the platform is entirely from Dimlington, and it is therefore operated by BP. Comprises the Amethyst gasfield.
Acquired by Perenco in 2012.
Rough
Discovered in May 1968. It had initial recoverable reserves of 14 billion m3. Gas production began in 1975, and it was bought by British Gas in 1980. In 1983, they decided to convert it into gas storage. The gas storage started in February 1985. As a depleted gas field, it is used as a storage facility for essentially the whole of the UK, giving four days' worth of gas. Originally owned by BG Storage Ltd (BGSL), who were bought by Dynegy Europe Ltd in November 2001 for £421 million. BGSL became known as Dynegy Storage Ltd, based in Solihull. This company was bought by Centrica on 14 November 2002 for £304 million. Centrica was essentially buying the Easington plant. To operate the field Centrica has to comply with a set of undertakings laid down by DECC and Ofgem due to its unique position in the UK gas market.
York
Owned and operated by Centrica. Gas back to Centrica Rough Terminal via new pipeline.
Helvellyn
Discovered in February 1985 with the first gas coming on stream in 2004. Operated by ATP Oil and Gas. Owned 50% by ATP Oil & Gas (UK) Ltd and First Oil Expro Ltd. Gas back to Easington via the Amethyst field. Named after Helvellyn in Cumbria.
Rose
Discovered in March 1998. Owned and operated by Centrica, with the gas pumped back to Easington via the Amethyst field. Production started in 2004 and the field was plugged and abandoned in 2015.
See also
List of oil and gas fields of the North Sea
Oil fields operated by BP
St Fergus Gas Terminal
Bacton Gas Terminal
Energy in the United Kingdom
References
External links
Centrica Storage
Centrica's purchase of the plant at Ofgem
World War Two bomb found in March 2008
1967 establishments in England
Buildings and structures in the East Riding of Yorkshire
Economy of the East Riding of Yorkshire
Energy infrastructure completed in 1967
Holderness
Natural gas infrastructure in the United Kingdom
Natural gas plants
Natural gas terminals
North Sea energy
Science and technology in the East Riding of Yorkshire | Easington Gas Terminal | Chemistry | 2,375 |
13,751,165 | https://en.wikipedia.org/wiki/Coherent%20diffraction%20imaging | Coherent diffractive imaging (CDI) is a "lensless" technique for 2D or 3D reconstruction of the image of nanoscale structures such as nanotubes, nanocrystals, porous nanocrystalline layers, defects, potentially proteins, and more. A comprehensive review titled Computational microscopy with coherent diffractive imaging and ptychography was published by Miao in Nature in 2025.
In CDI, a highly coherent beam of X-rays, electrons or other wavelike particles or photons is incident on an object. The beam scattered by the object produces a diffraction pattern downstream which is then collected by a detector. This recorded pattern is then used to reconstruct an image via an iterative feedback algorithm. Effectively, the objective lens in a typical microscope is replaced with software to convert from the reciprocal space diffraction pattern into a real space image. The advantage in using no lenses is that the final image is aberration-free and so resolution is only diffraction and dose limited (dependent on wavelength, aperture size and exposure). Applying a simple inverse Fourier transform to information with only intensities is insufficient for creating an image from the diffraction pattern due to the missing phase information. This is called the phase problem.
Imaging process
The overall imaging process can be broken down in four simple steps:
1. Coherent beam scatters from sample
2. Modulus of Fourier transform measured
3. Computational algorithms used to retrieve phases
4. Image recovered by inverse Fourier transform
In CDI, the objective lens used in a traditional microscope is replaced with computational algorithms and software which are able to convert from the reciprocal space into the real space. The diffraction pattern picked up by the detector is in reciprocal space while the final image must be in real space to be of any use to the human eye.
To begin, a highly coherent source of X-rays, electrons, or other wavelike particles must be incident on an object. Although X-rays are most commonly used, the beam may instead consist of electrons, whose shorter wavelength allows for higher resolution and, thus, a clearer final image. However, electron beams are limited in penetration depth compared to X-rays, as electrons have an inherent mass. The incident beam illuminates a spot on the object and is scattered by its surface, producing a diffraction pattern representative of the Fourier transform of the object. The complex diffraction pattern is then collected by the detector, and the Fourier transform of all the features on the object's surface is evaluated. Because the diffraction information lies in the frequency domain, it is not interpretable by the human eye and is thus very different from what is observed using normal microscopy techniques.
A reconstructed image is then made using an iterative feedback phase-retrieval algorithm, in which a few hundred of these diffraction measurements are detected and overlapped to provide sufficient redundancy in the reconstruction process. Lastly, a computer algorithm transforms the diffraction information into real space and produces an image observable by the human eye; this image is what would likely be seen by means of traditional microscopy techniques. The hope is that using CDI would produce a higher resolution image due to its aberration-free design and computational algorithms.
The phase problem
There are two relevant parameters for diffracted waves: amplitude and phase. In typical microscopy using lenses there is no phase problem, as phase information is retained when waves are refracted. When a diffraction pattern is collected, the data is described in terms of absolute counts of photons or electrons, a measurement which describes amplitudes but loses phase information. This results in an ill-posed inverse problem as any phase could be assigned to the amplitudes prior to an inverse Fourier transform to real space.
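The ill-posedness is easy to demonstrate numerically. A minimal sketch in NumPy (the test object and array sizes are arbitrary choices, not from any cited experiment): the measured amplitudes combined with the true phases recover the object exactly, while the same amplitudes with random phases yield a meaningless image.

```python
import numpy as np

# The detector records |F|^2 only; any phase assignment is consistent
# with those amplitudes.
rng = np.random.default_rng(0)

obj = np.zeros((64, 64))
obj[24:40, 28:36] = 1.0                 # simple rectangular test object

F = np.fft.fft2(obj)
intensity = np.abs(F) ** 2              # what the detector measures

# With the TRUE phases the object is recovered exactly:
recovered = np.fft.ifft2(np.sqrt(intensity) * np.exp(1j * np.angle(F)))
print(np.allclose(recovered.real, obj))         # True

# With RANDOM phases the same amplitudes give a meaningless image:
random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, F.shape))
wrong = np.fft.ifft2(np.sqrt(intensity) * random_phase)
print(np.abs(wrong.real - obj).max() > 0.1)     # True: not the object
```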
Three ideas developed that enabled the reconstruction of real space images from diffraction patterns. The first idea was the realization by Sayre in 1952 that Bragg diffraction under-samples diffracted intensity relative to Shannon's theorem. If the diffraction pattern is sampled at twice the Nyquist frequency (inverse of sample size) or denser, it can yield a unique real space image. The second was an increase in computing power in the 1980s which enabled the iterative hybrid input-output (HIO) algorithm for phase retrieval to optimize and extract phase information using adequately sampled intensity data with feedback. This method was introduced by Fienup in the 1980s. In 1998, Miao, Sayre and Chapman used numerical simulations to demonstrate that when the number of independently measured intensity points exceeds the number of unknown variables, the phase can in principle be retrieved from the diffraction pattern via iterative algorithms. Finally, Miao and collaborators reported the first experimental demonstration of CDI in 1999, using a secondary image to provide low resolution information. Reconstruction methods were later developed that could remove the need for a secondary image.
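The counting argument behind this sampling requirement can be made concrete with a small sketch (the function name and example sizes are illustrative): a complex-valued object of N unknown pixels carries 2N real unknowns (amplitude and phase), so the number of independently measured intensity points, and hence the oversampling ratio, must exceed 2.

```python
# Oversampling ratio sigma = measured intensity points / unknown object pixels.
# For a complex-valued object, sigma > 2 is needed so that the number of
# measurements exceeds the 2N real unknowns of N complex pixels.
def oversampling_ratio(n_measured_points: int, n_object_pixels: int) -> float:
    return n_measured_points / n_object_pixels

# e.g. a 64x64-pixel object embedded in a 256x256 measured array:
sigma = oversampling_ratio(256 * 256, 64 * 64)
print(sigma, sigma > 2)   # 16.0 True
```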
Reconstruction
In a typical reconstruction the first step is to generate random phases and combine them with the amplitude information from the reciprocal space pattern. Then a Fourier transform is applied back and forth to move between real space and reciprocal space with the modulus squared of the diffracted wave field set equal to the measured diffraction intensities in each cycle. By applying various constraints in real and reciprocal space the pattern evolves into an image after enough iterations of the HIO process. To ensure reproducibility the process is typically repeated with new sets of random phases with each run having typically hundreds to thousands of cycles. The constraints imposed in real and reciprocal space typically depend on the experimental setup and the sample to be imaged. The real space constraint is to restrict the imaged object to a confined region called the "support". For example, the object to be imaged can be initially assumed to reside in a region no larger than roughly the beam size. In some cases this constraint may be more restrictive, such as in a periodic support region for a uniformly spaced array of quantum dots. Other researchers have investigated imaging extended objects, that is, objects that are larger than the beam size, by applying other constraints.
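The cycle described above translates almost directly into code. Below is a minimal sketch of the HIO iteration in NumPy; the function name, the feedback parameter beta = 0.9 (a commonly used value), and the positivity check are illustrative choices rather than a definitive implementation:

```python
import numpy as np

def hio(measured_amplitude, support, n_iter=1000, beta=0.9, seed=0):
    """Hybrid input-output phase retrieval, as outlined above.

    measured_amplitude: square root of the measured diffraction intensities.
    support: boolean mask, True where the object is allowed to be.
    """
    rng = np.random.default_rng(seed)
    # Random starting phases combined with the measured amplitudes.
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, measured_amplitude.shape))
    g = np.fft.ifft2(measured_amplitude * phases).real

    for _ in range(n_iter):
        # Reciprocal-space constraint: keep the current phases, but reset
        # the modulus to the measured diffraction amplitudes.
        G = np.fft.fft2(g)
        G = measured_amplitude * np.exp(1j * np.angle(G))
        g_prime = np.fft.ifft2(G).real

        # Real-space constraint: accept g' where it satisfies the support
        # (and positivity); elsewhere apply the HIO feedback update.
        satisfied = support & (g_prime >= 0)
        g = np.where(satisfied, g_prime, g - beta * g_prime)
    return g

# Usage on simulated data: measured_amplitude = np.abs(np.fft.fft2(obj)),
# with a support mask slightly larger than the object's true extent.
```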
In most cases the support constraint is imposed a priori and is then refined by the researcher based on the evolving image. In theory this is not necessarily required, and algorithms have been developed which impose an evolving support based on the image alone using an auto-correlation function. This eliminates the need for a secondary image (support), thus making the reconstruction autonomic.
The diffraction pattern of a perfect crystal is symmetric, so the inverse Fourier transform of that pattern is entirely real valued. The introduction of defects in the crystal leads to an asymmetric diffraction pattern with a complex valued inverse Fourier transform. It has been shown that the crystal density can be represented as a complex function where its magnitude is electron density and its phase is the "projection of the local deformations of the crystal lattice onto the reciprocal lattice vector Q of the Bragg peak about which the diffraction is measured". Therefore, it is possible to image the strain fields associated with crystal defects in 3D using CDI, and this has been reported in one case. Unfortunately, the imaging of complex-valued functions (which here represent the strain field in crystals) is accompanied by complementary problems, namely the uniqueness of the solutions, stagnation of the algorithm, etc. However, recent developments have overcome these problems, particularly for patterned structures. On the other hand, if the diffraction geometry is insensitive to strain, such as in GISAXS, the electron density will be real valued and positive. This provides another constraint for the HIO process, thus increasing the efficiency of the algorithm and the amount of information that can be extracted from the diffraction pattern.
Algorithms
One of the most important aspects of coherent diffraction imaging is the algorithm that recovers the phase from Fourier magnitudes and reconstructs the image. Several algorithms exist for this purpose, though they each follow a similar format of iterating between the real and reciprocal space of the object (Pham 2020). Furthermore, a support region is frequently defined to separate the object from its surrounding zero-density region (Pham 2020). As mentioned earlier, Fienup developed the initial algorithms of Error Reduction (ER) and Hybrid Input-Output (HIO), which both utilized a support constraint for real space and Fourier magnitudes as a constraint in reciprocal space (Fienup 1978). The ER algorithm sets both the zero-density region and the negative densities inside the support to zero for each iteration (Fienup 1978). The HIO algorithm relaxes the conditions of ER by gradually reducing the negative densities of the support to zero with each iteration (Fienup 1978). While HIO allowed for the reconstruction of an image from a noise-free diffraction pattern, it struggled to recover the phase in actual experiments where the Fourier magnitudes were corrupted by noise. This led to further development of algorithms that could better handle noise in image reconstruction. A later algorithm called oversampling smoothness (OSS) was created to use a smoothness constraint on the imaged object. OSS utilizes Gaussian filters to apply a smoothness constraint to the zero-density region, which was found to increase robustness to noise and reduce oscillations in reconstruction (Rodriguez 2013).
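The OSS idea fits naturally into the HIO loop sketched earlier. The step below is an illustrative simplification, assuming SciPy's Gaussian filter as the smoothing operator and a fixed sigma; the published algorithm applies the filter in Fourier space and tightens its width over successive stages:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def oss_step(g, g_prime, support, beta=0.9, sigma=2.0):
    """One illustrative OSS-style update inside an HIO-type loop.

    Inside the support the usual update is kept; outside it, the HIO
    feedback term is Gaussian-smoothed, damping noise-driven oscillations.
    """
    feedback = g - beta * g_prime           # standard HIO feedback
    smoothed = gaussian_filter(feedback, sigma)
    return np.where(support, g_prime, smoothed)
```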
Generalized proximal smoothness (GPS)
Building upon the success of OSS, a new algorithm called generalized proximal smoothness (GPS) has been developed. GPS addresses noise in real and reciprocal space by incorporating principles of Moreau-Yosida regularization, a method of turning a convex function into a smooth convex function (Moreau 1965) (Yosida 1964). The magnitude constraint is relaxed into a least-squares fidelity term as a means of lessening the noise in reciprocal space (Pham 2020). Overall, GPS was found to perform better than OSS and HIO in consistency, convergence speed, and robustness to noise. Using the R-factor (relative error) as a measure of effectiveness, GPS was found to reach a lower R-factor in both real and reciprocal space (Pham 2020). Moreover, GPS took fewer iterations to converge towards a lower R-factor than OSS and HIO in both spaces (Pham 2020).
Coherence
Two wave sources are coherent when their frequency and waveform are identical. This property allows stationary interference, in which the interference pattern is constant in time or space and the waves add to or subtract from one another. Coherence is important in the context of CDI because it permits the continuous emission of waves with a fixed phase relationship; a constant phase difference between the waves is necessary to obtain any interference pattern at all.
Clearly a highly coherent beam of waves is required for CDI to work since the technique requires interference of diffracted waves. Coherent waves must be generated at the source (synchrotron, field emitter, etc.) and must maintain coherence until diffraction. It has been shown that the coherence width of the incident beam needs to be approximately twice the lateral width of the object to be imaged.
However, determining the size of the coherent patch, and thus whether the object meets this criterion, is subject to debate. As the coherence width decreases, the Bragg peaks in reciprocal space broaden and begin to overlap, leading to decreased image resolution.
Energy sources
X-ray
Coherent X-ray diffraction imaging (CXDI or CXD) uses X-rays (typically 0.5-4 keV) to form a diffraction pattern, which may be more attractive for 3D applications than electron diffraction since X-rays typically have better penetration. For imaging surfaces, the penetration of X-rays may be undesirable, in which case a glancing-angle geometry such as GISAXS may be used. A typical X-ray CCD is used to record the diffraction pattern. If the sample is rotated about an axis perpendicular to the beam, a three-dimensional image may be reconstructed.
Due to radiation damage, resolution is limited (for continuous illumination set-ups) to about 10 nm for frozen-hydrated biological samples but resolutions of as high as 1 to 2 nm should be possible for inorganic materials less sensitive to damage (using modern synchrotron sources). It has been proposed that radiation damage may be avoided by using ultra short pulses of x-rays where the time scale of the destruction mechanism is longer than the pulse duration. This may enable higher energy and therefore higher resolution CXDI of organic materials such as proteins. However, without the loss of information "the linear number of detector pixels fixes the energy spread needed in the beam" which becomes increasingly difficult to control at higher energies.
In a 2006 report, resolution was 40 nm using the Advanced Photon Source (APS) but the authors suggest this could be improved with higher power and more coherent X-ray sources such as the X-ray free electron laser.
Electrons
Coherent electron diffraction imaging works on the same principles as CXDI, except that electrons are the diffracted waves and an imaging plate is used to detect them rather than a CCD. In one published report a double-walled carbon nanotube (DWCNT) was imaged using nano-area electron diffraction (NAED) with atomic resolution. In principle, electron diffraction imaging should yield a higher-resolution image because the wavelength of electrons can be much smaller than that of photons without going to very high energies. Electrons also penetrate much more weakly, so they are more surface sensitive than X-rays. However, electron beams are typically more damaging than X-rays, so this technique may be limited to inorganic materials.
In Zuo's approach, a low resolution electron image is used to locate a nanotube. A field emission electron gun generates a beam with high coherence and high intensity. The beam size is limited to nano area with the condenser aperture in order to ensure scattering from only a section of the nanotube of interest. The diffraction pattern is recorded in the far field using electron imaging plates to a resolution of 0.0025 1/Å. Using a typical HIO reconstruction method an image is produced with Å resolution in which the DWCNT chirality (lattice structure) can be directly observed. Zuo found that it is possible to start with non-random phases based on a low resolution image from a TEM to improve the final image quality.
In 2007, Podorov et al. proposed an exact analytical solution of CDXI problem for particular cases.
In 2016, using the coherent diffraction imaging (CXDI) beamline at the ESRF (Grenoble, France), researchers quantified the porosity of large faceted nanocrystalline layers responsible for a photoluminescence emission band in the infrared. It has been shown that phonons can be confined in sub-micron structures, which could help enhance the output of photonic and photovoltaic (PV) applications.
In situ CDI
Incomplete measurements have been a problem observed across all algorithms in CDI. Since the detector is too sensitive to absorb a particle beam directly, a beamstop or hole must be placed at its center to prevent direct contact (Pham 2020). Furthermore, detectors are often constructed with multiple panels with gaps between them where data again cannot be collected (Pham 2020). Ultimately, these qualities of the detector result in missing data within the diffraction patterns. In situ CDI is a new method of this imaging technology that could increase resistance to incomplete measurements. In situ CDI images a static region and a dynamic region that changes over time as a result of external stimuli (Hung Lo 2018). A series of diffraction patterns are collected over time with interference from the static and dynamic regions (Hung Lo 2018). Because of this interference, the static region acts as a time invariant constraint that phases patterns together in fewer iterations (Hung Lo 2018). Enforcing this static region as a constraint makes in situ CDI more robust to incomplete data and noise interference in the diffraction patterns (Hung Lo 2018). Overall, in situ CDI provides clearer data collection in fewer iterations than other CDI techniques.
Related techniques
Various techniques for CDI have been developed over the years and used to study samples in physics, chemistry, materials science, nanoscience, geology, and biology (6); these include, but are not limited to, plane-wave CDI, Bragg CDI, ptychography, reflection CDI, Fresnel CDI, and sparsity CDI.
Ptychography is a technique which is closely related to coherent diffraction imaging. Instead of recording just one coherent diffraction pattern, several – and sometimes hundreds or thousands – of diffraction patterns are recorded from the same object. Each pattern is recorded from a different area of the object, although the areas must partially overlap with one another. Ptychography is only applicable to specimens that can survive irradiation in the illuminating beam for these multiple exposures. However, it has the advantage that a large field of view can be imaged. The extra translational diversity in the data also means the reconstruction procedure can be faster and ambiguities in the solution space are reduced.
See also
Diffraction
X-ray diffraction computed tomography
List of materials analysis methods
Nanotechnology
Surface physics
Synchrotron
References
External links
Ian Robinson X-Ray Studies Group Page
Jian-Min (Jim) Zuo Electron Microscopy Group Page
Diffraction
Materials science
Microscopes
Microscopy
Nanotechnology
Scientific techniques | Coherent diffraction imaging | Physics,Chemistry,Materials_science,Technology,Engineering | 3,665 |
47,993,602 | https://en.wikipedia.org/wiki/Preclinical%20SPECT | Preclinical or small-animal Single Photon Emission Computed Tomography (SPECT) is a radionuclide-based molecular imaging modality for small laboratory animals (e.g. mice and rats). Although SPECT is a well-established imaging technique that has been in clinical use for decades, the limited resolution of clinical SPECT (~10 mm) stimulated the development of dedicated small-animal SPECT systems with sub-mm resolution. Unlike in the clinic, preclinical SPECT outperforms preclinical coincidence PET in terms of resolution (best spatial resolution of SPECT 0.25 mm, PET ≈ 1 mm) and, at the same time, allows fast dynamic imaging of animals (time frames of less than 15 s).
SPECT imaging requires administration of small quantities of γ-emitting radiolabeled molecules (commonly called "tracers") into the animal prior to the image acquisition. These tracers are biochemically designed in such a way that they accumulate at target locations in the body. The radiation emitted by the tracer molecules (single γ-photons) can be detected by gamma detectors and, after image reconstruction, results in a 3-dimensional image of the tracer distribution within the animal. Some key radioactive isotopes used in preclinical SPECT are 99mTc, 123I, 125I, 131I, 111In, 67Ga and 201Tl.
Preclinical SPECT plays an important role in multiple areas of translational research where SPECT can be used for non-invasive imaging of radiolabeled molecules, including antibodies, peptides, and nanoparticles. Among major areas of its applications are oncology, neurology, psychiatry, cardiology, orthopedics, pharmacology and internal medicine.
Basic imaging principle of preclinical SPECT
Due to the small size of the imaged animals (a mouse is about 3000 times smaller than a human measured by weight and volume), it is essential to have a high spatial resolution and detection efficiency for the preclinical scanner.
Spatial resolution
Looking at spatial resolution first: if we want to see the same level of detail relative to, e.g., the size of the organs in a mouse as we can see in a human, the spatial resolution of clinical SPECT needs to be improved by a factor of roughly the cube root of the ~3000-fold volume difference, i.e. about 14 or higher. Such an obstacle forced scientists to look for a new imaging approach for preclinical SPECT, which was found in exploiting the pinhole imaging principle.
A pinhole collimator consists of a piece of dense material containing only a single hole, which typically has the shape of a double cone. First attempts to obtain SPECT images of rodents with a high resolution were based on use of pinhole collimators attached to convectional gamma cameras. In such a way, by placing the object (e.g. rodent) close to the aperture of the pinhole, one can reach a high magnification of its projection on the detector surface and effectively compensate for the limited intrinsic resolution of the detector.
The combined effects of the finite aperture size and the limited intrinsic resolution are described by:
$R_s = \sqrt{d_e^2\left(\frac{M+1}{M}\right)^2 + \left(\frac{R_i}{M}\right)^2}$

where $d_e$ is the effective pinhole diameter, $R_i$ the intrinsic resolution of the detector, and $M$ the projection magnification factor.
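As a rough illustration of how magnification trades off against aperture size, the small Python helper below evaluates this formula. The function name and the numeric values are hypothetical, chosen only to show that strong magnification can compensate for a detector whose intrinsic resolution is far coarser than the target resolution.

```python
import math

def pinhole_system_resolution(d_e, R_i, M):
    """System resolution (referred to the object plane) of a pinhole collimator.

    d_e: effective pinhole diameter (mm)
    R_i: intrinsic detector resolution (mm)
    M:   projection magnification factor
    """
    return math.sqrt((d_e * (M + 1) / M) ** 2 + (R_i / M) ** 2)

# Illustrative values only: a 0.4 mm pinhole and a detector with 3 mm
# intrinsic resolution still give sub-mm system resolution at 10x
# magnification, because the detector blur is divided by M.
print(pinhole_system_resolution(d_e=0.4, R_i=3.0, M=10.0))  # ~0.53 mm
```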
The resolution of a SPECT system based on the pinhole imaging principle can be improved in one of three ways:
by decreasing the effective diameter of the pinhole
by increasing the magnification factor
by using detectors with higher intrinsic resolution
The exact size, shape and material of the pinhole are important to obtain good imaging characteristics and is a subject of collimator design optimization studies via e.g. use of Monte Carlo simulations.
Modern preclinical SPECT scanners based on pinhole imaging can reach up to 0.25 mm spatial or 0.015 μL volumetric resolution for in vivo mouse imaging.
Detection efficiency
The detection efficiency or sensitivity of a preclinical pinhole SPECT system is determined by:
$S \approx \frac{N\,d_e^2}{16\,r_c^2}$

where $S$ is the detection efficiency (sensitivity), $d_e$ the effective pinhole diameter with penetration, $N$ the total number of pinholes, and $r_c$ the collimator radius (e.g. the object-to-pinhole distance).
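A minimal sketch of this formula follows; the pinhole count, diameter, and collimator radius are hypothetical values, picked only so that the result lands in the typical efficiency range quoted below.

```python
def pinhole_sensitivity(N, d_e, r_c):
    """On-axis geometric detection efficiency of an N-pinhole collimator."""
    return N * d_e ** 2 / (16.0 * r_c ** 2)

# Illustrative values only: 75 pinholes of 0.6 mm diameter, 30 mm from
# the animal, give a fraction of the emitted photons passing the collimator.
S = pinhole_sensitivity(N=75, d_e=0.6, r_c=30.0)
print(f"{S:.4%}")  # ~0.19%, i.e. within the typical 0.1-0.2% range
```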
The sensitivity can be improved by:
increasing the pinhole diameter
Possible drawbacks: degradation of spatial resolution
decreasing the object-to-pinhole distance (e.g. placing the animal as close as possible to the pinhole aperture)
using multiple pinholes that simultaneously capture projections from multiple angles
Possible drawbacks: When multiple pinhole projections are projected onto a single detector surface, they can either overlap each other (multiplexing projections) or be fully separated (non-overlapping projections). Although pinhole collimators with multiplexing projections allow a higher sensitivity to be reached (by permitting a larger number of pinholes) compared to non-overlapping designs, they also suffer from multiple artifacts in reconstructed SPECT images. The artifacts are caused by ambiguity about the origin of γ-photons detected in the areas of overlap.
decreasing the size of the "field-of-view"
Placing the animal close to the pinhole aperture comes at the cost of reducing the size of the area that can be imaged at a given time (the "field-of-view") compared to imaging at a lower magnification. However, when combined with moving the animal (the so-called "scanning-focus method" ) a larger area of interest can still be imaged with a good resolution and sensitivity.
The typical detection efficiency of preclinical SPECT scanner lies within a 0.1-0.2% (1000-2000 cps/MBq) range, which is more than tenfold higher than the average sensitivity of clinical scanners. At the same time, dedicated high-sensitivity collimators can allow >1% detection efficiency and maintain sub-mm image resolution.
System design
Multiple pinhole SPECT system designs have been proposed, including rotating gamma camera, stationary detector but rotating collimator, or completely stationary camera in which a large number of pinholes surround the animal and simultaneously acquire projections from a sufficient number of angles for tomographic image reconstruction. Stationary systems have several advantages over non-stationary systems:
no need for repetitive system geometry recalibration
Why: due to the stable position of the detector(s) and the collimator
unlike non-stationary systems, stationary systems are very well suited for dynamic SPECT imaging
Why: because all required angular information is acquired simultaneously by multiple pinholes.
Modern stationary preclinical SPECT systems can perform dynamic SPECT imaging with up to 15s time-frames during total body and up to 1s time-frames during "focused" (e.g. focusing on heart) image acquisitions.
Multimodality imaging
Medical imaging encompasses many different imaging modalities, which can roughly be divided into anatomical and functional imaging. Anatomical modalities (e.g. CT, MRI) mainly reveal the structure of the tissues and organs, while the functional modalities (SPECT, PET and optical imaging) mainly visualize the physiology and function of the tissue. Because none of the existing imaging modalities can provide information on all aspects of structure and function, an obvious approach is to either alter one imaging modality to the task (e.g. special imaging sequences in MRI) or to try to image a subject using multiple imaging modalities. Following the multimodality approach, in recent years the combination of a SPECT/CT system became a standard molecular imaging modality combination in both the pre-clinical and clinical fields, where the structural information of CT complements the functional information from SPECT. Nevertheless, integration of SPECT with other imaging modalities (e.g. SPECT/MR, SPECT/PET/CT) is not uncommon.
Reconstruction
A SPECT measurement consists of 2-dimensional projections of the radioactive source distribution that are obtained with collimator(s) and gamma-detector(s). It is the goal of an image reconstruction algorithm to accurately reconstruct the unknown 3-dimensional distribution of the radioactivity.
The Maximum Likelihood Expectation Maximization algorithm (MLEM) is an important "gold standard" in iterative image reconstruction of SPECT images, but it is also a computationally costly method. A popular way around this obstacle is the use of so-called block-iterative reconstruction methods. With block-iterative methods, every iteration of the algorithm is subdivided into many subsequent sub-iterations, each using a different subset of the projection data. An example of a widely used block-iterative version of MLEM is the Ordered Subsets Expectation Maximization algorithm (OSEM). The reconstruction speedup of a full OSEM iteration over a single MLEM iteration is approximately equal to the number of subsets.
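As an illustration, a minimal NumPy sketch of MLEM and its ordered-subsets variant is given below. It assumes the scanner is represented by an explicit non-negative system matrix A (real reconstructions use matrix-free projectors), and the subset partition shown is an arbitrary choice.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum Likelihood Expectation Maximization for y ~ Poisson(A @ x).

    A: (n_bins, n_voxels) non-negative system matrix.
    y: (n_bins,) measured projection counts.
    """
    x = np.ones(A.shape[1])          # flat non-negative starting image
    sens = A.sum(axis=0) + eps       # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)    # measured over modelled projections
        x *= (A.T @ ratio) / sens    # multiplicative update keeps x >= 0
    return x

def osem(A, y, n_subsets=8, n_iter=6, eps=1e-12):
    """Ordered Subsets EM: one MLEM-style update per projection subset,
    giving a speedup of roughly n_subsets per full iteration."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            x *= (As.T @ (ys / (As @ x + eps))) / (As.sum(axis=0) + eps)
    return x
```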
Quantification
Preclinical SPECT is a quantitative imaging modality. The uptake of SPECT tracers in organs (regions) of interest can be calculated from reconstructed images. The small size of laboratory animals reduces photon attenuation in the body of the animal compared to that in human-sized subjects. Nevertheless, depending on the energy of the γ-photons and the size of the animal used for imaging, correction for photon attenuation and scattering might be required to achieve good quantification accuracy. A detailed discussion of the effects affecting quantification of SPECT images can be found in Hwang et al.
SPECT tracers
SPECT tracers emit single γ-photons with the energy of the emitted photon depending on the isotope that was used for radiolabeling of the tracer. Thus, in cases when different tracers are radiolabeled with isotopes of different energies, SPECT provides the ability to probe several molecular pathways simultaneously (multi-isotope imaging). Two examples of common multi-isotope tracer combinations used for SPECT imaging are 123I-NaI/99mTc-pertechnetate (thyroid function) or 99mTc-MAG3/111In-DTPA (assessment of renal filtration).
The time the tracer can be followed in vivo strongly depends on the half-life of the isotope used for radiolabeling of the compound. The wide range of relatively long-lived isotopes (compared to the isotopes typically used in PET) that can be used for SPECT imaging provide a unique possibility to image slow kinetic processes (days to weeks).
Another important characteristic of SPECT is the simplicity of the tracer radiolabeling procedure, which can be performed with a wide range of commercially available labelling kits.
Preclinical SPECT versus PET
Preclinical SPECT and PET are two very similar molecular imaging modalities used for noninvasive visualization of the biodistribution of radiolabeled tracers injected into an animal. The major difference between SPECT and PET lies in the nature of the radioactive decay of their tracers. A SPECT tracer emits single γ-photons, with the photon energy depending on the isotope used for radiolabeling. In PET, the tracer emits positrons that, after annihilation with electrons in the subject, produce a pair of 511 keV annihilation photons emitted in opposite directions. Coincident detection of these annihilation photons is used for image formation in PET. As a result, different detection principles have been developed for SPECT and PET tracers, which has led to separate SPECT and PET scanners.
Comparison of preclinical SPECT and PET is provided in the table below
Modern preclinical SPECT
Manufacturers of preclinical SPECT systems include MILabs, Siemens, Bruker, Mediso and MOLECUBES. Systems are available combining SPECT with multiple other modalities including MR, PET and CT. They can achieve up to 0.25 mm spatial resolution (0.015 μL volumetric resolution) and up to 1 second-frame dynamic noninvasive SPECT imaging of rodents.
Applications in translational research
SPECT can be used for diagnostic or therapeutic imaging. When a radioactive tracer is labeled with primary gamma-emitting isotopes (e.g. 99mTc, 123I, 111In, 125I), the acquired images provide functional information about the bio-distribution of the compound that can be used for multiple diagnostic purposes. Examples of diagnostic applications: metabolism and perfusion imaging, cardiology, orthopedics.
When SPECT tracer is labeled with a combined gamma and α- or β-emitting isotope (e.g. 213Bi or 131I), it is possible to combine cancer radioisotope therapy with α- or β- particles with noninvasive imaging of response to the therapy that is achieved with SPECT.
References
Radiobiology
3D nuclear medical imaging
Neuroimaging
Veterinary diagnosis | Preclinical SPECT | Chemistry,Biology | 2,660 |
31,112 | https://en.wikipedia.org/wiki/Tesseract | In geometry, a tesseract or 4-cube is a four-dimensional hypercube, analogous to a two-dimensional square and a three-dimensional cube. Just as the perimeter of the square consists of four edges and the surface of the cube consists of six square faces, the hypersurface of the tesseract consists of eight cubical cells, meeting at right angles. The tesseract is one of the six convex regular 4-polytopes.
The tesseract is also called an 8-cell, C8, (regular) octachoron, or cubic prism. It is the four-dimensional measure polytope, taken as a unit for hypervolume. Coxeter labels it the γ4 polytope. The term hypercube without a dimension reference is frequently treated as a synonym for this specific polytope.
The Oxford English Dictionary traces the word tesseract to Charles Howard Hinton's 1888 book A New Era of Thought. The term derives from the Greek téssara ('four') and aktís ('ray'), referring to the four edges from each vertex to other vertices. Hinton originally spelled the word as tessaract.
Geometry
As a regular polytope with three cubes folded together around every edge, it has Schläfli symbol {4,3,3} with hyperoctahedral symmetry of order 384. Constructed as a 4D hyperprism made of two parallel cubes, it can be named as a composite Schläfli symbol {4,3} × { }, with symmetry order 96. As a 4-4 duoprism, a Cartesian product of two squares, it can be named by a composite Schläfli symbol {4}×{4}, with symmetry order 64. As an orthotope it can be represented by composite Schläfli symbol { } × { } × { } × { } or { }4, with symmetry order 16.
Since each vertex of a tesseract is adjacent to four edges, the vertex figure of the tesseract is a regular tetrahedron. The dual polytope of the tesseract is the 16-cell with Schläfli symbol {3,3,4}, with which it can be combined to form the compound of tesseract and 16-cell.
Each edge of a regular tesseract is of the same length. This is of interest when using tesseracts as the basis for a network topology to link multiple processors in parallel computing: the distance between two nodes is at most 4 and there are many different paths to allow weight balancing.
A tesseract is bounded by eight three-dimensional hyperplanes. Each pair of non-parallel hyperplanes intersects to form 24 square faces. Three cubes and three squares intersect at each edge. There are four cubes, six squares, and four edges meeting at every vertex. All in all, a tesseract consists of 8 cubes, 24 squares, 32 edges, and 16 vertices.
Coordinates
A unit tesseract has side length 1, and is typically taken as the basic unit for hypervolume in 4-dimensional space. The unit tesseract in a Cartesian coordinate system for 4-dimensional space has two opposite vertices at coordinates (0, 0, 0, 0) and (1, 1, 1, 1), and other vertices with coordinates at all possible combinations of 0s and 1s. It is the Cartesian product of the closed unit interval [0, 1] in each axis.
Sometimes a unit tesseract is centered at the origin, so that its coordinates are the more symmetrical (±1/2, ±1/2, ±1/2, ±1/2). This is the Cartesian product of the closed interval [−1/2, 1/2] in each axis.
Another commonly convenient tesseract is the Cartesian product of the closed interval [−1, 1] in each axis, with vertices at coordinates (±1, ±1, ±1, ±1). This tesseract has side length 2 and hypervolume 2^4 = 16.
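The combinatorics of these coordinate descriptions are easy to verify programmatically. The following short Python sketch (variable names are arbitrary) enumerates the 16 vertices of the unit tesseract and confirms the element counts quoted above.

```python
from itertools import product

# Vertices of the unit tesseract: all 16 combinations of 0s and 1s.
vertices = list(product((0, 1), repeat=4))
assert len(vertices) == 16

# Two vertices share an edge exactly when they differ in one coordinate.
edges = [(u, v) for i, u in enumerate(vertices)
         for v in vertices[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]
assert len(edges) == 32

# Squares (faces) fix two coordinates and cubes (cells) fix one:
# C(4,2) * 2^2 = 24 faces and C(4,1) * 2 = 8 cells, matching the
# counts of 24 squares and 8 cubes given earlier.
```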
Net
An unfolding of a polytope is called a net. There are 261 distinct nets of the tesseract. The unfoldings of the tesseract can be counted by mapping the nets to paired trees (a tree together with a perfect matching in its complement).
Construction
The construction of hypercubes can be imagined the following way:
1-dimensional: Two points A and B can be connected to become a line, giving a new line segment AB.
2-dimensional: Two parallel line segments AB and CD separated by a distance of AB can be connected to become a square, with the corners marked as ABCD.
3-dimensional: Two parallel squares ABCD and EFGH separated by a distance of AB can be connected to become a cube, with the corners marked as ABCDEFGH.
4-dimensional: Two parallel cubes ABCDEFGH and IJKLMNOP separated by a distance of AB can be connected to become a tesseract, with the corners marked as ABCDEFGHIJKLMNOP. However, this parallel positioning of two cubes such that their 8 corresponding pairs of vertices are each separated by a distance of AB can only be achieved in a space of 4 or more dimensions.
The 8 cells of the tesseract may be regarded (three different ways) as two interlocked rings of four cubes.
The tesseract can be decomposed into smaller 4-polytopes. It is the convex hull of the compound of two demitesseracts (16-cells). It can also be triangulated into 4-dimensional simplices (irregular 5-cells) that share their vertices with the tesseract. It is known that there are 92487256 such triangulations and that the fewest 4-dimensional simplices in any of them is 16.
The dissection of the tesseract into instances of its characteristic simplex (a particular orthoscheme with Coxeter diagram ) is the most basic direct construction of the tesseract possible. The characteristic 5-cell of the 4-cube is a fundamental region of the tesseract's defining symmetry group, the group which generates the B4 polytopes. The tesseract's characteristic simplex directly generates the tesseract through the actions of the group, by reflecting itself in its own bounding facets (its mirror walls).
Radial equilateral symmetry
The radius of a hypersphere circumscribed about a regular polytope is the distance from the polytope's center to one of the vertices, and for the tesseract this radius is equal to its edge length; the diameter of the sphere, the length of the diagonal between opposite vertices of the tesseract, is twice the edge length. Only a few uniform polytopes have this property, including the four-dimensional tesseract and 24-cell, the three-dimensional cuboctahedron, and the two-dimensional hexagon. In particular, the tesseract is the only hypercube (other than a zero-dimensional point) that is radially equilateral. The longest vertex-to-vertex diagonal of an n-dimensional hypercube of unit edge length is √n, which for the square is √2, for the cube is √3, and only for the tesseract is √4 = 2 edge lengths.
An axis-aligned tesseract inscribed in a unit-radius 3-sphere has vertices with coordinates (±1/2, ±1/2, ±1/2, ±1/2).
Properties
For a tesseract with side length s:
Hypervolume (4D): H = s^4
Surface "volume" (3D): SV = 8s^3
Face diagonal: d_2 = √2 s
Cell diagonal: d_3 = √3 s
4-space diagonal: d_4 = 2s
As a configuration
This configuration matrix represents the tesseract. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole tesseract, so the diagonal reads off the f-vector (16, 32, 24, 8):

$\begin{bmatrix} 16 & 4 & 6 & 4 \\ 2 & 32 & 3 & 3 \\ 4 & 4 & 24 & 2 \\ 8 & 12 & 6 & 8 \end{bmatrix}$

The nondiagonal numbers say how many of the column's element occur in or at the row's element. For example, the 2 in the first column of the second row indicates that there are 2 vertices in (i.e., at the extremes of) each edge; the 4 in the second column of the first row indicates that 4 edges meet at each vertex.
The entries of the bottom row left of the diagonal give the f-vector of the facets, here cubes: (8, 12, 6). The row above it, left of the diagonal, gives the ridge elements (the facets of the cube), here squares: (4, 4).
The entries of the top row right of the diagonal give the f-vector of the vertex figure, here a tetrahedron: (4, 6, 4). The row below it gives the vertex-figure ridge, here a triangle: (3, 3).
Projections
It is possible to project tesseracts into three- and two-dimensional spaces, similarly to projecting a cube into two-dimensional space.
The cell-first parallel projection of the tesseract into three-dimensional space has a cubical envelope. The nearest and farthest cells are projected onto the cube, and the remaining six cells are projected onto the six square faces of the cube.
The face-first parallel projection of the tesseract into three-dimensional space has a cuboidal envelope. Two pairs of cells project to the upper and lower halves of this envelope, and the four remaining cells project to the side faces.
The edge-first parallel projection of the tesseract into three-dimensional space has an envelope in the shape of a hexagonal prism. Six cells project onto rhombic prisms, which are laid out in the hexagonal prism in a way analogous to how the faces of the 3D cube project onto six rhombs in a hexagonal envelope under vertex-first projection. The two remaining cells project onto the prism bases.
The vertex-first parallel projection of the tesseract into three-dimensional space has a rhombic dodecahedral envelope. Two vertices of the tesseract are projected to the origin. There are exactly two ways of dissecting a rhombic dodecahedron into four congruent rhombohedra, giving a total of eight possible rhombohedra, each a projected cube of the tesseract. This projection is also the one with maximal volume. One set of projection vectors is u = (1, 1, −1, −1), v = (−1, 1, −1, 1), w = (1, −1, −1, 1).
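Assuming the projection vectors just quoted, a short Python check confirms that exactly two antipodal vertices of the (±1)-coordinate tesseract land on the origin, consistent with the 14-vertex rhombic dodecahedral envelope.

```python
from itertools import product

# Vertex-first parallel projection of the (+-1)-tesseract onto 3D,
# using the three mutually orthogonal projection vectors quoted above.
u = (1, 1, -1, -1)
v = (-1, 1, -1, 1)
w = (1, -1, -1, 1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

images = {p: (dot(p, u), dot(p, v), dot(p, w))
          for p in product((-1, 1), repeat=4)}

# Exactly the two antipodal vertices (1,1,1,1) and (-1,-1,-1,-1) map to
# the origin; the remaining 14 vertices map onto the 14 vertices of a
# rhombic dodecahedron.
at_origin = [p for p, q in images.items() if q == (0, 0, 0)]
assert sorted(at_origin) == [(-1, -1, -1, -1), (1, 1, 1, 1)]
```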
Tessellation
The tesseract, like all hypercubes, tessellates Euclidean space. The self-dual tesseractic honeycomb consisting of 4 tesseracts around each face has Schläfli symbol {4,3,3,4}. Hence, the tesseract has a dihedral angle of 90°.
The tesseract's radial equilateral symmetry makes its tessellation the unique regular body-centered cubic lattice of equal-sized spheres, in any number of dimensions.
Related polytopes and honeycombs
The tesseract is 4th in the series of hypercubes:
The tesseract (8-cell) is the third in the sequence of 6 convex regular 4-polytopes (in order of size and complexity).
As a uniform duoprism, the tesseract exists in a sequence of uniform duoprisms: {p}×{4}.
The regular tesseract, along with the 16-cell, exists in a set of 15 uniform 4-polytopes with the same symmetry. The tesseract {4,3,3} exists in a sequence of regular 4-polytopes and honeycombs, {p,3,3} with tetrahedral vertex figures, {3,3}. The tesseract is also in a sequence of regular 4-polytope and honeycombs, {4,3,p} with cubic cells.
The regular complex polytope 4{4}2, , in $\mathbb{C}^2$ has a real representation as a tesseract or 4-4 duoprism in 4-dimensional space. 4{4}2 has 16 vertices and 8 4-edges. Its symmetry is 4[4]2, order 32. It also has a lower-symmetry construction, , or 4{}×4{}, with symmetry 4[2]4, order 16. This is the symmetry if the red and blue 4-edges are considered distinct.
In popular culture
Since their discovery, four-dimensional hypercubes have been a popular theme in art, architecture, and science fiction. Notable examples include:
"And He Built a Crooked House", Robert Heinlein's 1940 science fiction story featuring a building in the form of a four-dimensional hypercube. This and Martin Gardner's "The No-Sided Professor", published in 1946, are among the first in science fiction to introduce readers to the Moebius band, the Klein bottle, and the hypercube (tesseract).
Crucifixion (Corpus Hypercubus), a 1954 oil painting by Salvador Dalí featuring a four-dimensional hypercube unfolded into a three-dimensional Latin cross.
The Grande Arche, a monument and building near Paris, France, completed in 1989. According to the monument's engineer, Erik Reitzel, the Grande Arche was designed to resemble the projection of a hypercube.
Fez, a video game where one plays a character who can see beyond the two dimensions other characters can see, and must use this ability to solve platforming puzzles. Features "Dot", a tesseract who helps the player navigate the world and tells how to use abilities, fitting the theme of seeing beyond human perception of known dimensional space.
The word tesseract has been adopted for numerous other uses in popular culture, including as a plot device in works of science fiction, often with little or no connection to the four-dimensional hypercube; see Tesseract (disambiguation).
See also
Mathematics and art
Notes
References
F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss (1995) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, Wiley-Interscience Publication
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, Mathematische Zeitschrift 46 (1940) 380–407, MR 2,10]
(Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss (2008) The Symmetries of Things, (Chapter 26. pp. 409: Hemicubes: 1n1)
T. Gosset (1900) On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan.
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966)
Victor Schlegel (1886) Ueber Projectionsmodelle der regelmässigen vier-dimensionalen Körper, Waren.
External links
ken perlin's home page A way to visualize hypercubes, by Ken Perlin
Some Notes on the Fourth Dimension includes animated tutorials on several different aspects of the tesseract, by Davide P. Cervone
Tesseract animation with hidden volume elimination
Algebraic topology
Regular 4-polytopes
Cubes | Tesseract | Mathematics | 3,186 |
38,635,448 | https://en.wikipedia.org/wiki/M%201-42 | M 1-42 is a planetary nebula located in the constellation of Sagittarius, 10 000 light-years away from Earth.
Nickname
The nebula has been nicknamed the "Eye of Sauron Nebula" due to its resemblance to the Eye of Sauron in the Lord of the Rings films.
References
Planetary nebulae
Sagittarius (constellation) | M 1-42 | Astronomy | 67 |
8,425,153 | https://en.wikipedia.org/wiki/Belgian%20Society%20of%20Biochemistry%20and%20Molecular%20Biology | The Belgian Society of Biochemistry and Molecular Biology (BMB) is a Belgian non-profit organization, concerned with biochemistry and molecular biology.
The BMB was created, based on an initiative of Marcel Florkin, so a Belgian society could join the new International Union of Biochemistry. The first charter of the society was drafted by Edouard J. Bigwood, Jean Brachet, Christian de Duve, Marcel Florkin, Lucien Massart, Paul Putzeys, Laurent Vandendriessche and Claude Lièbecq. The first general assembly was held on 12 January 1952, and the first President of the society was Marcel Florkin, with Claude Lièbecq as secretary and treasurer.
See also
National Fund for Scientific Research
References
External links
Belgian Society of Biochemistry and Molecular Biology
Biochemistry organizations
Biology societies
Chemistry societies
Molecular biology organizations
Scientific organisations based in Belgium | Belgian Society of Biochemistry and Molecular Biology | Chemistry,Biology | 175 |
14,688,422 | https://en.wikipedia.org/wiki/Centre%20for%20Development%20and%20the%20Environment | Centre for Development and the Environment (, SUM) is a research centre at the University of Oslo. The overarching goal of SUM is to conduct interdisciplinary research, teaching, and dissemination on development and environmental issues, with a particular focus on the interconnections between development and the environment. SUM is organized as a centre without faculty affiliation, directly under the university board. The centre was established on January 1, 1990, at the initiative of the Ministry of Culture and Science and the Norwegian Research Council for Science and the Humanities (NAVF). The initiative came in the wake of the active role Norway played in the World Commission on Environment and Development, which launched the Brundtland Report, Our Common Future, in 1987.
In the aftermath of the Brundtland Report, a research centre was established at each of the Norwegian core universities. Today, only SUM remains. SUM was established by merging three previously independent entities: the Council for Natural and Environmental Sciences (RNM), the Programme for Development Research in the Oslo Region (PUFO), and the Centre for International Development Studies (SIU). In 2000, the Programme for Research and Investigation for a Sustainable Society (ProSus) was incorporated into SUM. The legacy of the Brundtland Report is deeply rooted in SUM's mandate for interdisciplinary research and education on global development and environmental issues. Today, approximately 50 employees are associated with the centre.
SUM will play a central role in the university's new interdisciplinary initiative on sustainability, the Centre for Global Sustainability. The new centre will be established as a unit under the university board with the goal of strengthening and facilitating interdisciplinary research, education, and dissemination on sustainability. The centre is to be a meeting hub for researchers, students, external partners, and guests. As of May 2025, the new centre will be established virtually, with SUM at the forefront and in collaboration with several other initiatives and units at the university. By the end of 2027, the centre will be physically established and move into what will become the Sustainability House on the Blindern campus.
Research
SUM's vision is to promote groundbreaking, independent, and critical interdisciplinary research on sustainability, focusing on global and local challenges and crises related to health, welfare, nature, and society. The research at SUM seeks to be power-critical and globally oriented, challenge established views, and draw on interdisciplinary perspectives and approaches.
Initially, in addition to several smaller projects, SUM had three major interdisciplinary research programmes: 'The Programme for Health, Population, Development' (HEBUT), the 'Norwegian-Indonesian Rain Forest and Resource Management Project' (NORINDRA), and 'Environment and Development in Mali'. From 2003, the research was organized into three programmes that have largely shaped today's research groups at SUM. The programmes were: Local changes in developing countries; Culturally based attitudes towards the environment and development; Global governance for environment and development. In 2024, the centre has the following research groups:
- Sustainable consumption and energy equity
- Poverty reduction and the 2030 Agenda for Sustainable Development
- Culture, ethics, and sustainability
- Global Health Politics
- FoodSoil: Sustainable food systems
- Governance for sustainable development
- Rural futures
Additionally, the Research Centre for Socially Inclusive Energy Transition (INCLUDE) is a part of SUM (see below).
Networks
For 28 years, from 1996 to 2024, SUM was the host institution for the Network for Asian Studies. The Network for Asian Studies was a national research network that promoted studies and research on Asia. From 2008 to 2020, SUM was the host institution for NORLANET, a similar network for researchers with Latin America as their research area.
In 2007, SUM initiated the establishment of the Arne Næss Professorship in Global Justice and Environment. The award was established with support from UiO and others and was led by Nina Witoszek until 2024. The professorship is annually awarded to an internationally recognised researcher, leader, or activist. James Lovelock was the first to be appointed as the Arne Næss Professor.
From 2009 to 2019, SUM had a framework agreement with the Norwegian Ministry of Foreign Affairs for operating the secretariat for The Trust Fund for Environmentally and Socially Sustainable Development (TFESSD) funded by Norway and Finland. Desmond McNeill, and later Dan Banik, were co-chairs of the reference group together with Kristalina Georgieva at the World Bank.
From 2011 to 2014, the current centre leader at SUM, Sidsel Roalkvam, led the reference group "The Lancet - University of Oslo Commission on Global Governance for Health," an international commission established on the initiative of UiO, Harvard University, and the journal The Lancet. Desmond McNeill was a member of the commission led by UiO's rector Ole Petter Ottersen. As a follow-up to the commission's work, SUM established ‘The Collective for the Political Determinants of Health,' an international and interdisciplinary group of researchers and practitioners.
In 2016, Professor at SUM Dan Banik established the Oslo SDG Initiative, a network and meeting point for education, research, and dissemination on the UN's 2030 Agenda and the Sustainable Development Goals. The initiative aims to bridge the gap between research and policy. Through the initiative, regular dialogue forums and seminars are organized to communicate research results and create shared platforms for governments, civil society, business, and academia to discuss issues related to the 2030 Agenda.
SUM is the host institution for INCLUDE, a social science Research Centre for Environment-Friendly Energy (FME) funded by the Norwegian Research Council. INCLUDE was established in 2019, led by SUM's Professor Tanja Winther. The purpose of INCLUDE is to produce knowledge about how a socially just low-emission society can be realized through inclusive processes and close collaboration between research and the public, private, and voluntary sectors.
Education
SUM offers university education at several levels. Until 2003, SUM offered three undergraduate courses (in Norwegian): “People and the Environment," "Environment and Development," and "Environmental Protection and Management." SUM also offered some graduate-level courses, and several master's students from various departments at UiO were associated with research groups at SUM through scholarship schemes. In 2003, SUM established its own master's programme, Culture, Environment, and Sustainability, in collaboration with the Faculty of Humanities at the University of Oslo. The master's programme is now called Development, Environment, and Cultural Change and is a two-year full-time study. Each year, the master's programme receives several hundred applicants, of which around 20 students are admitted. The student group consists of approximately half Norwegian and half international students.
In 2005, SUM received formal status as a Research School. The goal of SUM's Research School is to create a forum for PhD candidates that transcends disciplinary boundaries, challenges dominant values and perspectives, and fosters close collaboration with other departments at UiO and abroad. In addition to regular seminars for PhD students, the Research School organizes short, intensive PhD courses on a wide range of topics.
In 2015, UiO launched its first international MOOC (massive open online course) in collaboration with Stanford University, titled "What Works: Promising Practices in International Development," led by Dan Banik.
As part of the International Summer School, SUM offered the course "Energy Planning and Sustainable Development" until 2019.
In 2022, SUM organized a continuing education course in local sustainable transition management, the first of its kind at the University of Oslo. The course is aimed at professionals and leaders in the public, private, and voluntary sectors who have, or wish to take, responsibility for local transition projects or other development work aimed at sustainable transformation.
In 2023, the University of Oslo launched a sustainability certificate at the bachelor's level, which SUM leads and is mainly responsible for organizing and coordinating. The certificate provides students with an interdisciplinary understanding of challenges related to sustainability and just and sustainable change.
Research communication and dissemination
In line with its mandate, SUM places great emphasis on dissemination. SUM's researchers communicate their work through publication of books and articles in international journals, as well as in debates, panel discussions, presentations, opinion pieces, and events for the broader public.
The leaders of SUM's research groups in public discourse
Mariel Aguilar-Støen, professor and social geographer at SUM, has a public voice on issues related to meat production, the emergence of infectious diseases and pandemic viruses, Latin America in general, and Guatemala in particular. She is currently in charge of the Research School at SUM.
Professor and political scientist Dan Banik actively participates in public discourse on development, poverty reduction, and the UN's sustainable development goals. Among other things, Dan Banik has his own podcast, "In Pursuit of Development," where he invites researchers, politicians, and activists to discuss and converse on current topics.
Benedicte Bull, professor and political scientist, is particularly prominent in the public sphere. She frequently appears as a guest and expert commentator on television and radio, speaking about current issues in Latin American politics, economics, and development, with a particular focus on Venezuela.
Researcher Arve Hansen communicates research on sustainable consumption, food and meat consumption, and social conditions and development in Asia. He regularly writes opinion pieces in newspapers and is a guest on radio programmes.
Professor and cultural historian Karen Victoria Lykke is an active contributor to public debates about meat, agriculture, and Norwegian food production. She is a frequent lecturer and prolific writer.
Katerini Storeng, medical anthropologist and professor at SUM, is active in research dissemination. She regularly participates in panel discussions, conferences, and debates where global health policy issues are discussed.
Tanja Winther, social anthropologist, electrical engineer, and professor at SUM, is an important voice on issues concerning the social and cultural aspects of energy. She communicates research through op-eds, YouTube, and radio participation and is an active lecturer and panelist.
Directors
- Nils Christian Stenseth (1990–1992)
- Desmond McNeill (1992–2001)
- Bente Herstad (2001–2007)
- Kristi Anne Stølen (2007–2017)
- Sidsel Roalkvam (2017 - )
References
External links
Official Centre for Development and the Environment website
Environmental research institutes
Research institutes in Norway
Social science research institutes
University of Oslo
Environmental organizations established in 1990
1990 establishments in Norway | Centre for Development and the Environment | Environmental_science | 2,112 |
199,081 | https://en.wikipedia.org/wiki/Period%207%20element | A period 7 element is one of the chemical elements in the seventh row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behavior of the elements as their atomic number increases: a new row is begun when chemical behavior begins to repeat, meaning that elements with similar behavior fall into the same vertical columns. The seventh period contains 32 elements, tied for the most with period 6, beginning with francium and ending with oganesson, the heaviest element currently discovered. As a rule, period 7 elements fill their 7s shells first, then their 5f, 6d, and 7p shells in that order, but there are exceptions, such as uranium.
Properties
All period 7 elements are radioactive. This period contains the actinides, which include plutonium, the last naturally occurring element; subsequent elements must be created artificially. While the first five of these synthetic elements (americium through einsteinium) are now available in macroscopic quantities, most are extremely rare, having only been prepared in microgram amounts or less. The later transactinide elements have only been identified in laboratories in batches of a few atoms at a time.
Though the rarity of many of these elements means that experimental results are not many, their periodic and group trends are less well defined than other periods. Whilst francium and radium do show typical properties of their respective groups, actinides display a much greater variety of behavior and oxidation states than the lanthanides. These peculiarities are due to a variety of factors, including a large degree of spin–orbit coupling and relativistic effects, ultimately caused by the very high electric charge of their massive nuclei. Periodicity mostly holds throughout the 6d series and is predicted also for moscovium and livermorium, but the other four 7p elements, nihonium, flerovium, tennessine, and oganesson, are predicted to have very different properties from those expected for their groups.
Elements
Atomic number | Symbol | Name | Block | Electron configuration | Occurrence
87 | Fr | Francium | s-block | [Rn] 7s1 | From decay
88 | Ra | Radium | s-block | [Rn] 7s2 | From decay
89 | Ac | Actinium | f-block | [Rn] 6d1 7s2 (*) | From decay
90 | Th | Thorium | f-block | [Rn] 6d2 7s2 (*) | Primordial
91 | Pa | Protactinium | f-block | [Rn] 5f2 6d1 7s2 (*) | From decay
92 | U | Uranium | f-block | [Rn] 5f3 6d1 7s2 (*) | Primordial
93 | Np | Neptunium | f-block | [Rn] 5f4 6d1 7s2 (*) | From decay
94 | Pu | Plutonium | f-block | [Rn] 5f6 7s2 | From decay
95 | Am | Americium | f-block | [Rn] 5f7 7s2 | Synthetic
96 | Cm | Curium | f-block | [Rn] 5f7 6d1 7s2 (*) | Synthetic
97 | Bk | Berkelium | f-block | [Rn] 5f9 7s2 | Synthetic
98 | Cf | Californium | f-block | [Rn] 5f10 7s2 | Synthetic
99 | Es | Einsteinium | f-block | [Rn] 5f11 7s2 | Synthetic
100 | Fm | Fermium | f-block | [Rn] 5f12 7s2 | Synthetic
101 | Md | Mendelevium | f-block | [Rn] 5f13 7s2 | Synthetic
102 | No | Nobelium | f-block | [Rn] 5f14 7s2 | Synthetic
103 | Lr | Lawrencium | d-block | [Rn] 5f14 7s2 7p1 (*) | Synthetic
104 | Rf | Rutherfordium | d-block | [Rn] 5f14 6d2 7s2 | Synthetic
105 | Db | Dubnium | d-block | [Rn] 5f14 6d3 7s2 | Synthetic
106 | Sg | Seaborgium | d-block | [Rn] 5f14 6d4 7s2 | Synthetic
107 | Bh | Bohrium | d-block | [Rn] 5f14 6d5 7s2 | Synthetic
108 | Hs | Hassium | d-block | [Rn] 5f14 6d6 7s2 | Synthetic
109 | Mt | Meitnerium | d-block | [Rn] 5f14 6d7 7s2 (?) | Synthetic
110 | Ds | Darmstadtium | d-block | [Rn] 5f14 6d8 7s2 (?) | Synthetic
111 | Rg | Roentgenium | d-block | [Rn] 5f14 6d9 7s2 (?) | Synthetic
112 | Cn | Copernicium | d-block | [Rn] 5f14 6d10 7s2 (?) | Synthetic
113 | Nh | Nihonium | p-block | [Rn] 5f14 6d10 7s2 7p1 (?) | Synthetic
114 | Fl | Flerovium | p-block | [Rn] 5f14 6d10 7s2 7p2 (?) | Synthetic
115 | Mc | Moscovium | p-block | [Rn] 5f14 6d10 7s2 7p3 (?) | Synthetic
116 | Lv | Livermorium | p-block | [Rn] 5f14 6d10 7s2 7p4 (?) | Synthetic
117 | Ts | Tennessine | p-block | [Rn] 5f14 6d10 7s2 7p5 (?) | Synthetic
118 | Og | Oganesson | p-block | [Rn] 5f14 6d10 7s2 7p6 (?) | Synthetic
(?) Prediction
(*) Exception to the Madelung rule.
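As a sketch of the filling order stated in the lead (7s, then 5f, 6d, 7p), the idealized Madelung n + l rule can be generated in a few lines of Python. This reproduces only the rule itself; the starred configurations in the table above are exceptions to it.

```python
# Subshell filling order by the Madelung (n + l) rule: sort by n + l,
# breaking ties by smaller n. For period 7 this yields 7s before the
# n + l = 8 group 5f, 6d, 7p, in that order.
def madelung_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    labels = "spdfghik"
    ordered = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{labels[l]}" for n, l in ordered]

order = madelung_order()
print([s for s in order if s in ("7s", "5f", "6d", "7p")])
# -> ['7s', '5f', '6d', '7p']
```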
In many periodic tables, the f-block is erroneously shifted one element to the right, so that lanthanum and actinium become d-block elements, and Ce–Lu and Th–Lr form the f-block tearing the d-block into two very uneven portions. This is a holdover from early erroneous measurements of electron configurations. Lev Landau and Evgeny Lifshitz pointed out in 1948 that lutetium is not an f-block element, and since then physical, chemical, and electronic evidence has overwhelmingly supported that the f-block contains the elements La–Yb and Ac–No, as shown here and as supported by International Union of Pure and Applied Chemistry reports dating from 1988 and 2021.
S-block
Francium and radium make up the s-block elements of the 7th period.
Francium (Fr, atomic number 87) is a highly radioactive metal that decays into astatine, radium, or radon. It is one of the two least electronegative elements; the other is caesium. As an alkali metal, it has one valence electron. Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. It was the last element discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium and thorium ores, where the isotope francium-223 continually forms and decays. As little as 20–30 g (one ounce) exists at any given time throughout Earth's crust; the other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms.
Radium (Ra, atomic number 88) is an almost pure-white alkaline earth metal, but it readily oxidizes, reacting with nitrogen (rather than oxygen) on exposure to air, becoming black in color. All isotopes of radium are radioactive; the most stable is radium-226, which has a half-life of 1601 years and decays into radon. Due to such instability, radium luminesces, glowing a faint blue. Radium, in the form of radium chloride, was discovered by Marie and Pierre Curie in 1898. They extracted the radium compound from uraninite and published the discovery at the French Academy of Sciences five days later. Radium was isolated in its metallic state by Marie Curie and André-Louis Debierne through electrolysis of radium chloride in 1910. Since its discovery, it has given names such as radium A and radium C to several isotopes of other elements that are decay products of radium-226. In nature, radium is found in uranium ores in trace amounts as small as a seventh of a gram per ton of uraninite. Radium is not necessary for living things, and adverse health effects are likely when it is incorporated into biochemical processes due to its radioactivity and chemical reactivity.
Actinides
The actinide or actinoid (IUPAC nomenclature) series encompasses the 15 metallic chemical elements with atomic numbers from 89 to 103, actinium through lawrencium.
The actinide series is named after its first element actinium. All but one of the actinides are f-block elements, corresponding to the filling of the 5f electron shell; lawrencium, a d-block element, is also generally considered an actinide. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence.
Of the actinides, thorium and uranium occur naturally in substantial, primordial, quantities. Radioactive decay of uranium produces transient amounts of actinium, protactinium and plutonium, and atoms of neptunium and plutonium are occasionally produced from transmutation in uranium ores. The other actinides are purely synthetic elements, though the first six actinides after plutonium would have been produced at Oklo (and long since decayed away), and curium almost certainly previously existed in nature as an extinct radionuclide. Nuclear tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium.
All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These are used in nuclear reactors and nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors.
In presentations of the periodic table, the lanthanides and the actinides are customarily shown as two additional rows below the main body of the table, with placeholders or else a selected single element of each series (either lanthanum or lutetium, and either actinium or lawrencium, respectively) shown in a single cell of the main table, between barium and hafnium, and radium and rutherfordium, respectively. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table (32 columns) shows the lanthanide and actinide series in their proper columns, as parts of the table's sixth and seventh rows (periods).
Transactinides
Transactinide elements (also, transactinides, or super-heavy elements, or superheavies) are the chemical elements with atomic numbers greater than those of the actinides, the heaviest of which is lawrencium (103). All transactinides of period 7 have been discovered, up to oganesson (element 118).
Superheavies are also transuranic elements, that is, they have atomic numbers greater than that of uranium (92). The further distinction of having an atomic number greater than the actinides is significant in several ways:
The transactinide elements all have electrons in the 6d subshell in their ground state (and thus are placed in the d-block).
Even the longest-lived known isotopes of many transactinides have extremely short half-lives, measured in seconds or smaller units.
The element naming controversy involved the first five or six transactinides. These elements thus used three-letter systematic names for many years after their discovery was confirmed. (Usually, the three-letter symbols are replaced with two-letter symbols relatively soon after a discovery has been confirmed.)
Transactinides are radioactive and have only been obtained synthetically in laboratories. None of these elements has ever been collected in a macroscopic sample. Transactinides are all named after scientists, or important locations involved in the synthesis of the elements.
Chemistry Nobel Prize winner Glenn T. Seaborg, who first proposed the actinide concept which led to the acceptance of the actinide series, also proposed the existence of a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153. The transactinide seaborgium is named in his honor.
IUPAC defines an element to exist if its lifetime is longer than 10⁻¹⁴ seconds, the time needed to form an electron cloud.
Notes
References
Periods (periodic table) | Period 7 element | Chemistry | 3,404 |
75,843,179 | https://en.wikipedia.org/wiki/Gilbert%20Pollock | Gilbert Reid Pollock (24 August 1865 – 26 May 1954) was a Scottish iron engineer, businessman, and footballer who was a founder of Spanish club Sevilla FC and the author of the club's first-ever away goal.
Early life
Gilbert Reid Pollock was born on 24 August 1865, in Neilston, a village near Glasgow. After completing his studies, he began gaining a reputation as an accomplished young engineer and, having acquired enough professional experience, moved to Seville towards the end of the 1880s, where he was employed at the engineering works of the Portilla White foundry.
Thanks to a strong commercial relationship with the United Kingdom, Seville became home to a large British enclave, so once in the Andalusian capital, Pollock established connections not only with these people (mostly workers and directors of the shipping company MacAndrews, the Seville Water Works and the Portilla White foundry) but also with many locals.
Playing career
On 25 January 1890, Pollock, together with some of his co-workers and fellow Seville residents of British origin, attended an old café to mark the traditional Scottish celebration of Burns Night. That same evening, after consuming some beers and becoming concerned about their physical health and lifestyle, Pollock and the others began discussing the proposal of forming an Athletics Association, but after a short debate, they instead founded Sevilla FC to organize football matches regularly in order to exercise and feel more at home. To that end, they drew up the rough articles and the constitution of Sevilla FC, doing so while in a drunken state. They elected Edward F. Johnston, the British vice-consul in Seville, as the club's first president, while his fellow "Glasgowian" and foundry colleague Hugh MacColl was named captain; it was also decided that the club should play under the rules of the English FA.
Wasting no time, Sevilla FC began organizing several "kickabout" matches between the club's members at a nearby racecourse, where Pollock and the others would set up goalposts to play 70-minute five-a-side matches on Sundays, which at the time was a non-working day, although Pollock and the others were able to persuade their bosses to give them Saturday afternoons off. One of Pollock's colleagues in the Portilla White foundry, Isaías White Méndez, the then secretary of Sevilla FC, organized a match with a Recreation Club 80 miles away in Huelva, which took place on Saturday 8 March 1890, at the Hipódromo de Tablada (horse racing track). This match is now considered to be the first official football match in Spain, but Pollock missed it for unknown reasons.
Following the success of the first match, the clubs decided to play a return fixture three weeks later, this time in Huelva, on 7 April 1890, in front of a crowd of between 400 and 500, and it was Pollock who scored the opening goal after 25 minutes, thus becoming the first-ever player to score an away goal on Spanish soil. This time, however, Sevilla went on to lose as Huelva's side, fortified by "some athletes from the British colony of Rio-Tinto", fought back to win 2–1.
Later life
In 1896, Hugh MacColl, his former Portilla White colleague and Sevilla FC teammate, contacted Pollock to propose a business partnership following the sudden death of his first partner John T. Jameson. Pollock moved north to Sunderland to join him and become a partner in the firm, which was renamed MacColl & Pollock, a marine engine building firm based at Wreath Quay, on the north side of the River Wear, near Wearmouth Bridge. This company was once a prosperous global enterprise, employing 500 men at its peak and providing engines for almost 400 vessels between 1896 and 1931. It was also probably the last engine-building company to be developed on the River Wear, building its last engine in 1930, with the firm dealing only with repairs until it closed in 1935.
During the company's early years, Pollock and MacColl were prominent members of the prestigious Wearside Golf Club, but never lost their passion for football, a sport that they promoted among their workers at Wreath Quay, where engineers, platers, and boilermakers formed different teams to compete against each other or against teams belonging to other Wearside firms. Sevilla's adopted colours, red and white stripes, are believed to have been taken from Sunderland AFC, since their former captain MacColl and one of their founding members Pollock were living there at the time. MacColl died in 1915, but Pollock continued managing the company until it closed in 1935, before retiring to the Isle of Man.
It remains unclear when he married Annie Blackwell of Hyde, Cheshire, but according to Pollock's obituary published in the Sunderland Echo on 27 May 1954, the couple shared a son and three daughters, including Bessie Reid Pollock (1895–1959), who married Andrew Common in 1923.
Death
Widowed some years earlier, Pollock was living at the Fort Anne Hotel in Douglas when he died in 1954, aged 88, and was buried in Braddan Cemetery, where his grave can still be seen today. At first, it was not known where he had been buried, but his grave was finally found on the Isle of Man by the Sevilla club historian Javier Terenti, who had already found the gravestone of Edward F. Johnston in Elgin.
References
1865 births
1954 deaths
Scottish men's footballers
Footballers from Glasgow
Men's association football defenders
Marine engineers | Gilbert Pollock | Engineering | 1,116 |
24,202,456 | https://en.wikipedia.org/wiki/C16H17NO3 | {{DISPLAYTITLE:C16H17NO3}}
The molecular formula C16H17NO3 (molar mass: 271.31 g/mol, exact mass: 271.1208 u) may refer to:
A-68930
Crinine
Higenamine, or norcoclaurine
Normorphine
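As a quick arithmetic check of the molar mass quoted above, here is a hedged Python sketch; the atomic weights are conventional IUPAC values (chosen so the two-decimal result matches the quoted figure), and the function is an illustration, not a cheminformatics tool.

```python
# Conventional atomic weights in g/mol (older IUPAC values).
ATOMIC_WEIGHTS = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}

def molar_mass(composition: dict) -> float:
    """Sum atomic weight times element count over the composition."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

# C16H17NO3, as in the formula above.
print(round(molar_mass({"C": 16, "H": 17, "N": 1, "O": 3}), 2))  # 271.31
```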
Molecular formulas | C16H17NO3 | Physics,Chemistry | 74 |
56,598,976 | https://en.wikipedia.org/wiki/Oltmannsiellopsidaceae | Oltmannsiellopsidaceae is a family of green algae in the order Oltmannsiellopsidales.
It contains the genus Oltmannsiellopsis.
References
Oltmannsiellopsidales
Ulvophyceae families | Oltmannsiellopsidaceae | Biology | 53 |
18,346,830 | https://en.wikipedia.org/wiki/Online%20Centres%20Network | The Online Centres Network is a UK-based network which helps communities tackle social and digital exclusion.
Good Things Foundation coordinates the Online Centres Network of 5,000 community partners, who provide free or low-cost access to computers and the internet. The organisation also provides training and support to hundreds of volunteers, centre staff and community leaders, helping them to work within their own communities.
Over 2 million people have been helped to improve their skills through the Online Centres Network to date, with many learners also going on to further learning and increased employment opportunities.
In 2011 the management of the Online Centres Network (then known as UK online centres) was taken over by Good Things Foundation (formerly known as Tinder Foundation), a staff-owned mutual and social enterprise formed by the Sheffield-based team previously managing the UK online centres contract on behalf of Ufi Ltd. In July 2013, Online Centres Foundation became known as Tinder Foundation. Tinder Foundation officially received charity status in early 2016. In November 2016 Tinder Foundation rebranded as Good Things Foundation.
Good Things Foundation Chief Executive, Helen Milner, was inducted into the BIMA Digital Hall of Fame in 2012 alongside Sir Tim Berners-Lee, Stephen Fry, and others noted for their work in the digital arena.
Online learning
In April 2011, Good Things Foundation (then known as Online Centres Foundation) launched a new learning platform, Go ON, which was renamed Learn My Way in 2012 (http://www.learnmyway.com/). The website was developed by Good Things Foundation with the aim of bringing together all of the resources on the market for internet beginners, including those developed specifically by Good Things Foundation, and from other providers including the BBC and Digital Unite.
The new website contains five main sections:
Get ready, to address basic literacy and numeracy skills before tackling any online skills.
Get started, which includes fun engagement resources to help get first time learners started.
Online basics, the course that was developed in conjunction with BIS and Becta to provide learners with all of the skills they need to get started with computers and the internet.
Learn more, which includes a number of popular courses including Facebook and socialising online, Shopping online and Using a computer.
What next, which contains resources to help learners progress, including details on volunteering opportunities.
myguide, the original learning platform developed by Online Centres Foundation, ceased to exist in September 2011. The most popular courses that existed on myguide have been moved across to the new learning platform.
Get Online Week
Get Online Week is an annual national campaign run by Good Things Foundation throughout the Online Centres Network, which helps tens of thousands of people to improve their computer and internet skills each year.
Get Online Week has been going since 2007, when the organisation then known as UK online centres first marked out a date in October to bring digital inclusion to national attention. Since then the campaign has grown into a week-long annual celebration, with thousands of events taking place each year in centres and more unusual locations, bringing digital skills and know-how to everyone.
Past Get Online Week partners and sponsors have included Lloyds Banking Group, the BBC, Go ON UK, Post Offices, BT, Three Mobile, Google, Sky, and Facebook.
Digital inclusion statistics
There are currently 12.6 million people who lack basic digital skills and 5.9 million who have never used the internet before. These people are likely to be socially excluded as well as lacking in online skills. Good Things Foundation and the Online Centres Network, along with their partners, are aiming to combat this digital and social exclusion.
External links
Online Centres Network
Good Things Foundation
Learn my way
Helen Milner's blog
References
Non-profit organisations based in the United Kingdom
Organizations established in 2000
Virtual learning environments
Organisations based in Sheffield
Digital divide
Information technology charities based in the United Kingdom
Internet access | Online Centres Network | Technology | 774 |
17,443,489 | https://en.wikipedia.org/wiki/Tact%20%28psychology%29 | Tact is a term that B.F. Skinner used to describe a verbal operant which is controlled by a nonverbal stimulus (such as an object, event, or property of an object) and is maintained by nonspecific social reinforcement (praise).
Less technically, a tact is a label. For example, a child may see their pet dog and say "dog"; the nonverbal stimulus (dog) evoked the response "dog" which is maintained by praise (or generalized conditioned reinforcement) "you're right, that is a dog!"
Chapter five of Skinner's Verbal Behavior discusses the tact in depth. A tact is said to "make contact with" the world, and refers to behavior that is under the control of generalized reinforcement. The controlling antecedent stimulus is nonverbal, and constitutes some portion of "the whole of the physical environment."
The tact described by Skinner includes three important and related events, known as the three-term contingency: a stimulus, a response, and a consequence, in this case reinforcement. A verbal response is occasioned by the presence of a stimulus, such as when you say "ball" in the presence of a ball. In this scenario, "ball" is more likely to be reinforced by the listener than saying "cat", showing the importance of the third event, reinforcement, in relation to the stimulus (ball) and response ("ball"). Although the stimulus controls the response, it is the verbal community which establishes the stimulus' control over the verbal response of the speaker. For example, a child may say "ball" in the presence of a ball (stimulus), and the child's parent may respond "yes, that is a ball" (reinforcement), thereby increasing the probability that the child will say ball in the presence of a ball in the future. On the other hand, if the parent never responds to the child saying "ball" in the presence of a ball, then the probability of that response will decrease in the future.
A tact may be pure or impure. For example, if the environmental stimulus evokes the response, the tact would be considered pure. If the tact is evoked by a verbal stimulus the resulting tact would be considered impure. For example, if a child is shown a picture of a dog, and emits the response "dog" this would be an example of a pure tact. If a child is shown a picture of a dog, and is given the verbal instruction "what is this?" then the response "dog" would be considered an impure tact.
The tact can be extended, as in generic, metaphorical, metonymical, solecistic, nomination, and "guessing" tact. It can also be involved in abstraction. Lowe, Horne, Harris & Randle (2002) would be one example of recent work in tacts.
Extensions
The tact is said to be capable of generic extension. Generic extension is essentially an example of stimulus generalization. The novel stimulus contains all of the relevant features of the original stimulus. For example, we may see a red car and say "car" as well as see a white car and say "car". Different makes and models of cars will all evoke the same response "car".
Tacts can be extended metaphorically; in this case the novel stimulus has only some of the defining features of the original stimulus. For example, we describe something as "exploding with taste" by drawing on a property that an explosion shares with our response to having eaten something (perhaps a strong response, or a sudden one).
Tacts can undergo metonymical extension when some irrelevant but related feature of the original stimulus controls a response. In metonymical extension, one word often replaces another; we may replace a part for a whole. For example, saying "refrigerator" when shown a picture of a kitchen, or saying "White house" in place of "President."
When controlling variables unrelated to standard or immediate reinforcement take over control of the tact, it is said to be solecistically extended. Malapropisms, solecism and catachresis are examples of this.
Skinner notes things like serial order, or conspicuous features of an object, may come to play as nominative tacts. A proper name may arise as a result of the tact. For example, a house that is haunted becomes "The Haunted House" as a nominative extension to the tact of its being haunted.
A guess may seemingly be the emission of a response in the absence of controlling stimuli. Skinner notes that this may simply be a tact under more subtle or hidden controlling variables, although this is not always the case in something like guessing the landing side of a coin toss, where the possible alternatives are fixed and there is no subtle or hidden stimuli to control responses.
Special conditions affecting stimulus control
Skinner deals with factors that interfere with, or change, generalized reinforcement. It is these conditions which, in turn, affect verbal behavior which may depend largely or entirely on generalized reinforcement. In children with developmental disabilities, tacts may need intensive training procedures to develop. Factors such as deprivation, emotional conditions and personal history may interfere with or change verbal behavior. Skinner mentions alertness, irrelevant emotional variables, "special circumstances" surrounding particular listeners or speakers, etc. (He refers, for example, to the conditions said to produce objective and subjective responses.) We would now look at these as motivating operations/establishing conditions.
Under immersion conditions tacts will frequently emerge. However, in children with disabilities, more intensive training procedures are often needed.
Distortion
Distorted stimulus control may be minor as when a description (tact) is a slight exaggeration. Under stronger conditions of distortion, it may appear when the original stimulus is absent, as in the case of the response called a lie. Skinner notes that troubadours and fiction writers are perhaps both motivated by similar forms of tact distortion. Initially, they may recount real events, but as differential reinforcement affects the account we may see distortion and then total fabrication.
Tact training
Often, individuals with autism, developmental disabilities, or language delays have difficulty acquiring novel tacts. Many researchers in the field of verbal behavior and developmental disabilities have examined more intensive training procedures in order to teach tacts to these individuals. Specific types of prompts can be used in order to make a tact response more likely. For example, asking the student the question "what is this?" (an impure tact) has been used to prompt a correct tact response; this prompt can be faded until the learner can emit a pure tact. Echoic prompts (the teacher repeats the correct answer, which the learner must echo) have also been used to train tact responses. Kodak and Clements (2009) found that echoic training sessions before tact training were more effective at increasing independent tact responses.
Skinner (1957) suggested that verbal operants were functionally independent, meaning that after teaching one verbal operant the individual may not be able to emit the topographically same response under different stimulus conditions. For example, a child may be able to request water, but may not be able to tact water. Researchers are currently examining procedures that may facilitate the generalization across verbal operants. Some studies have indicated, for example, that after teaching a child to mand for items, they could then tact them as well without direct instruction. Multiple studies have found support for the emergence of tact responses without direct instruction. These teaching procedures are especially important for individuals with autism and developmental disabilities because the learner can gain additional skills without direct instruction time.
See also
Mand (psychology)
References
Behavioral concepts
Behaviorism | Tact (psychology) | Biology | 1,608 |
11,306,822 | https://en.wikipedia.org/wiki/Phyllosticta%20pseudocapsici | Phyllosticta pseudocapsici is a fungal plant pathogen infecting Jerusalem cherries.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
pseudocapsici
Fungi described in 1882
Taxa named by Casimir Roumeguère
Fungus species | Phyllosticta pseudocapsici | Biology | 61 |
2,399,097 | https://en.wikipedia.org/wiki/Numbering%20%28computability%20theory%29 | In computability theory a numbering is the assignment of natural numbers to a set of objects such as functions, rational numbers, graphs, or words in some formal language. A numbering can be used to transfer the idea of computability and related concepts, which are originally defined on the natural numbers using computable functions, to these different types of objects.
Common examples of numberings include Gödel numberings in first-order logic, the description numbers that arise from universal Turing machines and admissible numberings of the set of partial computable functions.
Definition and examples
A numbering of a set S is a surjective partial function from ℕ to S (Ershov 1999:477). The value of a numbering ν at a number i (if defined) is often written νi instead of the usual ν(i).
Examples of numberings include:
The set of all finite subsets of ℕ has a numbering γ, defined so that γ(0) = ∅ and so that, for each finite nonempty set A = {a1, ..., ak}, γ(n) = A where n = 2^a1 + ... + 2^ak (Ershov 1999:477). This numbering is a (partial) bijection; a short code sketch of this encoding follows the list.
A fixed Gödel numbering of the computable partial functions can be used to define a numbering W of the recursively enumerable sets, by letting W(i) be the domain of φi. This numbering will be surjective (like all numberings) but not injective: there will be distinct numbers that map to the same recursively enumerable set under W.
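A small Python sketch (an illustration added here, not part of the source) of the finite-subset numbering described above: a natural number n encodes the set of bit positions at which n has a 1, so γ(0) = ∅ and the map is a bijection between ℕ and the finite subsets of ℕ.

```python
def gamma(n: int) -> frozenset:
    """Decode n into the finite set of positions of its 1-bits."""
    return frozenset(i for i in range(n.bit_length()) if (n >> i) & 1)

def gamma_inverse(s) -> int:
    """Encode a finite set of naturals as the sum of 2**a over its elements."""
    return sum(2 ** a for a in s)

assert gamma(0) == frozenset()                # gamma(0) is the empty set
assert gamma(0b1011) == frozenset({0, 1, 3})  # 11 = 2**0 + 2**1 + 2**3
assert gamma_inverse({0, 1, 3}) == 11         # round-trips: the map is a bijection
```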
Types of numberings
A numbering is total if it is a total function. If the domain of a partial numbering is recursively enumerable then there always exists an equivalent total numbering (equivalence of numberings is defined below).
A numbering η is decidable if the set {(x, y) : η(x) = η(y)} is a decidable set.
A numbering η is single-valued if η(x) = η(y) if and only if x=y; in other words if η is an injective function. A single-valued numbering of the set of partial computable functions is called a Friedberg numbering.
Comparison of numberings
There is a preorder on the set of all numberings. Let ν1 and ν2 be two numberings. Then ν1 is reducible to ν2, written ν1 ≤ ν2, if there exists a computable function f such that ν1(i) = ν2(f(i)) for all i in the domain of ν1.
If ν1 ≤ ν2 and ν2 ≤ ν1, then ν1 is equivalent to ν2; this is written ν1 ≡ ν2.
Computable numberings
When the objects of the set S being numbered are sufficiently "constructive", it is common to look at numberings that can be effectively decoded (Ershov 1999:486). For example, if S consists of recursively enumerable sets, the numbering η is computable if the set of pairs (x,y) where y ∈ η(x) is recursively enumerable. Similarly, a numbering g of partial functions is computable if the relation R(x,y,z) = "[g(x)](y) = z" is partial recursive (Ershov 1999:487).
A computable numbering is called principal if every computable numbering of the same set is reducible to it. Both the set of all recursively enumerable subsets of ℕ and the set of all partial computable functions have principal numberings (Ershov 1999:487). A principal numbering of the set of partial recursive functions is known as an admissible numbering in the literature.
See also
Complete numbering
Cylindrification
Gödel numbering
Description number
References
Y.L. Ershov (1999), "Theory of numberings", Handbook of Computability Theory, Elsevier, pp. 473–506.
V.A. Uspenskiĭ, A.L. Semenov (1993), Algorithms: Main Ideas and Applications, Springer.
Theory of computation
Computability theory | Numbering (computability theory) | Mathematics | 785 |
3,799,749 | https://en.wikipedia.org/wiki/Porosimetry | Porosimetry is an analytical technique used to determine various quantifiable aspects of a material's porous structure, such as pore diameter, total pore volume, surface area, and bulk and absolute densities.
The technique involves the intrusion of a non-wetting liquid (often mercury) at high pressure into a material through the use of a porosimeter. The pore size can be determined based on the external pressure needed to force the liquid into a pore against the opposing force of the liquid's surface tension.
A force balance equation known as Washburn's equation for the above material having cylindrical pores is given as:
PL − PG = −4σ cos θ / DP
where:
PL = pressure of liquid
PG = pressure of gas
σ = surface tension of liquid
θ = contact angle of intrusion liquid
DP = pore diameter
Since the technique is usually performed within a vacuum, the initial gas pressure is zero. The contact angle of mercury with most solids is between 135° and 142°, so an average of 140° can be taken without much error. The surface tension of mercury at 20 °C under vacuum is 480 mN/m. With the various substitutions, the equation becomes DP = −4σ cos θ / PL; numerically, this gives DP ≈ 1.47 μm at PL = 1 MPa, scaling inversely with pressure.
As pressure increases, so does the cumulative pore volume. From the cumulative pore volume, one can find the pressure and pore diameter where 50% of the total volume has been added to give the median pore diameter.
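A hedged Python sketch of the arithmetic just described, using the stated mercury values (σ = 480 mN/m, θ = 140°, zero gas pressure); the median-pore helper is a nearest-point illustration, not a lab-grade interpolation.

```python
import math

SURFACE_TENSION = 0.480            # N/m, mercury at 20 °C under vacuum (from text)
CONTACT_ANGLE = math.radians(140)  # average mercury contact angle (from text)

def pore_diameter(pressure_pa: float) -> float:
    """Washburn equation with zero gas pressure: D = -4*sigma*cos(theta)/P."""
    return -4 * SURFACE_TENSION * math.cos(CONTACT_ANGLE) / pressure_pa

def median_pore_diameter(pressures_pa, cumulative_volumes):
    """Diameter at the pressure where ~50% of total volume has intruded."""
    half = cumulative_volumes[-1] / 2
    p50, _ = min(zip(pressures_pa, cumulative_volumes),
                 key=lambda pv: abs(pv[1] - half))
    return pore_diameter(p50)

print(pore_diameter(1e6))  # ~1.47e-06 m: 1 MPa intrudes ~1.47 um pores
```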
See also
BET theory, measurement of specific surface
Evapoporometry
Porosity
Wood's metal, also injected for pore structure impregnation and replica
References
Measurement
Scientific techniques
Porous media | Porosimetry | Physics,Materials_science,Mathematics,Engineering | 315 |
12,516,682 | https://en.wikipedia.org/wiki/Iron%20in%20biology | Iron is an important biological element. It is used both in the ubiquitous iron–sulfur proteins and, in vertebrates, in hemoglobin, which is essential for blood and oxygen transport.
Overview
Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in transport, storage and use of oxygen. Iron proteins are involved in electron transfer. The ubiquity of iron in life has led to the iron–sulfur world hypothesis that iron was a central component of the environment of early life.
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content.
Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III).
Biochemistry
Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores.
After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries it in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, transferrin is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin.
The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450. These compounds participate in transporting gases, building enzymes, and transferring electrons. Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase, lipoxygenases, and IRE-BP.
Hemoglobin is an oxygen carrier that occurs in red blood cells and gives them their color, transporting oxygen in the arteries from the lungs to the muscles where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy. Here the hemoglobin binds to carbon dioxide, produced when glucose is oxidized, which is transported through the veins by hemoglobin (predominantly as bicarbonate anions) back to the lungs where it is exhaled. In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to. When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue.
When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached to form a hydrogen bond with it. This results in the Fe2+ ion switching to a low-spin configuration, resulting in a 20% decrease in ionic radius so that now it can fit into the porphyrin ring, which becomes planar. (Additionally, this hydrogen bonding results in the tilting of the oxygen molecule, resulting in a Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges that would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin.) This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with larger oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa. Myoglobin, on the other hand, contains only one heme group and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen in the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at low partial pressures of oxygen found in muscle tissue. As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide.
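The cooperativity described here is commonly summarized by the Hill equation; the Python sketch below is a generic illustration of that model, and the parameters (hemoglobin p50 ≈ 26 torr with Hill coefficient n ≈ 2.8, myoglobin p50 ≈ 2.8 torr with n = 1) are textbook approximations assumed for the example, not values given in this article.

```python
def hill_saturation(p_o2: float, p50: float, n: float) -> float:
    """Fractional O2 saturation under the Hill model: p**n / (p50**n + p**n)."""
    return p_o2 ** n / (p50 ** n + p_o2 ** n)

# Assumed textbook-style parameters: hemoglobin is sigmoidal (cooperative),
# myoglobin hyperbolic (single heme group, no cooperativity).
for p in (20, 40, 100):  # low-tissue, venous-ish, and lung-ish O2 partial pressures
    hb = hill_saturation(p, 26, 2.8)
    mb = hill_saturation(p, 2.8, 1.0)
    print(f"pO2={p:>3} torr  Hb={hb:.2f}  Mb={mb:.2f}")
```

Running this shows the behavior the paragraph describes: myoglobin is nearly saturated even at low partial pressures, while hemoglobin's saturation rises steeply between tissue and lung pressures.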
Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but with much more strength, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a. The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen. The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen – with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide. Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate. Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows:
4 Cytc2+ + O2 + 8H+ (inside) → 4 Cytc3+ + 2 H2O + 4H+ (outside)
Although the heme proteins are the most important class of iron-containing proteins, the iron–sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always have high-spin iron. The simplest such compound is rubredoxin, which has only one iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron–sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes.
The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times.
Vertebrate metabolism
In vertebrates, iron is an essential component of hemoglobin, the oxygen transport protein.
Human body iron stores
Most well-nourished people in industrialized countries have 4 to 5 grams of iron in their bodies (~38 mg iron/kg body weight for women and ~50 mg iron/kg body for men). Of this, about 2.5 g is contained in the hemoglobin needed to carry oxygen through the blood (around 0.5 mg of iron per mL of blood), and most of the rest (approximately 2 grams in adult men, and somewhat less in women of childbearing age) is contained in ferritin complexes that are present in all cells, but most common in bone marrow, liver, and spleen. The liver stores of ferritin are the primary physiologic source of reserve iron in the body. The reserves of iron in industrialized countries tend to be lower in children and women of child-bearing age than in men and in the elderly. Women who must use their stores to compensate for iron lost through menstruation, pregnancy or lactation have lower non-hemoglobin body stores, which may consist of 500 mg, or even less.
Of the body's total iron content, about 400 mg is devoted to cellular proteins that use iron for important cellular processes like storing oxygen (myoglobin) or performing energy-producing redox reactions (cytochromes). A relatively small amount (3–4 mg) circulates through the plasma, bound to transferrin. Because of its toxicity, free soluble iron is kept in low concentration in the body.
Iron deficiency first affects the storage of iron in the body, and depletion of these stores is thought to be relatively asymptomatic, although some vague and non-specific symptoms have been associated with it. Since iron is primarily required for hemoglobin, iron deficiency anemia is the primary clinical manifestation of iron deficiency. Iron-deficient people will suffer or die from organ damage well before their cells run out of the iron needed for intracellular processes like electron transport.
Macrophages of the reticuloendothelial system store iron as part of the process of breaking down and processing hemoglobin from engulfed red blood cells. Iron is also stored as a pigment called hemosiderin, which is an ill-defined deposit of protein and iron, created by macrophages where excess iron is present, either locally or systemically, e.g., among people with iron overload due to frequent blood cell destruction and the necessary transfusions their condition calls for. If systemic iron overload is corrected, over time the hemosiderin is slowly resorbed by the macrophages.
Mechanisms of iron regulation
Human iron homeostasis is regulated at two different levels. Systemic iron levels are balanced by the controlled absorption of dietary iron by enterocytes, the cells that line the interior of the intestines, and the uncontrolled loss of iron from epithelial sloughing, sweat, injuries and blood loss. In addition, systemic iron is continuously recycled. Cellular iron levels are controlled differently by different cell types due to the expression of particular iron regulatory and transport proteins.
Systemic iron regulation
Dietary iron uptake
The absorption of dietary iron is a variable and dynamic process. The amount of iron absorbed compared to the amount ingested is typically low, but may range from 5% to as much as 35% depending on circumstances and type of iron. The efficiency with which iron is absorbed varies depending on the source. Generally, the best-absorbed forms of iron come from animal products. Absorption of dietary iron in iron salt form (as in most supplements) varies somewhat according to the body's need for iron, and is usually between 10% and 20% of iron intake. Absorption of iron from animal products, and some plant products, is in the form of heme iron, and is more efficient, allowing absorption of from 15% to 35% of intake. Heme iron in animals is from blood and heme-containing proteins in meat and mitochondria, whereas in plants, heme iron is present in mitochondria in all cells that use oxygen for respiration.
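A hedged Python sketch of the arithmetic implied by the absorption ranges quoted above; using the midpoints of those ranges (25% for heme iron, 15% for iron salts and non-heme iron) is my simplification, and the meal composition is hypothetical.

```python
# Midpoint absorption fractions taken from the ranges quoted above;
# real absorption varies with iron stores, inflammation, and diet.
HEME_ABSORPTION = 0.25      # heme iron: 15-35% absorbed
NONHEME_ABSORPTION = 0.15   # iron salts / non-heme iron: roughly 10-20% absorbed

def absorbed_iron_mg(heme_mg: float, nonheme_mg: float) -> float:
    """Rough estimate of absorbed iron (mg) from a meal's heme/non-heme split."""
    return heme_mg * HEME_ABSORPTION + nonheme_mg * NONHEME_ABSORPTION

# Hypothetical meal: 2 mg heme iron plus 10 mg non-heme iron.
print(absorbed_iron_mg(2, 10))  # ~2.0 mg
```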
Like most mineral nutrients, the majority of the iron absorbed from digested food or supplements is absorbed in the duodenum by enterocytes of the duodenal lining. These cells have special molecules that allow them to move iron into the body. To be absorbed, dietary iron must either be part of a protein, such as a heme protein, or be in its ferrous Fe2+ form. A ferric reductase enzyme on the enterocytes' brush border, duodenal cytochrome B (Dcytb), reduces ferric Fe3+ to Fe2+. A protein called divalent metal transporter 1 (DMT1), which can transport several divalent metals across the plasma membrane, then transports iron across the enterocyte's cell membrane into the cell. If the iron is bound to heme, it is instead transported across the apical membrane by heme carrier protein 1 (HCP1).
These intestinal lining cells can then either store the iron as ferritin, which is accomplished by Fe2+ binding to apoferritin (in which case the iron will leave the body when the cell dies and is sloughed off into feces), or the cell can release it into the body via the only known iron exporter in mammals, ferroportin. Hephaestin, a ferroxidase that can oxidize Fe2+ to Fe3+ and is found mainly in the small intestine, helps ferroportin transfer iron across the basolateral end of the intestine cells. In contrast, ferroportin is post-translationally repressed by hepcidin, a 25-amino acid peptide hormone. The body regulates iron levels by regulating each of these steps. For instance, enterocytes synthesize more Dcytb, DMT1 and ferroportin in response to iron deficiency anemia. Iron absorption from diet is enhanced in the presence of vitamin C and diminished by excess calcium, zinc, or manganese.
The human body's rate of iron absorption appears to respond to a variety of interdependent factors, including total iron stores, the extent to which the bone marrow is producing new red blood cells, the concentration of hemoglobin in the blood, and the oxygen content of the blood. The body also absorbs less iron during times of inflammation, in order to deprive bacteria of iron. Recent discoveries demonstrate that hepcidin regulation of ferroportin is responsible for the syndrome of anemia of chronic disease.
Iron recycling and loss
Most of the iron in the body is hoarded and recycled by the reticuloendothelial system, which breaks down aged red blood cells. In contrast to iron uptake and recycling, there is no physiologic regulatory mechanism for excreting iron. People lose a small but steady amount by gastrointestinal blood loss, sweating and by shedding cells of the skin and the mucosal lining of the gastrointestinal tract. The total amount of loss for healthy people in the developed world amounts to an estimated average of 1 mg a day for men, and 1.5–2 mg a day for women with regular menstrual periods. People with gastrointestinal parasitic infections, more commonly found in developing countries, often lose more. Those who cannot regulate absorption well enough get disorders of iron overload. In these diseases, the toxicity of iron starts overwhelming the body's ability to bind and store it.
Cellular iron regulation
Iron import
Most cell types take up iron primarily through receptor-mediated endocytosis via transferrin receptor 1 (TFR1), transferrin receptor 2 (TFR2) and GAPDH. TFR1 has a 30-fold higher affinity for transferrin-bound iron than TFR2 and thus is the main player in this process. The higher order multifunctional glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) also acts as a transferrin receptor. Transferrin-bound ferric iron is recognized by these transferrin receptors, triggering a conformational change that causes endocytosis. Iron then enters the cytoplasm from the endosome via importer DMT1 after being reduced to its ferrous state by a STEAP family reductase.
Alternatively, iron can enter the cell directly via plasma membrane divalent cation importers such as DMT1 and ZIP14 (Zrt-Irt-like protein 14). Again, iron enters the cytoplasm in the ferrous state after being reduced in the extracellular space by a reductase such as STEAP2, STEAP3 (in red blood cells), Dcytb (in enterocytes) and SDR2.
The labile iron pool
In the cytoplasm, ferrous iron is found in a soluble, chelatable state which constitutes the labile iron pool (~0.001 mM). In this pool, iron is thought to be bound to low-mass compounds such as peptides, carboxylates and phosphates, although some might be in a free, hydrated form (aqua ions). Alternatively, iron ions might be bound to specialized proteins known as metallochaperones. Specifically, poly-r(C)-binding proteins PCBP1 and PCBP2 appear to mediate transfer of free iron to ferritin (for storage) and non-heme iron enzymes (for use in catalysis). The labile iron pool is potentially toxic due to iron's ability to generate reactive oxygen species. Iron from this pool can be taken up by mitochondria via mitoferrin to synthesize Fe-S clusters and heme groups.
The storage iron pool
Iron can be stored in ferritin as ferric iron due to the ferroxidase activity of the ferritin heavy chain. Dysfunctional ferritin may accumulate as hemosiderin, which can be problematic in cases of iron overload. The ferritin storage iron pool is much larger than the labile iron pool, ranging in concentration from 0.7 mM to 3.6 mM.
Iron export
Iron export occurs in a variety of cell types, including neurons, red blood cells, macrophages and enterocytes. The latter two are especially important since systemic iron levels depend upon them. There is only one known iron exporter, ferroportin. It transports ferrous iron out of the cell, generally aided by ceruloplasmin and/or hephaestin (mostly in enterocytes), which oxidize iron to its ferric state so it can bind ferritin in the extracellular medium. Hepcidin causes the internalization of ferroportin, decreasing iron export. In addition, hepcidin seems to downregulate both TFR1 and DMT1 through an unknown mechanism. Another player assisting ferroportin in effecting cellular iron export is GAPDH. A specific post-translationally modified isoform of GAPDH is recruited to the surface of iron-loaded cells, where it recruits apo-transferrin in close proximity to ferroportin so as to rapidly chelate the iron extruded.
The expression of hepcidin, which only occurs in certain cell types such as hepatocytes, is tightly controlled at the transcriptional level and it represents the link between cellular and systemic iron homeostasis due to hepcidin's role as "gatekeeper" of iron release from enterocytes into the rest of the body. Erythroblasts produce erythroferrone, a hormone which inhibits hepcidin and so increases the availability of iron needed for hemoglobin synthesis.
Translational control of cellular iron
Although some control exists at the transcriptional level, the regulation of cellular iron levels is ultimately controlled at the translational level by iron-responsive element-binding proteins IRP1 and especially IRP2. When iron levels are low, these proteins are able to bind to iron-responsive elements (IREs). IREs are stem loop structures in the untranslated regions (UTRs) of mRNA.
Both ferritin and ferroportin contain an IRE in their 5' UTRs, so that under iron deficiency their translation is repressed by IRP2, preventing the unnecessary synthesis of storage protein and the detrimental export of iron. In contrast, TFR1 and some DMT1 variants contain 3' UTR IREs, which bind IRP2 under iron deficiency, stabilizing the mRNA, which guarantees the synthesis of iron importers.
Marine systems
Iron plays an essential role in marine systems and can act as a limiting nutrient for planktonic activity. Because of this, a large decrease in iron may lead to a decrease in growth rates in phytoplanktonic organisms such as diatoms. Iron can also be oxidized by marine microbes under conditions that are high in iron and low in oxygen.
Iron can enter marine systems through adjoining rivers and directly from the atmosphere. Once iron enters the ocean, it can be distributed throughout the water column through ocean mixing and through recycling on the cellular level. In the Arctic, sea ice plays a major role in the storage and distribution of iron in the ocean, depleting oceanic iron as it freezes in the winter and releasing it back into the water when thawing occurs in the summer. The iron cycle shifts iron between aqueous and particulate forms, altering the availability of iron to primary producers. Increased light and warmth increase the amount of iron that is in forms usable by primary producers.
See also
Chrysomallon squamiferum (the scaly-foot gastropod), known to incorporate iron into its exoskeleton
References
Biological systems
Biology and pharmacology of chemical elements
Dietary minerals
Biology
Nutrition
Physiology | Iron in biology | Chemistry,Biology | 4,630 |
74,904,423 | https://en.wikipedia.org/wiki/Max%20Rubner%20Institute | The Max Rubner Institute (MRI), Federal Research Institute of Nutrition and Food is a higher federal authority of the Federal Republic of Germany in the portfolio of the Federal Ministry of Food and Agriculture (BMEL). The research focus is on consumer health protection in the nutrition sector. In this field, the MRI advises the BMEL.
The institute was named after the physician and physiologist Max Rubner. Until January 1, 2008, the institute was called the Federal Research Institute of Nutrition and Food (BfEL). The president of the MRI is Tanja Schwerdtle.
The institution's headquarters are in Karlsruhe. Other locations are Kiel, Detmold and Kulmbach. The Münster site has been closed; "the fish quality department is currently still located in Hamburg." In total, the institute employs around 200 scientists at its various sites.
The MRI is a member of the Working Group of Departmental Research Institutions.
History
The MRI's predecessor, the Federal Research Institute of Nutrition and Food, was established on January 1, 2004, through the merger of the following institutions:
Federal Institute for Cereal, Potato and Fat Research in Detmold and Münster.
Federal Dairy Research Station in Kiel
Federal Research Station for Nutrition in Karlsruhe
Federal Institute for Meat Research in Kulmbach
Fish Quality Branch of the Institute for Fishery Technology and Fish Quality of the Federal Research Institute for Fisheries in Hamburg-Altona, Palmaille 9.
References
2004 establishments in Germany
Agricultural research institutes in Germany
Food chemistry organizations
Government health agencies of Germany
Organisations based in Karlsruhe
Research institutes established in 2004
Oststadt (Karlsruhe) | Max Rubner Institute | Chemistry | 324 |
5,291,435 | https://en.wikipedia.org/wiki/Gallium%28III%29%20iodide | Gallium(III) iodide is the inorganic compound with the formula GaI3. A yellow hygroscopic solid, it is the most common iodide of gallium. The chemical vapor transport method of growing crystals of gallium arsenide uses iodine as the transport agent. In the solid state, it exists as the dimer Ga2I6. When vaporized, it forms GaI3 molecules of D3h symmetry where the Ga–I distance is 2.458 Angstroms.
Gallium triiodide can be reduced with gallium metal to give a green-colored gallium(I) iodide. The nature of this species is unclear, but it is useful for the preparation of gallium(I) and gallium(II) compounds.
See also
Gallium halides
References
Cited sources
Iodides
Gallium compounds
Metal halides | Gallium(III) iodide | Chemistry | 186 |
24,910,342 | https://en.wikipedia.org/wiki/2009%20Sulawesi%20superbolide | The 2009 Sulawesi superbolide was an atmospheric fireball blast over Indonesia on October 8, 2009, at approximately 03:00 UTC (11:00 local time), near the coastal city of Watampone in South Sulawesi, island of Sulawesi. The meteoritic impactor broke up at an estimated height of 15–20 km. The impact energy of the bolide was estimated in the 10 to 50 kiloton TNT equivalent range, with the higher end of this range being more likely. The likely size of the impactor was 5–10 m in diameter.
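A back-of-the-envelope Python sketch (my illustration, not the article's estimation method) of how impactor size maps to kinetic energy; the stony density of 3000 kg/m³ and entry speed of 20 km/s are assumptions, under which the quoted 5–10 m size range brackets the quoted 10–50 kt energy range.

```python
import math

KT_TNT = 4.184e12  # joules per kiloton of TNT

def impact_energy_kt(diameter_m: float, speed_m_s: float = 20e3,
                     density_kg_m3: float = 3000) -> float:
    """Kinetic energy (kt TNT) of a spherical impactor; inputs are assumptions."""
    radius = diameter_m / 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius ** 3
    return 0.5 * mass * speed_m_s ** 2 / KT_TNT

for d in (5, 10):
    print(f"{d} m diameter -> ~{impact_energy_kt(d):.0f} kt TNT")
# 5 m -> ~9 kt, 10 m -> ~75 kt: the same order as the quoted 10-50 kt estimate
```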
References
External links
Explosions in 2009
2009 Sulawesi superbolide
2009 in Indonesia
2009 in outer space
21st century in Sulawesi
October 2009 events in Indonesia
21st-century astronomical events
Explosions in Indonesia | 2009 Sulawesi superbolide | Astronomy | 150 |
49,995,369 | https://en.wikipedia.org/wiki/Richard%20Dolby | Richard Edwin Dolby, OBE, HonDMet, FREng, FIMMM, HonFWeldI (born 7 July 1938 in Sheffield) is a metallurgist and former Director of Research and Technology at The Welding Institute (TWI) in Cambridge, UK. He is a past President of the Institute of Materials, Minerals and Mining and a current Distinguished Research Fellow at the University of Cambridge Department of Materials Science and Metallurgy.
Education
Richard Dolby was educated at Northampton Grammar School and, following two years' National Service in the Royal Electrical and Mechanical Engineers, at the University of Cambridge (Selwyn College), Department of Materials Science and Metallurgy where he also gained his PhD.
Career
Richard Dolby's early career began at British Alcan and the General Electric Company, and in 1964 he joined The Welding Institute (British Welding Research Association). Here he worked on metallurgical aspects of HAZ toughness of pressure vessel steels and jointly led pioneering studies into lamellar tearing in welded structural steel. He spent 14 years specialising in metallurgy and carrying out fundamental industrial research in the Materials Department at the Institute, becoming Head of Department in 1978. He was appointed Director, Research and Technology in 1985 until his retirement in 2003 when The Welding Institute hosted a two-day conference in his honour.
The Welding Institute's Richard Dolby-Rolls-Royce Prize is given biennially to young engineers who demonstrate success in, and enthusiasm for, welding, joining and/or materials engineering at an early stage in their career.
Advisory committees and awards
Richard Dolby has held appointments on the UK Technical Advisory Board on the Structural Integrity of High Integrity Plant (TAGSI), the Materials Board of the UK Defence Scientific Advisory Council and various Department of Trade and Industry Committees, as well as the Institute of Materials, Minerals and Mining (IoM3). He was President of the Institute of Materials, Minerals and Mining from 2006 to 2007 and is a past Vice-President of the International Institute of Welding (IIW) where he also served as Chairman of the IIW Technical Management Board and Chairman of the IIW Research Strategy Group.
He was elected a Fellow of the Royal Academy of Engineering in 1987 and awarded the Order of the British Empire in 2000 for his services to research and technology transfer in materials joining. He is a past winner of The Welding Institute's Brooker Award and the Institute of Materials, Minerals and Mining's Bessemer Gold Medal for contributions to the steel industry. He was invited to deliver the Hatfield Memorial Lecture in 1997 and awarded an Honorary Doctorate of Metallurgy at the University of Sheffield in 1998. In 2003 he was presented with the IIW Arata Prize for his contribution to the science of welding and joining.
References
External links
R.E. Dolby selected publications and citations
Conference announcement: Metals Joining Technology - Where Next?
Richard Dolby-Rolls-Royce Prize, The Welding Institute
1938 births
Living people
Alumni of Selwyn College, Cambridge
Officers of the Order of the British Empire
Fellows of the Royal Academy of Engineering
Bessemer Gold Medal
British metallurgists
Fellows of the Institute of Materials, Minerals and Mining | Richard Dolby | Chemistry | 640 |
9,918,319 | https://en.wikipedia.org/wiki/Polyiodide | The polyiodides are a class of polyhalogen anions composed entirely of iodine atoms. The most common member is the triiodide ion, [I3]−. Other known larger polyiodides include [I4]2−, [I5]−, [I6]2−, [I7]−, [I8]2−, [I9]−, [I10]2−, [I10]4−, [I11]3−, [I12]2−, [I13]3−, [I14]4−, [I16]2−, [I22]4−, [I26]3−, [I26]4−, [I28]4− and [I29]3−. All these can be considered as formed from the interaction of the I−, I2 and [I3]− building blocks.
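A toy Python enumeration (my illustration of the building-block bookkeeping above, not a structural model): combining a copies of I−, b copies of [I3]− and c copies of I2 yields a species [Ix]y− with x = a + 3b + 2c iodine atoms and charge y = a + b.

```python
from itertools import product

def compositions(max_units: int = 4):
    """Map (x, y) for [Ix]y- to one (a, b, c) building-block combination."""
    found = {}
    for a, b, c in product(range(max_units + 1), repeat=3):
        if a + b == 0:  # need at least one anionic block for a net charge
            continue
        x, y = a + 3 * b + 2 * c, a + b
        found.setdefault((x, y), (a, b, c))  # keep the first combination found
    return found

table = compositions()
print(table[(5, 1)])  # (0, 1, 1): [I5]-  assembled as [I3]- + I2
print(table[(8, 2)])  # (0, 2, 1): [I8]2- assembled as 2 [I3]- + I2
```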
Preparation
The polyiodides can be made by addition of stoichiometric amounts of I2 to solutions containing I− and [I3]−, with the presence of large countercations to stabilize them. For example, KI3·H2O can be crystallized from a saturated solution of KI when a stoichiometric amount of I2 is added and cooled.
Structure
Polyiodides adopt diverse structures. Most can be considered as associations of I2, I− and [I3]− units. Discrete polyiodides are usually linear. The more complex two- or three-dimensional network structures of chains and cages are formed as the ions interact with each other, with their shapes depending quite strongly on the associated cations, a phenomenon named dimensional caging. Many polyiodide salts have been structurally characterized together with their counter-cations.
Reactivity
Polyiodide compounds are generally sensitive to light.
Triiodide, [I3]−, undergoes unimolecular photodissociation. Polyiodide has been used to improve the scalability of the synthesis of halide perovskite photovoltaic materials.
Conductivity
Solid state compounds containing linear-chain polyiodide ions exhibit enhanced conductivity compared with their simple iodide counterparts. The conductivity can be drastically modified by external pressure, which changes the interatomic distances between iodine moieties and the charge distribution.
See also
Triiodide
Polyhalogen ions
Iodine–starch test
Dye-sensitized solar cell
Halogen bond
Catenation
Inorganic polymer
References
Anions
Iodides
Polyhalides | Polyiodide | Physics,Chemistry | 518 |
74,561,896 | https://en.wikipedia.org/wiki/1%2C9-Nonanediol | 1,9-Nonanediol, also known as nonamethylene glycol, is a diol with the molecular formula HO(CH2)9OH. It is a colorless solid, which is sparingly soluble in water but readily soluble in ethanol.
1,9-nonanediol can be produced by isomerization of allyl alcohol. It can also be obtained by reacting methyl oleate with triethylsilyl hydrotrioxide and lithium aluminum hydride.
1,9-Nonanediol is used as a monomer in the synthesis of some polymers. It is also used as an intermediate in the manufacturing of aromatic chemicals and in the pharmaceutical industry.
See also
Ethylene glycol
1,2-Octanediol
References
Monomers
Alkanediols | 1,9-Nonanediol | Chemistry,Materials_science | 167 |
51,396,023 | https://en.wikipedia.org/wiki/Dynamic%20Graphics%20Project | The Dynamic Graphics Project (commonly referred to as dgp) is an interdisciplinary research laboratory at the University of Toronto devoted to projects involving Computer Graphics, Computer Vision, Human Computer Interaction, and Visualization. The lab began as the computer graphics research group of Computer Science Professor Leslie Mezei in 1967. Mezei invited Bill Buxton, a pioneer of human–computer interaction, to join. In 1972, Ronald Baecker, another HCI pioneer, joined dgp, establishing dgp as the first Canadian university group focused on computer graphics and human-computer interaction. According to csrankings.org, for the combined subfields of computer graphics, HCI, and visualization the dgp is the number one research institution in the world.
Since then, dgp has hosted many well-known faculty and students in computer graphics, computer vision and HCI (e.g., Alain Fournier, Bill Reeves, Jos Stam, Demetri Terzopoulos, Marilyn Tremaine). dgp also occasionally hosts artists in residence (e.g., Oscar-winner Chris Landreth). Many past and current researchers at Autodesk (and before that Alias Wavefront) graduated after working at dgp. dgp is located on the St. George Campus of the University of Toronto in the Bahen Centre for Information Technology. dgp researchers regularly publish at ACM SIGGRAPH, ACM SIGCHI and ICCV.
dgp hosts the Toronto User Experience (TUX) Speaker Series and the Sanders Series Lectures.
Notable alumni
Bill Buxton (MS 1978)
James McCrae (PhD 2013)
Dimitris Metaxas (PhD 1992)
Bill Reeves (MS 1976, Ph.D. 1980)
Jos Stam (MS 1991, Ph.D. 1995)
References
Computer graphics
Computer vision
Human–computer interaction
University of Toronto | Dynamic Graphics Project | Engineering | 372 |
15,611,210 | https://en.wikipedia.org/wiki/Valentin%20Rumyantsev | Valentin Vitalyevich Rumyantsev (19 July 1921 – 10 June 2007) was a Russian engineer who played a crucial role in the Soviet space program, mainly working on robotics and controls. He was a member of the Russian Academy of Sciences (1992), Department of Engineering, Mechanics and Control.
Career
Rumyantsev was a professor in the Faculty of Mechanics and Mathematics, Department of Theoretical Mechanics and Mechatronics, at Moscow State University. He was editor of the Journal of Applied Mathematics and Mechanics. Rumyantsev was also a corresponding member (1995) and member (2000) of the International Academy of Astronautics (France, Paris).
References
1921 births
2007 deaths
People from Saratovsky Uyezd
Academic staff of Moscow State University
Corresponding Members of the USSR Academy of Sciences
Foreign members of the Serbian Academy of Sciences and Arts
Full Members of the Russian Academy of Sciences
Saratov State University alumni
Humboldt Research Award recipients
Recipients of the Order of Honour (Russia)
Recipients of the Order of the October Revolution
Recipients of the Order of the Red Banner of Labour
Recipients of the USSR State Prize
State Prize of the Russian Federation laureates
Control theorists
Early spaceflight scientists
Russian aerospace engineers
Soviet aerospace engineers
Soviet space program personnel
Burials at Vostryakovskoye Cemetery | Valentin Rumyantsev | Engineering | 257 |