https://en.wikipedia.org/wiki/Synthetic%20alexandrite
Synthetic alexandrite is an artificially grown crystalline variety of chrysoberyl, composed of beryllium aluminium oxide (BeAl2O4). The name is also often used erroneously to describe synthetically grown corundum that simulates the appearance of alexandrite but has a different mineral composition.

Manufacture
Most true synthetic alexandrite is grown by the Czochralski method, known as "pulling". Another method is the "floating zone" technique, developed in 1964 by the Armenian scientist Khachatur Saakovich Bagdasarov of the Institute of Crystallography in Moscow (Soviet, now Russian). Bagdasarov's floating zone method was widely used to manufacture white YAG for spacecraft and submarine lighting before the process found its way into jewelry production. Alexandrite crystals grown by the floating zone method tend to be less intensely colored than crystals grown by the pulled method.

Flux-grown alexandrite stones are expensive to make and are grown in platinum crucibles; crystals of platinum may still be evident in the cut stones. Alexandrite grown by the flux-melt process will contain particles of flux, resembling liquid "feathers", with a refractive index and specific gravity that echo those of natural alexandrite. Some stones contain parallel groups of negative crystals. Due to the high cost of this process, it is no longer used commercially.

The largest producer of jewelry-quality laboratory-grown alexandrite to this day is Tairus, with a production capacity in the range of 100 kg/year.

Chrysoberyl-based synthetics
Czochralski or "pulled" alexandrite is easier to identify because it is very "clean". Curved striations visible under magnification are a giveaway. Some pulled stones have been seen to change color from blue to red, similar to natural alexandrite from Brazil, Madagascar, and India. Seiko synthetic alexandrites show a swirled internal structure characteristic of the floating zone method of synthesis; they have "tadpole" inclusions (with long tails) and spherical bubbles.
Flux-grown alexandrites are more difficult to spot because of their convincing colors and because they are not "clean": their inclusions of undissolved flux can look like inclusions in natural chrysoberyl. However, layers of dust-like particles parallel to the seed plate, and strong banding or growth lines, may also be apparent. The Inamori synthetic alexandrite had a cat's-eye variety that showed a distinct color change. The eye was broad and of moderate intensity. Specimens were a dark greyish-green with slightly purple overtones under fluorescent lighting; the eye was slightly greenish-bluish-white, and the stones were dull and oily. They appeared to be inclusion-free, and under a strong incandescent light in the long direction, asterism could be seen, with two rays weaker than the eye; this has not been reported in natural alexandrite. Under magnification, parallel striations could be seen along the length of the cabochon, and the striations were undulating rather than straight, again not a feature of natural alexandrite. The name allexite has been used for synthetic alexandrite manufactured by the Diamonair Corporation, which maintains that its product is Czochralski-grown.

Corundum-based simulated alexandrite
Most gemstones described as synthetic alexandrite are actually simulated alexandrite: synthetic corundum laced with vanadium to produce the color change. This alexandrite-like sapphire material has been known for almost 100 years. The material shows a characteristic purple-mauve color change which, although attractive, differs from alexandrite because there is never any green. The stones will be very clean and may be available in large sizes. Gemological testing will reveal a refractive index of 1.759–1.778 (corundum) instead of 1.741–1.760 (chrysoberyl). Under magnification, gas bubbles and curved striae may be evident. When examined with a spectroscope, a strong vanadium absorption line at 475 nm will be apparent.
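As a simple illustration of the refractive-index test above, the two quoted ranges can be checked programmatically. This is a toy sketch; the function name and overlap handling are illustrative, not a gemological standard:

```python
# Refractive-index ranges quoted above for the two materials.
RI_RANGES = {
    "corundum (simulated alexandrite)": (1.759, 1.778),
    "chrysoberyl (true alexandrite)": (1.741, 1.760),
}

def candidates(ri):
    """Return the materials whose RI range contains the measurement.

    Note that the two ranges overlap slightly (1.759-1.760), so a
    reading in that band is consistent with either material and other
    tests (spectroscope, inclusions) are needed to decide.
    """
    return sorted(name for name, (lo, hi) in RI_RANGES.items()
                  if lo <= ri <= hi)
```

A reading of 1.770, for instance, is consistent only with corundum, while 1.745 points to chrysoberyl.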
https://en.wikipedia.org/wiki/Chiral%20polytope
In the study of abstract polytopes, a chiral polytope is a polytope that is as symmetric as possible without being mirror-symmetric, formalized in terms of the action of the symmetry group of the polytope on its flags.

Definition
The more technical definition of a chiral polytope is a polytope that has two orbits of flags under its group of symmetries, with adjacent flags in different orbits. This implies that it must be vertex-transitive, edge-transitive, and face-transitive, as each vertex, edge, or face must be represented by flags in both orbits; however, it cannot be mirror-symmetric, as every mirror symmetry of the polytope would exchange some pair of adjacent flags.

For the purposes of this definition, the symmetry group of a polytope may be defined in either of two different ways: it can refer to the symmetries of the polytope as a geometric object (in which case the polytope is called geometrically chiral) or to the symmetries of the polytope as a combinatorial structure (the automorphisms of an abstract polytope). Chirality is meaningful for either type of symmetry, but the two definitions classify different polytopes as being chiral or nonchiral.

Geometrically chiral polytopes
Geometrically chiral polytopes are relatively exotic compared to the more ordinary regular polytopes. It is not possible for a geometrically chiral polytope to be convex, and many geometrically chiral polytopes of note are skew.

In three dimensions
In three dimensions, it is not possible for a geometrically chiral polytope to have finitely many finite faces. For instance, the snub cube is vertex-transitive, but its flags have more than two orbits, and it is neither edge-transitive nor face-transitive, so it is not symmetric enough to meet the formal definition of chirality.
The quasiregular polyhedra and their duals, such as the cuboctahedron and the rhombic dodecahedron, provide another interesting type of near-miss: they have two orbits of flags, but are mirror-symmetric, and not every adjacent pair of flags belongs to different orbits. However, despite the nonexistence of finite chiral three-dimensional polyhedra, there exist infinite three-dimensional chiral skew polyhedra of types {4,6}, {6,4}, and {6,6}.

In four dimensions
In four dimensions, there are geometrically chiral finite polytopes. One example is Roli's cube, a skew polytope on the skeleton of the 4-cube.
https://en.wikipedia.org/wiki/Methyldihydromorphine
Methyldihydromorphine is a semi-synthetic opioid originally developed in Germany in 1936 and controlled under both domestic law and UN conventions because of its potential for abuse. Methyldihydromorphine is related to heterocodeine and is not a synonym for dihydrocodeine or dihydroheterocodeine (6-methoxydihydromorphine). The compound is a derivative of hydromorphone. It has been found to have 33 percent of the analgesic potency of morphine, with a substantially longer duration of action. Little else is currently known about this compound. It is a Schedule I controlled substance in the United States, with an ACSCN of 9304 and a 2013 annual manufacturing quota of 2 grams.
https://en.wikipedia.org/wiki/Joint%20Board%20for%20the%20Enrollment%20of%20Actuaries
The Joint Board for the Enrollment of Actuaries licenses actuaries to perform a variety of actuarial tasks required of pension plans in the United States by the Employee Retirement Income Security Act of 1974 (ERISA). The Joint Board consists of five members, three appointed by the Secretary of the Treasury and two by the Secretary of Labor, as well as a sixth non-voting member representing the Pension Benefit Guaranty Corporation. The Joint Board administers two examinations to prospective Enrolled Actuaries. After an individual passes the two exams and completes sufficient relevant professional experience, she or he becomes an Enrolled Actuary.

See also
Title 20 of the Code of Federal Regulations
https://en.wikipedia.org/wiki/Endeavour%20Software%20Project%20Management
Endeavour Software Project Management is an open-source solution for managing large-scale enterprise software projects in an iterative and incremental development process.

History
The project was founded in September 2008 with the intention of replacing expensive and complex project management systems with a solution that is easy to use, intuitive, and realistic, eliminating features considered unnecessary. In September 2009 the project was registered on SourceForge, and in April 2010 it was featured on SourceForge's blog, averaging 210 weekly downloads.

Features
The major features include support for the following software artifacts:
- Projects
- Use cases
- Iterations
- Project plans
- Change requests
- Defect tracking
- Test cases
- Test plans
- Tasks
- Actors
- Document management
- Project glossary
- Project wiki
- Developer management
- Reports (assignments, defects, cumulative flow)
- SVN browser integration with Svenson
- Continuous integration with Hudson
- Email notifications
- Full internationalization

System requirements
Endeavour Software Project Management can be deployed on any Java EE-compliant application server with any relational database, running under a variety of operating systems. Its cross-browser capability allows it to run in most popular web browsers.

Usage
- Software project management
- Iterative and incremental development
- Use-case-driven development
- Issue tracking
- Test-case management
- Integrated wiki

See also
Project management software
List of project management software
https://en.wikipedia.org/wiki/White%20noise%20machine
A white noise machine is a device that produces a noise that calms the listener, in many cases sounding like a rushing waterfall or wind blowing through trees, among other serene or nature-like sounds. Often such devices do not produce actual white noise, which has a harsh sound, but pink noise, whose power rolls off at higher frequencies, or other colors of noise.

Use
White noise devices are available from numerous manufacturers in many forms, for a variety of uses, including audio testing, sound masking, sleep aid, and power napping. Sleep-aid and nap-machine products may also produce other soothing sounds, such as music, rain, wind, highway traffic, and ocean waves mixed with, or modulated by, white noise. Electric fans are a common alternative, although some Asian communities historically avoided using fans while sleeping because of the superstition that a fan could suffocate the sleeper. White noise generators are often used by people with tinnitus to mask their symptoms.

The sounds generated by digital machines are not always truly random; rather, they are short prerecorded audio tracks that loop continuously. Manufacturers of sound-masking devices recommend that the volume of white noise machines be initially set at a comfortable level, even if it does not provide the desired level of privacy. As the ear becomes accustomed to the new sound and learns to tune it out, the volume can be gradually increased to improve privacy. Manufacturers of sleep aids and power-napping devices recommend that the volume be set slightly louder than a normal music listening level, but always within a comfortable range.

Sound and noise have their own measurement and color-coding conventions, which allow specialized users to identify noise and sound according to their needs: for example, a psychiatrist who needs certain sounds for therapies and treatments, or patients with conditions such as insomnia, anxiety, and tinnitus, which are managed with devices designed to create sounds that treat the condition. A white noise machine is so named because "white" is the color code given to noise with its particular frequency spectrum.

Audio jammers
White noise machines are also used to diminish the potential for recording or overhearing conversations. The Tennessee Republican politician Glen Casada had a white noise machine installed in his office to prevent eavesdropping. Smart-speaker blockers have also been developed: for example, the Bracelet of Silence is a bracelet that outputs white noise to protect privacy against digital recording by smart speakers. Because the bracelet is portable and not attached to the smart speaker, it could also be used to prevent eavesdropping by other devices, such as smartphones and laptops. There is little research on the impact of loud sounds at inaudible frequencies (and their audible artifacts and harmonics).

Design
Most modern white noise generators are electronic, usually generating the sound in real time with audio test equipment or via electronic playback of a digital audio recording. Simple mechanical machines consist of a very basic setup involving an enclosed fan and, optionally, a speed switch; the fan drives air through small slots in the machine's casing, producing the desired sound. The first fan-based white noise machine was the Marpac Dohm, invented in 1962 and frequently credited as the original domestic white noise machine.

Risk
One paper found that all of the 14 white noise machines tested at maximum volume exceeded the maximum safe sound level for infants (50 dB); three exceeded the safe level for adults (85 dB).
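The distinction between white and pink noise mentioned above can be illustrated numerically: white noise has, on average, equal power per frequency bin, while pink noise power falls off roughly as 1/f. A minimal sketch using NumPy follows; spectral shaping is only one of several ways to generate pink noise, and the function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16
white = rng.standard_normal(n)          # flat power spectrum on average

# Shape white noise into pink noise by scaling each FFT bin by
# 1/sqrt(f), so power density falls off as 1/f (about -3 dB/octave).
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])    # leave the DC bin alone
pink = np.fft.irfft(spectrum * scale, n)

def band_power(x, lo, hi):
    """Mean spectral power of x in the normalized band [lo, hi)."""
    power = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x))
    mask = (f >= lo) & (f < hi)
    return power[mask].mean()
```

For the white signal, the mean power in a high-frequency band is about the same as in a low one; for the shaped pink signal it is markedly lower, matching the "rolls off at higher frequencies" description.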
See also
Pink noise
White noise
Colors of noise
Sound masking
Tinnitus masker
Sound Princess
https://en.wikipedia.org/wiki/Assembly%20theory
Assembly theory is a framework developed to quantify the complexity of molecules and objects by assessing the minimal number of steps required to assemble them from fundamental building blocks. Proposed by chemist Lee Cronin and his team, the theory assigns an assembly index to molecules, which serves as a measurable indicator of their structural complexity. Cronin and colleagues argue that this approach allows for experimental verification and has applications in understanding selection processes, evolution, and the identification of biosignatures in astrobiology. However, the usefulness of the approach has been disputed.

Background
The hypothesis was proposed by chemist Leroy Cronin in 2017 and developed by the team he leads at the University of Glasgow, then extended in collaboration with a team at Arizona State University led by astrobiologist Sara Imari Walker, in a paper released in 2021. Assembly theory conceptualizes objects not as point particles, but as entities defined by their possible formation histories. This allows objects to show evidence of selection, within well-defined boundaries of individuals or selected units. Combinatorial objects are important in chemistry, biology and technology, in which most objects of interest (if not all) are hierarchical modular structures. For any object an 'assembly space' can be defined as the set of all recursively assembled pathways that produce it. The 'assembly index' is the number of steps on a shortest path producing the object. For such a shortest path, the assembly space captures the minimal memory, in terms of the minimal number of operations, necessary to construct the object from objects that could have existed in its past.
The assembly is defined as "the total amount of selection necessary to produce an ensemble of observed objects"; for an ensemble containing N_T objects in total, of which N are unique, the assembly is defined to be A = Σ_{i=1}^{N} e^{a_i} (n_i − 1) / N_T, where n_i denotes the 'copy number', the number of occurrences of objects of type i having assembly index a_i.

For example, the word 'abracadabra' contains 5 unique letters (a, b, c, d and r) and is 11 symbols long. It can be assembled from its constituents as a + b → ab + r → abr + a → abra + c → abrac + a → abraca + d → abracad + abra → abracadabra, because 'abra' was already constructed at an earlier stage. Because this requires at least 7 steps, the assembly index is 7. The word 'abracadrbaa', of the same length, has no repeats, so its assembly index is 10.

Take two binary strings, 01010101 and 00011101, as another example. Both have the same length, 8 bits, and the same Hamming weight, 4. However, the assembly index of the first string is 3 ("01" is assembled, joined with itself into "0101", and joined again with "0101" taken from the assembly pool), while the assembly index of the second string is 6, since in this case only "01" can be taken from the assembly pool. In general, for an object O made of K subunits, the assembly index is bounded by log_2(K) ≤ a(O) ≤ K − 1.

Once a pathway to assemble an object is discovered, the object can be reproduced. The rate of discovery of new objects can be defined by an expansion rate, introducing a discovery timescale τ_d. To include copy number in the dynamics of assembly theory, a production timescale τ_p is defined in terms of the production rate of a specific object. Defining these two distinct timescales, τ_d for the initial discovery of an object and τ_p for making copies of existing objects, allows one to determine the regimes in which selection is possible. While other approaches can provide a measure of complexity, the researchers claim that assembly theory's molecular assembly number is the first to be measurable experimentally.
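The 'abracadabra' pathway above can be checked mechanically. The sketch below verifies a given pathway rather than computing a true minimal assembly index (which requires search); the function name is illustrative:

```python
def pathway_steps(target, joins):
    """Count the steps in an explicit assembly pathway.

    The pool starts with the basic building blocks (single symbols);
    each join concatenates two objects already in the pool, adds the
    result back to the pool, and costs one step.
    """
    pool = set(target)  # individual symbols are free
    steps = 0
    for left, right in joins:
        if left not in pool or right not in pool:
            raise ValueError(f"{left!r} or {right!r} not yet assembled")
        pool.add(left + right)
        steps += 1
    if target not in pool:
        raise ValueError("pathway does not produce the target")
    return steps

# The 7-step pathway for 'abracadabra' described in the text,
# reusing 'abra' from the assembly pool in the final join.
ABRACADABRA_PATH = [
    ("a", "b"), ("ab", "r"), ("abr", "a"), ("abra", "c"),
    ("abrac", "a"), ("abraca", "d"), ("abracad", "abra"),
]
```

Running `pathway_steps("abracadabra", ABRACADABRA_PATH)` returns 7, consistent with the assembly index quoted above, while a naive left-to-right construction with no reuse takes 10 steps for an 11-symbol word.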
Molecules with a high assembly index are very unlikely to form abiotically, and the probability of abiotic formation decreases as the assembly index increases. The assembly index of a molecule can be obtained directly via spectroscopic methods, and the method could be implemented in a fragmentation tandem mass spectrometry instrument to search for biosignatures. The theory was extended to map chemical space with molecular assembly trees, demonstrating the application of this approach in drug discovery, in particular in research of new opiate-like molecules, by connecting the "assembly pool elements through the same pattern in which they were disconnected from their parent compound(s)".

It is difficult to identify chemical signatures that are unique to life. For example, the Viking lander biological experiments detected molecules that could be explained by either living or non-living natural processes. It appears that only living samples produce assembly index measurements above ~15. However, in 2021 Cronin explained how polyoxometalates could in theory have large assembly indices (>15) due to autocatalysis.

Critical views
Chemist Steven A. Benner has publicly criticized various aspects of assembly theory. Benner argues that the claim that non-living systems, without the intervention of life, cannot contain complex molecules is transparently false, and that people would be misled into thinking that, because these papers were published in Nature journals after peer review, they must be right. A paper published in the Journal of Molecular Evolution concludes that "the hype around Assembly Theory reflects rather unfavorably both on the authors and the scientific publication system in general".
The author concludes that what "assembly theory really does is to detect and quantify bias caused by higher-level constraints in some well-defined rule-based worlds"; one "can use assembly theory to check whether something unexpected is going on in a very broad range of computational model worlds or universes". Another paper, authored by a group of chemists and planetary scientists and published in the Journal of the Royal Society Interface, demonstrated that abiotic chemical processes can form crystal structures of great complexity, with values exceeding the proposed abiotic/biotic divide of MA index = 15. They conclude that "while the proposal of a biosignature based on a molecular assembly index of 15 is an intriguing and testable concept, the contention that only life can generate molecular structures with MA index ≥ 15 is in error". Two papers published in 2024 argue that assembly theory provides no insights beyond those already available from algorithmic complexity and Claude Shannon's information theory.

See also
List of interstellar and circumstellar molecules
Smallest grammar problem
Word problem for groups
https://en.wikipedia.org/wiki/Korsakow
The Korsakow System (pronounced 'KOR-SA-KOV') is software for creating browser-based dynamic documentaries. Invented in 2000 by the Berlin-based artist Florian Thalhofer, Korsakow allows users without any programming expertise to create and interact with non-linear or database-driven narratives, referred to as Korsakow-films or K-films. The software can be used to produce documentary, experimental and fictional narrative works and has been integrated into live performance and installation pieces. Korsakow currently costs US$299 for the PRO version; educational licenses are also available.

Versions

Development and early versions
In the late 1990s, Florian Thalhofer began developing a software program to produce a documentary about alcohol consumption to accompany his Master's thesis. During his research, Thalhofer learned about an effect of extreme alcoholism known as Korsakoff's syndrome, characterized by short-term memory loss and a compulsion to tell stories. Thalhofer borrowed the name for his thesis and first Korsakow-film, "Korsakow Syndrom". From 2000 to 2015 the application was released as free software.

Version 6
Released in October 2016, Korsakow 6 exports to HTML5.

Version 5
Released in July 2009, this version of the Korsakow System involved a complete overhaul of the previous versions. The upgrade was produced under the aegis of the Concordia Interactive Narrative Experimentation Research Group (CINER-G, 2007–2011). CINER-G was funded by a research/creation grant from the Quebec government's Fonds de recherche sur la société et la culture (FQRSC). In 2011 CINER-G was succeeded by Adventures in Research/Creation (ARC), funded by a research/creation grant from the Social Sciences and Humanities Research Council of Canada (SSHRC) from 2011 to 2015. Thalhofer remained the creative lead during the project, while Matt Soar co-directed the project and designed the current logo.
The coding is by David Reisch, with early assistance from Stuart Thiel. Soar left the project in 2016. In addition to addressing many of the problems with version 3, version 5.0 was recreated from scratch in Java as open-source software. This version of the application can export a .swf file, which requires Flash Player to view, at the time a much more common browser extension than Shockwave. Another change is the ability to design and use an unlimited number of interfaces per film. The developers believed that the jump from version 3 was so substantial that, as an inside joke, they skipped version 4.0 altogether.

Korsakow-films

SNUs
Though it may be used as such, the Korsakow System was not intended as a choose-your-own-adventure builder. Instead, the intention of the software is to create narratives based on dynamic relationships between very short video clips, rather than on predetermined paths. To achieve this, each Korsakow-film is composed of multiple SNUs, or smallest narrative units. These are usually short video clips ranging from 20 seconds to a few minutes in duration and are the building blocks of each Korsakow-film. Users of the Korsakow software "SNUify" their media by adding rules guiding the relationship between each SNU. Each SNU can be assigned "in" and "out" tags. Whenever a clip begins, the database is queried for other SNUs whose "in" tags match the "out" tags of the current video, and any matches are displayed as related options for viewing. Queries can be cued at specific points during the current SNU. As part of a non-linear narrative, each SNU may be reused within the narrative; SNUs may be assigned a number of "lives", or times they are allowed to appear within the narrative. The term "SNU" (smallest narrative unit) was coined by the experimental filmmaker Prof. Heinz Emigholz at a lecture at the University of the Arts in Berlin on February 6, 2002.
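The tag-matching behavior described above can be sketched roughly as follows. The class and function names are invented for illustration and do not reflect Korsakow's actual internals or file format:

```python
from dataclasses import dataclass

@dataclass
class SNU:
    """A smallest narrative unit: a clip plus its linking rules."""
    name: str
    in_tags: frozenset
    out_tags: frozenset
    lives: int = -1  # -1 means unlimited appearances

def related(current, pool):
    """Return the SNUs that could follow `current`: their "in" tags
    must intersect the current clip's "out" tags, and they must have
    lives remaining."""
    return [s for s in pool
            if s is not current
            and s.in_tags & current.out_tags
            and s.lives != 0]

def play(snu):
    """Mark one appearance of the SNU, consuming a life if limited."""
    if snu.lives > 0:
        snu.lives -= 1
```

Each time a clip starts, `related` plays the role of the database query, and clips whose lives are exhausted simply stop appearing as options.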
The lecture was later published in his book "Das schwarze Schamquadrat".

Compatibility
The following file formats are accepted by the software:
Video files: .mp4 (codec: H.264)
Previews: .jpg, .gif, .png, .mov (H.264)
Start screen: .jpg, .png, .gif
Audio files: .wav, .mp3
Subtitles: .srt

Interface
Films output with Korsakow version 3 or earlier could only be viewed online in a single generic interface. This layout involved a single primary frame, in which the selected clips would play, and up to three preview frames of other related clips. While this remained the default layout in version 5.0, filmmakers now have the option of creating different interfaces for each SNU.

External links
Official website
Gallery of example works
ARC website
Interview with Florian Thalhofer by Matt Soar in Nomorepotlucks
Thalhofer's Korsakow-film Planet Galata (2010, ARTE)
Thalhofer's website, with links to many of his Korsakow-films
Archiving R69 (2011), a Korsakow film by Monika Kin Gagnon
Ceci N'est Pas Embres (2012), a Korsakow 'database diary' by Matt Soar
The Border Between Us (2012), a Korsakow film by Nicole Robicheau
The Signmakers of Montreal (2018), a Korsakow film by Matt Soar
Scholarly article on Korsakow by Adrian Miles (2014), "Materialism and interactive documentary: sketch notes", Studies in Documentary Film 8(3)
Article on Soar, Thalhofer, and CINER-G in the Concordia Journal
Article on Korsakow and ASAPs in DOX magazine (2014)
Chapter by Matt Soar (2014), "Making (with) the Korsakow System: Database Documentaries as Articulation and Assemblage", in Nash, K., Hight, C. & Summerhayes, C. (eds.), New Documentary Ecologies, New York: Palgrave Macmillan, pp. 154–173
Chapter by Adrian Miles (2008), "Programmatic Statements for a Facetted Videography", comparing Korsakow with Videodefunct, in Lovink, G. & Niederer, S. (eds.), Video Vortex Reader: Responses to YouTube, Amsterdam: Institute of Network Cultures, pp. 223–230
Scholarly article on Korsakow by Hart Cohen (2012), "Database Documentary: From Authorship to Authoring in Remediated/Remixed Documentary", Culture Unbound
The Korsakow System is featured in Moments of Innovation: When documentary and technology converge (2012), an interactive timeline created by MIT Open Documentary Lab and IDFA DocLab
https://en.wikipedia.org/wiki/S-Nitrosylation
In biochemistry, S-nitrosylation is the covalent attachment of a nitric oxide group (NO) to a cysteine thiol within a protein to form an S-nitrosothiol (SNO). S-Nitrosylation has diverse regulatory roles in bacteria, yeast and plants, and in all mammalian cells. It thus operates as a fundamental mechanism for cellular signaling across phylogeny and accounts for a large part of NO bioactivity. S-Nitrosylation is precisely targeted, reversible, spatiotemporally restricted and necessary for a wide range of cellular responses, including the prototypic example of red-blood-cell-mediated autoregulation of blood flow, which is essential for vertebrate life. Although originally thought to involve multiple chemical routes in vivo, accumulating evidence suggests that S-nitrosylation depends on enzymatic activity, entailing three classes of enzymes (S-nitrosylases) that operate in concert to conjugate NO to proteins, drawing an analogy to ubiquitinylation. Besides enzymatic activity, hydrophobicity and low pKa values also play a key role in regulating the process.

S-Nitrosylation was first described by Stamler et al. and proposed as a general mechanism for control of protein function, including examples of both active and allosteric regulation of proteins by endogenous and exogenous sources of NO. The redox-based chemical mechanisms for S-nitrosylation in biological systems were also described concomitantly. Important examples of proteins whose activities were subsequently shown to be regulated by S-nitrosylation include the NMDA-type glutamate receptor in the brain. Aberrant S-nitrosylation following stimulation of the NMDA receptor would come to serve as a prototypic example of the involvement of S-nitrosylation in disease. S-Nitrosylation similarly contributes to the physiology and dysfunction of cardiac, airway and skeletal muscle and of the immune system, reflecting wide-ranging functions in cells and tissues.
It is estimated that ~70% of the proteome is subject to S-nitrosylation, and the majority of those sites are conserved. S-Nitrosylation has also been implicated in mediating pathogenicity in Parkinson's disease. S-Nitrosylation is thus established as ubiquitous in biology, having been demonstrated to occur in all phylogenetic kingdoms, and has been described as the prototypic redox-based signalling mechanism.

Denitrosylation
The reverse of S-nitrosylation is denitrosylation, principally an enzymatically controlled process. Multiple enzymes have been described to date, falling into two main classes that mediate denitrosylation of protein SNOs and of low-molecular-weight SNOs, respectively. S-Nitrosoglutathione reductase (GSNOR) is exemplary of the low-molecular-weight class; it accelerates the decomposition of S-nitrosoglutathione (GSNO) and thereby of SNO-proteins in equilibrium with GSNO. The enzyme is highly conserved from bacteria to humans. Thioredoxin (Trx)-related proteins, including Trx1 and Trx2 in mammals, catalyze the direct denitrosylation of S-nitrosoproteins (in addition to their role in transnitrosylation). Aberrant S-nitrosylation (and denitrosylation) has been implicated in multiple diseases, including heart disease, cancer and asthma, as well as neurological disorders, including stroke, chronic degenerative diseases (e.g., Parkinson's and Alzheimer's disease) and amyotrophic lateral sclerosis (ALS).

Transnitrosylation
Another aspect of S-nitrosylation is protein-protein transnitrosylation, the transfer of an NO moiety from an SNO to a free thiol in another protein. Thioredoxin (Trx), a cytosolic protein disulfide oxidoreductase, and caspase-3 provide a good example of transnitrosylation's significance in regulating cell death.
In another example, the structural change of mammalian Hb to SNO-Hb under oxygen-depleted conditions helps it bind to AE1 (anion exchanger 1, a membrane protein), which in turn becomes transnitrosylated. Cdk5 (a neuronal-specific kinase) is known to be nitrosylated at cysteines 83 and 157 in neurodegenerative diseases such as Alzheimer's disease. SNO-Cdk5 in turn transnitrosylates Drp1, the nitrosylated form of which can be considered a therapeutic target. References Protein structure Protein biosynthesis Chemical reactions
S-Nitrosylation
Chemistry
1,002
20,964,104
https://en.wikipedia.org/wiki/Six-legged%20Soldiers
Six-Legged Soldiers: Using Insects as Weapons of War is a nonfiction scientific warfare book written by author and University of Wyoming professor Jeffrey A. Lockwood. Published in 2008 by Oxford University Press, the book explores the history of bioterrorism, entomological warfare, biological warfare, and the prevention of agro-terrorism from the earliest times to modern threats. Lockwood, an entomologist, preceded this book with Ethical issues in biological control (1997) and Locust: The devastating rise and mysterious disappearance of the insect that shaped the American frontier (2004), among others. Summary Six-Legged Soldiers gives detailed examples of entomological warfare: using buckets of scorpions during a fortress siege, catapulting beehives ("bee bombs") across a castle wall, using civilians as human guinea pigs in an effort to weaponize the plague, bombarding civilians from the air with infection-bearing insects, and placing assassin bugs on prisoners to eat away their flesh. Lockwood also describes a domestic ecoterrorism example with the 1989 threat to release the medfly (Ceratitis capitata) within California's crop belt. The last chapter highlights western nations' vulnerability to terrorist attacks. Interviewed about the book by BBC Radio 4's Today programme, the author describes how a terrorist with a suitcase could bring diseases into a country. "I think a small terrorist cell could very easily develop an insect-based weapon." Criticism In its January 2009 review, The Sunday Times criticised the book as being "scarcely scholarly" for its mixed collection of myth, legend and historical facts. Contrary to this critique, reviews from credible scholarly and scientific sources stated, "Six-Legged Soldiers is an excellent account of the effect arthropod-borne diseases have had on warfare...This book will inspire readers to understand...threats and prepare new methods to combat them." 
(Nature), "Lockwood thoroughly and objectively assembles an engaging chronicle on a topic for which official documentation is often sparse and the opportunity for propaganda is rife." (Science News), and "Lockwood...makes this history of entomological warfare morbidly entertaining...thanks to a lively writing style that ranges from the sardonic to the arch." (BioScience Magazine) References 2009 non-fiction books Science books American non-fiction books Non-fiction books about war Entomological literature Military science Biological agents
Six-legged Soldiers
Biology,Environmental_science
501
61,528,341
https://en.wikipedia.org/wiki/C17H9NO3
The molecular formula C17H9NO3 (molar mass: 275.258 g/mol) may refer to: Liriodenine 3-Nitrobenzanthrone (3-nitro-7H-benz[de]anthracen-7-one) Molecular formulas
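The quoted molar mass can be checked by summing standard atomic weights. The sketch below is an illustrative calculation only (not from any cited source); it uses the older IUPAC atomic-weight values that reproduce the 275.258 g/mol figure.

```python
# Recompute the molar mass of C17H9NO3 from standard atomic weights.
# The weights below are IUPAC values consistent with the quoted figure.
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
FORMULA = {"C": 17, "H": 9, "N": 1, "O": 3}  # element -> atom count

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(round(molar_mass, 3))  # 275.258
```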
C17H9NO3
Physics,Chemistry
77
32,367,668
https://en.wikipedia.org/wiki/DWSIM
DWSIM is an open-source CAPE-OPEN compliant chemical process simulator for Windows, Linux and macOS. DWSIM is built on top of the Microsoft .NET and Mono platforms and features a graphical user interface (GUI), advanced thermodynamics calculations, reactions support and petroleum characterization / hypothetical component generation tools. DWSIM is able to simulate steady-state, vapor–liquid, vapor–liquid–liquid, solid–liquid and aqueous electrolyte equilibrium processes with the following thermodynamic models and unit operations: Thermodynamic models: CoolProp, Peng–Robinson equation of state, Peng–Robinson–Stryjek–Vera (PRSV2), Soave–Redlich–Kwong, Lee–Kesler, Lee–Kesler–Plöcker, UNIFAC(-LL), Modified UNIFAC (Dortmund), Modified UNIFAC (NIST), UNIQUAC, NRTL, Chao–Seader, Grayson–Streed, Extended UNIQUAC, Raoult's Law, IAPWS-IF97 Steam Tables, IAPWS-08 Seawater, Black-Oil and Sour Water; Unit operations: CAPE-OPEN Socket, Spreadsheet, Custom (IronPython Script), Mixer, Splitter, Separator, Pump, Compressor, Expander, Heater, Cooler, Valve, Pipe Segment, Shortcut Column, Heat exchanger, Reactors (Conversion, PFR, CSTR, Equilibrium and Gibbs), Distillation column, Simple, Refluxed and Reboiled Absorbers, Component Separator, Solids Separator, Continuous Cake Filter and Orifice plate; Utilities: Binary Data Regression, Phase Envelope, Natural Gas Hydrates, Pure Component Properties, True Critical Point, PSV Sizing, Vessel Sizing, Spreadsheet and Petroleum Cold Flow Properties; Tools: Hypothetical Component Generator, Bulk C7+/Distillation Curves Petroleum Characterization, Petroleum Assay Manager, Reactions Manager and Compound Creator; Process Analysis and Optimization: Sensitivity Analysis Utility, Multivariate Optimizer with bound constraints; Extras: Support for Runtime Python Scripts, Plugins and CAPE-OPEN Flowsheet Monitoring Objects. Android and iOS versions DWSIM is also available on Android and iOS, where it is free to download. 
On these platforms, DWSIM includes a basic set of features while more advanced modules can be unlocked through in-app purchases. Raspberry Pi version A special DWSIM build is available for Raspberry Pi 2/3 devices running an armhf-based Linux distribution like Raspbian and Ubuntu MATE. See also Process design (chemical engineering) List of Chemical Process Simulators Standard temperature and pressure External links DWSIM homepage - documentation, download links, tutorials, help and support for DWSIM. CO-LaN - the CAPE-OPEN Laboratories Network is a neutral industry and academic association promoting open interface standards in process simulation software. CO-LaN members are committed to making Computer Aided Process Engineering easier, faster and less expensive by achieving complete interoperability of compliant commercial CAPE software tools. CO-LaN supports and maintains the CAPE-OPEN interface standards. References Simulation software Chemical engineering software
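To give a flavor of the thermodynamic calculations a simulator like DWSIM performs, the sketch below solves the Peng–Robinson equation of state (one of the models listed above) for the compressibility factor of a pure gas. This is an independent illustration, not DWSIM's own code; the methane critical constants are standard literature values.

```python
# Peng-Robinson equation of state for a pure component (illustrative
# sketch, not DWSIM code).  Solves the cubic in the compressibility
# factor Z by Newton iteration, starting from the ideal-gas value.

R = 8.314  # universal gas constant, J/(mol*K)

def peng_robinson_z(T, P, Tc, Pc, omega):
    """Vapor-phase compressibility factor Z at temperature T (K), pressure P (Pa)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1 + kappa * (1 - (T / Tc) ** 0.5)) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Cubic form: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    c2 = -(1 - B)
    c1 = A - 3 * B ** 2 - 2 * B
    c0 = -(A * B - B ** 2 - B ** 3)
    Z = 1.0  # ideal-gas start converges to the vapor root
    for _ in range(50):
        f = Z ** 3 + c2 * Z ** 2 + c1 * Z + c0
        df = 3 * Z ** 2 + 2 * c2 * Z + c1
        Z -= f / df
    return Z

# Methane vapor at 300 K and 1 bar -- nearly ideal, so Z is close to 1.
z = peng_robinson_z(T=300.0, P=1e5, Tc=190.56, Pc=45.99e5, omega=0.011)
```

A flowsheet simulator iterates calculations like this one for every stream and unit operation until the whole flowsheet converges.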
DWSIM
Chemistry,Engineering
676
71,886,846
https://en.wikipedia.org/wiki/Rs16891982
In genetics, rs16891982, also known as F374L, is the name for a single nucleotide polymorphism (SNP) found in the SLC45A2 gene. The SNP consists of two alleles: C (cytosine) and G (guanine). It is associated with skin tone and hair/eye color. It is a type of missense mutation. C allele homozygosity is associated with black hair in people of European descent, although those with this genotype are usually of non-European descent. C/G allele heterozygosity is associated with black hair in people of European descent. G allele homozygosity is associated with light skin, hair, and eye color (European ancestry); those with this genotype also have a slightly higher susceptibility to melanoma. References SNPs on chromosome 5
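The genotype-phenotype associations above can be encoded as a simple lookup table. The sketch below is purely illustrative (the function and dictionary names are our own, not from any genetics library), keyed on an unordered allele pair so that C/G and G/C resolve to the same entry.

```python
# Toy lookup of the rs16891982 associations described above.
# Illustrative only; names here are hypothetical, not a real API.
ASSOCIATIONS = {
    ("C", "C"): "black hair; genotype usually seen in people of non-European descent",
    ("C", "G"): "black hair in people of European descent",
    ("G", "G"): "light skin, hair and eye color (European ancestry); "
                "slightly higher melanoma susceptibility",
}

def phenotype(allele1, allele2):
    """Return the recorded association for an unordered pair of alleles."""
    key = tuple(sorted((allele1.upper(), allele2.upper())))
    return ASSOCIATIONS.get(key, "no association recorded")

print(phenotype("G", "C"))  # heterozygote; order of alleles does not matter
```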
Rs16891982
Biology
187
2,770,582
https://en.wikipedia.org/wiki/ZOS%20Messaging%20Service
ZOS Messaging Service (ZMS) is a location communication standard that operates over cellular networks. ZMS uses standardized communications protocols and IP networks to allow the exchange of geo-coordinates between mobile phones, and between mobile telephone devices and personal computers. ZMS enables developers of location-based service (LBS) applications to access multiple device platforms. The standard also opens the location-based advertising (LBA) market to create new advertising channels. ZMS was developed by ZOS Communications in 2007. Its first use was in the mobile peer-to-peer location application . End user and enterprise uses End user The end user software, known as manager, works to connect people in three ways: people-to-people, people-to-places, or people-to-services. Enterprise The ZMS standard may be implemented outside of the smartphone-based manager. Organizations may stay aware of a device's location, and dispatch the closest device to a given address. The enterprise software differs from the end user software in that location can be constantly updated to the ZOS servers, and can be shared with the client organization. This allows for efficient allocation of resources, whether the solution is focusing on taxis, EMS, sales and service organizations, advertising, or other uses. External links zhiing home page iTunes app store link to zhiing Mobile telecommunications standards
ZOS Messaging Service
Technology
273
36,030,443
https://en.wikipedia.org/wiki/Outline%20of%20nuclear%20power
The following outline is provided as an overview of and topical guide to nuclear power: Nuclear power – the use of sustained nuclear fission to generate heat and electricity. Nuclear power plants provide about 6% of the world's energy and 13–14% of the world's electricity, with the U.S., France, and Japan together accounting for about 50% of nuclear generated electricity. What type of thing is nuclear power? Nuclear power can be described as all of the following: Nuclear technology (outline) – technology that involves the reactions of atomic nuclei. Among the notable nuclear technologies are nuclear power, nuclear medicine, and nuclear weapons. It has found applications from smoke detectors to nuclear reactors, and from gun sights to nuclear weapons. Electricity generation – the process of generating electric energy from other forms of energy. The fundamental principles of electricity generation were discovered during the 1820s and early 1830s by the British scientist Michael Faraday. His basic method is still used today: electricity is generated by the movement of a loop of wire, or disc of copper between the poles of a magnet. 
Science of nuclear power Nuclear engineering Nuclear chemistry Nuclear fission Nuclear physics Atomic nucleus Ionizing radiation Nuclear fission Radiation Radioactivity Radioisotope thermoelectric generator Steam generator (nuclear power) Nuclear material Nuclear material Nuclear fuel Fertile material Thorium Uranium Enriched uranium Depleted uranium Plutonium Deuterium Tritium Nuclear reactor technology Nuclear reactor technology Types of nuclear reactors Advanced gas-cooled reactor Boiling water reactor Fast breeder reactor Fast neutron reactor Gas-cooled fast reactor Generation IV reactor Integral Fast Reactor Lead-cooled fast reactor Liquid-metal-cooled reactor Magnox reactor Molten salt reactor Pebble bed reactor Pressurized water reactor Sodium-cooled fast reactor Supercritical water reactor Very high temperature reactor Dangers of nuclear power Lists of nuclear disasters and radioactive incidents Nuclear reactor accidents in the United States Radioactive waste Nuclear proliferation Nuclear terrorism Radioactive contamination Notable accidents 2011 Japanese nuclear accidents 1986 List of Chernobyl-related articles 1985 Soviet submarine K-431 1979 Three Mile Island accident 1968 Soviet submarine K-27 1961 Soviet submarine K-19 History of nuclear power History of nuclear power Atomic Energy Commission (disambiguation) History of uranium Lists of nuclear disasters and radioactive incidents United Nations Atomic Energy Commission (1946-1948) United States Atomic Energy Commission (1946-1974) Nuclear renaissance Nuclear power industry Environmental impact of nuclear power Nuclear renaissance Relative cost of electricity generated by different sources Uranium mining Uranium mining debate Nuclear power plant Uranium processing Isotope separation Enriched uranium Nuclear reprocessing Reprocessed uranium Nuclear power plants Economics of new nuclear power plants Nuclear power plant emergency response team List of nuclear reactors Reactor building Specific nuclear 
power plants List of nuclear power stations List of cancelled nuclear plants in the United States Baltic nuclear power plant (disambiguation) Belarusian nuclear power plant project Berkeley nuclear power station Bradwell nuclear power station Chapelcross nuclear power station Dodewaard nuclear power plant Heysham nuclear power station Hinkley Point A nuclear power station Hinkley Point C nuclear power station Hunterston A nuclear power station Hunterston B nuclear power station Russian floating nuclear power station Sizewell nuclear power stations Trawsfynydd nuclear power station Nuclear waste High-level radioactive waste management List of nuclear waste treatment technologies Nuclear power by region Nuclear power by country List of nuclear power accidents by country Nuclear power in Asia Nuclear power in India India's three stage nuclear power programme Nuclear power in Indonesia Nuclear power in Japan Nuclear power in North Korea Nuclear power in Pakistan Nuclear power in South Korea Nuclear power in Taiwan Nuclear power in Thailand Nuclear power in the People's Republic of China Nuclear power in the Philippines Nuclear power in the United Arab Emirates Nuclear power in Australia Nuclear power in Europe Nuclear power in the European Union Nuclear power in Albania Nuclear power in Belarus Nuclear power in Bulgaria Nuclear power in the Czech Republic Nuclear power in Finland Nuclear power in France Nuclear power in Germany Nuclear power in Italy Nuclear power in Romania Nuclear power in Russia Nuclear power in Scotland Nuclear power in Spain Nuclear power in Sweden Nuclear power in Switzerland Nuclear power in Ukraine Nuclear power in the United Kingdom Nuclear power in North America Nuclear power in Canada Nuclear power in the United States Nuclear power plants in New Jersey Nuclear power companies Companies in the nuclear sector – list of all large companies which are active along the nuclear chain, from uranium mining, processing and enrichment, to the actual 
operating of nuclear power plant and waste processing. BKW FMB Energie AG ČEZ Group China Guangdong Nuclear Power Group China National Nuclear Corporation China Nuclear International Uranium Corporation E.ON E.ON Kernkraft GmbH E.ON Sverige Electrabel Électricité de France Eletronuclear Endesa (Spain) Energoatom Fennovoima Fortum Iberdrola Korea Hydro & Nuclear Power Bhavini Nuclear Power Corporation of India Nuclearelectrica OKB Gidropress Resun Rosenergoatom RWE Unión Fenosa Teollisuuden Voima Vattenfall Vattenfall Europe Nuclear Energy GmbH Nuclear safety Nuclear safety Event tree Event tree analysis Exclusion area International Nuclear Safety Center Nuclear power plant emergency response team Reactor protection system Nuclear safety in the United States Nuclear power in space Nuclear power in space Advanced Stirling Radioisotope Generator Politics of nuclear power Alsos Digital Library for Nuclear Issues Anti-nuclear movement Anti-nuclear movement in Germany Anti-nuclear movement in the United States Anti-nuclear power movement in Japan Anti-nuclear protests Anti-nuclear protests in the United States Nuclear energy policy Nuclear power debate Nuclear power phase-out Nuclear power proposed as renewable energy Nuclear whistleblowers Nuclear renaissance Uranium mining debate Politics of nuclear power by region 1978 Austrian nuclear power referendum 2008 Lithuanian nuclear power referendum 1980 Swedish nuclear power referendum Nuclear regulatory agencies Association Nationale des Comités et Commissions Locales d'Information (France) Atomic Energy Regulatory Board (India) Autorité de sûreté nucléaire (France) Bangladesh Atomic Energy Commission Brazilian–Argentine Agency for Accounting and Control of Nuclear Materials Canadian Nuclear Safety Commission International Nuclear Regulators' Association Japanese Atomic Energy Commission Japanese Nuclear Safety Commission Nuclear and Industrial Safety Agency (Japan, retired) Nuclear Regulation Authority (Japan) 
Kernfysische dienst (The Netherlands) Nuclear Regulatory Commission (USA) Pakistan Nuclear Regulatory Authority Säteilyturvakeskus (Finland) Nuclear power organizations See also Nuclear regulatory agencies, above Alsos Digital Library for Nuclear Issues International Nuclear Safety Center Against Friends of the Earth International, a network of environmental organizations in 77 countries. Greenpeace International, a non-governmental environmental organization with offices in 41 countries. Nuclear Information and Resource Service (International) World Information Service on Energy (International) Sortir du nucléaire (France) Pembina Institute (Canada) Institute for Energy and Environmental Research (United States) Sayonara Nuclear Power Plants (Japan) Supportive Nuclear power groups World Nuclear Association, a confederation of companies connected with nuclear power production. (International) International Atomic Energy Agency (IAEA) Nuclear Energy Institute (United States) American Nuclear Society (United States) United Kingdom Atomic Energy Authority (United Kingdom) EURATOM (Europe) Atomic Energy of Canada Limited (Canada) Environmentalists for Nuclear Energy (International) Nuclear power publications Nuclear Power and the Environment Reaction Time: Climate Change and the Nuclear Option World Nuclear Industry Status Report In Mortal Hands Persons influential in nuclear power Scientists Enrico Fermi – an American physicist James Chadwick Politicians Harry Truman Ed Markey Naoto Kan Nobuto Hosaka Angela Merkel Engineers David Lochbaum Arnold Gundersen George Galatis See also Fusion power Future energy development German nuclear energy project Inertial fusion power plant Linear no-threshold model Polywell World energy resources and consumption References External links Nuclear Energy Institute – Beneficial Uses of Radiation Nuclear Technology Reactor Power Plant Technology Education – Includes the PC-based BWR reactor simulation. 
Alsos Digital Library for Nuclear Issues – Annotated Bibliography on Nuclear Power An entry to nuclear power through an educational discussion of reactors Argonne National Laboratory – Maps of Nuclear Power Reactors Briefing Papers from the Australian EnergyScience Coalition British Energy – Understanding Nuclear Energy / Nuclear Power Coal Combustion: Nuclear Resource or Danger?   Energy Information Administration provides lots of statistics and information How Nuclear Power Works IAEA Website The International Atomic Energy Agency IAEA's Power Reactor Information System (PRIS) Nuclear Power: Climate Fix or Folly? (2009) Nuclear Power Education Nuclear Tourist.com, nuclear power information Nuclear Waste Disposal Resources The World Nuclear Industry Status Reports website Wilson Quarterly – Nuclear Power: Both Sides TED Talk – Bill Gates on energy: Innovating to zero! LFTR in 5 Minutes – Creative Commons Film Compares PWR to Th-MSR/LFTR Nuclear Power. Nuclear power
Outline of nuclear power
Physics
1,750
102,519
https://en.wikipedia.org/wiki/Dolomite%20%28mineral%29
Dolomite is an anhydrous carbonate mineral composed of calcium magnesium carbonate, ideally CaMg(CO3)2. The term is also used for a sedimentary carbonate rock composed mostly of the mineral dolomite (see Dolomite (rock)). An alternative name sometimes used for the dolomitic rock type is dolostone. History As stated by Nicolas-Théodore de Saussure, the mineral dolomite was probably first described by Carl Linnaeus in 1768. In 1791, it was described as a rock by the French naturalist and geologist Déodat Gratet de Dolomieu (1750–1801), first in buildings of the old city of Rome, and later as samples collected in the Tyrolean Alps. Nicolas-Théodore de Saussure first named the mineral (after Dolomieu) in March 1792. Properties The mineral dolomite crystallizes in the trigonal-rhombohedral system. It forms white, tan, gray, or pink crystals. Dolomite is a double carbonate, having an alternating structural arrangement of calcium and magnesium ions. Unless it is in fine powder form, it does not rapidly dissolve or effervesce (fizz) in cold dilute hydrochloric acid as calcite does. Crystal twinning is common. Solid solution exists between dolomite, the iron-dominant ankerite and the manganese-dominant kutnohorite. Small amounts of iron in the structure give the crystals a yellow to brown tint. Manganese also substitutes in the structure, up to about three percent MnO. A high manganese content gives the crystals a rosy pink color. Lead, zinc, and cobalt can also substitute in the structure for magnesium. The mineral dolomite is closely related to huntite, Mg3Ca(CO3)4. Because dolomite can be dissolved by slightly acidic water, areas where dolomite is an abundant rock-forming mineral are important as aquifers and contribute to karst terrain formation. Formation Modern dolomite formation has been found to occur under anaerobic conditions in supersaturated saline lagoons such as those on the Rio de Janeiro coast of Brazil, namely Lagoa Vermelha and Brejo do Espinho. 
There are many other localities where modern dolomite forms, notably along sabkhas in the Persian Gulf, but also in sedimentary basins bearing gas hydrates and hypersaline lakes. It is often thought that dolomite nucleates with the help of sulfate-reducing bacteria (e.g. Desulfovibrio brasiliensis), but other microbial metabolisms have also been found to mediate dolomite formation. In general, low-temperature dolomite may occur in natural supersaturated environments rich in extracellular polymeric substances (EPS) and microbial cell surfaces. This likely results from complexation of both magnesium and calcium by carboxylic acids comprising the EPS. Vast deposits of dolomite are present in the geological record, but the mineral is relatively rare in modern environments. Reproducible, inorganic low-temperature syntheses of dolomite are yet to be performed. Usually, the initial inorganic precipitation of a metastable "precursor" (such as magnesium calcite) can easily be achieved. The precursor phase will theoretically change gradually into a more stable phase (such as partially ordered dolomite) during periodical intervals of dissolution and re-precipitation. The general principle governing the course of this irreversible geochemical reaction has been coined "breaking Ostwald's step rule". High diagenetic temperatures, such as those of groundwater flowing along deeply rooted fault systems affecting some sedimentary successions, or of deeply buried limestone rocks, can drive dolomitization. But the mineral is also volumetrically important in some Neogene platforms never subjected to elevated temperatures. Under such conditions of diagenesis, the long-term activity of the deep biosphere could play a key role in dolomitization, since diagenetic fluids of contrasting composition are mixed as a response to Milankovitch cycles. 
A recent biotic synthetic experiment claims to have precipitated ordered dolomite when anoxygenic photosynthesis proceeds in the presence of manganese(II). A still perplexing example of an organogenic origin is that of the reported formation of dolomite in the urinary bladder of a Dalmatian dog, possibly as the result of an illness or infection. Uses Dolomite is used as an ornamental stone, a concrete aggregate, and a source of magnesium oxide, as well as in the Pidgeon process for the production of magnesium. It is an important petroleum reservoir rock, and serves as the host rock for large strata-bound Mississippi Valley-Type (MVT) ore deposits of base metals such as lead, zinc, and copper. Where calcite limestone is uncommon or too costly, dolomite is sometimes used in its place as a flux for the smelting of iron and steel. Large quantities of processed dolomite are used in the production of float glass. In horticulture, dolomite and dolomitic limestone are added to soils and soilless potting mixes as a pH buffer and as a magnesium source. Pastures can be limed with dolomitic lime to raise their pH and where there is a magnesium deficiency. Dolomite is also used as the substrate in marine (saltwater) aquariums to help buffer changes in the pH of the water. Calcined dolomite is also used as a catalyst for destruction of tar in the gasification of biomass at high temperature. Particle physics researchers like to build particle detectors under layers of dolomite to enable the detectors to detect the highest possible number of exotic particles. Because dolomite contains relatively minor quantities of radioactive materials, it can insulate against interference from cosmic rays without adding to background radiation levels. In addition to being an industrial mineral, dolomite is highly valued by collectors and museums when it forms large, transparent crystals. 
The specimens that appear in the magnesite quarry exploited in Eugui, Esteribar, Navarra (Spain) are considered among the best in the world. See also References External links Sedimentary rocks Calcium minerals Magnesium minerals Carbonate minerals Dolomite group Trigonal minerals Minerals in space group 148 Evaporite Luminescent minerals Industrial minerals
Dolomite (mineral)
Chemistry
1,291
12,682,567
https://en.wikipedia.org/wiki/Saporin
Saporin is a protein that is useful in biological research applications, especially studies of behavior. Saporin is a so-called ribosome-inactivating protein (RIP), owing to its N-glycosidase activity, from the seeds of Saponaria officinalis (common name: soapwort). It was first described by Fiorenzo Stirpe and his colleagues in 1983 in an article that illustrated the unusual stability of the protein. Among the RIPs are some of the most toxic molecules known, such as ricin and abrin. Each of these toxins contains a second protein subunit, which inserts the RIP into a cell, enabling it to enzymatically inactivate the ribosomes, shutting down protein synthesis, stopping basic cell functions, resulting in cell death, and eventually causing death of the victim. Saporin has no second chain capable of inserting it into the cell. Thus it and the soapwort plant are safe to handle. This has aided its use in research. If given a method of entry into the cell, saporin becomes a very potent toxin, since its enzymatic activity is among the highest of all RIPs. The enzymatic activity of RIPs is unusually specific: a single adenine base is removed from the ribosomal RNA of the large subunit of the ribosome. This is the Achilles' heel of the ribosome; the removal of this base completely inhibits the ability of that ribosome to participate in protein synthesis. The fungal toxin alpha-sarcin cuts the ribosomal RNA at the adjacent base, also causing protein synthesis inhibition. The conversion of saporin into a toxin has been used to create a series of research molecules. Attachment of saporin to something that enters the cell will convert it into a toxin for that cell. If the agent is specific for a single cell type, by being an antibody specific for some molecule that is only presented on the surface of the target cell type, then a set group of cells can be removed. This has many applications, some more successful than others. 
Saporin is not the only molecule that is used in this way; the enzymatic chain of ricin, the RIP gelonin, the enzymatic chain of Pseudomonas exotoxin, the enzymatic chain of diphtheria toxin have also been used, again with variations in success. Immunotoxins consisting of a monoclonal antibody linked to saporin have been developed and evaluated in formal clinical trials in patients with leukaemia and lymphoma in the UK and Germany. One disadvantage of these types of immunotoxin for clinical use is their relatively narrow therapeutic window and associated potentially life-threatening toxicities at dose levels that are therapeutic. During the past 15 years the research group of Dr David Flavell at Southampton General Hospital in the UK have been investigating various ways of improving the potency and widening the therapeutic window for saporin-based immunotoxins thereby opening up new possibilities for this class of drug. Most recently saponins (not to be confused with saporin) from Gypsophila paniculata have been shown to significantly augment saporin-based immunotoxins directed against human cancer cells by several orders of magnitude. In the last 15 years, in research begun by R. G. Wiley of Vanderbilt University, saporin has been used mainly to target specific neuronal populations in lab animals and eliminate them. This allows the researcher to observe the behavioral changes and associate them with the neuronal populations that were eliminated. For instance, the elimination of the cholinergic neurons of the rat basal forebrain by the toxin created by attaching saporin to an antibody that attaches to, and then internalizes into, only these neurons has created a mimic for the crucial result of Alzheimer's disease in humans. In this way, collateral results of the progression of the disease or drugs for the intervention can be studied. 
More than 300 scientific articles have been published utilizing saporin for study of the nervous systems, and more than 15 specific toxins have been created. Saporin’s success is probably due to its stability. Santanche et al. have evaluated the physical characteristics of the protein and conclude “the remarkable resistance of saporin to denaturation and proteolysis suggests this protein as an ideal candidate for biotechnological applications”. References Proteins Ribosome-inactivating proteins
Saporin
Chemistry
911
1,482,841
https://en.wikipedia.org/wiki/Feather-plucking
Feather-plucking, sometimes termed feather-picking, feather damaging behaviour or pterotillomania, is a maladaptive, behavioural disorder commonly seen in captive birds that chew, bite or pluck their own feathers with their beak, resulting in damage to the feathers and occasionally the skin. It is especially common among parrots (order Psittaciformes), with an estimated 10% of captive parrots exhibiting the disorder. The areas of the body that are mainly pecked or plucked are the more accessible regions such as the neck, chest, flank, inner thigh and ventral wing area. Contour and down feathers are generally identified as the main target, although in some cases, tail and flight feathers are affected. Although feather-plucking shares characteristics with feather pecking commonly seen in commercial poultry, the two behaviours are currently considered to be distinct as in the latter, the birds peck at and pull out the feathers of other individuals. Feather-plucking has characteristics that are similar to trichotillomania, an impulse control disorder in humans, and hair-pulling which has been reported in mice, guinea pigs, rabbits, sheep and muskox, dogs and cats, leading to suggestions for a comparative psychology approach to alleviating these problems. Causes Feather-plucking is generally regarded as a multifactorial disorder, although three main aspects of bird keeping may be related to the problem: (1) cage size often restricts the bird's movements; (2) cage design and barrenness of the environment often do not provide sufficient behavioural opportunities to meet the bird's sensitivity, intelligence and behavioural needs; and (3) solitary housing, which fails to meet the high social needs of the bird. 
Social and environmental factors Early experience Feather-plucking is often attributed to a variety of social causes that may include poor socialisation or absence of parents during the rearing period; because of this, the individual subsequently expressing the disorder fails to learn appropriate preening behaviour. Several studies have focused on the importance of rearing methods (wild-caught, parent-raised, hand-reared). Isolation In captivity, pet birds are often kept isolated from conspecifics whereas in the wild they would form stable, sometimes large, flocks. These birds may not deal well with a solitary lifestyle. Deprivation of a social or sexual partner may lead to 'separation anxiety', ‘loneliness’, ‘boredom’, sexual ‘frustration’ and ‘attention-seeking’ behaviour. These factors may all contribute to feather-plucking, although no empirical studies have been performed to test these ideas. Barren environment Increasing environmental complexity can reduce feather-plucking; however, other studies have only managed to stabilise existing plumage problems. Re-directed foraging behaviour Increasing foraging opportunities can markedly reduce feather-plucking. This has many similarities with the redirected foraging behaviour hypothesis proposed for feather pecking in commercial poultry. Birds in captivity are usually given energy-dense, readily available food that is consumed rapidly, whereas in the wild they would have to spend many hours foraging to find this. It is thought that, in a barren environment, the 'excess' foraging time is then spent redirecting foraging to the feathers of other individuals. When 18 feather-plucking grey parrots (Psittacus erithacus) were provided with food in pipe feeders rather than bowls, their foraging time significantly increased by 73 minutes each day and their plumage improved noticeably within one month. 
Stress Feather-plucking has also been interpreted as a coping strategy for negative affective states (e.g. stress, loneliness, boredom) induced by inappropriate social or environmental factors. Findings in favour of the stress hypothesis include a study in which room position affected the occurrence of the disorder. Orange-winged amazon parrots (Amazona amazonica) that were housed in proximity and direct line of sight to the door showed significantly more feather-plucking compared to individuals housed further away from the door, indicating the presence of stressors as a causal factor. In addition, parrots that feather-pluck have been found to have higher levels of corticosterone, a hormone secreted by many animals when they are exposed to chronic stress. It has also been suggested that long day-lengths can cause feather-plucking; presumably this could relate to birds becoming overly tired and therefore stressed. Medical and physical factors Many medical causes underlying the development of feather-plucking have been proposed, including allergies (contact/inhalation/food), endoparasites, ectoparasites, skin irritation (e.g. by toxic substances, low humidity levels), skin desiccation, hypothyroidism, obesity, pain, reproductive disease, systemic illness (in particular liver and renal disease), hypocalcemia, psittacine beak and feather disease (PBFD), proventricular dilatation syndrome, colic, giardiasis, psittacosis, airsacculitis, heavy metal toxicosis, bacterial or fungal folliculitis, genetic feather abnormalities, nutritional deficiencies (in particular vitamin A) and dietary imbalances, and neoplasia. For many of the above-mentioned factors, a causative relationship or correlation has not been established; they may therefore merely be coincidental findings. Approximately 50% of parrots exhibiting feather-damaging behaviour have been diagnosed as having inflammatory skin disease based on paired skin and feather biopsies.
The birds try to relieve itching by grooming their feathers, but this often leads to over-grooming and eventually feather-plucking. Neurobiological factors Little is currently known about brain dysfunction in feather-plucking. However, it may be hypothesized that abnormal brain function is involved, especially in those cases that appear insensitive to treatment with behavioural intervention and environmental changes. Psychotropic therapy for birds has been suggested as treatment for feather-plucking, although responses seem variable. Genetic factors In orange-winged amazon parrots, a heritability estimate of 1.14 ± 0.27 was found for feather-plucking, indicating that a genetic basis exists. This study, however, only involved analysis of full siblings and a small number of birds, explaining the heritability value of greater than 1. Quantitative trait loci (QTL) analysis could provide more insight into possible genetic markers that are involved in feather-plucking. Treatment Veterinary treatment or an improved and more stimulating environment may help birds suffering from feather-plucking. Organic bitter sprays are sold in pet stores to discourage plucking, especially of newly grown feathers, although this may make general beak-based grooming difficult for the animal. This approach is not recommended, since it does not address the real reason why the bird is picking its feathers. Likewise, physical items such as collars or vests, which are commercially available or may be improvised by the parrot's owner from items such as pipe insulation tubes (placed around the neck) or socks (cut into a vest which the bird is made to wear), may prevent the bird from plucking by providing a barrier which makes the act more difficult, but they do not deal with the underlying cause of the feather-plucking.
Studies have shown that administration of haloperidol to affected birds causes a long-term reduction in obsessive feather-plucking; however, the birds always relapsed as soon as the medication was withdrawn. Clomipramine is also linked to minor long-term improvement in the condition, although it is not generally as effective as haloperidol. Administration of fluoxetine is also known to reduce feather-plucking activity, but only for very short periods of time, with the birds generally relapsing after several weeks of therapy and requiring a continually increasing dose of the medication. Use of fluoxetine for this condition is also linked to major relapse of feather-plucking when the medication is withdrawn, and it is known to cause severe psychological side-effects in certain birds. See also Abnormal behaviour of birds in captivity Animal psychopathology Comparative psychology Stereotypy List of abnormal behaviours in animals References External links Feather Plucking in African Grey Parrots Feather Picking at parrotloversforum.com Ricobird: A Solution for Self-Mutilation and Feather Picking Abnormal behaviour in animals Bird diseases Feathers Bird behavior Self-harm
Feather-plucking
Biology
1,739
7,490,163
https://en.wikipedia.org/wiki/Atmospheric%20Chemistry%20and%20Physics
Atmospheric Chemistry and Physics is an open access peer-reviewed scientific journal published by the European Geosciences Union. It covers research on the Earth's atmosphere and the underlying chemical and physical processes, spanning the altitude range from the land and ocean surface up to the turbopause, which includes the troposphere, stratosphere, and mesosphere. The main subject areas comprise atmospheric modelling, field measurements, remote sensing, and laboratory studies of gases, aerosols, clouds and precipitation, isotopes, radiation, dynamics, and biosphere and hydrosphere interactions. Article types published are research and review articles, technical notes, and commentaries. The journal has a two-stage publication process. In the first stage, papers that pass a rapid access peer review are immediately published on the Atmospheric Chemistry and Physics Discussions forum website. They are then subject to interactive public peer review, including the referees' comments (anonymous or attributed), additional comments by other members of the scientific community (attributed), and the authors' replies. In the second stage, if accepted, the final revised papers are published in the journal. To ensure publication precedence for authors, and to provide a lasting record of the scientific discussion, both the journal and the forum are permanently archived and fully citable. Abstracting and indexing This journal is abstracted and indexed by: Web of Science/Science Citation Index Current Contents Scopus Astrophysics Data System Chemical Abstracts GeoRef See also Atmospheric chemistry References External links Earth and atmospheric sciences journals Biweekly journals English-language journals Academic journals established in 2001 Geophysics journals Creative Commons Attribution-licensed journals European Geosciences Union academic journals Copernicus Publications academic journals Atmospheric chemistry
Atmospheric Chemistry and Physics
Chemistry
346
55,306,029
https://en.wikipedia.org/wiki/Phi%20Phoenicis
Phi Phoenicis, Latinized from φ Phoenicis, is a binary star system in the southern constellation of Phoenix. It is faintly visible to the naked eye with an apparent visual magnitude of 5.1. Based upon an annual parallax shift of as seen from Earth, it is located approximately 320 light years from the Sun. It is moving away with a heliocentric radial velocity of . Primary star The primary component is a B-type main-sequence star with a stellar classification of B9 V. It is a type of chemically peculiar star known as an HgMn star, which means it shows surface overabundances of certain elements, including mercury and manganese, and deficiencies in others, including helium and cobalt. The star has about three times the mass of the Sun and is radiating 87 times the Sun's luminosity from its photosphere at an effective temperature of about . The reconstruction of the surface of Phi Phoenicis by Doppler imaging showed it to be heterogeneous, with regions of different elemental abundances. In particular, the star forms spots with high or low abundances of yttrium, strontium, titanium, and chromium. The comparison of the abundance maps in different epochs revealed that the spot configurations vary on monthly or yearly time scales. The spectral lines of the irregularly distributed elements show variations that allowed a precise rotation period of 9.53 days to be determined, and also show evidence of long term abundance changes. The analysis of the spots suggests that the rotation axis is inclined to the line of sight by an angle of about 53°, and shows evidence of very weak differential rotation. The starspots probably cause millimagnitude variations in the brightness of Phi Phoenicis, even though there are no precise observations to confirm this. The origin of the starspots and chemical anomalies in HgMn stars is uncertain and has generated controversy.
Typically, such as for Ap and Bp stars, inhomogeneously distributed elements are attributed to large-scale organized magnetic fields, but there has been no conclusive detection of magnetic fields in HgMn stars. In 2012, a study claimed to have detected a weak magnetic field in Phi Phoenicis correlated with the spots, but this has been contested. It is believed that diffusion processes in the atmosphere may be related to the chemical anomalies, but this does not quantitatively explain the observed variations. Secondary star Phi Phoenicis is a single-lined spectroscopic binary with a period of 1126 days and an eccentricity of 0.59. There is no evidence for additional stars in the system, but in the past this has been considered a triple system, due to the detection of a wrong spectroscopic period. The variability of the radial velocity of Phi Phoenicis was discovered in the first spectroscopic observations of the star in 1911, and was confirmed in 1982, but the data were still inconclusive and no orbit was determined. The first orbital solution was finally published in 1999, yielding a period of 41.4 days. At the same time, in 1997, the Hipparcos Catalogue was published, revealing Phi Phoenicis to be an astrometric binary with an estimated period of 878 days (circular orbit solution). Thus Phi Phoenicis became a triple star system, with a visible star, a spectroscopic companion, and an astrometric companion. A 2013 study, with new high-resolution radial velocity data from the FEROS, HARPS and CORALIE spectrographs, showed that the period of the spectroscopic orbit is actually closer to 1126 days, and not 41.4 days; this indicates that the spectroscopic companion is the same one that the astrometric data detected. In the same year another study fitted the astrometric data to the spectroscopic orbit, revealing the orbital inclination of the system and allowing the properties of the secondary star to be estimated.
The orbit of the system is highly eccentric and is seen almost edge-on, with an inclination of 93 ± 4.7°. The high uncertainty means that the occurrence of eclipses is possible, despite being unlikely. From this inclination and assuming a mass of for the primary, the binary mass function can be used to calculate a mass of for the secondary. The secondary star is assumed to be a yellow dwarf with an effective temperature around , and is 5.7 visual magnitudes fainter than the primary. The average separation between the two stars is estimated at around . References B-type main-sequence stars Mercury-manganese stars Spectroscopic binaries Phoenix (constellation) Phoenicis, Phi 0558 Durchmusterung objects 011753 008882
Phi Phoenicis
Astronomy
970
4,710,561
https://en.wikipedia.org/wiki/Scheduler%20activations
Scheduler activations are a threading mechanism that, when implemented in an operating system's process scheduler, provide kernel-level thread functionality with user-level thread flexibility and performance. This mechanism uses a so-called "N:M" strategy that maps some N number of application threads onto some M number of kernel entities, or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "N:M" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. Scheduler activations were proposed by Anderson, Bershad, Lazowska, and Levy in Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism in 1991. Support was implemented in the NetBSD kernel by Nathan Williams but has since been abandoned in favor of 1:1 threading. FreeBSD had a similar threading implementation called Kernel Scheduled Entities, which was likewise retired in favor of 1:1 threading. Scheduler activations were also implemented as a patch for the Linux kernel by Vincent Danjean: Linux Activations, the user-level part being done in the Marcel thread library. References Threads (computing)
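The "N:M" mapping described above can be illustrated with a toy sketch in Python. The function name and the round-robin policy are invented for illustration only; real scheduler activations additionally use kernel upcalls to notify the user-level scheduler of blocking events, which this sketch does not model.

```python
# Toy illustration of N:M threading: N user-level threads are multiplexed
# onto M kernel entities ("virtual processors"). Only the mapping idea is
# sketched here, not the upcall machinery of real scheduler activations.
def assign_round_robin(n_threads, m_vcpus):
    """Map each user thread to a virtual processor, round-robin."""
    return {thread: thread % m_vcpus for thread in range(n_threads)}

# Five application threads shared across two kernel entities:
print(assign_round_robin(5, 2))  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0}
```

With M equal to N this degenerates to 1:1 threading, and with M equal to 1 it degenerates to N:1 (pure user-level) threading, which is why N:M is described as a compromise between the two.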
Scheduler activations
Technology
274
8,389,273
https://en.wikipedia.org/wiki/Magnapinna%20sp.%20B
Magnapinna sp. B is an undescribed species of bigfin squid known only from a single immature specimen collected in the northern Atlantic Ocean. Description It is characterised by its dark epidermal pigmentation, which is epithelial, as opposed to the chromatophoral pigmentation found in other Magnapinna species. Discovery The only known specimen of Magnapinna sp. B is a juvenile male of mantle length (ML) held in the Bergen Museum. It was caught by the R/V G.O. SARS (MAR-ECO cruise super station 46, local station 374) on July 11, 2004, at . References Vecchione, M. & Young, R. E. (2006). "The squid family Magnapinnidae (Mollusca; Cephalopoda) in the North Atlantic with a description of Magnapinna atlantica, n. sp.". Proc. Biol. Soc. Wash. 119(3): 365–372. External links Tree of Life web project: Magnapinna sp. B Bigfin squid Undescribed mollusc species Species known from a single specimen
Magnapinna sp. B
Biology
249
12,930,396
https://en.wikipedia.org/wiki/XHTML%2BVoice
XHTML+Voice (commonly X+V) is an XML language for describing multimodal user interfaces. The two essential modalities are visual and auditory. Visual interaction is defined like most current web pages via XHTML. Auditory components are defined by a subset of VoiceXML. Interfacing the voice and visual components of X+V documents is accomplished through a combination of ECMAScript (JavaScript) and XML Events. Voice input Voice input or speech recognition is based on grammars that define the set of possible input text. In contrast to the probabilistic approach employed by popular software packages such as Dragon NaturallySpeaking, the grammar-based approach provides the recognizer with important contextual information that significantly boosts recognition accuracy. The specific formats for grammars include JSGF. Voice output Voice output or speech synthesis can read any string at virtually any time. Pitch, volume, and other characteristics can be customized using CSS and Speech Synthesis Markup Language (SSML); however, the Opera web browser does not currently support all of these features. MIME types The previously recommended MIME type for any X+V document is application/xhtml+voice+xml, which is what the Opera browser uses. Opera will also interpret X+V documents served as text/xml. The current recommended MIME type for any X+V document is application/xv+xml. Since most web servers associate the .xml extension with text/xml, an xml extension is a fairly safe way of making static X+V document files browsable. X+V-enabled browsers The most commonly used X+V browser is the Opera browser. Users of the Opera browser can enable X+V support through steps described at https://web.archive.org/web/20080516174104/http://www.opera.com/voice . Voice is not yet supported in Opera Mini or on platforms other than Windows. Detecting support for X+V is best done from the server by checking the HTTP header "Accept" for the MIME type application/xhtml+voice+xml.
Here is some PHP code that echoes "true" if and only if the requesting browser supports XHTML+Voice:

<?php
/* The following script echoes "true" if and only if
   the requesting browser supports XHTML+Voice. */
// Determine whether the browser is sending an Accept header.
if (isset($_SERVER['HTTP_ACCEPT'])) {
    $accept = $_SERVER['HTTP_ACCEPT'];
    // If they omit the MIME type from Accept then assume no support.
    if (strpos($accept, 'application/xhtml+voice+xml') === false) {
        echo 'false';
    } else {
        echo 'true';
    }
} else {
    echo 'false';
}
?>

Related technology Speech Application Language Tags (SALT) is a very similar format developed by Microsoft in 2001 to compete with VoiceXML and XHTML+Voice. SALT also provides users with multimodal support, including grammar-based recognition and speech-synthesized output. The main differences lie in who provides support: many companies, in particular IBM and Opera Software, support VoiceXML and XHTML+Voice by providing various development tools, whereas SALT is supported almost exclusively by Microsoft through products such as the Microsoft Speech Application SDK and Microsoft Speech Server. External links XHTML+Voice v1.2 Voice - Opera Developer Community XHTML+Voice Programmer's Guide Download Opera Web Browser The SpeechWeb Project RFC 4374 on MIME type Video Demonstration of XHTML+Voice Page XML-based standards
XHTML+Voice
Technology
760
11,422,165
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z165
In molecular biology, Small nucleolar RNA Z165 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA Z165 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. Plant snoRNA Z165 was identified in a screen of Oryza sativa. References External links Small nuclear RNA
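The conserved C box and D box motifs described above can be located with a simple string scan. A toy sketch in Python follows; the example sequence is invented for illustration and is not the real Z165 sequence.

```python
# Locate the conserved box C (UGAUGA, near the 5' end) and box D (CUGA,
# near the 3' end) motifs of a C/D box snoRNA by simple string search.
def find_cd_boxes(rna):
    return {
        "C box": rna.find("UGAUGA"),   # first occurrence from the 5' end
        "D box": rna.rfind("CUGA"),    # last occurrence, nearest the 3' end
    }

# Invented example sequence, for illustration only:
example = "GGAUGAUGACCUAGCAUGCAUGCAUGCCUGACC"
print(find_cd_boxes(example))  # {'C box': 3, 'D box': 27}
```

Real motif searches allow for the sequence variation seen in natural C and D boxes, so dedicated tools use position-weight matrices or covariance models rather than exact string matching.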
Small nucleolar RNA Z165
Chemistry
197
24,662,917
https://en.wikipedia.org/wiki/Journal%20of%20Sexual%20Aggression
The Journal of Sexual Aggression is a peer-reviewed academic journal. It provides an international and interdisciplinary forum for the dissemination of original research findings, reviews, theory, and practice developments regarding sexual aggression in all its forms. The Journal aims to engage readers from a wide range of research, practice and policy areas, including prevention science, crime science, public health, law and regulation, policing and investigation, prosecution and sentencing, corrections and youth justice, child protection, victim advocacy and support, clinical and risk assessment, and offender treatment and risk management. The Journal recognises that human sexual aggression is a global problem, and therefore includes high-quality contributions, written in English, from around the world. It is the official journal of the National Organisation for the Treatment of Abuse (NOTA). The Editor-in-Chief of the journal as of 2021 is Associate Professor Nadine McKillop. Abstracting and indexing The journal is abstracted and indexed in Applied Social Sciences Index and Abstracts, CINAHL, Criminal Justice Abstracts, Family Studies Database, International Bibliography of the Social Sciences, PsycINFO, and PASCAL. References External links National Organisation for the Treatment of Abuse Sexology journals Criminology journals Triannual journals Taylor & Francis academic journals English-language journals Academic journals established in 1994
Journal of Sexual Aggression
Biology
261
3,475,555
https://en.wikipedia.org/wiki/%2855637%29%202002%20UX25
(provisional designation ) is a trans-Neptunian object that orbits the Sun in the Kuiper belt beyond Neptune. It briefly garnered scientific attention when it was found to have an unexpectedly low density of about 0.82 g/cm3. It was discovered on 30 October 2002, by the Spacewatch program; as of August 2024, the object has yet to be named. has an absolute magnitude of about 4.0, and Spitzer Space Telescope results estimate it to be about 681 km in diameter. The low density of this and many other mid-sized TNOs implies that they have likely never compressed into fully solid bodies, let alone differentiated or collapsed into hydrostatic equilibrium, and so are highly unlikely to be dwarf planets. Numbering and naming This minor planet was numbered (55637) by the Minor Planet Center on 16 February 2003 (). , it has not been named. Classification has a perihelion of 36.7 AU, which it will next reach in 2065. As of 2020, is 40 AU from the Sun. The Minor Planet Center classifies as a cubewano, while the Deep Ecliptic Survey (DES) classifies it as scattered-extended. The DES, using a 10 My integration (last observation: 2009-10-22), shows it with a minimum perihelion (qmin) distance of 36.3 AU. It has been observed 212 times, with precovery images dating back to 1991. Physical characteristics A variability of the visual brightness was detected which could be fit to a period of 14.38 or 16.78 h (depending on a single-peaked or double-peaked curve). The light-curve amplitude is ΔM = . The analysis of combined thermal radiometry of from measurements by the Spitzer Space Telescope and the Herschel Space Observatory indicates an effective diameter of and albedo of 0.107. Assuming equal albedos for the primary and secondary, this leads to size estimates of ~664 km and ~190 km, respectively. If the albedo of the secondary is half that of the primary, the estimates become ~640 and ~260 km, respectively.
Using an improved thermophysical model, slightly different sizes were obtained for UX25 and its satellite: 659 km and 230 km, respectively. has a red, featureless spectrum in the visible and near-infrared, but has a negative slope in the K-band, which may indicate the presence of methanol compounds on the surface. It is redder than Varuna, unlike its neutral-colored "twin" , in spite of similar brightness and orbital elements. Composition With a density of 0.82 g/cm3, assuming that the primary and satellite have the same density, is one of the largest known solid objects in the Solar System that is less dense than water. Why this should be is not well understood, because objects of its size in the Kuiper belt often contain a fair amount of rock and are hence fairly dense. To have a composition similar to other large KBOs, it would have to be exceptionally porous, which was believed to be unlikely given how readily water ice compacts; this low density thus astonished astronomers. Studies by Grundy et al. suggest that at the low temperatures that prevail beyond Neptune, ice is brittle and can support significant porosity in objects significantly larger than , particularly if rock is present; the low density could thus be a consequence of this object failing to warm sufficiently during its formation to significantly deform the ice and fill these pore spaces. Satellite The discovery of a minor-planet moon was reported in IAUC 8812 on 22 February 2007. The satellite was detected using the Hubble Space Telescope in August 2005. The satellite was found at 0.16 arcsec from the primary, with an apparent magnitude difference of 2.5. It orbits the primary in days, at a distance of , yielding a system mass of . The eccentricity of the orbit is . This moon is estimated to be in diameter. Assuming the same albedo as the primary, it would have a diameter of 190 km; assuming an albedo of 0.05 (typical of other cold, classical KBOs of similar size), a diameter of 260 km.
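The quoted size and albedo are roughly consistent with the standard photometric size relation for small Solar System bodies, D = (1329 km / √p_V) · 10^(−H/5). A quick cross-check in Python using the values quoted in this article (H ≈ 4.0, p_V ≈ 0.107); note this is only a rough photometric estimate, not the thermal modelling on which the published diameters rely.

```python
import math

# Standard relation between absolute magnitude H, geometric albedo p_V,
# and effective diameter D (in km) for a small Solar System body.
def diameter_km(h_mag, albedo):
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

# Values quoted in the article for (55637) 2002 UX25:
print(round(diameter_km(4.0, 0.107)))  # 644, in line with the ~640-680 km estimates
```

The small spread between this figure and the radiometric values reflects both the uncertainty in H and the simplifications of the photometric relation.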
References External links MPEC 2002-V08 Astronomers surprised by large space rock less dense than water, Ron Cowen, Nature, 13 November 2013 Scientist finds medium sized Kuiper belt object less dense than water, Bob Yirka, Phys.org, 14 November 2013 Classical Kuiper belt objects Discoveries by the Spacewatch project Possible dwarf planets Binary trans-Neptunian objects 20021030 Solar System
(55637) 2002 UX25
Astronomy
945
13,543,970
https://en.wikipedia.org/wiki/Wellsite%20Information%20Transfer%20Specification
The Wellsite Information Transfer Specification (WITS) is a specification for the transfer of drilling rig-related data. This petroleum industry standard is recognized by a number of companies internationally and is supported by many hardware devices and software applications. WITS is a multi-layered specification: Layer 0 describes an ASCII-based transfer specification Layer 1 describes a binary-based format based on 25 predefined fixed-size records and the Log Information Standard (LIS) data-transmission specification Layer 2 describes bidirectional communication using LIS Comment records Layer 2b describes buffering of data Layer 4 extends the previous layers to use a different data exchange format, RP66 Though still in active use as of 2013, the specification has been superseded by the XML-based WITSML. See also Wellsite information transfer standard markup language Drilling technology Petroleum technology
Wellsite Information Transfer Specification
Chemistry,Engineering
172
7,715,182
https://en.wikipedia.org/wiki/Nepenthes%20%C3%97%20merrilliata
Nepenthes × merrilliata (; a blend of merrilliana and alata) is a natural hybrid involving N. alata and N. merrilliana. Like its two parent species, it is endemic to the Philippines, but limited by the natural range of N. merrilliana to Samar as well as Mindanao and its offshore islands. References Fleming, R. 1979.   Carnivorous Plant Newsletter 8(1): 10–12. Mann, P. 1998. A trip to the Philippines. Carnivorous Plant Newsletter 27(1): 6–11. McPherson, S.R. & V.B. Amoroso 2011. Field Guide to the Pitcher Plants of the Philippines. Redfern Natural History Productions, Poole. CP Database: Nepenthes × merrilliata Carnivorous plants of Asia merrilliata Nomina nuda Flora of the Visayas Flora of Mindanao Plants described in 1979
Nepenthes × merrilliata
Biology
184
17,306,295
https://en.wikipedia.org/wiki/Echo%20%28computing%29
In telecommunications, echo is the local display of data, either initially as it is locally sourced and sent, or finally as a copy of it is received back from a remote destination. Local echo is where the local sending equipment displays the outgoing sent data. Remote echo is where the display is a return copy of data as received remotely. Both are used together in a computed form of error detection to ensure that data received at the remote destination of a telecommunication are the same as data sent from the local source (a/k/a echoplex, echo check, or loop check). When (two) modems communicate in echoplex mode, the remote modem echoes whatever it receives from the local modem. Terminological confusion: echo is not duplex A displayed 'echo' is independent of 'duplex' (or any) telecommunications transmission protocol. Probably from technical ignorance, "half-duplex" and "full-duplex" are used as slang for 'local echo' (a/k/a echo on) and 'remote echo', respectively, as typically they accompany one another. This usage is strictly incorrect and causes confusion (see duplex). Typically 'local echo' accompanies half-duplex transmission, which effectively doubles channel bandwidth by not repeating (echoing) data back from its remote destination, as is reserved for 'full duplex' (which has only half of the bandwidth of 'half duplex'). Half-duplex can be set to 'echo off' for no echo at all. One example of 'local echo' used together with 'remote echo' (which requires full duplex) is for error checking of pairs of data characters or chunks (echoplex), ensuring that each received copy matches what was sent (otherwise it is just an extraneous annoyance). Similarly, for another example, in the case of the TELNET communications protocol, a local echo protocol operates on top of a full-duplex underlying protocol. The TCP connection over which the TELNET protocol is layered provides a full-duplex connection, with no echo, across which data may be sent in either direction simultaneously.
By contrast, the Network Virtual Terminal that the TELNET protocol itself incorporates is a half-duplex device with (by default) local echo. The devices that echo locally Terminals are one of the things that may perform echoing for a connection. Others include modems, some form of intervening communications processor, or even the host system itself. For several common computer operating systems, it is the host system itself that performs the echoing, if appropriate (which it isn't for, say, entry of a user password when a terminal first connects and a user is prompted to log in). On OpenVMS, for example, echoing is performed as necessary by the host system. Similarly, on Unix-like systems, local echo is performed by the operating system kernel's terminal device driver, according to the state of a device control flag, maintained in software and alterable by applications programs via an ioctl() system call. The actual terminals and modems connected to such systems should have their local echo facilities switched off (so that they operate in no echo mode), lest passwords be locally echoed at password prompts, and all other input appear echoed twice. This is as true for terminal emulator programs, such as C-Kermit, running on a computer as it is for real terminals. Controlling local echo Terminal emulators Most terminal emulator programs have the ability to perform echo locally (which sometimes they misname "half-duplex"): In the C-Kermit terminal emulator program, local echo is controlled by the SET TERMINAL ECHO command, which can be either SET TERMINAL ECHO LOCAL (which enables local echoing within the terminal emulator program itself) or SET TERMINAL ECHO REMOTE (which disables local echoing, leaving it up to another device in the communications channel—be that the modem or the remote host system—to perform echoing as appropriate). In ProComm it is the combination, which is a hot key that may be used at any time to toggle local echo on and off.
In the Terminal program that came with Microsoft Windows 3.1, local echo is controlled by a checkbox in the "Terminal Preferences" dialogue box accessed from the menu of the terminal program's window. Modems The Hayes commands that control local echo (in command mode) are for off and for on. For local echo (in data mode), the commands are and respectively. Note the reversal of the suffixed digits. Unlike the "" commands, the "" commands are not part of the EIA/TIA-602 standard. Host systems Some host systems perform local echo themselves, in their device drivers and so forth. In Unix and POSIX-compatible systems, local echo is a flag in the POSIX terminal interface, settable programmatically with the tcsetattr() function. The echoing is performed by the operating system's terminal device (in some way that is not specified by the POSIX standard). The standard utility program that alters this flag programmatically is the stty command, using which the flag may be altered from shell scripts or an interactive shell. The command to turn local echo (by the host system) on is stty echo and the command to turn it off is stty -echo. On OpenVMS systems, the operating system's terminal driver normally performs echoing. The terminal characteristic that controls whether it does this is the ECHO characteristic, settable with the DCL command SET TERMINAL /ECHO and unsettable with SET TERMINAL /NOECHO. Footnotes References What supports what Sources used Error detection and correction Modems Data transmission
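The POSIX echo flag discussed under Host systems can also be toggled programmatically, as stty does. A minimal sketch using Python's standard termios module follows; the function name is invented, and the demonstration uses a pseudo-terminal pair so it runs without a real controlling terminal.

```python
import pty
import termios

def set_echo(fd, enabled):
    """Set or clear the terminal ECHO flag, like `stty echo` / `stty -echo`."""
    attrs = termios.tcgetattr(fd)      # attrs[3] holds the local-mode flags (lflag)
    if enabled:
        attrs[3] |= termios.ECHO
    else:
        attrs[3] &= ~termios.ECHO
    termios.tcsetattr(fd, termios.TCSANOW, attrs)

# Demonstrate on a pseudo-terminal pair rather than the controlling terminal:
master_fd, slave_fd = pty.openpty()
set_echo(slave_fd, False)
print((termios.tcgetattr(slave_fd)[3] & termios.ECHO) == 0)  # True: echo is off
set_echo(slave_fd, True)
print((termios.tcgetattr(slave_fd)[3] & termios.ECHO) != 0)  # True: echo restored
```

This is the same flag that the stty echo and stty -echo commands manipulate; the C equivalent uses tcgetattr()/tcsetattr() on the c_lflag field of struct termios.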
Echo (computing)
Engineering
1,168
11,570,339
https://en.wikipedia.org/wiki/Granulobasidium%20vellereum
Granulobasidium vellereum is a species of fungus in the family Cyphellaceae. A plant pathogen associated with white rot of angiospermous logs, slash, and living trees, it has been found in Sweden and Denmark, and in North America. Originally described as Corticium vellereum in 1885, it was transferred to the genus Granulobasidium by Walter Jülich in 1979. References Fungi described in 1885 Fungi of Europe Fungi of North America Fungal tree pathogens and diseases Agaricales Fungus species
Granulobasidium vellereum
Biology
110
75,170,937
https://en.wikipedia.org/wiki/Garden%20Island%20Tunnel%20System
The Garden Island Tunnel System, also known as the Garden Island tunnels, Garden Island Tunnel Complex and Potts Point Tunnels, is a former tunnel warfare system in Garden Island, Sydney, Australia. Used in World War II by the Royal Australian Navy from 1941, the tunnels were dug into the sandstone beneath Potts Point after the Japanese attacked Pearl Harbor, to shelter the men working at the naval base from air raids. Some of the tunnels bear names such as Petticoat Lane (named after the London landmark), North-West Passage and Lambeth Walk. The tunnel system featured a power station, a command centre, offices and air raid shelters. Today, the tunnels and chambers are used for electrical wiring and communications. History Construction Four days after the Japanese air raid on Pearl Harbor on December 7, 1941, excavation began on the tunnels in Potts Point. The tunnel system was approved at an estimated cost of £15,150. The shelling of Sydney in June 1942 added urgency to the excavation of the five interlinked tunnels and multiple chambers underneath the base's northern point. A second tunnel system that runs all the way to Kings Cross exists, but little is known about it. The Garden Island tunnels are perfectly straight, resembling a wild west mine, and concrete-lined, with recesses cut into the rock for stretcher bearers, casualty clearing stations, backup generators, a telephone exchange, bathrooms and toilet facilities, some of which are located in the northernmost bunker. Other tunnels were constructed to store pumping valves at the Captain Cook Graving Dock. World War II In early 1941, former prime minister of Australia and minister for the navy, Billy Hughes, stated that he did not want to alarm people by having them think Sydney was about to be bombarded. Hughes organized over a hundred Brodie helmets for the air raid officers, as well as training for them in case of chemical warfare.
A pit constructed in the 1800s would have been used to store provisions if the island were besieged. In December 1941, due to concerns that an aerial attack on Sydney could follow the Pearl Harbor bombing, the five tunnels were constructed to provide an airstrike shelter for the 2,500 waterfront staff at Garden Island Naval Base, in case the Royal Australian Navy base there were attacked. The tunnel system would have been able to protect the workers on Garden Island from 70 tons of bombs dropped within 24 hours. The tunnels did not provide room for the general public, as civilians could have sought cover in nearby Sydney railway stations. The tunnels were also used to transport guns and ammunition from one side of the island to the other. The demand for protection from air attack became more serious in 1942, when Japan occupied Singapore on 15 February and attacked Darwin on 19 February, and eventually, on 31 May, when three Japanese midget submarines entered and attacked Sydney Harbour. Post-war In the 1960s, some tables, papers on walls and old telephones still remained, but the tunnel complex was abandoned and its timber roof supports either decayed or were consumed by termites. In the 1970s, the tunnels were renovated with new steel roof supports and concrete after the soft sandstone walls and timber struts became weathered. Today, the tunnels are primarily used for storage and to provide fuel and communication lines at the naval base. Upgrade As of October 2023, the defence base is undergoing a half-billion-dollar, three-year infrastructure upgrade. The warship wharves have been adjusted, in addition to the replacement of a fuel tank for the ships and improvements to the island's services, including fuel, electricity, sewerage and water, which run through the tunnels.
A part of Petticoat Lane, which was used to insulate boilers and piping on ships, is gated off with a warning sign that indicates the presence of asbestos. See also Sydney Harbour defences Bradleys Head Fortification Complex Georges Head Battery Middle Head Fortifications References Tunnels in Sydney Tunnel warfare Tunnels completed in 1941 Forts in New South Wales History of Sydney Buildings and structures in Sydney Former military installations in New South Wales World War II sites in Australia Sydney in World War II Bunkers in Oceania
Garden Island Tunnel System
Engineering
855
43,468,546
https://en.wikipedia.org/wiki/Chamfered%20square%20tiling
In geometry, the chamfered square tiling or semitruncated square tiling is a tiling of the Euclidean plane. It is a square tiling with each edge chamfered into new hexagonal faces. It can also be seen as the intersection of two truncated square tilings with offset positions. Its appearance is similar to a truncated square tiling, except that only half of the vertices have been truncated, leading to its descriptive name semitruncated square tiling. Usage and names in tiling patterns In floor tiling, this pattern with small squares has been labeled as Metro Broadway Matte and alternate corner square tile. With large squares it has been called a Dijon tile pattern. As 3 rows of rectangles, it has been called a basketweave tiling and triple block tile pattern. Variations Variations can be seen in different degrees of truncation. As well, geometric variations exist within a given symmetry. The second row shows the tilings with a 45 degree rotation, which also look a little different. Lower symmetry forms are related to the Cairo pentagonal tiling with axial edges expanded into rectangles. The chiral forms can be seen as two overlapping Pythagorean tilings. Semikis square tiling The dual tiling looks like a square tiling with half of the squares divided into central triangles. It can be called a semikis square tiling, as alternate squares have the kis operator applied. It can be seen as 4 sets of parallel lines. References Euclidean tilings
Chamfered square tiling
Physics,Mathematics
312
29,409,353
https://en.wikipedia.org/wiki/Refractory%20lined%20expansion%20joint
A refractory lined expansion joint is an assembly used in a pipeline to allow it to expand and contract as temperature conditions move from hot to cold, and helps to ensure that the system remains functional. The refractory lining can be vibra cast insulation with anchors, abrasion resistant refractory in hex mesh, gunned insulating refractory, or poured insulating refractory. Refractory lined expansion joints can be hinged, in-line pressure balanced, gimbal, or tied-universal, depending on the temperature, pressure, movement and flow media conditions. Refractory lined expansion joints are used in extremely high temperature and high pressure applications and are designed to withstand extreme environments. The refractory lining within the metallic expansion joint bellows reduces the pipe wall temperature by 300 °F to 450 °F, depending upon the thickness of the refractory lining. The lining also helps to withstand the abrasive catalyst material in FCCU applications. Applications Fluid catalytic cracking units (FCCU) Furnaces Hot gas turbines Styrene plants Fluidized bed boilers Kilns Power recovery trains Thermal oxidizers References Structural connectors
Refractory lined expansion joint
Physics,Engineering
236
44,989,402
https://en.wikipedia.org/wiki/Procurement%20G6
The Procurement G6 is an informal group of six national central purchasing bodies. It is also known as the Multilateral Meeting on Government Procurement (MMGP). Members Members of the Procurement G6 are: Canada: Public Services and Procurement Canada; Chile: ChileCompra; Italy: Consip; South Korea: Public Procurement Service; United Kingdom: Crown Commercial Service; United States: General Services Administration. Scope Each country shares experiences about: e-procurement systems; challenges, opportunities and actions for small and medium enterprises (SMEs); their qualification systems for enterprises; instruments and indicators for the performance measurement of the central purchasing bodies and their impact on the economic system, on the public sector and on the enterprises; actions to minimize the risk of corruption; the green procurement scenarios. Past meetings of the Procurement G6 have included: June 15–16, 2009 – San Antonio, United States; June 10–12, 2010 – Rome, Italy; September 24–26, 2013 – Seoul, South Korea; May 24–25, 2016 – Rome, Italy; October 10–11, 2018 – Vancouver, Canada. See also Agreement on Government Procurement Auction E-procurement Expediting Global sourcing Group purchasing organization Purchasing Strategic sourcing Notes and references External links Consip CC – Dirección ChileCompra GSA – General Services Administration OGC – Office of Government Commerce PPS – Public Procurement Service PWGSC – Public Works and Government Services Canada Systems engineering Public eProcurement
Procurement G6
Engineering
270
47,293,002
https://en.wikipedia.org/wiki/NGC%203059
NGC 3059 is a barred spiral galaxy. It is located in the constellation of Carina. The galaxy can be described as being faint, large, and irregularly round. It was discovered on February 22, 1835, by John Herschel. The galaxy has been calculated to be 45–50 million light-years from Earth. References External links NGC 3059 - Galaxy - SKY-MAP NGC 3059 - DeepSkyPedia :: Astronomy The kinematics of the barred spiral galaxy NGC3059* - Harvard.edu 3059 Carina (constellation) 18350222 Barred spiral galaxies 028298
NGC 3059
Astronomy
127
34,702,688
https://en.wikipedia.org/wiki/Trifluoromethyl%20hypofluorite
Trifluoromethyl hypofluorite is an organofluorine compound with the chemical formula CF3OF. It exists as a colorless gas at room temperature and is highly toxic. It is a rare example of a hypofluorite (a compound with an O−F bond). It can be seen as an analogue of methanol in which every hydrogen atom is replaced by a fluorine atom. It is a trifluoromethyl ester of hypofluorous acid. It is prepared by the reaction of fluorine gas with carbon monoxide: CO + 2 F2 → CF3OF. The gas hydrolyzes only slowly at neutral pH. Use in organic chemistry The compound is a source of electrophilic fluorine. It has been used for the preparation of α-fluoroketones from silyl enol ethers. Behaving like a pseudohalogen, it adds to ethylene to give the ether: CF3OF + C2H4 → CF3OCH2CH2F. References Trifluoromethoxy compounds Fluorinating agents Hypofluorites
Trifluoromethyl hypofluorite
Chemistry
216
866,925
https://en.wikipedia.org/wiki/System%20Administrator%20Appreciation%20Day
System Administrator Appreciation Day, also known as Sysadmin Day, SysAdminDay, or BOFH Day, is an annual event created by system administrator Ted Kekatos. The event exists to show appreciation for the work of sysadmins and other IT workers. It is celebrated on the last Friday in July. History The first System Administrator Appreciation Day was celebrated on July 28, 2000. Kekatos was inspired to create the special day by a Hewlett-Packard magazine advertisement in which a system administrator is presented with flowers and fruit baskets by grateful co-workers as thanks for installing new printers. Kekatos had just installed several of the same model of printer at his workplace. The official SysAdmin Day website includes many suggestions for the proper observation of the holiday; the most common is cake and ice cream. There are many international websites which celebrate the holiday. Many geek and Internet culture businesses, such as O'Reilly Media, also honor the holiday with special product offerings and contests. Various filk songs have been written to commemorate the day. The songs have reached a level of popularity where they are also covered by other performers. Attempts to have Hallmark Cards recognize the holiday as a Hallmark holiday have yet to be realized, though many e-card websites already have special SysAdminDay cards available. The holiday has been recognized and promoted by many IT professional organizations, including the League of Professional System Administrators, SAGE/USENIX and SNIPhub. "Sysadmin Day" around the world Russia Since 2006, the All-Russia System Administrator Gathering has been held annually near the city of Kaluga. Today, thousands of people come there from more than 150 cities of Russia, Ukraine, Belarus and Kazakhstan. There is also a separate event near Kaluga called LinuxFest, which is held on the last Friday in July, and a similar regular event in Novosibirsk on the bank of the Novosibirsk Reservoir.
Ekaterinburg technicians also celebrate the last Friday of July. Before 2010 the gatherings were spontaneous; since 2010 the event has had official permission from the Ekaterinburg authorities to be held annually near the famous Keyboard monument in the city centre, with support from IT companies. There are "sysadmin competitions" between participants: they throw defunct PC mice for maximum distance, toss such mice (with their "tails" cut off) into an empty computer case placed at a distance and used as a basket, and powerlift a bundle of unused HDDs. In 2016, during one such event, a memorial plaque was set up at the monument to honor the Ekaterinburg FidoNet organizer and Internet pioneer, who had died shortly before. See also World Information Society Day Programmers' Day References External links The official System Administrator Appreciation Day website Unofficial observances Awareness days Internet culture July observances System administration Recurring events established in 2000 Holidays and observances by scheduling (nth weekday of the month)
System Administrator Appreciation Day
Technology
592
42,069,938
https://en.wikipedia.org/wiki/Eckhaus%20equation
In mathematical physics, the Eckhaus equation – or the Kundu–Eckhaus equation – is a nonlinear partial differential equation within the nonlinear Schrödinger class: i ψ_t + ψ_xx + 2 (|ψ|²)_x ψ + |ψ|⁴ ψ = 0. The equation was independently introduced by Wiktor Eckhaus and by Anjan Kundu to model the propagation of waves in dispersive media. Linearization The Eckhaus equation can be linearized to the linear Schrödinger equation i φ_t + φ_xx = 0 through the non-linear transformation φ(x, t) = ψ(x, t) exp( ∫_{−∞}^{x} |ψ(x′, t)|² dx′ ). The inverse transformation is ψ(x, t) = φ(x, t) ( 1 + 2 ∫_{−∞}^{x} |φ(x′, t)|² dx′ )^(−1/2). This linearization also implies that the Eckhaus equation is integrable. Notes References . Published in part in: Nonlinear partial differential equations Schrödinger equation
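The forward and inverse transformations are mutually inverse, and this can be checked numerically. The sketch below is my own illustration, assuming the standard form of the linearization, φ = ψ·exp(∫_{−∞}^x |ψ|² dx′) with inverse ψ = φ·(1 + 2∫_{−∞}^x |φ|² dx′)^(−1/2); it uses a crude left-Riemann cumulative sum, so only approximate agreement is expected:

```python
import numpy as np

# Grid and an arbitrary smooth, rapidly decaying complex profile psi(x)
x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]
psi = (1.0 + 0.5j) * np.exp(-x**2)

# Forward transformation: phi(x) = psi(x) * exp( int_{-inf}^{x} |psi|^2 dx' )
cum_psi = np.cumsum(np.abs(psi) ** 2) * dx
phi = psi * np.exp(cum_psi)

# Inverse transformation: psi(x) = phi(x) * (1 + 2 * int_{-inf}^{x} |phi|^2 dx')^(-1/2)
cum_phi = np.cumsum(np.abs(phi) ** 2) * dx
psi_back = phi / np.sqrt(1.0 + 2.0 * cum_phi)

err = np.max(np.abs(psi_back - psi))
print(f"max round-trip error: {err:.2e}")  # small, limited by the crude quadrature
```

The cancellation works because 1 + 2∫|φ|² telescopes to exp(2∫|ψ|²), so the square root exactly undoes the exponential factor; the residual error here is purely discretization.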
Eckhaus equation
Physics
132
5,862,501
https://en.wikipedia.org/wiki/James%20Robert%20Brown
James Robert Brown (born 1949) is a Canadian philosopher of science. He is an emeritus professor of philosophy at the University of Toronto. In the philosophy of mathematics he has advocated mathematical Platonism and visual reasoning, and in the philosophy of science he has defended scientific realism, mostly against anti-realist views associated with social constructivism. He has also argued for the socialization of medical research (especially pharmaceutical research). He is largely known for his work on thought experiments. He was elected to the German National Academy of Sciences Leopoldina (Deutsche Akademie der Naturforscher Leopoldina – Nationale Akademie der Wissenschaften) in 2004, the Royal Society of Canada in 2007, and the Académie Internationale de Philosophie des Sciences in 2010. Brown was born in Montreal, Quebec. He is married to the philosopher Kathleen Okruhlik. Books 1989 The Rational and the Social (Routledge 1989) 1991 The Laboratory of the Mind: Thought Experiments in the Natural Sciences (Routledge 1991, second edition 2010) 1994 Smoke and Mirrors: How Science Reflects Reality (Routledge 1994) 1999 Philosophy of Mathematics: An Introduction to the World of Proofs and Pictures (Routledge 1999, second edition 2008) 2001 Who Rules in Science? An Opinionated Guide to the Wars (Harvard 2001) 2012 Platonism, Naturalism, and Mathematical Knowledge (Routledge 2012) 2017 On Foundations of Seismology: Bringing Idealizations Down to Earth (with M. Slawinski) Books edited include: 2012 Thought Experiments in Philosophy, Science, and the Arts (ed. with M. Frappier and L. Meynell) (Routledge 2012) 2018 The Routledge Companion to Thought Experiments (ed. with M. Stuart and Y.
Fehige), (Routledge 2018) References External links James Brown’s Homepage "Plato's Heaven: A User's Guide - A conversation with James Robert Brown" , Ideas Roadshow, 2013 20th-century Canadian philosophers Academics from Montreal Fellows of the Royal Society of Canada Living people Members of the German National Academy of Sciences Leopoldina Moral realists Philosophers of mathematics Canadian philosophers of science University of Guelph alumni Academic staff of the University of Toronto 1949 births
James Robert Brown
Mathematics
434
52,433,772
https://en.wikipedia.org/wiki/Ten%20Little%20Fingers%20and%20Ten%20Little%20Toes
Ten Little Fingers and Ten Little Toes is a 2008 children's picture book by Mem Fox and Helen Oxenbury. It is about babies, who, although they are from around the world, all share the common trait of having the same number of digits. Reception Ten Little Fingers has been commended for its positive treatment of racial diversity. A review by The New York Times stated that "two beloved picture-book creators — the storyteller Mem Fox and the artist Helen Oxenbury — merge their talents in a winsome look at babies around the world". Booklist called it "a standout for its beautiful simplicity" and "a gentle, joyous offering". School Library Journal described it as a "nearly perfect picture book" and concluded: "Whether shared one-on-one or in storytimes, where the large trim size and big, clear images will carry perfectly, this selection is sure to be a hit". Publishers Weekly, in a starred review, wrote: "Put two titans of kids' books together for the first time, and what do you get (besides the urge to shout, "What took you so long?")? The answer: an instant classic". New York Journal of Books, in a review of a bilingual edition, wrote: "This is a sturdy, toddler-sized board book that has something for everybody. Ms. Fox's text, soft and pure, offers sweet innocence, the joy of lives beginning, and the unique beauty of the mother-child love. Artist Helen Oxenbury's exquisite illustrations are the perfect complement to the text". The Horn Book Magazine referred to it as a "love song": "Snuggle up with your favorite baby and kiss those fingers and toes to both your hearts' content". BookPage Reviews called it "a jewel of a picture book" and wrote: "With minimal text, and sweet illustrations by beloved British artist Helen Oxenbury, it's truly an international treat... Ten Little Fingers and Ten Little Toes gently presents—but never preaches—a satisfying lesson about humanity and international harmony".
Ten Little Fingers has also been reviewed by the Journal of Children's Literature, The Christian Century, First Opinions -- Second Reactions, and YC: Young Children. It won the 2009 Australian Book Industry Book of the Year for Younger Children Award. References External links Library holdings of Ten Little Fingers 2008 children's books 2008 poems 2008 poetry books Australian children's books Australian poems Australian poetry books Children's books about race and ethnicity Children's poetry books Picture books by Mem Fox Finger-counting
Ten Little Fingers and Ten Little Toes
Mathematics
536
66,828,251
https://en.wikipedia.org/wiki/HD%2077887
HD 77887 (HR 3610) is a solitary star located in the southern circumpolar constellation Volans. It has an apparent magnitude of 5.87, making it faintly visible to the naked eye if viewed under ideal conditions. The star is situated at a distance of about 760 light-years but is receding with a heliocentric radial velocity of . HD 77887 is an ageing M-type giant that is currently on the asymptotic giant branch. At present it has 1.12 times the mass of the Sun but has expanded to 56.73 times the Sun's girth. It shines from its enlarged photosphere at a cool effective temperature, which gives it a red glow. HD 77887 is suspected to be a slow irregular variable whose brightness fluctuates by a tenth of a magnitude. Koen and Eyer examined the Hipparcos data for the star and found that it varied periodically, with an amplitude of 0.012 magnitudes and a period of 4.4649 days. References M-type giants 077887 044283 Durchmusterung objects Suspected variables 3610 Volans Asymptotic-giant-branch stars
HD 77887
Astronomy
254
4,050,532
https://en.wikipedia.org/wiki/Pi-system
In mathematics, a π-system (or pi-system) on a set Ω is a collection P of certain subsets of Ω, such that P is non-empty. If A, B ∈ P then A ∩ B ∈ P. That is, P is a non-empty family of subsets of Ω that is closed under non-empty finite intersections. The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the σ-algebra generated by that π-system. Moreover, if other properties, such as equality of integrals, hold for the π-system, then they hold for the generated σ-algebra as well. This is the case whenever the collection of subsets for which the property holds is a λ-system. π-systems are also useful for checking independence of random variables. This is desirable because in practice, π-systems are often simpler to work with than σ-algebras. For example, it may be awkward to work with σ-algebras generated by infinitely many sets E₁, E₂, …. So instead we may examine the union of all σ-algebras generated by finitely many of these sets. This forms a π-system that generates the desired σ-algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a π-system that generates the very important Borel σ-algebra of subsets of the real line. Definitions A π-system is a non-empty collection P of sets that is closed under non-empty finite intersections, which is equivalent to P containing the intersection of any two of its elements. If every set in this π-system is a subset of Ω then it is called a π-system on Ω. For any non-empty family Σ of subsets of Ω, there exists a π-system, called the π-system generated by Σ, that is the unique smallest π-system of Ω containing every element of Σ. It is equal to the intersection of all π-systems containing Σ, and can be explicitly described as the set of all possible non-empty finite intersections of elements of Σ. A non-empty family of sets has the finite intersection property if and only if the π-system it generates does not contain the empty set as an element.
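For a finite family of sets, the generated π-system (all intersections of non-empty finite subfamilies) can be computed by closing the family under pairwise intersection until no new sets appear. A minimal sketch of this idea (the function name is my own):

```python
from itertools import combinations

def generated_pi_system(family):
    """Close a finite family of sets under pairwise intersection.

    Repeatedly adding pairwise intersections until a fixed point is
    reached yields exactly the set of all intersections of non-empty
    finite subfamilies, i.e. the pi-system generated by the family.
    """
    current = {frozenset(s) for s in family}
    while True:
        new = {a & b for a, b in combinations(current, 2)} - current
        if not new:
            return current
        current |= new

fam = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
pi = generated_pi_system(fam)
print(sorted(map(sorted, pi)))
```

Here every member of the family contains 3, so the family has the finite intersection property and, consistent with the remark above, the generated π-system does not contain the empty set.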
Examples For any real numbers a and b, the intervals (−∞, a] form a π-system, and the intervals (a, b] form a π-system if the empty set is also included. The topology (collection of open subsets) of any topological space is a π-system. Every filter is a π-system. Every π-system that doesn't contain the empty set is a prefilter (also known as a filter base). For any measurable function f : Ω → ℝ, the set I_f = { f⁻¹((−∞, x]) : x ∈ ℝ } defines a π-system, and is called the π-system generated by f. (Alternatively, { f⁻¹((a, b]) : a, b ∈ ℝ, a < b } ∪ {∅} defines a π-system generated by f.) If P₁ and P₂ are π-systems for Ω₁ and Ω₂, respectively, then { A₁ × A₂ : A₁ ∈ P₁, A₂ ∈ P₂ } is a π-system for the Cartesian product Ω₁ × Ω₂. Every σ-algebra is a π-system. Relationship to λ-systems A λ-system on Ω is a set D of subsets of Ω, satisfying Ω ∈ D; if A ∈ D then Ω ∖ A ∈ D; and if A₁, A₂, A₃, … is a sequence of (pairwise) disjoint subsets in D then ⋃ₙ Aₙ ∈ D. Whilst it is true that any σ-algebra satisfies the properties of being both a π-system and a λ-system, it is not true that any π-system is a λ-system, and moreover it is not true that any π-system is a σ-algebra. However, a useful classification is that any set system which is both a λ-system and a π-system is a σ-algebra. This is used as a step in proving the π-λ theorem. The π-λ theorem Let D be a λ-system, and let I ⊆ D be a π-system contained in D. The π-λ theorem states that the σ-algebra σ(I) generated by I is contained in D: σ(I) ⊆ D. The π-λ theorem can be used to prove many elementary measure theoretic results. For instance, it is used in proving the uniqueness claim of the Carathéodory extension theorem for σ-finite measures. The π-λ theorem is closely related to the monotone class theorem, which provides a similar relationship between monotone classes and algebras, and can be used to derive many of the same results. Since π-systems are simpler classes than algebras, it can be easier to identify the sets that are in them while, on the other hand, checking whether the property under consideration determines a λ-system is often relatively easy. Despite the difference between the two theorems, the π-λ theorem is sometimes referred to as the monotone class theorem.
Example Let μ₁ and μ₂ be two measures on the σ-algebra Σ, and suppose that Σ = σ(I) is generated by a π-system I. If μ₁(A) = μ₂(A) for all A ∈ I, and μ₁(Ω) = μ₂(Ω) < ∞, then μ₁ = μ₂. This is the uniqueness statement of the Carathéodory extension theorem for finite measures. If this result does not seem very remarkable, consider the fact that it usually is very difficult or even impossible to fully describe every set in the σ-algebra, and so the problem of equating measures would be completely hopeless without such a tool. Idea of the proof Define the collection of sets D = { A ∈ σ(I) : μ₁(A) = μ₂(A) }. By the first assumption, μ₁ and μ₂ agree on I and thus I ⊆ D. By the second assumption, Ω ∈ D, and it can further be shown that D is a λ-system. It follows from the π-λ theorem that σ(I) ⊆ D ⊆ σ(I), and so D = σ(I). That is to say, the measures agree on σ(I). π-Systems in probability π-systems are more commonly used in the study of probability theory than in the general field of measure theory. This is primarily due to probabilistic notions such as independence, though it may also be a consequence of the fact that the π-λ theorem was proven by the probabilist Eugene Dynkin. Standard measure theory texts typically prove the same results via monotone classes, rather than π-systems. Equality in distribution The π-λ theorem motivates the common definition of the probability distribution of a random variable X in terms of its cumulative distribution function. Recall that the cumulative distribution of a random variable is defined as F_X(a) = P[X ≤ a], a ∈ ℝ, whereas the seemingly more general law of the variable is the probability measure L_X(B) = P[X⁻¹(B)], B ∈ B(ℝ), where B(ℝ) is the Borel σ-algebra. The random variables X and Y (on two possibly different probability spaces) are equal in distribution (or law), denoted by X ≐ Y, if they have the same cumulative distribution functions; that is, if F_X = F_Y. The motivation for the definition stems from the observation that if F_X = F_Y, then that is exactly to say that L_X and L_Y agree on the π-system { (−∞, a] : a ∈ ℝ } which generates B(ℝ), and so by the example above: L_X = L_Y. A similar result holds for the joint distribution of a random vector.
For example, suppose X and Y are two random variables defined on the same probability space, with respectively generated π-systems I_X and I_Y. The joint cumulative distribution function of (X, Y) is F_{X,Y}(a, b) = P[X ≤ a, Y ≤ b]. However, A = X⁻¹((−∞, a]) ∈ I_X and B = Y⁻¹((−∞, b]) ∈ I_Y. Because I_{X,Y} = { A ∩ B : A ∈ I_X, B ∈ I_Y } is a π-system generated by the random pair (X, Y), the π-λ theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of (X, Y). In other words, (X, Y) and (W, Z) have the same distribution if and only if they have the same joint cumulative distribution function. In the theory of stochastic processes, two processes are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, if (X_{t₁}, …, X_{tₙ}) ≐ (Y_{t₁}, …, Y_{tₙ}) for all choices of times t₁, …, tₙ. The proof of this is another application of the π-λ theorem. Independent random variables The theory of π-systems plays an important role in the probabilistic notion of independence. If X and Y are two random variables defined on the same probability space, then the random variables are independent if and only if their π-systems I_X and I_Y satisfy P[A ∩ B] = P[A] P[B] for all A ∈ I_X and B ∈ I_Y, which is to say that σ(X) and σ(Y) are independent. This actually is a special case of the use of π-systems for determining the distribution of (X, Y). Example Let Z = (Z₁, Z₂), where Z₁ and Z₂ are iid standard normal random variables. Define the radius and argument (arctan) variables R = √(Z₁² + Z₂²) and Θ = arctan(Z₂/Z₁). Then R and Θ are independent random variables. To prove this, it is sufficient to show that the π-systems I_R and I_Θ are independent: that is, P[R ≤ ρ, Θ ≤ θ] = P[R ≤ ρ] P[Θ ≤ θ] for all ρ ∈ [0, ∞) and θ ∈ [−π/2, π/2]. Confirming that this is the case is an exercise in changing variables. Fix ρ and θ, and then the probability P[R ≤ ρ, Θ ≤ θ] can be expressed as an integral of the probability density function of (Z₁, Z₂). See also Notes Citations References Measure theory Families of sets
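The independence of the radius and argument variables in the example can be illustrated (not proved) by Monte Carlo: sample iid standard normals and compare the joint probability of two π-system generator sets with the product of the marginals. A sketch with arbitrarily chosen thresholds:

```python
import numpy as np

rng = np.random.default_rng(12345)
n = 500_000
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)

r = np.hypot(z1, z2)        # radius R = sqrt(Z1^2 + Z2^2)
theta = np.arctan(z2 / z1)  # argument in (-pi/2, pi/2)

# Compare P[R <= rho, Theta <= th] with P[R <= rho] * P[Theta <= th]
for rho, th in [(1.0, 0.3), (1.5, -0.5), (0.7, 1.0)]:
    joint = np.mean((r <= rho) & (theta <= th))
    prod = np.mean(r <= rho) * np.mean(theta <= th)
    print(f"rho={rho}, theta={th}: joint={joint:.4f}, product={prod:.4f}")
```

Since the generator sets {R ≤ ρ} and {Θ ≤ θ} run over the two π-systems, agreement of joint and product probabilities on such pairs is exactly the condition the π-λ theorem then extends to the full generated σ-algebras.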
Pi-system
Mathematics
1,500
23,559,879
https://en.wikipedia.org/wiki/Pneumatic%20non-return%20valve
Pneumatic non-return valves are used where a normal non-return valve would be ineffective: for example, where there is a risk of flood water entering a site but an equal risk of pollution or a chemical spill leaving the site and polluting the environment. Pneumatic non-return valves are installed below ground and can be used to pneumatically lock the non-return valve closed, thus containing a site in the event of a spill. It is common practice to lock sites using pneumatic non-return valves during the loading or transferring of chemicals or hazardous waste. Pneumatic non-return valves have a longer service life when compared to pneumatic bladder systems. References Valves Environmental protection
Pneumatic non-return valve
Physics,Chemistry,Engineering
146
47,962,780
https://en.wikipedia.org/wiki/Television%20guidance
Television guidance (TGM) is a type of missile guidance system using a television camera in the missile or glide bomb that sends its signal back to the launch platform. There, a weapons officer or bomb aimer watches the image on a television screen and sends corrections to the missile, typically over a radio control link. Television guidance is not a seeker because it is not automated, although semi-automated systems with autopilots to smooth out the motion are known. They should not be confused with contrast seekers, which also use a television camera but are true automated seeker systems. The concept was first explored by the Germans during World War II as an anti-shipping weapon that would keep the launch aircraft safely out of range of the target's anti-aircraft guns. The best-developed example was the Henschel Hs 293, but the TV-guided versions of this weapon did not see operational use. The US also experimented with similar weapons during the war, notably the GB-4 and Interstate TDR. Only small numbers were used experimentally, with reasonable results. Several systems were used operationally after the war. The British Blue Boar was cancelled after extensive testing. A separate line of development led to TV-guided versions of the Martel missile to fill the anti-shipping role. The US AGM-62 Walleye is a similar system attached to an unpowered bomb, as is the Soviet Kh-29. Television guidance was never widely used, as laser-guided bombs and GPS-guided weapons have generally replaced it. However, it remains useful when certain approaches or additional accuracy are needed. One famous use was the attack on the Sea Island oil terminals during the Gulf War, which required pinpoint accuracy and was carried out with television-guided GBU-15 bombs. History German efforts The first concerted effort to build a television-guided bomb took place in Germany under the direction of Herbert Wagner at the Henschel aircraft company starting in 1940.
This was one of several efforts to produce usable guidance systems for the ongoing Hs 293 glide bomb project. The Hs 293 had originally been designed as a purely MCLOS system in which flares on the tail of the bomb were observed by the bomb aimer and the Kehl-Strassburg radio command set sent commands to the bomb to align it with the target. The disadvantage of this approach was that the aircraft had to fly in such a way as to allow the bomb aimer to view the bomb and target throughout the attack, which, given the cramped conditions of WWII bombers, significantly limited the directions the aircraft could fly. Any weather, smoke screens or even the problem of viewing the target at long range made the attack difficult. Placing a television camera in the nose of the bomb appeared to offer tremendous advantages. For one, the aircraft was free to fly any escape course it pleased, as the bomb aimer could watch the entire approach on an in-cockpit television and no longer had to look outside the aircraft. It also allowed the bomb aimer to be located anywhere in the aircraft. Additionally, the bomb could be launched through clouds or smoke screens and then pick up the target when it passed through them. More importantly, as the bomb approached the target, the image grew on the television screen, providing increased accuracy and allowing the bomb aimer to pick vulnerable locations on the target to attack. At the time, television technology was in its infancy, and the size and fragility of both the cameras and receivers were unsuitable for weapon use. German Post Office technicians aiding the Fernseh company began the development of hardened miniaturized cameras and cathode ray tubes, originally based on the German pre-war 441-line standard. They found the refresh rate of 25 frames per second was too low, so instead of using two frames updating 25 times a second, they updated a single frame 50 times a second and displayed roughly half the resolution.
In the case of anti-ship use, the key requirement was to resolve the line between the ship and the water, and with 224 lines this became difficult. This was solved by turning the tube sideways, so it had 220 lines of horizontal resolution and an analog signal of much greater resolution vertically.

In testing carried out by the Deutsche Forschungsanstalt für Segelflug (DFS) starting in 1943, they found one major advantage of the system was that it worked very well with the 2-axis control system on the missile. The Kehl control system used a control stick that started or stopped the motion of the aerodynamic controls on the bomb. Moving the stick to the left, for example, would set the control surfaces moving to begin a left roll, but when the stick was centred it left the surfaces in that position and the roll continued to increase. Not being able to see the control surfaces after launch, the operators had to wait until they could see the bomb begin to move and then use opposite inputs to stop the motion. This caused them to continually overshoot their corrections. But when viewed through the television screen, the motion was immediately obvious and the operators had no problem making small corrections with ease.

However, they also found that some launches made for very difficult control. During the approach, the operator naturally stopped the control inputs as soon as the camera was lined up with the target. Since the camera was firmly attached to the missile, this happened as soon as enough control was input. Critically, the missile might be pointed in that direction but not actually travelling in it; there was normally some angle of attack in the motion. This would cause the image to once again begin trailing the target, requiring another correction, and so on. If the launch was too far behind the target, the operator eventually ran out of control power as the missile approached, leading to a circular error probable (CEP) of , too far to be useful.
After considering several possibilities to solve this, including a proportional navigation system, they settled on an extremely simple solution. Small wind vanes on the nose of the missile were used to rotate the camera so it was always pointed along the flight path, not along the missile body. Now when the operator maneuvered the missile, he saw where it was ultimately headed, not where it was pointed at that instant. This also helped reduce the motion of the image when sharp control inputs were applied.

Another problem they found was that as the missile approached the target, corrections in the control system produced ever wilder motion on the television display, making last-minute corrections very difficult despite this being the most important part of the approach. This was addressed by training the controllers to make any final corrections before this point, and then hold the stick in whatever position it was in once the image grew to a certain size.

Sources claim that 255 D models were built in total, and one source claims that one hit a Royal Navy ship in combat. However, other sources suggest the system was never used in combat.

US efforts

The US had been introduced to the glide bombing concept by the Royal Air Force just before the US entered the war. "Hap" Arnold had Wright Patterson Air Force Base begin the development of a wide variety of concepts under the GB ("glide bomb") and related VB ("vertical bomb") programs. These were initially of low importance, as both the Army Air Force and US Navy were convinced that the Norden bombsight would offer pinpoint accuracy and eliminate the need for guided bombs. It was not long after the first missions by the 8th Air Force in 1942 that the promise of the Norden was replaced by the reality that accuracy under was essentially a matter of luck. Shortly thereafter, the Navy came under attack by the early German MCLOS weapons in 1943.
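The overshoot problem with the Kehl-style "start/stop" control described above can be sketched in a few lines. This is an illustrative toy model, not the actual Hs 293 dynamics: stick input sets the rate of control-surface motion, and surface deflection in turn drives the roll rate, so centring the stick does not stop the roll.

```python
# Toy model of an integrating ("start/stop") control channel.
dt = 0.1
deflection = 0.0   # control-surface position (arbitrary units)
roll = 0.0         # bomb roll angle (arbitrary units)

def step(stick):
    global deflection, roll
    deflection += stick * dt   # stick moves the surfaces...
    roll += deflection * dt    # ...and surface deflection drives the roll rate

for _ in range(10):            # operator pushes left for 1 s...
    step(-1.0)
left_roll = roll
for _ in range(10):            # ...then centres the stick for 1 s
    step(0.0)

print(roll < left_roll)        # True: the roll kept growing after centring
```

This is why operators had to apply an opposite input to stop a manoeuvre, and why they habitually overshot until the television picture gave them immediate feedback.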
Both services began programs to put guided weapons into service as soon as possible, and a number of these projects selected TV guidance. RCA, then a world leader in television technology, had been experimenting with military television systems for some time at this point. As part of this, they had developed a miniaturized iconoscope, the model 1846, suitable for use in aircraft. In 1941 these were experimentally used to fly drone aircraft, and in April 1942 one of these was flown into a ship about away.

The US Army Air Force ordered a version of their GB-1 glide bomb to be equipped with this system, which became the GB-4. It was similar to the Hs 293D in almost every way. The Army's Signal Corps used the 1846 with their own transmitter and receiver system to produce an interlaced video display with 650 lines of resolution at 20 frames a second (40 fields a second). A film recorder was developed to allow post-launch critique. Two B-17s were fitted with the receivers, and the first five test drops were carried out in July 1943 at Eglin Field in Florida. Further testing was carried out at the Tonopah Test Range and was increasingly successful.

By 1944 the system was considered developed enough to attempt combat testing, and the two launch aircraft and a small number of GB-4 bombs were sent to England in June. These launches did not go well, with the cameras generally not working at all, failing just after launch, or offering intermittent reception that generally resulted in the images becoming visible only after the bomb had passed its target. After a series of failed launches, the team returned home, having lost one of the launch aircraft in a landing accident. Attempts to use the system to produce an air-to-air missile using command guidance failed due to issues with closing speed and reaction time.

By the end of the war, advances in tube miniaturization, especially as part of the development of the proximity fuse, allowed the iconoscope to be greatly reduced in size.
However, RCA's continued research by this time had led to the development of the greatly improved image orthicon, and the company began Project MIMO, short for "Miniature Image Orthicon". The result was a dramatically smaller system that easily fit in the nose of a bomb. The Army's Air Technical Services Command used this in their VB-10 "Roc II" guided bomb project, a large vertically dropped bomb. Roc development began in early 1945 and was being readied for testing at Wendover Field when the war ended. Development continued after the war, and it was in the inventory for a time in the post-war period.

Blue Boar and Green Cheese

In the immediate post-war era, the Royal Navy developed a requirement for a guided bomb for the anti-shipping role. This emerged as "Blue Boar", a randomly assigned rainbow code name. The system was designed to glide at an angle of about 40 degrees and could be manoeuvred throughout the approach, allowing it to be directed onto a target within six seconds of breaking through cloud cover at . An even larger "Special Blue Boar" was developed with a payload, intended to deliver nuclear warheads from the V-bombers at ranges as great as when dropped from altitude. Ordered in 1951, development using an EMI television camera went smoothly and live testing began in 1953. Although successful, the program was cancelled in 1954 as the naval version grew too heavy to be carried by the Navy's new strike aircraft, while the V-bombers were slated to receive the much higher performance Blue Steel.

The anti-shipping role remained unfilled and led to a second project, "Green Cheese". This was largely identical to Blue Boar with the addition of several solid fuel rockets to allow it to be launched from low altitude and fly to the target without exposing the launch aircraft to fire, while also replacing the television camera with a small radar. This too proved too heavy for its intended aircraft, the Fairey Gannet, and was cancelled in 1956.
Martel

In the early 1960s, Matra and Hawker Siddeley Dynamics began to collaborate on a long-range high-power anti-radar missile known as Martel. The idea behind Martel was to allow an aircraft to attack Warsaw Pact surface-to-air missile sites while well outside their range, and it carried a warhead large enough to destroy the radar even in the case of a near miss. In comparison to the US AGM-45 Shrike, Martel was far longer ranged, up to compared to for the early Shrike, and mounted a warhead instead of .

Shortly thereafter, the Royal Navy began to grow concerned about the improving air defense capabilities of Soviet ships. The Blackburn Buccaneer had been designed specifically to counter these ships by flying at very low altitudes and dropping bombs from long distances and high speeds. This approach kept the aircraft under the ship's radar until the last few minutes of the approach, but by the mid-1960s it was felt even this brief period would open the aircraft to attack. A new weapon was desired that would keep the aircraft even further from the ships, ideally never rising above the radar horizon. This meant that the missile would have to be fired blind, while the aircraft's own radar was unable to see the target. At the time there was no indigenous active radar seeker available, so the decision was made to use television guidance and a data link system to send the video to the launch aircraft. The Martel airframe was considered suitable, and a new nose section with the guidance electronics was added to create the AJ.168 version.

Like the earlier German and US weapons, the Martel required the weapon officer to guide the missile visually while the pilot steered the aircraft away from the target. Unlike the earlier weapons, Martel flew its initial course using an autopilot that kept the missile high enough that it could see both the target and the launch aircraft so the data link could operate.
The television signal would not turn on until the missile reached the approximate midpoint of its flight, at which point the weapons officer guided it like the earlier weapons. Although this required the missile to fly high enough to be visible to the ship, its small size made it an elusive target for the radars, and especially the weapons, of that era. Martel was not a sea-skimming missile and instead dove on the target from some altitude.

The first test launch of the AJ.168 took place in February 1970 and a total of 25 were fired by the time testing ended in July 1973, mostly at RAF Aberporth in Wales. Further testing was carried out until October 1975, when it was cleared for service. It was used only briefly by the Royal Navy before they turned the remainder of their Buccaneers over to the RAF. The RAF used both the anti-radar and anti-ship versions on their Buccaneers, with the anti-ship versions being replaced by the Sea Eagle in 1988, while the original AS.37 anti-radar versions remained in use until the Buccaneers were retired in March 1994.

Walleye

US interest in television guidance largely ended in the post-war period. Nevertheless, small-scale development continued, and a team at the Naval Ordnance Test Station (NOTS) developed a way to automatically track light or dark spots on a television image, a concept today known as an optical contrast seeker. Most work focused on MACLOS weapons instead, and led to the development of the AGM-12 Bullpup, which was considered to be so accurate it was referred to as a "silver bullet". Early use of the Bullpup demonstrated that it was too difficult to use and exposed the launch aircraft to anti-aircraft fire, precisely the same problems that led the Germans to begin TV guidance research. In January 1963, NOTS released a contract for a bomb and guidance system that could be used with their contrast tracker.
Despite being a glide bomb, this was confusingly assigned a number as part of the new guided-missile numbering system, becoming the AGM-62 Walleye. As initially envisioned, the system would use the television image only while the missile was still on the aircraft, and would seek automatically once launched. This quickly proved infeasible, as the system would often break lock for a wide variety of reasons. This led to the addition of a data link that sent the image back to the aircraft, allowing guidance throughout the flight. This was not a true television guidance system in the classic sense, as the operator's task was to continue selecting points of high contrast which the seeker would then follow. In practice, however, the updating was almost continuous, and the system acted more like a television guidance system with an autopilot, like the early plans for the Hs 293.

Walleye entered service in 1966 and was quickly used in several precision attacks against bridges and similar targets. These revealed that it did not have enough striking power, and more range was desired. This led to the introduction of an extended range data link (ERDL) and larger wings to extend range from . Walleye II was a much larger version based on a bomb to improve performance against large targets like bridges, and further extended range to as much as . These were widely used in the later portions of the war, and they remained in service through the 1970s and 80s. It was an ERDL-equipped Walleye that was used to destroy the oil pipes feeding Sea Island and help stop the Gulf War oil spill in 1991. Walleye left service in the 1990s, replaced largely by laser-guided weapons.

Kh-59

The Soviet Kh-59 is a long-range land attack missile that turns on its television camera after of travel from the launch aircraft. It has a maximum range of , and is used in a fashion essentially identical to that of the Walleye.
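The optical-contrast-tracking idea introduced with the Walleye can be illustrated with a toy sketch. This is purely illustrative (the real NOTS tracker was analog electronics, and the frame values and 3x3 window here are invented): pick the point in a grayscale frame whose local neighbourhood differs most, bright or dark, from the frame average.

```python
# Minimal sketch of an optical contrast tracker (illustrative only):
# return the pixel whose 3x3 neighbourhood contrasts most with the frame mean.
def track_contrast(frame):
    rows, cols = len(frame), len(frame[0])
    mean = sum(sum(r) for r in frame) / (rows * cols)
    best, best_score = (0, 0), -1.0
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            patch = [frame[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            score = abs(sum(patch) / 9 - mean)  # light OR dark spots both score
            if score > best_score:
                best, best_score = (y, x), score
    return best

# A dark "ship" on a bright "sea":
frame = [[200] * 8 for _ in range(8)]
frame[4][5] = frame[4][6] = frame[5][5] = 20
print(track_contrast(frame))  # locates the dark cluster at (4, 5)
```

The operator's role in Walleye amounted to re-seeding `best` with a chosen aim point whenever the automatic lock drifted.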
https://en.wikipedia.org/wiki/5%CE%B1-Dihydrolevonorgestrel
5α-Dihydrolevonorgestrel (5α-DHLNG) is an active metabolite of the progestin levonorgestrel which is formed by 5α-reductase. It has about one-third of the affinity of levonorgestrel for the progesterone receptor. In contrast to levonorgestrel, the compound has both progestogenic and antiprogestogenic activity, and hence has a selective progesterone receptor modulator-like profile of activity. This is analogous to the case of norethisterone and 5α-dihydronorethisterone.

In addition to the progesterone receptor, 5α-DHLNG interacts with the androgen receptor. It has similar affinity for the androgen receptor relative to levonorgestrel (34.3% of that of metribolone for levonorgestrel and 38.0% of that of metribolone for 5α-DHLNG), and has androgenic effects similarly to levonorgestrel and testosterone.

5α-DHLNG is further transformed into 3α,5α- and 3β,5α-, which bind weakly to the estrogen receptor (0.4 to 2.4% of the of ) and have weak estrogenic activity. These metabolites are considered to be responsible for the weak estrogenic activity of high doses of levonorgestrel.

See also

5α-Dihydronorethisterone
5α-Dihydroethisterone
5α-Dihydronandrolone
5α-Dihydronormethandrone
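The percent-of-reference binding figures quoted above (e.g. 34.3% of metribolone) are relative binding affinities. A minimal sketch of the standard computation from competition-assay IC50 values; the IC50 numbers below are invented for illustration, not taken from the cited studies:

```python
# Relative binding affinity (RBA) as a percentage of a reference ligand,
# computed from IC50 values (lower IC50 = higher affinity).
# The IC50 values here are invented for illustration only.
def rba_percent(ic50_reference, ic50_test):
    return 100.0 * ic50_reference / ic50_test

# A test compound needing ~3x the reference concentration to displace
# the radioligand has roughly one-third the reference's affinity:
print(round(rba_percent(1.0, 3.0), 1))  # 33.3
```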
https://en.wikipedia.org/wiki/Online%20Safety%20Act%202023
The Online Safety Act 2023 (c. 50) is an act of the Parliament of the United Kingdom to regulate online speech and media. It was passed on 26 October 2023 and gives the relevant Secretary of State the power, subject to parliamentary approval, to designate and suppress or record a wide range of speech and media deemed "harmful". The act requires platforms, including end-to-end encrypted messengers, to scan for child pornography, despite warnings from experts that it is not possible to implement such a scanning mechanism without undermining users' privacy.

The act creates a new duty of care for online platforms, requiring them to take action against illegal, or legal but "harmful", content from their users. Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher. It also empowers Ofcom to block access to particular websites. It obliges large social media platforms not to remove, and to preserve access to, journalistic or "democratically important" content such as user comments on political parties and issues.

The bill that became the act was criticised for its proposals to restrain the publication of "lawful but harmful" speech, effectively creating a new form of censorship of otherwise legal speech. As a result, in November 2022, measures that were intended to force big technology platforms to take down "legal but harmful" materials were removed from the bill. Instead, tech platforms are obliged to introduce systems that allow users to better filter out the "harmful" content they do not want to see.

The act grants significant powers to the Secretary of State to direct Ofcom, the media regulator, on the exercise of its functions, including the power to direct Ofcom as to the content of codes of practice. This has raised concerns about government intrusion into the regulation of speech, with unconstrained emergency-like powers that could undermine Ofcom's authority and independence.
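The fine cap described above ("up to £18 million or 10% of annual turnover, whichever is higher") is a simple maximum; a quick sketch, with turnover figures that are made up for illustration:

```python
# The act's cap: the greater of a fixed GBP 18m or 10% of annual turnover.
# (The turnover figures below are invented for illustration.)
def max_fine_gbp(annual_turnover_gbp):
    return max(18_000_000, 0.10 * annual_turnover_gbp)

print(max_fine_gbp(50_000_000))      # smaller platform: the fixed floor applies
print(max_fine_gbp(5_000_000_000))   # large platform: the 10% rule applies
```

For any platform with turnover above £180 million, the 10% limb is the binding one.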
Provisions

Scope

Within the scope of the act is any "user-to-user service". This is defined as an Internet service by means of which content that is generated by a user of the service, or uploaded to or shared on the service by a user of the service, may be read, viewed, heard or otherwise experienced ("encountered") by another user, or other users. Content includes written material or messages, oral communications, photographs, videos, visual images, music and data of any description.

The duty of care applies globally to services with a significant number of United Kingdom users, or which target UK users, or those which are capable of being used in the United Kingdom where there are reasonable grounds to believe that there is a material risk of significant harm. The idea of a duty of care for Internet intermediaries was first proposed in Thompson (2016) and made popular in the UK by the work of Woods and Perrin (2019).

Duties

The duty of care in the act refers to a number of specific duties applying to all services within scope:

The illegal content risk assessment duty
The illegal content duties
The duty about rights to freedom of expression and privacy
The duties about reporting and redress
The record-keeping and review duties

For services "likely to be accessed by children", adopting the same scope as the Age Appropriate Design Code, two additional duties are imposed:

The children's risk assessment duties
The duties to protect children's online safety

For category 1 services, which will be defined in secondary legislation but are limited to the largest global platforms, there are four further new duties:

The adults' risk assessment duties
The duties to protect adults' online safety
The duties to protect content of democratic importance
The duties to protect journalistic content

Enforcement

The act empowers Ofcom, the national communications regulator, to block access to particular user-to-user services or search engines from the United Kingdom, including through
interventions by internet access providers and app stores. The regulator is also able to impose, through "service restriction orders", requirements on ancillary services which facilitate the provision of the regulated services. The act lists in section 92 as examples (i) services which enable funds to be transferred, (ii) search engines which generate search results displaying or promoting content and (iii) services which facilitate the display of advertising on a regulated service (for example, an ad server or an ad network). Ofcom must apply to a court for both Access Restriction and Service Restriction Orders.

Section 44 of the act also gives the Secretary of State the power to direct Ofcom to modify a draft code of practice for online safety if deemed necessary for reasons of public policy, national security or public safety. Ofcom must comply with the direction and submit a revised draft to the Secretary of State. The Secretary of State may give Ofcom further directions to modify the draft, and once satisfied, must lay the modified draft before Parliament. Additionally, the Secretary of State can remove or obscure information before laying the review statement before Parliament.

Limitations

The act has provisions to impose legal requirements ensuring that content removals do not arbitrarily remove or infringe access to what it defines as journalistic content. Large social networks are required to protect "democratically important" content, such as user-submitted posts supporting or opposing particular political parties or policies. The government stated that news publishers' own websites, as well as reader comments on such websites, are not within the intended scope of the law.

Age verification for online pornography

Section 212 of the act repeals part 3 of the Digital Economy Act 2017, which demanded mandatory age verification to access online pornography but was subsequently not enforced by the government.
The act includes within scope any pornographic site which has functionality to allow for user-to-user services, but those which do not have this functionality, or choose to remove it, would not be in scope based on the draft published by the government. Addressing the House of Commons DCMS Select Committee, the Secretary of State, Oliver Dowden, confirmed he would be happy to consider a proposal during pre-legislative scrutiny of the bill by a joint committee of both Houses of Parliament to extend its scope to all commercial pornographic websites.

According to the government, the act addresses the major concern expressed by campaigners such as the Open Rights Group about the risk to user privacy with the Digital Economy Act 2017's requirement for age verification by creating, on services within scope of the legislation, "A duty to have regard to the importance of... protecting users from unwarranted infringements of privacy, when deciding on, and implementing, safety policies and procedures." In February 2022 the Digital Economy Minister, Chris Philp, announced that the bill (as it then was) would be amended to bring commercial pornographic websites within its scope.

Other provisions

The act adds two new offences to the Sexual Offences Act 2003: sending images of a person's genitals (cyberflashing), and sharing or threatening to share intimate images.

Legislative process and timetable

The draft bill was given pre-legislative scrutiny by a joint committee of Members of the House of Commons and peers from the House of Lords. The Opposition Spokesperson in the House of Lords, Lord Ponsonby of Shulbrede, said, "My understanding is that we now have a timeline for the online harms Bill, with pre-legislative scrutiny expected immediately after the Queen’s Speech—before the Summer Recess—and that Second Reading would be expected after the Summer Recess." But the Minister replying refused to pre-empt the Queen's Speech by confirming this.
In early February 2022, ministers planned to add to their existing proposal several criminal offences against those who send death threats online or deliberately share dangerous disinformation about fake cures for COVID-19. Other new offences, such as revenge porn, posts advertising people-smuggling, and messages encouraging people to commit suicide, would fall under the responsibilities of online platforms like Facebook and Twitter to tackle.

In September 2023, during the third reading in the Lords, Lord Parkinson presented a ministerial statement from the government claiming the controversial powers allowing Ofcom to break end-to-end encryption would not be used immediately. Despite the government's claim that the powers will not be used, the provisions pertaining to the weakening of end-to-end encryption were not removed from the act, and Ofcom can at any time issue notices requiring the breaking of end-to-end encryption technology. This followed statements from several tech firms, including Signal, suggesting they would withdraw from the UK market rather than weaken their encryption.

Support

The UK National Crime Agency, part of the Home Office, has said the act is necessary to protect children. The NSPCC has been a prominent supporter of the act, saying it will help protect children from abuse. The Samaritans, which had made strengthening the act one of its key campaigns "to ensure no one is left unprotected from harmful content under the new law", gave the final act its qualified support, while also saying the act fell short of the promise to make the UK the safest place to be online.

Opposition

The international human rights organization Article 19 stated that they saw the Online Safety Act 2023 as a potential threat to human rights, describing it as an "extremely complex and incoherent piece of legislation". The Open Rights Group described the Online Safety Bill (OSB) as a "censor's charter".
During an interview for the BBC, Rebecca MacKinnon, the vice president for global advocacy at the Wikimedia Foundation, criticised the OSB, saying the threat of "harsh" new criminal penalties for tech bosses would affect "not only big corporations, but also public interest websites, such as Wikipedia". In the same interview, MacKinnon argued the act should have been based on the European Union's Digital Services Act, which reportedly distinguishes between centralised content moderation and community-based moderation. In April 2023, both MacKinnon and the chief executive of Wikimedia UK, Lucy Crompton-Reid, announced that the WMF did not intend to apply the age-check requirements of the act to Wikipedia users, stating that doing so would violate their commitment to collect minimal data about readers and contributors. On 29 June of the same year, WMUK and the WMF officially published an open letter asking the government and Parliament to exempt "public interest projects", including Wikipedia itself, from the OSB before it entered its report stage, starting on 6 July.

Apple Inc. criticised legal powers in the OSB which threatened end-to-end encryption on messaging platforms in an official statement, describing the act as "a serious threat" to end-to-end encryption, and urging the UK government to "amend the Bill to protect strong end-to-end encryption". Meta Platforms has criticised the plan, saying, "We don't think people want us reading their private messages ... The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters and criminals".
Head of WhatsApp Will Cathcart voiced his opposition to the OSB, stating that the service would not compromise its encryption for the proposed law: "The reality is, our users all around the world want security – ninety-eight percent of our users are outside the UK, they do not want us to lower the security of the product and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those ninety-eight percent of users." He also stated in a tweet that scanning everyone's messages would destroy privacy. Ciaran Martin, a former head of the UK National Cyber Security Centre, accused the government of "magical thinking" and said that scanning for child abuse content would necessarily require weakening the privacy of encrypted messages.

In February 2024, the European Court of Human Rights ruled, in an unrelated case, that requiring degraded end-to-end encryption "cannot be regarded as necessary in a democratic society" and was incompatible with Article 8 of the European Convention on Human Rights. This decision may potentially form part of the basis of legal challenges to the Online Safety Act 2023.

See also

Children's Code
Proposed UK Internet age verification system
Web blocking in the United Kingdom

External links

Draft Online Safety Bill
Joint Committee on the Draft Online Safety Bill
Final Act
https://en.wikipedia.org/wiki/Zeba%20Islam%20Seraj
Zeba Islam Seraj is a Bangladeshi scientist known for her research in developing salt-tolerant rice varieties suitable for growth in the coastal areas of Bangladesh. She is currently a professor at the Department of Biochemistry and Molecular Biology, University of Dhaka.

Academic career

Seraj studied at the University of Dhaka, Bangladesh, obtaining a B.Sc. in 1980. She completed her M.Sc. from the same university in 1982. She obtained her PhD in biochemistry from the University of Glasgow in 1986 and went to the University of Liverpool for post-doctoral work in the following year. After completing her post-doc, she joined the Department of Biochemistry and Molecular Biology, University of Dhaka in 1988. She became an associate professor in 1991 and a professor in 1997 at the same university. She has been supervising plant biotechnology projects funded by foreign and local grants as a principal investigator since 1991. She has been a visiting researcher with UT Austin since 2013.

Research activities

Seraj has established a well-equipped plant biotechnology laboratory at the University of Dhaka. She has been a co-principal investigator in several projects, such as the Generation Challenge Program (GCP), an initiative to use molecular biology to help boost agricultural production. Seraj has not only worked on fine mapping of the major QTLs for salinity tolerance in Pokkali, but has also characterized traditional rice landraces with the aim of finding genetic loci responsible for salt tolerance and applying markers linked to these loci to aid breeding programs for incorporation of salinity tolerance in rice. She also works on developing genetically modified rice varieties with improved salt tolerance suitable for growing in the coastal region of Bangladesh.
She was the recipient of the PEER award (a joint USAID-NSF initiative) for using next generation sequencing technologies to find the basis of salt tolerance of a rice landrace endemic to the Bangladesh coast, with the University of Texas at Austin serving as the host for collaborative work. Seraj has been a visiting scientist in PBGB, IRRI (constructs for salinity tolerance with Dr. John Bennett, January-March 1998), the PBGB & CSWS Division, IRRI (IRRI-PETRRA Bangladesh project on development of MV rice for the coastal wetlands of Bangladesh, June 11–29, 2002 and June 16–20, 2003), the USDA research station at Beaumont, Texas, USA (August 4–16, 2003) and at the Department of Molecular, Cell and Developmental Biology, University of Texas, Austin, USA as a Norman Borlaug Fellow (August 15-December 15, 2005). She has held visiting researcher status at the University of Texas at Austin (October 2014-September 2020). She was awarded the Annanya Award, 2017 for her scientific research. She was invited to give a TEDx talk on how to save crops from sea level rise and salinity (January 16, 2018). She was featured on NHK TV, Japan, in a talk on Science for Sustainable Earth in 2019.

Personal life

Zeba was married to Toufiq M Seraj, a Bangladeshi businessman who was the founder and managing director of Sheltech. They have two daughters.

Awards

Anannya Top Ten Awards (2016)

External links

Zeba Islam Seraj's lab website
http://sites.nationalacademies.org/PGA/PEER/PEERscience/PGA_084034
https://en.wikipedia.org/wiki/Dickkopf
Dickkopf (DKK) is a family of proteins consisting of five members as of 2020. That is, vertebrates usually contain five genes that are members of the family. The most well-studied is Dickkopf-related protein 1 (DKK1). DKK proteins inhibit the Wnt signaling pathway coreceptors LRP5 and LRP6. They bind with high affinity as ligands to KREMEN1 and KREMEN2, which are transmembrane proteins. DKK proteins have important roles in the development of vertebrates.

Etymology

Dickkopf is a German word meaning "stubborn person", or literally, "thick head". It was coined as the name for these proteins in a 1998 Nature paper by Glinka et al. in reference to the discovery that DKK1 induces head formation in the embryogenesis of Xenopus.

Structure

DKK proteins are glycoproteins consisting of 255–350 amino acids. DKK1, DKK2, and DKK4 have similar molecular weights, at 24–29 kDa (kilodaltons). DKK3 is heaviest, at 38 kDa. In addition to having similar weights, DKK1, -2, and -4 have high structural similarity, with two shared cysteine-rich domains. DKK3 differs from -1, -2, and -4 by the presence of a Soggy domain at its N-terminus.

Proteins

Four DKK proteins and one DKK-like protein occur in humans and other vertebrates, with five proteins in the family in total:

DKK1
DKK2
DKK3
DKK4
DKKL1 (soggy-1, Cancer/testis antigen 34)

Human disease

DKK proteins are believed to be involved with several human diseases, including bone cancer and neurodegenerative disease. Evidence also indicates DKK1 and DKK3 are involved in the pathophysiology of the artery, where they could contribute to atherosclerosis.
Dickkopf
Biology
425
23,646,085
https://en.wikipedia.org/wiki/Astronomical%20rings
Astronomical rings (Latin: annuli astronomici), also known as Gemma's rings, are an early astronomical instrument. The instrument consists of three rings, representing the celestial equator, declination, and the meridian. It can be used as a sundial to tell time, if the approximate latitude and season are known, or to tell latitude, if the time is known or observed (at solar noon). It may be considered a simplified, portable armillary sphere, or a more complex form of astrolabe. History Parts of the instrument go back to instruments made and used by ancient Greek astronomers. Gemma Frisius combined several of these instruments into a small, portable, astronomical-ring instrument. He first published the design in 1534, and again in Petrus Apianus's Cosmographia in 1539. These ring instruments combined terrestrial and celestial calculations. Types Fixed astronomical rings Fixed astronomical rings are mounted on a plinth, like armillary spheres, and can be used as sundials. Traveller's sundial or universal equinoctial ring dial The dial is suspended from a cord or chain; the suspension point on the vertical meridian ring can be changed to match the local latitude. The time is read off on the equatorial ring; in the example below, the center bar is twisted until a sunray passes through a small hole and falls on the horizontal equatorial ring. Sun ring A sunring or farmer's ring is a latitude-specific simplification of astronomical rings. On one-piece sunrings, the time and month scale is marked on the inside of the ring; a sunbeam passing through a hole in the ring lights a point on this scale. Newer sunrings are often made in two parts, one of which slides to set the month; they are usually less accurate. Sea ring In 1610, Edward Wright created the sea ring, which mounted a universal ring dial over a magnetic compass. This permitted mariners to determine the time and magnetic variation in a single step. These are also called "sundial compasses". 
Structure and function The three rings are oriented with respect to the local meridian, the planet's equator, and a celestial object. The instrument itself can be used as a plumb bob to align it with the vertical. The instrument is then rotated until a single light beam passes through two points on the instrument. This fixes the orientation of the instrument in all three axes. The angle between the vertical and the light beam gives the solar elevation. The solar elevation is a function of latitude, time of day, and season. Any one of these variables can be determined using astronomical rings, if the other two are known. At the poles (where the sun rises and sets once a year), the altitude of the sun changes little over a single day, so at high latitudes rough measurements of solar altitude barely vary with time of day. Use as a calendar sundial When the solar time is exactly noon, or known from another clock, the instrument can be used to determine the time of year. The meridional ring can function as the gnomon when the rings are used as a sundial. A horizontal line aligned on a meridian with a gnomon facing the noon-sun is termed a meridian line and does not indicate the time, but instead the day of the year. Historically, such lines were used to accurately determine the length of the solar year. A fixed meridional ring on its own can be used as an analemma calendar sundial, which can be read only at noon. When the shadows of the rings are aligned so that they appear to be in the same, or nearly the same, place, the meridian identifies itself. Meridional ring The meridian ring is placed vertically, then rotated (relative to the celestial object) until it is parallel to the local north-south line. The whole ring is thus parallel to the circle of longitude passing through the place where the user is standing. Because the instrument is often supported by the meridional ring, it is often the outermost ring, as it is in the traveller's rings illustrated above. 
There, a sliding suspension shackle is attached to the top of the meridional ring, from which the whole device can be suspended. The meridional ring is marked in degrees of latitude (0–90, for each hemisphere). When properly used, the pointer on the support points to the latitude of the instrument's location. This tilts the equatorial ring so that it lies at the same angle to the vertical as the local equator. Equatorial ring The equatorial ring occupies a plane parallel to the celestial equator, at right angles to the meridian. It is aligned by being attached to the meridional ring at the marking for latitude zero (see above) and by the declination ring, which is aligned to the celestial object. Often equipped with a graduated scale, it can be used to measure right ascension. On the traveller's sundial shown above, it is the inner ring. This ring is sometimes engraved with the months on one side and corresponding zodiac signs on the outside, much as on an astrolabe. Others have been found to be engraved with two twelve-hour time scales. Each twelve-hour scale is stretched over 180 degrees and numbered by hour, with hashes every 20 minutes and smaller hashes every four minutes. The inside displays a calendrical scale with the names of the months indicated by their first letters, with a mark to show every 5 days and other marks to represent single days. On these, the outside of the ring is engraved with the corresponding symbols of the zodiac signs. The position of the symbol indicates the date of the entry of the sun into this particular sign. The vernal equinox is marked at March 15 and the autumnal equinox is marked at September 10. Declination ring The declination ring is moveable, and rotates on pivots set in the meridian ring. An imaginary line connecting these pivots is parallel to the Earth's axis. The declination "ring" of the traveller's sundial above is not a ring at all, but an oblong loop with a slider for setting the season. 
This ring is often equipped with vanes and pinholes for use as the alidade of a dioptra (see image). It can be used to measure declination. This ring is also often marked with the zodiac signs and twenty-five stars, similar to the astrolabe. References Bibliography Ancient Greek astronomy Historical scientific instruments Astronomical instruments Sundials
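The dependence of solar elevation on latitude, season, and time of day described under "Structure and function" above can be made explicit. A minimal sketch, not from the source, using the standard astronomical relation with φ the latitude, δ the solar declination (which encodes the season), and H the hour angle (which encodes local solar time):

```latex
% Solar elevation h from latitude (phi), declination (delta), and hour angle (H)
\sin h = \sin\varphi \,\sin\delta + \cos\varphi \,\cos\delta \,\cos H
```

Measuring h with the rings therefore fixes any one of φ, δ, or H once the other two are known. At the poles (φ = ±90°) the second term vanishes, giving sin h = ±sin δ independent of H, which is why rough solar-altitude measurements at high latitudes barely vary over a day. The twelve-hour scale stretched over 180 degrees likewise implies 15° per hour, so hashes every 20 minutes sit 5° apart and the smaller four-minute hashes 1° apart.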
Astronomical rings
Astronomy
1,355
2,059,448
https://en.wikipedia.org/wiki/List%20of%20set%20theory%20topics
This page is a list of articles related to set theory. Articles on individual set theory topics Lists related to set theory Glossary of set theory List of large cardinal properties List of properties of sets of reals List of set identities and relations Set theorists Societies and organizations Association for Symbolic Logic The Cabal Topics Set theory
List of set theory topics
Mathematics
64
1,180,732
https://en.wikipedia.org/wiki/Data%20control%20language
A data control language (DCL) is a syntax similar to a computer programming language used to control access to data stored in a database (authorization). In particular, it is a component of Structured Query Language (SQL). Data control language is one of the logical groups of SQL commands. SQL is the standard language for relational database management systems. SQL statements are used to perform tasks such as inserting data into a database, deleting or updating data in a database, or retrieving data from a database. Though database systems use SQL, they also have their own additional proprietary extensions that are usually only used on their own system. For example, Microsoft SQL Server uses Transact-SQL (T-SQL), an extension of SQL, and Oracle similarly uses its proprietary PL/SQL. However, the standard SQL commands such as "Select", "Insert", "Update", "Delete", "Create", and "Drop" can be used to accomplish almost everything that one needs to do with a database. Examples of DCL commands include: GRANT to allow specified users to perform specified tasks. REVOKE to remove a user's access to a database object. The operations for which privileges may be granted to or revoked from a user or role apply to both the Data definition language (DDL) and the Data manipulation language (DML), and may include CONNECT, SELECT, INSERT, UPDATE, DELETE, EXECUTE, and USAGE. Microsoft SQL Server In Microsoft SQL Server there are four groups of SQL commands. Data Manipulation Language (DML) Data Definition Language (DDL) Data Control Language (DCL) Transaction Control Language (TCL) DCL commands are used for access control and permission management for users in the database. With them we can easily allow or deny some actions for users on tables or records (row-level security). DCL commands are: GRANT We can give certain permissions on a table (and other objects) to specified groups/users of a database. 
DENY bans certain permissions from groups/users. REVOKE takes away permissions from groups/users. For example: GRANT can be used to give a user privileges to do SELECT, INSERT, UPDATE and DELETE on a specific table or multiple tables. The REVOKE command is used to take back a privilege (by default) or to revoke a specific command like UPDATE or DELETE based on requirements. Example GRANT: in the first case we give user User1 privileges to do SELECT, INSERT, UPDATE and DELETE on the table called employees. REVOKE: with this command we can take a privilege back to the default one; in this case, we take back the command INSERT on the table employees for user User1. DENY is a specific command. We can conclude that every user has a list of privileges which are denied or granted, so the command DENY exists to explicitly ban certain privileges on the database objects. Oracle Database Oracle Database divides SQL commands into different types. They are: Data Definition Language (DDL) Statements Data Manipulation Language (DML) Statements Transaction Control Statements Session Control Statements System Control Statement Embedded SQL Statements For details, refer to Oracle-TCL. Data definition language (DDL) statements let you perform these tasks: Create, alter, and drop schema objects Grant and revoke privileges and roles Analyze information on a table, index, or cluster Establish auditing options Add comments to the data dictionary So Oracle Database DDL commands include the grant and revoke privileges which are actually part of Data Control Language in Microsoft SQL Server. Syntax for grant and revoke in Oracle Database: Example Transaction Control Statements in Oracle Transaction control statements manage changes made by DML statements. 
The transaction control statements are: COMMIT ROLLBACK SAVEPOINT SET TRANSACTION SET CONSTRAINT MySQL MySQL divides SQL statements into different types of statement: Data Definition Statements Data Manipulation Statements Transactional and Locking Statements Replication Statements Prepared Statements Compound Statement Syntax Database Administration Statements Utility Statements For details, refer to MySQL transactional statements. The GRANT and REVOKE syntax are part of Database Administration Statements → Account Management Statements. The GRANT statement enables system administrators to grant privileges and roles, which can be granted to user accounts and roles. These syntax restrictions apply: GRANT cannot mix granting both privileges and roles in the same statement. A given GRANT statement must grant either privileges or roles. The ON clause distinguishes whether the statement grants privileges or roles: With ON, the statement grants privileges. Without ON, the statement grants roles. It is permitted to assign both privileges and roles to an account, but you must use separate GRANT statements, each with syntax appropriate to what is to be granted. The REVOKE statement enables system administrators to revoke privileges and roles, which can be revoked from user accounts and roles. Examples In PostgreSQL, executing DCL is transactional, and can be rolled back. GRANT and REVOKE are the SQL commands used to control the privileges given to users in a database. SQLite does not have any DCL commands as it does not have usernames or logins. Instead, SQLite depends on file-system permissions to define who can open and access a database. See also Data definition language Data manipulation language Data query language References Data modeling SQL Database management systems
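The GRANT/REVOKE/DENY examples mentioned above (user User1 and the employees table) appear to have been stripped from this copy. A hedged reconstruction, using only the names the text itself supplies (User1, employees; the schema and host names in the MySQL lines are illustrative placeholders) and standard syntax; the exact statements in the original article may have differed:

```sql
-- SQL Server style: give User1 the four DML privileges on employees
GRANT SELECT, INSERT, UPDATE, DELETE ON employees TO User1;

-- Take back (revoke) the INSERT privilege, returning it to the default state
REVOKE INSERT ON employees FROM User1;

-- Explicitly ban DELETE; unlike REVOKE, DENY overrides privileges
-- the user may inherit through group or role membership
DENY DELETE ON employees TO User1;

-- MySQL: the ON clause distinguishes privilege grants from role grants
GRANT SELECT ON shop.employees TO 'User1'@'localhost';  -- privileges (has ON)
GRANT 'readers' TO 'User1'@'localhost';                 -- role (no ON)
```

In PostgreSQL, as the text notes, such statements are transactional, so a GRANT issued inside a transaction disappears if the transaction is rolled back.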
Data control language
Engineering
1,064
61,926,174
https://en.wikipedia.org/wiki/Behavioral%20change%20support%20system
A Behavioral Change Support System (BCSS) is any information and communications technology (ICT) tool, web platform, or gamified environment which targets behavioral changes in its end-users. BCSSes are built upon persuasive systems design techniques. Underlying theories and models The design of these systems and their contents is based on behavioral change theories and models for behavioral change over time. The theory of planned behavior describes the relationship between attitudes, intentions, and the desired behavior. It is considered to be one of the most influential determinant models. A supporting model is the Fogg Behaviour Model (FBM), which states that a user must first be motivated and have the ability to perform the change in their behavior, which is then triggered by either intrinsic or extrinsic factors (the term "trigger" was changed by the author in late 2017, and the term "prompt" is now used). BCSSes make use of extrinsic (perceptual) prompts like alarms, messages with offers or calls to action, ads, requests, and more. Other theories that aid in the design and mechanisms behind a BCSS include the social learning theory (SLT), which studies the interactions between a user and the environment, and the theory of reasoned action, from which the theory of planned behavior originated. Techniques and elements Applications of BCSS may include game and training elements in several market domains, ranging from health, education, and quality of life (QoL) to professional development and workability. Virtually any concept designed to cause a shift in a person's behavior can be considered a BCSS, even if this change is not directly observed by the users. When users are aware of this intention and choose to work within the system, the chances of favorable results from this system increase. This effect is attributed to metacognition, as most BCSS systems implement metacognitive strategies for goal attainment. 
These strategies help users understand the cause of their resistance to adopting the desired behavior. It requires that they monitor themselves whenever the targeted behavior can be observed to understand their progress towards the desired behavior, and record evidence (usually objective but also subjective measurements) of their behavioral changes. There can be a positive impact on people who have difficulties in changing their behavior by considering behaviors and the distance to the desired behavior. This can be achieved by helping them develop a personalized plan for reaching the targeted behavior and learning the ways to achieve their personal goals. In most cases, the general objective can be split into more than one objective or step, before the desired behavior is adopted by the users and becomes a routine. The positive feedback introduces self-management in BCSS applications since it is particularly helpful for people to take responsibility for their own actions and do things to the best of their ability. BCSS is very often equipped with additional features like game elements to foster user engagement leading to serious game applications. Moreover, they implement machine learning techniques to predict the future behavior of users based on their past performance. The evidence of the achieved change in behavior, as well as important notifications during self-evaluation, are communicated with visual analytics tools such as performance graphs. Additional tools frequently found in BCSS include checklists and questionnaires to collect users' feedback, hardware sensing components like the Internet of things (IoT) devices (e.g., cameras), and social collaboration to help the members of a user community to support each other. Occasionally, some BCSS allow professionals (trainers, educators, medical personnel and social professionals) to participate in the BCSS activities. 
This can be done by giving advice and support and also by making decisions and alterations to the treatment plan according to the observed performance and the personal needs of the targeted users. Taxonomies Most BCSSes work on a single profile (targeted user), while some can monitor and report progress made by a group of people. There are BCSS applications purely made using software, while others include hardware components like sensors and IoT devices to introduce physical computing in a hybrid physical-digital approach. The devices used to access a BCSS are usually internet-connected mobile devices like smartphones, tablets, or smartwatches. The success in this category of BCSS applications lies in monitoring and notifying the users constantly in regards to daily activities. On the other hand, there are BCSSes which are less intrusive and rely on less frequent access to the system. Another way to distinguish BCSSes is by the knowledge domain they refer to. Theoretically, a BCSS can be built in any knowledge domain. Knowledge domains eHealth/mHealth Examples of BCSS applied in eHealth domains include CAREGIVERSPRO-MMD, a community-based intervention to support people living with dementia and their caregivers using game elements to engage users in non-pharmacological interventions; a system that trains nurses in lifting and transfer techniques to prevent lower-back injuries; and We4Fit, which is more like a game environment. A more extensive review of health BCSS can be found in the work of Alahäivälä & Oinas-Kukkonen (2016) and Bridle et al. (2005). Education As Arlinghaus and Johnston implied, "Although not sufficient, education is a necessary component for behaviour change" (2018). BCSSes are used in education less for imparting knowledge and testing knowledge gained, and more for teaching a difficult subject like "responsible sexual behaviour" to middle-school students, or for changing attitudes and beliefs about a topic of interest. 
Adopting new behavioral patterns is difficult, and people are not motivated to change their behavior if they do not recognize the blocking issue. Gamification is used to help recognition by providing the rewards, competition, and motivational cues of a BCSS. Prochaska et al. (2007) proposed a six-stage behavioral change model (pre-contemplation, contemplation, preparation, action, maintenance, and termination) which can be applied in educational uses of a BCSS, as it provides an ideal environment for making the first step (contemplation) after a long period of resistance (pre-contemplation). BCSSes affect the physical world and help people experiment with an alternative behavioral pattern without thinking of possible consequences (such as social exposure). The virtual activities performed in a BCSS help in the next step (preparation), where the user makes a transition from a passive to an active state in a safe environment. The user-monitoring and reward system of a BCSS helps users complete the rest of the stages of the behavior change (action, maintenance, and termination) and avoid regression to the previous unwanted behavior. Schmied (2017) proposes a similar seven-step process: the Designing for Behaviour Change (DBC) framework. Overall, a positive behavioral change in education settings is facilitated by technology through digital intervention strategies, where a teacher or educator makes adjustments to personalize the interventions to the students' profiles and performance. Although ICT tools may not be necessary to change behavior in schools, when used in the form of serious game-assisted learning, they can provide a more in-depth perception of important concepts in a field of study despite some disadvantages. Other Domains BCSS has been applied in other knowledge and study areas, including workers' behaviour, consumers' brand-loyalty, and footprints and energy consumption. 
Examples include applications designed to raise water-saving awareness, apps used by drivers to reduce fuel consumption by adopting an eco-friendly driving style, and educational games for simulating energy consumption in domestic environments, as in Casals et al. (2017). A systematic review of the application of game elements to behavioural change in domestic energy consumption can be found in Johnson et al. (2017). An example from the Industry 4.0 domain is SATISFACTORY, which proposes a gamified social collaboration platform that is integrated into the shop-floor of industries to improve productivity, safety and workers' engagement. In the marketing context, behavioural change techniques do not aim to change the way people think, but how they consume products and services. In politics, behavioural change interventions are delivered in the form of mass-media campaigns on existing social media platforms rather than standalone applications. Overall, there is a continually growing number of domains in which ICT tools are introduced to implement and deliver behavioral change campaigns in a systematic way. Some researchers refer to persuasive technology to identify the computer-mediated communication between humans or human-computer interaction technologies used to deliver persuasive evidence. A BCSS should be treated as a more complex ICT-based construct which may use persuasive technologies, but also supports the full life-cycle of behavioral change interventions (from authoring to publishing), implements various campaigns to achieve its goals, and is adaptive to specific user profiles. Criticism Behavior Change Support Systems have been criticized for a lack of grounding in independent behavioral theory, as well as the lack of industry standards to measure performance or effect. Another source of criticism refers to the dominant behavioral change models as products of the theory of planned behavior. 
According to some researchers (Kollmus & Agyeman, 2002), there is a gap between attitude and intention, and target behavior. Thus, it is difficult to find a widely accepted model that can take all relevant behavioral parameters into account. Additionally, even if BCSSes help to effect a change in a targeted user's behavior, the user usually fails to maintain the target behavior. This could be the result of underestimating the long-term influence that environmental factors have on behavior. There is currently an open discussion on how intrusive a BCSS should be, but this appears to be dependent upon the physical and social context of the environment in which the BCSS is being used. As BCSS makes use of personal data coming from users' profiles and the user-monitoring system, the use of BCSSes in everyday life may be legally restricted. References See also Theory of planned behavior Human behavior Behavior modification
Behavioral change support system
Biology
2,045
65,288,458
https://en.wikipedia.org/wiki/Whitechapel%20Mount
Whitechapel Mount was a large artificial mound of disputed origin. A prominent landmark in 18th-century London, it stood in the Whitechapel Road beside the newly constructed London Hospital, being not only older, but significantly taller. It was crossed by tracks, served as a scenic viewing-point (and a hiding place for stolen goods), could be ascended by horses and carts, and supported some trees and formal dwelling-houses. It has been interpreted as: a defensive fortification in the English Civil War; a burial place for victims of the Great Plague; rubble from the Great Fire of London; and as a laystall (hence it was sometimes called Whitechapel Dunghill). Possibly all of these theories are true to some extent. Whitechapel Mount was physically removed around 1807. Because Londoners widely believed in the "Great Fire rubble" theory, the remains were sifted by antique hunters, and some sensational finds were claimed. Its name survives in the present-day placename Mount Terrace, E1. Location Neighbourhood Whitechapel Mount was on the south side of the Whitechapel Road, on the ancient route from the City of London to Mile End, Stratford, Colchester and Harwich. In the 18th century and later the surroundings were mostly fields: grazing for cows or market gardens. Leaving London, a traveller would pass the church of St Mary Matfelon (origin of the name "white chapel") on the right, then a windmill, before arriving at the Mount. Across the Whitechapel Road was a burying ground and the Ducking Pond. Further east, at the turnpike, the name changed to Mile End Road, with Dog Row (today Cambridge Heath Road) branching off to Bethnal Green. According to John Strype (1720) the neighbourhood was a busy one, with good inns for travellers in Whitechapel and good houses for sea captains in Mile End. Even so, said Strype, the Whitechapel Road was "pestered" by illegally built, poor quality dwellings. 
Another source said it was infested by highwaymen, footpads and riff-raff of all kinds who preyed on travellers to and from London. News and court reports speak of murders and robberies. In two Old Bailey cases stolen property was buried in, and recovered from, Whitechapel Mount. It was a place of resort for pugilists and dog-fighters. From the summit of Whitechapel Mount an extensive view of the hamlets of Limehouse, Shadwell, and Ratcliff could be obtained. On maps Whitechapel Mount's position is first depicted in a 1673 building plan by Sir Christopher Wren, where he refers to it as "the mud wall called the Fort". In Joel Gascoyne's survey of the parish of Stepney (1703) it is called The Dunghill: it is at least 400 yards long, and is crossed not only by a path, but a road. In John Rocque's map of London (1746) it has been horizontally truncated, but is shown with substantial elevation, with at least one dwelling-house – if not a terrace of houses – on its western end. In Richard Blome's map of 1755 it has a large dwelling-house with front drive and appears alongside the newly built London Hospital. In John Cary's 1795 map its western portion has been truncated by the newly built New Road, but appears to have a substantial building in its northwest corner. Possible origins Civil War fortification In 1643 London was hastily fortified against the Royalist armies, for "there is terrible news that [Prince] Rupert will sack it and so a complete and sufficient dike and earthern wall and bulwarks must be made". Twenty-three or 24 earthen forts were built at intervals around the city and its main suburbs; neighbouring forts were in sight of one another. These forts were manned by volunteers e.g. men too old to be in the regular militia; local innkeepers were ordered to provide them with food. The forts were interconnected by an earth bank and trench, dug by "great numbers of men, women and young children". 
All social classes joined in the labour: From ladies down to oyster-wenches Labour'd like pioneers in trenches, Fell to their pick-axes and tools, And help'd the men to dig like moles. The completed ring was 18 miles in circumference. A visiting Scotsman walked it; it took him 12 hours. Fort No.2, officially a hornwork with two flanks, commanded the Whitechapel Road. The Scottish traveller, who inspected it, said it was a "nine angled fort only pallosaded and single ditched and planted with seven pieces of brazen ordnance [brass cannon], and a court du guard [guardhouse] composed of timber and thatched with tyle stone as all the rest are". There was a trench around its base. Daniel Lysons said the earthwork at Whitechapel was 329 foot long, 182 foot broad and more than 25 foot above ground level. After the civil war these fortifications were swiftly removed because they spoiled productive agricultural land. However traces remained, and one of these was Whitechapel Mount. Wrote Lysons (1811): "The east end was till of late years very perfect; on the west side some houses had been built. The surface on the top, except where it had been dug away, was perfectly level." Mount Street, Mayfair, is another relict name of one of these civil war forts. Great Plague burial ground In the bubonic epidemic of 1665 an estimated 70,000–100,000 Londoners died of the plague. Aldgate, Whitechapel and Stepney were badly affected. This placed a strain on the burial facilities and some were buried in mass graves. Official mass graves were made by opening a deep pit and leaving it open for as long as it took to fill with cadavers, which was done downwind. The number of lost plague pits in London is commonly exaggerated — most mass graves were actually in pre-existing churchyards. However there were some gaps in the records and "undoubtedly some temporary and irregular plague burial sites". 
Those willing to man the dead carts (and dispose of the corpses) were not fastidious; a contemporary said they were "very idle base liveing men and very rude", drawing attention to their task by swearing and cursing. No contemporaneous record confirms corpses were officially buried in Whitechapel Mount. It is known that there were several burial grounds or plague pits in the vicinity, e.g. across the road. Whether the mound was used as a burial ground has been disputed. According to popular tradition, it was. In one version, Whitechapel Mount's origin was that rubble from the Fire of London was thrown over a plague pit to cover it up. These rumours were denied by the authorities when they proposed to remove Whitechapel Mount; they had it "pierced" in an effort to refute them. The clearest reputable source is the author Joseph Moser, who wrote: In the course of last summer [1802], when part of the rubbish of [Whitechapel Mount] had just been removed, I had the curiosity to inspect the place, and observed in the different strata a great number of human bones, together with those, apparently, of different animals, oxen, or cows, and sheep's horns, bricks, tiles, &c. The bones and other exuvia of animals were in many places, especially towards the bottom, bedded in a stiff, viscid earth, of the blueish colour and consistence of potter's clay, which was unquestionably the original ground, thrown into different directions, as different interments operated upon its surface. A complication is that the London Hospital itself may have used the Mount for burials: a hospital history says that by 1764 "The Mount Burying Ground was full". Great Fire detritus The Great Fire of London occurred the year after the plague. The nearest burnt environment was a mile away. 
Even so, respectable sources asserted that Whitechapel Mount was formed or augmented from rubble from the Fire of London, including Dr Markham the rector of Whitechapel Church, who took pains to investigate the subject. This origin theory was widely believed, though some denied it. When the Mount was dismantled in the 19th century the belief attracted flocks of antique hunters; see below. Laystall The above theories about the origin of Whitechapel Mount, by themselves, may not account for its sheer size as shown in the maps and illustrations cited. A laystall was an open rubbish tip; towns relied on them for disposing of garbage, and did so until the sanitary landfills of the 20th century. By legislation of October 1671 seven laystalls were appointed for the City of London. All street sweepings and household rubbish was to be collected in carts and had to be tipped in one of these, and nowhere else. One of them was Whitechapel Mount: it was to receive the rubbish from the wards of Portsoken, Tower, Duke's Place and Lime Street. However, a laystall was more than a passive tip: it was a business. Its proprietor employed gangs of men, women and boys to sort the rubbish and recycle it. Without him, nothing prevented indiscriminate fly-tipping and random lateral spread onto adjoining properties. William Guy who investigated the laystalls of mid-19th century London reported In most of the laystalls or dustmen's yards, every species of refuse matter is collected and deposited:– nightsoil, the decomposing refuse of markets, the sweepings of narrow streets and courts, the sour-smelling grains from breweries, the surface soil of the leading thoroughfares, and the ashes from the houses. The proportion in which these several matters are collected, vary... 
In all these establishments the bulk of the deposits consists of dust from the houses, which is sifted on the spot by women and boys seated on the dust-heaps, assisted by men who are engaged in filling the sieves, sorting the heterogeneous materials, or removing and carting them away.

While the legislation speaks of "dung, soil, filth and dirt", most domestic refuse (by volume) comprised household dust, typically coal ash – hence the expression "dustman" – and this could be used for making bricks. At the close of the 18th century there was a tremendous demand for bricks to build the rapidly expanding London; "At night, a 'ring of fire' and pungent smoke encircled the City"; there were numerous local brickfields, for example at Mile End. An Old Bailey case of 1809 records that bricks were being delivered from the diminishing Whitechapel Mount.

Removal

The construction of the East and West India Docks early in the nineteenth century caused roads to be made through the low marshy fields extending from Shadwell and Ratcliff to Whitechapel. New Street/Cannon Street Road, leading from Whitechapel Mount to St George in the East, so increased the value of the land on each side of it that the Corporation of London decided to take down the Mount. This occurred in 1807–8, and Mount Place, Mount Terrace, and Mount Street were then built on the site, thus marking the spot where the Mount stood. The process took several years, efforts at first being desultory. There was even a proposal – a prefiguring of the Regent's Canal – to bring a canal from Paddington to the docks at Wapping by passing through Whitechapel Mount. The soil of the Mount was used to make bricks (see above), and these were delivered, among other places, to Wentworth Street, Bethnal Green and "Hanbury's Brewhouse" (the Black Eagle Brewery), buildings that stand today.

Antique hunters

Believing it to contain detritus from the Fire of London, flocks of cockney enthusiasts examined the Mount's soil.
Various antiques were found in it – or alleged to be found in it, since it gave them provenance – including a silver tankard and a Roman coin. The most spectacular find was a carved boar's head with silver tusks; it was, said reputable antiquarians, an authentic memento of the Boar's Head Inn, Eastcheap: a setting for Shakespeare plays, Falstaff, Mistress Quickly and so forth, and genuinely burnt down in the Fire of London.

Archaeology

For lack of access, there has been no systematic archaeological investigation of the site. However, redevelopment of the Royal London Hospital has allowed occasional glimpses. Mackinder (1994), London Hospital Medical College, Newark Building, Grid Reference TQ3456815: Aitken (2005), The Front Green, Royal London Hospital, Grid Reference TQ347816:

In theatre, literature and popular song

The Skeleton Witness: or, The Murder at the Mound by William Leman Rede was a play performed on the English and American stage. The villain commits a murder and conceals the body in Whitechapel Mount. The crime is done in such a way that, should the remains be discovered, the impoverished hero will get the blame, though innocent. The hero goes abroad for seven years and makes his fortune. On his return to London, about to marry the heroine, he learns, to his horror, that the Mount is to be cleared away, digging to start on the morrow. To stop this, he hurries off to the Mount's owner and buys it at an extravagant price. The vendor, suspicious, wonders if the Mount contains a buried treasure and sends some men to poke around. They discover the skeleton. By a complicated plot twist the hero proves his innocence and lives happily ever after with the heroine.

The Mount was also used as a metonym for the eastern extremity of London, as in "from the farthest extent of Whitechapel mount to the utmost limits of St Giles", or "[the news] was all over town, from Hyde Park Corner to Whitechapel dunghill".
The Coachman, a popular 18th-century song, began:

I'm a boy full spunk, and my name's little Joe,
It's I that can tip the long trot;
From Whitechapel-mount up to fam'd Rotten-row,
With the ladies sometimes is my lot.

References and notes

Sources

External links

Mount Terrace in the Survey of London

17th-century forts in England 18th century in London Archaeology of London Buildings and structures in Whitechapel Death in London Great Fire of London History of the London Borough of Tower Hamlets Mounds Waste Whitechapel Great Plague of London
Whitechapel Mount
https://en.wikipedia.org/wiki/SymPy
SymPy is an open-source Python library for symbolic computation. It provides computer algebra capabilities either as a standalone application, as a library for other applications, or live on the web as SymPy Live or SymPy Gamma. SymPy is simple to install and to inspect because it is written entirely in Python with few dependencies. This ease of access, combined with a simple and extensible code base in a well-known language, makes SymPy a computer algebra system with a relatively low barrier to entry.

SymPy includes features ranging from basic symbolic arithmetic to calculus, algebra, discrete mathematics, and quantum physics. It can format the results of computations as LaTeX code.

SymPy is free software and is licensed under the 3-clause BSD license. The lead developers are Ondřej Čertík and Aaron Meurer. It was started in 2005 by Ondřej Čertík.

Features

The SymPy library is split into a core with many optional modules. Currently, the core of SymPy has around 260,000 lines of code (including a comprehensive set of self-tests: over 100,000 lines in 350 files as of version 0.7.5), and its capabilities include:

Core capabilities
Basic arithmetic: *, /, +, -, **
Simplification
Expansion
Functions: trigonometric, hyperbolic, exponential, roots, logarithms, absolute value, spherical harmonics, factorials and gamma functions, zeta functions, polynomials, hypergeometric and other special functions, etc.
Substitution
Arbitrary-precision integers, rationals and floats
Noncommutative symbols
Pattern matching

Polynomials
Basic arithmetic: division, gcd, etc.
Factorization
Square-free factorization
Gröbner bases
Partial fraction decomposition
Resultants

Calculus
Limits
Differentiation
Integration: implemented using the Risch–Norman heuristic
Taylor series (Laurent series)

Solving equations
Systems of linear equations
Systems of algebraic equations that are solvable by radicals
Differential equations
Difference equations

Discrete math
Binomial coefficients
Summations
Products
Number theory: generating prime numbers, primality testing, integer factorization, etc.
Logic expressions

Matrices
Basic arithmetic
Eigenvalues and eigenvectors, when the characteristic polynomial is solvable by radicals
Determinants
Inversion
Solving

Geometry
Points, lines, rays, ellipses, circles, polygons, etc.
Intersections
Tangency
Similarity

Plotting (requires the external Matplotlib or Pyglet module)
Coordinate models
Plotting geometric entities
2D and 3D
Interactive interface
Colors
Animations

Physics
Units
Classical mechanics
Continuum mechanics
Quantum mechanics
Gaussian optics
Linear control

Statistics
Normal distributions
Uniform distributions
Probability

Combinatorics
Permutations
Combinations
Partitions
Subsets
Permutation groups: polyhedral, Rubik, symmetric, etc.
Prüfer sequences and Gray codes

Printing
Pretty-printing: ASCII/Unicode pretty-printing, LaTeX
Code generation: C, Fortran, Python

Related projects

SageMath: an open-source alternative to Mathematica, Maple, MATLAB, and Magma (SymPy is included in Sage)
SymEngine: a rewriting of SymPy's core in C++, in order to increase its performance. Work is in progress to make SymEngine the underlying engine of Sage as well.
mpmath: a Python library for arbitrary-precision floating-point arithmetic
SympyCore: another Python computer algebra system
SfePy: software for solving systems of coupled partial differential equations (PDEs) by the finite element method in 1D, 2D and 3D
GAlgebra: geometric algebra module (previously included in SymPy)
Quameon: quantum Monte Carlo in Python
Lcapy: experimental Python package for teaching linear circuit analysis
LaTeX Expression project: easy LaTeX typesetting of algebraic expressions in symbolic form, with automatic substitution and result computation
Symbolic statistical modeling: adding statistical operations to complex physical models
Diofant: a fork of SymPy, started by Sergey B. Kirpichev

Dependencies

Since version 1.0, SymPy has had the mpmath package as a dependency. There are several optional dependencies that can enhance its capabilities:

gmpy2: If gmpy2 is installed, SymPy's polynomial module will automatically use it for faster ground types. This can provide a several-times boost in the performance of certain operations.
matplotlib: If matplotlib is installed, SymPy can use it for plotting.
Pyglet: Alternative plotting package.

Usage examples

Pretty-printing

SymPy allows output to be formatted into a more appealing form through the pprint function. Alternatively, the init_printing() method will enable pretty-printing, so pprint need not be called. Pretty-printing uses Unicode symbols when available in the current environment, otherwise it falls back to ASCII characters.

>>> from sympy import pprint, init_printing, Symbol, sin, cos, exp, sqrt, series, Integral, Function
>>>
>>> x = Symbol("x")
>>> y = Symbol("y")
>>> f = Function("f")
>>> # pprint will default to unicode if available
>>> pprint(x ** exp(x))
 ⎛ x⎞
 ⎝ℯ ⎠
x
>>> # An output without unicode
>>> pprint(Integral(f(x), x), use_unicode=False)
  /
 |
 | f(x) dx
 |
/
>>> # Compare with same expression but this time unicode is enabled
>>> pprint(Integral(f(x), x), use_unicode=True)
⌠
⎮ f(x) dx
⌡
>>> # Alternatively, you can call init_printing() once and pretty-print without the pprint function.
>>> init_printing()
>>> sqrt(sqrt(exp(x)))
  ____
4╱  x
╲╱  ℯ
>>> (1/cos(x)).series(x, 0, 10)
     2      4       6       8
    x    5⋅x    61⋅x    277⋅x     ⎛ 10⎞
1 + ── + ──── + ───── + ────── + O⎝x  ⎠
    2     24     720     8064

Expansion

>>> from sympy import init_printing, Symbol, expand
>>> init_printing()
>>>
>>> a = Symbol("a")
>>> b = Symbol("b")
>>> e = (a + b) ** 3
>>> e
(a + b)³
>>> e.expand()
a³ + 3⋅a²⋅b + 3⋅a⋅b² + b³

Arbitrary-precision example

>>> from sympy import Rational, pprint
>>> e = 2**50 / Rational(10) ** 50
>>> pprint(e)
1/88817841970012523233890533447265625

Differentiation

>>> from sympy import init_printing, symbols, ln, diff
>>> init_printing()
>>> x, y = symbols("x y")
>>> f = x**2 / y + 2 * x - ln(y)
>>> diff(f, x)
2⋅x
─── + 2
 y
>>> diff(f, y)
   2
  x    1
- ── - ─
   2   y
  y
>>> diff(diff(f, x), y)
-2⋅x
────
  2
 y

Plotting

>>> from sympy import symbols, cos
>>> from sympy.plotting import plot3d
>>> x, y = symbols("x y")
>>> plot3d(cos(x * 3) * cos(y * 5) - y, (x, -1, 1), (y, -1, 1))
<sympy.plotting.plot.Plot object at 0x3b6d0d0>

Limits

>>> from sympy import init_printing, Symbol, limit, sqrt, oo
>>> init_printing()
>>>
>>> x = Symbol("x")
>>> limit(sqrt(x**2 - 5 * x + 6) - x, x, oo)
-5/2
>>> limit(x * (sqrt(x**2 + 1) - x), x, oo)
1/2
>>> limit(1 / x**2, x, 0)
∞
>>> limit(((x - 1) / (x + 1)) ** x, x, oo)
 -2
ℯ

Differential equations

>>> from sympy import init_printing, Symbol, Function, Eq, dsolve, sin, diff
>>> init_printing()
>>>
>>> x = Symbol("x")
>>> f = Function("f")
>>>
>>> eq = Eq(f(x).diff(x), f(x))
>>> eq
d
──(f(x)) = f(x)
dx
>>>
>>> dsolve(eq, f(x))
           x
f(x) = C₁⋅ℯ
>>>
>>> eq = Eq(x**2 * f(x).diff(x), -3 * x * f(x) + sin(x) / x)
>>> eq
 2 d                      sin(x)
x ⋅──(f(x)) = -3⋅x⋅f(x) + ──────
   dx                       x
>>>
>>> dsolve(eq, f(x))
       C₁ - cos(x)
f(x) = ───────────
            x³

Integration

>>> from sympy import init_printing, integrate, Symbol, exp, cos, erf
>>> init_printing()
>>> x = Symbol("x")
>>> # Polynomial function
>>> f = x**2 + x + 1
>>> f
 2
x  + x + 1
>>> integrate(f, x)
 3    2
x    x
── + ── + x
3    2
>>> # Rational function
>>> f = x / (x**2 + 2 * x + 1)
>>> f
     x
────────────
 2
x  + 2⋅x + 1
>>> integrate(f, x)
               1
log(x + 1) + ─────
             x + 1
>>> # Exponential-polynomial functions
>>> f = x**2 * exp(x) * cos(x)
>>> f
 2  x
x ⋅ℯ ⋅cos(x)
>>> integrate(f, x)
 2  x           2  x                        x           x
x ⋅ℯ ⋅sin(x)   x ⋅ℯ ⋅cos(x)      x         ℯ ⋅sin(x)   ℯ ⋅cos(x)
──────────── + ──────────── - x⋅ℯ ⋅sin(x) + ───────── - ─────────
     2              2                           2           2
>>> # A non-elementary integral
>>> f = exp(-(x**2)) * erf(x)
>>> f
   2
 -x
ℯ   ⋅erf(x)
>>> integrate(f, x)
  ___    2
╲╱ π ⋅erf (x)
─────────────
      4

Series

>>> from sympy import Symbol, cos, sin, pprint
>>> x = Symbol("x")
>>> e = 1 / cos(x)
>>> pprint(e)
  1
──────
cos(x)
>>> pprint(e.series(x, 0, 10))
     2      4       6       8
    x    5⋅x    61⋅x    277⋅x     ⎛ 10⎞
1 + ── + ──── + ───── + ────── + O⎝x  ⎠
    2     24     720     8064
>>> e = 1/sin(x)
>>> pprint(e)
  1
──────
sin(x)
>>> pprint(e.series(x, 0, 4))
         3
1   x   7⋅x    ⎛ 4⎞
─ + ─ + ──── + O⎝x ⎠
x   6   360

Logical reasoning

Example 1

>>> from sympy import *
>>> x = Symbol("x")
>>> y = Symbol("y")
>>> facts = Q.positive(x), Q.positive(y)
>>> with assuming(*facts):
...     print(ask(Q.positive(2 * x + y)))
True

Example 2

>>> from sympy import *
>>> x = Symbol("x")
>>> # Assumption about x
>>> fact = [Q.prime(x)]
>>> with assuming(*fact):
...     print(ask(Q.rational(1 / x)))
True

See also

Comparison of computer algebra systems

References

External links

Planet SymPy
SymPy Tutorials Collection
Code Repository on GitHub
Support and development forum
Gitter chat room

Articles with example Python (programming language) code Computer algebra system software for Linux Computer algebra system software for macOS Computer algebra system software for Windows Free computer algebra systems Free mathematics software Free software programmed in Python Python (programming language) scientific libraries
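The equation-solving, integration, and code-generation capabilities listed in the Features section can be sketched together in a few lines. This is an illustrative example written for this article, not taken from the SymPy documentation; it uses only the documented top-level functions solve, integrate, latex and ccode:

```python
from sympy import symbols, solve, integrate, latex, ccode, sin

x = symbols("x")

# Solve a quadratic symbolically: x**2 - 5*x + 6 == 0
print(solve(x**2 - 5*x + 6, x))  # [2, 3]

# Symbolic integration (antiderivative, no constant of integration)
F = integrate(x**2 + x + 1, x)
print(F)  # x**3/3 + x**2/2 + x

# Format the same result as LaTeX source
print(latex(F))

# Generate an equivalent C expression
print(ccode(sin(x)**2))  # pow(sin(x), 2)
```

The exact textual ordering of terms in the printed output can vary between SymPy versions, but the symbolic results are version-independent.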
SymPy
https://en.wikipedia.org/wiki/Empirical%20software%20engineering
Empirical software engineering (ESE) is a subfield of software engineering (SE) research that uses empirical research methods to study and evaluate SE phenomena of interest. A phenomenon may refer to software development tools/technology, practices, processes, policies, or other human and organizational aspects. ESE has roots in experimental software engineering, but as the field has matured, the need for and acceptance of both quantitative and qualitative research has grown. Today, common research methods used in ESE for primary and secondary research are the following:

Primary research: experimentation, case study research, survey research, simulations (in particular software process simulation)
Secondary research: systematic reviews, systematic mapping studies, rapid reviews, tertiary reviews

Teaching empirical software engineering

Some comprehensive books are available for students, professionals and researchers interested in ESE.

Research community

Journals, conferences, and communities devoted specifically to ESE include:

Empirical Software Engineering: An International Journal
International Symposium on Empirical Software Engineering and Measurement
International Software Engineering Research Network (ISERN)

References

Software engineering
Empirical software engineering
https://en.wikipedia.org/wiki/ADAM%20%28protein%29
ADAMs (short for a disintegrin and metalloproteinase) are a family of single-pass transmembrane and secreted metalloendopeptidases. All ADAMs are characterized by a particular domain organization featuring a pro-domain, a metalloprotease, a disintegrin, a cysteine-rich, an epidermal growth factor-like and a transmembrane domain, as well as a C-terminal cytoplasmic tail. Nonetheless, not all human ADAMs have a functional protease domain, which indicates that their biological function mainly depends on protein–protein interactions. Those ADAMs which are active proteases are classified as sheddases because they cut off, or shed, the extracellular portions of transmembrane proteins. For example, ADAM10 can cut off part of the HER2 receptor, thereby activating it. ADAM genes are found in animals, choanoflagellates, fungi and some groups of green algae. Most green algae and all land plants likely lost ADAM proteins. ADAMs are categorized under enzyme group EC 3.4.24, and in the MEROPS peptidase family M12B. The terms adamalysin and MDC family (metalloproteinase-like, disintegrin-like, cysteine-rich) have historically been used to refer to this family.

ADAM family members

Medicine

Therapeutic ADAM inhibitors might potentiate anti-cancer therapy.

See also

ADAMTS (a disintegrin and metalloproteinase with thrombospondin motifs) family
Ectodomain shedding

References

External links

http://www.healthvalue.net/sheddase.html

Protein families Single-pass transmembrane proteins Proteases EC 3.4.24
ADAM (protein)
https://en.wikipedia.org/wiki/Great%20Atlantic%20%26%20Pacific%20Tea%20Co.%20v.%20Supermarket%20Equipment%20Corp.
Great Atlantic & Pacific Tea Co. v. Supermarket Equipment Corp., 340 U.S. 147 (1950), is a patent case decided by the United States Supreme Court. The Court held that a patent for a cashier's counter and movable frame for grocery stores was invalid because it was a combination of known elements that added nothing new to the total stock of knowledge.

Background

Patent number 2,242,408 ("the Turnham patent") claimed the invention of a cashier's counter equipped with a three-sided frame, with no top or bottom, which, when pushed or pulled, moved groceries deposited in it by a customer to the clerk and left them there when pushed back to repeat the operation. The district court found that, although each element of the device was known to prior art, a counter with an extension to receive a self-unloading tray with which to push the contents of the tray in front of the cashier was a novel feature and constituted a new and useful combination. The Court of Appeals affirmed the district court's decision. Both courts found that every element claimed in the Turnham patent was known to prior art, except the extension of the counter.

Supreme Court decision

The Supreme Court disagreed with the lower courts' conclusion that the extension of the counter constituted an invention because (1) the extension was not mentioned in the claim, (2) an invention cannot be found in a mere elongation of a merchant's counter, and (3) the Turnham patent overclaimed the invention by including old elements, unless, together with its other old elements, the extension made up a new patentable combination. The Court explained that the key to the patentability of a mechanical device that brings old factors into cooperation is the presence or lack of invention: "[O]nly when the whole in some way exceeds the sum of its parts is the accumulation of old devices patentable."
The Court concluded that the invention claimed by the Turnham patent lacked any "unusual or surprising consequences" from the combination of old elements. The Court added that patents are intended to add to the sum of useful knowledge, and they cannot be sustained when their effect is to subtract from resources freely available. The Court also emphasized that commercial success without invention is not sufficient for patentability.

Concurrence

In his concurrence, Justice Douglas stated that to be patentable, an invention must push back the frontiers of science. In his view, the Patent Office had taken advantage of the opportunity to expand its own jurisdiction and granted patents to inventions that had no place in the constitutional scheme of advancing scientific knowledge.

References

External links

1950 in United States case law The Great Atlantic & Pacific Tea Company United States patent case law United States Supreme Court cases United States Supreme Court cases of the Vinson Court Retail point of sale systems
Great Atlantic & Pacific Tea Co. v. Supermarket Equipment Corp.
https://en.wikipedia.org/wiki/1139th%20Engineer%20Combat%20Group
The 1139th Engineer Combat Group (1139th Engr C Gp) was a technical United States Army headquarters unit providing administrative and supervisory support to combat engineers on bridge building and other construction activities during World War II. The 1139th Engineer Combat Group was part of the Third Army and was attached for operations to the XX Corps in direct support of the 7th Armored Division. The 1139th Engineer Combat Group fought from northern France to Austria in World War II, supporting General George Patton's Third Army's rapid movements during the war. The 1139th Engineer Combat Group is credited with opening routes for the advancement of troops, which included building 119 tactical bridges, of which 39 were 200 feet or longer, including the 1,896-foot "longest floating tactical bridge constructed in the European Theater of Operations." The Group was first formed in December 1943 and deactivated in December 1946.

Development of the Engineer Combat Group Concept

Starting with WWII, the United States Army defined a "group" to be similar in concept to a regiment, the unit type from which the group evolved. In general, a regiment is composed of a fixed number of assigned battalions, whose personnel and equipment are likewise fixed. The battalions have only the minimum administrative personnel, with the result that administrative details are handled by the regimental headquarters. In warfare as mobile and as vast in scale as the European campaign, it was felt that regiments were sometimes not sufficiently agile for maximum effectiveness. Therefore, the group was designed as a flexible organization, set up to meet rapidly changing tactical requirements efficiently. The group would consist of a small headquarters and a varying number of separate companies and battalions which could be attached to and detached from the group headquarters as the situation demanded.
The Engineer Combat Group had many duties: construction, repair and dismantling of various bridge types; clearing of obstacles; road repair and construction; ferrying troops in river crossings; laying smoke screens; and detecting, removing, and laying mines and demolitions, in addition to offensive and defensive infantry functions. In the case of the 1139th Engineer Combat Group, there were as many as five battalions and three separate companies under its command at one time, while at other times the group headquarters controlled no more than a single company. The job of the 1139th was twofold: primarily, they were responsible for the administration of the units under their command, and secondarily, they acted in a supervisory capacity for all work assigned to the group. In addition to the combat engineer battalions, the group could have separate battalions and companies that specialized in constructing specific types of bridges, such as heavy and light pontoon bridges, treadway bridges and Bailey bridges, or in other specialized support functions such as area lighting or smoke generation. The units under control of the group headquarters were used to support divisions' engineer operations, providing special equipment which the division engineers did not carry, and augmenting the personnel strength of the divisions' engineer battalions.

1139th Engineer Combat Group Training and Deployment

The 1139th Engineer Combat Group was activated on August 25, 1943, at Camp Beale, California. Many of the troops that would eventually make up the 1139th Engineer Combat Group received their initial training at Camp Ellis, Illinois, an Army Services Training Center for the Corps of Engineers located southwest of Peoria, Illinois. Most were not career soldiers, and were consigned to "the duration plus six months" of service. From there they were sent to Camp Beale, California, to join the 1139th and complete training as a unit.
In addition to their basic training courses, the men of the 1139th received courses in camouflage, minefield clearing, mapping, reconnaissance, bridge building, and road construction. The 1139th was commanded by Colonel John S. Niles, with Lt. Colonel George H. Walker as the executive officer. On July 1, 1944, the 1139th Engineer Combat Group, consisting of 16 officers and 66 enlisted men, left Camp Shanks by train for the New York Port of Embarkation and began loading onto the USAT Thomas H. Barry. Early the next morning the ship joined a convoy heading to Scotland. They arrived at the town of Greenock, Firth of Clyde, Scotland, on July 12, 1944. The next day they disembarked and traveled via a twelve-hour train trip to Doddington, Nantwich, England, where they organized their men and supplies, and drew vehicles, trailers and other T/E equipment (equipment specified for their particular operations) as required. On August 1, 1944, they traveled by truck convoy from Doddington to Bournemouth, England, a large troop marshaling area, in preparation for movement to Continental Europe. On August 5, 1944, they left England sailing on the USAT George Custer and arrived at Utah Beach just two months after the D-Day landings there. Over the next two days, the unit disembarked and bivouacked in the Utah Beach area.

The 1139th in the European Theater of Operations

On August 8, 1944, they proceeded in motor convoy across the Cherbourg Peninsula, moving north to Bricquebec, France, and at that point began to work as an operational unit, taking its position in the line of advance with the First Army on its left flank and the XII Corps on the right flank. The 1139th Engineer Combat Group was assigned to the Third Army and attached for operations to the XX Corps in direct support of the 7th Armored Division. (When a unit is "assigned", it becomes a permanent part of the headquarters to which it is assigned.
When a unit is "attached", it comes under the administrative and operational control of the unit to which it is attached.) It was at that point that their executive officer, Lt. Col. George H. Walker, outlined their responsibilities: as part of General Patton's army, the mission of the 1139th Engineer Combat Group was to travel directly behind the armor and remove mines, build bridges, fight as infantry and perform other engineering activities that allowed the rapid advancement of the XX Corps. (The XX Corps played a measurable role in Patton's dash across France in August and early September 1944, earning the nickname "Ghost Corps" for the speed of its advance; starting in Bricquebec, France, it traveled 600 miles in its first 30 days.) The XX Corps armored divisions typically traveled many miles ahead of the main body of the army, which resulted in many enemy positions being overrun before the alarm could be given. The 1139th Engineer Combat Group arrived in the vicinity of Saint-Jean-sur-Erve, France, on August 10, 1944, and was joined by its first battalions: the 135th, the 179th, and the 206th Engineer Combat Battalions. The initial activity of the Group was to support the 80th Infantry Division in its thrust north against the enemy near Sainte-Suzanne, France. On August 13, 1944, while en route during this advance, the Group removed landmines from roads and shoulders, and the first of the 1139th's bridges were constructed: two 60-foot Bailey bridges built at St. Mars and St. George, France, on the sites of two bridges that had been blown up by the enemy. (A Bailey bridge is a temporary truss bridge, quickly assembled on land from prefabricated, standardized parts and then launched across an opening.) Two days later they constructed two Treadway M2 bridges across the Huisne River, at La Ferté-Bernard and at Nogent-le-Rotrou, France.
(A Treadway bridge is a floating bridge, using pontoons to support a continuous deck for pedestrian and vehicle travel.) In his European campaign, General Eisenhower recognized the Treadway bridge as an essential piece of equipment: easy to transport, quickly installed, and capable of sustaining heavy military loads. On August 17, 1944, the 1139th met stiff resistance while trying to enter Chartres, France, requiring the 1139th Engineer Combat Group to take on its infantry role. They engaged in firefight operations, clearing out machine-gun and anti-tank gun nests in Bonville, France, and suffering 6 dead and 12 wounded. They constructed two short Treadway bridges at Luce, France, west of Chartres, and removed twelve large bombs from bridges in Chartres. By the next day they had been joined by the 7th Armored Division, and the city was finally liberated. The Group then moved north to the vicinity of Treon, France, in support of the 7th Armored Division. There they were tasked with clearing the roads of mines, removing booby-trapped vehicles from roads, de-booby-trapping the railroad station at Dreux, France, and filling in road craters along the routes of the advance. They continued to move rapidly forward, and the Seine River was reached on August 23, 1944, where they bivouacked in an open field near Melun, France. At Sainte Sauveur, France, the 1139th Engineer Combat Group supported the 7th Armored Division in its initial crossing of the Seine River at Tilly, France, by manning assault boats, by establishing a beachhead and constructing a 504-foot Treadway bridge while under direct fire, and by constructing a 204-foot footbridge across the canal at Melun, France. The construction of the footbridge was carried on under heavy small-arms and 88mm fire. An 88mm shell blew out 50 feet of the bridge during construction, which had to be replaced while under intense enemy fire.
Over the next few days, the Seine was bridged in three more places by the 991st and the 994th Engineer Treadway Bridge Companies, the 537th Engineer Light Ponton Company, and the 135th, 179th, and 206th Engineer Combat Battalions. These crossings consisted of multiple-span continuous Bailey bridges: a 210-foot Bailey at Vulaines-sur-Seine, France, and a 420-foot Bailey at Champagne, France. Most of the construction was carried out at night, and took place under enemy fire, resulting in several lives lost. When the bridges were completed, they were guarded and maintained while under considerable fire. Arriving at Montmiral, France, on August 28, 1944, the 1139th Engineer Combat Group underwent a leadership change, with executive officer Lt. Col. Walker leaving to become the commanding officer of the 1103rd Engineer Combat Group and Lt. Col. Robert Eininger replacing him. The next major objective of the Third Army was Verdun, France, and the Moselle Valley, which offered a natural gap in Germany's frontier line. Early in the morning of September 2, 1944, the XX Corps arrived 5 miles west of Verdun. Within a matter of hours, the city fell to the armored division, and by September 4, 1944, the 509th Engineer Light Ponton Company had built a 200-foot Bailey bridge over the Meuse River, and a bridgehead was established on the eastern side of the river. At this point, the Third Army's armored divisions had outpaced the Red Ball gasoline supply column, and although emergency supply convoys were rushed south, the Germans had time to reinforce their positions at Metz, France. The American bridgehead had to be withdrawn from across the Meuse. While awaiting gasoline resupply, the 1139th Engineer Combat Group completed a considerable amount of repair on the railroads in the vicinity.
Repairs were made by filling in bomb craters in the yards east of Verdun, and by rehabilitation of the track and roadbed from Conflans to Verdun, France, including construction of a 280-foot single-track bridge at Conflans. On September 15, 1944, when adequate supplies had finally reached the Third Army, they moved on toward Thiaucourt, France. The 135th Engineer Combat Battalion, working with the 180th Engineer Heavy Ponton Battalion, constructed a 260-foot heavy ponton bridge at Pagny, France. Because of a shortage of engineer troops, it was necessary to form a provisional platoon from the Group Headquarters Company to assist in the construction. Then, working with the 509th Engineer Light Ponton Company, they constructed two 80-foot Bailey bridges across the Moselle River. The bridge-building units worked under constant heavy mortar and artillery fire at these crossings, but were aided by the 161st Chemical Smoke Generating Company laying down a heavy smoke screen from hundreds of smoke generators and pots. The cost in lives was high, but it helped pave the way towards taking the heavily defended city of Metz. Despite the pressure of the continued rapid advance of the Group, morale among the troops remained at a high level. During the month of October 1944, the 1139th Engineer Combat Group grew into one of the largest tactical engineer groups in the United States Army. Over 5,000 troops were under its command, including the 5th, 83rd, 90th, and 95th Infantry Divisions, the 179th and 206th Engineer Combat Battalions, the 991st Engineer Treadway Company, the 509th Engineer Light Ponton Company, and the 623rd Engineer Light Equipment Company. This concentration of men and equipment was in preparation for supporting the crossing of the Moselle River by the 10th Armored Division and the 90th and 95th Infantry Divisions.
In preparation for this, the troops were instructed in crossing techniques using assault boats, and the Group experimented with building a rapid bridge-launching device constructed on a tank frame. (While functional, this vehicle was found to be so heavy that it was not practical to move for any real distance.) The month of November 1944 was of extreme importance to the XX Corps, because the crossing of the Moselle River and advancement eastward depended upon the success of the 1139th Engineer Combat Group in bridging the river. Eight bridges were constructed across the Moselle River by the 1139th Engineer Combat Group during the month; two Treadway M2s of 350 and 396 feet in length, one 630-foot Treadway M1, two Floating Baileys of 530 and 440 feet in length, one Bailey DS, and two Heavy Pontoon bridges of 730 and 675 feet in length. These crossings were unusually difficult because the Moselle River was in a state of 30-year flood, and the enemy was continually making the bridging difficult with heavy artillery, machine gun and small arms fire. The enemy fire at Cattenom, France, was so intense that the 440-foot Floating Bailey Bridge was eventually lost. In most of these crossings the engineers had to first resort to ferrying operations in order for the infantry to clear a beachhead on the eastern shore so that the bridges could be constructed. Numerous troop losses occurred during these crossings. After the successful crossing of the Moselle River by the 90th and 95th Infantry Divisions and the 10th Armored Division, studies of aerial photographs revealed the need for all the engineers in the Group to help overcome the obstacles left by the enemy: extensive antitank ditches, craters, pill boxes, blown bridges, and numerous mine fields (one minefield alone was estimated to contain 10,000 mines). 
The burden of this work was so great that the 135th Engineer Combat Battalion was reattached to the 1139th Engineer Combat Group for the purpose of taking over the work in the rear of the division, including filling craters and ditches, destroying pill boxes, constructing fords, marking minefields, posting road signs, changing town signs, and removing the dead to Graves Registration. Because of the large number of bridges under construction during November, it was also necessary to utilize the 88th Engineer Heavy Ponton Bridge Battalion to haul Bailey bridges from Army Supply Point No. 8 to Thionville, France, and to maintain the Group Bridge Dump there. In December 1944, while planning the crossing of the Saar River, the 1139th Engineer Combat Group was stationed at Niedaltdorf, France. On December 6, 1944, the 90th Infantry Division, located near Wallerfangen, France, was ready to make the assault crossing. The 20th Engineer Combat Battalion began ferrying operations in assault and storm boats. They ferried across two battalions and part of a third battalion of the 358th Infantry Regiment, while the 179th Engineer Combat Battalion ferried the 1st and 2nd Battalions of the 357th Infantry Regiment. Following these operations, the storm boats continued to evacuate wounded and to deliver supplies. Although all operations on the river were carried out under continuous heavy artillery and small arms fire day and night, there was no letup in the steady flow of materials and supplies by the storm boats. However, the incoming fire was so heavy that power boats could not be brought down to the river for launching. 
After several unsuccessful attempts, a ferry was constructed on December 8, 1944, for ferrying vehicles, anti-tank guns and tanks across the river at Wallerfangen, with additional ferries at Pachten-Dillingen and Rehlingen-Siersburg, which made it possible to ferry all the vehicles and tanks of the 90th Infantry as rapidly as they were needed to secure a beachhead on the eastern shore. This was important, since a floating bridge could not be constructed: sections were shot away as rapidly as they were placed into the water. The Saar across from Dillingen, Germany, continued to prove difficult to cross because of heavy fire from a large number of occupied German pill boxes on the eastern side of the river, where the Siegfried Line had been most heavily constructed. As a result, a bridgehead was not expanded there to any great depth, although ferrying operations continued until December 20, 1944. These activities were interrupted as Germany made its last major offensive campaign: the Battle of the Bulge (also known as the Ardennes Counteroffensive). When General Patton learned of the attack, he quickly repositioned six full divisions of his army from their locations along the Saar River to launch a counterattack north to relieve the U.S. 101st Airborne Division, which had been trapped at Bastogne. To replace the repositioned defensive divisions along the Saar River, the 90th Infantry Division withdrew from their eastern beachhead at Dillingen to join the 95th Infantry Division in defending the western side of the river. The 1139th Engineer Combat Group supported both divisions in constructing defensive barriers consisting of mine fields, roadblocks, and bridges prepared for demolition, as well as guarding many of these deterrents. 
During January 1945, the 1139th Engineer Combat Group continued to support defensive operations on the west bank of the Saar River, consisting of barrier zone and roadwork maintenance, installation of an anti-mine boom at Maimühle, Germany, roadway snow-plowing and sanding, guarding displaced German citizens at Niedaltdorf, removing mines and booby traps at Thionville, and patrolling the banks of the Moselle River. By February, the Germans were in full retreat from the Ardennes. During the first three weeks of February 1945, the 1139th Engineer Combat Group was engaged in miscellaneous engineer activities in support of the 94th Infantry Division and the 3rd Cavalry Regiment. Because of sudden thaws supplemented by heavy rains, the Moselle River was swollen to flood heights again. As a result, great ice floes were swept down the river with such force that it necessitated the removal of all floating bridges. The salvaged materials were removed and placed in the storage dumps, and as soon as possible the bridges were reconstructed to allow traffic over the Moselle again. On February 20, 1945, the 1139th Engineer Combat Group was given the mission of assisting the 94th Infantry Division in its assault crossing of the Saar River. Under heavy enemy fire coming from strategically located pill boxes on the east bank, three successful assault crossings were made between February 21 and 23 at Taben, Serrig, and Ayl, Germany. The first bridge crossings of the Saar River under the direction of the 1139th occurred February 24 and 26, 1945. The 135th Engineer Combat Battalion installed a 240-foot Treadway bridge at Taben, Germany, and a 286-foot Heavy Pontoon Bridge at Saarburg, Germany, and the 993rd Engineer Treadway Bridge Company built a 320-foot Treadway Bridge at Serrig, allowing the 94th Infantry Division to cross the Saar River and establish a vital bridgehead at Serrig, Germany. 
This, in turn, allowed a 286-foot Heavy Pontoon bridge and a 336-foot Treadway bridge to be built across the Saar at Niederleuken and Schoden, Germany, so that both infantry and armor could cross and reduce the enemy on the eastern side of the Saar. On March 6, 1945, an additional 336-foot Treadway M2 C1 bridge was then established at Konz Karthaus, Germany, by the 179th and the 206th Engineer Combat Battalions. During the first part of March 1945, the 1139th Group was engaged in engineering work necessary for the rapid advancement of the XX Corps from the Saar to the Rhine Rivers. In addition to construction of many Bailey Bridges and small fixed bridges, they were tasked with supervising civilian labor on road work, preparing gun emplacements for artillery pieces, constructing camouflage screens and anti-mine booms at bridges, removing dead animals from minefields, hauling prisoners, demolishing buildings to widen roads, painting town signs, and destroying enemy tanks and artillery pieces. In quick succession the American Armies were converging on the enemy in the Rhine-Moselle triangle; the 4th Armored Division from the north, the 10th and 6th Armored Divisions from the south, and the 3rd and the 7th Armies from various locations. General Omar Bradley told General George S. Patton, whose U.S. Third Army had been fighting through the Palatinate, to "take the Rhine on the run." The Third Army did just that on the night of March 22, 1945, crossing the river with a hasty assault south of Mainz at Oppenheim by the 17th Armored Engineer Battalion on a 1,152-foot-long Treadway M2 Bridge. It was there on March 24, 1945, that Patton, showing his contempt for the enemy, made good on his pledge to "piss in the Rhine", which he did from a pontoon bridge in full view of his men and news cameras. Then on March 25, 1945, while in the area of Ober-Hilbersheim, Germany, the 1139th received orders to make a major river crossing over the Rhine River at Mainz, Germany. 
Extensive studies were made by the 1139th Engineer Combat Group to determine the best site for crossing. Study of all available maps and photos determined that the downtown area of Mainz, Germany, was the best site, even though the Rhine was wider at that point. Profiles were prepared for the entire area, showing which terrain areas had to be captured or neutralized by artillery fire, and hydraulic data and river bed materials were studied to understand the types and sizes of anchors necessary for the bridge. Finally, site reconnaissance at the river's edge by Group personnel determined the exact location for the bridge. This crossing required two assault operations led by the 80th Infantry Division with support from the 135th Engineer Combat Battalion. The first infantry crossing (using assault boats) incurred a high casualty rate, so the two battalions were then crossed by Naval units supporting the 1139th Engineer Combat Group. Once the infantry was ashore, ferrying operations were carried on by Navy-operated boats, using 6 LCMs and 6 Higgins boats. During the launching there was heavy artillery fire, 20mm and small arms fire on the site, knocking out a bulldozer and killing a naval officer. However, the 997th Engineer Treadway Company, with assistance from the 160th Engineer Combat Battalion, was able to construct a Treadway M2 Bridge across the Rhine 1,896 feet long; the longest tactical bridge built in the European Theater of Operations. This bridge served the entire XX Corps in crossing the Rhine and marked the end of the assault phase of the Rhine in the Third Army area. Once across the Rhine, they found the enemy largely routed and in full retreat. On March 31, 1945, the Third Army started its run towards Berlin from Mainz, Germany, traveling north-east through Frankfurt to Friedberg, to Alsfeld, then crossing the Fulda River with a 312-foot Treadway M2 Bridge, on to Reichensachsen, Germany. 
Then traveling eastward, they next bridged the Werra River and traveled to Langensalza, Germany, and after bridging the Helme River on April 11, 1945, they traveled along the Autobahn for two days, constructing Bailey Bridges where needed to repair sections of the highway that had been destroyed by Allied bombardment. While traveling the Autobahn, they came across an area where the median strip had been paved as a landing airstrip and painted green, and on it sat an unusual-looking plane with no propeller. It turned out to be one of the German jet planes that had been abandoned during the rapid retreat of the German Army. The 1139th stopped in Weimar, Germany, on April 13, 1945. Here they were situated in the most luxurious barracks that they had stayed in, formerly used by German anti-aircraft troops. However, it was also here that they saw the most horrible of all sights: the Buchenwald Concentration Camp. The Third Army had liberated the camp just two days earlier. At the urging of General Patton, many of the 1139th troops visited the camp and saw the walking skeletons, stacks of withered corpses, incompletely burned bodies, and pits of lime-covered bodies; all saw emaciated inmates from the camp walking along the road towards Weimar, and heard firsthand accounts of the horrors from a former captain of the Dutch army, who was taken in from the camp to act as an interpreter. On April 16, 1945, the Third Army continued moving eastward until it arrived just outside of Chemnitz, Germany. At this point, they were only 135 miles from Berlin. However, with the threat of a National Redoubt (potential reorganization and resistance by the remainder of the German army), Eisenhower ordered Patton's army southward toward Bavaria and Czechoslovakia, anticipating a last stand there by Nazi German forces. The XX Corps raced south-westward to Lichtenfels, Germany, bridging the Saale and the Weisse rivers, arriving April 18, 1945. 
Then turning southeast, they bridged the Main River and traveled to Pegnitz, Germany, and after spanning the Naab River with a 240-foot Treadway M2 Bridge arrived in Burglengenfeld, Germany, on April 25, 1945. On April 27, 1945, they traveled to Regensburg, where they stopped to develop plans for crossing the Danube River. Then continuing into southern Bavaria, they constructed a 516-foot Treadway M2 Bridge to cross the Danube, arriving in Straubing, Germany, on April 30, 1945. On May 2, 1945, they left Straubing and constructed a 288-foot Treadway M2 bridge to cross the Isar River at Landau an der Isar, Germany, and arrived in Ortenburg on May 3, 1945. On May 4, 1945, the 1139th constructed a 560-foot Heavy Pontoon Bridge over the Inn River at Passau, Germany, and the next day they left Ortenburg, Germany, and moved southeastward to Lambach, Austria, where contact was made with the Russian forces. This was the furthest they traveled in search of the National Redoubt by the German Army. It was here, on May 7, 1945, that the troops heard that Germany would finally surrender officially the next day. The same day General Walton H. Walker, commander of the XX Corps, received the unconditional surrender of Generaloberst Lothar Rendulic, commander of German Army Group South. The 1139th constructed a 516-foot Treadway M2 Bridge across the Isar River at Plattling, Germany, and on May 9, 1945, they constructed their final bridge, a 755-foot Heavy Pontoon Bridge over the Inn River at Schaerding, Austria. On May 10, 1945, the 1139th left Lambach, Austria, and began their return home, moving north-eastward to Ried im Innkreis, Austria. The 1139th as part of the Army of Occupation in Germany After VE Day, the 1139th was involved in assisting the German people rebuild their state as part of the U.S. Army of Occupation in Germany. 
Initially, the 1139th Engineer Combat Group expected to be broken up following VE Day, with some troops destined for the Pacific, while others moved to fill in personnel gaps in other units. However, General Eisenhower felt that rehabilitation of the Ruhr area in Germany was vital to rebuilding the German economy, because nowhere else in Europe were there coal deposits of that quality and ease of access. This coal was key to allowing Germany to rebuild its industries and in turn to feed its own population. Colonel Niles and the 1139th Engineer Combat Group were given this duty. The 1139th was assigned to get the Bavarian and Saar coal mines back in operation. This included clearing and preparing the mine sites for return to operations, and because there were few returning coal miners, the 1139th's Engineers were also required to provide training and education to the surviving populace on running the mines. The 1139th worked on the mine projects for many months. During these activities, the 1139th experienced a slow reduction in troop size as the GIs with 45 VE Day points or more were discharged and returned home to the States. The 1139th Engineer Combat Group returned to the United States at Boston, Massachusetts, on October 26, 1945, and was inactivated the next day at Camp Myles Standish, Massachusetts. Tables and Documents Table 1: Divisions and Regiments that were supported by the 1139th Engineer Combat Group Table 2: Battalions and Companies that were attached to the 1139th Engineer Combat Group Table 3: Bridges Constructed by the 1139th Engineer Combat Group Table 4: Awards received by soldiers of the 1139th Engineer Combat Group External links Liberty Ships built by the United States Maritime Commission in World War II (includes USAT #0646 George A. 
Custer) History of Doddington Park, UK 1946-1960 Patton's 3rd Army, advance to Moselle Valley, at Verdun during WW2, Sept 2 1944 Further reading Notes and References 20 Combat Explosives engineers Military units and formations established in 1943
1139th Engineer Combat Group
https://en.wikipedia.org/wiki/Gliese%20436
Gliese 436 is a red dwarf located in the zodiac constellation of Leo. It has an apparent visual magnitude of 10.67, which is much too faint to be seen with the naked eye. However, it can be viewed with even a modest telescope. In 2004, the existence of an extrasolar planet, Gliese 436 b, was verified as orbiting the star. This planet was later discovered to transit its host star. Nomenclature The designation Gliese 436 comes from the Gliese Catalogue of Nearby Stars. This was the 436th star listed in the first edition of the catalogue. In August 2022, this planetary system was included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from the United States, were announced in June 2023. Gliese 436 is named Noquisi and its planet is named Awohali, after the Cherokee words for "star" and "eagle". Properties Gliese 436 is an M2.5V star, which means it is a red dwarf. Stellar models give both an estimated mass and size of about 43% that of the Sun. The same model predicts that the outer atmosphere has an effective temperature of 3,480 K, giving it the orange-red hue of an M-type star. Small stars such as this generate energy at a low rate, giving it only 2.5% of the Sun's luminosity. Gliese 436 is older than the Sun by several billion years, and its abundance of heavy elements (those with masses greater than helium-4) is less than half that of the Sun. The projected rotation velocity is 1.0 km/s, and the chromosphere has a low level of magnetic activity. Gliese 436 is a member of the "old-disk population", with velocity components in the galactic coordinate system of U=+44, V=−20 and W=+20 km/s. Planetary system The star is orbited by one known planet, designated Gliese 436 b. The planet has an orbital period of 2.6 Earth days and transits the star as viewed from Earth. 
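The quoted stellar mass and orbital period pin down how close-in the planet must be. Kepler's third law in solar units (a in AU, P in years, M in solar masses: a³/P² = M) gives roughly 0.028 AU; this cross-check is an illustration, not a figure taken from the article:

```python
# Kepler's third law in solar units: a^3 / P^2 = M_star, so
# a = (M_star * P^2)^(1/3) with a in AU, P in years, M in solar masses.
# Using the article's values: M_star = 0.43 M_sun, P = 2.6 days.
def semimajor_axis_au(period_days, stellar_mass_msun):
    period_years = period_days / 365.25
    return (stellar_mass_msun * period_years ** 2) ** (1.0 / 3.0)

a = semimajor_axis_au(2.6, 0.43)   # roughly 0.028 AU, a very tight orbit
```

At about 0.028 AU the planet orbits some thirteen times closer to its star than Mercury does to the Sun, which is consistent with its "hot Neptune" classification.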
It has a mass of 22.2 Earth masses and is roughly 55,000 km in diameter, giving it a mass and radius similar to the ice giant planets Uranus and Neptune in the Solar System. In general, Doppler spectroscopy measurements do not measure the true mass of the planet, but instead measure the product m sin i, where m is the true mass and i is the inclination of the orbit (the angle between the line-of-sight and the normal to the planet's orbital plane), a quantity that is generally unknown. However, for Gliese 436 b, the transits enable the determination of the inclination, as they show that the planet's orbital plane is very nearly in the line of sight (i.e. that the inclination is close to 90 degrees). Hence the mass quoted is the actual mass. The planet is thought to be largely composed of hot ices with an outer envelope of hydrogen and helium, and is termed a "hot Neptune". GJ 436 b's orbit is likely misaligned with its star's rotation. In addition, the planet's orbit is eccentric. Because tidal forces would tend to circularise the orbit of the planet on short timescales, this suggested that Gliese 436 b is being perturbed by an additional planet orbiting the star. Claims of additional planets In 2008, a second planet, designated "Gliese 436 c", was claimed to have been discovered, with an orbital period of 5.2 days and an orbital semimajor axis of 0.045 AU. The planet was thought to have a mass of roughly 5 Earth masses and a radius about 1.5 times that of Earth. Due to its size, the planet was thought to be a rocky, terrestrial planet. Its discovery was announced by Spanish scientists in April 2008, based on analysis of its influence on the orbit of Gliese 436 b. Further analysis showed that the transit length of the inner planet is not changing, a situation which rules out most possible configurations for this system. Also, if it did orbit at these parameters, the system would be the only "unstable" orbit on UA's Extrasolar Planet Interactions chart. 
The existence of this "Gliese 436 c" was thus regarded as unlikely, and the discovery was eventually retracted at the Transiting Planets conference in Boston, 2008. Despite the retraction, studies concluded that the possibility that there is an additional planet orbiting Gliese 436 remained plausible. With the aid of an unnoticed transit automatically recorded at NMSU on January 11, 2005, and observations by amateur astronomers, it has been suggested that there is a trend of increasing inclination of the orbit of Gliese 436 b, though this trend remains unconfirmed. This trend is compatible with a perturbation by a planet of less than 12 Earth masses on an orbit within about 0.08 AU of the star. In July 2012, NASA announced that astronomers at the University of Central Florida, using the Spitzer Space Telescope, strongly believed they had observed a second planet. This candidate planet was given the preliminary designation UCF-1.01, after the University of Central Florida. It was measured to have a radius of around two thirds that of Earth and, assuming an Earth-like density of 5.5 g/cm3, was estimated to have a mass of 0.3 times that of Earth and a surface gravity of around two thirds that of Earth. It was thought to orbit at 0.0185 AU from the star, every 1.3659 days. The astronomers also believed they had found some evidence for an additional planet candidate, UCF-1.02, which is of a similar size, though with only one detected transit its orbital period is unknown. Follow-up observations with the Hubble Space Telescope, as well as a reanalysis of the Spitzer Space Telescope data, were unable to confirm these planets. See also List of stars in Leo References External links Leo (constellation) M-type main-sequence stars 057087 0436 905 Planetary transit variables Planetary systems with one confirmed planet J11421096+2642251 Noquisi
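Two quantitative points from the planetary-system discussion above can be cross-checked with a short sketch (an illustration of mine, not from the article; the 86.8° inclination below is an assumed, hypothetical value used only to show that the sin i correction is tiny for transiting planets):

```python
import math

# (1) Doppler spectroscopy yields only the minimum mass m*sin(i); a
# transit pins the inclination i near 90 degrees, so the true mass is
# barely larger than the measured product.
def true_mass(m_sin_i, inclination_deg):
    return m_sin_i / math.sin(math.radians(inclination_deg))

correction = true_mass(22.2, 86.8) / 22.2   # only a ~0.2% correction

# (2) UCF-1.01: at a fixed, Earth-like density, mass scales as r^3 and
# surface gravity as m / r^2, i.e. linearly with r. A radius of two
# thirds of Earth's therefore gives (2/3)^3 ~ 0.30 Earth masses and
# about two thirds of Earth's surface gravity -- the figures quoted.
r = 2.0 / 3.0
mass_ratio = r ** 3
gravity_ratio = mass_ratio / r ** 2
```

Both checks reproduce the article's numbers from first principles, which is why the "mass quoted is the actual mass" for a transiting planet and why the UCF-1.01 estimates follow directly from its measured radius.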
Gliese 436
https://en.wikipedia.org/wiki/Compiler%20Description%20Language
Compiler Description Language (CDL) is a programming language based on affix grammars. It is very similar to Backus–Naur form (BNF) notation. It was designed for the development of compilers. It is very limited in its capabilities and control flow, and intentionally so. The benefits of these limitations are twofold. On the one hand, they make possible the sophisticated data and control flow analysis used by the CDL2 optimizers resulting in extremely efficient code. The other benefit is that they foster a highly verbose naming convention. This, in turn, leads to programs that are, to a great extent, self-documenting. The language looks a bit like Prolog (this is not surprising since both languages arose at about the same time out of work on affix grammars). However, as opposed to Prolog, control flow in CDL is deterministically based on success/failure, i.e., no other alternatives are tried when the current one succeeds. This idea is also used in parsing expression grammars. CDL3 is the third version of the CDL language, significantly different from the previous two versions. Design The original version, designed by Cornelis H. A. Koster at the University of Nijmegen, which emerged in 1971, had a rather unusual concept: it had no core. A typical programming language source is translated to machine instructions or canned sequences of those instructions. Those represent the core, the most basic abstractions that the given language supports. Such primitives can be the additions of numbers, copying variables to each other, and so on. CDL1 lacks such a core. It is the responsibility of the programmer to provide the primitive operations in a form that can then be turned into machine instructions by means of an assembler or a compiler for a traditional language. The CDL1 language itself has no concept of primitives, no concept of data types apart from the machine word (an abstract unit of storage - not necessarily a real machine word as such). 
The evaluation rules are rather similar to the Backus–Naur form syntax descriptions; in fact, writing a parser for a language described in BNF is rather simple in CDL1. Basically, the language consists of rules. A rule can either succeed or fail. A rule consists of alternatives that are sequences of other rule invocations. A rule succeeds if any of its alternatives succeed; these are tried in sequence. An alternative succeeds if all of its rule invocations succeed. The language provides operators to create evaluation loops without recursion (although this is not strictly necessary in CDL2 as the optimizer achieves the same effect) and some shortcuts to increase the efficiency of the otherwise recursive evaluation, but the basic concept is as above. Apart from the obvious application in context-free grammar parsing, CDL is also well suited to control applications since a lot of control applications are essentially deeply nested if-then rules. Each CDL1 rule, while being evaluated, can act on data, which is of unspecified type. Ideally, the data should not be changed unless the rule is successful (no side effects on failure). This causes problems as although this rule may succeed, the rule invoking it might still fail, in which case the data change should not take effect. It is fairly easy (albeit memory intensive) to assure the above behavior if all the data is dynamically allocated on a stack. However, it is rather hard when there's static data, which is often the case. The CDL2 compiler is able to flag the possible violations thanks to the requirement that the direction of parameters (input, output, input-output) and the type of rules (can fail: test, predicate; cannot fail: function, action; can have a side effect: predicate, action; cannot have a side effect: test, function) must be specified by the programmer. As the rule evaluation is based on calling simpler and simpler rules, at the bottom, there should be some primitive rules that do the actual work. 
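The success/failure evaluation model described above can be sketched in a few lines of Python (a hypothetical illustration, not CDL itself; `make_rule`, `literal`, and the dict-based state are inventions of this sketch):

```python
# Hypothetical sketch of CDL's evaluation model (not CDL syntax).
# A rule is a list of alternatives; an alternative is a sequence of
# sub-rule invocations. A rule succeeds on the first alternative whose
# invocations all succeed; once one alternative succeeds, the rest are
# never tried (PEG-style ordered choice).

def make_rule(*alternatives):
    def rule(state):
        for alternative in alternatives:
            saved = dict(state)              # snapshot the state
            if all(invoke(state) for invoke in alternative):
                return True                  # first success wins
            state.clear()
            state.update(saved)              # no side effects on failure
        return False
    return rule

# Primitive rules ("macros") are supplied by the programmer, not by the
# language -- here they match one character of input:
def literal(ch):
    def prim(state):
        text, pos = state["text"], state["pos"]
        if pos < len(text) and text[pos] == ch:
            state["pos"] = pos + 1
            return True
        return False
    return prim

digit = make_rule(*[[literal(c)] for c in "0123456789"])
# digits: a digit followed by more digits, or a single digit
digits = make_rule([digit, lambda s: digits(s)], [digit])

state = {"text": "42x", "pos": 0}
ok = digits(state)                           # consumes "42", stops at "x"
```

The snapshot-and-rollback step mirrors the "no side effects on failure" ideal discussed above; as in parsing expression grammars, the first successful alternative wins and no others are tried.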
That is where CDL1 is very surprising: it does not have those primitives. You have to provide those rules yourself. If you need addition in your program, you have to create a rule with two input parameters and one output parameter, and the output is set to be the sum of the two inputs by your code. The CDL compiler uses your code as strings (there are conventions on how to refer to the input and output variables) and simply emits it as needed. If you describe your adding rule using assembly, you will need an assembler to translate the CDL compiler's output into machine code. If you describe all the primitive rules (macros in CDL terminology) in Pascal or C, then you need a Pascal or C compiler to run after the CDL compiler. This lack of core primitives can be very painful when you have to write a snippet of code, even for the simplest machine instruction operation. However, on the other hand, it gives you great flexibility in implementing esoteric, abstract primitives acting on exotic abstract objects (the 'machine word' in CDL is more like a 'unit of data storage', with no reference to the kind of data stored there). Additionally, large projects made use of carefully crafted libraries of primitives. These were then replicated for each target architecture and OS, allowing the production of highly efficient code for all. To get a feel for the language, here is a small code fragment adapted from the CDL2 manual:

ACTION quicksort + >from + >to -p -q:
   less+from+to, split+from+to+p+q, quicksort+from+q, quicksort+p+to;
   +.
ACTION split + >i + >j + p> + q> -m:
   make+p+i, make+q+j, add+i+j+m, halve+m,
   (again: move up+j+p+m, move down+i+q+m,
      (less+p+q, swap item+p+q, incr+p, decr+q, *again;
       less+p+m, swap item+p+m, incr+p;
       less+m+q, swap item+q+m, decr+q;
       +)).
FUNCTION move up + >j + >p> + >m: less+j+p; smaller item+m+p; incr+p, *.
FUNCTION move down + >i + >q> + >m: less+q+j; smaller item+q+m; decr+q, *.
TEST less+>a+>b:=a"<"b.
FUNCTION make+a>+>b:=a"="b.
FUNCTION add+>a+>b+sum>:=sum"="a"+"b.
FUNCTION halve+>a>:=a"/=2".
FUNCTION incr+>a>:=a"++".
FUNCTION decr+>a>:=a"--".
TEST smaller item+>i+>j:="items["i"]<items["j"]".
ACTION swap items+>i+>j-t:=t"=items["i"];items["i"]=items["j"];items["j"]="t.

The primitive operations are here defined in terms of Java (or C). This is not a complete program; we must define the Java array items elsewhere. CDL2, which appeared in 1976, kept the principles of CDL1 but made the language suitable for large projects. It introduced modules, enforced data-change-only-on-success, and extended the capabilities of the language somewhat. The optimizers in the CDL2 compiler, and especially in the CDL2 Laboratory (an IDE for CDL2), were world-class and not just for their time. One feature of the CDL2 Laboratory optimizer is almost unique: it can perform optimizations across compilation units, i.e., treating the entire program as a single compilation. CDL3 is a more recent language. It gave up the open-ended feature of the previous CDL versions, and it provides primitives for basic arithmetic and storage access. The extremely puritan syntax of the earlier CDL versions (the number of keywords and symbols both run in single digits) has also been relaxed. Some basic concepts are now expressed in syntax rather than explicit semantics. In addition, data types have been introduced to the language. Use The commercial mbp Cobol (a Cobol compiler for the PC) as well as the MProlog system (an industrial-strength Prolog implementation that ran on numerous architectures (IBM mainframe, VAX, PDP-11, Intel 8086, etc.) and operating systems (DOS/OS/CMS/BS2000, VMS/Unix, DOS/Windows/OS2)) were implemented in CDL2. The latter, in particular, is testimony to CDL2's portability. While most programs written with CDL have been compilers, there is at least one commercial GUI application that was developed and maintained in CDL. This application was a dental image acquisition application now owned by DEXIS. 
A dental office management system was also once developed in CDL. The software for the Mephisto III chess computer was written with CDL2. References Further reading A book about the CDL1 / CDL2 language The description of CDL3 Bedő Árpád: Programkészítési Módszerek; Közgazdasági és Jogi Könyvkiadó, 1979. Parser generators Compiler construction Formal languages Compiler theory
Compiler Description Language
https://en.wikipedia.org/wiki/F%28R%29%20gravity
{{DISPLAYTITLE:f(R) gravity}} In physics, f(R) is a type of modified gravity theory which generalizes Einstein's general relativity. f(R) gravity is actually a family of theories, each one defined by a different function, f, of the Ricci scalar, R. The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl (although φ was used rather than f for the name of the arbitrary function). It has become an active field of research following work by Alexei Starobinsky on cosmic inflation. A wide range of phenomena can be produced from this theory by adopting different functions; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems. Introduction In f(R) gravity, one seeks to generalize the Lagrangian of the Einstein–Hilbert action by replacing the Ricci scalar R with an arbitrary function of it, so that the gravitational Lagrangian density becomes f(R)√(−g), where g is the determinant of the metric tensor and f(R) is some function of the Ricci scalar. There are two ways to track the effect of changing R to f(R), i.e., to obtain the theory's field equations. The first is to use the metric formalism and the second is to use the Palatini formalism. While the two formalisms lead to the same field equations for General Relativity, i.e., when f(R) = R, the field equations may differ when f(R) ≠ R. Metric f(R) gravity Derivation of field equations In metric f(R) gravity, one arrives at the field equations by varying the action with respect to the metric and not treating the connection independently. For completeness we will now briefly mention the basic steps of the variation of the action. 
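For reference, the two actions being compared can be written out explicitly. This is the standard form (with the common convention κ = 8πGc⁻⁴ assumed here), shown for orientation:

```latex
% Einstein–Hilbert action of general relativity:
S_{\mathrm{EH}}[g] = \frac{1}{2\kappa} \int R \, \sqrt{-g} \,\mathrm{d}^4 x
% f(R) gravity replaces the Ricci scalar R by a function f(R):
S[g] = \frac{1}{2\kappa} \int f(R) \, \sqrt{-g} \,\mathrm{d}^4 x
```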
The main steps are the same as in the case of the variation of the Einstein–Hilbert action (see the article for more details) but there are also some important differences. The variation of the determinant is as always: δ√(−g) = −½ √(−g) g_μν δg^μν. The Ricci scalar is defined as R = g^μν R_μν. Therefore, its variation with respect to the inverse metric g^μν is given by δR = R_μν δg^μν + g^μν δR_μν = R_μν δg^μν + g^μν (∇_ρ δΓ^ρ_νμ − ∇_ν δΓ^ρ_ρμ). For the second step see the article about the Einstein–Hilbert action. Since δΓ^λ_μν is the difference of two connections, it should transform as a tensor. Therefore, it can be written as δΓ^λ_μν = ½ g^λσ (∇_μ δg_σν + ∇_ν δg_σμ − ∇_σ δg_μν). Substituting into the equation above: δR = R_μν δg^μν + g_μν □δg^μν − ∇_μ∇_ν δg^μν, where ∇_μ is the covariant derivative and □ = g^μν ∇_μ∇_ν is the d'Alembert operator. Denoting F(R) = ∂f/∂R, the variation in the action reads: δS = (1/2κ) ∫ √(−g) [F(R) R_μν δg^μν − ½ g_μν f(R) δg^μν + F(R) (g_μν □ − ∇_μ∇_ν) δg^μν] d⁴x. Doing integration by parts on the second and third terms (and neglecting the boundary contributions), we get: δS = (1/2κ) ∫ √(−g) δg^μν [F(R) R_μν − ½ f(R) g_μν + (g_μν □ − ∇_μ∇_ν) F(R)] d⁴x. By demanding that the action remains invariant under variations of the metric, δS = 0, one obtains the field equations: F(R) R_μν − ½ f(R) g_μν + [g_μν □ − ∇_μ∇_ν] F(R) = κ T_μν, where T_μν = −(2/√(−g)) δ(√(−g) L_m)/δg^μν is the energy–momentum tensor and L_m is the matter Lagrangian. Generalized Friedmann equations Assuming a Robertson–Walker metric with scale factor a(t), we can find the generalized Friedmann equations to be (in units where κ = 8πG = c = 1): 3FH² = ½(FR − f) − 3HḞ + ρ_m + ρ_rad and −2FḢ = F̈ − HḞ + ρ_m + (4/3)ρ_rad, where H = ȧ/a is the Hubble parameter, the dot is the derivative with respect to the cosmic time t, and ρ_m and ρ_rad represent the matter and radiation densities respectively; these satisfy the continuity equations ρ̇_m + 3Hρ_m = 0 and ρ̇_rad + 4Hρ_rad = 0. Modified gravitational constant An interesting feature of these theories is the fact that the gravitational constant is time and scale dependent. To see this, add a small scalar perturbation to the metric (in the Newtonian gauge): ds² = −(1 + 2Φ) dt² + a²(1 − 2Ψ) δ_ij dx^i dx^j, where Φ and Ψ are the Newtonian potentials, and use the field equations to first order. After some lengthy calculations, one can define a Poisson equation in the Fourier space and attribute the extra terms that appear on the right-hand side to an effective gravitational constant G_eff. 
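As a sanity check on the metric-formalism field equations above, setting f(R) = R (so that F(R) = 1 and the derivative terms acting on F vanish) recovers the Einstein equations:

```latex
F(R)R_{\mu\nu}-\tfrac{1}{2}f(R)g_{\mu\nu}
  +\left[g_{\mu\nu}\Box-\nabla_\mu\nabla_\nu\right]F(R)
  \;\xrightarrow{\;f(R)=R\;}\;
  R_{\mu\nu}-\tfrac{1}{2}R\,g_{\mu\nu}=\kappa T_{\mu\nu}.
```

This is the sense in which general relativity is the simplest member of the f(R) family.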
Doing so, we get the gravitational potential (valid on sub-horizon scales, k² ≫ a²H²): Φ ≈ −4πG_eff (a²/k²) ρ_m δ_m, where δ_m is a perturbation in the matter density, k is the Fourier scale and G_eff is: G_eff = (G/F)·(1 + 4Q)/(1 + 3Q), with Q ≡ (k²/a²)(dF/dR)/F. Massive gravitational waves This class of theories, when linearized, exhibits three polarization modes for the gravitational waves, of which two correspond to the massless graviton (helicities ±2) and the third (scalar) comes from the fact that, if we take into account a conformal transformation, the fourth order theory f(R) becomes general relativity plus a scalar field. To see this, identify the scalar degree of freedom with F(R) = df/dR and use the field equations above to obtain its equation of motion. Working to first order of perturbation theory, and after some tedious algebra, one can solve for the metric perturbation, which corresponds to the gravitational waves. A particular frequency component, for a wave propagating in the z-direction, may be written as a sum of three terms, where v_g(ω) = dω/dk is the group velocity of a wave packet centred on wave-vector k. The first two terms correspond to the usual transverse polarizations from general relativity, while the third corresponds to the new massive polarization mode of f(R) theories. This mode is a mixture of a massless transverse breathing mode (but not traceless) and a massive longitudinal scalar mode. The transverse and traceless modes (also known as tensor modes) propagate at the speed of light, but the massive scalar mode moves at a speed v_G < 1 (in units where c = 1); this mode is dispersive. However, in f(R) gravity in the metric formalism, for the model f(R) = αR² (also known as pure R² gravity), the third polarization mode is a pure breathing mode and propagates at the speed of light through spacetime. Equivalent formalism Under certain additional conditions we can simplify the analysis of f(R) theories by introducing an auxiliary field Φ. Assuming f″(R) ≠ 0 for all R, let V(Φ) be the Legendre transformation of f(R) so that Φ = f′(R) and R = V′(Φ). Then, one obtains the O'Hanlon (1972) action: S = (1/2κ) ∫ d⁴x √(−g) [ΦR − V(Φ)]. We have the Euler–Lagrange equations R = V′(Φ) and ΦR_μν − ½(ΦR − V(Φ))g_μν + (g_μν □ − ∇_μ∇_ν)Φ = κT_μν. Eliminating Φ, we obtain exactly the same equations as before. 
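The scale dependence of the effective gravitational constant can be illustrated numerically. The sketch below assumes the standard sub-horizon result from the f(R) perturbation literature, G_eff/G = (1 + 4Q)/(1 + 3Q) with Q = (k²/a²)·F′/F (the function name and parameter values are ours, chosen only for illustration):

```python
def g_eff_ratio(k, a, m):
    """Ratio G_eff/G on sub-horizon scales, assuming the standard
    f(R) expression with m = (dF/dR)/F (dimensions of length^2)."""
    q = m * (k / a) ** 2
    return (1 + 4 * q) / (1 + 3 * q)

# Small scales (large k): the ratio approaches 4/3;
# large scales (small k): gravity looks Newtonian, ratio -> 1.
print(g_eff_ratio(k=1e6, a=1.0, m=1e-10))   # close to 4/3
print(g_eff_ratio(k=1e-3, a=1.0, m=1e-10))  # close to 1
```

The 4/3 enhancement on small scales is what makes structure growth a useful observational discriminant for these models.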
However, the equations are only second order in the derivatives, instead of fourth order. We are currently working with the Jordan frame. By performing a conformal rescaling, g̃_μν = Φ g_μν, we transform to the Einstein frame, after integrating by parts. Defining a canonically normalized scalar φ̃ ∝ ln Φ and substituting, this is general relativity coupled to a real scalar field: using f(R) theories to describe the accelerating universe is practically equivalent to using quintessence. (At least, equivalent up to the caveat that we have not yet specified matter couplings, so (for example) f(R) gravity in which matter is minimally coupled to the metric (i.e., in Jordan frame) is equivalent to a quintessence theory in which the scalar field mediates a fifth force with gravitational strength.) Palatini f(R) gravity In Palatini f(R) gravity, one treats the metric and connection independently and varies the action with respect to each of them separately. The matter Lagrangian is assumed to be independent of the connection. These theories have been shown to be equivalent to Brans–Dicke theory with ω = −3/2. Due to the structure of the theory, however, Palatini f(R) theories appear to be in conflict with the Standard Model, may violate Solar system experiments, and seem to create unwanted singularities. Metric-affine f(R) gravity In metric-affine f(R) gravity, one generalizes things even further, treating both the metric and connection independently, and assuming the matter Lagrangian depends on the connection as well. Observational tests As there are many potential forms of f(R) gravity, it is difficult to find generic tests. Additionally, since deviations away from General Relativity can be made arbitrarily small in some cases, it is impossible to conclusively exclude some modifications. Some progress can be made, without assuming a concrete form for the function f(R), by Taylor expanding f(R) = a₀ + a₁R + ½a₂R² + ⋯. The first term is like the cosmological constant and must be small. The next coefficient a₁ can be set to one as in general relativity. 
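As a worked instance of the Legendre construction behind this scalar-tensor equivalence, take the Starobinsky form f(R) = R + R²/(6M²) (used here purely as an illustration):

```latex
\Phi = f'(R) = 1+\frac{R}{3M^2},\qquad
R = 3M^2(\Phi-1),\qquad
V(\Phi) = \Phi R - f(R) = \frac{3M^2}{2}\,(\Phi-1)^2 .
```

The auxiliary field thus acquires a quadratic potential, i.e. it behaves as a massive scalar (the "scalaron"), which after the conformal rescaling to the Einstein frame is the field that drives inflation in this model.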
For metric f(R) gravity (as opposed to Palatini or metric-affine f(R) gravity), the quadratic term is best constrained by fifth force measurements, since it leads to a Yukawa correction to the gravitational potential; current measurements place very tight upper bounds on the quadratic coefficient a₂. The parameterized post-Newtonian formalism is designed to be able to constrain generic modified theories of gravity. However, f(R) gravity shares many of the same values as General Relativity, and is therefore indistinguishable using these tests. In particular light deflection is unchanged, so f(R) gravity, like General Relativity, is entirely consistent with the bounds from Cassini tracking. Starobinsky gravity Starobinsky gravity has the form f(R) = R + R²/(6M²), where M has the dimensions of mass. Starobinsky gravity provides a mechanism for cosmic inflation, just after the Big Bang when R was still large. However, it is not suited to describe the present acceleration of the universe, since at present R is very small. This implies that the quadratic term in f(R) is negligible, i.e., one tends to f(R) = R, which is general relativity with a zero cosmological constant. Gogoi–Goswami gravity Gogoi–Goswami gravity (named after Dhruba Jyoti Gogoi and Umananda Dev Goswami) has the following form, where α and β are two dimensionless positive constants and Rc is a characteristic curvature constant. Tensorial generalization f(R) gravity as presented in the previous sections is a scalar modification of general relativity. More generally, we can have a coupling involving invariants of the Ricci tensor and the Weyl tensor. Special cases are f(R) gravity, conformal gravity, Gauss–Bonnet gravity and Lovelock gravity. Notice that with any nontrivial tensorial dependence, we typically have additional massive spin-2 degrees of freedom, in addition to the massless graviton and a massive scalar. An exception is Gauss–Bonnet gravity where the fourth order terms for the spin-2 components cancel out. 
See also Extended theories of gravity Gauss–Bonnet gravity Lovelock gravity References Further reading See Chapter 29 in the textbook "Particles and Quantum Fields" by Kleinert, H., World Scientific (Singapore, 2016) (also available online) Salvatore Capozziello and Mariafelicia De Laurentis, (2015) "F(R) theories of gravitation". Scholarpedia, doi:10.4249/scholarpedia.31422 Kalvakota, Vaibhav R., (2021) "Investigating f(R) gravity and cosmologies". Mathematical physics preprint archive, https://web.ma.utexas.edu/mp_arc/c/21/21-38.pdf External links f(R) gravity on arxiv.org Extended Theories of Gravity Theories of gravity
F(R) gravity
Physics
2,259
43,318,505
https://en.wikipedia.org/wiki/Gemtech
Gemtech (stylized as GEMTECH) is an American manufacturer of silencers (suppressors) for pistols, rifles, submachine guns, and personal defense weapons (PDWs). The company also produces ammunition and various accessories. Gemtech was founded in 1993 and is headquartered in Eagle, Idaho. GSL Technology of Jackson, Michigan designed and manufactured Gemtech Suppressors from 1994 to 2016. Suppressors Gemtech offers a variety of different silencers. Rimfire suppressors Outback: The Outback was a "thread-on" suppressor for handguns and rifles chambered in .22 lr. Quantum-200: The Quantum-200 was a .22 lr suppressor designed and sold in the 1990s. Vortex-2: The Vortex-2 was a .22 lr muzzle suppressor designed for handguns or rifles. LDES-2: The LDES-2 was a .22 lr handgun suppressor that is no longer in production. Oasis: The Oasis was a .22 lr integrally suppressed aluminum upper receiver for the Ruger MK II and Ruger MK III automatic pistols; it is no longer in production. Centerfire handgun suppressors GM-45: The GM-45 is a suppressor for pistols chambered in .45 ACP, 9mm Luger, 10mm Auto and .40 S&W. GM-9: The GM-9 is for 9mm and .300 Blackout (subsonic loads) firearms. It is rated for full-automatic fire. Tundra: The Tundra was a 9mm suppressor designed to be fired "dry." Blackside-45: The Blackside-45 was a suppressor designed for handguns chambered in .45 ACP and .40 S&W. SFN-57: The SFN-57 was designed for use with the FN Five-seven automatic pistol chambered in 5.7×28mm. It may also be utilized on firearms chambered in .17 HMR, .22 lr, and .22 WMR. Vortex-9: The Vortex-9 is a discontinued 9mm handgun suppressor. Submachine gun and PDW suppressors RAPTOR-II: The RAPTOR-II is a suppressor for 9mm submachine guns such as the Uzi and MP5. RAPTOR-40: The RAPTOR-40 is a suppressor designed for submachine guns chambered in .40 S&W and 10mm Auto, such as the UMP-40 or MP-5/10. 
VIPER: The VIPER is a suppressor designed for the MAC line of submachine guns (e.g., MAC-10, MAC-11) and will work with firearms chambered in .380 ACP, 9mm, and .45 ACP. The VIPER is smaller, lighter, and more efficient than original MAC suppressors. MOSSAD-II: The MOSSAD-II is a suppressor designed for the Uzi family of submachine guns. MK-9K: The MK-9K is a 9mm suppressor designed for use with open-bolt submachine guns. SAR57: The SAR57 is a 5.7mm suppressor designed for use with the SAR57. It is not recommended for use with the FN 5-7 pistol or P90 PDW. Centerfire rifle suppressors GMT-300BLK: The GMT-300BLK is a suppressor for .300 Blackout rifles and carbines. It may be utilized with both supersonic and subsonic ammunition. GMT-300WM: The GMT-300WM is for rifles chambered in .300 Winchester Magnum. GMT-556LE: The GMT-556LE is a 5.56mm rifle or carbine suppressor designed for law enforcement use. GMT-556QM: The GMT-556QM is a 5.56mm automatic rifle or carbine suppressor designed for military use. STORMFRONT: The STORMFRONT was a suppressor for .50 BMG rifles. TREK: The TREK is a 5.56mm "thread-on" suppressor for carbines and rifles. SANDSTORM: The SANDSTORM was a titanium 7.62×51mm NATO / .308 Winchester suppressor. QUICKSAND: The QUICKSAND was a light-weight, quick-detach version of the SANDSTORM. HVT-QM: The HVT-QM was a stainless steel, .30-caliber suppressor that uses Gemtech's Quickmount system. Ammunition In 2011, Gemtech developed their own line of subsonic .22 Long Rifle ammunition optimized for use with sound suppressors. Kel Whelan, working with Brett Olin of CCI Ammunition, came up with a round utilizing a unique 42-grain bullet travelling at 1,050 feet per second. Two years later, the company began producing .300 Blackout ammunition in both supersonic and subsonic loads. 
American Suppressor Association Gemtech was instrumental in forming the American Suppressor Association (ASA), a nonprofit trade association "to further the pursuit of education, public relations, legislation, hunting applications and military applications for the silencer industry". Purchase by Smith & Wesson In July 2017, it was announced that Gemtech had been purchased by the firearm manufacturer Smith & Wesson. See also Title II weapons References External links Firearm components Noise control
Gemtech
Technology
1,129
56,103,654
https://en.wikipedia.org/wiki/Opigolix
Opigolix (developmental code name ASP-1707) is a small-molecule, non-peptide, orally active gonadotropin-releasing hormone antagonist (GnRH antagonist) which was under development by Astellas Pharma for the treatment of endometriosis and rheumatoid arthritis. It was also under investigation for the treatment of prostate cancer. It reached phase II clinical trials for both endometriosis and rheumatoid arthritis prior to the discontinuation of its development in April 2018. See also Gonadotropin-releasing hormone receptor § Antagonists References External links Opigolix - AdisInsight Abandoned drugs Secondary alcohols Amidines Benzimidazoles Fluoroarenes GnRH antagonists Aromatic ketones
Opigolix
Chemistry
166
28,524
https://en.wikipedia.org/wiki/RNA%20splicing
RNA splicing is a process in molecular biology where a newly-made precursor messenger RNA (pre-mRNA) transcript is transformed into a mature messenger RNA (mRNA). It works by removing all the introns (non-coding regions of RNA) and splicing back together exons (coding regions). For nuclear-encoded genes, splicing occurs in the nucleus either during or immediately after transcription. For those eukaryotic genes that contain introns, splicing is usually needed to create an mRNA molecule that can be translated into protein. For many eukaryotic introns, splicing occurs in a series of reactions which are catalyzed by the spliceosome, a complex of small nuclear ribonucleoproteins (snRNPs). There exist self-splicing introns, that is, ribozymes that can catalyze their own excision from their parent RNA molecule. The process of transcription, splicing and translation is called gene expression, the central dogma of molecular biology. Splicing pathways Several methods of RNA splicing occur in nature; the type of splicing depends on the structure of the spliced intron and the catalysts required for splicing to occur. Spliceosomal complex Introns The word intron is derived from the terms intragenic region, and intracistron, that is, a segment of DNA that is located between two exons of a gene. The term intron refers to both the DNA sequence within a gene and the corresponding sequence in the unprocessed RNA transcript. As part of the RNA processing pathway, introns are removed by RNA splicing either shortly after or concurrent with transcription. Introns are found in the genes of most organisms and many viruses. They can be located in a wide range of genes, including those that generate proteins, ribosomal RNA (rRNA), and transfer RNA (tRNA). Within introns, a donor site (5' end of the intron), a branch site (near the 3' end of the intron) and an acceptor site (3' end of the intron) are required for splicing. 
The splice donor site includes an almost invariant sequence GU at the 5' end of the intron, within a larger, less highly conserved region. The splice acceptor site at the 3' end of the intron terminates the intron with an almost invariant AG sequence. Upstream (5'-ward) from the AG there is a region high in pyrimidines (C and U), or polypyrimidine tract. Further upstream from the polypyrimidine tract is the branchpoint, which includes an adenine nucleotide involved in lariat formation. The consensus sequence for an intron (in IUPAC nucleic acid notation) is: G-G-[cut]-G-U-R-A-G-U (donor site) ... intron sequence ... Y-U-R-A-C (branch sequence 20-50 nucleotides upstream of acceptor site) ... Y-rich-N-C-A-G-[cut]-G (acceptor site). However, it is noted that the specific sequence of intronic splicing elements and the number of nucleotides between the branchpoint and the nearest 3' acceptor site affect splice site selection. Also, point mutations in the underlying DNA or errors during transcription can activate a cryptic splice site in part of the transcript that usually is not spliced. This results in a mature messenger RNA with a missing section of an exon. In this way, a point mutation, which might otherwise affect only a single amino acid, can manifest as a deletion or truncation in the final protein. Formation and activity Splicing is catalyzed by the spliceosome, a large RNA-protein complex composed of five small nuclear ribonucleoproteins (snRNPs). Assembly and activity of the spliceosome occurs during transcription of the pre-mRNA. The RNA components of snRNPs interact with the intron and are involved in catalysis. Two types of spliceosomes have been identified (major and minor) which contain different snRNPs. The major spliceosome splices introns containing GU at the 5' splice site and AG at the 3' splice site. It is composed of the U1, U2, U4, U5, and U6 snRNPs and is active in the nucleus. 
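The GU...AG rule described above (GT...AG at the DNA level) can be sketched as a crude pattern scan. This is a deliberately simplified illustration with a made-up toy sequence: it models only the donor dinucleotide, a minimum intron interior, a pyrimidine-rich tract, and the acceptor dinucleotide, ignoring the branch point and all position-specific weighting that real splice-site prediction requires:

```python
import re

# Toy pattern: GT donor, >= 18 nt interior, a run of >= 6 pyrimidines
# (C/T), up to 3 nt, then the AG acceptor.  Illustrative only.
candidate_intron = re.compile(r"GT[ACGT]{18,}?[CT]{6,}[ACGT]{0,3}AG")

# Hypothetical sequence with one embedded candidate intron.
seq = "CCAG" + "GTAAGT" + "ACGT" * 5 + "CTAAC" + "TTTTTCCCTTTCAG" + "GTCC"
m = candidate_intron.search(seq)
if m:
    print("candidate intron at", m.start(), "-", m.end())
    print(seq[m.start():m.end()])
```

Real tools replace each element of this regex with a position weight matrix trained on known splice sites.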
In addition, a number of proteins including U2 small nuclear RNA auxiliary factor 1 (U2AF35), U2AF2 (U2AF65) and SF1 are required for the assembly of the spliceosome. The spliceosome forms different complexes during the splicing process: Complex E The U1 snRNP binds to the GU sequence at the 5' splice site of an intron; Splicing factor 1 binds to the intron branch point sequence; U2AF1 binds at the 3' splice site of the intron; U2AF2 binds to the polypyrimidine tract; Complex A (pre-spliceosome) The U2 snRNP displaces SF1 and binds to the branch point sequence and ATP is hydrolyzed; Complex B (pre-catalytic spliceosome) The U5/U4/U6 snRNP trimer binds, and the U5 snRNP binds exons at the 5' site, with U6 binding to U2; Complex B* The U1 snRNP is released, U5 shifts from exon to intron, and the U6 binds at the 5' splice site; Complex C (catalytic spliceosome) U4 is released, U6/U2 catalyzes transesterification, making the 5'-end of the intron ligate to the A on intron and form a lariat, U5 binds exon at 3' splice site, and the 5' site is cleaved, resulting in the formation of the lariat; Complex C* (post-spliceosomal complex) U2/U5/U6 remain bound to the lariat, and the 3' site is cleaved and exons are ligated using ATP hydrolysis. The spliced RNA is released, the lariat is released and degraded, and the snRNPs are recycled. This type of splicing is termed canonical splicing or termed the lariat pathway, which accounts for more than 99% of splicing. By contrast, when the intronic flanking sequences do not follow the GU-AG rule, noncanonical splicing is said to occur (see "minor spliceosome" below). The minor spliceosome is very similar to the major spliceosome, but instead it splices out rare introns with different splice site sequences. While the minor and major spliceosomes contain the same U5 snRNP, the minor spliceosome has different but functionally analogous snRNPs for U1, U2, U4, and U6, which are respectively called U11, U12, U4atac, and U6atac. 
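The assembly sequence described above can be summarized as a simple ordered data model. This is an illustrative sketch only (the stage contents follow the description above; auxiliary factors other than SF1 and the U2AF proteins are omitted):

```python
# Which snRNPs/factors are bound at each named spliceosomal complex,
# per the assembly sequence described above (simplified).
SPLICEOSOME_STAGES = [
    ("E",  {"U1", "SF1", "U2AF1", "U2AF2"}),  # initial recognition
    ("A",  {"U1", "U2", "U2AF1", "U2AF2"}),   # U2 displaces SF1 (ATP)
    ("B",  {"U1", "U2", "U4", "U5", "U6"}),   # U4/U5/U6 tri-snRNP joins
    ("B*", {"U2", "U4", "U5", "U6"}),         # U1 released
    ("C",  {"U2", "U5", "U6"}),               # U4 released; lariat forms
    ("C*", {"U2", "U5", "U6"}),               # exons ligated; lariat released
]

for name, bound in SPLICEOSOME_STAGES:
    print(f"complex {name}: {sorted(bound)}")
```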
Recursive splicing In most cases, splicing removes introns as single units from precursor mRNA transcripts. However, in some cases, especially in mRNAs with very long introns, splicing happens in steps, with part of an intron removed and then the remaining intron is spliced out in a following step. This has been found first in the Ultrabithorax (Ubx) gene of the fruit fly, Drosophila melanogaster, and a few other Drosophila genes, but cases in humans have been reported as well. Trans-splicing Trans-splicing is a form of splicing that removes introns or outrons, and joins two exons that are not within the same RNA transcript. Trans-splicing can occur between two different endogenous pre-mRNAs or between an endogenous and an exogenous (such as from viruses) or artificial RNAs. Self-splicing Self-splicing occurs for rare introns that form a ribozyme, performing the functions of the spliceosome by RNA alone. There are three kinds of self-splicing introns, Group I, Group II and Group III. Group I and II introns perform splicing similar to the spliceosome without requiring any protein. This similarity suggests that Group I and II introns may be evolutionarily related to the spliceosome. Self-splicing may also be very ancient, and may have existed in an RNA world present before protein. Two transesterifications characterize the mechanism in which group I introns are spliced: 3'OH of a free guanine nucleoside (or one located in the intron) or a nucleotide cofactor (GMP, GDP, GTP) attacks phosphate at the 5' splice site. 3'OH of the 5' exon becomes a nucleophile and the second transesterification results in the joining of the two exons. 
The mechanism in which group II introns are spliced (two transesterification reactions, like group I introns) is as follows: The 2'OH of a specific adenosine in the intron attacks the 5' splice site, thereby forming the lariat The 3'OH of the 5' exon triggers the second transesterification at the 3' splice site, thereby joining the exons together. tRNA splicing tRNA (also tRNA-like) splicing is another rare form of splicing that usually occurs in tRNA. The splicing reaction involves a different biochemistry than the spliceosomal and self-splicing pathways. In the yeast Saccharomyces cerevisiae, a yeast tRNA splicing endonuclease heterotetramer, composed of TSEN54, TSEN2, TSEN34, and TSEN15, cleaves pre-tRNA at two sites in the acceptor loop to form a 5'-half tRNA, terminating at a 2',3'-cyclic phosphodiester group, and a 3'-half tRNA, terminating at a 5'-hydroxyl group, along with a discarded intron. Yeast tRNA kinase then phosphorylates the 5'-hydroxyl group using adenosine triphosphate. Yeast tRNA cyclic phosphodiesterase cleaves the cyclic phosphodiester group to form a 2'-phosphorylated 3' end. Yeast tRNA ligase adds an adenosine monophosphate group to the 5' end of the 3'-half and joins the two halves together. NAD-dependent 2'-phosphotransferase then removes the 2'-phosphate group. Evolution Splicing occurs in all the kingdoms or domains of life; however, the extent and types of splicing can be very different between the major divisions. Eukaryotes splice many protein-coding messenger RNAs and some non-coding RNAs. Prokaryotes, on the other hand, splice rarely and mostly non-coding RNAs. Another important difference between these two groups of organisms is that prokaryotes completely lack the spliceosomal pathway. Because spliceosomal introns are not conserved in all species, there is debate concerning when spliceosomal splicing evolved. Two models have been proposed: the intron late and intron early models (see intron evolution). 
Biochemical mechanism Spliceosomal splicing and self-splicing involve a two-step biochemical process. Both steps involve transesterification reactions that occur between RNA nucleotides. tRNA splicing, however, is an exception and does not occur by transesterification. Spliceosomal and self-splicing transesterification reactions occur via two sequential transesterification reactions. First, the 2'OH of a specific branchpoint nucleotide within the intron, defined during spliceosome assembly, performs a nucleophilic attack on the first nucleotide of the intron at the 5' splice site, forming the lariat intermediate. Second, the 3'OH of the released 5' exon then performs a nucleophilic attack at the first nucleotide following the last nucleotide of the intron at the 3' splice site, thus joining the exons and releasing the intron lariat. Alternative splicing In many cases, the splicing process can create a range of unique proteins by varying the exon composition of the same mRNA. This phenomenon is then called alternative splicing. Alternative splicing can occur in many ways. Exons can be extended or skipped, or introns can be retained. It is estimated that 95% of transcripts from multiexon genes undergo alternative splicing, some instances of which occur in a tissue-specific manner and/or under specific cellular conditions. Development of high throughput mRNA sequencing technology can help quantify the expression levels of alternatively spliced isoforms. Differential expression levels across tissues and cell lineages allowed computational approaches to be developed to predict the functions of these isoforms. Given this complexity, alternative splicing of pre-mRNA transcripts is regulated by a system of trans-acting proteins (activators and repressors) that bind to cis-acting sites or "elements" (enhancers and silencers) on the pre-mRNA transcript itself. These proteins and their respective binding elements promote or reduce the usage of a particular splice site. 
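The combinatorial potential of exon skipping can be illustrated with a toy model. The sketch below (exon names are hypothetical) enumerates the transcripts obtainable if two internal cassette exons could each be independently included or skipped; real splicing is regulated by the activator/repressor system described above, not free combinatorics:

```python
from itertools import product

# Two cassette exons between constitutive terminal exons E1 and E4.
cassettes = ["E2", "E3"]

isoforms = []
for keep in product([True, False], repeat=len(cassettes)):
    middle = [exon for exon, k in zip(cassettes, keep) if k]
    isoforms.append("-".join(["E1"] + middle + ["E4"]))

print(isoforms)  # 2^2 = 4 possible isoforms from 2 cassette exons
```

With n independent cassette exons the count grows as 2^n, which is why even modest genes can, in principle, encode many isoforms.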
The binding specificity comes from the sequence and structure of the cis-elements, e.g. in HIV-1 there are many donor and acceptor splice sites. Among the various splice sites, ssA7, which is a 3' acceptor site, folds into three stem loop structures, i.e. Intronic splicing silencer (ISS), Exonic splicing enhancer (ESE), and Exonic splicing silencer (ESSE3). The solution structure of the intronic splicing silencer and its interaction with the host protein hnRNPA1 gives insight into specific recognition. However, adding to the complexity of alternative splicing, it is noted that the effects of regulatory factors are often position-dependent. For example, a splicing factor that serves as a splicing activator when bound to an intronic enhancer element may serve as a repressor when bound to its splicing element in the context of an exon, and vice versa. In addition to the position-dependent effects of enhancer and silencer elements, the location of the branchpoint (i.e., distance upstream of the nearest 3' acceptor site) also affects splicing. The secondary structure of the pre-mRNA transcript also plays a role in regulating splicing, such as by bringing together splicing elements or by masking a sequence that would otherwise serve as a binding element for a splicing factor. Role of nuclear speckles in RNA splicing The location of pre-mRNA splicing is throughout the nucleus, and once mature mRNA is generated, it is transported to the cytoplasm for translation. In both plant and animal cells, nuclear speckles are regions with high concentrations of splicing factors. These speckles were once thought to be mere storage centers for splicing factors. However, it is now understood that nuclear speckles help concentrate splicing factors near genes that are physically located close to them. Genes located farther from speckles can still be transcribed and spliced, but their splicing is less efficient compared to those closer to speckles. 
Cells can vary the genomic positions of genes relative to nuclear speckles as a mechanism to modulate the expression of genes via splicing. Role of splicing/alternative splicing in HIV-integration The process of splicing is linked with HIV integration, as HIV-1 targets highly spliced genes. Splicing response to DNA damage DNA damage affects splicing factors by altering their post-translational modification, localization, expression and activity. Furthermore, DNA damage often disrupts splicing by interfering with its coupling to transcription. DNA damage also has an impact on the splicing and alternative splicing of genes intimately associated with DNA repair. For instance, DNA damage modulates the alternative splicing of the DNA repair genes Brca1 and Ercc1. Experimental manipulation of splicing Splicing events can be experimentally altered by binding steric-blocking antisense oligos, such as Morpholinos or Peptide nucleic acids, to snRNP binding sites, to the branchpoint nucleotide that closes the lariat, or to splice-regulatory element binding sites. The use of antisense oligonucleotides to modulate splicing has shown great promise as a therapeutic strategy for a variety of genetic diseases caused by splicing defects. Recent studies have shown that RNA splicing can be regulated by a variety of epigenetic modifications, including DNA methylation and histone modifications. Splicing errors and variation It has been suggested that one third of all disease-causing mutations impact on splicing. Common errors include: Mutation of a splice site resulting in loss of function of that site. Results in exposure of a premature stop codon, loss of an exon, or inclusion of an intron. Mutation of a splice site reducing specificity. May result in variation in the splice location, causing insertion or deletion of amino acids, or most likely, a disruption of the reading frame. 
Displacement of a splice site, leading to inclusion or exclusion of more RNA than expected, resulting in longer or shorter exons. Although many splicing errors are safeguarded by a cellular quality control mechanism termed nonsense-mediated mRNA decay (NMD), a number of splicing-related diseases also exist, as suggested above. Allelic differences in mRNA splicing are likely to be a common and important source of phenotypic diversity at the molecular level, in addition to their contribution to genetic disease susceptibility. Indeed, genome-wide studies in humans have identified a range of genes that are subject to allele-specific splicing. In plants, variation for flooding stress tolerance correlated with stress-induced alternative splicing of transcripts associated with gluconeogenesis and other processes. Protein splicing In addition to RNA, proteins can undergo splicing. Although the biomolecular mechanisms are different, the principle is the same: parts of the protein, called inteins instead of introns, are removed. The remaining parts, called exteins instead of exons, are fused together. Protein splicing has been observed in a wide range of organisms, including bacteria, archaea, plants, yeast and humans. Splicing and genesis of circRNAs The existence of backsplicing was first suggested in 2012. This backsplicing explains the genesis of circular RNAs resulting from the exact junction between the 3' boundary of an exon with the 5' boundary of an exon located upstream. In these exonic circular RNAs, the junction is a classic 3'–5' link. The exclusion of intronic sequences during splicing can also leave traces, in the form of circular RNAs. In some cases, the intronic lariat is not destroyed and the circular part remains as a lariat-derived circRNA. In these lariat-derived circular RNAs, the junction is a 2'–5' link. 
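The reading-frame errors listed above can be illustrated with toy sequences (hypothetical exons, not real genes): skipping an exon whose length is not a multiple of three shifts the downstream frame, here exposing a premature in-frame stop codon:

```python
def codons(seq):
    # Split an mRNA-like string into complete codons, reading frame 0.
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

# Hypothetical toy exons.  exon2 is 7 nt long, so losing it shifts the
# downstream reading frame by 7 mod 3 = 1 base.
exon1, exon2, exon3 = "ATGGCTAAA", "GATCCTA", "GGTTGACCC"

normal  = codons(exon1 + exon2 + exon3)
skipped = codons(exon1 + exon3)
print(normal)
print(skipped)  # the frame shift exposes an in-frame TGA stop codon
```

This is the mechanism by which a single splice-site mutation can truncate the final protein rather than change one amino acid.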
See also cDNA DBASS3/5 Exon junction complex mRNA capping Polyadenylation Post-transcriptional modification RNA editing SWAP protein domain, a splicing regulator References External links Virtual Cell Animation Collection: mRNA Splicing Gene expression Spliceosome
RNA splicing
Chemistry,Biology
4,246
59,862,590
https://en.wikipedia.org/wiki/Aerodynamics%20Research%20Institute
The Aerodynamische Versuchsanstalt (AVA) in Göttingen was one of the four predecessor organizations of the "German Research and Experimental Institute for Aerospace", founded in 1969 and renamed the German Aerospace Center (DLR) in 1997. History The AVA was created in 1919 from the "Modellversuchsanstalt für Aerodynamik der Motorluftschiff-Studiengesellschaft", founded in Göttingen in 1907 by Ludwig Prandtl. In its founding years, it was still concerned with developing the "best" form of airship. In 1908, the first wind tunnel in Göttingen was built for tests on aviation models. In 1915, the Modellversuchsanstalt was transferred to the Kaiser Wilhelm Society (KWG, founded in 1911); under the direction of Ludwig Prandtl it became the "Aerodynamic Research Institute of the Kaiser Wilhelm Society" (AVA) in 1919 and was converted in 1925 into the "Kaiser Wilhelm Institute for Flow Research, linked to the Aerodynamic Research Institute". Ludwig Prandtl headed the institute until 1937; his successor was Albert Betz. In the same year a spin-off from the institute took place under the name "Aerodynamische Versuchsanstalt Göttingen e. V. in the Kaiser Wilhelm Society", in which the Reich Ministry of Aviation was involved. The part remaining after the spin-off was continued under the name "Kaiser Wilhelm Institute for Flow Research", from 1948 the Max Planck Institute for Fluid Research (today the Max Planck Institute for Dynamics and Self-Organization). The AVA was confiscated by the British in 1945 (until 1948), re-opened in 1953 as the "Aerodynamic Research Institute Göttingen e. V." in the Max Planck Society, and fully integrated in 1956 as the "Aerodynamic Research Institute in the Max Planck Society". In 1969 it was spun off from the Max Planck Society and merged into the newly founded "German Research and Experimental Institute for Aerospace e. V.". Bibliography Aerodynamische Versuchsanstalt Göttingen e.V. 
in der Kaiser-Wilhelm-/Max-Planck-Gesellschaft (CPTS), in: Eckart Henning, Marion Kazemi: Handbuch zur Institutsgeschichte der Kaiser-Wilhelm-/ Max-Planck-Gesellschaft zur Förderung der Wissenschaften 1911–2011 – Daten und Quellen, Berlin 2016, 2 subvolumes, volume 1: Institute und Forschungsstellen A–L (online, PDF, 75 MB), pages 27–45 (Chronologie des Instituts) Sources Historie des DLR – Gesellschaft von Freunden des DLR e. V. 100 Jahre DLR – Homepage des DLR Archiv zur Geschichte der Max-Planck-Gesellschaft Former research institutes Aerodynamics Research institutes in Göttingen Max Planck Institutes Aviation history of Germany 1907 establishments in Germany 1969 disestablishments in West Germany History of Lower Saxony
Aerodynamics Research Institute
Chemistry,Engineering
613
6,158,953
https://en.wikipedia.org/wiki/Modal%20%CE%BC-calculus
In theoretical computer science, the modal μ-calculus (Lμ, sometimes just μ-calculus, although this can have a more general meaning) is an extension of propositional modal logic (with many modalities) by adding the least fixed point operator μ and the greatest fixed point operator ν, thus a fixed-point logic. The (propositional, modal) μ-calculus originates with Dana Scott and Jaco de Bakker, and was further developed by Dexter Kozen into the version most used nowadays. It is used to describe properties of labelled transition systems and for verifying these properties. Many temporal logics can be encoded in the μ-calculus, including CTL* and its widely used fragments—linear temporal logic and computational tree logic. An algebraic view is to see it as an algebra of monotonic functions over a complete lattice, with operators consisting of functional composition plus the least and greatest fixed point operators; from this viewpoint, the modal μ-calculus is over the lattice of a power set algebra. The game semantics of μ-calculus is related to two-player games with perfect information, particularly infinite parity games. Syntax Let P (propositions) and A (actions) be two finite sets of symbols, and let Var be a countably infinite set of variables. The set of formulas of (propositional, modal) μ-calculus is defined as follows: each proposition and each variable is a formula; if φ and ψ are formulas, then φ ∧ ψ is a formula; if φ is a formula, then ¬φ is a formula; if φ is a formula and a is an action, then [a]φ is a formula (pronounced either: a box φ, or: after a necessarily φ); if φ is a formula and Z a variable, then νZ.φ is a formula, provided that every free occurrence of Z in φ occurs positively, i.e. within the scope of an even number of negations. (The notions of free and bound variables are as usual, where ν is the only binding operator.)
Given the above definitions, we can enrich the syntax with: φ ∨ ψ, meaning ¬(¬φ ∧ ¬ψ); ⟨a⟩φ (pronounced either: a diamond φ, or: after a possibly φ), meaning ¬[a]¬φ; μZ.φ means ¬νZ.¬φ[Z := ¬Z], where φ[Z := ¬Z] means substituting ¬Z for Z in all free occurrences of Z in φ. The first two formulas are the familiar ones from the classical propositional calculus and respectively the minimal multimodal logic K. The notation μZ.φ (and its dual νZ.φ) is inspired from the lambda calculus; the intent is to denote the least (and respectively greatest) fixed point of the expression φ where the "minimization" (and respectively "maximization") is in the variable Z, much like in lambda calculus λZ.φ is a function with formula φ in bound variable Z; see the denotational semantics below for details. Denotational semantics Models of (propositional) μ-calculus are given as labelled transition systems (S, R, V) where: S is a set of states; R maps each label a to a binary relation R_a on S; V : P → 2^S maps each proposition p to the set of states where the proposition is true. Given a labelled transition system (S, R, V) and an interpretation i of the variables of the μ-calculus, ⟦·⟧_i is the function defined by the following rules: ⟦p⟧_i = V(p); ⟦Z⟧_i = i(Z); ⟦φ ∧ ψ⟧_i = ⟦φ⟧_i ∩ ⟦ψ⟧_i; ⟦¬φ⟧_i = S ∖ ⟦φ⟧_i; ⟦[a]φ⟧_i = {s ∈ S : for all t, s R_a t implies t ∈ ⟦φ⟧_i}; ⟦νZ.φ⟧_i = ⋃{T ⊆ S : T ⊆ ⟦φ⟧_{i[Z := T]}}, where i[Z := T] maps Z to T while preserving the mappings of i everywhere else. By duality, the interpretation of the other basic formulas is: ⟦φ ∨ ψ⟧_i = ⟦φ⟧_i ∪ ⟦ψ⟧_i; ⟦⟨a⟩φ⟧_i = {s ∈ S : there exists t with s R_a t and t ∈ ⟦φ⟧_i}; ⟦μZ.φ⟧_i = ⋂{T ⊆ S : ⟦φ⟧_{i[Z := T]} ⊆ T}. Less formally, this means that, for a given transition system (S, R, V): p holds in the set of states V(p); φ ∧ ψ holds in every state where φ and ψ both hold; ¬φ holds in every state where φ does not hold; [a]φ holds in a state s if every a-transition leading out of s leads to a state where φ holds; ⟨a⟩φ holds in a state s if there exists an a-transition leading out of s that leads to a state where φ holds; νZ.φ holds in any state in any set T such that, when the variable Z is set to T, then φ holds for all of T. (From the Knaster–Tarski theorem it follows that ⟦νZ.φ⟧_i is the greatest fixed point of T ↦ ⟦φ⟧_{i[Z := T]}, and ⟦μZ.φ⟧_i its least fixed point.) The interpretations of [a]φ and ⟨a⟩φ are in fact the "classical" ones from dynamic logic.
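The fixed-point semantics above can be computed directly by Knaster–Tarski iteration on a finite transition system. The following is an illustrative sketch, not a standard tool; the names `diamond`, `box`, and `lfp`, and the example system, are assumptions made for this example:

```python
# Sketch: computing mu-calculus semantics by fixed-point iteration
# on a tiny labelled transition system (states, labelled arcs).

def diamond(action, phi_states, transitions):
    """States with an `action`-transition into phi_states: semantics of <a>phi."""
    return {s for (s, a, t) in transitions if a == action and t in phi_states}

def box(action, phi_states, transitions, states):
    """States all of whose `action`-successors are in phi_states: [a]phi."""
    return {s for s in states
            if all(t in phi_states
                   for (s2, a, t) in transitions if s2 == s and a == action)}

def lfp(f):
    """Least fixed point of a monotone f on the powerset lattice: iterate from the empty set."""
    current = set()
    while True:
        nxt = f(current)
        if nxt == current:
            return current
        current = nxt

# States 0 -a-> 1 -a-> 2, with proposition p true only in state 2.
states = {0, 1, 2}
transitions = {(0, 'a', 1), (1, 'a', 2)}
p = {2}

# mu Z. (p or <a>Z): "some a-path reaches a p-state".
reach_p = lfp(lambda Z: p | diamond('a', Z, transitions))
print(reach_p)  # here every state can reach the p-state along a-transitions
```

Greatest fixed points are computed dually, by iterating downward from the full state set.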
Additionally, the least fixed point operator μ can be interpreted as liveness ("something good eventually happens") and the greatest fixed point operator ν as safety ("nothing bad ever happens") in Leslie Lamport's informal classification. Examples νZ.(φ ∧ [a]Z) is interpreted as "φ is true along every a-path". The idea is that "φ is true along every a-path" can be defined axiomatically as that (weakest) sentence Z which implies φ and which remains true after processing any a-label. μZ.(φ ∨ ⟨a⟩Z) is interpreted as the existence of a path along a-transitions to a state where φ holds. The property of a state being deadlock-free, meaning no path from that state reaches a dead end, is expressed by the formula νZ.((⋁_{a∈A} ⟨a⟩⊤) ∧ (⋀_{a∈A} [a]Z)). Decision problems Satisfiability of a modal μ-calculus formula is EXPTIME-complete. Like for linear temporal logic, the model checking, satisfiability and validity problems of linear modal μ-calculus are PSPACE-complete. See also Finite model theory Alternation-free modal μ-calculus Notes References , chapter 7, Model checking for the μ-calculus, pp. 97–108 , chapter 5, Modal μ-calculus, pp. 103–128 , chapter 6, The μ-calculus over powerset algebras, pp. 141–153 is about the modal μ-calculus Yde Venema (2008) Lectures on the Modal μ-calculus; was presented at The 18th European Summer School in Logic, Language and Information External links Sophie Pinchinat, Logic, Automata & Games video recording of a lecture at ANU Logic Summer School '09 μ-calculus Model checking
Modal μ-calculus
Mathematics
1,114
5,310,452
https://en.wikipedia.org/wiki/Etretinate
Etretinate (trade name Tegison) is a medication developed by Hoffmann–La Roche that was approved by the FDA in 1986 to treat severe psoriasis. It is a second-generation retinoid. It was subsequently removed from the Canadian market in 1996 and the United States market in 1998 due to the high risk of birth defects. It remains on the market in Japan as Tigason. Pharmacology Etretinate is a highly lipophilic, aromatic retinoid. It is stored in and released from adipose tissue, so its effects can continue long after dosage stops. It is detectable in the plasma for up to three years following therapy. Etretinate has a low therapeutic index and a long elimination half-life (t1/2) of 120 days, which make dosing difficult. Etretinate has been replaced by acitretin, the free acid (without the ethyl ester). While acitretin is less lipophilic and has a half-life of only 50 hours, it is partly metabolized to etretinate in the body, so that it is still a long-acting teratogen and pregnancy is prohibited for two years after therapy. Precautions Etretinate is a teratogen, and may cause birth defects long after use. Therefore, birth control is advised during therapy, and for at least three years after therapy has stopped. Etretinate should be avoided in children, as it may interfere with bone growth. If a patient has ever taken etretinate, they are not eligible to donate blood in the United States, the United Kingdom, Australia, Ireland or Québec, due to the risk of birth defects. In Japan, people may not donate blood for two years after ceasing to use the medication. Side effects Side effects are those typical of hypervitaminosis A, most commonly: bone or joint pain and stiffness; in long-term treatment, diffuse idiopathic skeletal hyperostosis; muscular or abdominal cramps; dry, burning, or itching eyelids; and unusual bruising. History The drug was approved by the FDA in 1986 to treat severe psoriasis.
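The pharmacokinetic contrast described under Pharmacology (a 120-day half-life for etretinate versus roughly 50 hours for acitretin) can be illustrated with simple first-order elimination arithmetic; this is an illustrative sketch, not from the article:

```python
# Back-of-the-envelope: fraction of drug remaining under first-order
# (exponential) elimination, given the half-life.

def fraction_remaining(days, half_life_days):
    """Fraction of the dose left in the body after `days`."""
    return 0.5 ** (days / half_life_days)

# Etretinate, t1/2 of 120 days: after 3 years roughly 0.2% remains,
# consistent with it being detectable in plasma for years.
after_3_years = fraction_remaining(3 * 365, 120)
print(f"{after_3_years:.4f}")

# Acitretin, t1/2 of about 50 hours: about 1% remains after two weeks.
after_2_weeks = fraction_remaining(14, 50 / 24)
print(f"{after_2_weeks:.2e}")
```

The calculation only covers single-compartment elimination; it ignores the back-conversion of acitretin to etretinate that motivates the longer post-therapy precautions.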
It was subsequently removed from the Canadian market in 1996 and the United States market in 1998 due to the high risk of birth defects. In Japan, the drug remains on market branded Tigason. See also Isotretinoin List of withdrawn drugs References Retinoids Withdrawn drugs Ethyl esters Phenol ethers Polyenes
Etretinate
Chemistry
503
171,513
https://en.wikipedia.org/wiki/Jetboat
A jetboat is a boat propelled by a jet of water ejected from the back of the craft. Unlike a powerboat or motorboat that uses an external propeller in the water below or behind the boat, a jetboat draws the water from under the boat through an intake and into a pump-jet inside the boat, before expelling it through a nozzle at the stern. The modern jetboat was developed by New Zealand engineer Sir William Hamilton in the mid-1950s. His goal was a boat to run up the fast-flowing rivers of New Zealand that were too shallow for propellers. Previous attempts at waterjet propulsion had very short lifetimes, generally due to the inefficient design of the units and the fact that they offered few advantages over conventional propellers. Unlike these previous waterjet developments, such as Campini's and the Hanley Hydrojet, Hamilton had a specific need for a propulsion system to operate in very shallow water, and the waterjet proved to be the ideal solution. The popularity of the jet unit and jetboat increased rapidly. It was found the waterjet was better than propellers for a wide range of vessel types, and waterjets are now used widely for many high-speed vessels including passenger ferries, rescue craft, patrol boats and offshore supply vessels. Jetboats are highly manoeuvrable, and many can be reversed from full speed and brought to a stop within little more than their own length, in a manoeuvre known as a "crash stop". The well known Hamilton turn or "jet spin" is a high-speed manoeuvre where the boat's engine throttle is cut, the steering is turned sharply and the throttle opened again, causing the boat to spin quickly around with a large spray of water. There is no engineering limit to the size of jetboats, though whether they are useful depends on the type of application. 
Classic prop-drives are generally more efficient and economical at low speeds, up to about , but as boat speed increases, the extra hull resistance generated by struts, rudders, shafts and so on means waterjets are more efficient up to . For very large propellers turning at slow speeds, such as in tugboats, the equivalent size waterjet would be too big to be practical. The vast majority of waterjet units are therefore installed in high-speed vessels and in situations where shallow draught, maneuverability, and load flexibility are the main concerns. The biggest jet-driven vessels are found in military use and the high-speed passenger and car ferry industry. South Africa's s (approximately long) and the long United States Littoral Combat Ship are among the biggest jet-propelled vessels . Even these vessels are capable of performing "crash stops". Function A conventional screw propeller works within the body of water below a boat hull, effectively "screwing" through the water to drive a vessel forward by generating a difference in pressure between the forward and rear surfaces of the propeller blades and by accelerating a mass of water rearward. By contrast, a waterjet unit delivers a high-pressure "push" from the stern of a vessel by accelerating a volume of water as it passes through a specialised pump mounted above the waterline inside the boat hull. Both methods yield thrust due to Newton's third law— every action has an equal and opposite reaction. In a jetboat, the waterjet draws water from beneath the hull, where it passes through a series of impellers and stators – known as stages – which increase the velocity of the waterflow. Most modern jets are single-stage, while older waterjets may have as many as three stages. The tail section of the waterjet unit extends out through the transom of the hull, above the waterline. This jetstream exits the unit through a small nozzle at high velocity to push the boat forward. 
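The momentum argument above (thrust comes from accelerating a mass of water rearward) can be made concrete with a back-of-the-envelope calculation; the function and figures below are illustrative assumptions, not from the article:

```python
# Ideal momentum-flux thrust of a waterjet:
# F = mass_flow_rate * (jet_velocity - intake_velocity).

RHO_WATER = 1000.0  # density of fresh water, kg/m^3

def waterjet_thrust(flow_m3_per_s, jet_velocity, boat_velocity):
    """Ideal thrust in newtons: water is accelerated from boat speed to jet speed."""
    mass_flow = RHO_WATER * flow_m3_per_s  # kg of water per second
    return mass_flow * (jet_velocity - boat_velocity)

# Example figures: pumping 0.75 m^3/s, ejecting at 25 m/s while the
# boat moves at 10 m/s through the water.
thrust = waterjet_thrust(0.75, 25.0, 10.0)
print(f"{thrust:.0f} N")  # 11250 N
```

This also shows why the outlet height does not change the thrust: the momentum balance depends only on the mass flow and the velocity change, not on where the jetstream exits.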
Steering is accomplished by moving this nozzle to either side, or less commonly, by small gates on either side that deflect the jetstream. Because the jetboat relies on the flow of water through the nozzle for control, it is not possible to steer a conventional jetboat without the engine running. Unlike conventional propeller systems where the rotation of the propeller is reversed to provide astern movement, a waterjet will continue to pump normally while a deflector is lowered into the jetstream after it leaves the outlet nozzle. This deflector redirects thrust forces forward to provide reverse thrust. Most highly developed reverse deflectors redirect the jetstream down and to each side to prevent recirculation of the water through the jet again, which may cause aeration problems, or increase reverse thrust. Steering is still available with the reverse deflector lowered so the vessel will have full maneuverability. With the deflector lowered about halfway into the jetstream, forward and reverse thrust are equal so the boat maintains a fixed position, but steering is still available to allow the vessel to turn on the spot – something which is impossible with a conventional single propeller. Unlike hydrofoils, which use underwater wings or struts to lift the vessel clear of the water, standard jetboats use a conventional planing hull to ride across the water surface, with only the rear portion of the hull displacing any water. With the majority of the hull clear of the water, there is reduced drag, greatly enhancing speed and maneuverability, so jetboats are normally operated at planing speed. At slower speeds with less water pumping through the jet unit, the jetboat will lose some steering control and maneuverability and will quickly slow down as the hull comes off its planing state and hull resistance is increased. 
However, loss of steering control at low speeds can be overcome by lowering the reverse deflector slightly and increasing throttle – so an operator may increase thrust and thus control without increasing boat speed itself. A conventional river-going jetboat will have a shallow-angled (but not flat-bottomed) hull to improve its high-speed cornering control and stability, while also allowing it to traverse very shallow water. At speed, jetboats can be safely operated in less than 7.5 cm (3 inches) of water. One of the most significant breakthroughs in the development of the waterjet was to change the design so it expelled the jetstream above the water line, contrary to many people's intuition. Hamilton discovered early on that this greatly improved performance, compared to expelling below the waterline, while also providing a "clean" hull bottom (i.e. nothing protruding below the hull line) to allow the boat to skim through very shallow water. It makes no difference to the amount of thrust generated whether the outlet is above or below the waterline, but having it above the waterline reduces hull resistance and draught. Hamilton's first waterjet design had the outlet below the hull and actually in front of the inlet. This meant that disturbed water was probably entering the jet unit and reducing its performance, which was likely the main reason why the change to above the waterline made such a difference. Applications Applications for jetboats include most activities where conventional propellers are also used, but in particular passenger ferry services, coastguard and police patrol, navy and military, adventure tourism (which is becoming increasingly popular around the globe), pilot boat operations, surf rescue, farming, fishing, exploration, pleasure boating, and other water activities where motor boats are used.
Jetboats can also be raced for sport, both on rivers (World Champion Jet Boat Marathon held in Mexico, Canada, USA and New Zealand) and on specially designed racecourses known as sprint tracks. Recently there has been increasing use of jetboats in the form of rigid-hulled inflatable boats and as luxury yacht tenders. Many jetboats are small enough to be carried on a trailer and towed by car. As jetboats have no external rotating parts they are safer for swimmers and marine life, though they can be struck by the hull. The safety benefit itself can sometimes be reason enough to use this type of propulsion. In 1977, Sir Edmund Hillary led a jetboat expedition, titled "Ocean to Sky", from the mouth of the Ganges River to its source. One of the jetboats was sunk by a friend of Hillary. Drawbacks The fuel efficiency and performance of a jetboat can be affected by anything that disrupts the smooth flow of water through the jet unit. For example, a plastic bag sucked onto the jet unit's intake grill can have quite an adverse effect. Another disadvantage of jetboats appears to be that they are more sensitive to engine/jet unit mismatch, compared with the problem of engine/propeller mismatch in propeller-driven craft. If the jet-propulsion unit is not well-matched to the engine performance, excessive fuel consumption and poor performance can result. See also Personal water craft List of water sports Jet sprint boat racing References External links Hamilton waterjet history Jet boat origins and history Motorboats Marine propulsion New Zealand inventions
Jetboat
Engineering
1,829
16,130,126
https://en.wikipedia.org/wiki/New%20digraph%20reconstruction%20conjecture
The reconstruction conjecture of Stanisław Ulam is one of the best-known open problems in graph theory. Using the terminology of Frank Harary it can be stated as follows: If G and H are two graphs on at least three vertices and ƒ is a bijection from V(G) to V(H) such that G\{v} and H\{ƒ(v)} are isomorphic for all vertices v in V(G), then G and H are isomorphic. In 1964 Harary extended the reconstruction conjecture to directed graphs on at least five vertices as the so-called digraph reconstruction conjecture. Many results supporting the digraph reconstruction conjecture appeared between 1964 and 1976. However, this conjecture was proved to be false when P. K. Stockmeyer discovered several infinite families of counterexample pairs of digraphs (including tournaments) of arbitrarily large order. The falsity of the digraph reconstruction conjecture caused doubt about the reconstruction conjecture itself. Stockmeyer even observed that “perhaps the considerable effort being spent in attempts to prove the (reconstruction) conjecture should be balanced by more serious attempts to construct counterexamples.” In 1979, Ramachandran revived the digraph reconstruction conjecture in a slightly weaker form called the new digraph reconstruction conjecture. In a digraph, the number of arcs incident from (respectively, to) a vertex v is called the outdegree (indegree) of v and is denoted by od(v) (respectively, id(v)). The new digraph reconstruction conjecture may be stated as follows: If D and E are two digraphs and ƒ is a bijection from V(D) to V(E) such that D\{v} and E\{ƒ(v)} are isomorphic and (od(v), id(v)) = (od(ƒ(v)), id(ƒ(v))) for all vertices v in V(D), then D and E are isomorphic. The new digraph reconstruction conjecture reduces to the reconstruction conjecture in the undirected case, because if all the vertex-deleted subgraphs of two graphs are isomorphic, then the corresponding vertices must have the same degree. Thus, the new digraph reconstruction conjecture is stronger than the reconstruction conjecture, but weaker than the disproved digraph reconstruction conjecture.
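To make the objects in the conjecture concrete, the following sketch (illustrative, not from the article) computes the vertex-deleted subdigraphs and the degree pairs (od(v), id(v)) that the new digraph reconstruction conjecture adds to the hypothesis:

```python
# A digraph is represented as (vertices, arcs), with arcs as (tail, head) pairs.

def delete_vertex(vertices, arcs, v):
    """Subdigraph obtained by deleting v and every arc incident with it."""
    return (vertices - {v}, {(a, b) for (a, b) in arcs if v not in (a, b)})

def degree_pair(arcs, v):
    """(outdegree od(v), indegree id(v)) of vertex v."""
    od = sum(1 for (a, b) in arcs if a == v)
    indeg = sum(1 for (a, b) in arcs if b == v)
    return (od, indeg)

# A directed 3-cycle: 0 -> 1 -> 2 -> 0.
V = {0, 1, 2}
A = {(0, 1), (1, 2), (2, 0)}

deck = {v: delete_vertex(V, A, v) for v in V}    # the vertex-deleted subdigraphs
pairs = {v: degree_pair(A, v) for v in V}        # the extra degree information
print(deck[0], pairs[0])
```

Checking the conjecture for a pair of digraphs would additionally require testing isomorphism of the corresponding deck members, which this sketch omits.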
Several families of digraphs have been shown to satisfy the new digraph reconstruction conjecture and these include all the digraphs in the known counterexample pairs to the digraph reconstruction conjecture. Reductions All digraphs are N-reconstructible if all digraphs with 2-connected underlying graphs are N-reconstructible. All digraphs are N-reconstructible if and only if either of the following two classes of digraphs are N-reconstructible, where diam(D) and radius(D) are defined to be the diameter and radius of the underlying graph of D. Digraphs with diam(D) ≤ 2 or diam(D) = diam(Dc) = 3 Digraphs D with 2-connected underlying graphs and radius(D) ≤ 2 Present status As of 2024, no counterexample to the new digraph reconstruction conjecture is known. This conjecture is now also known as the degree associated reconstruction conjecture. References Conjectures Unsolved problems in graph theory Directed graphs
New digraph reconstruction conjecture
Mathematics
614
12,404,232
https://en.wikipedia.org/wiki/Isthmura%20naucampatepetl
Isthmura naucampatepetl, commonly known as the Cofre de Perote salamander, is a species of salamanders in the family Plethodontidae. It is endemic to the Sierra Madre Oriental in central Veracruz, Mexico, where it is known from the area between Cofre de Perote and Cerro Volcancillo, a satellite peak of Cofre de Perote. Etymology The specific name naucampatepetl is the Nahuatl name for Cofre de Perote. Description Adult males measure and females up to in snout–vent length (SVL). The tail is slender and shorter than SVL; it tapers gradually but has a blunt tip. The body is moderately robust. The head is prominent and the eyes are large and relatively protuberant. The snout is large and broadly rounded. The limbs are long and robust. The digits are well developed, and there is no appreciable webbing. The coloration is striking, with solid black background color and with bright pink to pinkish-cream dorsal spots: there is a pair of rounded spots on the back of the head, about the size of the eyeball in diameter, a small mid-dorsal spot in the neck, and a pair of large spots at the level of the forelimbs. These larger spots are followed by 11 pairs of small spots. Finally, there is a conspicuous U-shaped mark behind the hips, pointing backward. The venter is pale gray to dark gray. Habitat and conservation Its natural habitat is pine-oak forest at elevations of above sea level, with plenty of bunch grass. All specimens in the type series were found on roadside banks, under a surface layer of moist soil with a somewhat dry outer crust. This species is known from only six specimens. An individual was photographed in 2015, but surveys have failed to locate the species. Extensive logging, farming, and expanding human settlements have led to loss of much of the original habitat, and what remains is very degraded. The IUCN SSC Amphibian Specialist Group considers Isthmura naucampatepetl critically endangered.
References naucampatepetl EDGE species Endemic amphibians of Mexico Fauna of the Trans-Mexican Volcanic Belt Amphibians described in 2001 Taxa named by David B. Wake Taxonomy articles created by Polbot
Isthmura naucampatepetl
Biology
477
49,142,466
https://en.wikipedia.org/wiki/Spierings%20Kranen
Spierings Kranen is a Dutch manufacturer of large mobile cranes such as the SK 1265-AT6 "Mighty Tiny" model which can lift up to 10 tons up to 35 metres in height. The company was founded by Leo Spierings and his wife Tiny in 1987. References Crane manufacturers Construction equipment manufacturers of the Netherlands Dutch brands
Spierings Kranen
Engineering
73
17,303,474
https://en.wikipedia.org/wiki/Patterns%20of%20self-organization%20in%20ants
Ants are simple animals, and their behavioural repertoire is limited to somewhere between ten and forty elementary behaviours. This article attempts to explain the different patterns of self-organization in ants. Ants as complex systems Ant colonies are self-organized systems: complex collective behaviors arise as the product of interactions between many individuals each following a simple set of rules, not via top-down instruction from elite individuals or the queen. No one worker has universal knowledge of the colony's needs; individual workers react only to their local environment. Because of this, ants are a popular source of inspiration for design in software engineering, robotics, industrial design, and other fields involving many simple parts working together to perform complex tasks. The most popular current model of self-organization in ants and other social insects is the response threshold model. A threshold for a particular task is the amount of stimulus, such as a pheromone or interactions with other workers, necessary to cause the worker to perform the associated task. A higher threshold requires a stronger stimulus, and thus translates into less preference for performing a specific task. Different workers have different thresholds for different tasks, allowing certain workers to function as specialists that preferentially perform one or more tasks. Threshold levels can be affected by several factors: worker age, since workers frequently switch from within-nest work to outside-nest work with age; size, since larger workers often perform different tasks, such as defense or seed processing; caste; and health, since injuries can encourage young workers to switch to outside-nest work earlier; thresholds may also simply be randomly distributed. As demand for a task increases, so does the proportion of workers whose thresholds are met; as demand decreases, fewer workers' thresholds are met and fewer workers are allocated to that task.
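The response threshold model just described can be sketched in a few lines; the threshold values and the task below are illustrative assumptions, not measured data:

```python
# Response threshold model: a worker performs a task when the task's
# stimulus level meets or exceeds that worker's personal threshold,
# so a stronger stimulus recruits more workers.

def workers_recruited(thresholds, stimulus):
    """Indices of workers whose threshold is met by the current stimulus."""
    return [i for i, th in enumerate(thresholds) if stimulus >= th]

# Five workers with different thresholds for, say, brood care.
thresholds = [0.2, 0.4, 0.6, 0.8, 1.0]

print(len(workers_recruited(thresholds, 0.3)))  # weak stimulus: 1 worker responds
print(len(workers_recruited(thresholds, 0.9)))  # strong stimulus: 4 workers respond
```

In a fuller simulation the stimulus would also decay as recruited workers perform the task, closing the feedback loop that regulates allocation.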
In this way, simple individual rules allow for the regulation of work on a large scale in diverse settings. This system can also evolve in response to different environments and life history strategies, leading to the immense variation observed in ants. Bifurcation This is an instant transition of the whole system to a new stable pattern when a threshold is reached. Bifurcation is also known as multi-stability in which many stable states are possible. Examples of pattern types: Transition between disordered and ordered pattern Transition from an even use of many food sources to one source. Formation of branched nest galleries. Group preference of one exit by escaping ants. Chain formation of mutual leg grasping. Synchronization Oscillating patterns of activity in which individuals at different activity levels stimulate one another emerging from mutual activation. Examples of pattern types: Short scale rhythms arising from mechanical activation from physical contact. Long scale rhythms in which temporal changes in food needs and larvae stimulate changes in the reproductive cycle. Self-organized waves Traveling waves of chemical concentration or mechanical deformation. Examples of pattern types: Alarm waves propagated by physical contact. Rotating trails from spatial changes in food resources acting on trail laying activity. Self-organized criticality Self-organized criticality is an abrupt disturbance in a system resulting from a buildup of events without external stimuli. Examples of pattern types: Abrupt changes in feeding activity. Mechanical grasping of legs forming ant droplets. References Ants Behavioral ecology Hymenoptera ecology Superorganisms Myrmecology
Patterns of self-organization in ants
Biology
655
217,828
https://en.wikipedia.org/wiki/JXTA
JXTA (Juxtapose) was an open-source peer-to-peer protocol specification begun by Sun Microsystems in 2001. The JXTA protocols were defined as a set of XML messages which allow any device connected to a network to exchange messages and collaborate independently of the underlying network topology. As JXTA was based upon a set of open XML protocols, it could be implemented in any modern computer language. Implementations were developed for Java SE, C/C++, C# and Java ME. The C# version used the C++/C native bindings and was not a complete re-implementation in its own right. JXTA peers create a virtual overlay network which allows a peer to interact with other peers even when some of the peers and resources are behind firewalls and NATs or use different network transports. In addition, each resource is identified by a unique ID, a 160-bit SHA-1 URN in the Java binding, so that a peer can change its network location while keeping a constant identification number. Status In November 2010, Oracle officially announced its withdrawal from the JXTA projects. As of August 2011, the JXTA project had not been continued or otherwise announced to resume operations; no decision had been made on the assembly of its board, nor had Oracle answered a pending request to move the source code to Apache License version 2. Protocols in JXTA Peer Resolver Protocol Peer Information Protocol Rendezvous Protocol Peer Membership Protocol Pipe Binding Protocol Endpoint Routing Protocol Categories of peers JXTA defines two main categories of peers: edge peers and super-peers. The super-peers can be further divided into rendezvous and relay peers. Each peer has a well-defined role in the JXTA peer-to-peer model. The edge peers are usually defined as peers which have transient, low bandwidth network connectivity. They usually reside on the border of the Internet, hidden behind corporate firewalls or accessing the network through non-dedicated connections.
A Rendezvous peer is a special purpose peer which is in charge of coordinating the peers in the JXTA network and provides the necessary scope for message propagation. If the peers are located in different subnets then the network should have at least one Rendezvous peer. A Relay peer allows the peers which are behind firewalls or NAT systems to take part in the JXTA network. This is performed by using a protocol which can traverse the firewall, like HTTP, for example. Any peer in a JXTA network can be a rendezvous or relay, provided it has the necessary credentials or meets the network/storage/memory/CPU requirements. Advertisements An Advertisement is an XML document which describes any resource in a P2P network (peers, groups, pipes, services, etc.). The communication in JXTA can be thought of as the exchange of one or more advertisements through the network. Pipes Pipes are a virtual communication channel used by JXTA to exchange messages and data. Pipes are asynchronous, unreliable, and unidirectional. There are basically three types of pipes: Unicast; Secure Unicast; Propagate. Peer groups A peer group provides a scope for message propagation and a logical clustering of peers. In JXTA, every peer is a member of a default group, NetPeerGroup, but a given peer can be a member of many sub-groups at the same time. A peer may play different roles in different groups; it may act as an edge peer in one group, but a rendezvous in another. Each group should have at least one rendezvous peer and it is not possible to send messages between two groups. Rendezvous network The Rendezvous peers have an optimized routing mechanism which allows an efficient propagation of messages pushed by edge peers connected to them. This is achieved through the use of a loosely consistent network. Each Rendezvous peer maintains a Rendezvous Peer View (RPV), a list of known rendezvous peers ordered by the Peer ID.
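One hypothetical way to realize a DHT-style mapping over an ordered peer list like the RPV is to hash a key and reduce it modulo the list length, so that a publisher and a querier with the same view of the RPV pick the same rendezvous peer. This is an illustrative sketch, not JXTA's actual algorithm; the peer IDs and key are made up:

```python
# Deterministic key -> rendezvous-peer mapping over an ordered RPV list.
import hashlib

def choose_rendezvous(rpv, advertisement_key):
    """Map an advertisement key onto one peer of the ordered RPV list."""
    digest = hashlib.sha1(advertisement_key.encode()).digest()
    index = int.from_bytes(digest, "big") % len(rpv)
    return rpv[index]

rpv = ["peer-01", "peer-17", "peer-42", "peer-99"]  # hypothetical peer IDs

# Publisher and querier compute the same target for the same key.
publish_target = choose_rendezvous(rpv, "urn:jxta:some-advertisement")
lookup_target = choose_rendezvous(rpv, "urn:jxta:some-advertisement")
print(publish_target == lookup_target)  # True
```

The scheme only works while both sides agree on the RPV contents, which is why a loosely consistent RPV needs a fallback (such as walking the list) when the mapping misses.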
There is no mechanism to enforce the consistency of all RPVs across the JXTA network, so a given RPV can have a temporarily or permanently inconsistent view of the other rendezvous peers. As long as there is a low churn rate (that is, a stable network where peers don't join or leave too frequently), the RPV list of each peer will converge, as each rendezvous peer exchanges a random subset of its RPV with other rendezvous peers from time to time. When an edge peer publishes an Advertisement, the index of this advertisement is pushed to the rendezvous through a system called Shared Resource Distributed Index (SRDI). After that, the rendezvous applies a distributed hash table (DHT) function so that it can forward the index to another peer in the RPV list. For replication purposes, it will send this index to the neighbours of the chosen rendezvous peer in the RPV list. The lookup process requires the use of the same DHT function to discover the rendezvous peer which is in charge of storing that index. Once the rendezvous peer is reached, it will forward the query to the edge peer which published the advertisement, and this peer will get in touch with the peer which issued the query. If the DHT function cannot find a peer which is in charge of the advertisement, then the query will be forwarded up and down the RPV list until a match is found, the query is aborted, or it reaches the limits of the RPV list. This process is called a random walk. See also Peer-to-peer Rendezvous protocol Chimera References External links Official web site Java implementation of JXTA french site Overview of JXTA Sonatype Repo Distributed data storage Cross-platform free software Free network-related software Java platform Network protocols Sun Microsystems software
JXTA
Technology
https://en.wikipedia.org/wiki/Error%20correction%20code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code. FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast. Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in modems and in cellular networks. FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics. FEC information is added to mass storage (magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used as ECC computer memory on systems that require special provisions for reliability. 
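The Hamming (7,4) code mentioned above encodes four data bits into seven by adding three parity bits; a single flipped bit can then be located by recomputing the parities, since the syndrome spells out the (1-based) position of the error. A minimal sketch (function names are illustrative):

```python
def hamming74_encode(d):
    """d: four data bits -> 7-bit codeword (parity bits at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the four data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # position of the error; 0 means none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flipping any single one of the seven codeword bits still decodes to the original data, which is exactly the single-error-correcting guarantee of the (7,4) code.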
The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like polar codes come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite-length frame. Method ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic. A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see any of the eight possible versions of each three-bit output. This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is: up to one bit of a triplet in error, or up to two bits of a triplet omitted. Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC.
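The (3,1) repetition code just described can be sketched in a few lines (an illustration of the principle, not a practical ECC):

```python
def rep3_encode(bits):
    """(3,1) repetition code: transmit every data bit three times."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    """Majority vote over each triplet; corrects one error per triplet."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]
```

Flipping any single bit of a triplet still decodes correctly, while two flips in one triplet defeat the vote, matching the correcting ability stated above.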
Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits). Averaging noise to reduce errors ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data. Because of this "risk-pooling" effect, digital communication systems that use ECC tend to work well above a certain minimum signal-to-noise ratio and not at all below it. This all-or-nothing tendency – the cliff effect – becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit. Interleaving ECC-coded data can reduce the all-or-nothing properties of transmitted ECC codes when the channel errors tend to occur in bursts. However, this method has limits; it is best used on narrowband data. Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed. Types The two main categories of ECC codes are block codes and convolutional codes. Block codes work on fixed-size blocks (packets) of bits or symbols of predetermined size. Practical block codes can generally be hard-decoded in time polynomial in their block length.
Convolutional codes work on bit or symbol streams of arbitrary length. They are most often soft decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolutional code that is terminated is also a 'block code' in that it encodes a block of input data, but the block size of a convolutional code is generally arbitrary, while block codes have a fixed size dictated by their algebraic characteristics. Types of termination for convolutional codes include "tail-biting" and "bit-flushing". There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, Multidimensional parity, and Hamming codes. Hamming ECC is commonly used to correct NAND flash memory errors. This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for more reliable single-level cell (SLC) NAND. Denser multi-level cell (MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon. NOR Flash typically does not use any error correction. Classical block codes are usually decoded using hard-decision algorithms, which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms like the Viterbi, MAP or BCJR algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding. Nearly all classical block codes apply the algebraic properties of finite fields. Hence classical block codes are often referred to as algebraic codes. 
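The advantage of soft-decision over hard-decision decoding can be seen even with the toy repetition code: thresholding each noisy sample into a bit before voting discards information that combining the raw samples retains. A contrived illustration (the ±1 signalling convention and sample values are assumptions for the example):

```python
def hard_decode(samples):
    """Hard decision: threshold each sample to a bit, then majority-vote."""
    bits = [1 if s > 0 else 0 for s in samples]
    return 1 if sum(bits) >= 2 else 0

def soft_decode(samples):
    """Soft decision: sum the raw samples first, decide once at the end."""
    return 1 if sum(samples) > 0 else 0

# Bit 1 sent three times as +1; noise leaves one strong, two weak samples.
rx = [0.9, -0.1, -0.2]
hard = hard_decode(rx)  # individual votes 1, 0, 0 -> decodes 0 (wrong)
soft = soft_decode(rx)  # combined evidence 0.6 > 0 -> decodes 1 (correct)
```

The soft decoder wins here because the one confident sample (+0.9) outweighs the two samples that barely crossed the threshold, which is the intuition behind Viterbi-style soft-decision decoding.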
In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as LDPC codes lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates. Most forward error correction codes correct only bit-flips, but not bit-insertions or bit-deletions. In this setting, the Hamming distance is the appropriate way to measure the bit error rate. A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as marker codes and watermark codes. The Levenshtein distance is a more appropriate way to measure the bit error rate when using such codes. Code-rate and the tradeoff between reliability and data rate The fundamental principle of ECC is to add redundant bits in order to help the decoder find the true message that was encoded by the transmitter. The code-rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication packet. The code-rate is hence a real number. A low code-rate close to zero implies a strong code that uses many redundant bits to achieve good performance, while a large code-rate close to 1 implies a weak code. The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate. At one extreme, a strong code (with low code-rate) can induce a substantial increase in the effective receiver SNR (signal-to-noise ratio), decreasing the bit error rate, at the cost of reducing the effective data rate. At the other extreme, not using any ECC (i.e., a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection.
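The code-rate tradeoff, and the Shannon bound mentioned earlier against which it is measured, reduce to simple arithmetic. A sketch (the 9600 bps channel and the 3 kHz / 30 dB telephone-channel figures are just example numbers):

```python
import math

def code_rate(k, n):
    """Information bits k over total transmitted bits n."""
    return k / n

def effective_data_rate(channel_bps, k, n):
    """Payload throughput left after spending bandwidth on redundancy."""
    return channel_bps * code_rate(k, n)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit for an AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

rate = code_rate(1, 3)                      # (3,1) repetition code: rate 1/3
payload = effective_data_rate(9600, 1, 3)   # 9600 bps channel -> 3200 bps payload
cap = shannon_capacity(3000, 1000)          # 3 kHz at 30 dB SNR: about 29.9 kbit/s
```

No code can push the payload rate above the capacity figure for the given channel; the art is in how close a practical code gets to it at acceptable complexity.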
One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero. His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement. The most popular ECCs strike a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Typically, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions, in order to minimize the energy cost of the communication. Concatenated ECC codes for improved performance Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single-pass decoding with this family of error correction codes can yield very low error rates, but for long-range transmission conditions (like deep space) iterative decoding is recommended. Concatenated codes have been standard practice in satellite and deep-space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus.
The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna. Low-density parity-check (LDPC) Low-density parity-check (LDPC) codes are a class of highly efficient linear block codes made from many single parity check (SPC) codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel. LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960, but due to the computational effort in implementing encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s. LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n), 10GBase-T Ethernet (802.3an) and G.hn/G.9960 (ITU-T Standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes). Turbo codes Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, they now provide similar performance. One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). 
Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively). Local decoding and testing of codes Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and to do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Also, such codes have become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs. Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal. Not all locally decodable codes (LDCs) are locally testable codes (LTCs), nor are they all locally correctable codes (LCCs): q-query LCCs are bounded exponentially in length, while LDCs can have subexponential lengths. Interleaving Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word.
Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors. Therefore, interleaving is widely used for burst error-correction. The analysis of modern iterated codes, like turbo codes and LDPC codes, typically assumes an independent distribution of errors. Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word. For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance. The iterative decoding algorithm works best when there are no short cycles in the factor graph that represents the decoder; the interleaver is chosen to avoid short cycles. Interleaver designs include: rectangular (or uniform) interleavers (similar to the method using skip factors described above); convolutional interleavers; random interleavers (where the interleaver is a known random permutation); S-random interleavers (where the interleaver is a known random permutation with the constraint that no input symbols within distance S appear within a distance of S in the output); and contention-free quadratic permutation polynomials (QPP), an example of whose use is in the 3GPP Long Term Evolution mobile telecommunication standard. In multi-carrier communication systems, interleaving across carriers may be employed to provide frequency diversity, e.g., to mitigate frequency-selective fading or narrowband interference. Example Transmission without interleaving: Error-free message: Transmission with a burst error: Here, each group of the same letter represents a 4-bit one-bit error-correcting codeword. The codeword is altered in one bit and can be corrected, but the codeword is altered in three bits, so either it cannot be decoded at all or it might be decoded incorrectly.
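The shuffling described above can be sketched as a rectangular block interleaver: codewords are written into the rows of a small array and transmitted column by column, so a burst on the channel lands only one symbol deep in each codeword after deinterleaving. The 3×4 geometry and the "*" error marker are illustrative assumptions:

```python
def interleave(symbols, rows, cols):
    """Write symbols row by row into a rows x cols block, read column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Invert the interleaver (transposing twice restores the order)."""
    return interleave(symbols, cols, rows)

# Three 4-symbol codewords, hit by a 3-symbol burst on the channel.
msg = list("aaaabbbbcccc")
tx = interleave(msg, rows=3, cols=4)   # "abcabcabcabc"
tx[4:7] = ["*", "*", "*"]              # burst corrupts three adjacent symbols
rx = deinterleave(tx, rows=3, cols=4)  # each codeword now has only one error
```

After deinterleaving, each 4-symbol codeword carries exactly one corrupted symbol, which the one-bit error-correcting code of the surrounding example can repair.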
With interleaving: Error-free code words: Interleaved: Transmission with a burst error: Received code words after deinterleaving: In each of the codewords "", "", "", and "", only one bit is altered, so a one-bit error-correcting code will decode everything correctly. Transmission without interleaving: Original transmitted sentence: Received sentence with a burst error: The term "" ends up mostly unintelligible and difficult to correct. With interleaving: Transmitted sentence: Error-free transmission: Received sentence with a burst error: Received sentence after deinterleaving: No word is completely lost and the missing letters can be recovered with minimal guesswork. Disadvantages of interleaving Use of interleaving techniques increases total delay, because the entire interleaved block must be received before the packets can be decoded. Also, interleavers hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver. An example of such an algorithm is based on neural network structures. Software for error-correcting codes Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice used to design, validate and improve ECCs. The upcoming wireless 5G standard raises a new range of applications for software ECCs: Cloud Radio Access Networks (C-RAN) in a software-defined radio (SDR) context. The idea is to use software ECCs directly in the communications. For instance, in 5G, the software ECCs could be located in the cloud, with the antennas connected to these computing resources, improving the flexibility of the communication network and eventually increasing the energy efficiency of the system. In this context, various open-source software packages are available (a non-exhaustive list follows).
AFF3CT (A Fast Forward Error Correction Toolbox): a full communication chain in C++ (many supported codes like Turbo, LDPC, Polar codes, etc.), very fast and specialized in channel coding (can be used as a program for simulations or as a library for the SDR).
IT++: a C++ library of classes and functions for linear algebra, numerical optimization, signal processing, communications, and statistics.
OpenAir: implementation (in C) of the 3GPP specifications concerning the Evolved Packet Core Networks.
List of error-correcting codes
AN codes
Algebraic geometry code
BCH code, which can be designed to correct any arbitrary number of errors per code block
Barker code, used for radar, telemetry, ultrasound, Wi-Fi, DSSS mobile phone networks, GPS, etc.
Berger code
Constant-weight code
Convolutional code
Expander codes
Group codes
Golay codes, of which the binary Golay code is of practical interest
Goppa code, used in the McEliece cryptosystem
Hadamard code
Hagelbarger code
Hamming code
Latin square based code for non-white noise (prevalent for example in broadband over powerlines)
Lexicographic code
Linear network coding, a type of erasure correcting code across networks instead of point-to-point links
Long code
Low-density parity-check code, also known as Gallager code, as the archetype for sparse graph codes
LT code, which is a near-optimal rateless erasure correcting code (fountain code)
m of n codes
Nordstrom–Robinson code, used in geometry and group theory
Online code, a near-optimal rateless erasure correcting code
Polar code (coding theory)
Raptor code, a near-optimal rateless erasure correcting code
Reed–Solomon error correction
Reed–Muller code
Repeat-accumulate code
Repetition codes, such as triple modular redundancy
Spinal code, a rateless, nonlinear code based on pseudo-random hash functions
Tornado code, a near-optimal erasure correcting code, and the precursor to fountain codes
Turbo code
Walsh–Hadamard code
Cyclic redundancy checks (CRCs) can correct 1-bit
errors for messages at most bits long for optimal generator polynomials of degree , see Locally Recoverable Codes. See also Burst error-correcting code Code rate Erasure codes Error detection and correction Error-correcting codes with feedback Linear code Quantum error correction Soft-decision decoder References Further reading "Error Correction Code in Single Level Cell NAND Flash memories" 2007-02-16 "Error Correction Code in NAND Flash memories" 2004-11-29 Observations on Errors, Corrections, & Trust of Dependent Systems, by James Hamilton, 2012-02-26 Sphere Packings, Lattices and Groups, by J. H. Conway and Neil James Alexander Sloane, Springer Science & Business Media, 2013-03-09, 682 pages. External links Error correction zoo: database of error-correcting codes. lpdec: library for LP decoding and related things (Python) Error detection and correction
Error correction code
Engineering
https://en.wikipedia.org/wiki/Petit%20appartement%20du%20roi
The petit appartement du roi of the Palace of Versailles is a suite of rooms used by Louis XIV, Louis XV, and Louis XVI. Located on the first floor of the palace, the rooms are found in the oldest part of the palace, dating from the reign of Louis XIII. Under Louis XIV, these rooms housed the King's collections of artworks and books, forming a museum of sorts. Under Louis XV and Louis XVI, the rooms were modified to accommodate private living quarters. At this time, the rooms were transformed, and their decoration represents some of the finest extant examples of the Louis XV style and Louis XVI style at Versailles (Kimball, 1943). Louis XIV Beginning in 1678, Louis XIV modified these rooms for his particular private needs. The configuration of the rooms dating from the time of Louis XIII was altered. The most significant alteration of this era was the relocation of the degré du roi from the exterior cour de marbre to the interior cour du roi. This relocation of the staircase precipitated the rearrangement of rooms in this part of the château to become the petit appartement du roi. In 1684, as the influence of Louis' mistress – Françoise-Athénaïs, marquise de Montespan – waned due to her alleged involvement in the Affair of the Poisons, the king attached her rooms to his petit appartement after the marquise moved into the appartement des bains on the ground floor of the palace (Le Guillou, 1986; Verlet 1985, pp. 227–228). In Louis XIV's day, these rooms – cabinets de curiosités – formed a veritable museum for the king's private collections. In contrast to the grand appartement du roi and the appartement du roi, which were open to members of the court and the general public, the petit appartement du roi was only accessible through the personal consent of the king (Bluche, 1991).
Located on the first floor on the northern side of the cour de marbre, the petit appartement du roi comprised nine rooms: Salle du billard (cabinet des chiens) Salon du degré du roi Cabinet aux tableaux Cabinet des Coquilles (later cabinet des livres) Salon ovale Premier salon de la petite galerie Petite galerie Deuxième salon de la petite galerie Cabinet des Médailles The salle du billard (1693 plan #1) contained a billiard table, a game at which Louis XIV was adept. Additionally, the King kept several of his hunting dogs in this room so that he could care for them personally, which gave rise to the room's other name: cabinet des chiens (Verlet 1985, p. 227). The salon du degré du roi (1693 plan #2) occupies the site of a staircase dating from the time of Louis XIII. By 1684 (Dangeau), a new staircase – the degré du roi (1693 plan #3) – had been constructed just north of the old staircase in the cour du roi. The salon du degré du roi served as the entrance to the staircase that was reserved for Louis XIV's personal use. The decoration of this room was given over almost exclusively to paintings by Nicolas Poussin (Félibien, 66; Piganiole de la Force, 126). The cabinet aux tableaux (1693 plan #4) with its southern exposure served as a Pinacotheca for part of Louis XIV's collection of paintings. Among the masters displayed in the room were works from the Italian schools by Correggio, Raphael, Giorgione, Giulio Romano and Titian. Additionally, there were cabinets arranged in the room in which Louis XIV kept his collection of carved rock crystal (Brejon de Lavergnée, 1985; Félibien, 67; Piganiole de la Force, 129; Verlet 1985, p. 229). In 1692, the cabinet des coquilles (1693 plan #5) and the salon ovale (1693 plan #6) were created. These rooms, along with the cabinet des médailles, formed the main rooms of Louis XIV's cabinets de curiosités.
In addition to some of the most highly prized paintings of the royal collection, the salon ovale housed in four niches four bronze sculptural groups – "Jupiter" and "Juno" by Alessandro Algardi, the "Abduction of Orithyia" after the marble by Gaspard Marsy, and the "Abduction of Persephone" by François Girardon – that were esteemed as some of the finest of this genre in the King's collection. The richness of the decoration – fully gilt paneling and mirrors – complemented the arrangement of some of the most valuable paintings in Louis XIV's collection (Félibien, 67; Piganiole de la Force, 129; Verlet 1985, p. 229). The cabinet des coquilles originally housed a portion of the King's gem collection. In 1708, the room was converted into a library – cabinet aux livres – in which Louis XIV kept his collection of rare books and manuscripts (Verlet 1985, p. 230). The following rooms – premier salon de la petite galerie, petite galerie, and deuxième salon de la petite galerie (1693 plan #7, 8, & 9) – were formed from rooms that the marquise de Montespan occupied before she moved to the appartement des bains in 1684 (Dangeau vol. 1 77–78; Verlet 1985, p. 232). As with the previous rooms, the petite galerie and its two salons housed precious gems and paintings that the king had either inherited or collected. In the years that preceded the War of the League of Augsburg, Louis XIV engaged in an aggressive collecting campaign that necessitated his expanding space at Versailles to display newly acquired works of art (Verlet 1985, p. 229). Pierre Mignard, Charles Le Brun's archrival, was charged with the painting of the ceilings of the petite galerie and its two salons (Félibien, 68; Piganiole de la Force, 140; Verlet 1985, p. 233). In the petite galerie and its two salons, Louis XIV displayed many of the most valued paintings in his collection.
The petite galerie was given almost entirely to works by Italian masters, with works by Francesco Albani, Annibale Carracci, Guido Reni and Parmigianino predominating (Piganiole de la Force, 141–149; Verlet 1985, p. 234). The petite galerie also housed the collection of gifts Louis XIV received from foreign embassies; most notable among these diplomatic offerings were the gifts from the Chinese Jesuit, Shen Fu-Tsung (1684), which included an enormous pearl, and the gifts from the Siamese Embassy of 1685–1686 (Josephson, 1926). The premier salon de la petite galerie is of particular importance, as it was in this room that Louis XIV kept the painting described by Piganiole de la Force as "Le Portrait de Vie, femme d'un Florentin nommé Giaconde," better known in English as the Mona Lisa (Piganiole de la Force, 137). Louis XIV lavished much attention on these rooms, intending to have the walls clad with panels inlaid with tortoise shell and lapis-lazuli. However, owing to the financial demands of the War of the League of Augsburg, the plans were abandoned. Nevertheless, the petite galerie and its two salons were used by Louis XIV for entertaining foreign dignitaries, such as the Crown Prince of Denmark in 1693 and the Elector of Cologne in 1706 (Verlet 1985, pp. 233–234). Of all the rooms that formed the petit appartement du roi during the reign of Louis XIV, the cabinet des médailles (1693 plan #10) was one of the most remarkable of its sort ever assembled in France (Hulftegger, 1954). Taking its name from the 12 cabinets in which Louis XIV's numismatic collections were kept, the cabinet des médailles also housed the King's collections of miniatures by Flemish, Dutch, and German masters, objects of carved porphyry and carved jade, as well as those rare items made of silver or gold (Verlet 1985, pp. 230–232).
Forming part of Louis XIV's collection of items made of gold was the treasure of the Merovingian king, Childeric I, found in Tournai in 1653 and presented to Louis XIV by Leopold I, Holy Roman Emperor in 1665 (Cochet, 1859), and the gold and jewel-encrusted nef, which was used by Louis XIV when he dined au grand couvert. Louis XV – 1740 After the return of the King and court to Versailles in 1722, life assumed a rhythm similar to that under Louis XIV. The young Louis XV occupied his great-grandfather's bedroom, the chambre de Louis XIV, where the ceremonies of the daily lever and coucher were executed with the same exacting precision as during the reign of the Sun King. However, owing to the discomfort of the room in winter – its size and eastern exposure made it difficult, if not impossible, to heat – Louis XV was compelled to establish his bedroom elsewhere (Verlet 313–314). In 1738, Louis XV ordered a new bedroom – chambre de Louis XV (1740 plan #4) – constructed on the site of Louis XIV's salle du billard, which was enlarged to the north into the cour du roi to accommodate an alcove for the bed (Verlet 1985, pp. 444–447). In the same year, the degré du roi was demolished and a new staircase was built just north of the old location. A new room was constructed on the site formerly occupied by the degré du roi of Louis XIV, the antichambre des chiens (Verlet 1985, p. 442). As with his great-grandfather in his cabinet des chiens, Louis XV kept some of his hunting dogs in this room. Further modifications of the petit appartement du roi at this time included the creation of the salon des pendules and the cabinet intérieur. These rooms were created when the salon du degré du roi and the cabinet aux tableaux of Louis XIV were destroyed (Le Guillou, 1985).
The salon des pendules (1740 plan #3) (also called the salon ovale due to its elliptic shape) was given this name due to the dials arranged in the apsidal recess of the eastern wall that showed the times of the rising and setting of the sun and the moon (Verlet 1985, p. 450). The cabinet intérieur (1740 plan #4) served a number of purposes: it housed part of Louis XV's numismatic collection and collection of miniature paintings; it served as a dining room; and it served as a workroom. Of all the rooms of the petit appartement du roi during the reign of Louis XV, this was perhaps one of the most richly decorated and opulently appointed (Verlet 1985, p. 452). The cabinet des livres, the salon ovale of Louis XIV, the petite galerie with its two salons, and the cabinet des médailles were retained (1740 plan #6, 7, 8, & 9). By 1740, the petit appartement du roi had expanded to such an extent into the cour du roi that the eastern part of this courtyard became a separate courtyard. This new courtyard was called the cour intérieur du roi (1740 plan II) and the cour du roi was renamed cour des cerfs. This new name was due to the two dozen sculpted deer heads that Louis XV ordered placed on the walls of the courtyard (Verlet 1985, p. 457). Louis XV – 1760 The modifications of the late 1750s to the petit appartement du roi were in response to a general reorganization of the apartments in the corps de logis of the château and the destruction of the escalier des ambassadeurs (1740 plan #10). To accommodate a new apartment for his daughter, Madame Adélaïde, Louis XV ordered the construction of rooms on the same floor as the petit appartement du roi. This new apartment occupied space that had been the petite galerie and the two salons, as well as new space created by the suppression of the escalier des ambassadeurs (1760 plan #9).
The most significant modifications to the petit appartement du roi at this time were the relocation of the degré du roi (1760 plan #4), the construction of the salle à manger des retours de chasses (1750) (1760 plan #5), and the pièce des buffets (1754) (1760 plan #6) (Verlet 1985, p. 473-474). The salle à manger des retours de chasses was built upon the site of Louis XV's bath (1740 plan g) when the King wanted a dining room on the first floor in which he could entertain a small group of friends, most frequently after hunting (Bluche, 2000; Marie, 1984). The decoration of the salle à manger des retours de chasses incorporated paneling and decorative elements from the salon du billard of Louis XIV (Verlet 1985, p. 442-443). This era during which Louis XV decorated the petit appartement du roi was significant in the evolution of French decorative styles of the 18th century. Many of these rooms represent some of the finest examples of the Louis XV style. Of the rooms of the appartement du roi, the salon des pendules is one of the most significant. With paneling by Jacques Verberckt, the room was furnished with chairs and table and served for gaming parties hosted by Louis XV (Verlet 1985, p. 449). However, it would be the delivery of 1754 that would set this room apart from others. In January of that year, Louis XV had brought from the Château de Choisy and placed in this room the famed Passemant Astronomical clock. The clock, which was designed by the engineer Claude-Simon Passemant and the clockmaker Louis Dauthiau, and set in an ormolu case by Philippe Caffieri, was a marvel of its day. Taking 12 years to complete, the clock is surmounted by a crystal sphere in which a mechanical armillary sphere – after the Copernican model – operated. The time, days of the week, months of the year (even calculating for bissextile years), and year were accurately displayed. 
On account of this clock, the room received the definitive name, salon de la pendule (1760 plan #2) (Kuraszewski, 1976; Verlet 1985, p. 450). By 1760, the cabinet intérieur (1760 plan #7) had come to be known also as the bureau du roi, and this room not only came to represent the personal taste of Louis XV, but also stands as one of the finest examples of the Louis XV style. In 1755, the cabinetmaker Gilles Joubert delivered two corner cabinets, complementing those by Antoine-Robert Gaudreau, which had been delivered in 1739, to house the numismatic record of Louis XV's reign (Verlet 1985, p. 452). In 1769, the mechanical roll-top desk by Jean-François Oeben was delivered (Verlet 1985, p. 454). With the evolution of the cabinet intérieur, Louis XV also pursued the construction of his arrière cabinet (1760 plan #8). In suppressing the cabinet des livres and the salon ovale of Louis XIV, Louis XV created a private room (with a small cabinet de la chaise) that communicated directly with the degré du roi in which he conducted much of the day-to-day governance of France. The utilitarian décor – a simple table, chairs and rows of shelving – reflects this usage (Verlet 1985, p. 459). Louis XVI With the exception of reclaiming part of the apartment of Madame Adélaïde, Louis XVI chose to retain the décor of the petit appartement du roi as his grandfather had left it. The arrière cabinet of Louis XV was rechristened cabinet des dépêches (1789 plan #8); however, Louis XVI continued to use the room as a day-to-day workroom as his grandfather had (Rogister, 1993). The pièce de la vaisselle d'or (1789 plan #9) – originally the premier salon de la petite galerie – formed part of the appartement de Madame Adélaïde. Under Louis XVI, the pièce de la vaisselle d’or was where the King kept his collection of rare porcelains and curiosities, many received as diplomatic gifts (Verlet 1985, p. 
526). The small room north and behind the pièce de la vaisselle d'or is the cabinet de la cassette du roi (1789 plan #10). This room was converted into a bathroom for Louis XV around 1769. Louis XVI used the room – allegedly – as a place where he could maintain his personal financial accounts (Verlet 1985, p. 526). The paneling dates from the remodeling for Louis XV; however, Louis XVI ordered a total regilding of the room in 1784 (Verlet 1985, p. 526). When Pierre de Nolhac assumed the directorship of the museum of Versailles, he discovered that this room was being used as a broom closet by the janitorial staff. This discovery was the impetus that compelled Nolhac to begin exhaustive research on the subject of the history of Versailles (Nolhac, 1937). The bibliothèque de Louis XVI (1789 plan #11), located directly east of the pièce de la vaisselle d'or, occupies the space that was the chambre de Madame Adélaïde (which Louis XV rechristened salon d’assemblée in 1769) and previously the petite galerie. In 1774, construction on the library began with the decoration being executed by the workshop of the Rousseau brothers, who had previously worked on the paneling of the cabinet de la cassette du roi and on part of the sculptural decorations of the Opéra (Verlet 1985, p. 513). This room not only represents the personal taste of Louis XVI but also stands as one of the finest examples of the Louis XVI style. The room located just to the east of the bibliothèque de Louis XVI is the salle à manger aux salles neuves (1789 plan #12). This room, once the deuxième salon de la petite galerie and once one of the rooms of Madame Adélaïde, was remodeled into a dining room for Louis XV in 1769. The paneling by Jacques Verberckt dates from the 1769 redecoration of Louis XV and the present blue upholstery, draperies, and hunting scenes by Jean-Baptiste Oudry date from 1774 when Louis XVI redecorated the room (Baulez, 1976; Verlet 1985, p. 527). 
The room was also known as the salle des porcelains on account of the annual display of the production of the Sèvres factory that was arranged in this room during Christmas (Baulez, 1976). The pièce des buffets or salle du billard (1789 plan #13) occupies the area that had once been the landing of the escalier des ambassadeurs. During dinners, the billiard table would be covered with a wooden plank on which a buffet would be dressed for the King's guests (Verlet 1985, p. 527). The room originally had a window opening onto the cave du roi (1789 plan III), the courtyard that was created when the escalier des ambassadeurs was destroyed in 1752. Occupying the site of the cabinet des médailles of Louis XIV is the cabinet des jeux (1789 plan #14) of Louis XVI. Upon the return of Louis XV and the court to Versailles, there had been a systematic rearrangement of the collections of Louis XIV that had been housed in the petit appartement du roi, particularly the items kept in Louis XIV's cabinet des médailles. The collection was either reorganized in other rooms of the petit appartement du roi or sent to the bibliothèque du roi in Paris. With the destruction of the escalier des ambassadeurs in 1752 and the subsequent construction of the apartment for Madame Adélaïde, the cabinet des médailles of Louis XIV was completely transformed into an antichambre for Madame Adélaïde. Dating from 1775, the room was redecorated in 1785 when, during the construction of a theater next to the salon d’Hercule, Louis XVI decided to remodel it as a game room (Verlet 1985, p. 528). The salle à manger aux salles neuves, the salle du billard and the cabinet des jeux were used for the intimate dinner parties given by Louis XVI and Marie Antoinette for their friends and selected members of the royal family. Gallery Notes Sources Books Journals External links Palace of Versailles Rooms
Petit appartement du roi
Engineering
4,486
74,201,328
https://en.wikipedia.org/wiki/Six-dimensional%20holomorphic%20Chern%E2%80%93Simons%20theory
In mathematical physics, six-dimensional holomorphic Chern–Simons theory or sometimes holomorphic Chern–Simons theory is a gauge theory on a three-dimensional complex manifold. It is a complex analogue of Chern–Simons theory, named after Shiing-Shen Chern and James Simons, who first studied Chern–Simons forms, which appear in the action of Chern–Simons theory. The theory is referred to as six-dimensional as the underlying manifold of the theory is three-dimensional as a complex manifold, hence six-dimensional as a real manifold. The theory has been used to study integrable systems through four-dimensional Chern–Simons theory, which can be viewed as a symmetry reduction of the six-dimensional theory. For this purpose, the underlying three-dimensional complex manifold is taken to be the three-dimensional complex projective space \mathbb{CP}^3, viewed as twistor space. Formulation The background manifold X on which the theory is defined is a complex manifold which has three complex dimensions and therefore six real dimensions. The theory is a gauge theory with gauge group a complex, simple Lie group G. The field content is a partial connection \mathcal{A}, a (0,1)-form on X valued in the Lie algebra of G. The action is

S = \int_X \Omega \wedge \mathrm{CS}(\mathcal{A}),

where

\mathrm{CS}(\mathcal{A}) = \mathrm{tr}\left(\mathcal{A} \wedge \bar\partial\mathcal{A} + \frac{2}{3}\,\mathcal{A} \wedge \mathcal{A} \wedge \mathcal{A}\right),

where \Omega is a holomorphic (3,0)-form and \mathrm{tr} denotes a trace functional which as a bilinear form is proportional to the Killing form. On twistor space P3 Here X is fixed to be the twistor space \mathbb{CP}^3. For application to integrable theory, the three-form \Omega must be chosen to be meromorphic. See also Chern–Simons theory Four-dimensional Chern–Simons theory External links Holomorphic Chern–Simons theory nLab References Gauge theories Integrable systems
Six-dimensional holomorphic Chern–Simons theory
Physics
346
68,373,977
https://en.wikipedia.org/wiki/Tellurite%20fluoride
A tellurite fluoride is a mixed anion compound containing tellurite and fluoride ions. Such compounds have also been called oxyfluorotellurates(IV), where IV is the oxidation state of tellurium in tellurite. Comparable compounds are sulfite fluorides or selenite fluorides. List References Fluorides Tellurites Mixed anion compounds
Tellurite fluoride
Physics,Chemistry
85
6,882,090
https://en.wikipedia.org/wiki/Insect%20migration
Insect migration is the seasonal movement of insects, particularly by species of dragonflies, beetles, butterflies and moths. The distance can vary with species and in most cases, these movements involve large numbers of individuals. In some cases, the individuals that migrate in one direction may not return and the next generation may instead migrate in the opposite direction. This is a significant difference from bird migration. Definition All insects move to some extent. The range of movement can vary from within a few centimeters for some sucking insects and wingless aphids to thousands of kilometers in the case of other insects such as locusts, butterflies and dragonflies. The definition of migration is therefore particularly difficult in the context of insects. A behavior-oriented definition has been proposed. This definition disqualifies movements made in the search of resources that are terminated upon finding the resource. Migration involves longer distance movement and these movements are not affected by the availability of the resource items. All cases of long-distance insect migration concern winged insects. General patterns Many migrating butterflies fly at low altitudes. The wind speeds in this region are typically lower than the flight speed of the insect, allowing them to travel against the wind if need be. These 'boundary-layer' migrants include the larger day-flying insects, and their low-altitude flight is easier to observe than that of most high-altitude windborne migrants. Some species of butterfly (such as Vanessa atalanta and Danaus plexippus) are known to migrate using high-altitude, high-speed winds during their yearly migrations. Many migratory species tend to have polymorphic forms, a migratory phase and a resident phase. The migratory phases are marked by their well-developed and long wings. Such polymorphism is well known in aphids and grasshoppers. In the migratory locusts, there are distinct long and short-winged forms. 
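The "boundary-layer" behaviour described above is, at bottom, vector addition of the insect's air velocity and the wind: an insect can hold a track against the wind only where its airspeed exceeds the local wind speed. A minimal sketch (the function and parameter names are ours, purely illustrative):

```python
import math

def ground_velocity(airspeed, heading_deg, wind_speed, wind_to_deg):
    """Vector sum of an insect's air velocity and the wind velocity.

    heading_deg and wind_to_deg are directions of travel, in degrees
    clockwise from north; returns (east, north) ground-speed components."""
    ax = airspeed * math.sin(math.radians(heading_deg))
    ay = airspeed * math.cos(math.radians(heading_deg))
    wx = wind_speed * math.sin(math.radians(wind_to_deg))
    wy = wind_speed * math.cos(math.radians(wind_to_deg))
    return ax + wx, ay + wy

# A 4 m/s flier heading due north into a 2 m/s southward-blowing wind
# still makes northward progress -- possible only because its airspeed
# exceeds the local wind speed, i.e. it is inside its flight boundary layer.
east, north = ground_velocity(4.0, 0.0, 2.0, 180.0)
```

If the wind speed exceeded the airspeed, the northward component would turn negative and the insect would be carried downwind regardless of its heading.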
The energetic cost of migration has been studied in the context of life-history strategies. It has been suggested that adaptations for migration would be more valuable for insects that live in habitats where resource availability changes seasonally. Others have suggested that species living in isolated islands of suitable habitats are more likely to evolve migratory strategies. The role of migration in gene flow has also been studied in many species. Parasite loads affect migration. Severely infected individuals are weak and have shortened lifespans. Infection creates an effect known as culling whereby migrating animals are less likely to complete the migration. This results in populations with lower parasite loads. Orientation Migration is usually marked by well-defined destinations, which require navigation and orientation. A flying insect needs to make corrections for crosswinds. It has been demonstrated that many migrating insects sense wind speed and direction and make suitable corrections. Day-flying insects primarily make use of the sun for orientation; however, this requires that they compensate for the movement of the sun. Endogenous time-compensation mechanisms have been proposed and tested by releasing migrating butterflies that have been captured and kept in darkness to shift their internal clocks and observing changes in the directions chosen by them. Some species appear to make such corrections, while in others this has not been demonstrated. Most insects are capable of sensing polarized light and are able to use the polarization of the sky when the sun is occluded by clouds. The orientation mechanisms of nocturnal moths and other insects that migrate have not been well studied; however, magnetic cues have been suggested in short-distance fliers. Recent studies suggest that migratory butterflies may be sensitive to the Earth's magnetic field on the basis of the presence of magnetite particles. 
In an experiment on the monarch butterfly, it was shown that a magnet changed the direction of initial flight of migrating monarch butterflies. However, this result was not a strong demonstration, since the directions of the experimental butterflies and the controls did not differ significantly in the direction of flight. Lepidoptera Migration of butterflies and moths is particularly well known. The Bogong moth is a native insect of Australia that is known to migrate to cooler climates. The Madagascan sunset moth (Chrysiridia rhipheus) has migrations of up to thousands of individuals, occurring between the eastern and western ranges of their host plant, when they become depleted or unsuitable for consumption. The hummingbird hawk-moth (Macroglossum stellatarum) migrates from Africa and southern Asia to Europe and northern Asia. In southern India, mass migrations of many species occur before monsoons. As many as 250 species of butterflies in India are migratory. These include members of the Pieridae and Nymphalidae. Many species of Vanessa butterflies are also known to migrate. The Australian painted lady (Vanessa kershawi) periodically migrates down the coast of Australia and occasionally, in periods of strong migration, reaches New Zealand. The painted lady (Vanessa cardui) is a butterfly whose annual 15,000 km round trip from Scandinavia and Great Britain to West Africa involves up to six generations. The red admiral (Vanessa atalanta) periodically migrates from southern to northern Europe for the summer, although sometimes movement north is observed in early autumn. The monarch butterfly, Danaus plexippus, migrates from southern Canada to wintering sites in central Mexico where they spend the winter. In the late winter or early spring, the adult monarchs leave the Transvolcanic mountain range in Mexico to travel north. Mating occurs and the females seek out milkweed to lay their eggs, usually first in northern Mexico and southern Texas. 
The caterpillars hatch and develop into adults that move north; successive offspring can go as far as central Canada until the next migratory cycle. The entire annual migration cycle involves around five generations. More detailed information on this migration can be found under monarch butterfly migration. Orthoptera Short-horned grasshoppers sometimes form swarms that will make long flights. These are often irregular and may be related to resource availability, thus not fulfilling some definitions of insect migration. There are, however, some populations of species such as locusts (Schistocerca gregaria) that make regular seasonal movements in parts of Africa; exceptionally, the species migrates very long distances, as in 1988 when swarms flew across the Atlantic Ocean. Odonata Dragonflies are among the longest-distance insect migrants. Many species of Libellula, Sympetrum and Pantala are known for their mass migration. Pantala flavescens is thought to make the longest ocean crossings among insects, flying between India and Africa on their migrations. Their movements are often assisted by winds. Coleoptera Ladybird beetles such as Hippodamia convergens, Adalia bipunctata and Coccinella undecimpunctata have been noted in large numbers in some places. In some cases, these movements appear to be made in the search for hibernation sites. Heteroptera Some Oncopeltus fasciatus will journey from northern states and southern Canada to southern states; others will overwinter where they are. Murgantia histrionica relies on seasonal winds in the Mississippi valley for travel. Homoptera Leafhoppers Macrosteles fascifrons and Empoasca fabae rely on seasonal winds in the Mississippi valley for travel. See also Animal migration References Migration Animal migration
Insect migration
Biology
1,471
70,179,173
https://en.wikipedia.org/wiki/TX%20Ursae%20Majoris
TX Ursae Majoris is an eclipsing binary star system in the northern circumpolar constellation of Ursa Major. With a combined apparent visual magnitude of 6.97, the system is too faint to be readily viewed with the naked eye. The pair orbit each other with a period of 3.063 days in a circular orbit, with their orbital plane aligned close to the line of sight from the Earth. During the primary eclipse, the net brightness decreases by 1.74 magnitudes, while the secondary eclipse results in a drop of just 0.07 magnitude. TX UMa is located at a distance of approximately 780 light years from the Sun based on parallax measurements, but is drifting closer with a mean radial velocity of −13 km/s. In 1931, H. Rügemer and H. Schneller independently discovered that this is an eclipsing binary system of the Algol type. Rügemer later found that the eclipse period was not constant, a behavior that was subsequently explained as apsidal precession. B. Cester and associates in 1977 confirmed that this is a semidetached binary system consisting of a main sequence primary star and an evolved giant companion. A study of the system by J. M. Kreiner and J. Tremko in 1980 disproved that changes in the eclipse period are due to apsidal motion. The light curve of this system shows little impact from proximity effects between the two stars, making it only weakly interacting. The primary eclipse is very deep, with less than 5% of the brighter star's light appearing at central eclipse, allowing the spectrum of the fainter secondary to be directly examined. In addition to a steady decrease in the system orbital period, multiple irregular changes in the period were observed between 1903 and 1996. The slowing orbit may be due in part to magnetic braking of the mass-donor secondary, causing a transfer of angular momentum to the system. An accretion disk may be a contributing factor. Spectral evidence supports an accretion disk in orbit around the primary that is sustained by mass transfer. 
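The eclipse depths quoted above (1.74 and 0.07 magnitudes) can be converted into received-flux fractions with Pogson's relation, flux ratio = 10^(−0.4·Δm). A small sketch (the function name is ours):

```python
def remaining_flux_fraction(delta_mag):
    """Fraction of the out-of-eclipse flux still received when the
    system dims by delta_mag magnitudes (Pogson's relation)."""
    return 10.0 ** (-0.4 * delta_mag)

# Primary eclipse: 1.74 mag deep -> only about 20% of the light remains.
primary = remaining_flux_fraction(1.74)
# Secondary eclipse: 0.07 mag deep -> a dip of only about 6%.
secondary = remaining_flux_fraction(0.07)
```

The strongly unequal depths reflect the large surface-brightness difference between the hot B-type primary and the cooler giant secondary.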
A faint emission from the system is evidence of a circumbinary ionized shell. The cooler secondary component is the more evolved member of the pair with a stellar classification of G0III-I, having previously exhausted the supply of hydrogen at its core and evolved off the main sequence. This star has filled its Roche lobe and is contributing mass to the primary. It now has 1.2 times the Sun's mass but has expanded to 4.2 times the solar radius. The secondary is rotating synchronously with its orbit. The primary component of this system is a B-type main-sequence star with a stellar classification of B8V. It is rotating 1.5 times as fast as the orbital rate due to the impact of mass accretion from the secondary. The primary has 4.8 times the mass and 2.8 times the radius of the Sun. References Further reading B-type main-sequence stars G-type giants Algol variables Binary stars Ursa Major BD+30 2162 093033 052599 Ursae Majoris, TU
TX Ursae Majoris
Astronomy
649
1,401,706
https://en.wikipedia.org/wiki/Contagious%20disease
A contagious disease is an infectious disease that can be spread rapidly in several ways, including direct contact, indirect contact, and droplet contact. These diseases are caused by organisms such as parasites, bacteria, fungi, and viruses. While many types of organisms live on the human body and are usually harmless, these organisms can sometimes cause disease. Some common infectious diseases are influenza, COVID-19, Ebola, hepatitis, HIV/AIDS, human papillomavirus infection, polio, and Zika virus. A disease is often known to be contagious before medical science discovers its causative agent. Koch's postulates, which were published at the end of the 19th century, were the standard for the next 100 years or more, especially with diseases caused by bacteria. Microbial pathogenesis attempts to account for diseases caused by a virus. Historical meaning Originally, the term referred to a contagion or disease transmissible only by direct physical contact. In the modern day, the term has sometimes been broadened to encompass any communicable or infectious disease. Often the word can only be understood in context, where it is used to emphasize very infectious, easily transmitted, or especially severe communicable diseases. In 1849, John Snow first proposed that cholera was a contagious disease. Effect on public health response Most epidemics are caused by contagious diseases, with occasional exceptions, such as yellow fever. The spread of non-contagious communicable diseases is changed either very little or not at all by medical isolation of ill persons or medical quarantine for exposed persons. Thus, a "contagious disease" is sometimes defined in practical terms, as a disease for which isolation or quarantine are useful public health responses. Some locations are better suited for research into contagious pathogens due to the reduced risk of transmission afforded by a remote or isolated location. 
The basic reproduction number of a disease is used to measure how easily the disease spreads through contact with infected individuals. Negative room pressure is a technique in health care facilities based on aerobiological designs. See also Germ theory of disease Herd immunity Notifiable disease References Infectious diseases Microbiology Epidemiology Causality
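As a numerical illustration of the basic reproduction number mentioned above, here is a minimal SIR-style sketch; the rate values are invented for illustration and describe no particular disease:

```python
def basic_reproduction_number(beta, gamma):
    """R0 for a simple SIR-type model: transmission rate divided by
    recovery rate. R0 > 1 means each case causes, on average, more than
    one new case in a fully susceptible population."""
    return beta / gamma

def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to stop
    sustained spread: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

# Illustrative (invented) parameters: beta = 0.3/day, gamma = 0.1/day.
r0 = basic_reproduction_number(0.3, 0.1)
threshold = herd_immunity_threshold(r0)  # about two-thirds immune
```

With these made-up rates R0 is 3, so roughly two-thirds of the population would need immunity before the outbreak stops growing.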
Contagious disease
Physics,Chemistry,Biology,Environmental_science
457
693,126
https://en.wikipedia.org/wiki/Salt%20dome
A salt dome is a type of structural dome formed when salt (or other evaporite minerals) intrudes into overlying rocks in a process known as diapirism. Salt domes can have unique surface and subsurface structures, and they can be discovered using techniques such as seismic reflection. They are important in petroleum geology as they can function as petroleum traps. Formation Stratigraphically, salt basins developed periodically from the Proterozoic to the Neogene. The formation of a salt dome begins with the deposition of salt in a restricted basin. In these basins, the outflow of water exceeds inflow. Specifically, the basin loses water through evaporation, resulting in the precipitation and deposition of salt. While the rate of sedimentation of salt is significantly larger than the rate of sedimentation of clastics, it is recognized that a single evaporation event is rarely enough to produce the vast quantities of salt needed to form a layer thick enough for the formation of salt diapirs, indicating that a sustained period of episodic flooding and evaporation of the basin must occur. Over time, the layer of salt is covered with deposited sediment, becoming buried under an increasingly large overburden. Previously, researchers believed that the compaction of overlying sediment and subsequent decrease in buoyancy led to salt rising and intruding into the overburden due to its ductility, thereby creating a salt diapir. However, after the 1980s, the primary force that drives the flow of salt is considered to be differential loading. Differential loading can be caused by gravitational forces (gravitational loading), forced displacement of salt boundaries (displacement loading), or thermal gradients (thermal loading). The flow of the salt overcomes the strength of the overburden as well as boundary friction aided by overburden extension, erosion, thrust faults, ductile thinning, or other forms of regional deformation. 
The vertical growth of salt formations creates pressure on the upward surface, causing extension and faulting. Once the salt completely pierces the overburden, it can rise through a process known as passive diapirism where the accumulation of sediments around the diapir contribute to its growth and eventually form into a dome. Discovery mechanisms Some salt domes can be seen from Earth's surface. They can also be located by finding unique surface structures and surrounding phenomena. For instance, salt domes can contain or be near sulfur springs and natural gas vents. Some salt domes have salt sheets that extrude from the top of the dome; these are referred to as salt plugs. These plugs can coalesce to form salt canopies, which can then be remobilized by roof sedimentation, with the most prominent example in the northern Gulf of Mexico basin. Another structure that can form from salt domes are salt welds. These occur when the growth of a dome is prevented by an exhausted supply of salt, and the top and bottom contacts merge. Salt domes have also been located using seismic refraction and seismic reflection. The latter was developed based on techniques from the former and is more effective. Seismic refraction uses seismic waves to characterize subsurface geologic conditions and structures. Seismic reflection highlights the presence of a stark density contrast between the salt and surrounding sediment. Seismic techniques are particularly effective as salt domes are typically depressed blocks of crust bordered by parallel normal faults (graben) that can be flanked by reverse faults. Advances in seismic reflection and the expansion of offshore petroleum exploration efforts led to the discovery of numerous salt domes soon after World War II. Commercial uses Salt domes are the site of many of the world's hydrocarbon provinces. The rock salt of the salt dome is mostly impermeable, so, as it moves up towards the surface, it penetrates and bends existing rock along with it. 
As strata of rock are penetrated, they are, generally, bent upwards where they meet the dome, forming pockets and reservoirs of petroleum and natural gas (known as petroleum traps). In 1901, an exploratory oil well was drilled into Spindletop Hill near Beaumont, Texas. This led to the discovery of the first salt dome, revealed the importance of salt to the formation of hydrocarbon accumulations, and produced enough oil for petroleum to become an economically feasible fuel for the United States. Several countries use solution mining to form caverns for holding large amounts of oil or gas reserves. The caprock above the salt domes can contain deposits of native sulfur (recovered by the Frasch process). They can also contain deposits of metals, sodium salts, nitrates, and other substances, which can be used in products such as table salt and chemical de-icers. Occurrence Salt domes occur in many parts of the world where there is a sufficiently thick layer of rock salt developed. Hormuz Formation In the Middle East, the upper Neoproterozoic salt of the Hormuz Formation is associated with widespread salt dome formation in most parts of the Persian Gulf and onshore in Iran, Iraq, United Arab Emirates, and Oman. The thicker salt is found in a series of basins: the Western Gulf, the Southern Gulf, and the Oman salt basins. Paradox Basin Pennsylvanian age salt of the Paradox Formation forms salt domes throughout the Paradox Basin in the US, which extends from eastern Utah, through southwestern Colorado into northwestern New Mexico. An example of an emergent salt dome is at Onion Creek, Utah / Fisher Towers near Moab, Utah: a Paradox Formation salt body that has risen as a ridge through several hundred meters of overburden, predominantly sandstone. As the salt body rose, the overburden formed an anticline (arching upward along its center line) which fractured and eroded to expose the salt body. 
Barents Sea Offshore northern Norway in the southwestern Barents Sea, thick Upper Carboniferous–Lower Permian salt was deposited, forming salt domes in the Hammerfest and Nordkapp basins. Zechstein basin In northwest Europe Upper Permian salt of the Zechstein Group has formed salt domes over the central and southern North Sea, extending eastwards into Germany. Morocco–Nova Scotia Upper Triassic salt forms salt domes in the Essaouira Basin onshore and offshore Morocco. An equivalent salt sequence, the Argo Formation, is associated with salt dome formation on the conjugate Nova Scotia margin. Gulf of Mexico The Gulf Coast is home to over 500 salt domes formed from Middle Jurassic Louann Salt. This region is home to most of the US Strategic Petroleum Reserve. Avery Island was formed by a salt dome. South Atlantic salt basins During the break-up of the south Atlantic, Aptian (Lower Cretaceous) age salt was deposited within the area of thinned crust on both the Brazilian and conjugate Angola/Gabon margins forming many salt domes. Messinian salt During the Messinian salinity crisis (Late Miocene), thick salt layers were formed as the Mediterranean Sea dried out. Later deposition, once the sea refilled, triggered the formation of salt domes. See also Plasticity (physics) Salt glacier Underground hydrogen storage Gorleben salt dome References External links Salt Dome Cutaway – Louisiana State Exhibit Museum Salt production Economic geology Evaporite Oil storage
Salt dome
Chemistry
1,465
2,817,767
https://en.wikipedia.org/wiki/Quasi-delay-insensitive%20circuit
A quasi-delay-insensitive circuit (QDI circuit) is an asynchronous circuit design methodology employed in digital logic design. Developed in response to the performance challenges of building sub-micron, multi-core architectures with conventional synchronous designs, QDI circuits exhibit lower power consumption, extremely fine-grain pipelining, high circuit robustness against process–voltage–temperature variations, on-demand (event-driven) operation, and data-dependent completion time. Overview Advantages Robust against process variation, temperature fluctuation, circuit redesign, and FPGA remapping. Natural event sequencing facilitates complex control circuitry. Automatic clock gating and compute-dependent cycle time can save dynamic power and increase throughput by optimizing for average-case workload characteristics instead of worst-case. Disadvantages Delay insensitive encodings generally require twice as many wires for the same data. Communication protocols and encodings generally require twice as many devices for the same functionality. Chips QDI circuits have been used to manufacture a large number of research chips, a small selection of which follows. Caltech's asynchronous microprocessor and MIPS R3000 clone Tokyo University's TITAC and TITAC-2 processors Theory The simplest QDI circuit is a ring oscillator implemented using a cycle of inverters. Each gate drives two events on its output node: either the pull-up network drives the node's voltage from GND to Vdd or the pull-down network drives it from Vdd to GND. This gives the ring oscillator six events in total. Multiple cycles may be connected using a multi-input gate. A c-element, which waits for its inputs to match before copying the value to its output, may be used to synchronize multiple cycles. If one cycle reaches the c-element before another, it is forced to wait. Synchronizing three or more of these cycles creates a pipeline allowing the cycles to trigger one after another. 
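The inverter ring and c-element described above can be sketched as a toy event-level model (this is our own illustration, not a standard asynchronous-design tool):

```python
def c_element(a, b, prev_out):
    """Muller C-element: the output copies the inputs only when they
    agree; otherwise it holds its previous value."""
    return a if a == b else prev_out

def ring_oscillator_events(n_inverters=3, steps=6):
    """Step an odd-length inverter ring, firing one unstable gate per
    step; each firing is one transition (event) on a node."""
    nodes = [0] * n_inverters
    events = []
    for _ in range(steps):
        for i in range(n_inverters):
            new = 1 - nodes[i - 1]  # inverter: output = NOT(driving node)
            if new != nodes[i]:     # gate is unstable, so fire it
                nodes[i] = new
                events.append((i, new))
                break
    return events

# Six gate firings; in steady state, one full oscillation of a
# three-inverter ring comprises six events (each node driven up once
# and down once), matching the count noted above.
events = ring_oscillator_events()
```

The c-element model makes the synchronization behaviour explicit: if only one input has transitioned, the output holds, forcing the faster cycle to wait for the slower one.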
If cycles are known to be mutually exclusive, then they may be connected using combinational logic (AND, OR). This allows the active cycle to continue regardless of the inactive cycles, and is generally used to implement delay-insensitive encodings. For larger systems, this is too much to manage, so the cycles are partitioned into processes. Each process describes the interaction between a set of cycles grouped into channels, and the process boundary breaks these cycles into channel ports. Each port has a set of request nodes that tend to encode data and acknowledge nodes that tend to be dataless. The process that drives the request is the sender, while the process that drives the acknowledgement is the receiver. The sender and receiver communicate using certain protocols, and the sequential triggering of communication actions from one process to the next is modeled as a token traversing the pipeline.

Stability and non-interference

The correct operation of a QDI circuit requires that events be limited to monotonic digital transitions. Instability (a glitch) or interference (a short) can force the system into illegal states, causing incorrect or unstable results, deadlock, and circuit damage. The previously described cyclic structure that ensures stability is called acknowledgement. A transition T1 acknowledges another transition T2 if there is a causal sequence of events from T1 to T2 that prevents T2 from occurring until T1 has completed. For a DI circuit, every transition must acknowledge every input to its associated gate. For a QDI circuit, there are a few exceptions in which the stability property is maintained using timing assumptions guaranteed with layout constraints rather than causality.

Isochronic fork assumption

An isochronic fork is a wire fork in which one end does not acknowledge the transition driving the wire. A good example of such a fork can be found in the standard implementation of a pre-charge half buffer. There are two types of isochronic forks.
An asymmetric isochronic fork assumes that the transition on the non-acknowledging end happens before or when the transition has been observed on the acknowledging end. A symmetric isochronic fork ensures that both ends observe the transition simultaneously. In QDI circuits, every transition that drives a wire fork must be acknowledged by at least one end of that fork. This concept was first introduced by A. J. Martin to distinguish between asynchronous circuits that satisfy QDI requirements and those that do not. Martin also established that it is impossible to design useful systems without including at least some isochronic forks, given reasonable assumptions about the available circuit elements. Isochronic forks were long thought to be the weakest compromise away from fully delay-insensitive systems. In fact, every CMOS gate has one or more internal isochronic forks between the pull-up and pull-down networks: the pull-down network only acknowledges the up-going transitions of the inputs, while the pull-up network only acknowledges the down-going transitions.

Adversarial path assumption

The adversarial path assumption also deals with wire forks, but is ultimately weaker than the isochronic fork assumption. At some point in the circuit after a wire fork, the two paths must merge back into one. The adversarial path is the one that fails to acknowledge the transition on the wire fork. This assumption states that the transition propagating down the acknowledging path reaches the merge point after it would have down the adversarial path. This effectively extends the isochronic fork assumption beyond the confines of the forked wire and into the connected paths of gates.

Half-cycle timing assumption

This assumption relaxes the QDI requirements a little further in the quest for performance. The C-element is effectively three gates (the logic, the driver, and the feedback) and is non-inverting.
This gets to be cumbersome and expensive if there is a need for a large amount of logic. The acknowledgement theorem states that the driver must acknowledge the logic. The half-cycle timing assumption assumes that the driver and feedback will stabilize before the inputs to the logic are allowed to switch. This allows the designer to use the output of the logic directly, bypassing the driver and making shorter cycles for higher frequency processing.

Atomic complex gates

A large amount of the automatic synthesis literature uses atomic complex gates. A tree of gates is assumed to transition completely before any of the inputs at the leaves of the tree are allowed to switch again. While this assumption allows automatic synthesis tools to bypass the bubble reshuffling problem, the reliability of these gates tends to be difficult to guarantee.

Relative timing

Relative timing is a framework for making and implementing arbitrary timing assumptions in QDI circuits. It represents a timing assumption as a virtual causality arc to complete a broken cycle in the event graph. This allows designers to reason about timing assumptions as a method to realize circuits with higher throughput and energy efficiency by systematically sacrificing robustness.

Representations

Communicating hardware processes (CHP)

Communicating hardware processes (CHP) is a program notation for QDI circuits inspired by Tony Hoare's communicating sequential processes (CSP) and Edsger W. Dijkstra's guarded commands. The syntax is described below in descending precedence.

Skip: skip does nothing. It simply acts as a placeholder for pass-through conditions.
Dataless assignment: a+ sets the voltage of the node a to Vdd, while a- sets the voltage of a to GND.
Assignment: a := e evaluates the expression e, then assigns the resulting value to the variable a.
Send: X!e evaluates the expression e, then sends the resulting value across the channel X. X! is a dataless send.
Receive: X?a waits until there is a valid value on the channel X, then assigns that value to the variable a. X? is a dataless receive.
Probe: #X returns the value waiting on the channel X without executing the receive.
Simultaneous composition: S * T executes the process fragments S and T at the same time.
Internal parallel composition: S, T executes the process fragments S and T in any order.
Sequential composition: S; T executes the process fragment S followed by T.
Parallel composition: S || T executes the process fragments S and T in any order. This is functionally equivalent to internal parallel composition but with lower precedence.
Deterministic selection: [G0 -> S0 [] G1 -> S1 [] ... [] Gn -> Sn] implements choice, in which G0, G1, ..., Gn are guards (dataless boolean expressions, or data expressions that are implicitly cast using a validity check) and S0, S1, ..., Sn are process fragments. Deterministic selection waits until one of the guards evaluates to Vdd, then proceeds to execute the guard's associated process fragment. If two guards evaluate to Vdd during the same window of time, an error occurs. [G] is shorthand for [G -> skip] and simply implements a wait.
Non-deterministic selection: [G0 -> S0 : G1 -> S1 : ... : Gn -> Sn] is the same as deterministic selection except that more than one guard is allowed to evaluate to Vdd. Only the process fragment associated with the first guard to evaluate to Vdd is executed.
Repetition: *[G0 -> S0 [] G1 -> S1 [] ... [] Gn -> Sn] or *[G0 -> S0 : G1 -> S1 : ... : Gn -> Sn] is similar to the associated selection statements except that the action is repeated while any guard evaluates to Vdd. *[S] is shorthand for *[Vdd -> S] and implements infinite repetition.

Hand-shaking expansions (HSE)

Hand-shaking expansions are a subset of CHP in which channel protocols are expanded into guards and assignments, and only dataless operators are permitted. This is an intermediate representation toward the synthesis of QDI circuits.
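As a toy illustration of how a channel action expands into dataless guards and assignments, the following sketch steps through the classic four-phase handshake on a request/acknowledge wire pair. The wire names and trace format are assumptions for illustration, not a standard notation:

```python
# Minimal sketch of a dataless four-phase handshake between a sender
# driving a request wire (r) and a receiver driving an acknowledge
# wire (a). The sender's HSE is r+; [a]; r-; [~a], and the receiver's
# is [r]; a+; [~r]; a-.

def four_phase_cycles(n):
    r = a = 0
    trace = []
    for _ in range(n):
        r = 1; trace.append(('r+', r, a))   # sender raises request
        a = 1; trace.append(('a+', r, a))   # receiver acknowledges
        r = 0; trace.append(('r-', r, a))   # sender resets request
        a = 0; trace.append(('a-', r, a))   # receiver resets acknowledge
    return trace

trace = four_phase_cycles(2)
print(len(trace))  # 8 transitions: four per completed handshake
```

Each completed communication costs four wire transitions, which is why protocol overhead figures so prominently in the cycle counts quoted for the pipeline families later in the article.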
Petri nets (PN)

A Petri net (PN) is a bipartite graph of places and transitions used as a model for QDI circuits. Transitions in the Petri net represent voltage transitions on nodes in the circuit. Places represent the partial states between transitions. A token inside a place acts as a program counter identifying the current state of the system, and multiple tokens may exist in a Petri net simultaneously. However, for QDI circuits multiple tokens in the same place constitute an error. When a transition has tokens on every input place, that transition is enabled. When the transition fires, the tokens are removed from the input places and new tokens are created on all of the output places. This means that a transition that has multiple output places is a parallel split, and a transition with multiple input places is a parallel merge. If a place has multiple output transitions, then any one of those transitions could fire. However, doing so would remove the token from the place, preventing any other transition from firing. This effectively implements choice. Therefore, a place with multiple output transitions is a conditional split, and a place with multiple input transitions is a conditional merge.

Event-rule systems (ER)

Event-rule systems (ER) use a similar notation to implement a restricted subset of Petri net functionality in which there are transitions and arcs, but no places. This means that the baseline ER system lacks the choice implemented by conditional splits and merges in a Petri net, and the disjunction implemented by conditional merges. The baseline ER system also doesn't allow feedback. While Petri nets are used to model the circuit logic, an ER system models the timing and execution trace of the circuit, recording the delays and dependencies of each transition. This is generally used to determine which gates need to be faster and which gates can be slower, optimizing the sizing of devices in the system.
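The Petri net firing rule described above fits in a few lines of code. This is a minimal sketch with invented class and method names, not a standard library:

```python
# A tiny Petri net interpreter matching the firing rule described above:
# a transition is enabled when every input place holds a token; firing
# consumes those tokens and produces one on each output place.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A transition with two output places acts as a parallel split, and one
# with two input places as a parallel merge.
net = PetriNet({'p0': 1})
net.add_transition('split', ['p0'], ['p1', 'p2'])
net.add_transition('merge', ['p1', 'p2'], ['p0'])
net.fire('split')
print(net.marking)  # {'p0': 0, 'p1': 1, 'p2': 1}
net.fire('merge')
print(net.marking)  # {'p0': 1, 'p1': 0, 'p2': 0}
```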
Repetitive event-rule systems (RER) add feedback by folding the trace back on itself, marking the fold point with a tick mark. Extended event-rule systems (XER) add disjunction.

Production rule set (PRS)

A production rule specifies either the pull-up or pull-down network of a gate in a QDI circuit and follows the syntax G -> S, in which G is a guard as described above and S is one or more dataless assignments in parallel as described above. In states not covered by the guards, it is assumed that the assigned nodes remain at their previous states. This can be achieved using a staticizor with either weak or combinational feedback. The most basic example is the C-element, in which the guards do not cover the states where A and B are not the same value.

Synthesis

There are many techniques for constructing QDI circuits, but they can generally be classified into two strategies.

Formal synthesis

Formal synthesis was introduced by Alain Martin in 1991. The method involves making successive program transformations which are proven to maintain program correctness. The goal of these transformations is to convert the original sequential program into a parallel set of communicating processes, each of which maps well to a single pipeline stage. The possible transformations include: projection, which splits a process with disparate, non-interacting sets of variables into a separate process per set; process decomposition, which splits a process with minimally interacting variable sets into a separate process per set, in which each process communicates with another only as necessary across channels; and slack matching, which adds pipeline stages between two communicating processes in order to increase overall throughput. Once the program is decomposed into a set of small communicating processes, it is expanded into hand-shaking expansions (HSE). Channel actions are expanded into their constituent protocols, and multi-bit operators are expanded into their circuit implementations.
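The C-element production rules mentioned above (a & b -> c+ and ~a & ~b -> c-) can be evaluated symbolically to show the state-holding behaviour of the uncovered states. This is a toy evaluation sketch, not a circuit-level model:

```python
# Evaluating the C-element production rules:
#   a & b -> c+    and    ~a & ~b -> c-
# In the uncovered states (a != b) the node holds its previous value,
# which is what the staticizor feedback provides in the physical gate.

def eval_prs(a, b, c_prev):
    if a and b:
        return 1            # guard of c+ is true
    if not a and not b:
        return 0            # guard of c- is true
    return c_prev           # neither guard fires: state is held

c, seq = 0, []
for a, b in [(1, 1), (1, 0), (0, 0), (0, 1)]:
    c = eval_prs(a, b, c)
    seq.append(c)
print(seq)  # [1, 1, 0, 0]
```

The second and fourth steps show the hold behaviour: with mismatched inputs, the output keeps the value established by the last covered state.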
These HSE are then reshuffled to optimize the circuit implementation by reducing the number of dependencies. Once the reshuffling is decided upon, state variables are added to disambiguate circuit states for a complete state encoding. Next, minimal guards are derived for each signal assignment, producing production rules. There are multiple methods for doing this, including guard strengthening, guard weakening, and others. The production rules are not necessarily CMOS-implementable at this point, so bubble reshuffling moves signal inversions around the circuit in an attempt to make them so. However, bubble reshuffling is not guaranteed to succeed. This is where atomic complex gates are generally used in automated synthesis programs.

Syntax-directed translation

The second strategy, syntax-directed translation, was first introduced in 1988 by Steven Burns. This approach seeks simplicity at the expense of circuit performance by mapping each CHP syntax fragment to a hand-compiled circuit template. Synthesizing a QDI circuit using this method strictly implements the control flow as dictated by the program. This approach was later adopted by Philips Research Laboratories in their implementation of Tangram. Unlike Steven Burns' approach using circuit templates, Tangram mapped the syntax to a strict set of standard cells, facilitating layout as well as synthesis.

Templated synthesis

A hybrid approach introduced by Andrew Lines in 1998 transforms the sequential specification into parallel specifications as in formal synthesis, but then uses predefined pipeline templates to implement those parallel processes, similar to syntax-directed translation. Lines outlined three efficient logic families, or reshufflings.

Weak condition half buffer (WCHB)

Weak condition half buffer (WCHB) is the simplest and fastest of the logic families, with a 10-transition pipeline cycle (or 6 using the half-cycle timing assumption).
However, it is also limited to simpler computations, because more complex computations tend to necessitate long chains of transistors in the pull-up network of the forward driver. More complex computations can generally be broken up into simpler stages or handled directly with one of the pre-charge families. The WCHB is a half buffer, meaning that a pipeline of N stages can contain at most N/2 tokens at once. This is because the reset of the output request Rr must wait until after the reset of the input Lr.

Pre-charge half buffer (PCHB)

Pre-charge half buffer (PCHB) uses domino logic to implement a more complex computational pipeline stage. This removes the long pull-up network problem, but also introduces an isochronic fork on the input data which must be resolved later in the cycle. This causes the pipeline cycle to be 14 transitions long (or 10 using the half-cycle timing assumption).

Pre-charge full buffer (PCFB)

Pre-charge full buffers (PCFB) are very similar to PCHB, but adjust the reset phase of the reshuffling to implement full buffering. This means that a pipeline of N PCFB stages can contain at most N tokens at once. This is because the reset of the output request Rr is allowed to happen before the reset of the input Lr.

Verification

Along with the normal verification techniques of testing, coverage, etc., QDI circuits may be verified formally by inverting the formal synthesis procedure to derive a CHP specification from the circuit. This CHP specification can then be compared against the original to prove correctness.
https://en.wikipedia.org/wiki/Quotient%20space%20%28topology%29
In topology and related areas of mathematics, the quotient space of a topological space under a given equivalence relation is a new topological space constructed by endowing the quotient set of the original topological space with the quotient topology, that is, with the finest topology that makes continuous the canonical projection map (the function that maps points to their equivalence classes). In other words, a subset of a quotient space is open if and only if its preimage under the canonical projection map is open in the original topological space. Intuitively speaking, the points of each equivalence class are identified or "glued together" in forming a new topological space. For example, identifying the points of a sphere that belong to the same diameter produces the projective plane as a quotient space.

Definition

Let X be a topological space, and let ~ be an equivalence relation on X. The quotient set Y = X/~ is the set of equivalence classes of elements of X. The equivalence class of x is denoted [x]. The construction of Y defines a canonical surjection q : X → Y. As discussed below, q is a quotient mapping, commonly called the canonical quotient map, or canonical projection map, associated to ~. The quotient space under ~ is the set Y equipped with the quotient topology, whose open sets are those subsets U ⊆ Y whose preimage q⁻¹(U) is open. In other words, U is open in the quotient topology on Y if and only if q⁻¹(U) is open in X. Similarly, a subset C ⊆ Y is closed if and only if q⁻¹(C) is closed in X. The quotient topology is the final topology on the quotient set, with respect to the map q.

Quotient map

A map f : X → Y is a quotient map (sometimes called an identification map) if it is surjective and Y is equipped with the final topology induced by f. The latter condition admits two more-elementary formulations: a subset V ⊆ Y is open (closed) if and only if f⁻¹(V) is open (resp. closed). Every quotient map is continuous, but not every continuous map is a quotient map.
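On a finite space the definition can be checked mechanically: a subset of the quotient set is open exactly when its preimage under the projection is open in the original space. A small sketch follows; the example space, topology, and equivalence classes are illustrative choices:

```python
# Computing the quotient topology on a finite space. X = {0,1,2,3} with
# a (Sierpinski-like) topology, and the equivalence relation 0~1, 2~3.
# A subset U of X/~ is open iff its preimage under q is open in X.

from itertools import chain, combinations

X = {0, 1, 2, 3}
tau = [set(), {0}, {0, 1}, {0, 1, 2, 3}]          # a topology on X

classes = [frozenset({0, 1}), frozenset({2, 3})]  # the quotient set X/~
q = {x: c for c in classes for x in c}            # canonical projection

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def preimage(U):
    return {x for x in X if q[x] in U}

quotient_topology = [set(U) for U in powerset(classes) if preimage(U) in tau]
print(len(quotient_topology))  # 3: the empty set, {[0]}, and all of X/~
```

Here {[2]} is not open in the quotient because its preimage {2, 3} is not open in X, so the two-point quotient carries the Sierpinski topology.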
Saturated sets

A subset S of X is called saturated (with respect to a surjection f : X → Y) if it is of the form S = f⁻¹(T) for some set T, which is true if and only if f⁻¹(f(S)) = S. The assignment T ↦ f⁻¹(T) establishes a one-to-one correspondence (whose inverse is S ↦ f(S)) between subsets T of Y and saturated subsets of X. With this terminology, a surjection f : X → Y is a quotient map if and only if, for every saturated subset S of X, S is open in X if and only if f(S) is open in Y. In particular, open subsets of X that are not saturated have no impact on whether the function f is a quotient map (or, indeed, continuous: a function f is continuous if and only if, for every saturated S such that f(S) is open, the set S is open in X). Indeed, if τ is a topology on X and f : X → Y is any map, then the set of all S ∈ τ that are saturated subsets of X forms a topology on X. If Y is also a topological space, then f is a quotient map (respectively, continuous) with respect to τ if and only if the same is true of the topology of saturated open sets.

Quotient space of fibers characterization

Given an equivalence relation ~ on X, denote the equivalence class of a point x by [x], and let X/~ denote the set of equivalence classes. The map q : X → X/~ that sends points to their equivalence classes (that is, defined by q(x) = [x] for every x) is called the canonical map. It is a surjective map, and for all a, b ∈ X, q(a) = q(b) if and only if a ~ b; consequently, q⁻¹(q(a)) = [a] for all a ∈ X. In particular, this shows that the set of equivalence classes is exactly the set of fibers of the canonical map q. If X is a topological space, then giving X/~ the quotient topology induced by q will make it into a quotient space and make q into a quotient map. Up to a homeomorphism, this construction is representative of all quotient spaces; the precise meaning of this is now explained. Let f : X → Y be a surjection between topological spaces (not yet assumed to be continuous or a quotient map) and declare, for all a, b ∈ X, that a ~ b if and only if f(a) = f(b). Then ~ is an equivalence relation on X such that [a] = f⁻¹(f(a)) for every a ∈ X, which implies that f([a]) is a singleton set; denote the unique element in f([a]) by f̂([a]) (so by definition, f([a]) = {f̂([a])}).
The assignment [a] ↦ f̂([a]) defines a bijection between the fibers of f and the points in Y. Define the map f̂ : X/~ → Y as above (by f̂([a]) = f(a)) and give X/~ the quotient topology induced by q (which makes q a quotient map). These maps are related by f = f̂ ∘ q. From this and the fact that q is a quotient map, it follows that f is continuous if and only if this is true of f̂. Furthermore, f is a quotient map if and only if f̂ is a homeomorphism (or equivalently, if and only if both f̂ and its inverse are continuous).

Related definitions

A hereditarily quotient map is a surjective map f : X → Y with the property that for every subset T ⊆ Y, the restriction of f to f⁻¹(T), as a map f⁻¹(T) → T, is also a quotient map. There exist quotient maps that are not hereditarily quotient.

Examples

Gluing. Topologists talk of gluing points together. If X is a topological space, gluing the points x and y in X means considering the quotient space obtained from the equivalence relation a ~ b if and only if a = b or a, b ∈ {x, y}.

Consider the unit square I² = [0, 1] × [0, 1] and the equivalence relation ~ generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class. Then I²/~ is homeomorphic to the sphere S².

Adjunction space. More generally, suppose X is a space and A is a subspace of X. One can identify all points in A to a single equivalence class and leave points outside of A equivalent only to themselves. The resulting quotient space is denoted X/A. The 2-sphere is then homeomorphic to a closed disc with its boundary identified to a single point: D²/∂D².

Consider the set R of real numbers with the ordinary topology, and write x ~ y if and only if x − y is an integer. Then the quotient space R/~ is homeomorphic to the unit circle S¹ via the homeomorphism which sends the equivalence class of x to exp(2πix).

A generalization of the previous example is the following: suppose a topological group G acts continuously on a space X. One can form an equivalence relation on X by saying points are equivalent if and only if they lie in the same orbit. The quotient space under this relation is called the orbit space, denoted X/G. In the previous example, G = Z acts on R by translation.
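The real-line example, where x ~ y exactly when x − y is an integer, can be checked numerically: the map x ↦ (cos 2πx, sin 2πx) is constant on equivalence classes, so it descends to a map from the quotient onto the unit circle. A quick sketch:

```python
# Numerical sanity check: x and x + n land on the same point of the
# unit circle for every integer n, so the winding map respects the
# equivalence relation x ~ y iff x - y is an integer.

import math

def to_circle(x):
    return (math.cos(2 * math.pi * x), math.sin(2 * math.pi * x))

x = 0.3
for n in (-2, -1, 0, 1, 5):
    px, py = to_circle(x + n)
    qx, qy = to_circle(x)
    assert math.isclose(px, qx, abs_tol=1e-9)
    assert math.isclose(py, qy, abs_tol=1e-9)
print("x and x + n map to the same circle point for integer n")
```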
The orbit space of this action is homeomorphic to the unit circle. Note: the notation R/Z is somewhat ambiguous. If Z is understood to be a group acting on R via addition, then the quotient is the circle. However, if Z is thought of as a topological subspace of R (that is identified as a single point) then the quotient (which is identifiable with the set ) is a countably infinite bouquet of circles joined at a single point. This next example shows that it is in general not true that if q is a quotient map then every convergent sequence (respectively, every convergent net) in the codomain has a lift (by q) to a convergent sequence (or convergent net) in the domain. Let and Let and let be the quotient map so that and for every The map defined by is well-defined (because ) and a homeomorphism. Let and let be any sequences (or more generally, any nets) valued in such that in Then the sequence converges to in but there does not exist any convergent lift of this sequence by the quotient map (that is, there is no sequence in that both converges to some and satisfies for every ). This counterexample can be generalized to nets by letting be any directed set, and making into a net by declaring that for any holds if and only if both (1) and (2) if then the -indexed net defined by letting equal and equal to has no lift (by ) to a convergent -indexed net in

Properties

Quotient maps q : X → Y are characterized among surjective maps by the following property: if Z is any topological space and g : Y → Z is any function, then g is continuous if and only if g ∘ q is continuous. The quotient space X/~ together with the quotient map q : X → X/~ is characterized by the following universal property: if g : X → Z is a continuous map such that a ~ b implies g(a) = g(b) for all a, b ∈ X, then there exists a unique continuous map f : X/~ → Z such that g = f ∘ q. One says that g descends to the quotient to express this, that is, that it factorizes through the quotient space.
The continuous maps defined on the quotient X/~ are, therefore, precisely those maps which arise from continuous maps defined on X that respect the equivalence relation (in the sense that they send equivalent elements to the same image). This criterion is copiously used when studying quotient spaces. Given a continuous surjection q : X → Y, it is useful to have criteria by which one can determine if q is a quotient map. Two sufficient criteria are that q be open or closed. Note that these conditions are only sufficient, not necessary. It is easy to construct examples of quotient maps that are neither open nor closed. For topological groups, the quotient map is open.

Compatibility with other topological notions

Separation: In general, quotient spaces are ill-behaved with respect to separation axioms. The separation properties of X need not be inherited by X/~, and X/~ may have separation properties not shared by X. X/~ is a T1 space if and only if every equivalence class of ~ is closed in X. If the quotient map is open, then X/~ is a Hausdorff space if and only if ~ is a closed subset of the product space X × X.

Connectedness: If a space is connected or path connected, then so are all its quotient spaces. A quotient space of a simply connected or contractible space need not share those properties.

Compactness: If a space is compact, then so are all its quotient spaces. A quotient space of a locally compact space need not be locally compact.

Dimension: The topological dimension of a quotient space can be more (as well as less) than the dimension of the original space; space-filling curves provide such examples.
https://en.wikipedia.org/wiki/Violin%20acoustics
Violin acoustics is an area of study within musical acoustics concerned with how the sound of a violin is created as the result of interactions between its many parts. These acoustic qualities are similar to those of other members of the violin family, such as the viola. The energy of a vibrating string is transmitted through the bridge to the body of the violin, which allows the sound to radiate into the surrounding air. Both ends of a violin string are effectively stationary, allowing for the creation of standing waves. A range of simultaneously produced harmonics each affect the timbre, but only the fundamental frequency is heard. The frequency of a note can be raised by increasing the string's tension, or by decreasing its length or mass. The number of harmonics present in the tone can be reduced, for instance by using the left hand to shorten the string length. The loudness and timbre of each of the strings is not the same, and the material used affects sound quality and ease of articulation. Violin strings were originally made from catgut but are now usually made of steel or a synthetic material. Most strings are wound with metal to increase their mass while avoiding excess thickness. During a bow stroke, the string is pulled until the string's tension causes it to return, after which it receives energy again from the bow. Violin players can control bow speed, the force used, the position of the bow on the string, and the amount of hair in contact with the string. The static forces acting on the bridge, which supports one end of the strings' playing length, are large: dynamic forces acting on the bridge force it to rock back and forth, which causes the vibrations from the strings to be transmitted. A violin's body is strong enough to resist the tension from the strings, but also light enough to vibrate properly. It is made of two arched wooden plates with ribs around the sides and has two f-holes on either side of the bridge.
It acts as a sound box to couple the vibration of the strings to the surrounding air, with the different parts of the body all responding differently to the notes that are played, and every part (including the bass bar concealed inside) contributing to the violin's characteristic sound. In comparison to when a string is bowed, a plucked string dampens more quickly. The other members of the violin family have different, but similar, timbres. The characteristics of the viola and the double bass contribute to them being used less in the orchestra as solo instruments, in contrast to the cello (violoncello), which is not adversely affected by having the optimum dimensions to correspond with the pitch of its open strings.

Historical background

The nature of vibrating strings was studied by the ancient Ionian Greek philosopher Pythagoras, who is thought to have been the first to observe the relationship between the lengths of vibrating strings and the consonant sounds they make. In the sixteenth century, the Italian lutenist and composer Vincenzo Galilei pioneered the systematic testing and measurement of stretched strings, using lute strings. He discovered that while the ratio of an interval is proportional to the length of the string, it was directly proportional to the square root of the tension. His son Galileo Galilei published the relationship between frequency, length, tension and diameter in Two New Sciences (1638). The earliest violin makers, though highly skilled, did not advance any scientific knowledge of the acoustics of stringed instruments. During the nineteenth century, the multi-harmonic sound from a bowed string was first studied in detail by the French physicist Félix Savart. The German physicist Hermann von Helmholtz investigated the physics of the plucked string, and showed that the bowed string travelled in a triangular shape with the apex moving at a constant speed.
The violin's modes of vibration were researched in Germany during the 1930s by Hermann Backhaus and his student Hermann Meinel, whose work included the investigation of frequency responses of violins. Understanding of the acoustical properties of violins was developed by F.A. Saunders in the 1930s and 40s, work that was continued over the following decades by Saunders and his assistant Carleen Hutchins, and also Werner Lottermoser, Jürgen Meyer, and Simone Sacconi. Hutchins' work dominated the field of violin acoustics for twenty years from the 1960s onwards, until it was superseded by the use of modal analysis, a technique that was, according to the acoustician George Bissinger, "of enormous importance for understanding [the] acoustics of the violin". Strings The open strings of a violin are of the same length from the bridge to the nut of the violin, but vary in pitch because they have different masses per unit length. Both ends of a violin string are essentially stationary when it vibrates, allowing for the creation of standing waves (eigenmodes), caused by the superposition of two sine waves travelling past each other. A vibrating string does not produce a single frequency. The sound may be described as a combination of a fundamental frequency and its overtones, which cause the sound to have a quality that is individual to the instrument, known as the timbre. The timbre is affected by the number and comparative strength of the overtones (harmonics) present in a tone. Even though they are produced at the same time, only the fundamental frequency—which has the greatest amplitude—is heard. The violin is unusual in that it produces frequencies beyond the upper audible limit for humans. The fundamental frequency and overtones of the resulting sound depend on the material properties of the string: tension, length, and mass, as well as damping effects and the stiffness of the string. Violinists stop a string with a left-hand fingertip, shortening its playing length. 
Most often the string is stopped against the violin's fingerboard, but in some cases a string lightly touched with the fingertip is enough, causing an artificial harmonic to be produced. Stopping the string at a shorter length has the effect of raising its pitch, and since the fingerboard is unfretted, any frequency on the length of the string is possible. There is a difference in timbre between notes made on an 'open' string and those produced by placing the left hand fingers on the string, as the finger acts to reduce the number of harmonics present. Additionally, the loudness and timbre of the four strings is not the same. The fingering positions for a particular interval vary according to the length of the vibrating part of the string. For a violin, the whole tone interval on an open string is about —at the other end of the string, the same interval is less than a third of this size. The equivalent numbers are successively larger for a viola, a cello (violoncello) and a double bass. When the violinist is directed to pluck a string (Ital. pizzicato), the sound produced dies away, or dampens, quickly: the dampening is more striking for a violin compared with the other members of the violin family because of its smaller dimensions, and the effect is greater if an open string is plucked. During a pizzicato note, the decaying higher harmonics diminish more quickly than the lower ones. The vibrato effect on a violin is achieved when muscles in the arm, hand and wrist act to cause the pitch of a note to oscillate. A typical vibrato has a frequency of 6 Hz and causes the pitch to vary by a quarter of a tone.

Tension

The tension (T) in a stretched string is given by T = ESΔL/L, where E is the Young's modulus, S is the cross-sectional area, ΔL is the extension, and L is the string length. For vibrations with a large amplitude, the tension is not constant.
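The observation above, that the same musical interval spans successively shorter distances further up the string, follows from frequency varying inversely with vibrating length. A sketch, assuming an illustrative 325 mm string length and a 9:8 whole tone:

```python
# Why equal musical intervals need unequal finger spacings: frequency
# varies inversely with vibrating length, so the distance covering a
# whole tone (taken here as a 9:8 frequency ratio) shrinks as the
# string is stopped shorter. The 325 mm length is an assumed, typical
# violin value, not a figure from the article.

RATIO = 9 / 8   # whole tone as a frequency ratio

def whole_tone_distance(length_mm):
    """Distance to stop the string so the pitch rises by one whole tone."""
    return length_mm * (1 - 1 / RATIO)

first = whole_tone_distance(325.0)       # from the open string
high = whole_tone_distance(325.0 / 3)    # high on the fingerboard
print(round(first, 1), round(high, 1))   # the second is a third of the first
```

Because the spacing scales linearly with the remaining vibrating length, a whole tone near the nut spans three times the distance of a whole tone played at a third of the string length, matching the "less than a third" comparison in the text.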
Increasing the tension on a string results in a higher frequency note: the frequency of the vibrating string, which is directly proportional to the square root of the tension, can be represented by the following equation: f = (1/2)√(T/(ML)), where f is the fundamental frequency of the string, T is the tension force, M is the mass and L is the length of the string. The strings of a violin are attached to adjustable tuning pegs and (with some strings) fine tuners. Tuning each string is done by loosening or tightening it until the desired pitch is reached. The tension of a violin string ranges from . Length For any wave travelling at a speed v, travelling a distance λ in one period T, v = λ/T. For a frequency f, f = 1/T = v/λ. For the fundamental frequency of a vibrating string on a violin, the string length is λ/2, where λ is the associated wavelength, so f = v/(2L). Materials String material influences the overtone mix and affects the quality of the sound. Response and ease of articulation are also affected by choice of string materials. Violin strings were originally made from catgut, which is still available and used by some professional musicians, although strings made of other materials are less expensive to make and are not as sensitive to temperature. Modern strings are made of steel-core, stranded steel-core, or a synthetic material such as Perlon. Violin strings (with the exception of most E strings) are helically wound with metal chosen for its density and cost. The winding on a string increases the mass of the string, alters the tone (quality of sound produced) to make it sound brighter or warmer, and affects the response. A plucked steel string sounds duller than one made of gut, as the action does not deform steel into a pointed shape as easily, and so does not produce as many higher frequency harmonics. The bridge The bridge, which is placed on the top of the body of the violin where the soundboard is highest, supports one end of the strings' playing length.
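The frequency and wavelength relations above can be cross-checked in a few lines; the tension, mass and length below are illustrative assumptions, not measurements from the text:

```python
import math

T = 50.0     # tension, N (assumed)
M = 0.35e-3  # total string mass, kg (assumed)
L = 0.325    # vibrating length, m (assumed)

# Mersenne's law written with the total mass M: f = (1/2) * sqrt(T / (M * L))
f_mersenne = 0.5 * math.sqrt(T / (M * L))

# Equivalent wave picture: v = sqrt(T / mu) with mu = M / L, and f = v / (2 * L)
mu = M / L
v = math.sqrt(T / mu)
f_wave = v / (2 * L)

print(round(f_mersenne, 1), round(f_wave, 1))  # the two forms agree
```

Both forms give the same fundamental, since substituting μ = M/L into f = v/(2L) reproduces the Mersenne expression.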
The static forces acting on the bridge are large, and dependent on the tension in the strings: passes down through the bridge as a result of a tension in the strings of . The string 'break' angle made by the string across the bridge affects the downward force, and is typically 13 to 15° to the horizontal. The bridge transfers energy from the strings to the body of the violin. As a first approximation, it is considered to act as a node, as otherwise the fundamental frequencies and their related harmonics would not be sustained when a note is played, but its motion is critical in determining how energy is transmitted from the strings to the body, and the behaviour of the strings themselves. One component of its motion is side-to-side rocking as it moves with the string. It may be usefully viewed as a mechanical filter, or an arrangement of masses and "springs" that filters and shapes the timbre of the sound. The bridge is shaped to emphasize a singer's formant at about 3000 Hz. Since the early 1980s it has been known that high quality violins vibrate better at frequencies around 2–3 kHz because of an effect attributed to the resonance properties of the bridge, and now referred to as the 'bridge-hill' effect. Muting is achieved by fitting a clip onto the bridge, which absorbs a proportion of the energy transmitted to the body of the instrument. Both a reduction in sound intensity and a different timbre are produced, so that using a mute is not seen by musicians as the main method to use when wanting to play more quietly. The bow A violin can sustain its tone by the process of bowing, when friction causes the string to be pulled sideways by the bow until an opposing force caused by the string's tension becomes great enough to cause the string to slip back. The string returns to its equilibrium position and then moves sideways past this position, after which it receives energy again from the moving bow.
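The stick-slip cycle just described can be illustrated with a toy friction oscillator: a single 'string element' dragged by a moving bow through friction and pulled back by a spring. This is a deliberately crude sketch with assumed parameters, not a model of a real string:

```python
# Toy stick-slip model of a bowed-string element: the bow drags the element
# (stick) until the restoring force exceeds static friction, the element then
# slips back and is recaptured by the bow hair. All values are assumptions.
m, k = 1e-3, 100.0          # effective mass (kg) and restoring stiffness (N/m)
F_stick, F_slip = 0.6, 0.3  # static and kinetic friction limits (N)
v_bow, dt = 0.1, 1e-4       # bow speed (m/s) and time step (s)

x, v, sticking = 0.0, 0.1, True
xs = []
for _ in range(20000):
    spring = -k * x
    if sticking and abs(spring) > F_stick:
        sticking = False                    # friction can no longer hold: slip
    if sticking:
        v = v_bow                           # the element travels with the bow
    else:
        friction = F_slip if v < v_bow else -F_slip
        v += (spring + friction) / m * dt   # semi-implicit Euler step
        if v >= v_bow and abs(spring) <= F_stick:
            sticking, v = True, v_bow       # the bow recaptures the element
    x += v * dt
    xs.append(x)

# Peak-to-peak displacement after the initial transient: the displacement
# traces a sawtooth-like cycle of slow dragging and fast slipping back.
amplitude = max(xs[5000:]) - min(xs[5000:])
print(round(amplitude, 4))
```

Even this caricature reproduces the qualitative behaviour in the text: a slow 'stick' phase at the bow speed followed by a rapid slip back, repeating as a steady oscillation.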
The bow consists of a flat ribbon of parallel horse hairs stretched between the ends of a stick, which is generally made of Pernambuco wood, used because of its particular elastic properties. The hair is coated with rosin to provide a controlled 'stick-slip oscillation' as it moves at right angles to the string. In 2004, Jim Woodhouse and Paul Galluzzo of Cambridge University described the motion of a bowed string as being "the only stick-slip oscillation which is reasonably well understood". The length, weight, and balance point of modern bows are standardized. Players may notice variations in sound and handling from bow to bow, based on these parameters as well as stiffness and moment of inertia. A violinist or violist would naturally tend to play louder when pushing the bow across the string (an 'up-bow'), as the leverage is greater. At its quietest, the instrument has a power of 0.0000038 watts, compared with 0.09 watts for a small orchestra: the range of sound pressure levels of the instrument is from 25 to 30 dB. Physics of bowing Violinists generally bow between the bridge and the fingerboard, and are trained to keep the bow perpendicular to the string. In bowing, the three most prominent factors under the player's immediate control are bow speed, force, and the place where the hair crosses the string (known as the 'sounding point'): a vibrating string with a shorter length causes the sounding point to be positioned closer to the bridge. The player may also vary the amount of hair in contact with the string, by tilting the bow stick more or less away from the bridge. The string twists as it is bowed, which adds a 'ripple' to the waveform: this effect is increased if the string is more massive. Bowing directly above the fingerboard (Ital. sulla tastiera) produces what the 20th century American composer and author Walter Piston described as a "very soft, floating quality", caused by the string being forced to vibrate with a greater amplitude.
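The power figures quoted above can be put on a decibel scale with a quick check:

```python
import math

p_violin_min = 0.0000038  # W, the violin at its quietest (from the text)
p_orchestra = 0.09        # W, a small orchestra (from the text)

# Relative level in decibels: 10 * log10(P1 / P2)
difference_db = 10 * math.log10(p_orchestra / p_violin_min)
print(round(difference_db, 1))  # ~43.7 dB between the two power figures
```

So the quietest violin sits roughly 44 dB below the small orchestra's output, a much larger gap than the instrument's own 25 to 30 dB dynamic range.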
Sul ponticello—when the bow is played close to the bridge—is the opposite technique, and produces what Piston described as a "glassy and metallic" sound, due to normally unheard harmonics becoming able to affect the timbre. Helmholtz motion Modern research on the physics of violins began with Helmholtz, who showed that the shape of the string as it is bowed is in the form of a 'V', with an apex (known as the 'Helmholtz corner') that moves along the main part of the string at a constant speed. Here, the nature of the friction between bow and string changes, and slipping or sticking occurs, depending on the direction the corner is moving. The wave produced rotates as the Helmholtz corner moves along a plucked string, which causes a reduced amount of energy to be transmitted to the bridge when the plane of rotation is not parallel to the fingerboard. Less energy still is supplied when the string is bowed, as a bow tends to dampen any oscillations that are at an angle to the bow hair, an effect enhanced if an uneven bow pressure is applied, e.g. by a novice player. The Indian physicist C. V. Raman was the first to obtain an accurate model for describing the mechanics of the bowed string, publishing his research in 1918. His model was able to predict the motion described by Helmholtz (known nowadays as Helmholtz motion), but he had to assume that the vibrating string was perfectly flexible, and lost energy when the wave was reflected with a reflection coefficient that depended upon the bow speed. Raman's model was later developed by the mathematicians Joseph Keller and F.G. Friedlander. Helmholtz and Raman produced models that included sharp cornered waves: the study of smoother corners was undertaken by Cremer and Lazarus in 1968, who showed that significant smoothing occurs (i.e. there are fewer harmonics present) only when normal bowing forces are applied.
The theory was further developed during the 1970s and 1980s to produce a digital waveguide model, based on the complex relationship between the bow's velocity and the frictional forces that were present. The model was a success in simulating Helmholtz motion (including the 'flattening' effect of the motion caused by larger forces), and was later extended to take into account the string's bending stiffness, its twisting motion, and the effect on the string of body vibrations and the distortion of the bow hair. However, the model assumed that the coefficient of friction due to the rosin was solely determined by the bow's speed, and ignored the possibility that the coefficient could depend on other variables. By the early 2000s, the importance of variables such as the energy supplied by friction to the rosin on the bow, and the player's input into the action of the bow, were recognised, showing the need for an improved model. The body The body of a violin is oval and hollow, and has two f-shaped holes, called sound holes, located on either side of the bridge. The body must be strong enough to support the tension from the strings, but also light and thin enough to vibrate properly. It is made of two arched wooden plates known as the belly and the backplate, whose sides are formed by thin curved ribs. It acts as a sound box to couple the vibration of strings to the surrounding air, making it audible. In comparison, the strings, which move almost no air, are silent. The existence of expensive violins is dependent on small differences in their physical behaviour in comparison with cheaper ones. Their construction, and especially the arching of the belly and the backplate, has a profound effect on the overall sound quality of the instrument, and its many different resonant frequencies are caused by the nature of the wooden structure. The different parts all respond differently to the notes that are played, displaying what Carleen Hutchins described as 'wood resonances'.
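The delay-line idea behind digital waveguide string models can be illustrated with the closely related Karplus–Strong plucked-string algorithm; this is a textbook simplification for intuition, not one of the research models described above:

```python
import random

def karplus_strong(frequency, sample_rate=44100, duration=0.5):
    """Plucked-string sketch: a delay line stands in for the travelling wave,
    and a two-point average models the frequency-dependent losses."""
    n = int(sample_rate / frequency)                      # delay-line length
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]  # the 'pluck'
    out = []
    for _ in range(int(sample_rate * duration)):
        first = line.pop(0)
        line.append(0.5 * (first + line[0]))              # low-pass loss filter
        out.append(first)
    return out

random.seed(0)
samples = karplus_strong(440.0)
early = sum(s * s for s in samples[:4410])  # energy in the first 0.1 s
late = sum(s * s for s in samples[-4410:])  # energy in the last 0.1 s
print(late < early)  # the synthesized tone decays, like a real pizzicato
```

The loop length sets the pitch (sample rate divided by frequency) and the averaging filter removes the higher harmonics fastest, mirroring the behaviour of a decaying plucked string noted earlier.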
The response of the string can be tested by detecting the motion produced by the current through a metal string when it is placed in an oscillating magnetic field. Such tests have shown that the optimum 'main wood resonance' (the wood resonance with the lowest frequency) occurs between 392 and 494 Hz, equivalent to a tone below and above A4. The ribs are reinforced at their edges with lining strips, which provide extra gluing surface where the plates are attached. The wooden structure is filled, glued and varnished using materials which all contribute to a violin's characteristic sound. The air in the body also acts to enhance the violin's resonating properties, which are affected by the volume of enclosed air and the size of the f-holes. The belly and the backplate can display modes of vibration when they are forced to vibrate at particular frequencies. The many modes that exist can be found using fine dust or sand, sprinkled on the surface of a violin-shaped plate. When a mode is found, the dust accumulates at the (stationary) nodes: elsewhere on the plate, where it is oscillating, the dust fails to appear. The patterns produced are named after the German physicist Ernst Chladni, who first developed this experimental technique. Modern research has used sophisticated techniques such as holographic interferometry, which enables the motion of the violin surface to be measured, a method first developed by scientists in the 1960s, and the finite element method, where discrete parts of the violin are studied with the aim of constructing an accurate simulation. The British physicist Bernard Richardson has built virtual violins using these techniques. At East Carolina University, the American acoustician George Bissinger has used laser technology to produce frequency responses that have helped him to determine how the efficiency and damping of the violin's vibrations depend on frequency.
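The quoted 392–494 Hz band can be checked against equal-tempered pitch arithmetic (a whole tone is two semitones, a frequency factor of 2^(2/12)):

```python
A4 = 440.0
tone = 2 ** (2 / 12)    # a whole tone = two equal-tempered semitones

tone_below = A4 / tone  # ~392.0 Hz, the pitch G4
tone_above = A4 * tone  # ~493.9 Hz, the pitch B4
print(round(tone_below, 1), round(tone_above, 1))
```

The results match the stated band: 392 Hz is a tone below A4 and 494 Hz a tone above it.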
Another technique, known as modal analysis, involves the use of 'tonal copies' of old instruments to compare a new instrument with an older one. The effects of changing the new violin in the smallest way can be identified, with the aim of replicating the tonal response of the older model. The bass bar and the sound post A bass bar and a sound post concealed inside the body both help transmit sound to the back of the violin, with the sound post also serving to support the structure. The bass bar is glued to the underside of the top, whilst the sound post is held in place by friction. The bass bar was invented to strengthen the structure, and is positioned directly below one of the bridge's feet. Near the foot of the bridge, but not directly below it, is the sound post. When the bridge receives energy from the strings, it rocks, with the sound post acting as a pivot and the bass bar moving with the plate as the result of leverage. This behaviour enhances the violin tone quality: if the sound post's position is adjusted, or if the forces acting on it are changed, the sound produced by the violin can be adversely affected. Together they make the shape of the violin body asymmetrical, which allows different vibrations to occur, causing the timbre to become more complex. In addition to the normal modes of the body structure, the enclosed air in the body exhibits Helmholtz resonance modes as it vibrates. Wolf tones Bowing is an example of resonance where maximum amplification occurs at the natural frequency of the system, and not the forcing frequency, as the bow has no periodic force. A wolf tone is produced when small changes in the fundamental frequency—caused by the motion of the bridge—become too great, and the note becomes unstable. A sharp resonance response from the body of a cello (and occasionally a viola or a violin) produces a wolf tone, an unsatisfactory sound that repeatedly appears and disappears.
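The way a wolf tone 'repeatedly appears and disappears' resembles beating between two nearly equal frequencies, such as the string fundamental and a strong body resonance. A sketch with assumed frequencies (175 and 177 Hz are illustrative values, not measurements):

```python
import math

f_string, f_body = 175.0, 177.0  # Hz; illustrative assumptions
rate, duration = 8000, 2.0       # sample rate (Hz) and length (s)

n = int(rate * duration)
signal = [math.sin(2 * math.pi * f_string * t / rate)
          + math.sin(2 * math.pi * f_body * t / rate) for t in range(n)]

# Peak amplitude over 50 ms windows: the combined sound swells and fades
window = rate // 20
envelope = [max(abs(s) for s in signal[i:i + window])
            for i in range(0, n - window, window)]
print(round(max(envelope), 2), round(min(envelope), 2))
```

The 2 Hz difference produces an amplitude envelope that rises to nearly twice either component and collapses towards zero twice a second: the characteristic pulsing of a beat, and a rough analogue of the wolf's stuttering sound.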
A correctly positioned suppressor can remove the tone by reducing the resonance at that frequency, without dampening the sound of the instrument at other frequencies. Comparison with other members of the violin family The physics of the viola are the same as that of the violin, and the construction and acoustics of the cello and the double bass are similar. The viola is a larger version of the violin, and has on average a total body length of , with strings tuned a fifth lower than a violin (with a length of about ). The viola's larger size is not proportionally great enough to correspond to the strings being pitched as they are, which contributes to its different timbre. Violists need to have hands large enough to be able to accomplish fingering comfortably. The C string has been described by Piston as having a timbre that is "powerful and distinctive", but perhaps in part because the sound it produces is easily covered, the viola is not so frequently used in the orchestra as a solo instrument. According to the American physicist John Rigden, the lower notes of the viola (along with the cello and the double bass) lack strength and quality. This is because typical resonant frequencies for a viola lie between the natural frequencies of the middle open strings, and are too high to reinforce the frequencies of the lower strings. To correct this problem, Rigden calculated that a viola would need strings that were half as long again as on a violin, which would make the instrument inconvenient to play. The cello, with an overall length of , is pitched an octave below the viola. The proportionally greater thickness of its body means that its timbre is not adversely affected by having dimensions that do not correspond to the pitch of its open strings, as is the case with the viola.
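Rigden's 'half as long again' figure follows from f = v/(2L): lowering the pitch by a perfect fifth (a frequency factor of 3/2) at the same wave speed requires a string 3/2 as long. A quick check (the wave speed is an illustrative assumption):

```python
# For a string at fixed wave speed v, f = v / (2 * L), so f * L is constant.
v = 280.0         # m/s, illustrative wave speed (an assumption)
L_violin = 0.325  # m, a typical violin vibrating string length

f_violin = v / (2 * L_violin)
f_fifth_down = f_violin * 2 / 3    # a perfect fifth below
L_needed = v / (2 * f_fifth_down)  # length needed at the same wave speed

print(round(L_needed / L_violin, 3))  # 1.5: half as long again
```

The ratio is independent of the wave speed chosen, since f·L is constant for a given string.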
The double bass, in comparison with the other members of the family, is more pointed where the belly is joined by the neck, possibly to compensate for the strain caused by the tension of the strings, and is fitted with cogs for tuning the strings. The average overall length of an orchestral bass is . The back can be arched or flat. The bassist's fingers have to stretch twice as far as a cellist's, and greater force is required to press them against the finger-board. The pizzicato tone, which is 'rich' sounding due to the slow speed of vibrations, is changeable according to which of the associated harmonics are more dominant. The technical capabilities of the double bass are limited. Quick passages are seldom written for it; they lack clarity because of the time required for the strings to vibrate. The double bass is the foundation of the whole orchestra and therefore musically of great importance. According to John Rigden, a double bass would need to be twice as large as its present size for its bowed notes to sound powerful enough to be heard over an orchestra. Notes Bibliography Hutchins, Carleen Maley (October 1981). "The Acoustics of Violin Plates", in Scientific American, Vol. 245, No. 4. Further reading Part 1 (pp. 1–276). Part 2 (pp. 277–331). Part 3 (pp. 332–389). Saunders, F.A. (October 1937). "The mechanical action of violins", in The Journal of the Acoustical Society of America, Vol. 9, No. 2, pp. 81–98. External links How does a violin work? An introduction to violin acoustics published by the University of New South Wales Path Through the Woods - The Use of Medical Imaging in Examining Historical Instruments The use of computer aided tomography (CT Scanning) to examine great Italian instruments in order to replicate their acoustics in modern instruments. Modal Animations - animations of violins showing how the plates vibrate at various frequencies, from Borman Violins.
Wire-frame animation of a 1712 Stradivari violin at various eigenmode frequencies Piastra di Chladni: violino a YouTube video of the patterns produced on a violin-shaped Chladni plate, uploaded by the University of Milan Physics Department (text in Italian). Acoustics Violins
Violin acoustics
Physics
https://en.wikipedia.org/wiki/Quassia%20sp.%20%27Moonee%20Creek%27
{{Automatic taxobox | name = Moonee quassia | image = Moonee Quassia Coffs.jpg | taxon = Quassia | species_text = Q. sp. 'Moonee Creek' | binomial_text = Quassia sp. 'Moonee Creek' | synonyms = }}Quassia sp. 'Moonee Creek', the Moonee quassia, is a shrub found in north eastern New South Wales, Australia. A rare plant, listed as endangered, it has yet to be formally named. It grows below eucalyptus forest, often in high rainfall areas, in the Coffs Harbour, Dorrigo and Grafton regions, mostly at lower altitudes. Known from 18 locations between Moonee Beach in the south, to McCraes Knob, which is some 12 km east of the Clarence River in the north. It was originally known from an ash heap at Glenreagh, next to the Dorrigo railway line. A bush to 2 metres tall, with narrow leaves, 10 cm long, alternate on the stem. Stems are usually not straight (which indicates spasmodic growth patterns). Leaves are dark glossy green above, and paler below. The leaf veins are at a wide angle in relation to the mid-rib. Small green/red flowers form in clusters of one to four flowers. The fruit is a hairy red drupe, 5 to 10 mm long. References External links Quassia Flora of New South Wales Undescribed plant species
Quassia sp. 'Moonee Creek'
Biology
https://en.wikipedia.org/wiki/Intihuatana%2C%20Urubamba
Intihuatana (in the Quechua spelling, Inti Watana or Intiwatana) at the archaeological site of Machu Picchu (Machu Pikchu) is a notable ritual stone associated with the astronomic clock or calendar of the Inca in South America. Machu Picchu was thought to have been built c. 1450 by the Sapa Inca Pachacuti as a country estate. In the late 16th century, the Viceroy Francisco de Toledo and the clergy destroyed those Intihuatana which they could find. They did so because they believed that the Incas' religion was a blasphemy and that the religious significance of the Intihuatana could be a political liability. The Intihuatana of Machu Picchu was found intact by Hiram Bingham in 1911, indicating that the Spanish conquerors had not found it. The Intihuatana was damaged on September 8, 2000, when a crane being used in an ad shoot toppled over and chipped off a piece of the granite. Design The Intihuatana of Machu Picchu was carved directly into the bedrock of the mountain's summit area. It is characterized by complex surfaces, planes and angles whose purpose is at this time unknown. Featuring a slightly inclined plane at its top, an upright stone column tilts 13 degrees northward. Function Possibly used as a sundial, it was aligned with the sun's position during the winter solstice. The Inca believed the stone held the sun in its place along its annual path in the sky. At midday on 11 November and 30 January, the sun stands almost exactly above the pillar, casting no shadow at all. On June 21, the stone casts its longest shadow on its southern side, and on December 21, a much shorter one on its northern side. The base is said to be "in the shape of a map of the Inca Empire" but most archaeologists disagree, observing that the base is squat and stubby whereas the Tawantinsuyu is long and thin.
Pedro Sueldo Nava describes the landmark as "perhaps one of the most beautiful and enigmatic places to be found in Machu Picchu." See also Inti, the ancient Incan sun god Kusichaka River Machu Q'inti Pakaymayu Phutuq K'usi Wayna Pikchu Wayna Q'inti Sallqantay References Buildings and structures completed in 1450 Archaeological sites in Peru Ancient astronomy Inca Empire Archaeological sites in Cusco Region 15th-century establishments in the Inca civilization 16th-century disestablishments in the Inca civilization 1911 archaeological discoveries Machu Picchu
Intihuatana, Urubamba
Astronomy
https://en.wikipedia.org/wiki/Primitive%20permutation%20group
In mathematics, a permutation group G acting on a non-empty finite set X is called primitive if G acts transitively on X and the only partitions the G-action preserves are the trivial partitions into either a single set or into |X| singleton sets. Otherwise, if G is transitive and G does preserve a nontrivial partition, G is called imprimitive. While primitive permutation groups are transitive, not all transitive permutation groups are primitive. The simplest example is the Klein four-group acting on the vertices of a square, which preserves the partition into diagonals. On the other hand, if a permutation group preserves only trivial partitions, it is transitive, except in the case of the trivial group acting on a 2-element set. This is because for a non-transitive action, either the orbits of G form a nontrivial partition preserved by G, or the group action is trivial, in which case all nontrivial partitions of X (which exist for |X| ≥ 3) are preserved by G. This terminology was introduced by Évariste Galois in his last letter, in which he used the French term équation primitive for an equation whose Galois group is primitive. Properties In the same letter in which he introduced the term "primitive", Galois stated the following theorem: If G is a primitive solvable group acting on a finite set X, then the order of X is a power of a prime number p. Further, X may be identified with an affine space over the finite field with p elements, and G acts on X as a subgroup of the affine group. If the set X on which G acts is finite, its cardinality is called the degree of G. A corollary of this result of Galois is that, if p is an odd prime number, then the order of a solvable transitive group of degree p is a divisor of p(p − 1). In fact, every transitive group of prime degree p is primitive (since the number of elements of a partition fixed by G must be a divisor of p), and p(p − 1) is the cardinality of the affine group of an affine space with p elements.
It follows that, if p is a prime number greater than 3, the symmetric group and the alternating group of degree p are not solvable, since their orders are greater than p(p − 1). The Abel–Ruffini theorem results from this and the fact that there are polynomials with a symmetric Galois group. An equivalent definition of primitivity relies on the fact that every transitive action of a group G is isomorphic to an action arising from the canonical action of G on the set G/H of cosets for H a subgroup of G. A group action is primitive if it is isomorphic to G/H for a maximal subgroup H of G, and imprimitive otherwise (that is, if there is a proper subgroup K of G of which H is a proper subgroup). These imprimitive actions are examples of induced representations. The numbers of primitive groups of small degree were stated by Robert Carmichael in 1937: There are a large number of primitive groups of degree 16. As Carmichael notes, all of these groups, except for the symmetric and alternating group, are subgroups of the affine group on the 4-dimensional space over the 2-element finite field. Examples Consider the symmetric group acting on the set and the permutation Both and the group generated by are primitive. Now consider the symmetric group acting on the set and the permutation The group generated by is not primitive, since the partition where and is preserved under , i.e. and . Every transitive group of prime degree is primitive. The symmetric group acting on the set is primitive for every n and the alternating group acting on the set is primitive for every n > 2. See also Block (permutation group theory) Jordan's theorem (symmetric group) O'Nan–Scott theorem, a classification of finite primitive groups into various types References Roney-Dougal, Colva M. The primitive permutation groups of degree less than 2500, Journal of Algebra 292 (2005), no. 1, 154–183. The GAP Data Library "Primitive Permutation Groups". Carmichael, Robert D., Introduction to the Theory of Groups of Finite Order.
Ginn, Boston, 1937. Reprinted by Dover Publications, New York, 1956. Permutation groups Integer sequences
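The Klein four-group example from the opening paragraph can be checked computationally: acting on the four vertices of a square it is transitive, yet it preserves the partition into diagonals, so it is imprimitive. A small self-contained sketch (the 0–3 vertex labels are an assumption for illustration):

```python
from itertools import product

# Klein four-group on the vertices 0,1,2,3 of a square (diagonals 0-2 and 1-3):
# identity, the 180-degree rotation, and the two edge-midpoint reflections.
identity = (0, 1, 2, 3)
rot180 = (2, 3, 0, 1)
refl_a = (1, 0, 3, 2)
refl_b = (3, 2, 1, 0)
group = [identity, rot180, refl_a, refl_b]

def transitive(group, points):
    # transitive: some group element maps any point to any other point
    return all(any(g[a] == b for g in group) for a, b in product(points, points))

def preserves(group, partition):
    # the image of every block under every group element is again a block
    blocks = [frozenset(b) for b in partition]
    return all(frozenset(g[x] for x in b) in blocks for g in group for b in blocks)

points = (0, 1, 2, 3)
diagonals = [{0, 2}, {1, 3}]
print(transitive(group, points), preserves(group, diagonals))
```

Since a nontrivial preserved partition exists for a transitive action, the group is imprimitive, exactly as the article's definition requires; a partition such as [{0}, {1, 2, 3}] is, by contrast, not preserved.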
Primitive permutation group
Mathematics
https://en.wikipedia.org/wiki/IC%201337
IC 1337 is an intermediate spiral galaxy in the constellation Capricornus. The galaxy is located close to the celestial equator. It was discovered by Stéphane Javelle on July 22, 1892. One supernova has been observed in IC 1337: SN 2019gwl (type II, mag. 19.15). References NGC/IC Project 1337 65760 -03-53-012 Capricornus Intermediate spiral galaxies
IC 1337
Astronomy
https://en.wikipedia.org/wiki/Bounded%20quantification
In type theory, bounded quantification (also bounded polymorphism or constrained genericity) refers to universal or existential quantifiers which are restricted ("bounded") to range only over the subtypes of a particular type. Bounded quantification is an interaction of parametric polymorphism with subtyping. Bounded quantification has traditionally been studied in the functional setting of System F<:, but is available in modern object-oriented languages supporting parametric polymorphism (generics) such as Java, C# and Scala. Overview The purpose of bounded quantification is to allow for polymorphic functions to depend on some specific behaviour of objects instead of type inheritance. It assumes a record-based model for object classes, where every class member is a record element and all class members are named functions. Object attributes are represented as functions that take no argument and return an object. The specific behaviour is then some function name along with the types of the arguments and the return type. Bounded quantification considers all objects with such a function. An example would be a polymorphic min function that considers all objects that are comparable to each other. F-bounded quantification F-bounded quantification or recursively bounded quantification, introduced in 1989, allows for more precise typing of functions that are applied on recursive types. A recursive type is one that includes a function that uses it as a type for some argument or its return value. Example This kind of type constraint can be expressed in Java with a generic interface. The following example demonstrates how to describe types that can be compared to each other and use this as typing information in polymorphic functions. The Test.min function uses simple bounded quantification and does not ensure the objects are mutually comparable, in contrast with the Test.fMin function which uses F-bounded quantification. 
In mathematical notation, the types of the two functions are

min: ∀ T, ∀ S ⊆ {compareTo: T → int}. S → S → S
fMin: ∀ T ⊆ Comparable[T]. T → T → T

where Comparable[T] = {compareTo: T → int}

interface Comparable<T> {
    int compareTo(T other);
}

public class Integer implements Comparable<Integer> {
    @Override
    public int compareTo(Integer other) {
        // ...
    }
}

public class String implements Comparable<String> {
    @Override
    public int compareTo(String other) {
        // ...
    }
}

public class Test {
    public static void main(String[] args) {
        final String a = min("cat", "dog");
        final Integer b = min(10, 3);
        final Comparable c = min("cat", 3); // Throws ClassCastException at runtime
        final String str = fMin("cat", "dog");
        final Integer i = fMin(10, 3);
        // final Object o = fMin("cat", 3); // Does not compile
    }

    public static <S extends Comparable> S min(S a, S b) {
        if (a.compareTo(b) <= 0) {
            return a;
        } else {
            return b;
        }
    }

    public static <T extends Comparable<T>> T fMin(T a, T b) {
        if (a.compareTo(b) <= 0) {
            return a;
        } else {
            return b;
        }
    }
}

See also Covariance and contravariance (computer science) Curiously recurring template pattern Wildcard (Java) Notes References Peter S. Canning, William R. Cook, Walter L. Hill, John C. Mitchell, and William Olthoff. "F-bounded polymorphism for object-oriented programming". In Conference on Functional Programming Languages and Computer Architecture, 1989. Benjamin C. Pierce "Intersection types and bounded polymorphism". Lecture Notes in Computer Science 664, 1993. Gilad Bracha, Martin Odersky, David Stoutamire, and Philip Wadler. "Making the future safe for the past: Adding genericity to the Java programming language". In Object-Oriented Programming: Systems, Languages, Applications (OOPSLA). ACM, October 1998. Andrew Kennedy and Don Syme. "Design and Implementation of Generics for the .NET Common Language Runtime". In Programming Language Design and Implementation, 2001.
, Chapter 26: Bounded quantification External links Bounded Polymorphism at the Portland Pattern Repository "F-bounded Polymorphism" in The Cecil Language: Specification and Rationale Polymorphism (computer science) Object-oriented programming Type theory Articles with example Java code
Bounded quantification
Mathematics
https://en.wikipedia.org/wiki/Time%20scale
Time scale may refer to: Time standard, a specification of either the rate at which time passes, points in time, or both A duration or quantity of time: Orders of magnitude (time) as a power of 10 in seconds; A specific unit of time Geological time scale, a scale that divides up the history of Earth into scientifically meaningful periods In astronomy and physics: Dynamical time scale, in stellar physics, the time in which changes in one part of a body can be communicated to the rest of that body, or in celestial mechanics, a realization of a time-like argument based on a dynamical theory Nuclear timescale, an estimate of the lifetime of a star based solely on its rate of fuel consumption Thermal time scale, an estimate of the lifetime of a star once the fuel reserves at its center are used up In cosmology and particle physics: Planck time, the time scale beneath which quantum effects are comparable in significance to gravitational effects In mathematics: Time-scale calculus, the unification of the theory of difference equations with differential equations In music: Rhythm, a temporal pattern of events Time scale (music), which divides music into sections of time In project management: Man-hour, the time scale used in project management to account for human labor planned or utilized
Time scale
Physics,Astronomy
https://en.wikipedia.org/wiki/Tom%20Johnson%20%28astronomer%29
Thomas Jasper Johnson or Tom Johnson (January 11, 1923 – March 13, 2012) was an American electronics engineer and astronomer who founded Celestron, a telescope manufacturer which revolutionized the amateur astronomy industry and hobby. Sky & Telescope magazine has called him "among the most important figures shaping the last half century of amateur astronomy." Johnson was born in 1923. He served as a military radar technician during World War II. In 1955, Johnson, an engineer, established Valor Electronics, which produced electronics for military and industrial use. Valor, which was headquartered in Gardena, California, had more than one hundred employees by the early 1960s. Johnson, who had a strong interest in amateur astronomy, originally created Celestron as the "Astro-Optical" division of Valor Electronics in 1960. Around 1960, Johnson had been looking for a telescope which could be used by his two sons, but found no child-friendly models on the market at the time. Johnson built a new telescope, a 6-inch reflector, by himself in 1960. He was visiting his brother in Costa Mesa, California, when he came upon his nephew, Roger, trying to grind the 6 inch diameter lens he had purchased from the clearance table at a local hobby shop. Roger was tired of the project and offered the lens-grinding kit to his uncle. Thomas Jasper took the kit home and, after several days of hand grinding, he invented a machine that would grind the lens for him. Thus, by accepting the lens grinding kit from his nephew, Roger L. Johnson, "TJ" (as the family called him) created that first lens of many. On July 28, 1962, he publicly unveiled a new invention, a portable -inch Cassegrain telescope, at the party held by the Los Angeles Astronomical Society on Mount Pinos. The new transportable telescope proved so groundbreaking that Johnson's invention was featured on the cover of a 1963 issue of Sky & Telescope.
Johnson's interest in telescopes soon became a full-fledged business. Johnson's new company, Celestron, which descended from the "Astro-Optical" division of Valor Electronics, soon began selling more sophisticated Schmidt–Cassegrain telescopes in models ranging from just 4 inches to 22 inches. However, the Schmidt–Cassegrain telescope proved difficult to mass-produce because the models needed a Schmidt corrector plate, an advanced aspheric lens, which could be hard to manufacture. To solve this production problem, Johnson and the company's engineers developed a new telescope, the Celestron 8, in 1970. The Celestron 8 was more compact, affordable and easier to manufacture than the company's earlier Schmidt–Cassegrain models. Johnson's new telescope proved very popular in the amateur astronomy and educational markets, allowing the hobby to rapidly expand and reach more consumers. Johnson sold Celestron in 1980. Johnson was awarded the David Richardson Medal from the Optical Society of America in 1978, the Bruce Blair Medal from the Western Amateur Astronomers in 1993, and the Lifetime Achievement Award by the Small Telescope & Astronomical Society in 2009. Tom Johnson died at 5 a.m. PST on March 13, 2012, at the age of 89. References 1923 births 2012 deaths American electronics engineers American astronomers Amateur astronomers American technology company founders American military personnel of World War II
https://en.wikipedia.org/wiki/Imitative%20learning
Imitative learning is a type of social learning whereby new behaviors are acquired via imitation. Imitation aids in communication, social interaction, and the ability to modulate one's emotions to account for the emotions of others, and is "essential for healthy sensorimotor development and social functioning". The ability to match one's actions to those observed in others occurs in humans and animals; imitative learning plays an important role in humans in cultural development. Imitative learning is different from observational learning in that it requires a duplication of the behaviour exhibited by the model, whereas observational learning can occur when the learner observes an unwanted behaviour and its subsequent consequences and as a result learns to avoid that behaviour. Imitative learning in animals On the most basic level, research performed by A.L. Saggerson, David N. George, and R.C. Honey showed that pigeons were able to learn a basic process that would lead to the delivery of a reward by watching a demonstrator pigeon. A demonstrator pigeon was trained to peck a panel in response to one stimulus (e.g. a red light) and hop on the panel in response to a second stimulus (e.g. a green light). After proficiency in this task was established in the demonstrator pigeon, other learner pigeons were placed in a video-monitored observation chamber. After every second observed trial, these learner pigeons were then individually placed in the demonstrator pigeon's box and presented the same test. The learner pigeons displayed competent performance on the task, and thus it was concluded that the learner pigeons had formed a response-outcome association while observing. However, the researchers noted that an alternative interpretation of these results could be that the learner pigeons had instead acquired outcome-response associations that guided their behavior and that further testing was needed to establish if this was a valid alternative. 
A similar study was conducted by Chesler, who compared kittens learning to press a lever for food after seeing their mother do it with kittens that had not. A stimulus in the form of a flickering light was presented, after which the kitten had to press a lever in order to obtain a food reward. The experiment tested the responses of three groups of kittens: those that observed their mother's performance before attempting the task, those that observed a strange female's performance, and those that had no demonstrator and had to complete the task through trial and error (the control group). The study found that the kittens that observed their mother before attempting the task acquired the lever-pressing response faster than the kittens that observed a strange female. The kittens working by trial and error never acquired the response. This result suggests that the kittens learned from imitating a model. The study also speculates whether the primacy of imitative learning, as opposed to trial and error, was due to a social and biological response to the mother (a type of learning bias). Whether true imitation occurs in animals is a debated topic. For an action to be an instance of imitative learning, an animal must observe and reproduce the specific pattern of movements produced by the model. Some researchers have proposed evidence that true imitation does not occur in non-primates, and that the observational learning exhibited involves less cognitively complex means such as stimulus enhancement. Chimpanzees are more apt to learn by emulation rather than true imitation. The exception is enculturated chimpanzees, which are chimpanzees raised as if they were human children. In one study by Buttelman et al., enculturated chimpanzees were found to behave similarly to young children and imitate even those actions that were non-instrumental to achieving the desired goal. 
In other studies of true imitation, enculturated chimpanzees even imitated the behaviour of a model some time after initially observing it. Imitative learning in humans Imitative learning has been well documented in humans; human subjects are often used as a comparison group in studies of imitative learning in primates. A study by Horner and Whiten compared the actions of (non-enculturated) chimpanzees to those of human children and found that the children over-imitated actions beyond necessity. In the study, children and chimpanzees between the ages of 3 and 4 were shown a series of actions to open an opaque puzzle box with a reward inside. Two of the actions were necessary to open the box and one was not, but the subjects did not know this. A demonstrator performed all three actions to open the box, after which both the chimpanzees and the children attempted the task. Both the children and the chimpanzees copied all three of the behaviours and received the reward inside the box. The next phase of the study used a transparent box instead of the opaque box. Because the box was transparent, it could clearly be seen that one of the three actions was not necessary to receive the reward. The chimpanzees did not perform the unnecessary action and performed only the two actions necessary to achieve the desired goal. The young children imitated all three actions, despite the fact that they could have selectively ignored the irrelevant action. One explanation for this is that humans follow conventions. A study by Clegg and Legare tested this by demonstrating a method of making a necklace to young children. In the demonstrations, the model added a step that was not necessary for the achievement of the final goal of completing the necklace. In one demonstration, the model used a language cue to inform the children that the making of the necklace was instrumental, e.g., "I am going to make a necklace. Let's watch what I am doing. I am going to make a necklace." 
In another demonstration, the model used language cues to imply that they were making the necklace according to convention, e.g., "I always do it this way. Everyone always does it this way. Let's watch what I am doing. Everyone always does it this way." In the conventional condition, children copied the model with higher fidelity, including the unnecessary step. In the instrumental condition, they did not copy the unnecessary step. The study suggests that children discern when to imitate, viewing convention as a salient reason for copying behaviour in order to fit in. Taking cues for proper behaviour from the actions of others, rather than using independent judgement, is called a conformity bias. Recent research has shown that humans are also subject to other biases when selecting whose behaviour to imitate. Humans imitate individuals they deem successful in the field in which they also wish to be successful (success bias), as well as respected, prestigious individuals from whom others preferentially learn (prestige bias). In a study by Chudek et al., an attentional cue was used to indicate to children that a particular model was prestigious. In an experiment with two models playing with a toy in different ways, prestige was indicated by two observers watching the prestigious model for 10 seconds. The study found that children picked up on the cue that signified prestige and preferentially imitated the prestigious model. The study suggests that such biases help humans pick up direct and indirect cues that an individual possesses knowledge worth learning. These cues can also lead humans to imitate harmful behaviours. Copycat suicides occur when a person attempting suicide copies the method of a suicide attempt they had heard about or seen in the media, with a significant rise in attempts seen after celebrity suicides (see Werther effect). 
Suicides can spread through social networks like an epidemic when large groups of people imitate the behaviour of a model or group of models (see Blue Whale Challenge). Imitative learning in robotics Imitative learning can be used in robotics as an alternative to traditional reinforcement learning. Traditional reinforcement learning algorithms start by taking essentially random actions and are left to figure out the correct sequence of actions to achieve the goal by themselves. However, this approach can fail in robotics, where the reward function may be extremely sparse (e.g. the robot either succeeds or fails, with no in-between). If success requires the robot to complete a complex sequence of actions, the reinforcement learning algorithm may struggle to make progress in training. Imitative learning can be used to create a set of successful examples for the reinforcement learning algorithm to learn from by having a human researcher manually pilot the robot and record the actions taken. These successful examples can guide the reinforcement learning algorithm onto the right path better than purely random actions would. References Behavioral concepts Learning Social learning theory
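The learn-from-demonstrations idea described in the robotics section above can be sketched as behavioral cloning: record the expert's state–action pairs, then imitate the recorded action for the most similar state. This is a minimal illustrative sketch (the class and its method names are invented for this example, not from any robotics library).

```python
class NearestNeighborPolicy:
    """Minimal behavioral cloning: imitate an expert by replaying the
    action whose recorded state is closest to the current state."""

    def __init__(self):
        self.demos = []  # list of (state, action) pairs from the expert

    def record(self, state, action):
        """Store one demonstrated state-action pair."""
        self.demos.append((state, action))

    def act(self, state):
        """Return the action of the nearest demonstrated state."""
        return min(self.demos, key=lambda d: abs(d[0] - state))[1]


# Expert demonstrations for a 1-D task: move toward a goal at position 10.
policy = NearestNeighborPolicy()
for s in range(0, 10):
    policy.record(s, +1)   # expert moves right when left of the goal
for s in range(11, 20):
    policy.record(s, -1)   # expert moves left when right of the goal

print(policy.act(3))   # imitates the expert and moves right
```

In practice the recorded demonstrations would seed a reinforcement learning algorithm rather than serve as the final policy, but the recording step is the same.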
https://en.wikipedia.org/wiki/Butterfly%20network
A butterfly network is a technique to link multiple computers into a high-speed network. This form of multistage interconnection network topology can be used to connect different nodes in a multiprocessor system. Unlike other network systems, such as local area networks (LANs) or the Internet, the interconnect network for a shared-memory multiprocessor system must have low latency and high bandwidth, for three reasons: Messages are relatively short, as most messages are coherence protocol requests and responses without data. Messages are generated frequently, because each read-miss or write-miss generates messages to every node in the system to ensure coherence. Read/write misses occur when the requested data is not in the processor's cache and must be fetched either from memory or from another processor's cache. Because messages are generated frequently, it is difficult for the processors to hide the communication delay. Components The major components of an interconnect network are: Processor nodes, which consist of one or more processors along with their caches, memories and communication assist. Switching nodes (routers), which connect the communication assists of different processor nodes in a system. In multistage topologies, higher-level switching nodes connect to lower-level switching nodes, as shown in figure 1, where switching nodes in rank 0 connect to processor nodes directly while switching nodes in rank 1 connect to switching nodes in rank 0. Links, which are physical wires between two switching nodes. They can be uni-directional or bi-directional. These multistage networks have lower cost than a crossbar and lower contention than a bus. The ratio of switching nodes to processor nodes is greater than one in a butterfly network. Such a topology, where the ratio of switching nodes to processor nodes is greater than one, is called an indirect topology. 
The network derives its name from the connections between nodes in two adjacent ranks (as shown in figure 1), which resemble a butterfly. Merging the top and bottom ranks into a single rank creates a wrapped butterfly network. In figure 1, if rank 3 nodes are connected back to the respective rank 0 nodes, the result is a wrapped butterfly network. BBN Butterfly, a massively parallel computer built by Bolt, Beranek and Newman in the 1980s, used a butterfly interconnect network. Later, in 1990, Cray Research's Cray C90 used a butterfly network to communicate between its 16 processors and 1024 memory banks. Butterfly network building For a butterfly network with p processor nodes, there need to be p(log2 p + 1) switching nodes. Figure 1 shows a network with 8 processor nodes, which implies 32 switching nodes. It represents each node as N(rank, column number). For example, the node at column 6 in rank 1 is represented as (1,6) and the node at column 2 in rank 0 is represented as (0,2). For any i greater than zero, a switching node N(i,j) is connected to N(i-1, j) and N(i-1, m), where m is obtained by inverting the ith bit, counting from the most significant, of the binary representation of j. For example, consider the node N(1,6): i equals 1 and j equals 6, therefore m is obtained by inverting the most significant bit of 6 (110), giving 2 (010). As a result, the nodes connected to N(1,6) are N(0,6) and N(0,2). Thus, N(0,6), N(1,6), N(0,2), N(1,2) form a butterfly pattern. Several butterfly patterns exist in the figure, and therefore this network is called a butterfly network. Butterfly network routing In a wrapped butterfly network (which means rank 0 is merged with rank 3), suppose a message is sent from processor 5 to processor 2. In figure 2, this is shown by replicating the processor nodes below rank 3. The packet transmitted over the link follows this form: the header contains the destination of the message, which is processor 2 (010 in binary); the payload is the message, M; and the trailer contains the checksum. 
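The connection rule above can be sketched in a few lines of Python (the function name is illustrative; ranks and columns follow the N(rank, column) notation of figure 1):

```python
def butterfly_neighbors(i, j, n):
    """Rank-(i-1) switching nodes linked to N(i, j) in a butterfly
    network with 2**n processor nodes and ranks 0..n."""
    if i == 0:
        return []  # rank 0 connects to processor nodes, not a lower rank
    # Invert the i-th bit of j, counting from the most significant of n bits.
    m = j ^ (1 << (n - i))
    return [(i - 1, j), (i - 1, m)]


print(butterfly_neighbors(1, 6, 3))  # N(1,6) connects to N(0,6) and N(0,2)
```

Running this for N(1,6) in the 8-processor network reproduces the example in the text.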
Therefore, the actual message transmitted from processor 5 is: Upon reaching a switching node, one of the two output links is selected based on the most significant bit of the destination address. If that bit is zero, the left link is selected. If that bit is one, the right link is selected. Subsequently, this bit is removed from the destination address in the packet transmitted through the selected link. This is shown in figure 2. The above packet reaches N(0,5). From the header of the packet it removes the leftmost bit to decide the direction. Since it is a zero, left link of N(0,5) (which connects to N(1,1)) gets selected. The new header is '10'. The new packet reaches N(1,1). From the header of the packet it removes the leftmost bit to decide the direction. Since it is a one, right link of N(1,1) (which connects to N(2,3)) gets selected. The new header is '0'. The new packet reaches N(2,3). From the header of the packet it removes the leftmost bit to decide the direction. Since it is a zero, left link of N(2,3) (which connects to N(3,2)) gets selected. The header field is empty. Processor 2 receives the packet, which now contains only the payload 'M' and the checksum. Butterfly network parameters Several parameters help evaluate a network topology. The prominent ones relevant in designing large-scale multi-processor systems are summarized below and an explanation of how they are calculated for a butterfly network with 8 processor nodes as shown in figure 1 is provided. Bisection Bandwidth: The maximum bandwidth required to sustain communication between all nodes in the network. This can be interpreted as the minimum number of links that need to be severed to split the system into two equal portions. For example, the 8 node butterfly network can be split into two by cutting 4 links that crisscross across the middle. Thus bisection bandwidth of this particular system is 4. 
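The destination-tag routing walked through above can be sketched as follows. Rather than physically stripping header bits, this sketch resolves one bit of the destination address per rank, most significant first, and reports the switching nodes visited (the function name is illustrative):

```python
def route(src, dst, n):
    """Destination-tag routing in a butterfly with 2**n processors.

    Returns the list of (rank, column) switching nodes visited on the
    way from source column src to destination column dst.
    """
    col = src
    path = [(0, src)]
    for i in range(n):
        bit = n - 1 - i                     # bit of dst resolved at this rank
        dest_bit = (dst >> bit) & 1
        # Force that bit of the current column to match the destination.
        col = (col & ~(1 << bit)) | (dest_bit << bit)
        path.append((i + 1, col))
    return path


print(route(5, 2, 3))  # the path from processor 5 to processor 2
```

For the example in the text, the computed path is N(0,5), N(1,1), N(2,3), N(3,2), matching figure 2.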
It is a representative measure of the bandwidth bottleneck which restricts overall communication. Diameter: The worst-case latency (between two nodes) possible in the system. It can be calculated in terms of network hops, i.e., the number of links a message must travel in order to reach the destination node. In the 8-node butterfly network, it appears that N(0,0) and N(3,7) are farthest apart, but due to the symmetric nature of the network, traversing from any rank 0 node to any rank 3 node requires only 3 hops. Therefore, the diameter of this system is 3. Links: The total number of links required to construct the entire network structure. This is an indicator of the overall cost and complexity of implementation. The example network shown in figure 1 requires a total of 48 links (16 links each between ranks 0 and 1, ranks 1 and 2, and ranks 2 and 3). Degree: The complexity of each router in the network. This is equal to the number of in/out links connected to each switching node. The butterfly network's switching nodes have 2 input links and 2 output links, hence it is a 4-degree network. Comparison with other network topologies This section compares the butterfly network with linear array, ring, 2-D mesh and hypercube networks. A linear array can be considered a 1-D mesh topology. Relevant parameters are compiled in the table ('p' represents the number of processor nodes). Advantages Butterfly networks have a lower diameter than other topologies such as a linear array, ring or 2-D mesh. This implies that in a butterfly network, a message sent from one processor reaches its destination in a lower number of network hops. Butterfly networks have a higher bisection bandwidth than these topologies. This implies that in a butterfly network, a higher number of links must be broken in order to prevent global communication. 
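The parameters above can be computed directly from the number of processor nodes. The formulas below are derived from the figures given for the 8-processor example (32 switching nodes, 48 links, diameter 3, bisection bandwidth 4); the function name is illustrative:

```python
import math


def butterfly_params(p):
    """Topology parameters for a butterfly network with p processor nodes
    (p must be a power of two)."""
    n = int(math.log2(p))          # number of link stages (ranks 0..n)
    return {
        "switching_nodes": p * (n + 1),   # p(log2 p + 1)
        "links": 2 * p * n,               # 2p links per stage, n stages
        "diameter": n,                    # hops from rank 0 to rank n
        "bisection_bandwidth": p // 2,    # links cut by a middle bisection
    }


print(butterfly_params(8))
```

For p = 8 this yields 32 switching nodes, 48 links, a diameter of 3, and a bisection bandwidth of 4, matching the example network of figure 1.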
Disadvantages Butterfly networks are more complex and costlier than other topologies due to the higher number of links required to sustain the network. The difference between the hypercube and the butterfly lies in their implementation. The butterfly network has a symmetric structure in which all processor nodes between two ranks are equidistant from each other, whereas the hypercube is more suitable for a multi-processor system which demands unequal distances between its nodes. Judging by the number of links required, the hypercube may appear cheaper and simpler than a butterfly network, but as the number of processor nodes goes beyond 16, the router cost and complexity (represented by degree) of the butterfly network becomes lower than that of the hypercube, because its degree is independent of the number of nodes. In conclusion, no single network topology is best for all scenarios. The decision is made based on factors like the number of processor nodes in the system, bandwidth-latency requirements, cost and scalability. See also Parallel computing Network topology Mesh networking References Internet architecture Network topology
https://en.wikipedia.org/wiki/32%20Virginis
32 Virginis, also known as FM Virginis, is a star located about 250 light years from the Earth, in the constellation Virgo. Its apparent magnitude ranges from 5.20 to 5.28, making it faintly visible to the naked eye of an observer well away from city lights. 32 Virginis is a binary star, and the more massive component of the binary is a Delta Scuti variable star which oscillates with a dominant period of 103.51 minutes. In 1914, Walter Sydney Adams announced that 32 Virginis is a spectroscopic binary. John Beattie Cannon published the first set of orbital elements for the binary system in 1915. Corrado Bartolini et al. made photometric observations of the star in early 1971, and found that the star showed variability due to pulsations. In 1974, 32 Virginis was given the variable star designation FM Virginis. Donald Kurtz et al. determined that the star was a Delta Scuti variable, in 1976. The primary star is believed to be an Am star similar to rho Puppis - a pulsating post-main sequence star. References Delta Scuti variables Virgo (constellation) 62267 110951 Durchmusterung objects Virginis, FM Virginis, 32 Virginis, d02
https://en.wikipedia.org/wiki/Adaptive%20optics
Adaptive optics (AO) is a technique of precisely deforming a mirror in order to compensate for light distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, in microscopy, optical fabrication and in retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors such as a deformable mirror or a liquid crystal array. Adaptive optics should not be confused with active optics, which work on a longer timescale to correct the primary mirror geometry. Other methods can achieve resolving power exceeding the limit imposed by atmospheric distortion, such as speckle imaging, aperture synthesis, and lucky imaging, or by moving outside the atmosphere with space telescopes, such as the Hubble Space Telescope. History Adaptive optics was first envisioned by Horace W. Babcock in 1953, and was also considered in science fiction, as in Poul Anderson's novel Tau Zero (1970), but it did not come into common usage until advances in computer technology during the 1990s made the technique practical. Some of the initial development work on adaptive optics was done by the US military during the Cold War and was intended for use in tracking Soviet satellites. Microelectromechanical systems (MEMS) deformable mirrors and magnetics concept deformable mirrors are currently the most widely used technology in wavefront shaping applications for adaptive optics given their versatility, stroke, maturity of technology, and the high-resolution wavefront correction that they afford. Tip–tilt correction The simplest form of adaptive optics is tip–tilt correction, which corresponds to correction of the tilts of the wavefront in two dimensions (equivalent to correction of the position offsets for the image). 
This is performed using a rapidly moving tip–tilt mirror that makes small rotations around two of its axes. A significant fraction of the aberration introduced by the atmosphere can be removed in this way. Tip–tilt mirrors are effectively segmented mirrors having only one segment which can tip and tilt, rather than having an array of multiple segments that can tip and tilt independently. Due to the relative simplicity of such mirrors and their large stroke, meaning they have large correcting power, most AO systems use these first to correct low-order aberrations. Higher-order aberrations may then be corrected with deformable mirrors. In astronomy Atmospheric seeing When light from a star or another astronomical object enters the Earth's atmosphere, atmospheric turbulence (introduced, for example, by different temperature layers and different wind speeds interacting) can distort and move the image in various ways. Visual images produced by any telescope larger than approximately are blurred by these distortions. Wavefront sensing and correction An adaptive optics system tries to correct these distortions, using a wavefront sensor which takes some of the astronomical light, a deformable mirror that lies in the optical path, and a computer that receives input from the detector. The wavefront sensor measures the distortions the atmosphere has introduced on the timescale of a few milliseconds; the computer calculates the optimal mirror shape to correct the distortions and the surface of the deformable mirror is reshaped accordingly. For example, a telescope (like the VLT or Keck) can produce AO-corrected images with an angular resolution of 30–60 milliarcseconds (mas) at infrared wavelengths, while without correction the resolution is of the order of 1 arcsecond. In order to perform adaptive optics correction, the shape of the incoming wavefronts must be measured as a function of position in the telescope aperture plane. 
Typically the circular telescope aperture is split up into an array of pixels in a wavefront sensor, either using an array of small lenslets (a Shack–Hartmann wavefront sensor), or using a curvature or pyramid sensor which operates on images of the telescope aperture. The mean wavefront perturbation in each pixel is calculated. This pixelated map of the wavefronts is fed into the deformable mirror and used to correct the wavefront errors introduced by the atmosphere. It is not necessary for the shape or size of the astronomical object to be known – even Solar System objects which are not point-like can be used in a Shack–Hartmann wavefront sensor, and time-varying structure on the surface of the Sun is commonly used for adaptive optics at solar telescopes. The deformable mirror corrects incoming light so that the images appear sharp. Using guide stars Natural guide stars Because a science target is often too faint to be used as a reference star for measuring the shape of the optical wavefronts, a nearby brighter guide star can be used instead. The light from the science target has passed through approximately the same atmospheric turbulence as the reference star's light and so its image is also corrected, although generally to a lower accuracy. The necessity of a reference star means that an adaptive optics system cannot work everywhere on the sky, but only where a guide star of sufficient luminosity (for current systems, about magnitude 12–15) can be found very near to the object of the observation. This severely limits the application of the technique for astronomical observations. Another major limitation is the small field of view over which the adaptive optics correction is good. As the angular distance from the guide star increases, the image quality degrades. A technique known as "multiconjugate adaptive optics" uses several deformable mirrors to achieve a greater field of view. 
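The Shack–Hartmann measurement described above can be sketched numerically: each lenslet focuses a spot, and the spot's displacement from its reference position is proportional to the mean wavefront slope over that lenslet (slope ≈ displacement / focal length, in the small-angle approximation). The function name and array layout here are illustrative assumptions:

```python
import numpy as np


def shack_hartmann_slopes(spots, reference, focal_length):
    """Local wavefront slopes from lenslet spot displacements.

    spots, reference: (N, 2) arrays of measured and reference spot
    centroids; focal_length: lenslet focal length (same length units).
    Returns an (N, 2) array of local wavefront slopes in radians.
    """
    displacement = np.asarray(spots, float) - np.asarray(reference, float)
    return displacement / focal_length


# A spot displaced 0.01 units behind a lenslet of focal length 10
# implies a local x-slope of 0.001 rad.
slopes = shack_hartmann_slopes([[0.01, 0.0]], [[0.0, 0.0]], 10.0)
print(slopes)
```

In a real system these per-lenslet slopes are then fed to a reconstructor that derives the deformable-mirror commands.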
Artificial guide stars An alternative is the use of a laser beam to generate a reference light source (a laser guide star, LGS) in the atmosphere. There are two kinds of LGSs: Rayleigh guide stars and sodium guide stars. Rayleigh guide stars work by propagating a laser, usually at near ultraviolet wavelengths, and detecting the backscatter from air at altitudes between . Sodium guide stars use laser light at 589 nm to resonantly excite sodium atoms higher in the mesosphere and thermosphere, which then appear to "glow". The LGS can then be used as a wavefront reference in the same way as a natural guide star – except that (much fainter) natural reference stars are still required for image position (tip/tilt) information. The lasers are often pulsed, with measurement of the atmosphere being limited to a window occurring a few microseconds after the pulse has been launched. This allows the system to ignore most scattered light at ground level; only light which has travelled for several microseconds high up into the atmosphere and back is actually detected. In retinal imaging Ocular aberrations are distortions in the wavefront passing through the pupil of the eye. These optical aberrations diminish the quality of the image formed on the retina, sometimes necessitating the wearing of spectacles or contact lenses. In the case of retinal imaging, light passing out of the eye carries similar wavefront distortions, leading to an inability to resolve the microscopic structure (cells and capillaries) of the retina. Spectacles and contact lenses correct "low-order aberrations", such as defocus and astigmatism, which tend to be stable in humans for long periods of time (months or years). While correction of these is sufficient for normal visual functioning, it is generally insufficient to achieve microscopic resolution. Additionally, "high-order aberrations", such as coma, spherical aberration, and trefoil, must also be corrected in order to achieve microscopic resolution. 
High-order aberrations, unlike low-order, are not stable over time, and may change over time scales of 0.1s to 0.01s. The correction of these aberrations requires continuous, high-frequency measurement and compensation. Measurement of ocular aberrations Ocular aberrations are generally measured using a wavefront sensor, and the most commonly used type of wavefront sensor is the Shack–Hartmann. Ocular aberrations are caused by spatial phase nonuniformities in the wavefront exiting the eye. In a Shack-Hartmann wavefront sensor, these are measured by placing a two-dimensional array of small lenses (lenslets) in a pupil plane conjugate to the eye's pupil, and a CCD chip at the back focal plane of the lenslets. The lenslets cause spots to be focused onto the CCD chip, and the positions of these spots are calculated using a centroiding algorithm. The positions of these spots are compared with the positions of reference spots, and the displacements between the two are used to determine the local curvature of the wavefront allowing one to numerically reconstruct the wavefront information—an estimate of the phase nonuniformities causing aberration. Correction of ocular aberrations Once the local phase errors in the wavefront are known, they can be corrected by placing a phase modulator such as a deformable mirror at yet another plane in the system conjugate to the eye's pupil. The phase errors can be used to reconstruct the wavefront, which can then be used to control the deformable mirror. Alternatively, the local phase errors can be used directly to calculate the deformable mirror instructions. Open loop vs. closed loop operation If the wavefront error is measured before it has been corrected by the wavefront corrector, then operation is said to be "open loop". If the wavefront error is measured after it has been corrected by the wavefront corrector, then operation is said to be "closed loop". 
In the latter case the wavefront errors measured will be small, and errors in the measurement and correction are more likely to be removed. Closed loop correction is the norm. Applications Adaptive optics was first applied to flood-illumination retinal imaging to produce images of single cones in the living human eye. It has also been used in conjunction with scanning laser ophthalmoscopy to produce (also in living human eyes) the first images of retinal microvasculature and associated blood flow and retinal pigment epithelium cells in addition to single cones. Combined with optical coherence tomography, adaptive optics has allowed the first three-dimensional images of living cone photoreceptors to be collected. In microscopy In microscopy, adaptive optics is used to correct for sample-induced aberrations. The required wavefront correction is either measured directly using a wavefront sensor or estimated by using sensorless AO techniques. Other uses Besides its use for improving nighttime astronomical imaging and retinal imaging, adaptive optics technology has also been used in other settings. Adaptive optics is used for solar astronomy at observatories such as the Swedish 1-m Solar Telescope, Dunn Solar Telescope, and Big Bear Solar Observatory. It is also expected to play a military role by allowing ground-based and airborne laser weapons to reach and destroy targets at a distance, including satellites in orbit. The Missile Defense Agency Airborne Laser program is the principal example of this. Adaptive optics has been used to enhance the performance of classical and quantum free-space optical communication systems, and to control the spatial output of optical fibers. Medical applications include imaging of the retina, where it has been combined with optical coherence tomography. 
The development of the Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) has also made it possible to correct for the aberrations of the wavefront reflected from the human retina and to take diffraction-limited images of the human rods and cones. Adaptive and active optics are also being developed for use in glasses to achieve better than 20/20 vision, initially for military applications. After propagation of a wavefront, parts of it may overlap, leading to interference and preventing adaptive optics from correcting it. Propagation of a curved wavefront always leads to amplitude variation. This needs to be considered if a good beam profile is to be achieved in laser applications. In material processing using lasers, adjustments can be made on the fly to allow for variation of focus-depth during piercing for changes in focal length across the working surface. Beam width can also be adjusted to switch between piercing and cutting mode. This eliminates the need for the optics of the laser head to be switched, cutting down on overall processing time for more dynamic modifications. Adaptive optics, especially wavefront-coding spatial light modulators, are frequently used in optical trapping applications to multiplex and dynamically reconfigure laser foci that are used to micro-manipulate biological specimens. Beam stabilization A rather simple example is the stabilization of the position and direction of a laser beam between modules in a large free space optical communication system. Fourier optics is used to control both direction and position. The actual beam is measured by photodiodes. This signal is fed into analog-to-digital converters and then into a microcontroller, which runs a PID controller algorithm. The controller then drives digital-to-analog converters, which drive stepper motors attached to mirror mounts. If the beam is to be centered onto 4-quadrant diodes, no analog-to-digital converter is needed. Operational amplifiers are sufficient.
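The beam-stabilization loop described above (photodiode measurement, a PID algorithm on a microcontroller, stepper-driven mirror mounts) can be sketched in a few lines. The gains and the toy one-dimensional plant below are illustrative assumptions, not values from any real system.

```python
class PID:
    """Textbook PID controller, standing in for the microcontroller firmware."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(steps=1000, dt=0.01):
    """Drive a statically misaligned beam back onto the center of the sensor."""
    mirror = 0.0                          # mirror-mount actuator position
    pid = PID(kp=2.0, ki=2.0, kd=0.05, dt=dt)
    beam_pos = 1.0                        # initial offset on the diode (arb. units)
    for _ in range(steps):
        beam_pos = 1.0 + mirror           # toy plant: fixed misalignment + mirror
        error = 0.0 - beam_pos            # setpoint: beam centered on the diode
        mirror += pid.update(error) * dt  # stepper motors integrate the commands
    return beam_pos

print(f"residual beam offset after 10 s: {simulate():.4f}")
```

In a real system the plant would include mount dynamics and sensor noise, and the gains would be tuned for that hardware rather than chosen by hand as here.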
See also Active optics Adjustable-focus eyeglasses Angular diameter Angular size Atmospheric correction (for satellite imaging of the Earth) Claire Max, adaptive optics pioneer Deformable mirror Greenwood frequency Holography: real-time holography Image stabilization List of telescope parts and construction Nonlinear optics: optical phase conjugation Van Cittert–Zernike theorem Wavefront Wavefront sensor William Happer, adaptive optics pioneer References Bibliography External links 10th International Workshop on Adaptive Optics for Industry and Medicine, Padova (Italy), 15–19 June 2015 Adaptive Optics Tutorial at CTIO A. Tokovinin Research groups and companies with interests in Adaptive Optics Space-based vs. Ground-based telescopes with Adaptive Optics Ten Years of VLT Adaptive Optics (ESO : ann11078 : 25 November 2011) Center for Adaptive Optics Telescopes Astronomical imaging Optical devices
Adaptive optics
Materials_science,Astronomy,Engineering
2,899
615,499
https://en.wikipedia.org/wiki/Phenylpropanolamine
Phenylpropanolamine (PPA), sold under many brand names, is a sympathomimetic agent which is used as a decongestant and appetite suppressant. It was previously commonly used in prescription and over-the-counter cough and cold preparations. The medication is taken by mouth. Side effects of phenylpropanolamine include increased heart rate and blood pressure, among others. Rarely, phenylpropanolamine has been associated with hemorrhagic stroke. Phenylpropanolamine acts as a norepinephrine releasing agent, thereby indirectly activating adrenergic receptors. As such, it is an indirectly acting sympathomimetic. It was previously thought to act as a mixed acting sympathomimetic with additional direct agonist actions on adrenergic receptors, but this proved not to be the case. Chemically, phenylpropanolamine is a substituted amphetamine and is closely related to ephedrine, pseudoephedrine, amphetamine, and cathinone. It is most commonly a racemic mixture of the (1R,2S)- and (1S,2R)-enantiomers of β-hydroxyamphetamine and is also known as dl-norephedrine. Phenylpropanolamine was first synthesized around 1910 and its effects on blood pressure were first characterized around 1930. It was introduced for medical use by the 1930s. The medication was withdrawn from many markets starting in 2000 following findings that it was associated with increased risk of hemorrhagic stroke. It was previously available both over-the-counter and by prescription. Phenylpropanolamine is available for medical and/or veterinary use in some countries. Medical uses Phenylpropanolamine is used as a decongestant to treat nasal congestion. It has also been used to suppress appetite and promote weight loss in the treatment of obesity and has shown effectiveness for this indication. Available forms Phenylpropanolamine was previously available over-the-counter and in certain combination forms by prescription in the United States. However, these forms have all been discontinued. 
Phenylpropanolamine is available in some countries. Side effects Phenylpropanolamine produces sympathomimetic effects and can cause side effects such as increased heart rate and blood pressure. It has rarely been associated with hemorrhagic stroke. Certain drugs increase the chances of déjà vu occurring in the user, resulting in a strong sensation that an event or experience currently being experienced has already been experienced in the past. Some pharmaceutical drugs, when taken together, have also been implicated in the cause of déjà vu. A published case report described an otherwise healthy male who started experiencing intense and recurrent sensations of déjà vu upon taking the drugs amantadine and phenylpropanolamine together to relieve flu symptoms. He found the experience so interesting that he completed the full course of his treatment and reported it to psychologists to write up as a case study. Because of the dopaminergic action of the drugs and previous findings from electrode stimulation of the brain, it was speculated that déjà vu occurs as a result of hyperdopaminergic action in the mesial temporal areas of the brain. Interactions There has been very little research on drug interactions with phenylpropanolamine. In one study, phenylpropanolamine taken with caffeine was found to quadruple caffeine levels. In another study, phenylpropanolamine reduced theophylline clearance by 50%. Pharmacology Pharmacodynamics Phenylpropanolamine acts primarily as a selective norepinephrine releasing agent. It also acts as a dopamine releasing agent with around 10-fold lower potency. The stereoisomers of the drug have only weak or negligible affinity for α- and β-adrenergic receptors.
Phenylpropanolamine was originally thought to act as a direct agonist of adrenergic receptors and hence to act as a mixed-acting sympathomimetic. However, phenylpropanolamine was subsequently found to show only weak or negligible affinity for these receptors and has instead been characterized as exclusively an indirectly acting sympathomimetic. It acts by inducing norepinephrine release and thereby indirectly activating adrenergic receptors. Many sympathetic hormones and neurotransmitters are based on the phenethylamine skeleton, and function generally in "fight or flight" type responses, such as increased heart rate and blood pressure, dilated pupils, increased energy, drying of mucous membranes, and increased sweating, among a significant number of additional effects. Phenylpropanolamine has relatively low potency as a sympathomimetic. It is about 100 to 200 times less potent than epinephrine (adrenaline) or norepinephrine (noradrenaline) in its sympathomimetic effects, although responses are variable depending on tissue. Pharmacokinetics Absorption Phenylpropanolamine is readily and well absorbed with oral administration. Immediate-release forms of the drug reach peak levels about 1.5 hours (range 1.0 to 2.3 hours) following administration. In contrast, extended-release forms of phenylpropanolamine reach peak levels after 3.0 to 4.5 hours. The pharmacokinetics of phenylpropanolamine are linear across an oral dose range of 25 to 100 mg. Steady-state levels of phenylpropanolamine are achieved within 12 hours when the drug is taken once every 4 hours. There is 62% accumulation of phenylpropanolamine at steady state in terms of peak levels, whereas area-under-the-curve levels are not increased with steady state. Distribution The volume of distribution of phenylpropanolamine is 3.0 to 4.5 L/kg. Levels of phenylpropanolamine in the brain are about 40% of those in the heart and 20% of those in the lungs.
The hydroxyl group of phenylpropanolamine at the β carbon increases its hydrophilicity, reduces its permeation through the blood–brain barrier, and limits its central nervous system (CNS) effects. Hence, phenylpropanolamine crosses into the brain only to some extent, has only weak CNS effects, and most of its effects are peripheral. In any case, phenylpropanolamine can produce amphetamine-like psychostimulant effects at very high doses. Phenylpropanolamine is more lipophilic than structurally related sympathomimetics with hydroxyl groups on the phenyl ring like epinephrine (adrenaline) and phenylephrine and has greater brain permeability than these agents. The plasma protein binding of phenylpropanolamine is approximately 20%. However, it has been said that no recent studies have substantiated this value. Metabolism Phenylpropanolamine is not substantially metabolized. It also does not undergo significant first-pass metabolism. Only about 3 to 4% of an oral dose of phenylpropanolamine is metabolized. Metabolites include hippuric acid (via oxidative deamination of the side chain) and 4-hydroxynorephedrine (via para-hydroxylation). The methyl group at the α carbon of phenylpropanolamine blocks metabolism by monoamine oxidases (MAOs). Phenylpropanolamine is also not a substrate of catechol O-methyltransferase. The hydroxyl group at the β carbon of phenylpropanolamine also helps to increase metabolic stability. Elimination Approximately 90% of a dose of phenylpropanolamine is excreted in the urine unchanged within 24 hours. About 4% of excreted material is in the form of metabolites. The elimination half-life of immediate-release phenylpropanolamine is about 4 hours, with a range in different studies of 3.7 to 4.9 hours. The half-life of extended-release phenylpropanolamine has ranged from 4.3 to 5.8 hours. The elimination of phenylpropanolamine is dependent on urinary pH.
At a more acidic urinary pH, the elimination of phenylpropanolamine is accelerated and its half-life and duration are shortened, whereas at more basic urinary pH, the elimination of phenylpropanolamine is reduced and its half-life and duration are extended. Urinary acidifying agents like ascorbic acid and ammonium chloride can increase the excretion of and thereby reduce exposure to amphetamines including phenylpropanolamine, whereas urinary alkalinizing agents including antacids like sodium bicarbonate as well as acetazolamide can reduce the excretion of these agents and thereby increase exposure to them. Total body clearance of phenylpropanolamine has been reported to be 0.546 L/h/kg, while renal clearance was 0.432 L/h/kg. Miscellaneous As phenylpropanolamine is not extensively metabolized, it would probably not be affected by hepatic impairment. Conversely, there is likely to be accumulation of phenylpropanolamine with renal impairment due to its dependence on urinary excretion. Norephedrine is a minor metabolite of amphetamine and methamphetamine. It is also a minor metabolite of ephedrine and a major metabolite of cathinone. Chemistry Phenylpropanolamine, also known as (1RS,2SR)-α-methyl-β-hydroxyphenethylamine or as (1RS,2SR)-β-hydroxyamphetamine, is a substituted phenethylamine and amphetamine derivative. It is closely related to the cathinones (β-ketoamphetamines). β-Hydroxyamphetamine exists as four stereoisomers, which include d- (dextrorotatory) and l-norephedrine (levorotatory), and d- and l-norpseudoephedrine. d-Norpseudoephedrine is also known as cathine, and is found naturally in Catha edulis (khat). Pharmaceutical drug preparations of phenylpropanolamine have varied in their stereoisomer composition in different countries, which may explain differences in misuse and side effect profiles.
In any case, racemic dl-norephedrine, or (1RS,2SR)-phenylpropanolamine, appears to be the most commonly used formulation of phenylpropanolamine pharmaceutically. Analogues of phenylpropanolamine include ephedrine, pseudoephedrine, amphetamine, methamphetamine, and cathinone. Phenylpropanolamine, structurally, is in the substituted phenethylamine class, consisting of a cyclic benzene or phenyl group, a two-carbon ethyl moiety, and a terminal nitrogen, hence the name phen-ethyl-amine. The methyl group on the alpha carbon (the first carbon before the nitrogen group) also makes this compound a member of the substituted amphetamine class. Ephedrine is the N-methyl analogue of phenylpropanolamine. Exogenous compounds in this family are degraded too rapidly by monoamine oxidase to be active at all but the highest doses. However, the addition of the α-methyl group allows the compound to avoid metabolism and confer an effect. In general, N-methylation of primary amines increases their potency, whereas β-hydroxylation decreases CNS activity, but conveys more selectivity for adrenergic receptors. Phenylpropanolamine is a small-molecule compound with the molecular formula C9H13NO and a molecular weight of 151.21 g/mol. It has an experimental log P of 0.67, while its predicted log P values range from 0.57 to 0.89. The compound is relatively lipophilic, but is also more hydrophilic than other amphetamines. The lipophilicity of amphetamines is closely related to their brain permeability. For comparison to phenylpropanolamine, the experimental log P of methamphetamine is 2.1, of amphetamine is 1.8, of ephedrine is 1.1, of pseudoephedrine is 0.7, of phenylephrine is −0.3, and of norepinephrine is −1.2. Methamphetamine has high brain permeability, whereas phenylephrine and norepinephrine are peripherally selective drugs. The optimal log P for brain permeation and central activity is about 2.1 (range 1.5–2.7).
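The log P comparison above can be collected into a small table. The script below simply restates those numbers and flags the compounds whose lipophilicity falls inside the quoted optimal window of roughly 1.5 to 2.7; log P is of course only one of several factors governing brain permeation, so this is a crude screen, not a pharmacological claim.

```python
# Experimental log P values as quoted in the text above.
LOG_P = {
    "methamphetamine": 2.1,
    "amphetamine": 1.8,
    "ephedrine": 1.1,
    "pseudoephedrine": 0.7,
    "phenylpropanolamine": 0.67,
    "phenylephrine": -0.3,
    "norepinephrine": -1.2,
}

def in_cns_window(logp, lo=1.5, hi=2.7):
    """Crude screen: lipophilicity inside the window cited for brain permeation."""
    return lo <= logp <= hi

# List compounds from most to least lipophilic, with the window flag.
for name, logp in sorted(LOG_P.items(), key=lambda kv: -kv[1]):
    tag = "within optimal CNS window" if in_cns_window(logp) else "outside window"
    print(f"{name:20s} logP = {logp:+.2f}  ({tag})")
```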
Phenylpropanolamine has been used pharmaceutically exclusively as the hydrochloride salt. History Phenylpropanolamine was first synthesized in the early 20th century, in or around 1910. It was patented as a mydriatic in 1913. The pressor effects of phenylpropanolamine were characterized in the late 1920s and the 1930s. Phenylpropanolamine was first introduced for medical use by the 1930s. In the United States, phenylpropanolamine is no longer sold due to an increased risk of hemorrhagic stroke. In a few countries in Europe, however, it is still available either by prescription or sometimes over-the-counter. In Canada, it was withdrawn from the market on 31 May 2001. It was voluntarily withdrawn from the Australian market by July 2001. In India, human use of phenylpropanolamine and its formulations was banned on 10 February 2011, but the ban was overturned by the judiciary in September 2011. Society and culture Names Phenylpropanolamine is the generic name of the drug; phenylpropanolamine hydrochloride is the corresponding name of its hydrochloride salt. It is also known by the synonym norephedrine. Brand names of phenylpropanolamine include Acutrim, Appedrine, Capton Diet, Control, Dexatrim, Emagrin Plus A.P., Glifentol, Kontexin, Merex, Monydrin, Mydriatine, Prolamine, Propadrine, Propagest, Recatol, Rinexin, Tinaroc, and Westrim, among many others. It has also been used in combinations under brand names including Allerest, Demazin, Dimetapp, and Sinarest, among others. Availability Phenylpropanolamine is available for medical and veterinary use in some countries. Exercise and sports There has been interest in phenylpropanolamine as a performance-enhancing drug in exercise and sports. However, clinical studies suggest that phenylpropanolamine is not effective in this regard. Phenylpropanolamine is not on the World Anti-Doping Agency (WADA) list of prohibited substances as of 2024.
Legal status In Sweden, phenylpropanolamine is still available in prescription decongestants; it is also still available in Germany. It is used in some polypill medications like Wick DayMed capsules. In the United Kingdom, phenylpropanolamine was available in many "all in one" cough and cold medications, which usually also feature paracetamol or another analgesic and caffeine, and could also be purchased on its own; however, it is no longer approved for human use. A European Category 1 Licence is required to purchase phenylpropanolamine for academic use. In the United States, the Food and Drug Administration (FDA) issued a public health advisory against the use of the drug in November 2000. In this advisory, the FDA requested but did not require that all drug companies discontinue marketing products containing phenylpropanolamine. The agency estimated that phenylpropanolamine caused between 200 and 500 strokes per year among 18-to-49-year-old users. In 2005, the FDA removed phenylpropanolamine from over-the-counter sale and removed its "generally recognized as safe and effective" (GRASE) status. Under the 2020 CARES Act, it requires FDA approval before it can be marketed again, effectively banning the drug even as a prescription product. Because of its potential use in amphetamine manufacture, phenylpropanolamine is controlled by the Combat Methamphetamine Epidemic Act of 2005. It is still available for veterinary use in dogs, however, as a treatment for urinary incontinence. Internationally, an item on the agenda of the 2000 Commission on Narcotic Drugs session called for including the stereoisomer norephedrine in Table I of the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Drugs containing phenylpropanolamine were banned in India on 27 January 2011. On 13 September 2011, the Madras High Court revoked a ban on manufacture and sale of the pediatric drugs phenylpropanolamine and nimesulide.
Veterinary use Phenylpropanolamine is available for use in veterinary medicine. It is used to control urinary incontinence in dogs. In June 2024, the US FDA approved phenylpropanolamine hydrochloride chewable tablets for the control of urinary incontinence due to a weakening of the muscles that control urination (urethral sphincter hypotonus) in dogs. These are the first generic phenylpropanolamine hydrochloride chewable tablets for dogs. Urinary incontinence occurs when a dog loses the ability to control when it urinates; incontinence due to urethral sphincter hypotonus can develop as dogs age and the muscle of the urethra (the passage from the bladder to outside the body) weakens and loses its ability to hold urine. The generic chewable tablets contain the same active ingredient (phenylpropanolamine hydrochloride) in the same concentration and dosage form as the approved brand name drug product, Proin chewable tablets, which were first approved in August 2011. In addition, the FDA determined that the generic chewable tablets contain no inactive ingredients that may significantly affect the bioavailability of the active ingredient. Notes Reference notes References Amphetamine alkaloids Anorectics Anti-obesity drugs Antihypotensive agents Beta-Hydroxyamphetamines Decongestants Drugs acting on the cardiovascular system Drugs acting on the nervous system Drugs in sport Enantiopure drugs Ergogenic aids Human drug metabolites Norepinephrine releasing agents Peripherally selective drugs Recreational drug metabolites Stimulants Sympathomimetics Veterinary drugs Withdrawn anti-obesity drugs World Anti-Doping Agency prohibited substances
Phenylpropanolamine
Chemistry
4,166
2,994,664
https://en.wikipedia.org/wiki/Schmidt%20number
In fluid dynamics, the Schmidt number (denoted Sc) of a fluid is a dimensionless number defined as the ratio of momentum diffusivity (kinematic viscosity) and mass diffusivity, and it is used to characterize fluid flows in which there are simultaneous momentum and mass diffusion convection processes. It was named after German engineer Ernst Heinrich Wilhelm Schmidt (1892–1975). The Schmidt number is the ratio of the shear component for diffusivity (viscosity divided by density) to the diffusivity for mass transfer D. It physically relates the relative thickness of the hydrodynamic layer and mass-transfer boundary layer. It is defined as: Sc = ν/D = μ/(ρD) = Pe/Re where (in SI units): ν is the kinematic viscosity (m2/s), D is the mass diffusivity (m2/s), μ is the dynamic viscosity of the fluid (Pa·s = N·s/m2 = kg/m·s), ρ is the density of the fluid (kg/m3), Pe is the mass-transfer Péclet number, and Re is the Reynolds number. The heat transfer analog of the Schmidt number is the Prandtl number (Pr). The ratio of thermal diffusivity to mass diffusivity is the Lewis number (Le). Turbulent Schmidt Number The turbulent Schmidt number is commonly used in turbulence research and is defined as: Sct = νt/Dt where: νt is the eddy viscosity (m2/s) and Dt is the eddy diffusivity (m2/s). The turbulent Schmidt number describes the ratio between the rates of turbulent transport of momentum and the turbulent transport of mass (or any passive scalar). It is related to the turbulent Prandtl number, which is concerned with turbulent heat transfer rather than turbulent mass transfer. It is useful for solving the mass transfer problem of turbulent boundary layer flows. The simplest model for Sct is the Reynolds analogy, which yields a turbulent Schmidt number of 1. From experimental data and CFD simulations, Sct ranges from 0.2 to 6. Stirling engines For Stirling engines, the Schmidt number is related to the specific power.
Gustav Schmidt of the German Polytechnic Institute of Prague published an analysis in 1871 for the now-famous closed-form solution for an idealized isothermal Stirling engine model: Sc = Q/(p̄ Vs) where: Sc is the Schmidt number, Q is the heat transferred into the working fluid, p̄ is the mean pressure of the working fluid, and Vs is the volume swept by the piston. References Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics
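As a quick numerical illustration of the fluid-dynamic definition Sc = ν/D = μ/(ρD), the sketch below computes the Schmidt number from assumed property values: a small solute in water near 25 °C, and water vapor in air. The property values are typical-order assumptions for illustration, not reference data; the point is that Sc is of order 10^2–10^3 for liquids and of order 1 for gases.

```python
def schmidt_number(mu, rho, D):
    """Sc = nu / D, from dynamic viscosity mu (Pa*s), density rho (kg/m^3),
    and mass diffusivity D (m^2/s)."""
    nu = mu / rho          # kinematic viscosity, m^2/s
    return nu / D

# Assumed properties: small solute in water at ~25 degC
sc_liquid = schmidt_number(mu=8.9e-4, rho=997.0, D=1.0e-9)

# Assumed properties: water vapor diffusing in air at ~25 degC
sc_gas = schmidt_number(mu=1.8e-5, rho=1.2, D=2.0e-5)

print(f"liquid example: Sc = {sc_liquid:.0f}")   # order 10^3
print(f"gas example:    Sc = {sc_gas:.2f}")      # order 1
```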
Schmidt number
Physics,Chemistry,Engineering
509
2,504,290
https://en.wikipedia.org/wiki/Superstabilization
Superstabilization is a concept of fault-tolerance in distributed computing. Superstabilizing distributed algorithms combine the features of self-stabilizing algorithms and dynamic algorithms. A superstabilizing algorithm – just like any other self-stabilizing algorithm – can be started in an arbitrary state, and it will eventually converge to a legitimate state. Additionally, a superstabilizing algorithm will recover rapidly from a single change in the network topology (adding or removing one edge or node in the network). Any self-stabilizing algorithm recovers from a change in the network topology – the system configuration after a topology change can be treated just like any other arbitrary starting configuration. However, in a self-stabilizing algorithm, the convergence after a single change in the network topology may be as slow as the convergence from an arbitrary starting state. In the study of superstabilizing algorithms, special attention is paid to the time it takes to recover from a single change in the network topology. Definitions The stabilization time of a superstabilizing algorithm is defined exactly as in the case of self-stabilizing algorithm: how long it takes to converge to a legitimate state from an arbitrary configuration. Depending on the computational model, time is measured, e.g., in synchronous communication rounds or in asynchronous cycles. The superstabilization time is the time to recover from a single topology change. It is assumed that the system is initially in a legitimate configuration. Then the network topology is changed; the superstabilization time is the maximum time it takes for the system to reach a legitimate configuration again. Similarly, the adjustment measure is the maximum number of nodes that have to change their state after such a change.
The “almost-legitimate configurations” which occur after one topology change can be formally modelled by using passage predicates: a passage predicate is a predicate that holds after a single change in the network topology, and also during the convergence to a legitimate configuration. References Distributed computing problems Fault-tolerant computer systems
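Superstabilization builds on plain self-stabilization, which the classic Dijkstra K-state token ring illustrates well. The toy sketch below (an illustrative assumption, not a superstabilizing algorithm itself) uses a central daemon that always fires the lowest-numbered privileged node, starts the ring in an arbitrary configuration, and counts moves until exactly one node is privileged, giving a rough stabilization-time measurement of the kind discussed above.

```python
def privileged(x, i):
    """Node i holds a 'privilege' (token) in Dijkstra's K-state ring."""
    n = len(x)
    return x[0] == x[n - 1] if i == 0 else x[i] != x[i - 1]

def legitimate(x):
    """Legitimate configurations have exactly one privileged node."""
    return sum(privileged(x, i) for i in range(len(x))) == 1

def stabilize(x, K, max_moves=10_000):
    """Central daemon: fire one privileged node per move.
    Returns the number of moves until a legitimate configuration is reached
    (the stabilization time under this particular schedule)."""
    x = list(x)
    moves = 0
    while not legitimate(x):
        i = next(j for j in range(len(x)) if privileged(x, j))
        x[i] = (x[i] + 1) % K if i == 0 else x[i - 1]
        moves += 1
        if moves >= max_moves:
            raise RuntimeError("did not stabilize")
    return moves

# Arbitrary starting state on 5 nodes, with K = 7 >= n states per node.
print("moves to stabilize:", stabilize([3, 1, 4, 1, 5], K=7))
```

A superstabilizing variant would additionally guarantee a small bound on this count (and on the adjustment measure) when the run starts from a legitimate configuration perturbed by a single topology change, rather than from a fully arbitrary state.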
Superstabilization
Mathematics,Technology,Engineering
428
435,261
https://en.wikipedia.org/wiki/Legendre%27s%20constant
Legendre's constant is a mathematical constant occurring in a formula constructed by Adrien-Marie Legendre to approximate the behavior of the prime-counting function π(x). The value that corresponds precisely to its asymptotic behavior is now known to be 1. Examination of available numerical data for known values of π(x) led Legendre to an approximating formula. Legendre proposed in 1808 the formula π(x) = x/(ln(x) − 1.08366), as giving an approximation of π(x) with a "very satisfying precision". Today, one defines the real constant B by π(x) = x/(ln(x) − B(x)), which is solved by putting B(x) = ln(x) − x/π(x) and B = lim_{x→∞} B(x), provided that this limit exists. Not only is it now known that the limit exists, but also that its value is equal to 1, somewhat less than Legendre's 1.08366. Regardless of its exact value, the existence of the limit implies the prime number theorem. Pafnuty Chebyshev proved in 1849 that if the limit B exists, it must be equal to 1. An easier proof was given by Pintz in 1980. It is an immediate consequence of the prime number theorem, under the precise form π(x) = Li(x) + O(x e^(−a√(ln x))) with an explicit estimate of the error term (for some positive constant a, where O(...) is the big O notation), as proved in 1899 by Charles de La Vallée Poussin, that B indeed is equal to 1. (The prime number theorem had been proved in 1896, independently by Jacques Hadamard and La Vallée Poussin, but without any estimate of the involved error term). Being evaluated to such a simple number has made the term Legendre's constant mostly only of historical value, with it often (technically incorrectly) being used to refer to Legendre's first guess 1.08366... instead. Numerical values Using known values of π(x), we can compute B(x) for values of x far beyond what was available to Legendre: Values in the first two columns are known exactly; the values in the third and fourth columns are estimated using the Riemann R function. References External links Conjectures about prime numbers Mathematical constants 1 (number) Integers Analytic number theory
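The quantity B(x) = ln(x) − x/π(x) can be checked numerically with nothing more than a prime sieve. At the sizes of x used below, the value still hovers near Legendre's 1.08366 rather than near the true limit 1, which is exactly why the data available to Legendre misled him; convergence to 1 is extremely slow.

```python
import math

def prime_count(n):
    """pi(n) via a plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

def legendre_B(x):
    """B(x) = ln(x) - x / pi(x); tends to 1 as x grows."""
    return math.log(x) - x / prime_count(x)

for x in (10**4, 10**5, 10**6):
    print(f"x = 10^{len(str(x)) - 1}: B(x) = {legendre_B(x):.5f}")
```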
Legendre's constant
Mathematics
418
1,069,091
https://en.wikipedia.org/wiki/Exponent%20bias
In IEEE 754 floating-point numbers, the exponent is biased in the engineering sense of the word – the value stored is offset from the actual value by the exponent bias, also called a biased exponent. Biasing is done because exponents have to be signed values in order to be able to represent both tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder. To solve this problem the exponent is stored as an unsigned value which is suitable for comparison, and when being interpreted it is converted into an exponent within a signed range by subtracting the bias. By arranging the fields such that the sign bit takes the most significant bit position, the biased exponent takes the middle position, then the significand will be the least significant bits and the resulting value will be ordered properly. This is the case whether or not it is interpreted as a floating-point or integer value. The purpose of this is to enable high speed comparisons between floating-point numbers using fixed-point hardware. If there are k bits in the exponent, the bias is typically set as b = 2^(k−1) − 1. Therefore, the possible integer values that the biased exponent can express lie in the range [−2^(k−1) + 2, 2^(k−1) − 1]. To understand this range, with k bits in the exponent, the possible unsigned integers lie in the range [0, 2^k − 1]. However, the strings containing all zeros and all ones are reserved for special values, so the expressible integers lie in the range [1, 2^k − 2]. It follows that: The maximum biased value is 2^k − 2. The minimum biased value is 1. When interpreting the floating-point number, the bias is subtracted to retrieve the actual exponent. For a half-precision number, the exponent is stored in the range 1 .. 30 (0 and 31 have special meanings), and is interpreted by subtracting the bias for a 5-bit exponent (15) to get an exponent value in the range −14 .. +15. For a single-precision number, the exponent is stored in the range 1 ..
254 (0 and 255 have special meanings), and is interpreted by subtracting the bias for an 8-bit exponent (127) to get an exponent value in the range −126 .. +127. For a double-precision number, the exponent is stored in the range 1 .. 2046 (0 and 2047 have special meanings), and is interpreted by subtracting the bias for an 11-bit exponent (1023) to get an exponent value in the range −1022 .. +1023. For a quad-precision number, the exponent is stored in the range 1 .. 32766 (0 and 32767 have special meanings), and is interpreted by subtracting the bias for a 15-bit exponent (16383) to get an exponent value in the range −16382 .. +16383. History The floating-point format of the IBM 704 introduced the use of a biased exponent in 1954. See also Offset binary (also referred to as excess-K) References Computer arithmetic
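The bias arithmetic described above is easy to verify by pulling the fields out of an actual IEEE 754 double (k = 11, so the bias is 2^10 − 1 = 1023). The sketch below uses Python's struct module to get at the raw bits of a binary64 value.

```python
import struct

def double_fields(x):
    """Split a Python float (IEEE 754 binary64) into sign, stored exponent, fraction."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    stored = (bits >> 52) & 0x7FF         # 11-bit biased exponent field
    fraction = bits & ((1 << 52) - 1)     # 52-bit significand field
    return sign, stored, fraction

K = 11
BIAS = 2 ** (K - 1) - 1                   # 1023, per the formula above

for value in (1.0, 6.5, 0.15625):
    sign, stored, fraction = double_fields(value)
    print(f"{value}: stored exponent {stored}, actual exponent {stored - BIAS}")
```

For example, 6.5 = 1.625 × 2^2, so its stored exponent field is 2 + 1023 = 1025 and the actual exponent recovered by subtracting the bias is 2.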
Exponent bias
Mathematics
637