Columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
11,009,430
https://en.wikipedia.org/wiki/Systems%20simulation
Computers are used to generate numeric models for the purpose of describing or displaying complex interaction among multiple variables within a system. The complexity of the system arises from the stochastic (probabilistic) nature of the events, rules for the interaction of the elements and the difficulty in perceiving the behavior of the systems as a whole with the passing of time. Systems Simulation in Video Games One of the most notable video games to incorporate systems simulation is Sim City, which simulates the multiple systems of a functioning city including but not limited to: electricity, water, sewage, public transportation, population growth, social interactions (including, but not limited to jobs, education and emergency response). See also Agent-based model Discrete event simulation NetLogo Systems Dynamics References External links A Brief Introduction to Systems Simulation Resources and Courses in Systems Simulation Guide to the Winter Simulation Conference Collection 1968-2003, 2013-2014 Stochastic simulation Systems theory
Systems simulation
[ "Technology" ]
190
[ "Computing stubs" ]
11,009,466
https://en.wikipedia.org/wiki/Uridine%20diphosphate%20galactose
Uridine diphosphate galactose (UDP-galactose) is an intermediate in the production of polysaccharides. It is important in nucleotide sugars metabolism, and is the substrate for the transferase B4GALT5. Sugar metabolism Uridine diphosphate (UDP)-galactose is relevant in glycolysis. UDP-galactose is the activated form of Gal, a crucial monosaccharide building block for human milk oligosaccharide (HMO). The activated form of galactose (Gal) serves as a donor molecule involved in catalyzing the conversion of UDP-galactose to UDP-glucose. The conversion is a rate-limiting step essential to the pace of UDP-glucose production that determines the completion of glycosylation reactions. To further explain, UDP-galactose is derived from a galactose molecule, which is an epimer of glucose, and via the Leloir pathway it can be used as a precursor for the metabolism of glucose into pyruvate. When lactose is hydrolyzed, D-Galactose enters the liver via the bloodstream. There, galactokinase phosphorylates it to galactose-1-phosphate using ATP. This compound then engages in a "ping-pong" reaction with UDP-glucose, catalyzed by uridylyltransferase, yielding glucose-1-phosphate and UDP-galactose. This glucose-1-phosphate feeds into glycolysis, while UDP-galactose undergoes epimerization to regenerate UDP-glucose. See also Galactose UDP galactose epimerase Uridine diphosphate References Coenzymes Nucleotides
Uridine diphosphate galactose
[ "Chemistry" ]
388
[ "Organic compounds", "Coenzymes" ]
11,009,758
https://en.wikipedia.org/wiki/Realizability
In mathematical logic, realizability is a collection of methods in proof theory used to study constructive proofs and extract additional information from them. Formulas from a formal theory are "realized" by objects, known as "realizers", in a way that knowledge of the realizer gives knowledge about the truth of the formula. There are many variations of realizability; exactly which class of formulas is studied and which objects are realizers differ from one variation to another. Realizability can be seen as a formalization of the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionistic logic. In realizability the notion of "proof" (which is left undefined in the BHK interpretation) is replaced with a formal notion of "realizer". Most variants of realizability begin with a theorem that any statement that is provable in the formal system being studied is realizable. The realizer, however, usually gives more information about the formula than a formal proof would directly provide. Beyond giving insight into intuitionistic provability, realizability can be applied to prove the disjunction and existence properties for intuitionistic theories and to extract programs from proofs, as in proof mining. It is also related to topos theory via realizability topoi. Example: Kleene's 1945-realizability Kleene's original version of realizability uses natural numbers as realizers for formulas in Heyting arithmetic. A few pieces of notation are required: first, an ordered pair (n,m) is treated as a single number using a fixed primitive recursive pairing function; second, for each natural number n, φn is the computable function with index n. The following clauses are used to define a relation "n realizes A" between natural numbers n and formulas A in the language of Heyting arithmetic, known as Kleene's 1945-realizability relation: Any number n realizes an atomic formula s=t if and only if s=t is true. Thus every number realizes a true equation, and no number realizes a false equation. A pair (n,m) realizes a formula A∧B if and only if n realizes A and m realizes B. Thus a realizer for a conjunction is a pair of realizers for the conjuncts. A pair (n,m) realizes a formula A∨B if and only if the following hold: n is 0 or 1; and if n is 0 then m realizes A; and if n is 1 then m realizes B. Thus a realizer for a disjunction explicitly picks one of the disjuncts (with n) and provides a realizer for it (with m). A number n realizes a formula A→B if and only if, for every m that realizes A, φn(m) realizes B. Thus a realizer for an implication corresponds to a computable function that takes any realizer for the hypothesis and produces a realizer for the conclusion. A pair (n,m) realizes a formula (∃ x)A(x) if and only if m is a realizer for A(n). Thus a realizer for an existential formula produces an explicit witness for the quantifier along with a realizer for the formula instantiated with that witness. A number n realizes a formula (∀ x)A(x) if and only if, for all m, φn(m) is defined and realizes A(m). Thus a realizer for a universal statement is a computable function that produces, for each m, a realizer for the formula instantiated with m. With this definition, the following theorem is obtained: Let A be a sentence of Heyting arithmetic (HA). If HA proves A then there is an n such that n realizes A. On the other hand, there are classical theorems (even propositional formula schemas) that are realized but which are not provable in HA, a fact first established by Rose. 
So realizability does not exactly mirror intuitionistic reasoning. Further analysis of the method can be used to prove that HA has the "disjunction and existence properties": If HA proves a sentence (∃ x)A(x), then there is an n such that HA proves A(n). If HA proves a sentence A∨B, then HA proves A or HA proves B. More such properties are obtained involving Harrop formulas. Later developments Kreisel introduced modified realizability, which uses typed lambda calculus as the language of realizers. Modified realizability is one way to show that Markov's principle is not derivable in intuitionistic logic. In contrast, it can be used to constructively justify the principle of independence of premise. Relative realizability is an intuitionist analysis of computable or computably enumerable elements of data structures that are not necessarily computable, such as computable operations on all real numbers when reals can be only approximated on digital computer systems. Classical realizability was introduced by Krivine and extends realizability to classical logic. It furthermore realizes axioms of Zermelo–Fraenkel set theory. Understood as a generalization of Cohen’s forcing, it was used to provide new models of set theory. Linear realizability extends realizability techniques to linear logic. The term was coined by Seiller to encompass several constructions, such as geometry of interaction models, ludics, and interaction graphs models. Use in proof mining Realizability is one of the methods used in proof mining to extract concrete "programs" from seemingly non-constructive mathematical proofs. Program extraction using realizability is implemented in some proof assistants such as Coq. See also Curry–Howard correspondence Dialectica interpretation Harrop formula Notes References Kreisel G. (1959). "Interpretation of Analysis by Means of Constructive Functionals of Finite Types", in: Constructivity in Mathematics, edited by A. Heyting, North-Holland, pp. 101–128. Kleene, S. C. (1973). "Realizability: a retrospective survey" from , pp. 95–112. External links Realizability Collection of links to recent papers on realizability and related topics. Proof theory Constructivism (mathematics)
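The clauses above lend themselves to a small illustration. The following sketch is a hypothetical toy, not Kleene's actual arithmetic encoding: it models realizers as ordinary Python values (pairs for conjunctions, tagged pairs for disjunctions, functions for implications) rather than Gödel numbers of partial recursive functions, and every name in it is invented for the example.

# A toy, illustrative sketch of the realizability clauses above. Realizers are
# ordinary Python values instead of numeric codes for partial recursive functions.
def realizes(r, formula):
    """Return True if r realizes formula, for a quantifier-free toy fragment."""
    kind = formula[0]
    if kind == "atom":    # any realizer realizes a true atomic formula, none a false one
        return bool(formula[1])
    if kind == "and":     # (n, m) realizes A∧B iff n realizes A and m realizes B
        return realizes(r[0], formula[1]) and realizes(r[1], formula[2])
    if kind == "or":      # (tag, m): tag picks the disjunct, m realizes it
        tag, m = r
        return realizes(m, formula[1]) if tag == 0 else realizes(m, formula[2])
    raise ValueError(f"unsupported connective: {kind}")

# Implication realizers are functions from realizers to realizers; they cannot be
# checked exhaustively, only applied. For example, a realizer for A → (A ∧ A):
double = lambda ra: (ra, ra)

A = ("atom", True)
print(realizes((0, 42), ("or", A, ("atom", False))))  # True: left disjunct is chosen
print(realizes(double(7), ("and", A, A)))             # True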
Realizability
[ "Mathematics" ]
1,317
[ "Mathematical logic", "Constructivism (mathematics)", "Proof theory" ]
11,010,249
https://en.wikipedia.org/wiki/Advanced%20Access%20Content%20System
The Advanced Access Content System (AACS) is a standard for content distribution and digital rights management, intended to restrict access to and copying of the post-DVD generation of optical discs. The specification was publicly released in April 2005. The standard has been adopted as the access restriction scheme for HD DVD and Blu-ray Disc (BD). It is developed by AACS Licensing Administrator, LLC (AACS LA), a consortium that includes Disney, Intel, Microsoft, Panasonic, Warner Bros., IBM, Toshiba and Sony. AACS has been operating under an "interim agreement" since the final specification (including provisions for Managed Copy) has not yet been finalized. Since appearing in devices in 2006, several AACS decryption keys have been extracted from software players and published on the Internet, allowing decryption by unlicensed software. System overview Encryption AACS uses cryptography to control and restrict the use of digital media. It encrypts content under one or more title keys using the Advanced Encryption Standard (AES). Title keys are decrypted using a media key (encoded in a Media Key Block) and the Volume ID of the media (e.g., a physical serial number embedded on a pre-recorded disc). The principal difference between AACS and CSS (the DRM system used on DVDs) lies in how the device decryption keys and codes are organized. Under CSS, all players of a given model group are provisioned with the same shared activated decryption key. Content is encrypted using a title-specific key, which is itself encrypted under each model's key. Thus, each disc contains a collection of several hundred encrypted keys, one for each licensed player model. In principle, this approach allows licensors to "revoke" a given player model (prevent it from playing back future content) by omitting to encrypt future title keys with the player model's key. In practice, however, revoking all players of a particular model is costly, as it causes many users to lose playback capability. Furthermore, the inclusion of a shared key across many players makes key compromise significantly more likely, as was demonstrated by a number of compromises in the mid-1990s. The approach of AACS provisions each individual player with a unique set of decryption keys which are used in a broadcast encryption scheme. This approach allows licensors to "revoke" individual players, or more specifically, the decryption keys associated with the player. Thus, if a given player's keys are compromised and published, the AACS LA can simply revoke those keys in future content, rendering the keys and the player useless for decrypting new titles. AACS also incorporates traitor tracing techniques. The standard allows for multiple versions of short sections of a movie to be encrypted with different keys, while a given player will only be able to decrypt one version of each section. The manufacturer embeds varying digital watermarks (such as Cinavia) in these sections, and upon subsequent analysis of the pirated release the compromised keys can be identified and revoked (this feature is called Sequence keys in the AACS specifications). Volume IDs Volume IDs are unique identifiers or serial numbers that are stored on pre-recorded discs with special hardware. They cannot be duplicated on consumers' recordable media. The point of this is to prevent simple bit-by-bit copies, since the Volume ID is required (though not sufficient) for decoding content. On Blu-ray discs, the Volume ID is stored in the BD-ROM Mark. 
To read the Volume ID, a cryptographic certificate (the Private Host Key) signed by the AACS LA is required. However, this has been circumvented by modifying the firmware of some HD DVD and Blu-ray drives. Decryption process To view the movie, the player must first decrypt the content on the disc. The decryption process is somewhat convoluted. The disc contains 4 items—the Media Key Block (MKB), the Volume ID, the Encrypted Title Keys, and the Encrypted Content. The MKB is encrypted in a subset difference tree approach. Essentially, a set of keys are arranged in a tree such that any given key can be used to find every other key except its parent keys. This way, to revoke a given device key, the MKB needs only be encrypted with that device key's parent key. Once the MKB is decrypted, it provides the Media Key, or the km. The km is combined with the Volume ID (which the program can only get by presenting a cryptographic certificate to the drive, as described above) in a one-way encryption scheme (AES-G) to produce the Volume Unique Key (Kvu). The Kvu is used to decrypt the encrypted title keys, and that is used to decrypt the encrypted content. Analog Outputs AACS-compliant players must follow guidelines pertaining to outputs over analog connections. This is set by a flag called the Image Constraint Token (ICT), which restricts the resolution for analog outputs to 960×540. Full 1920×1080 resolution is restricted to HDMI or DVI outputs that support HDCP. The decision to set the flag to restrict output ("down-convert") is left to the content provider. Warner Pictures is a proponent of ICT, and it is expected that Paramount and Universal will implement down-conversion as well. AACS guidelines require that any title which implements the ICT must clearly state so on the packaging. The German magazine "Der Spiegel" has reported about an unofficial agreement between film studios and electronics manufacturers to not use ICT until 2010 – 2012. However, some titles have already been released that apply ICT. Audio watermarking On 5 June 2009, the licensing agreements for AACS were finalized, which were updated to make Cinavia detection on commercial Blu-ray disc players a requirement. Managed Copy Managed Copy refers to a system by which consumers can make legal copies of films and other digital content protected by AACS. This requires the device to obtain authorization by contacting a remote server on the Internet. The copies will still be protected by DRM, so infinite copying is not possible (unless it is explicitly allowed by the content owner). It is mandatory for content providers to give the consumer this flexibility in both the HD DVD and the Blu-ray standards (commonly called Mandatory Managed Copy). The Blu-ray standards adopted Mandatory Managed Copy later than HD DVD, after HP requested it. Possible scenarios for Managed Copy include (but are not limited to): Create an exact duplicate onto a recordable disc for backup Create a full-resolution copy for storage on a media server Create a scaled-down version for watching on a portable device This feature was not included in the interim standard, so the first devices on the market did not have this capability. It was expected to be a part of the final AACS specification. In June 2009, the final AACS agreements were ratified and posted online, and include information on the Managed Copy aspects of AACS. 
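As a rough illustration of the key chain laid out in the "Decryption process" section above (MKB yields the media key Km; Km plus the Volume ID yields the volume unique key Kvu; Kvu decrypts the title keys, which decrypt the content), the following sketch walks through the same sequence in Python. It is not the real AACS algorithm: the subset-difference MKB processing and the one-way AES-G function are replaced by simple hash-based stand-ins, and every key value is invented for the example.

import hashlib
import hmac

# Stand-ins only: real AACS uses subset-difference MKB processing and the AES-G
# one-way function defined in its specification, not these hash constructions.
def process_mkb(mkb: bytes, device_key: bytes) -> bytes:
    """Derive the media key Km from the Media Key Block (schematic stand-in)."""
    return hmac.new(device_key, mkb, hashlib.sha256).digest()[:16]

def derive_kvu(km: bytes, volume_id: bytes) -> bytes:
    """Combine Km and the Volume ID into the Volume Unique Key Kvu (stand-in for AES-G)."""
    return hashlib.sha256(km + volume_id).digest()[:16]

def toy_decrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR cipher standing in for AES decryption of title keys and content."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decrypt_disc(mkb, volume_id, enc_title_key, enc_content, device_key):
    km = process_mkb(mkb, device_key)            # Media Key from the MKB
    kvu = derive_kvu(km, volume_id)              # Volume Unique Key
    title_key = toy_decrypt(kvu, enc_title_key)  # decrypt the title key with Kvu
    return toy_decrypt(title_key, enc_content)   # decrypt the content with the title key

# Made-up example, constructed in reverse so the chain round-trips:
device_key, mkb, volume_id = b"\x01" * 16, b"example-mkb", b"example-volume-id"
title_key = b"\x2a" * 16
kvu = derive_kvu(process_mkb(mkb, device_key), volume_id)
enc_title_key = toy_decrypt(kvu, title_key)
enc_content = toy_decrypt(title_key, b"example content stream")
print(decrypt_disc(mkb, volume_id, enc_title_key, enc_content, device_key))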
History On 24 February 2001, Dalit Naor, Moni Naor and Jeff Lotspiech published a paper entitled "Revocation and Tracing Schemes for Stateless Receivers", where they described a broadcast encryption scheme using a construct called Naor-Naor-Lotspiech subset-difference trees. That paper laid the theoretical foundations of AACS. The AACS LA consortium was founded in 2004. With DeCSS in hindsight, readers of IEEE Spectrum magazine voted AACS one of the technologies most likely to fail in the January 2005 issue. The final AACS standard was delayed, and then delayed again when an important member of the Blu-ray group voiced concerns. At the request of Toshiba, an interim standard was published which did not include some features, like managed copy. On July 5, 2009, the AACS1 license went online. Unlicensed decryption On 26 December 2006, a person using the alias "muslix64" published a working, open-source AACS decryption utility named BackupHDDVD, based on the publicly available AACS specifications. Given the correct keys, it can be used to decrypt AACS-encrypted content. A corresponding BackupBluRay program was soon developed. Blu-ray Copy is a program capable of copying Blu-rays to the hard drive or to blank BD-R discs. Security Both title keys and one of the keys used to decrypt them (known as Processing Keys in the AACS specifications) have been found by using debuggers to inspect the memory space of running HD DVD and Blu-ray player programs. Hackers also claim to have found Device Keys (used to calculate the Processing Key) and a Host Private Key (a key signed by the AACS LA used for hand-shaking between host and HD drive; required for reading the Volume ID). The first unprotected HD movies were available soon afterwards. The processing key was widely published on the Internet after it was found, and the AACS LA sent multiple DMCA takedown notices with the aim of censoring it. Some sites that rely on user-submitted content, like Digg and Wikipedia, tried to remove any mentions of the key. The Digg administrators eventually gave up trying to censor submissions that contained the key. The AACS key extractions highlight the inherent weakness in any DRM system that permits software players for PCs to be used for playback of content. No matter how many layers of encryption are employed, it does not offer any true protection, since the keys needed to obtain the unencrypted content stream must be available somewhere in memory for playback to be possible. The PC platform offers no way to prevent memory snooping attacks on such keys, since a PC configuration can always be emulated by a virtual machine, in theory without any running program or external system being able to detect the virtualization. The only way to wholly prevent attacks like this would require changes to the PC platform (see Trusted Computing) which could provide protection against such attacks. This would require that content distributors do not permit their content to be played on PCs without trusted computing technology, by not providing the companies making software players for non-trusted PCs with the needed encryption keys. On 16 April 2007, the AACS consortium announced that it had expired certain encryption keys used by PC-based applications. Patches were available for WinDVD and PowerDVD which used new and uncompromised encryption keys. The old, compromised keys can still be used to decrypt old titles, but not newer releases as they will be encrypted with these new keys.
All users of the affected players (even those considered "legitimate" by the AACS LA) are forced to upgrade or replace their player software in order to view new titles. Despite all revocations, current titles can be decrypted using new MKB v7, v9 or v10 keys widely available on the Internet. Besides spreading processing keys on the Internet, there have also been efforts to spread title keys on various sites. The AACS LA has sent DMCA takedown notices to such sites on at least one occasion. There is also commercial software (AnyDVD HD) that can circumvent the AACS protection. Apparently this program works even with movies released after the AACS LA expired the first batch of keys. While great care has been taken with AACS to ensure that contents are encrypted right up to the display device, on the first versions of some Blu-ray and HD DVD software players a perfect copy of any still frame from a film could be made simply by utilizing the Print Screen function of the Windows operating system. Patent challenges On 30 May 2007, Canadian encryption vendor Certicom sued Sony alleging that AACS violated two of its patents, "Strengthened public key protocol" and "Digital signatures on a Smartcard." The patents were filed in 1999 and 2001 respectively, and in 2003 the National Security Agency paid $25 million for the right to use 26 of Certicom's patents, including the two that Sony is alleged to have infringed on. The lawsuit was dismissed on May 27, 2009. See also History of attacks against Advanced Access Content System AACS encryption key controversy References External links AACS homepage AACS specifications Understanding AACS, an introductory forum thread, with the diagrams working. ISAN homepage, ISAN as required in the Content ID defined in AACS Introduction and Common Cryptographic Elements rev 0.91 libaacs, an open source library implementing AACS Hal Finney on 'AACS and Processing Key', Hal Finney's post on metzdowd.com cryptography mailing list Digital rights management standards Compact Disc and DVD copy protection Blu-ray Disc
Advanced Access Content System
[ "Technology" ]
2,688
[ "Computer standards", "Digital rights management standards" ]
11,010,269
https://en.wikipedia.org/wiki/Phallolysin
Phallolysin is a protein found in the Amanita phalloides species of the Amanita genus of mushrooms, the species commonly known as the death cap mushroom. The protein is toxic and causes cytolysis in many cells found in animals and is noted for its hemolytic properties. It was one of the first toxins discovered in Amanita phalloides when the various toxins in the species were first being researched. The protein itself is observed to come in 3 variations, with observed differences in isoelectric point. Cytolysis can be best described as the destruction of cells, likely due to exposure from an external source such as pathogens and toxins. Hemolysis then follows a similar destructive pathway, but instead focuses specifically on the destruction of red blood cells. Phallolysin is known to be thermolabile, meaning that it is destroyed at high temperatures, and acid labile, meaning that it is easily broken down in acidic environments. History The toxic properties of death cap mushrooms have been known for most of recorded history, with historical accounts implicating them in the deaths of emperors. Attempts to isolate the toxic compounds began in the late 19th century, with the cytolytic elements of A. phalloides being isolated in 1891. It has been thought that the Roman Emperor Claudius, in 54 AD, and the Holy Roman Emperor Charles VI, in 1740, were some of the earliest victims of death cap poisoning. Due to this, the death cap mushroom has gained the nickname the ‘killer of kings.’ Research into the hemolytic properties of Amanita phalloides, or the death cap mushroom, began with Eduard Rudolf Kobert in 1891, who originally denoted the hemolytic component ‘phallin,’ and was continued by John Jacob Abel and William Webber Ford in 1908. These mushrooms have been implicated in greater than 90% of all cases of mushroom poisoning, and no active treatment for intoxication currently exists. This toxin targets mainly the liver, but may also impact the kidneys and central nervous system as well. As a result of its hemolytic and cytolytic properties, this toxin was considered for anti-tumor treatments in the early 1970s, where the osmotic lysis of cell membranes was hoped to treat the uncontrollable cell division that tumor cells are notorious for. However, in addition to the non-specificity of the toxin, these trials resulted in the development of an increased potassium concentration in the bloodstream due to the extreme intravascular hemolysis and cytolysis of multiple cell types. Due to the discovery of these lethal side effects, this antitumor treatment route was halted to make room for more sophisticated treatment strategies. Physical properties Phallolysin has three variations, which differ in observed isoelectric point. The variations have differences in the amino acids that make up the protein structure, with identical amounts of some amino acids while varying in others. They have near identical molecular weights of 34 kDa. This protein has been found to be relatively stable in alkaline solutions. The structure of this toxin is a combination of two to three cytolytic proteins. Two of the three proteins have been found to be composed of amino acids with high solubility in water, each containing one tryptophan residue. This protein is composed of roughly 25% neutral sugars such as galactose, glucose, and mannose, but lacks amino sugars.
Although they are inactivated by temperatures above 65 °C and acidic environments, they are able to remain stable when coming in contact with proteases or glycosidic enzymes. Such proteases include pepsin, trypsin, alpha-chymotrypsin, subtilisin, pronase E, bromelin, proteinase K, alpha-amylase, and pancreatin. The cytolytic properties of phallolysin can be attributed to the toxin's capability to produce protrusions on the plasma membrane, and further rupture these protrusions, resulting in the formation of transmembrane ion channels in the cell membrane lipid bilayers. These openings then allow water to diffuse into the cell at a rate that the cell is unable to withstand, further destroying the cell via cytolysis, or osmotic lysis. The three types of phallolysin are denoted Phallolysin A, B, and C. Phallolysin A maintains an isoelectric point of 8.1, Phallolysin B maintains an isoelectric point range of 7.5 - 7.6, and Phallolysin C maintains an isoelectric point of 7.0. This protein functions best within a weakly acidic environment, as a result of being denatured by more acidic environments. Temperatures of 65 °C sustained for roughly 30 minutes have the ability to destroy the toxin's hemolytic capabilities. Effects on animal cells Phallolysin has been observed to have hemolytic properties toward a variety of animal cells, with it primarily being observed in mammals. The toxic effects are reduced at higher temperatures. These properties are believed to be instigated by ion permeable membrane channels that form as a result of the hemolytic capabilities of phallolysin. In addition to hemolysis, phallolysin in high concentrations is also thought to cause damage to bovine phospholipids with a negative net charge, phosphatidylcholine, and sphingomyelin containing liposomes. However, phospholipid membranes are only susceptible to phallolysin without receptor proteins being present. These effects are similar to those of staphylococcal α-toxin. Cytolysis can go into effect at concentrations beginning at 10⁻⁸ M, with a lag time of roughly only 2 to 3 minutes. This is accompanied by the rapid movement of Na+ ions into the cell, and the rapid movement of K+ ions out of the cell. This rapid rate of cytolysis occurs primarily in human erythrocytes, or human red blood cells, due to the presence of glycoproteins or glycolipids that act as specific receptors. This interaction further backs up the claim that phallolysin does not target the cell's plasma membrane, but rather the glycoprotein receptors. This protein has also been known to increase levels of cellular phospholipase, which is a lipolytic enzyme that functions as a phospholipid hydrolyzer to break ester bonds in phospholipids. This has been discovered in the specific cellular phospholipase A2 in 3T3 Swiss mouse fibroblasts, which are key components in the structural formation of connective tissues. This study suggests that phallolysin additionally acts by hydrolyzing membrane phospholipids in fibroblasts. Such results also suggest that the cell surfaces on which phallolysin acts are also Ca2+ enzyme dependent; however, the protein itself is not Ca2+ dependent. Phallolysin has additionally been discovered to interact mostly with D-galactose and the β-derivatives, with no glycosylation preferences between O-glycosylation and N-glycosylation.
When performing studies on the treatment of rat mast cells with multiple fungal cytolysins, phallolysin was found to interact greatly with lecithin, a fatty substance found in the mice's tissue. It was also found to cause degranulation, or the release of histamine, of the mast cells, depending on the dosage. Various mammals were additionally tested to determine the sensitivity of red blood cells to this toxin. From this, it was determined that mice are more sensitive than rabbits, and rabbits and guinea pigs are roughly equal in sensitivity. Rabbits and guinea pigs are more sensitive than rats, rats are more sensitive than humans, humans are more sensitive than dogs and pigs, and dogs and pigs are more sensitive than sheep and cattle. This is further displayed in the order: mouse > rabbit = guinea pig > rat > man > dog ≃ pig > sheep-cattle. See also Amanita phalloides Amanita Hemolysis Phallotoxin Amatoxin Virotoxins Phalloidin Antamanide References Mycotoxins found in Basidiomycota Proteins
Phallolysin
[ "Chemistry" ]
1,776
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
11,012,481
https://en.wikipedia.org/wiki/Stephen%20J.%20Mellor
Stephen J. Mellor (born 1952) is an American computer scientist, developer of the Ward–Mellor method for real-time computing, the Shlaer–Mellor method, and Executable UML, and signatory to the Agile Manifesto. Biography Mellor received a BA in computer science from the University of Essex in 1974, and started working at CERN in Geneva, Switzerland as a programmer in BCPL. In 1977 he became a software engineer at the Lawrence Berkeley Laboratory, and in 1982 a consultant at Yourdon, Inc. At Yourdon, in cooperation with Paul Ward, he developed the Ward–Mellor method and published the book series Structured Development for Real Time Systems in 1985. Together with Sally Shlaer he founded Project Technology in 1985. That company was acquired by Mentor Graphics in 2004. Mellor stayed as chief scientist of the Embedded Systems Division at Mentor Graphics for another two years, and has been self-employed since 2006. Since 1998 Mellor has contributed to the Object Management Group, chairing the consortium that added executable actions to the UML, and the specification of model-driven architecture (MDA). He also chairs the advisory board of the IEEE Software magazine. Since 2013, Mellor has served as CTO for the Industrial Internet Consortium. Publications 1985. Structured Development for Real-Time Systems: Essential Modeling Techniques. With Paul T. Ward. Prentice Hall. 1986. Structured Development for Real-Time Systems: Implementation Modeling Techniques (Structured Development for Real-Time Systems Vol. 1). With Paul T. Ward. Prentice Hall. 1988. Object Oriented Systems Analysis: Modeling the World in Data. With Sally Shlaer. Prentice Hall. 1992. Object Life Cycles: Modeling the World In States. With Sally Shlaer. Prentice Hall. 2002. Executable UML: A Foundation for Model Driven Architecture. With Marc J. Balcer. Addison-Wesley. 2004. MDA Distilled. With Kendall Scott, Axel Uhl, Dirk Weise. Addison-Wesley. Articles, a selection: 1989. "An object-oriented approach to domain analysis" with S. Shlaer. In: ACM SIGSOFT Software Engineering Notes. Vol 14–5, July 1989. pp. 66–77. 1997. "Why explore object methods, patterns, and architectures?" with Ralph Johnson. In: IEEE Software. Vol. 14, no. 1, pp. 27–29. 1999. "Software-platform-independent, precise action specifications for UML". With S. Tockey, R. Arthaud, P. LeBlanc - The Unified Modeling ..., 1999. 2002. "Make models be assets". In: Commun. ACM Vol 45–11. pp. 76–78. 2003. "A framework for aspect-oriented modeling". Paper from 4th (AOSD) Modeling With (UML) Workshop, October 2003. 2004. "Agile MDA" White paper 2004. See also Data flow State transition References External links Stephen J. Mellor homepage R. Whetton, M. Jones and D. Murray, "The use of Ward and Mellor Structured Methodology for the design of a complex real time system," IEE Colloquium on Computer Aided Software Engineering Tools for Real-Time Control, 1991, pp. 5/1-5/4. Ward and Mellor methodology. 1952 births Living people British computer scientists Alumni of the University of Essex People associated with CERN Real-time technology Real-time computing Agile software development
Stephen J. Mellor
[ "Technology" ]
731
[ "Real-time computing", "nan" ]
11,012,831
https://en.wikipedia.org/wiki/Bioenergetic%20systems
Bioenergetic systems are metabolic processes that relate to the flow of energy in living organisms. Those processes convert energy into adenosine triphosphate (ATP), which is the form suitable for muscular activity. There are two main forms of synthesis of ATP: aerobic, which uses oxygen from the bloodstream, and anaerobic, which does not. Bioenergetics is the field of biology that studies bioenergetic systems. Overview The process that converts the chemical energy of food into ATP (which can release energy) is not dependent on oxygen availability. During exercise, the supply and demand of oxygen available to muscle cells is affected by duration and intensity and by the individual's cardiorespiratory fitness level. It is also affected by the type of activity; for instance, during isometric activity the contracted muscles restrict blood flow (leaving oxygen and blood-borne fuels unable to be delivered to muscle cells adequately for oxidative phosphorylation). Three systems can be selectively recruited, depending on the amount of oxygen available, as part of the cellular respiration process to generate ATP for the muscles. They are the ATP–CP (phosphagen) system, the anaerobic system, and the aerobic system. Adenosine triphosphate ATP is the only usable form of chemical energy for musculoskeletal activity. It is stored in most cells, particularly in muscle cells. Other forms of chemical energy, such as those available from oxygen and food, must be transformed into ATP before they can be utilized by the muscle cells. Coupled reactions Since energy is released when ATP is broken down, energy is required to rebuild or resynthesize it. The building blocks of ATP synthesis are the by-products of its breakdown: adenosine diphosphate (ADP) and inorganic phosphate (Pi). The energy for ATP resynthesis comes from three different series of chemical reactions that take place within the body. Two of the three depend upon the food eaten, whereas the other depends upon a chemical compound called phosphocreatine. The energy released from any of these three series of reactions is utilized in reactions that resynthesize ATP. The separate reactions are functionally linked in such a way that the energy released by one is used by the other. Three processes can synthesize ATP: ATP–CP system (phosphagen system) – At maximum intensity, this system is used for up to 10–15 seconds. The ATP–CP system neither uses oxygen nor produces lactic acid if oxygen is unavailable and is thus called alactic anaerobic. This is the primary system behind very short, powerful movements like a golf swing, a 100 m sprint or powerlifting. Anaerobic system – This system predominates in supplying energy for intense exercise lasting less than two minutes. It is also known as the glycolytic system. An example of an activity of the intensity and duration that this system works under would be a 400 m sprint. Aerobic system – This is the long-duration energy system. After five minutes of exercise, the O2 system is dominant. In a 1 km run, this system is already providing approximately half the energy; in a marathon run it provides 98% or more. Around mile 20 of a marathon, runners typically "hit the wall," having depleted their glycogen reserves; they then attain a "second wind," which is entirely aerobic metabolism, fueled primarily by free fatty acids. Aerobic and anaerobic systems usually work concurrently. When describing activity, it is not a question of which energy system is working, but which predominates.
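As a crude way to restate the durations quoted above, the sketch below maps an all-out effort duration to the predominant system. The cutoffs are only the approximate figures given in this overview (up to about 15 seconds, under about two minutes, and five minutes or more); in reality the systems overlap and work concurrently.

# Rough sketch of the duration cutoffs quoted above; the thresholds are the
# article's approximate figures, not physiological constants.
def predominant_energy_system(seconds_at_max_effort: float) -> str:
    if seconds_at_max_effort <= 15:
        return "ATP-CP (phosphagen) system"
    if seconds_at_max_effort < 120:
        return "anaerobic (glycolytic) system"
    return "aerobic system"

print(predominant_energy_system(10))   # 100 m sprint -> ATP-CP (phosphagen) system
print(predominant_energy_system(60))   # 400 m sprint -> anaerobic (glycolytic) system
print(predominant_energy_system(600))  # distance run -> aerobic system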
Anaerobic and aerobic metabolism The term metabolism refers to the various series of chemical reactions that take place within the body. Aerobic refers to the presence of oxygen, whereas anaerobic refers to a series of chemical reactions that does not require the presence of oxygen. The ATP-CP series and the lactic acid series are anaerobic, whereas the oxygen series is aerobic. Anaerobic metabolism ATP–CP: the phosphagen system Creatine phosphate (CP), like ATP, is stored in muscle cells. When it is broken down, a considerable amount of energy is released. The energy released is coupled to the energy requirement necessary for the resynthesis of ATP. The total muscular stores of both ATP and CP are small. Thus, the amount of energy obtainable through this system is limited. The phosphagen stored in the working muscles is typically exhausted in seconds of vigorous activity. However, the usefulness of the ATP-CP system lies in the rapid availability of energy rather than quantity. This is important with respect to the kinds of physical activities that humans are capable of performing. The phosphagen system (ATP-PCr) occurs in the cytosol (a gel-like substance) of the sarcoplasm of skeletal muscle, and in the myocyte's cytosolic compartment of the cytoplasm of cardiac and smooth muscle. During muscle contraction: H2O + ATP → H+ + ADP + Pi (Mg2+ assisted, utilization of ATP for muscle contraction by ATPase) H+ + ADP + CP → ATP + Creatine (Mg2+ assisted, catalyzed by creatine kinase, ATP is used again in the above reaction for continued muscle contraction) 2 ADP → ATP + AMP (catalyzed by adenylate kinase/myokinase when CP is depleted, ATP is again used for muscle contraction) Muscle at rest: ATP + Creatine → H+ + ADP + CP (Mg2+ assisted, catalyzed by creatine kinase) ADP + Pi → ATP (during anaerobic glycolysis and oxidative phosphorylation) When the phosphagen system has been depleted of phosphocreatine (creatine phosphate), the resulting AMP produced from the adenylate kinase (myokinase) reaction is primarily regulated by the purine nucleotide cycle. Anaerobic glycolysis This system is known as anaerobic glycolysis. "Glycolysis" refers to the breakdown of sugar. In this system, the breakdown of sugar supplies the necessary energy from which ATP is manufactured. When sugar is metabolized anaerobically, it is only partially broken down and one of the byproducts is lactic acid. This process creates enough energy to couple with the energy requirements to resynthesize ATP. When H+ ions accumulate in the muscles, causing the blood pH level to reach low levels, temporary muscle fatigue results. Another limitation of the lactic acid system that relates to its anaerobic quality is that only a few moles of ATP can be resynthesized from the breakdown of sugar. This system cannot be relied on for extended periods of time. The lactic acid system, like the ATP-CP system, is important primarily because it provides a rapid supply of ATP energy. For example, exercises that are performed at maximum rates for between 1 and 3 minutes depend heavily upon the lactic acid system. In activities such as running 1500 meters or a mile, the lactic acid system is used predominantly for the "kick" at the end of the race. Aerobic metabolism Aerobic glycolysis Glycolysis – The first stage is known as glycolysis, which produces 2 ATP molecules, 2 reduced molecules of nicotinamide adenine dinucleotide (NADH) and 2 pyruvate molecules that move on to the next stage – the Krebs cycle.
Glycolysis takes place in the cytoplasm of normal body cells, or the sarcoplasm of muscle cells. The Krebs cycle – This is the second stage, and the products of this stage of the aerobic system are a net production of one ATP, one carbon dioxide molecule, three reduced NAD+ molecules, and one reduced flavin adenine dinucleotide (FAD) molecule. (The molecules of NAD+ and FAD mentioned here are electron carriers, and if they are reduced, they have had one or two H+ ions and two electrons added to them.) These metabolite counts are for each turn of the Krebs cycle. The Krebs cycle turns twice for each six-carbon molecule of glucose that passes through the aerobic system – as two three-carbon pyruvate molecules enter the Krebs cycle. Before pyruvate enters the Krebs cycle it must be converted to acetyl coenzyme A. During this link reaction, for each molecule of pyruvate converted to acetyl coenzyme A, a NAD+ is also reduced. This stage of the aerobic system takes place in the matrix of the cells' mitochondria. Oxidative phosphorylation – The last stage of the aerobic system produces the largest yield of ATP – a total of 34 ATP molecules. It is called oxidative phosphorylation because oxygen is the final acceptor of electrons and hydrogen ions (hence oxidative) and an extra phosphate is added to ADP to form ATP (hence phosphorylation). This stage of the aerobic system occurs on the cristae (infoldings of the membrane of the mitochondria). The reaction of each NADH in this electron transport chain provides enough energy for 3 molecules of ATP, while reaction of FADH2 yields 2 molecules of ATP. This means that 10 total NADH molecules allow the regeneration of 30 ATP, and 2 FADH2 molecules allow for 4 ATP molecules to be regenerated (in total 34 ATP from oxidative phosphorylation, plus 4 from the previous two stages, producing a total of 38 ATP in the aerobic system). NADH and FADH2 are oxidized to allow the NAD+ and FAD to be reused in the aerobic system, while electrons and hydrogen ions are accepted by oxygen to produce water, a harmless byproduct. Fatty acid oxidation Triglycerides stored in adipose tissue and in other tissues, such as muscle and liver, release fatty acids and glycerol in a process known as lipolysis. Fatty acids are slower than glucose to convert into acetyl-CoA, as they first have to go through beta oxidation. It takes about 10 minutes for fatty acids to sufficiently produce ATP. Fatty acids are the primary fuel source at rest and in low to moderate intensity exercise. Though slower than glucose, their yield is much higher. One molecule of glucose produces through aerobic glycolysis a net of 30-32 ATP, whereas a fatty acid can produce through beta oxidation a net of approximately 100 ATP depending on the type of fatty acid. For example, palmitic acid can produce a net of 106 ATP. Amino acid degradation Normally, amino acids do not provide the bulk of fuel substrates. However, in times of glycolytic or ATP crisis, amino acids can convert into pyruvate, acetyl-CoA, and citric acid cycle intermediates. This is useful during strenuous exercise or starvation as it provides faster ATP than fatty acids; however, it comes at the expense of risking protein catabolism (such as the breakdown of muscle tissue) to maintain the free amino acid pool. Purine nucleotide cycle The purine nucleotide cycle is used in times of glycolytic or ATP crisis, such as strenuous exercise or starvation.
It produces fumarate, a citric acid cycle intermediate, which enters the mitochondrion through the malate-aspartate shuttle, and from there produces ATP by oxidative phosphorylation. Ketolysis During starvation or while consuming a low-carb/ketogenic diet, the liver produces ketones. Ketones are needed because fatty acids cannot pass the blood-brain barrier, blood glucose levels are low, and glycogen reserves are depleted. Ketones also convert to acetyl-CoA faster than fatty acids. After the ketones convert to acetyl-CoA in a process known as ketolysis, the acetyl-CoA enters the citric acid cycle to produce ATP by oxidative phosphorylation. The longer that the person's glycogen reserves have been depleted, the higher the blood concentration of ketones, typically due to starvation or a low carb diet (βHB 3 - 5 mM). Prolonged high-intensity aerobic exercise, such as running 20 miles, where individuals "hit the wall," can create post-exercise ketosis; however, the level of ketones produced is smaller (βHB 0.3 - 2 mM). Ethanol metabolism Ethanol (alcohol) is converted first into acetaldehyde and then into acetate, consuming NAD+ at each step. The acetate is then converted into acetyl-CoA. When alcohol is consumed in small quantities, the NADH/NAD+ ratio remains balanced enough for the acetyl-CoA to be used by the Krebs cycle for oxidative phosphorylation. However, even moderate amounts of alcohol (1-2 drinks) result in more NADH than NAD+, which inhibits oxidative phosphorylation. When the NADH/NAD+ ratio is disrupted (far more NADH than NAD+), this is called pseudohypoxia. The Krebs cycle needs NAD+, as well as oxygen for oxidative phosphorylation. Without sufficient NAD+, the impaired aerobic metabolism mimics hypoxia (insufficient oxygen), resulting in excessive use of anaerobic glycolysis and a disrupted pyruvate/lactate ratio (low pyruvate, high lactate). The conversion of pyruvate into lactate produces NAD+, but only enough to maintain anaerobic glycolysis. In chronic excessive alcohol consumption (alcoholism), the microsomal ethanol oxidizing system (MEOS) is used in addition to alcohol dehydrogenase. See also Hitting the wall (muscle fatigue due to glycogen depletion) Second wind (increased ATP synthesis primarily from free fatty acids) References Further reading Exercise Physiology for Health, Fitness and Performance. Sharon Plowman and Denise Smith. Lippincott Williams & Wilkins; Third edition (2010). Ch. 38. Hormonal Regulation of Energy Metabolism. Berne and Levy Physiology, 6th ed (2008) The effects of increasing exercise intensity on muscle fuel utilisation in humans. Van Loon et al. Journal of Physiology (2001) (OTEP) Open Textbook of Exercise Physiology. Edited by Brian R. MacIntosh (2023) ATP metabolism
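The classic ATP bookkeeping for one glucose molecule in the aerobic system, described in the aerobic metabolism section above, can be spelled out as a short calculation. It uses the 3 ATP per NADH and 2 ATP per FADH2 figures quoted in that section; modern estimates of roughly 2.5 and 1.5 give the lower 30-32 ATP figure also mentioned.

# Worked version of the classic yield quoted above for one glucose molecule.
ATP_PER_NADH = 3    # the article's figure; ~2.5 in modern accounting
ATP_PER_FADH2 = 2   # the article's figure; ~1.5 in modern accounting

substrate_level_atp = 2 + 2     # glycolysis (2) + Krebs cycle (1 per turn x 2 turns)
nadh = 2 + 2 + 6                # glycolysis (2) + link reaction (2) + Krebs (3 per turn x 2)
fadh2 = 2                       # Krebs cycle (1 per turn x 2 turns)

oxidative_atp = nadh * ATP_PER_NADH + fadh2 * ATP_PER_FADH2
print(oxidative_atp)                        # 34 ATP from oxidative phosphorylation
print(substrate_level_atp + oxidative_atp)  # 38 ATP in total for the aerobic system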
Bioenergetic systems
[ "Chemistry", "Biology" ]
3,041
[ "Exercise biochemistry", "Biochemistry", "Chemical energy sources" ]
11,013,368
https://en.wikipedia.org/wiki/British%20Pharmaceutical%20Codex
The British Pharmaceutical Codex (BPC) was first published in 1907, to supplement the British Pharmacopoeia which although extensive, did not cover all the medicinal items that a pharmacist might require in daily work. Other books existed, such as Squire's, but the BPC was intended to be official, published by the Pharmaceutical Society of Great Britain (PSGB). It laid down standards for the composition of medicines and surgical dressings. Subsequent editions were published in 1911, 1923, 1934, 1949, 1954, 1959, 1963, 1968, and finally 1973. The 1934 edition was described by the British Medical Journal as "one of the most useful reference books available to the medical profession". In 1963 Edward G Feldmann, director of revision for the US National Formulary, described it as "a compilation of highly authoritative and useful therapeutic (actions and doses) information as well as a valuable compendium of recognised standards and specifications". In 1979 a new edition was published with a new title, The Pharmaceutical Codex. The Medicines Commission had recommended in 1972 that the British Pharmacopoeia should henceforth be the only compendium of official standards for medicines in the UK, and the BPC lost its status as an official book. The PSGB remained as the publishers. The current edition is the 12th, published in 1994. References Pharmacology literature 1907 non-fiction books British books Pharmacy in the United Kingdom
British Pharmaceutical Codex
[ "Chemistry" ]
294
[ "Pharmacology", "Pharmacology literature" ]
11,013,758
https://en.wikipedia.org/wiki/Face-ism
Face-ism or facial prominence is the relative proportion of the face compared to the body in the portrayal of men and women. The media tends to give greater prominence to men's faces and to women's bodies. Origin and evidence The term "face-ism" or "facial prominence" was initially defined in a 1983 study in which facial prominence was measured by a "Face-ism index", which is the ratio of two linear measurements, with the distance (in millimeters or any other unit) from the top of the head to the lowest visible point of the chin being the numerator and the distance from the top of the head to the lowest visible part of the subject's body the denominator. It was found that across societies and time, facial prominence of men has been much higher than that of women. Subsequent studies have generated consistent findings and thus helped confirm the pervasive presence of face-ism. For instance, a prevalent face-ism phenomenon was observed in news magazines and women's magazines of the 1970s and 1980s. Face-ism has been documented in prime-time television programs. Evidence shows that face-ism was still present in mainstream printed media as recently as 2004: men in intellectually focused occupations tend to have higher face-to-body ratios than women in similar professions, while women in physical occupations tend to have higher face-to-body ratios than men in similar professions. A cross-cultural study on face-ism found that face-ism in photographs of politicians is more pronounced in gender-egalitarian societies compared to gender-unequal societies. There is no relation between face-ism and the perception of intellect. Implications It was found that regardless of gender difference, news photographs featuring high face prominence tend to generate more positive ratings with regard to intelligence, ambition and physical appearance than those with low face prominence. Similarly, another study argued that because a series of mental life dimensions, including intelligence, personality, and character, are closely associated with the face and head, the higher face-ism of men may convey impressions of greater intelligence, dominance, and control. In contrast, the greater body-ism of women (analyzed in television beer commercials) serves to reinforce the stereotypical images of women as trophies or sex objects without personalities. Face-ism may not be merely restricted to gender difference but can apply to racial difference as well. For instance, one study revealed that Caucasians have higher face-ism than blacks across different media types. See also Advertising Gender Media bias Sexism Stereotypes Notes References Media studies Face Media bias Ratios Visual perception
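The index defined above reduces to a single division, illustrated by the minimal sketch below; the two sample measurements are hypothetical.

# Face-ism index as defined above: head-to-chin distance divided by
# head-to-lowest-visible-body-point distance, in the same units.
def face_ism_index(head_to_chin: float, head_to_body_bottom: float) -> float:
    return head_to_chin / head_to_body_bottom

print(face_ism_index(60, 75))    # 0.8: close-up portrait, high facial prominence
print(face_ism_index(25, 170))   # ~0.15: full-body shot, low facial prominence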
Face-ism
[ "Mathematics" ]
538
[ "Arithmetic", "Ratios" ]
11,014,361
https://en.wikipedia.org/wiki/American%20Society%20for%20Mass%20Spectrometry
The American Society for Mass Spectrometry (ASMS) is a professional association based in the United States that supports the scientific field of mass spectrometry. As of 2018, the society had approximately 10,000 members primarily from the US, but also from around the world. The society holds a large annual meeting, typically in late May or early June as well as other topical conferences and workshops. The society publishes the Journal of the American Society for Mass Spectrometry. Awards The Society recognizes achievements and promotes academic research through four annual awards. The Biemann Medal and the John B. Fenn Award for a Distinguished Contribution in Mass Spectrometry both are awarded in recognition of singular achievements or contributions in fundamental or applied mass spectrometry, with the Biemann Medal being focused on individuals who are early in their careers. The Ronald A. Hites Award is awarded for outstanding original research demonstrated in papers published in the Journal of the American Society for Mass Spectrometry. The Research Awards are given to young scientists in mass spectrometry, based on the evaluation of their proposed research. Publications Journal of the American Society for Mass Spectrometry Measuring Mass: From Positive Rays to Proteins Past presidents The past presidents of ASMS are: Conferences The Society holds an annual conference in late May or early June as well as topical conferences (at Asilomar State Beach in California and Sanibel Island, Florida) and a fall workshop, which is also focused on a single topic. Conferences on Mass Spectrometry and Allied Topics have been held yearly since 1953. See also International Mass Spectrometry Foundation List of female mass spectrometrists References External links ASMS website Chemistry societies Mass spectrometry Organizations established in 1969 1969 establishments in the United States
American Society for Mass Spectrometry
[ "Physics", "Chemistry" ]
364
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "nan", "Chemistry societies", "Matter" ]
11,014,498
https://en.wikipedia.org/wiki/DVD
The DVD (common abbreviation for digital video disc or digital versatile disc) is a digital optical disc data storage format. It was invented and developed in 1995 and first released on November 1, 1996, in Japan. The medium can store any kind of digital data and has been widely used to store video programs (watched using DVD players), software and other computer files. DVDs offer significantly higher storage capacity than compact discs (CD) while having the same dimensions. A standard single-layer DVD can store up to 4.7 GB of data, a dual-layer DVD up to 8.5 GB. Variants can store up to a maximum of 17.08 GB. Prerecorded DVDs are mass-produced using molding machines that physically stamp data onto the DVD. Such discs are a form of DVD-ROM because data can only be read and not written or erased. Blank recordable DVD discs (DVD-R and DVD+R) can be recorded once using a DVD recorder and then function as a DVD-ROM. Rewritable DVDs (DVD-RW, DVD+RW, and DVD-RAM) can be recorded and erased many times. DVDs are used in DVD-Video consumer digital video format and less commonly in DVD-Audio consumer digital audio format, as well as for authoring DVD discs written in a special AVCHD format to hold high definition material (often in conjunction with AVCHD format camcorders). DVDs containing other types of information may be referred to as DVD data discs. Etymology The Oxford English Dictionary comments that, "In 1995, rival manufacturers of the product initially named digital video disc agreed that, in order to emphasize the flexibility of the format for multimedia applications, the preferred abbreviation DVD would be understood to denote digital versatile disc." The OED also states that in 1995, "The companies said the official name of the format will simply be DVD. Toshiba had been using the name 'digital video disc', but that was switched to 'digital versatile disc' after computer companies complained that it left out their applications." "Digital versatile disc" is the explanation provided in a DVD Forum Primer from 2000 and in the DVD Forum's mission statement, whose stated purpose is to promote broad acceptance of DVD products across the entertainment and other industries. Because DVDs became highly popular for the distribution of movies in the 2000s, the term DVD became popularly used in English as a noun to describe specifically a full-length movie released on the format; for example, the phrase to "watch a DVD" describes watching a movie on DVD. History Development and launch Released in 1987, CD Video used analog video encoding on optical discs matching the established standard size of audio CDs. Video CD (VCD) became one of the first formats for distributing digitally encoded films in this format, in 1993. In the same year, two new optical disc storage formats were being developed. One was the Multimedia Compact Disc (MMCD), backed by Philips and Sony (developers of the CD and CD-i), and the other was the Super Density (SD) disc, supported by Toshiba, Time Warner, Matsushita Electric, Hitachi, Mitsubishi Electric, Pioneer, Thomson, and JVC. By the time of the press launches for both formats in January 1995, the MMCD nomenclature had been dropped, and Philips and Sony were referring to their format as Digital Video Disc (DVD). On May 3, 1995, an ad hoc, industry technical group formed from five computer companies (IBM, Apple, Compaq, Hewlett-Packard, and Microsoft) issued a press release stating that they would only accept a single format.
The group voted to boycott both formats unless the two camps agreed on a single, converged standard. They recruited Lou Gerstner, president of IBM, to pressure the executives of the warring factions. In one significant compromise, the MMCD and SD groups agreed to adopt proposal SD 9, which specified that both layers of the dual-layered disc be read from the same side—instead of proposal SD 10, which would have created a two-sided disc that users would have to turn over. Philips and Sony strongly insisted on the EFMPlus channel code that Kees Schouhamer Immink had designed for the MMCD, because it made it possible to apply the existing CD servo technology. Its drawback was a reduction in capacity from 5 to 4.7 Gbyte. As a result, the DVD specification provided a storage capacity of 4.7 GB (4.38 GiB) for a single-layered, single-sided disc and 8.5 GB (7.92 GiB) for a dual-layered, single-sided disc. The DVD specification ended up similar to Toshiba and Matsushita's Super Density Disc, except for the dual-layer option. MMCD was single-sided and optionally dual-layer, whereas SD was two half-thickness, single-layer discs which were pressed separately and then glued together to form a double-sided disc. Philips and Sony decided that it was in their best interests to end the format war, and on September 15, 1995 agreed to unify with companies backing the Super Density Disc to release a single format, with technologies from both. After other compromises between MMCD and SD, the group of computer companies won the day, and a single format was agreed upon. The computer companies also collaborated with the Optical Storage Technology Association (OSTA) on the use of their implementation of the ISO-13346 file system (known as Universal Disk Format) for use on the new DVDs. The format's details were finalized on December 8, 1995. In November 1995, Samsung announced it would start mass-producing DVDs by September 1996. The format launched on November 1, 1996, in Japan, mostly with music video releases. The first major releases from Warner Home Video arrived on December 20, 1996, with four titles being available. The format's release in the U.S. was delayed multiple times, from August 1996 to October 1996 and then November 1996, before finally settling on early 1997. Players began to be produced domestically that winter, with March 24, 1997, as the U.S. launch date of the format proper in seven test markets. Approximately 32 titles were available on launch day, mainly from the Warner Bros., MGM, and New Line libraries, with the notable inclusion of the 1996 film Twister. However, the launch was planned for the following day (March 25), leading to a distribution change with retailers and studios to prevent similar violations of breaking the street date. The nationwide rollout for the format happened on August 22, 1997. DTS announced in late 1997 that they would be coming onto the format. The sound system company revealed details in a November 1997 online interview, and clarified it would release discs in early 1998. However, this date would be pushed back several times before finally releasing their first titles at the 1999 Consumer Electronics Show. In 2001, a blank recordable DVD disc cost the equivalent of US$27.34 in 2022 dollars.
Immediately following the formal adoption of a unified standard for DVD, two of the four leading video game console companies (Sega and The 3DO Company) said they already had plans to design a gaming console with DVDs as the source medium. Sony stated at the time that they had no plans to use DVD in their gaming systems, despite being one of the developers of the DVD format and eventually the first company to actually release a DVD-based console. Game consoles such as the PlayStation 2, Xbox, and Xbox 360 use DVDs as their source medium for games and other software. Contemporary games for Windows were also distributed on DVD. Early DVDs were mastered using DLT tape, but using DVD-R DL or +R DL eventually became common. TV DVD combos, combining a standard definition CRT TV or an HD flat panel TV with a DVD mechanism under the CRT or on the back of the flat panel, and VCR/DVD combos were also available for purchase. For consumers, DVD soon replaced VHS as the favored choice for home movie releases. In 2001, DVD players outsold VCRs for the first time in the United States. At that time, one in four American households owned a DVD player. By 2007, about 80% of Americans owned a DVD player, a figure that had surpassed VCRs; it was also higher than personal computers or cable television. Specifications The DVD specifications created and updated by the DVD Forum are published as so-called DVD Books (e.g. DVD-ROM Book, DVD-Audio Book, DVD-Video Book, DVD-R Book, DVD-RW Book, DVD-RAM Book, DVD-AR (Audio Recording) Book, DVD-VR (Video Recording) Book, etc.). DVD discs are made up of two discs; normally one is blank, and the other contains data. Each disc is 0.6 mm thick, and they are glued together to form a DVD disc. The gluing process must be done carefully to make the disc as flat as possible to avoid both birefringence and "disc tilt", which is when the disc is not perfectly flat, preventing it from being read. Some specifications for mechanical, physical and optical characteristics of DVD optical discs can be downloaded as freely available standards from the ISO website. There are also equivalent European Computer Manufacturers Association (Ecma) standards for some of these specifications, such as Ecma-267 for DVD-ROMs. Also, the DVD+RW Alliance publishes competing recordable DVD specifications such as DVD+R, DVD+R DL, DVD+RW or DVD+RW DL. These DVD formats are also ISO standards. Some DVD specifications (e.g. for DVD-Video) are not publicly available and can be obtained only from the DVD Format/Logo Licensing Corporation (DVD FLLC) for a fee of US$5000. Every subscriber must sign a non-disclosure agreement as certain information on the DVD Books is proprietary and confidential. Double-sided discs Borrowing from the LaserDisc format, the DVD standard includes DVD-10 discs (Type B in ISO) with two recorded data layers such that only one layer is accessible from either side of the disc. This doubles the total nominal capacity of a DVD-10 disc to 9.4 GB (8.75 GiB), but each side is locked to 4.7 GB. Like DVD-5 discs, DVD-10 discs are defined as single-layer (SL) discs. Dual-layer discs DVD hardware accesses the additional layer (layer 1) by refocusing the laser through an otherwise normally-placed, semitransparent first layer (layer 0). This laser refocus—and the subsequent time needed to reacquire laser tracking—can cause a noticeable pause in A/V playback on earlier DVD players, the length of which varies between hardware. 
A printed message explaining that the layer-transition pause was not a malfunction became standard on DVD keep cases. During mastering, a studio could make the transition less obvious by timing it to occur just before a camera angle change or other abrupt shift, an early example being the DVD release of Toy Story. Later in the format's life, larger data buffers and faster optical pickups in DVD players made layer transitions effectively invisible regardless of mastering. Dual-layer DVDs are recorded using Opposite Track Path (OTP). Combinations of the above The DVD Book also permits an additional disc type called DVD-14: a hybrid double-sided disc with one dual-layer side, one single-layer side, and a total nominal capacity of 12.3 GB. DVD-14 has no counterpart in ISO. Such additional disc types are extremely rare due to their complicated and expensive manufacturing. For this reason, some DVDs that were initially issued as double-sided discs were later pressed as two-disc sets. Note: The above sections regarding disc types pertain to 12 cm discs. The same disc types exist for 8 cm discs: ISO standards still regard these discs as Types A–D, while the DVD Book assigns them distinct disc types. DVD-14 has no analogous 8 cm type. The comparative data for 8 cm discs is provided further down. DVD recordable and rewritable HP initially developed recordable DVD media from the need to store data for backup and transport. DVD recordables are now also used for consumer audio and video recording. Three formats were developed: DVD-R/RW, DVD+R/RW (plus), and DVD-RAM. DVD-R is available in two formats, General (650 nm) and Authoring (635 nm), where Authoring discs may be recorded with CSS encrypted video content but General discs may not. Dual-layer recording Dual-layer recording (occasionally called double-layer recording) allows DVD-R and DVD+R discs to store nearly double the data of a single-layer disc: 8.5 GB rather than 4.7 GB. The additional capacity comes at a cost: DVD±DLs have slower write speeds as compared to DVD±R. DVD-R DL was developed for the DVD Forum by Pioneer Corporation; DVD+R DL was developed for the DVD+RW Alliance by Mitsubishi Kagaku Media (MKM) and Philips. Recordable DVD discs supporting dual-layer technology are backward-compatible with some hardware developed before the recordable medium. Capacity All units are expressed with SI prefixes (i.e., 1 gigabyte = 1,000,000,000 bytes). DVD drives and players DVD drives are devices that can read DVD discs on a computer. DVD players are a particular type of device that does not require a computer to work and can read DVD-Video and DVD-Audio discs. Transfer rates Read and write speeds for the first DVD drives and players were 1,385 kB/s (1,353 KiB/s); this speed is usually called "1×". More recent models, at 18× or 20×, have 18 or 20 times that speed. For CD drives, 1× means 153.6 kB/s (150 KiB/s), about one-ninth the DVD rate. DVDs can spin at much higher speeds than CDs – up to 32,000 RPM versus 23,000 RPM for CDs. In practice, optical drives do not spin them anywhere close to these speeds, in order to provide a safety margin. DVD drives limit reading speed to 16× (constant angular velocity), which means 9280 rotations per minute. Early-generation drives released before the mid-2000s have lower limits.
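As a rough illustration of the figures quoted above—my own addition, not part of the original article—the following Python sketch converts the quoted decimal (SI) disc capacities into binary (IEC) units and derives transfer rates and full-disc read times from the 1× base rate; all constants are taken from the text, and the read times assume an idealized sustained transfer with no seek or speed-ramp overhead.

# Decimal (SI) DVD capacities converted to binary (IEC) units, plus transfer
# rates derived from the 1x base rate of 1,385 kB/s quoted above.

DVD_1X_BYTES_PER_SECOND = 1_385_000   # 1x DVD speed
CD_1X_BYTES_PER_SECOND = 153_600      # 1x CD speed, for comparison

def gb_to_gib(gigabytes: float) -> float:
    """Convert decimal gigabytes (10**9 bytes) to binary gibibytes (2**30 bytes)."""
    return gigabytes * 10**9 / 2**30

for capacity_gb in (4.7, 8.5, 17.08):
    print(f"{capacity_gb} GB = {gb_to_gib(capacity_gb):.2f} GiB")
# Prints 4.38 GiB and 7.92 GiB for single- and dual-layer discs, matching the text.

for speed_factor in (1, 16, 20):
    rate = speed_factor * DVD_1X_BYTES_PER_SECOND
    seconds_for_full_disc = 4.7 * 10**9 / rate
    print(f"{speed_factor}x: {rate / 10**6:.2f} MB/s, a 4.7 GB disc in about {seconds_for_full_disc / 60:.1f} minutes")

print(f"DVD 1x is {DVD_1X_BYTES_PER_SECOND / CD_1X_BYTES_PER_SECOND:.1f} times the CD 1x rate")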
DVD recordable and rewritable discs can be read and written using either constant angular velocity (CAV), constant linear velocity (CLV), Partial constant angular velocity (P-CAV) or Zoned Constant Linear Velocity (Z-CLV or ZCLV). Due to the slightly lower data density of dual layer DVDs (4.25 GB instead of 4.7 GB per layer), the rotation speed required for the same data rate is around 10% higher, which means that the same angular speed rating corresponds to a roughly 10% higher physical rotation speed. For that reason, reading speeds for dual layer media have stagnated at 12× (constant angular velocity) for half-height optical drives released since around 2005, and slim type optical drives are only able to record dual layer media at 6× (constant angular velocity), while reading speeds of 8× are still supported by such drives. Disc quality measurements The quality and data integrity of optical media are measurable, which means that future data losses caused by deteriorating media can be predicted well in advance by measuring the rate of correctable data errors. Support for measuring disc quality varies among optical drive vendors and models. DVD-Video DVD-Video is a standard for distributing video/audio content on DVD media. The format went on sale in Japan on November 1, 1996, in the United States on March 24, 1997, to line up with the 69th Academy Awards that day; in Canada, Central America, and Indonesia later in 1997; and in Europe, Australia, and Africa in 1998. DVD-Video became the dominant form of home video distribution in Japan when it first went on sale on November 1, 1996, but it shared the market for home video distribution in the United States for several years; it was June 15, 2003, when weekly DVD-Video rentals in the United States began outnumbering weekly VHS cassette rentals. DVD-Video is still the dominant form of home video distribution worldwide except in Japan, where it was surpassed by Blu-ray Disc when Blu-ray first went on sale in Japan on March 31, 2006. Security The purpose of CSS (Content Scramble System) is twofold: CSS prevents byte-for-byte copies of an MPEG (digital video) stream from being playable since such copies do not include the keys that are hidden on the lead-in area of the restricted DVD. CSS provides a reason for manufacturers to make their devices compliant with an industry-controlled standard, since CSS scrambled discs cannot in principle be played on noncompliant devices; anyone wishing to build compliant devices must obtain a license, which contains the requirement that the rest of the DRM system (region codes, Macrovision, and user operation prohibition) be implemented. Successors and decline In 2006, two new formats called HD DVD and Blu-ray Disc were released as the successor to DVD. HD DVD competed unsuccessfully with Blu-ray Disc in the format war of 2006–2008. A dual layer HD DVD can store up to 30 GB and a dual layer Blu-ray disc can hold up to 50 GB. However, unlike previous format changes, e.g., vinyl to Compact Disc or VHS videotape to DVD, initially there was no immediate indication that production of the standard DVD would gradually wind down, as at the beginning of the 2010s DVDs still dominated, with around 75% of video sales and approximately one billion DVD player sales worldwide as of April 2011.
In fact, experts claimed that the DVD would remain the dominant medium for at least another five years, as Blu-ray technology was still in its introductory phase, with write and read speeds still poor and the necessary hardware expensive and not readily available. Consumers initially were also slow to adopt Blu-ray due to the cost. By 2009, 85% of stores were selling Blu-ray Discs. A high-definition television and appropriate connection cables are also required to take advantage of Blu-ray disc. Some analysts suggested that the biggest obstacle to replacing DVD was its installed base; a large majority of consumers were satisfied with DVDs. DVDs started to face competition from video on demand services around 2015. With increasing numbers of homes having high speed Internet connections, many people had the option to either rent or buy video from an online service, and view it by streaming it directly from that service's servers, meaning they no longer needed any form of permanent storage media for video at all. By 2017, digital streaming services had overtaken the sales of DVDs and Blu-rays for the first time. Until the end of the 2010s, manufacturers continued to release standard DVD titles, and the format remained the preferred one for the release of older television programs and films. Shows that were shot and edited entirely on film, such as Star Trek: The Original Series, could not be released in high definition without being re-scanned from the original film recordings. Shows that were made between the early 1980s and the early 2000s were generally shot on film, then transferred to video tape, and then edited natively in either NTSC or PAL; this makes straightforward high-definition transfers impossible, as these SD standards were baked into the final cuts of the episodes. Star Trek: The Next Generation was the only such show that had a Blu-ray release, as prints were re-scanned and edited from the ground up. By the beginning of the 2020s, sales of DVDs had dropped 86% with respect to their peak around 2005, while on-demand sales and, overall, subscription streaming of TV shows and movies grew by over 1,200%. At its peak, DVD sales represented almost two thirds of the video market in the US; approximately 15 years later, around 2020, they fell to only 10% of the market. By 2022, there was increased demand for high definition media, with Ultra HD Blu-ray and regular Blu-ray formats making up almost half of the US market, while sales of physical media continued to shrink in favor of streaming services. Longevity Longevity of a storage medium is measured by how long the data remains readable, assuming compatible devices exist that can read it: that is, how long the disc can be stored until data is lost. Numerous factors affect longevity: composition and quality of the media (recording and substrate layers), humidity and light storage conditions, the quality of the initial recording (which is sometimes a matter of mutual compatibility of media and recorder), etc. According to NIST, "[a] temperature of 64.4 °F (18 °C) and 40% RH [Relative Humidity] would be considered suitable for long-term storage. A lower temperature and RH is recommended for extended-term storage." As with CDs, the stored information will begin to degrade over time, with most standard DVDs lasting up to 30 years depending on the environment in which they are stored and whether they are full of data.
According to the Optical Storage Technology Association (OSTA), "Manufacturers claim lifespans ranging from 30 to 100 years for DVD, DVD-R and DVD+R discs and up to 30 years for DVD-RW, DVD+RW and DVD-RAM." According to a NIST/LoC research project conducted in 2005–2007 using accelerated life testing, "There were fifteen DVD products tested, including five DVD-R, five DVD+R, two DVD-RW and three DVD+RW types. There were ninety samples tested for each product. ... Overall, seven of the products tested had estimated life expectancies in ambient conditions of more than 45 years. Four products had estimated life expectancies of 30–45 years in ambient storage conditions. Two products had an estimated life expectancy of 15–30 years and two products had estimated life expectancies of less than 15 years when stored in ambient conditions." The life expectancies for 95% survival estimated in this project by type of product are tabulated below: See also Glossary of computer hardware terms Book type Comparison of popular optical data-storage systems Digital video recorder Hard disk drive performance characteristics DVD authoring Ripping DVD region code DVD TV game – Interactive film Professional Disc DVD single – Music video M-DISC Notes References Further reading External links Dvddemystified.com: DVD Frequently Asked Questions and Answers Dual Layer Explained – Informational Guide to the Dual Layer Recording Process 120 mm discs Audiovisual introductions in 1996 Products introduced in 1996 Consumer electronics Digital audio storage Home video Dutch inventions Information technology in Japan Information technology in the Netherlands Japanese inventions Joint ventures Rotating disc computer storage media Science and technology in Japan Science and technology in the Netherlands Video storage Digital media Optical computer storage media
DVD
[ "Technology" ]
4,847
[ "Multimedia", "Digital media" ]
11,015,023
https://en.wikipedia.org/wiki/Selective%20exposure%20theory
Selective exposure is a theory within the practice of psychology, often used in media and communication research, that historically refers to individuals' tendency to favor information which reinforces their pre-existing views while avoiding contradictory information. Selective exposure has also been known and defined as "congeniality bias" or "confirmation bias" in various texts throughout the years. According to the historical use of the term, people tend to select specific aspects of exposed information which they incorporate into their mindset. These selections are made based on their perspectives, beliefs, attitudes, and decisions. People can mentally dissect the information they are exposed to and select favorable evidence, while ignoring the unfavorable. The foundation of this theory is rooted in the cognitive dissonance theory , which asserts that when individuals are confronted with contrasting ideas, certain mental defense mechanisms are activated to produce harmony between new ideas and pre-existing beliefs, which results in cognitive equilibrium. Cognitive equilibrium, which is defined as a state of balance between a person's mental representation of the world and his or her environment, is crucial to understanding selective exposure theory. According to Jean Piaget, when a mismatch occurs, people find it to be "inherently dissatisfying". Selective exposure relies on the assumption that one will continue to seek out information on an issue even after an individual has taken a stance on it. The position that a person has taken will be colored by various factors of that issue that are reinforced during the decision-making process. According to Stroud (2008), theoretically, selective exposure occurs when people's beliefs guide their media selections. Selective exposure has been displayed in various contexts such as self-serving situations and situations in which people hold prejudices regarding outgroups, particular opinions, and personal and group-related issues. Perceived usefulness of information, perceived norm of fairness, and curiosity of valuable information are three factors that can counteract selective exposure. Also of great concern is the theory of "Selective Participation" proposed by Sir Godson David in 2024 This theory suggests that individuals have the ability to selectively participate in certain aspects of events or activities that are most meaningful or important to them, while being fully aware of the consequences of neglecting other aspects. In this theory, individuals may prioritize certain elements of an event based on personal values, interests, or goals, and may choose to invest their time, energy, and resources in these specific areas. They may also make conscious decisions to limit participation in other aspects of the event, recognizing that they cannot engage fully in all aspects simultaneously. By selectively participating in specific aspects of events, individuals can focus on what matters most to them, optimize their resources and efforts in those areas, and compensate for any potential neglect in other areas. This approach may allow individuals to maintain a sense of control, satisfaction, and well-being while navigating complex events or activities. Overall, the theory of Selective Participation emphasizes the importance of intentional decision-making and prioritization in event participation, acknowledging that individuals have the agency to choose where to direct their time and attention based on their individual preferences and goals. 
Effect on decision-making Individual versus group decision-making Selective exposure can often affect the decisions people make as individuals or as groups because they may be unwilling to change their views and beliefs either collectively or on their own, despite conflicting and reliable information. An example of the effects of selective exposure is the series of events leading up to the Bay of Pigs Invasion in 1961. President John F. Kennedy was given the go ahead by his advisers to authorize the invasion of Cuba by poorly trained expatriates despite overwhelming evidence that it was a foolish and ill-conceived tactical maneuver. The advisers were so eager to please the President that they confirmed their cognitive bias for the invasion rather than challenging the faulty plan. Changing beliefs about one's self, other people, and the world are three variables as to why people fear new information. A variety of studies has shown that selective exposure effects can occur in the context of both individual and group decision making. Numerous situational variables have been identified that increase the tendency toward selective exposure. Social psychology, specifically, includes research with a variety of situational factors and related psychological processes that eventually persuade a person to make a quality decision. Additionally, from a psychological perspective, the effects of selective exposure can both stem from motivational and cognitive accounts. Effect of information quantity According to research study by Fischer, Schulz-Hardt, et al. (2008), the quantity of decision-relevant information that the participants were exposed to had a significant effect on their levels of selective exposure. A group for which only two pieces of decision-relevant information were given had experienced lower levels of selective exposure than the other group who had ten pieces of information to evaluate. This research brought more attention to the cognitive processes of individuals when they are presented with a very small amount of decision-consistent and decision-inconsistent information. The study showed that in situations such as this, an individual becomes more doubtful of their initial decision due to the unavailability of resources. They begin to think that there is not enough data or evidence in this particular field in which they are told to make a decision about. Because of this, the subject becomes more critical of their initial thought process and focuses on both decision-consistent and inconsistent sources, thus decreasing his level of selective exposure. For the group who had plentiful pieces of information, this factor made them confident in their initial decision because they felt comfort from the fact that their decision topic was well-supported by a large number of resources. Therefore, the availability of decision-relevant and irrelevant information surrounding individuals can influence the level of selective exposure experienced during the process of decision-making. Selective exposure is prevalent within singular individuals and groups of people and can influence either to reject new ideas or information that is not commensurate with the original ideal. In Jonas et al. (2001) empirical studies were done on four different experiments investigating individuals' and groups' decision making. This article suggests that confirmation bias is prevalent in decision making. Those who find new information often draw their attention towards areas where they hold personal attachment. 
Thus, people are driven toward pieces of information that are coherent with their own expectations or beliefs as a result of this selective exposure occurring in action. Throughout the process of the four experiments, generalization is always considered valid and confirmation bias is always present when seeking new information and making decisions. Accuracy motivation and defense motivation Fischer and Greitemeyer (2010) explored individuals' decision making in terms of selective exposure to confirmatory information. Selective exposure posits that individuals make their decisions based on information that is consistent with their decision rather than information that is inconsistent. Recent research has shown that "Confirmatory Information Search" was responsible for the 2008 bankruptcy of the Lehman Brothers Investment Bank, which then triggered the Global Financial Crisis. In the zeal for profit and economic gain, politicians, investors, and financial advisors ignored the mathematical evidence that foretold the housing market crash in favor of flimsy justifications for upholding the status quo. Researchers explain that subjects have the tendency to seek and select information using their integrative model. There are two primary motivations for selective exposure: Accuracy Motivation and Defense Motivation. Accuracy Motivation explains that an individual is motivated to be accurate in their decision making and Defense Motivation explains that one seeks confirmatory information to support their beliefs and justify their decisions. Accuracy motivation is not always beneficial within the context of selective exposure and can instead be counterintuitive, increasing the amount of selective exposure. Defense motivation can lead to reduced levels of selective exposure. Personal attributes Selective exposure avoids information inconsistent with one's beliefs and attitudes. For example, former Vice President Dick Cheney would only enter a hotel room after the television was turned on and tuned to a conservative television channel. When analyzing a person's decision-making skills, his or her unique process of gathering relevant information is not the only factor taken into account. Fischer et al. (2010) found it important to consider the information source itself, otherwise explained as the physical being that provided the source of information. Selective exposure research generally neglects the influence of indirect decision-related attributes, such as physical appearance. In Fischer et al. (2010), two studies hypothesized that physically attractive information sources led decision makers to be more selective in searching and reviewing decision-relevant information. Researchers explored the impact of social information and its level of physical attractiveness. The data was then analyzed and used to support the idea that selective exposure existed for those who needed to make a decision. Therefore, the more attractive an information source was, the more positive and detailed the subject was with making the decision. Physical attractiveness affects an individual's decision because the perception of quality improves. Physically attractive information sources increased the quality of consistent information needed to make decisions and further increased the selective exposure in decision-relevant information, supporting the researchers' hypothesis. Both studies concluded that attractiveness is driven by a different selection and evaluation of decision-consistent information.
Decision makers allow factors such as physical attractiveness to affect everyday decisions due to the works of selective exposure. In another study, selective exposure was defined by the amount of individual confidence. Individuals can control the amount of selective exposure depending on whether they have a low self-esteem or high self-esteem. Individuals who maintain higher confidence levels reduce the amount of selective exposure. Albarracín and Mitchell (2004) hypothesized that those who displayed higher confidence levels were more willing to seek out information both consistent and inconsistent with their views. The phrase "decision-consistent information" explains the tendency to actively seek decision-relevant information. Selective exposure occurs when individuals search for information and show systematic preferences towards ideas that are consistent, rather than inconsistent, with their beliefs. On the contrary, those who exhibited low levels of confidence were more inclined to examine information that did not agree with their views. The researchers found that in three out of five studies participants showed more confidence and scored higher on the Defensive Confidence Scale, which serves as evidence that their hypothesis was correct. Bozo et al. (2009) investigated the anxiety of fearing death and compared it to various age groups in relation to health-promoting behaviors. Researchers analyzed the data by using the terror management theory and found that age had no direct effect on specific behaviors. The researchers thought that a fear of death would yield health-promoting behaviors in young adults. When individuals are reminded of their own death, it causes stress and anxiety, but eventually leads to positive changes in their health behaviors. Their conclusions showed that older adults were consistently better at promoting and practicing good health behaviors, without thinking about death, compared to young adults. Young adults were less motivated to change and practice health-promoting behaviors because they used the selective exposure to confirm their prior beliefs. Selective exposure thus creates barriers between the behaviors in different ages, but there is no specific age at which people change their behaviors. Though physical appearance will impact one's personal decision regarding an idea presented, a study conducted by Van Dillen, Papies, and Hofmann (2013) suggests a way to decrease the influence of personal attributes and selective exposure on decision-making. The results from this study showed that people do pay more attention to physically attractive or tempting stimuli; however, this phenomenon can be decreased through increasing the "cognitive load." In this study, increasing cognitive activity led to a decreased impact of physical appearance and selective exposure on the individual's impression of the idea presented. This is explained by acknowledging that we are instinctively drawn to certain physical attributes, but if the required resources for this attraction are otherwise engaged at the time, then we might not notice these attributes to an equal extent. For example, if a person is simultaneously engaging in a mentally challenging activity during the time of exposure, then it is likely that less attention will be paid to appearance, which leads to a decreased impact of selective exposure on decision-making. 
Theories accounting for selective exposure Cognitive dissonance theory Leon Festinger is widely considered the father of modern social psychology and as important a figure to that field as Freud was to clinical psychology and Piaget was to developmental psychology. He was considered to be one of the most significant social psychologists of the 20th century. His work demonstrated that it is possible to use the scientific method to investigate complex and significant social phenomena without reducing them to the mechanistic connections between stimulus and response that were the basis of behaviorism. Festinger proposed the groundbreaking theory of cognitive dissonance that has become the foundation of selective exposure theory today, despite the fact that Festinger was considered an "avant-garde" psychologist when he first proposed it in 1957. In an ironic twist, Festinger realized that he himself was a victim of the effects of selective exposure. He was a heavy smoker his entire life and when he was diagnosed with terminal cancer in 1989, he was said to have joked, "Make sure that everyone knows that it wasn't lung cancer!" Cognitive dissonance theory explains that when a person either consciously or unconsciously realizes conflicting attitudes, thoughts, or beliefs, they experience mental discomfort. Because of this, an individual will avoid such conflicting information in the future since it produces this discomfort, and they will gravitate towards messages sympathetic to their own previously held conceptions. Decision makers are unable to evaluate information quality independently on their own (Fischer, Jonas, Dieter & Kastenmüller, 2008). When there is a conflict between pre-existing views and information encountered, individuals will experience an unpleasant and self-threatening state of aversive-arousal which will motivate them to reduce it through selective exposure. They will begin to prefer information that supports their original decision and neglect conflicting information. Individuals will then seek out confirmatory information to defend their positions and reach the goal of dissonance reduction. Cognitive dissonance theory insists that dissonance is a psychological state of tension that people are motivated to reduce. Dissonance causes feelings of unhappiness, discomfort, or distress. Festinger asserted the following: "These two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other." To reduce dissonance, people add consonant cognition or change evaluations for one or both conditions in order to make them more consistent mentally. Such experience of psychological discomfort was found to drive individuals to avoid counterattitudinal information as a dissonance-reduction strategy. In Festinger's theory, there are two basic hypotheses: 1) The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce the dissonance and achieve consonance. 2) When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance. The theory of cognitive dissonance was developed in the mid-1950s to explain why people of strong convictions are so resistant to changing their beliefs even in the face of undeniable contradictory evidence. It occurs when people feel an attachment to and responsibility for a decision, position or behavior.
It increases the motivation to justify their positions through selective exposure to confirmatory information (Fischer, 2011). Fischer suggested that people have an inner need to ensure that their beliefs and behaviors are consistent. In an experiment that employed commitment manipulations, it impacted perceived decision certainty. Participants were free to choose attitude-consistent and inconsistent information to write an essay. Those who wrote an attitude-consistent essay showed higher levels of confirmatory information search (Fischer, 2011). The levels and magnitude of dissonance also play a role. Selective exposure to consistent information is likely under certain levels of dissonance. At high levels, a person is expected to seek out information that increases dissonance because the best strategy to reduce dissonance would be to alter one's attitude or decision (Smith et al., 2008). Subsequent research on selective exposure within the dissonance theory produced weak empirical support until the dissonance theory was revised and new methods, more conducive to measuring selective exposure, were implemented. To date, scholars still argue that empirical results supporting the selective exposure hypothesis are still mixed. This is possibly due to the problems with the methods of the experimental studies conducted. Another possible reason for the mixed results may be the failure to simulate an authentic media environment in the experiments. According to Festinger, the motivation to seek or avoid information depends on the magnitude of dissonance experienced (Smith et al., 2008). It is observed that there is a tendency for people to seek new information or select information that supports their beliefs in order to reduce dissonance. There exist three possibilities which will affect extent of dissonance : Relative absence of dissonance. When little or no dissonance exists, there is little or no motivation to seek new information. For example, when there is an absence of dissonance, the lack of motivation to attend or avoid a lecture on 'The Advantages of Automobiles with Very High Horsepower Engines' will be independent of whether the car a new owner has recently purchased has a high or low horsepower engine. However, it is important to note the difference between a situation when there is no dissonance and when the information has no relevance to the present or future behavior. For the latter, accidental exposure, which the new car owner does not avoid, will not introduce any dissonance; while for the former individual, who also does not avoid information, dissonance may be accidentally introduced. The presence of moderate amounts of dissonance. The existence of dissonance and consequent pressure to reduce it will lead to an active search of information, which will then lead people to avoid information that will increase dissonance. However, when faced with a potential source of information, there will be an ambiguous cognition to which a subject will react in terms of individual expectations about it. If the subject expects the cognition to increase dissonance, they will avoid it. In the event that one's expectations are proven wrong, the attempt at dissonance reduction may result in increasing it instead. It may in turn lead to a situation of active avoidance. The presence of extremely large amounts of dissonance. If two cognitive elements exist in a dissonant relationship, the magnitude of dissonance matches the resistance to change. 
If the dissonance becomes greater than the resistance to change, then the least resistant elements of cognition will be changed, reducing dissonance. When dissonance is close to the maximum limit, one may actively seek out and expose oneself to dissonance-increasing information. If an individual can increase dissonance to the point where it is greater than the resistance to change, he will change the cognitive elements involved, reducing or even eliminating dissonance. Once dissonance is increased sufficiently, an individual may bring himself to change, hence eliminating all dissonance . The reduction in cognitive dissonance following a decision can be achieved by selectively looking for decision-consonant information and avoiding contradictory information. The objective is to reduce the discrepancy between the cognitions, but the specification of which strategy will be chosen is not explicitly addressed by the dissonance theory. It will be dependent on the quantity and quality of the information available inside and outside the cognitive system. Klapper's selective exposure In the early 1960s, Columbia University researcher Joseph T. Klapper asserted in his book The Effects Of Mass Communication that audiences were not passive targets of political and commercial propaganda from mass media but that mass media reinforces previously held convictions. Throughout the book, he argued that the media has a small amount of power to influence people and, most of the time, it just reinforces our preexisting attitudes and beliefs. He argued that the media effects of relaying or spreading new public messages or ideas were minimal because there is a wide variety of ways in which individuals filter such content. Due to this tendency, Klapper argued that media content must be able to ignite some type of cognitive activity in an individual in order to communicate its message. Prior to Klapper's research, the prevailing opinion was that mass media had a substantial power to sway individual opinion and that audiences were passive consumers of prevailing media propaganda. However, by the time of the release of The Effects of Mass Communication, many studies led to a conclusion that many specifically targeted messages were completely ineffective. Klapper's research showed that individuals gravitated towards media messages that bolstered previously held convictions that were set by peer groups, societal influences, and family structures and that the accession of these messages over time did not change when presented with more recent media influence. Klapper noted from the review of research in the social science that given the abundance of content within the mass media, audiences were selective to the types of programming that they consumed. Adults would patronize media that was appropriate for their demographics and children would eschew media that was boring to them. So individuals would either accept or reject a mass media message based upon internal filters that were innate to that person. The following are Klapper's five mediating factors and conditions to affect people: Predispositions and the related processes of selective exposure, selective perception, and selective retention. The groups, and the norms of groups, to which the audience members belong. Interpersonal dissemination of the content of communication The exercise of opinion leadership The nature of mass media in a free enterprise society. Three basic concepts: Selective exposure – people keep away from communication of opposite hue. 
Selective perception – If people are confronting unsympathetic material, they do not perceive it, or make it fit for their existing opinion. Selective retention – refers to the process of categorizing and interpreting information in a way that favors one category or interpretation over another. Furthermore, they just simply forget the unsympathetic material. Groups and group norms work as mediators. For example, one can be strongly disinclined to change to the Democratic Party if their family has voted Republican for a long time. In this case, the person's predisposition to the political party is already set, so they don't perceive information about Democratic Party or change voting behavior because of mass communication. Klapper's third assumption is inter-personal dissemination of mass communication. If someone is already exposed by close friends, which creates predisposition toward something, it will lead to an increase in exposure to mass communication and eventually reinforce the existing opinion. An opinion leader is also a crucial factor to form one's predisposition and can lead someone to be exposed by mass communication. The nature of commercial mass media also leads people to select certain types of media contents. Cognitive economy model This new model combines the motivational and cognitive processes of selective exposure. In the past, selective exposure had been studied from a motivational standpoint. For instance, the reason behind the existence of selective exposure was that people felt motivated to decrease the level of dissonance they felt while encountering inconsistent information. They also felt motivated to defend their decisions and positions, so they achieved this goal by exposing themselves to consistent information only. However, the new cognitive economy model not only takes into account the motivational aspects, but it also focuses on the cognitive processes of each individual. For instance, this model proposes that people cannot evaluate the quality of inconsistent information objectively and fairly because they tend to store more of the consistent information and use this as their reference point. Thus, inconsistent information is often observed with a more critical eye in comparison to consistent information. According to this model, the levels of selective exposure experienced during the decision-making process are also dependent on how much cognitive energy people are willing to invest. Just as people tend to be careful with their finances, cognitive energy or how much time they are willing to spend evaluating all the evidence for their decisions works the same way. People are hesitant to use this energy; they tend to be careful so they don't waste it. Thus, this model suggests that selective exposure does not happen in separate stages. Rather, it is a combined process of the individuals' certain acts of motivations and their management of the cognitive energy. Implications Media Recent studies have shown relevant empirical evidence for the pervasive influence of selective exposure on the greater population at large due to mass media. Researchers have found that individual media consumers will seek out programs to suit their individual emotional and cognitive needs. Individuals will seek out palliative forms of media during the recent times of economic crisis to fulfill a "strong surveillance need" and to decrease chronic dissatisfaction with life circumstances as well as fulfill needs for companionship. 
Consumers tend to select media content that exposes and confirms their own ideas while avoiding information that argues against their opinion. A study conducted in 2012 has shown that this type of selective exposure affects pornography consumption as well. Individuals with low levels of life satisfaction are more likely to have casual sex after consumption of pornography that is congruent with their attitudes while disregarding content that challenges their inherently permissive 'no strings attached' attitudes. Music selection is also affected by selective exposure. A 2014 study conducted by Christa L. Taylor and Ronald S. Friedman at the University at Albany, SUNY, found that mood congruence was affected by self-regulation of music mood choices. Subjects in the study chose happy music when feeling angry or neutral but listened to sad music when they themselves were sad. The choice of sad music given a sad mood was due less to mood-mirroring than to subjects having an aversion to listening to happy music that was cognitively dissonant with their mood. Politics are more likely to inspire selective exposure among consumers as opposed to single exposure decisions. For example, in their 2009 meta-analysis of Selective Exposure Theory, Hart et al. reported that "A 2004 survey by The Pew Research Center for the People & the Press (2006) found that Republicans are about 1.5 times more likely to report watching Fox News regularly than are Democrats (34% for Republicans and 20% of Democrats). In contrast, Democrats are 1.5 times more likely to report watching CNN regularly than Republicans (28% of Democrats vs. 19% of Republicans). Even more striking, Republicans are approximately five times more likely than Democrats to report watching "The O'Reilly Factor" regularly and are seven times more likely to report listening to "Rush Limbaugh" regularly." As a result, when the opinions of Republicans who only tune into conservative media outlets were compared to those of their fellow conservatives in a study by Stroud (2010), their beliefs were considered to be more polarized. The same result was found in the study of liberals as well. Due to our greater tendency toward selective exposure, current political campaigns have been characterized as being extremely partisan and polarized. As Bennett and Iyengar (2008) commented, "The new, more diversified information environment makes it not only more feasible for consumers to seek out news they might find agreeable but also provides a strong economic incentive for news organizations to cater to their viewers' political preferences." Selective exposure thus plays a role in shaping and reinforcing individuals' political attitudes. In the context of these findings, Stroud (2008) comments "The findings presented here should at least raise the eyebrows of those concerned with the noncommercial role of the press in our democratic system, with its role in providing the public with the tools to be good citizens." The role of public broadcasting, through its noncommercial role, is to counterbalance media outlets that deliberately devote their coverage to one political direction, thus driving selective exposure and political division in a democracy. Many academic studies on selective exposure, however, are based on the electoral system and media system of the United States. Countries with strong public service broadcasting, like many European countries, on the other hand, have less selective exposure based on political ideology or political party.
In Sweden, for instance, there were no differences in selective exposure to public service news between the political left and right over a period of 30 years. In early research, selective exposure originally provided an explanation for limited media effects. The "limited effects" model of communication emerged in the 1940s with a shift in the media effects paradigm. This shift suggested that while the media has effects on consumers' behavior, such as their voting behavior, these effects are limited and influenced indirectly by interpersonal discussions and the influence of opinion leaders. Selective exposure was considered one necessary function in the early studies of media's limited power over citizens' attitudes and behaviors. Political ads deal with selective exposure as well because people are more likely to favor a politician that agrees with their own beliefs. Another significant effect of selective exposure comes from Stroud (2010), who analyzed the relationship between partisan selective exposure and political polarization. Using data from the 2004 National Annenberg Election Survey, analysts found that over time partisan selective exposure leads to polarization. This process is plausible because people can easily create or have access to blogs, websites, chats, and online forums where those with similar views and political ideologies can congregate. Much of the research has also shown that political interaction online tends to be polarized. Further evidence for this polarization in the political blogosphere can be found in Lawrence et al.'s (2010) study on blog readership, which found that people tend to read blogs that reinforce rather than challenge their political beliefs. According to Cass Sunstein's book, Republic.com, the presence of selective exposure on the web creates an environment that breeds political polarization and extremism. Due to easy access to social media and other online resources, people are "likely to hold even stronger views than the ones they started with, and when these views are problematic, they are likely to manifest increasing hatred toward those espousing contrary beliefs." This illustrates how selective exposure can influence an individual's political beliefs and subsequently his participation in the political system. One of the major academic debates on the concept of selective exposure is whether selective exposure contributes to people's exposure to diverse viewpoints or to polarization. Scheufele and Nisbet (2012) discuss the effects of encountering disagreement on democratic citizenship. Ideally, true civil deliberation among citizens would be the rational exchange of non-like-minded views (or disagreement). However, many of us tend to avoid disagreement on a regular basis because we do not like confrontations with others who hold views that are strongly opposed to our own. In this sense, the authors question whether exposure to non-like-minded information brings either positive or negative effects on democratic citizenship. While there are mixed findings on people's willingness to participate in the political processes when they encounter disagreement, the authors argue that the issue of selectivity needs to be further examined in order to understand whether there is a truly deliberative discourse in the online media environment. See also Cherry picking Selection bias Voldemort effect References Bibliography Media studies Sociology of technology
Selective exposure theory
[ "Technology" ]
6,299
[ "nan" ]
11,015,555
https://en.wikipedia.org/wiki/Finite%20element%20exterior%20calculus
Finite element exterior calculus (FEEC) is a mathematical framework that formulates finite element methods using chain complexes. Its main application has been a comprehensive theory for finite element methods in computational electromagnetism, computational solid and fluid mechanics. FEEC was developed in the early 2000s by Douglas N. Arnold, Richard S. Falk and Ragnar Winther, among others. Finite element exterior calculus is sometimes cited as an example of a compatible discretization technique, and bears similarities with discrete exterior calculus, although they are distinct theories. One starts with the recognition that the differential operators used are often part of complexes: successive application results in zero. The differential operators of the relevant differential equations, together with the relevant boundary conditions, are then phrased in terms of a Hodge Laplacian. The Hodge Laplacian terms are split using the Hodge decomposition. A related variational saddle-point formulation for mixed quantities is then generated. Discretization to a mesh-related subcomplex is done by requiring a collection of projection operators which commute with the differential operators. One can then prove uniqueness and optimal convergence as a function of mesh density. FEEC is of immediate relevance for diffusion, elasticity, electromagnetism, and Stokes flow. For the important de Rham complex, pertaining to the grad, curl and div operators, suitable families of elements have been generated not only for tetrahedra, but also for other element shapes such as bricks. Moreover, prism- and pyramid-shaped elements conforming with them have also been generated; for the latter, uniquely, the shape functions are not polynomial. The quantities are 0-forms (scalars), 1-forms (gradients), 2-forms (fluxes), and 3-forms (densities). Diffusion, electromagnetism, elasticity, Stokes flow, general relativity, and in fact all known complexes can be phrased in terms of the de Rham complex. For the Navier–Stokes equations, there may be possibilities too. References Finite element method
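As a concrete illustration of the complex structure described above—standard background material rather than a formula taken from the article, with the function-space notation being one common choice—the three-dimensional de Rham complex underlying the grad, curl and div elements can be written as

\[
0 \longrightarrow H^{1}(\Omega) \xrightarrow{\ \operatorname{grad}\ } H(\operatorname{curl};\Omega) \xrightarrow{\ \operatorname{curl}\ } H(\operatorname{div};\Omega) \xrightarrow{\ \operatorname{div}\ } L^{2}(\Omega) \longrightarrow 0 ,
\]

with the complex property \(\operatorname{curl}\circ\operatorname{grad} = 0\) and \(\operatorname{div}\circ\operatorname{curl} = 0\). In the language of differential forms, with exterior derivative \(d\) and codifferential \(\delta\), the abstract Hodge Laplacian on \(k\)-forms reads

\[
L u = (d\delta + \delta d)\, u ,
\]

and the associated mixed saddle-point problem for \(Lu = f\) (ignoring harmonic forms) seeks \(\sigma\) and \(u\) such that

\[
\langle \sigma, \tau \rangle - \langle u, d\tau \rangle = 0 \quad \text{and} \quad \langle d\sigma, v \rangle + \langle du, dv \rangle = \langle f, v \rangle
\]

for all test functions \(\tau\) and \(v\); finite element exterior calculus discretizes this formulation on subcomplexes of piecewise-polynomial differential forms.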
Finite element exterior calculus
[ "Mathematics" ]
411
[ "Applied mathematics", "Applied mathematics stubs" ]
11,015,826
https://en.wikipedia.org/wiki/Blu-ray
Blu-ray (Blu-ray Disc or BD) is a digital optical disc data storage format designed to supersede the DVD format. It was invented and developed in 2005 and released worldwide on June 20, 2006, capable of storing several hours of high-definition video (HDTV 720p and 1080p). The main application of Blu-ray is as a medium for video material such as feature films and for the physical distribution of video games for the PlayStation 3, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X. The name refers to the blue laser (actually a violet laser) used to read the disc, which allows information to be stored at a greater density than is possible with the longer-wavelength red laser used for DVDs, resulting in an increased capacity. The polycarbonate disc is 120 mm in diameter and 1.2 mm thick, the same size as DVDs and CDs. Conventional (or "pre-BD-XL") Blu-ray discs contain 25GB per layer, with dual-layer discs (50GB) being the industry standard for feature-length video discs. Triple-layer discs (100GB) and quadruple-layer discs (128GB) are available for BD-XL re-writer drives. While the DVD-Video specification has a maximum resolution of 480p (NTSC, 720 × 480 pixels) or 576p (PAL, 720 × 576 pixels), the initial specification for storing movies on Blu-ray discs defined a maximum resolution of 1080p (1920 × 1080 pixels) at up to 24 progressive or 29.97 interlaced frames per second. Revisions to the specification allowed newer Blu-ray players to support videos at additional resolutions and frame rates, with Ultra HD Blu-ray players extending the maximum resolution to 4K (3840 × 2160 pixels) and progressive frame rates up to 60 frames per second. Aside from an 8K resolution (7680 × 4320 pixels) Blu-ray format exclusive to Japan, videos with non-standard resolutions must use letterboxing to conform to a resolution supported by the Blu-ray specification. Besides these hardware specifications, Blu-ray is associated with a set of multimedia formats. Given that Blu-ray discs can contain ordinary computer files, there is no fixed limit as to which resolution of video can be stored when not conforming to the official specifications. The BD format was developed by the Blu-ray Disc Association, a group representing makers of consumer electronics, computer hardware, and motion pictures. Sony unveiled the first Blu-ray Disc prototypes in October 2000, and the first prototype player was released in Japan in April 2003. Afterward, it continued to be developed until its official worldwide release on June 20, 2006, beginning the high-definition optical disc format war, where Blu-ray Disc competed with the HD DVD format. Toshiba, the main company supporting HD DVD, conceded in February 2008, and later released its own Blu-ray Disc player in late 2009. According to Media Research, high-definition software sales in the United States were slower in the first two years than DVD software sales. Blu-ray's competition includes video on demand (VOD) and DVD. In January 2016, 44% of U.S. broadband households had a Blu-ray player. History Early history The information density of the DVD format was limited by the wavelength of the laser diodes used. Following protracted development, blue laser diodes operating at 405 nanometers became available on a production basis, allowing for the development of a denser storage format that could hold higher-definition media, with prototype discs made with diodes at a slightly longer wavelength of 407 nanometers in October 1998.
Sony commenced two projects in collaboration with Panasonic, Philips, and TDK, applying the new diodes: UDO (Ultra Density Optical), and DVR Blue (together with Pioneer), a format of rewritable discs that would eventually become Blu-ray Disc (more specifically, BD-RE). The core technologies of the formats are similar. The first DVR Blue prototypes were unveiled by Sony at the CEATEC exhibition in October 2000. A trademark for the "Blue Disc" logo was filed on February 9, 2001. On February 19, 2002, the project was officially announced as Blu-ray Disc, and Blu-ray Disc Founders was founded by the nine initial members. The first consumer device arrived in stores on April 10, 2003: the Sony BDZ-S77, a US$3,800 BD-RE recorder that was made available only in Japan. However, there was no standard for pre-recorded video, and no movies were released for this player. Hollywood studios insisted that players be equipped with digital rights management before they would release movies for the new format, and they wanted a new DRM system that would protect more against unauthorized copying than the failed Content Scramble System (CSS) used on DVDs. On October 4, 2004, the name Blu-ray Disc Founders was officially changed to the Blu-ray Disc Association (BDA), and 20th Century Fox joined the BDA's Board of Directors. The Blu-ray Disc physical specifications were completed in 2004. The recording layer on which the data is stored lies under a thin protective layer and on top of a substrate made of polycarbonate plastic, whereas on DVDs the data layer is sandwiched in the middle of the disc between two substrates. Sony also announced in April 2004 a version using paper as the substrate developed with Toppan Printing, with up to 25GB storage. In January 2005, TDK announced that it had developed an ultra-hard yet very thin polymer coating ("Durabis") for Blu-ray Discs; this was a significant technical advance because a far tougher protection was desired in the consumer market to protect bare discs against scratching and damage compared to DVD, given that Blu-ray Discs technically required a much thinner layer for the denser and higher-frequency blue laser. Cartridges, originally used for scratch protection, were no longer necessary and were scrapped. The BD-ROM specifications were finalized in early 2006. Advanced Access Content System Licensing Administrator (AACS LA), a consortium founded in 2004, had been developing the DRM platform that could be used to distribute movies to consumers while preventing copying. However, the final AACS standard was delayed, and then delayed again when an important member of the Blu-ray Disc group voiced concerns. At the request of the initial hardware manufacturers, including Toshiba, Pioneer, and Samsung, an interim standard was published that did not include some features, such as managed copy, which would have let end users create copies limited to personal use. Launch and sales developments The first BD-ROM players (Samsung BD-P1000) were shipped in mid-June 2006, though HD DVD players beat them to market by a few months. The first Blu-ray Disc titles were released on June 20, 2006: 50 First Dates, The Fifth Element, Hitch, House of Flying Daggers, Underworld: Evolution, xXx (all from Sony), and MGM's The Terminator. The earliest releases used MPEG-2 video compression, the same method used on standard DVDs. The first releases using the newer VC-1 and AVC formats were introduced in September 2006. The first movies using 50GB dual-layer discs were introduced in October 2006. The first audio-only albums were released in May 2008.
By June 2008, over 2,500 Blu-ray Disc titles were available in Australia and the United Kingdom, with 3,500 in the United States and Canada. In Japan, over 3,300 titles had been released as of July 2010. Competition from HD DVD The DVD Forum, chaired by Toshiba, was split over whether to develop the more expensive blue laser technology. In March 2002 the forum approved a proposal, which was endorsed by Warner Bros. and other motion picture studios. The proposal involved compressing high-definition video onto dual-layer standard DVD-9 discs. In spite of this decision, however, the DVD Forum's Steering Committee announced in April that it was pursuing its own blue-laser high-definition video solution. In August, Toshiba and NEC announced their competing standard, the Advanced Optical Disc. It was finally adopted by the DVD Forum and renamed HD DVD the next year, after being voted down twice by DVD Forum members who were also Blu-ray Disc Association members—a situation that drew preliminary investigations by the U.S. Department of Justice. HD DVD had a head start in the high-definition video market, as Blu-ray Disc sales were slow to gain market share. The first Blu-ray Disc player was perceived as expensive and buggy, and there were few titles available. The Sony PlayStation 3, which contained a Blu-ray Disc player for primary storage, helped support Blu-ray. Sony also ran a more thorough and influential marketing campaign for the format. AVCHD camcorders were also introduced in 2006. These recordings can be played back on many Blu-ray Disc players without re-encoding but are not compatible with HD DVD players. By January 2007, Blu-ray Discs had outsold HD DVDs, and during the first three quarters of 2007, BD outsold HD DVD by about two to one. At CES 2007, Warner proposed Total Hi Def—a hybrid disc containing Blu-ray on one side and HD DVD on the other, but it was never released. On June 28, 2007, 20th Century Fox cited Blu-ray Discs' adoption of the BD+ anticopying system as key to their decision to support the Blu-ray Disc format. On January 4, 2008, a day before CES 2008, Warner Bros., the only major studio still releasing movies in both HD DVD and Blu-ray Disc format, announced that it would release only in Blu-ray after May 2008. This effectively included other studios that came under the Warner umbrella, such as New Line Cinema and HBO—though in Europe, HBO's distribution partner, the BBC, announced it would continue to release product on both formats while keeping an eye on market forces. This led to a chain reaction in the industry, with major American retailers such as Best Buy, Walmart, and Circuit City and Canadian chains such as Future Shop dropping HD DVD in their stores. Woolworths, then a major European retailer, dropped HD DVD from its inventory. Major DVD rental companies Netflix and Blockbuster said they would no longer carry HD DVD. Following these new developments, on February 19, 2008, Toshiba announced it would end production of HD DVD devices, allowing Blu-ray Disc to become the industry standard for high-density optical discs. Universal Studios, the sole major studio to back HD DVD since its inception, said shortly after Toshiba's announcement: "While Universal values the close partnership we have shared with Toshiba, it is time to turn our focus to releasing new and catalog titles on Blu-ray Disc." Paramount Pictures, which started releasing movies only in HD DVD format during late 2007, also said it would start releasing on Blu-ray Disc. 
Both studios announced initial Blu-ray lineups in May 2008. With this, all major Hollywood studios supported Blu-ray. Ongoing development 2005–2010 Although the Blu-ray Disc specification has been finalized, engineers continue to work on advancing the technology. By 2005, quad-layer (100GB) discs had been demonstrated on a drive with modified optics and standard unaltered optics. Hitachi stated that such a disc could be used to store 7hours of video (HDTV) or 3hours and 30minutes of video (ultra-high-definition television). In April 2006, TDK canceled plans to produce 8-layer 200GB Blu-ray Discs. In August 2006, TDK announced that it had created a working experimental Blu-ray Disc capable of holding 200GB of data on a single side, using six 33GB data layers. In 2007, Hitachi was reported to have plans to produce 200GB discs by 2009. Behind closed doors at CES 2007, Ritek revealed that it had successfully developed a high-definition optical disc process that extended the disc capacity to ten layers, increasing the capacity of the discs to 250GB. However, it noted the major obstacle was that current read/write technology did not allow additional layers. JVC developed a three-layer technology that allows putting both standard-definition DVD data and HD data on a BD/(standard) DVD combination. This would have enabled the consumer to purchase a disc that can be played on DVD players and can also reveal its HD version when played on a BD player. Japanese optical disc manufacturer Infinity announced the first "hybrid" Blu-ray Disc/(standard) DVD combo, to be released on February 18, 2009. This disc set of the TV series Code Blue featured four hybrid discs containing a single Blu-ray Disc layer (25GB) and two DVD layers (9GB) on the same side of the disc. In January 2007, Hitachi showcased a 100GB Blu-ray Disc, consisting of four layers containing 25GB each. It claimed that, unlike TDK's and Panasonic's 100GB discs, this disc would be readable on standard Blu-ray Disc drives that were currently in circulation, and it was believed that a firmware update was the only requirement to make it readable by then-current players and drives. In October 2007, they revealed a 100GB Blu-ray Disc drive. In December 2008, Pioneer Corporation unveiled a 400GB Blu-ray Disc (containing 16 data layers, 25GB each) compatible with current players after a firmware update. Its planned launch was in the 2009–10 time frame for ROM and 2010–13 for rewritable discs. Ongoing development was underway to create a 1TB Blu-ray Disc. In October 2009, TDK demonstrated a 10-layer 320GB Blu-ray Disc. At CES 2009, Panasonic unveiled the DMP-B15, the first portable Blu-ray Disc player, and Sharp introduced the LC-BD60U and LC-BD80U series, the first LCD HDTVs with integrated Blu-ray Disc players. Sharp also announced that it would sell HDTVs with integrated Blu-ray Disc recorders in the United States by the end of 2009. Set-top box recorders were not being sold in the U.S. for fear of unauthorized copying. However, personal computers with Blu-ray recorder drives were available. On January 1, 2010, Sony, in association with Panasonic, announced plans to increase the storage capacity on their Blu-ray Discs from 25GB to 33.4GB via a technology called i-MLSE (maximum likelihood sequence estimation). The higher-capacity discs, according to Sony, would be readable on existing Blu-ray Disc players with a firmware upgrade. This technology was later used on BDXL discs. 
On July 20, 2010, the research team of Sony and Japanese Tohoku University announced the joint development of a blue-violet laser, to help create Blu-ray Discs with a capacity of 1TB using only two layers (and potentially more than 1TB with additional layering). By comparison, the first blue laser was invented in 1996, with the first prototype discs coming four years later. 2011–2015 On January 7, 2013, Sony announced that it would release "Mastered in 4K" Blu-ray Disc titles sourced at 4K and encoded at 1080p. "Mastered in 4K" Blu-ray Disc titles can be played on existing Blu-ray Disc players and have a larger color space using xvYCC. On January 14, 2013, Blu-ray Disc Association president Andy Parsons stated that a task force was created three months prior to conduct a study concerning an extension to the Blu-ray Disc specification that would add the ability to contain 4K UHD video. On August 5, 2015, the BDA announced it would commence licensing the Ultra HD Blu-ray video format starting on August 24, 2015. The Ultra HD Blu-ray format delivered support for high dynamic range video that significantly expanded the range between the brightest and darkest elements, an expanded color range, a high frame rate of up to 60 frames per second for a smoother motion appearance, an increase of the supported resolution to for a more detailed picture, object-based sound formats, and an optional "digital bridge" feature. New players were required to play this format, and they became able to play all three of DVDs, traditional Blu-rays, and the new format. New Ultra HD Blu-ray Discs hold up to 66GB and 100GB of data on dual- and triple-layer discs, respectively. Blu-ray's physical and file system specifications are publicly available on the BDA's website. Future scope and market trends According to Media Research, high-definition software sales in the United States were slower in the first two years than DVD software sales. 16.3 million DVD software units were sold in the first two years (1997–1998) compared to 8.3 million high-definition software units (2006–2007). One reason given for this difference was the smaller marketplace (26.5 million HDTVs in 2007 compared to 100 million SDTVs in 1998). Former HD DVD supporter Microsoft did not make a Blu-ray Disc drive for the Xbox 360. The 360's successor Xbox One features a Blu-ray drive, as does the PS4, with both supporting 3D Blu-ray after later firmware updates. Shortly after the "format war" ended, Blu-ray Disc sales began to increase. A study by the NPD Group found that awareness of Blu-ray Disc had reached 60% of households in the United States. Nielsen VideoScan sales numbers showed that for some titles, such as 20th Century Fox's Hitman, up to 14% of total disc sales were from Blu-ray, although the average Blu-ray sales for the first half of the year were only around 5%. In December 2008, the Blu-ray Disc version of Warner Bros.' The Dark Knight sold 600,000 copies on the first day of its launch in the United States, Canada, and the United Kingdom. A week after the launch, The Dark Knight BD had sold over 1.7 million copies worldwide, making it the first Blu-ray Disc title to sell over a million copies in the first week of release. According to Singulus Technologies AG, Blu-ray was adopted faster than the DVD format was at a similar period in its development. 
This conclusion was based on the fact that Singulus Technologies received orders for 21 Blu-ray dual-layer replication machines during the first quarter of 2008, while 17 DVD replication machines of this type were made in the same period in 1997. According to GfK Retail and Technology, in the first week of November 2008, sales of Blu-ray recorders surpassed DVD recorders in Japan. According to the Digital Entertainment Group, the number of Blu-ray Disc playback devices (both set-top box and game console) sold in the United States had reached 28.5 million by the end of 2010. Blu-ray faces competition from video on demand and from new technologies that allow access to movies on any format or device, such as Digital Entertainment Content Ecosystem or Disney's Keychest. Some commentators suggested that renting Blu-ray would play a vital part in keeping the technology affordable while allowing it to move forward. In an effort to increase sales, studios began releasing films in combo packs with Blu-ray Discs and DVDs, as well as digital copies that can be played on computers and mobile devices. Some are released on "flipper" discs with Blu-ray on one side and DVD on the other. Other strategies are to release movies with the special features only on Blu-ray Discs and none on DVDs. Blu-ray Discs cost no more to produce than DVD discs. However, reading and writing mechanisms are more complicated, making Blu-ray recorders, drives and players more expensive than their DVD counterparts. Adoption is also limited due to the widespread use of streaming media. Blu-ray Discs are used to distribute PlayStation 3, PlayStation 4, PlayStation 5, Xbox One and Xbox Series X games, and the aforementioned game consoles can play back regular Blu-ray Discs. In the mid-2010s, the Ultra HD Blu-ray format was released which is an enhanced variant of Blu-ray compatible with the 4K resolution. Ultra HD Blu-ray discs and players became available in the first quarter of 2016, having a storage capacity of up to 100GB. By December 2017, the specification for an 8K Blu-ray format was also completed. However, this specification was for Japan only so that it could be used by Japanese public broadcasters like NHK to broadcast in 8K resolution for the Tokyo 2020 Olympic Games in Japan. Beyond Blu-ray The Holographic Versatile Disc (HVD), described in the ECMA-377 standard, was in development by the Holography System Development (HSD) Forum using a green writing/reading laser (532nm) and a red positioning/addressing laser (650nm). It was to offer MPEG-2, MPEG-4 AVC (H.264), HEVC (H.265), and VC-1 encoding, supporting a maximum storage capacity of 6TB. No systems conforming to the Ecma International HVD standard have been released. The company responsible for HVD went bankrupt in 2010, making any releases unlikely. Rise of boutique labels A boutique Blu-ray label or specialty Blu-ray label is a home video distributor that releases films on Blu-ray or 4K Ultra HD Blu-ray format, characterized by a specific or niche target market and collectable features like "limited edition" or "special edition" releases, deluxe slipcases or packaging, and other materials. Examples of boutique Blu-ray labels include the American Genre Film Archive (AGFA), Arrow Films, Canadian International Pictures, The Criterion Collection, Kino Lorber, Severin Films, Shout! Factory, Twilight Time, Vinegar Syndrome, and the Warner Archive Collection. 
Boutique Blu-ray labels, which are popular among collectors and enthusiasts of film and physical media, have been credited as a factor in a "Blu-ray renaissance" dating back to at least 2018, with some consumers choosing to purchase films on physical formats in an age of digital streaming. Reasons some consumers prefer Blu-rays to streaming include higher video quality, the tactile nature of owning a film physically, elaborate packaging, bonus features, and the desire to own or watch films that are not available in streaming services' libraries. Physical media Laser and optics While a DVD uses a 650nm red laser, Blu-ray Disc uses a 405nm "blue" laser diode. Although the laser is called "blue", its color is actually in the violet range. The shorter wavelength can be focused to a smaller area, thus enabling it to read information recorded in pits that are less than half the size of those on a DVD, and can consequently be spaced more closely, resulting in a shorter track pitch, enabling a Blu-ray Disc to hold about five times the amount of information that can be stored on a DVD. The lasers are GaN (gallium nitride) laser diodes that produce 405nm light directly, that is, without frequency doubling or other nonlinear optical mechanisms. CDs use 780nm near-infrared lasers. The minimum "spot size" on which a laser can be focused is limited by diffraction and depends on the wavelength of the light and the numerical aperture of the lens used to focus it. By decreasing the wavelength, increasing the numerical aperture from 0.60 to 0.85, and making the cover layer thinner to avoid unwanted optical effects, designers can cause the laser beam to focus on a smaller spot, which effectively allows more information to be stored in the same area. For a Blu-ray Disc, the spot size is 580nm. This allows a reduction of the pit size from 400nm for DVD to 150nm for Blu-ray Disc, and of the track pitch from 740nm to 320nm. See compact disc for information on optical discs' physical structure. In addition to the optical improvements, Blu-ray Discs feature improvements in data encoding that further increase the amount of content that can be stored. Hard-coating technology Given that the Blu-ray Disc data layer is closer to the surface of the disc compared to the DVD standard, it was found in early designs to be more vulnerable to scratches. The first discs were therefore housed in cartridges for protection, resembling Professional Discs introduced by Sony in 2003. Using a cartridge would increase the price of an already expensive medium, and would increase the size of Blu-ray Disc drives, so designers chose hard-coating of the pickup surface instead. TDK was the first company to develop a working scratch-protection coating for Blu-ray Discs, naming it Durabis. In addition, both Sony's and Panasonic's replication methods include proprietary hard-coat technologies. Sony's rewritable media are spin-coated, using a scratch-resistant acrylic and antistatic coating. Verbatim's recordable and rewritable Blu-ray Discs use their own proprietary technology, called Hard Coat. Colloidal silica-dispersed UV-curable resins are used for the hard coating, given that, according to the Blu-ray Disc Association, they offer the best tradeoff between scratch resistance, optical properties, and productivity. The Blu-ray Disc specification requires the testing of resistance to scratches by mechanical abrasion. 
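As a rough, back-of-the-envelope check of the "about five times" density figure given in the Laser and optics section above, the following sketch compares the diffraction-limited spot sizes of the two formats. It assumes the spot size simply scales with wavelength divided by numerical aperture and ignores all constant factors, coding overhead, and the thinner cover layer, so only the ratio between the two formats is meaningful.

```python
# Rough comparison of DVD and Blu-ray optical parameters from the section above.
# Spot size is taken to scale with wavelength / numerical aperture; areal data
# density then scales roughly with the inverse square of the spot size.
# Constant factors are ignored, so only the ratios are meaningful.

dvd = {"wavelength_nm": 650, "numerical_aperture": 0.60}
blu_ray = {"wavelength_nm": 405, "numerical_aperture": 0.85}

def relative_spot_size(fmt: dict) -> float:
    return fmt["wavelength_nm"] / fmt["numerical_aperture"]

shrink = relative_spot_size(dvd) / relative_spot_size(blu_ray)
density_gain = shrink ** 2

print(f"linear spot-size reduction: {shrink:.2f}x")            # about 2.3x
print(f"approximate areal density gain: {density_gain:.1f}x")  # about 5.2x
```

The result of roughly five is consistent with a 25GB Blu-ray layer versus the 4.7GB of a single DVD layer.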
In contrast, DVD media are not required to be scratch-resistant, but since development of the technology, some companies, such as Verbatim, implemented hard-coating for more expensive lines of recordable DVDs. Drive speeds The table shows the speeds available. Even the lowest speed (1×) is sufficient to play and record real-time 1080p video; the higher speeds are relevant for general data storage and more sophisticated handling of video. BD discs are designed to cope with at least 5,000 rpm of rotational speed. The usable data rate of a Blu-ray Disc drive can be limited by the capacity of the drive's data interface. With a USB 2.0 interface, the maximum exploitable drive speed is about 36 MB/s or 288 Mbit/s (also called 8× speed). A USB 3.0 interface (with proper cabling) does not have this limitation, nor do even the oldest Serial ATA (SATA, 1.5 Gbit/s) or the latest Parallel ATA (133 MB/s) standards. Internal Blu-ray drives that are integrated into a computer (as opposed to physically separate and connected via a cable) typically have a SATA interface. More recent half-height Blu-ray writers have reached writing speeds of up to 16× (constant angular velocity) on single-layer BD-R media, while the highest reading speeds are 12×, presumably to prevent repeated physical stress on the disc. Slim type drives are limited to 6× speeds (constant angular velocity) due to spatial and power limitations. The Blu-ray format has a write verification feature, similar to that of DVD-RAM, but brings this feature to a write-once disc for the first time. If activated, the correctness of the written data is verified immediately after being written so unreadable data can be written again. In this case, the writing speed is halved because half of the disc rotations are for writing only. "Write verification" is not an official term for the feature, only a description of what it does. The feature may be activated by default, as is the case in the disc writing utility growisofs. Deactivating write verification may be desirable to save time when mass-producing physical copies of data, since errors are unlikely to occur on physically undamaged media. Media quality and data integrity The quality and data integrity of optical media can be determined by measuring the rate of errors; higher rates may be an indication of deteriorating media, low-quality media, physical damage such as scratches or dust, or media written using a defective optical drive. Errors on Blu-ray media are measured using the so-called LDC (Long Distance Codes) and BIS (Burst Indication Subcodes) error parameters, of which rates below 13 and 15 respectively can be considered healthy. Not all vendors and models of optical drives have error scanning functionality implemented. Packaging Pre-recorded Blu-ray Disc titles usually ship in packages similar to, but slightly smaller (18.5mm shorter and 2mm thinner: 135mm × 171.5mm × 13mm) and more rounded than, a standard DVD keep case, generally with the format prominently displayed in a horizontal stripe across the top of the case (translucent blue for Blu-ray video discs, clear for Blu-ray 3D video releases, red for PlayStation 3 Greatest Hits Games, transparent for regular PlayStation 3 games, transparent dark blue for PlayStation 4 and PlayStation 5 games, transparent green for Xbox One and Xbox Series X games, and black for Ultra HD Blu-ray video releases). Warren Osborn and The Seastone Media Group, LLC created the package that was adopted worldwide following the Blu-ray versus HD DVD market adoption choice.
Because Blu-ray cases are smaller than DVD cases, more Blu-rays than DVDs can fit on a shelf. Types BD-ROM "Blu-ray Disc Read-Only Memory", or BD-ROM, is the technical term used for standard, factory-pressed Blu-ray discs. The content of these discs is written once and cannot be modified, and they can't be created by consumer optical disc recorders. Mini Blu-ray Disc The "Mini Blu-ray Disc" (also, "Mini-BD" and "Mini Blu-ray") is a compact variant of the Blu-ray Disc that can store 7.8GB of data in its single-layer configuration, or 15.6GB on a dual-layer disc. It is similar in concept to the MiniDVD and Mini CD. Recordable (BD-R) and rewritable (BD-RE) versions of Mini Blu-ray Disc have been developed specifically for compact camcorders and other compact recording devices. Blu-ray Disc recordable "Blu-ray Disc recordable" (BD-R) refers to two optical disc formats that can be recorded with an optical disc recorder. BD-Rs can be written to once, whereas Blu-ray Disc Recordable Erasable (BD-REs) can be erased and re-recorded multiple times. The current practical maximum speed for Blu-ray Discs is about 12× (). Higher speeds of rotation (5,000+ rpm) cause too much wobble for the discs to be written properly, as with the 24× () and 56× (, 11,200 rpm) maximum speeds, respectively, of standard DVDs and CDs. Since September 2007, BD-RE is also available in the smaller 8cm Mini Blu-ray Disc size. On September 18, 2007, Pioneer and Mitsubishi codeveloped BD-R LTH ("Low to High" in groove recording), which features an organic dye recording layer that can be manufactured by modifying existing CD-R and DVD-R production equipment, significantly reducing manufacturing costs. In February 2008, Taiyo Yuden, Mitsubishi, and Maxell released the first BD-R LTH Discs, and in March 2008, Sony's PlayStation 3 officially gained the ability to use BD-R LTH Discs with the 2.20 firmware update. In May 2009 Verbatim/Mitsubishi announced the industry's first 6X BD-R LTH media, which allows recording a 25GB disc in about 16 minutes. Unlike with the previous releases of 120mm optical discs (i.e. CDs and standard DVDs), Blu-ray recorders hit the market almost simultaneously with Blu-ray's debut. BD9 and BD5 The BD9 format was proposed to the Blu-ray Disc Association by Warner Home Video as a cost-effective alternative to the 25/50GB BD-ROM discs. The format was supposed to use the same codecs and program structure as Blu-ray Disc video but recorded onto less expensive 8.5GB dual-layer DVD. This red-laser media could be manufactured on existing DVD production lines with lower costs of production than the 25/50GB Blu-ray media. Usage of BD9 for releasing content on "pressed" discs never caught on. With the end of the format war, manufacturers ramped production of Blu-ray Discs and lowered prices to compete with DVDs. On the other hand, the idea of using inexpensive DVD media became popular among individual users. A lower-capacity version of this format that uses single-layer 4.7GB DVDs has been unofficially called BD5. Both formats are being used by individuals for recording high-definition content in Blu-ray format onto recordable DVD media. Despite the fact that the BD9 format has been adopted as part of the BD-ROM basic format, none of the existing Blu-ray player models explicitly claim to be able to read it. Consequently, the discs recorded in BD9 and BD5 formats are not guaranteed to play on standard Blu-ray Disc players. 
AVCHD and AVCREC also use inexpensive media like DVDs, but unlike BD9 and BD5 these formats have limited interactivity, codec types, and data rates. As of March 2011, BD9 was removed as an official BD-ROM disc. BDXL The BDXL format for recordable Blu-ray discs allows 100GB and 128GB write-once discs, and 100GB rewritable discs for commercial applications. The BDXL specification was finalised in June 2010. BD-R 3.0 Format Specification (BDXL) defined a multi-layered disc recordable in BDAV format with the speed of 2× and 4×, capable of 100/128GB and usage of UDF2.5/2.6. BD-RE 4.0 Format Specification (BDXL) defined a multi-layered disc rewritable in BDAV with the speed of 2× and 4×, capable of 100GB and usage of UDF2.5 as file system. Although the 66GB and 100GB BD-ROM discs used for Ultra HD Blu-ray use the same linear density as BDXL, the two formats are not compatible with each other, therefore it is not possible to use a triple layer BDXL disc to burn an Ultra HD Blu-ray Disc playable in an Ultra HD Blu-ray player, although standard 50GB BD-R dual-layer discs can be burned in the Ultra HD Blu-ray format. IH-BD The IH-BD (Intra-Hybrid Blu-ray) format includes a 25GB rewritable layer (BD-RE) and a 25GB write-once layer (BD-ROM), designed to work with existing Blu-ray Discs. Data format standards Filesystem Blu-ray Disc specifies the use of Universal Disk Format (UDF) 2.50 as a convergent-friendly format for both PC and consumer electronics environments. It is used in the latest specifications of BD-ROM, BD-RE, and BD-R. In the first BD-RE specification (defined in 2002), the BDFS (Blu-ray Disc File System) was used. The BD-RE 1.0 specification was defined mainly for the digital recording of high-definition television (HDTV) broadcast television. The BDFS was replaced by UDF 2.50 in the second BD-RE specification in 2005, to enable interoperability among consumer electronics, Blu-ray recorders, and personal computer systems. These optical disc recording technologies enabled PC recording and playback of BD-RE. BD-R can use UDF 2.50/2.60. The Blu-ray Disc application for recording of digital broadcasting has been developed as System Description Blu-ray Rewritable Disc Format Part 3 Audio Visual Basic Specifications (BDAV). The requirements related to the computer file system have been specified in System Description Blu-ray Rewritable Disc Format part 2 File System Specifications version 1.0 (BDFS). Initially, the BD-RE version 1.0 (BDFS) was specifically developed for recording of digital broadcasts using the Blu-ray Disc application (BDAV application). But these requirements are superseded by the Blu-ray Rewritable Disc File System Specifications version 2.0 (UDF) (a.k.a. RE 2.0) and Blu-ray Recordable Disc File System Specifications version 1.0 (UDF) (a.k.a. R 1.0). Additionally, a new application format, BDMV (System Description Blu-ray Disc Prerecorded Format part 3 Audio Visual Basic Specifications) for High Definition Content Distribution was developed for BD-ROM. The only file system developed for BDMV is the System Description Blu-ray Read-Only Disc Format part 2 File System Specifications version 1.0 (UDF) which defines the requirements for UDF 2.50. All BDMV application files are stored under a "BDMV" directory. Application format BDAV or BD-AV (Blu-ray Disc Audio/Visual): a consumer-oriented Blu-ray video format used for audio/video recording (defined in 2002). BDMV or BD-MV (Blu-ray Disc Movie): a Blu-ray video format with menu capability commonly used for movie releases. 
BDMV Recording specification (defined in September 2006 for BD-RE and BD-R). RREF (Realtime Recording and Editing Format): a subset of BDMV designed for real-time recording and editing applications. HFPA (High Fidelity Pure Audio): A high definition audio disc using the Blu-ray format Media format Container format Audio, video, and other streams are multiplexed and stored on Blu-ray Discs in a container format based on the MPEG transport stream. It is also known as BDAV MPEG-2 transport stream and can use the filename extension .m2ts. Blu-ray Disc titles authored with menus are in the BDMV (Blu-ray Disc Movie) format and contain audio, video, and other streams in a BDAV container. There is also the BDAV (Blu-ray Disc Audio/Visual) format, the consumer-oriented alternative to the BDMV format used for movie releases. The BDAV format is used on BD-REs and BD-Rs for audio/video recording. BDMV format was later defined also for BD-RE and BD-R (in September 2006, in the third revision of BD-RE specification and second revision of BD-R specification). Blu-ray Disc employs the MPEG transport stream recording method. That enables transport streams of digital broadcasts to be recorded as they are broadcast, without altering the format. It also enables flexible editing of a digital broadcast that is recorded as is, where the data can be edited just by rewriting the playback stream. A function for high-speed, easy-to-use retrieval is also built in. Blu-ray Disc Video uses MPEG transport streams, compared to DVD's MPEG program streams. An MPEG transport stream contains one or more MPEG program streams, so this allows multiple video programs to be stored in the same file so they can be played back simultaneously (e.g., with the "picture-in-picture" effect). Codecs The BD-ROM specification mandates certain codec compatibilities for both hardware decoders (players) and movie software (content). Windows Media Player does not come with all of the codecs required to play Blu-ray Discs. Video Originally, BD-ROMs stored video up to 1920 × 1080 pixel resolution at up to 60 (59.94) fields per second. Currently, with UHD BD-ROM, videos can be stored at a maximum of 3840 × 2160 pixel resolution at up to 60 (59.94) frames per second, progressively scanned. While most current Blu-ray players and recorders can read and write video at the full 59.94p and 50p progressive format, new players for the UHD specifications will be able to read video in either the 59.94p or 50p format. For video, all players are required to process H.262/MPEG-2 Part 2, H.264/MPEG-4 Part 10: AVC, and SMPTE VC-1. BD-ROM titles with video must store video using one of the three mandatory formats; multiple formats on a single title are allowed. Blu-ray Disc allows video with a bit depth of 8-bits per color YCbCr with 4:2:0 chroma subsampling. The choice of formats affects the producer's licensing/royalty costs as well as the title's maximum run time, due to differences in compression efficiency. Discs encoded in MPEG-2 video typically limit content producers to around two hours of high-definition content on a single-layer (25GB) BD-ROM. The more-advanced video formats (VC-1 and MPEG-4 AVC) typically achieve a video run time twice that of MPEG-2, with comparable quality. MPEG-2, however, does have the advantage that it is available without licensing costs, as all MPEG-2 patents have expired.
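The run-time figures above follow from simple arithmetic on disc capacity and average bitrate. The sketch below uses assumed average video bitrates (roughly 25 Mbit/s for MPEG-2 and half that for AVC or VC-1 at comparable quality) purely for illustration; they are not values taken from the Blu-ray specification, and audio and filesystem overhead are ignored.

```python
# Illustrative run-time arithmetic for a single-layer 25GB BD-ROM.
# The average bitrates below are assumptions for the example only.

CAPACITY_BITS = 25e9 * 8  # 25 GB (decimal) expressed in bits

for codec, avg_mbit_per_s in (("MPEG-2", 25.0), ("AVC/VC-1", 12.5)):
    hours = CAPACITY_BITS / (avg_mbit_per_s * 1e6) / 3600
    print(f"{codec:8s} at ~{avg_mbit_per_s:.1f} Mbit/s -> about {hours:.1f} hours")

# MPEG-2 comes out at roughly 2.2 hours, and halving the average bitrate
# roughly doubles the run time, matching the comparison in the text.
```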
MPEG-2 was used by many studios (including Paramount Pictures, which initially used the VC-1 format for HD DVD releases) for the first series of Blu-ray Discs, which were launched throughout 2006. Modern releases are now often encoded in either MPEG-4 AVC or VC-1, allowing film studios to place all content on one disc, reducing costs and improving ease of use. Using these formats also frees a lot of space for storage of bonus content in HD (1080i/p), as opposed to the SD (480i/p) typically used for most titles. Some studios, such as Warner Bros., have released bonus content on discs encoded in a different format than the main feature title. For example, the Blu-ray Disc release of Superman Returns uses VC-1 for the feature film and MPEG-2 for some of its bonus content. Today, Warner and other studios typically provide bonus content in the video format that matches the feature. Audio For audio, BD-ROM players are required to implement Dolby Digital (AC-3), DTS, and linear PCM. Players may optionally implement Dolby Digital Plus and DTS-HD High Resolution Audio as well as lossless 5.1 and 7.1 surround sound formats Dolby TrueHD and DTS-HD Master Audio. BD-ROM titles must use one of the mandatory schemes for the primary soundtrack. A secondary audio track, if present, may use any of the mandatory or optional codecs. Bit rate The Blu-ray specification defines a maximum data transfer rate of 54 Mbit/s, a maximum AV bitrate of 48 Mbit/s (for both audio and video data), and a maximum video bit rate of 40 Mbit/s. In contrast, the HD DVD standard has a maximum data transfer rate of 36.55 Mbit/s, a maximum AV bitrate of 30.24 Mbit/s, and a maximum video bitrate of 29.4 Mbit/s. Java software interface At the 2005 JavaOne trade show, it was announced that Sun Microsystems' Java cross-platform software environment would be included in all Blu-ray Disc players as a mandatory part of the standard. Java is used to implement interactive menus on Blu-ray Discs, as opposed to the method used on DVD-video discs. DVDs use pre-rendered MPEG segments and selectable subtitle pictures, which are considerably more primitive and rarely seamless. At the conference, Java creator James Gosling suggested that the inclusion of a Java virtual machine, as well as network connectivity in some BD devices, would allow updates to Blu-ray Discs via the Internet, adding content such as additional subtitle languages and promotional features not included on the disc at pressing time. This Java version is called BD-J and is built on a profile of the Globally Executable MHP (GEM) standard; GEM is the worldwide version of the Multimedia Home Platform standard. Player profiles The BD-ROM specification defines four Blu-ray Disc player profiles, including an audio-only player profile (BD-Audio) that does not require video decoding or BD-J. All of the video-based player profiles (BD-Video) are required to have a full implementation of BD-J. On November 2, 2007, the Grace Period Profile was superseded by Bonus View as the minimum profile for new BD-Video players released to the market. When Blu-ray Disc software not authored with interactive features dependent on Bonus View or BD-Live hardware capabilities is played on Profile 1.0 players, it is able to play the main feature of the disc, but some extra features may not be available or will have limited capability. BD-Live The biggest difference between Bonus View and BD-Live is that BD-Live requires the Blu-ray Disc player to have an Internet connection to access Internet-based content.
BD-Live features have included Internet chats, scheduled chats with the director, Internet games, downloadable featurettes, downloadable quizzes, and downloadable movie trailers. While some Bonus View players may have an Ethernet port, it is used for firmware updates and is not used for Internet-based content. In addition, Profile 2.0 also requires more local storage in order to handle this content. Profile 1.0 players are not eligible for Bonus View or BD-Live compliant upgrades and do not have the function or capability to access these upgrades, with the exception of the latest players and the PlayStation 3. Internet is required to use. Region codes As with the implementation of region codes for DVDs, Blu-ray Disc players sold in a specific geographical region are designed to play only discs authorized by the content provider for that region. This is intended to permit content providers (motion picture studios, television production companies, etc.) to enact regional price discrimination and/or exclusive content licensing. According to the Blu-ray Disc Association, all Blu-ray Disc players and Blu-ray Disc-equipped computer systems are required to enforce regional coding. However, content providers need not use region playback codes. Some current estimates suggest 70% of available movie Blu-ray Discs from the major studios are region-free and can therefore be played on any Blu-ray Disc player in any region. Movie distributors have different region-coding policies. Among major U.S. studios, Walt Disney Pictures, Warner Bros., Paramount Pictures, Universal Studios, and Sony Pictures have released most of their titles free of region-coding. MGM and Lionsgate have released a mix of region-free and region-coded titles. While 20th Century Fox initially released most of their titles region-coded, most of their post-Disney merger content is region-free. Vintage film restoration and distribution company The Criterion Collection uses US region-coding in all Blu-ray releases, with their releases in the UK market using UK region-coding. The Blu-ray Disc region-coding scheme divides the world into three regions, labeled A, B, and C. A new form of Blu-ray region-coding tests not only the region of the player/player software, but also its country code, repurposing a user setting intended for localization (PSR19) as a new form of regional lockout. This means, for example, while both the US and Japan are Region A, some American discs will not play on devices/software configured for Japan or vice versa, since the two countries have different country codes. (For example, the United States is "US" (21843 or hex 0x5553), Japan is "JP" (19024 or hex 0x4a50), and Canada is "CA" (17217 or hex 0x4341).) Although there are only three Blu-ray regions, the country code allows much more precise control of the regional distribution of Blu-ray Discs than the six (or eight) DVD regions. With Blu-ray Discs, there are no "special regions" such as the regions 7 and 8 for DVDs. In circumvention of region-coding restrictions, stand-alone Blu-ray Disc players are sometimes modified by third parties to allow for playback of Blu-ray Discs (and DVDs) with any region code. Instructions ("hacks") describing how to reset the Blu-ray region counter of computer player applications to make them multi-region indefinitely are also regularly posted to video enthusiast websites and forums. Unlike DVD region codes, Blu-ray region codes are verified only by the player software, not by the optical drive's firmware. 
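The numeric country-code values quoted above are simply the two ASCII bytes of the two-letter code packed into a 16-bit integer. The following sketch reproduces those numbers; it only illustrates the packing and makes no assumptions about how players actually store or compare the value.

```python
# Pack a two-letter country code into the 16-bit value quoted in the text,
# e.g. "US" -> 0x5553 = 21843. Only the ASCII packing is shown here.

def country_code_value(code: str) -> int:
    if len(code) != 2 or not code.isalpha():
        raise ValueError("expected a two-letter code such as 'US'")
    high, low = (ord(ch) for ch in code.upper())
    return (high << 8) | low

for cc in ("US", "JP", "CA"):
    value = country_code_value(cc)
    print(f"{cc}: {value} (hex 0x{value:04x})")

# Output:
# US: 21843 (hex 0x5553)
# JP: 19024 (hex 0x4a50)
# CA: 17217 (hex 0x4341)
```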
The latest types of Blu-ray players, suitable for Ultra HD Blu-ray content, are not region-free, but Ultra HD Blu-ray disc manufacturers have not yet locked the discs to any region and they work worldwide. Digital rights management The Blu-ray Disc format employs several layers of digital rights management (DRM) which restrict the usage of the discs. This has led to extensive criticism of the format by organizations opposed to DRM, such as the Free Software Foundation, and by consumers, because new releases require player firmware updates to allow disc playback. High-bandwidth Digital Content Protection Blu-ray equipment is required to implement the High-bandwidth Digital Content Protection (HDCP) system to encrypt the data sent by players to rendering devices through physical connections. This is aimed at preventing the copying of copyrighted content as it travels across cables. Through a protocol flag in the media stream called the Image Constraint Token (ICT), a Blu-ray Disc can enforce its reproduction in a lower resolution whenever a full HDCP-compliant link is not used. In order to ease the transition to high definition formats, the adoption of this protection method was postponed until 2011. Advanced Access Content System The Advanced Access Content System (AACS) is a standard for content distribution and digital rights management. It was developed by the AACS Licensing Administrator, LLC (AACS LA), a consortium that includes Disney, Intel, Microsoft, Panasonic, Warner Bros., IBM, Toshiba, and Sony. Since the appearance of the format on devices in 2006, several successful attacks have been made on it. The first known attack relied on the trusted client problem. In addition, decryption keys have been extracted from a weakly protected player (WinDVD). Since keys can be revoked in newer releases, this is only a temporary attack, and new keys must continually be discovered in order to decrypt the latest discs. BD+ BD+ was developed by Cryptography Research Inc. and is based on their concept of Self-Protecting Digital Content. BD+, effectively a small virtual machine embedded in authorized players, allows content providers to include executable programs on Blu-ray Discs. Such programs can: Examine the host environment to see if the player has been tampered with. Every licensed playback device manufacturer must provide the BD+ licensing authority with memory footprints that identify their devices. Verify that the player's keys have not been changed. Execute native code, possibly to patch an otherwise insecure system. Transform the audio and video output. Parts of the content will not be viewable without letting the BD+ program unscramble it. If a playback device manufacturer finds that its devices have been hacked, it can potentially release BD+ code that detects and circumvents the vulnerability. These programs can then be included in all new content releases. The specifications of the BD+ virtual machine are available only to licensed device manufacturers. A list of licensed commercial adopters is available from the BD+ website. The first titles using BD+ were released in October 2007. Since November 2007, versions of BD+ protection have been circumvented by various versions of the AnyDVD HD program. Other programs known to be capable of circumventing BD+ protection are DumpHD (versions 0.6 and above, along with some supporting software), MakeMKV, and two applications from DVDFab (Passkey and HD Decrypter).
BD-ROM Mark ROM Mark is a small amount of cryptographic data that is stored separately from normal Blu-ray Disc data, aiming to prevent replication of the discs. The cryptographic data is needed to decrypt the copyrighted disc content protected by AACS. A specially licensed piece of hardware is required to insert the ROM Mark into the media during mastering. During replication, this ROM Mark is transferred together with the recorded data to the disc. In consequence, any copies of a disc made with a regular recorder will lack the ROM Mark data and will be unreadable on standard players. Backward compatibility The Blu-ray Disc Association recommends but does not require that Blu-ray Disc drives be capable of reading standard DVDs and CDs, for backward compatibility. Most Blu-ray Disc players are capable of reading both CDs and DVDs; however, a few of the early Blu-ray Disc players released in 2006, such as the Sony BDP-S1, could play DVDs but not CDs. In addition, with the exception of some early models from LG and Samsung, Blu-ray players cannot play HD DVDs, and HD DVD players cannot play Blu-ray Discs. Some Blu-ray players can also play Video CDs, Super Audio CDs, and/or DVD-Audio discs. All Ultra HD Blu-ray players can play regular Blu-ray Discs, and most can play DVDs and CDs. The PlayStation 4 and PlayStation 5 do not support CDs. Variations High Fidelity Pure Audio (BD-A) High Fidelity Pure Audio (HFPA) is a marketing initiative, spearheaded by the Universal Music Group, for audio-only Blu-ray optical discs. Launched in 2013 as a potential successor to the compact disc, it has been compared with DVD-A and SACD, which had similar aims. AVCHD AVCHD was originally developed as a high-definition format for consumer tapeless camcorders. Derived from the Blu-ray Disc specification, AVCHD shares a similar random access directory structure but is restricted to lower audio and video bitrates, simpler interactivity, and the use of AVC-video and Dolby AC-3 (or linear PCM) audio. Being primarily an acquisition format, AVCHD playback is not universally recognized among devices that play Blu-ray Discs. Nevertheless, many such devices are capable of playing AVCHD recordings from removable media, such as DVDs, SD/SDHC memory cards, "Memory Stick" cards, and hard disk drives. AVCREC AVCREC uses a BDAV container to record high-definition content on conventional DVDs. Presently AVCREC is tightly integrated with the Japanese ISDB broadcast standard and is not marketed outside of Japan. AVCREC is used primarily in set-top digital video recorders and in this regard it is comparable to HD REC. Blu-ray 3D The Blu-ray Disc Association (BDA) created a task force made up of executives from the film industry and the consumer electronics and IT sectors to help define standards for putting 3D film and 3D television content on a Blu-ray Disc. On December 17, 2009, the BDA officially announced 3D specs for Blu-ray Disc, allowing backward compatibility with current 2D Blu-ray players, though compatibility is limited by the fact that the longer 3D discs are triple-layer, which normal (2D only) players cannot read. The BDA has said, "The Blu-ray 3D specification calls for encoding 3D video using the "Stereo High" profile defined by Multiview Video Coding (MVC), an extension to the ITU-T H.264 Advanced Video Coding (AVC) codec currently implemented by all Blu-ray Disc players. 
MPEG4-MVC compresses both left and right eye views with a typical 50% overhead compared to equivalent 2D content, and can provide full 1080p resolution backward compatibility with current 2D Blu-ray Disc players." This means the MVC (3D) stream is backward compatible with the H.264/AVC (2D) stream, allowing older 2D devices and software to decode stereoscopic video streams, ignoring additional information for the second view. However, some 3D discs have a user limitation set preventing the disc from being viewed in 2D (though a 2D disc is often included in the packaging). Sony added Blu-ray 3D support to its PlayStation 3 console via a firmware upgrade on September 21, 2010. The console had previously gained 3D gaming capability via an update on April 21, 2010. Since the version 3.70 software update on August 9, 2011, the PlayStation 3 can play DTS-HD Master Audio and DTS-HD High Resolution Audio while playing 3D Blu-ray. Dolby TrueHD is used on a small minority of Blu-ray 3D releases, with bitstreaming implemented in slim PlayStation 3 models only (original "fat" PS3 models decode internally and send audio as LPCM). The PlayStation VR can also be used to watch these movies in 3D on a PlayStation 4. Most major home entertainment studios, such as Walt Disney Studios, Sony Pictures, MGM, and Universal Pictures, discontinued the Blu-ray 3D format in North America, but continued to produce and sell 3D discs in other regions such as South America, Europe, Asia, and Australia. Paramount Pictures has ceased sales and production of 3D Blu-ray Discs all over the world, its last 3D releases being Ghost in the Shell and Transformers: The Last Knight, while Warner Bros. continued to sell and produce 3D Blu-ray Discs in North America until 2022, with their last film released on the format being Dune. Ultra HD Blu-ray Ultra HD Blu-ray Discs are incompatible with existing standard Blu-ray players. They support 4K UHD (3840 × 2160 pixel resolution) video at frame rates up to 60 progressive frames per second, encoded using High Efficiency Video Coding. The discs support high dynamic range (HDR) by increasing the color depth to 10 bits per color, and a greater color gamut than conventional Blu-ray video by using the Rec. 2020 color space. The specification for an 8K Blu-ray format was also completed by the Blu-ray Disc Association for use in Japan. More than two hours of 8K content can be recorded on BDXL discs. See also 2D plus Delta Blu-ray Disc authoring Blu-ray Disc recordable Comparison of high-definition optical disc formats Comparison of popular optical data-storage systems Comparison of video player software: Optical media ability, for a list of software BD video players Digital 3D and 3D television Disk-drive performance characteristics Format war High-definition optical disc format war High-definition television Holographic Versatile Disc (HVD) List of optical disc manufacturers List of Blu-ray player manufacturers Universal Media Disc Notes References External links Blu-ray Disc Association's Technical White Papers Blu-ray Disc License Office AACS LA Audiovisual introductions in 2006 2006 in technology Audio storage Computer-related introductions in 2006 Consumer electronics High dynamic range High-definition television Home video Japanese inventions Dutch inventions Java platform Products introduced in 2006 Rotating disc computer storage media Sony hardware Television terminology Video game distribution Video storage 21st-century inventions Optical computer storage media
Blu-ray
[ "Technology", "Engineering" ]
12,278
[ "Electrical engineering", "Computing platforms", "Java platform", "High dynamic range" ]
11,015,838
https://en.wikipedia.org/wiki/Malagasy%20giant%20rat
The Malagasy giant rat (Hypogeomys antimena), also known as the votsotsa or votsovotsa, is a nesomyid rodent found only in the Menabe region of Madagascar. It is an endangered species due to habitat loss, slow reproduction, and limited range (200 square kilometres north of Morondava, between the rivers Tomitsy and Tsiribihina). Pairs are monogamous and females bear only one or two young per year. It is the only extant species in the genus Hypogeomys; another species, Hypogeomys australis, is known from subfossil remains a few thousand years old. Physical description Malagasy giant rats have an appearance somewhat similar to rabbits, though maintaining many rat-like features especially in the face. Males and females both grow to roughly the size of a rabbit, with a long, dark tail in addition. They have a coarse coat which varies from gray to brown to reddish, darkening around the head and fading to white on the belly. They also have prominent, pointed ears and long, muscular back legs, used for jumping to avoid predators. They can leap high into the air, for which reason they are sometimes called giant jumping rats. Reproduction and maturation The male Malagasy giant rat reaches sexual maturity within one year, but will not mate until reaching 1.5 to two years of age. The female Malagasy giant rat reaches sexual maturity in two years. These rats are one of the few rodent species to practice sexual monogamy. Once mated, a pair will stay together until one of them dies. On the death of a mate, females tend to remain in the burrow until a new male is found. While males usually wait for a new mate as well, they do occasionally move to live with a widowed female. Females give birth to a single offspring after a gestation of 102–138 days (number observed in captivity) once or twice during the mating season, which coincides with the Madagascar rainy season from December to April. The young are raised by both parents, remaining in the family burrow for the first 4–6 weeks, then increasingly exploring and foraging outside. Young males stay with the family unit for one year before achieving sexual maturity and leaving to find their own burrow. Females do not mature for two years and remain with their parents for the extra year. Males are extremely protective of their young. They are known to increase their own predation risk to follow or defend their offspring. Lifestyle and behavior Completely nocturnal, the giant rats live in burrows with as many as six entrances; even those entrances in regular use are kept blocked by dirt and leaves to discourage predation by the Malagasy ground boa. The other main traditional predatory threat is the puma-like fossa, but increasingly feral dogs and cats introduced to the island are hunting them as well. When foraging, the rats move on all fours, searching the forest floor for fallen fruit, nuts, seeds, and leaves. They have also been known to strip bark from trees and dig for roots and invertebrates. Pairs are highly territorial and the male and female will both defend their territory from other rats. They mark their territory with urine, feces, and scent gland secretions. Conservation and efforts The Malagasy giant rat is listed as critically endangered. Limited range, habitat destruction, increased predation by non-native feral dogs and cats, and disease have all led to the decline.
Many feral cats also carry the parasite Toxoplasma gondii, the cause of toxoplasmosis, which makes infected rodents lose their fear of cats, to the point of almost being attracted to them, resulting in their being caught and killed more easily. Hantavirus, which causes kidney failure, is another disease ravaging the population. The Madagascan Government has enacted laws to protect the giant rat. Much of their territory is now the Kirindy Forest Reserve, where sustainable forestry is practiced. The government has also introduced policies that help the inhabitants of the island coexist with the animals that live there. Gerald Durrell was the first scientist to breed the rats in captivity. In 1990, he brought five specimens to Jersey. Since then, 16 breeding programs have been set up and 12 have been successful. References Hypogeomys Mammals described in 1869 Endemic fauna of Madagascar EDGE species Taxa named by Alfred Grandidier
Malagasy giant rat
[ "Biology" ]
892
[ "EDGE species", "Biodiversity" ]
11,015,903
https://en.wikipedia.org/wiki/Water%20brake
A water brake is a type of fluid coupling used to absorb mechanical energy and usually consists of a turbine or propeller mounted in an enclosure filled with water. As the turbine or propeller turns, mechanical energy is transferred to the water due to turbulence and friction. The shock caused by the acceleration of the water as it passes from pockets in the stator to the pockets in the spinning rotor requires energy. That energy heats the water due to the friction as the water moves through the water brake. Almost all of the horsepower of the system turning the rotor (usually an internal combustion engine) is converted into a temperature change of the water. A very small amount of energy is taken by the bearings and seals within the unit. Therefore, water must constantly move through the device at a rate proportional to the horsepower that is being absorbed. Water temperature exiting the unit must be kept under 120–160 °F (50–70 °C) to prevent scale formation and cavitation. The water enters in the center of the device and, after passing through the pockets in the stator and rotor, exits the outside of the housing through a controlled orifice. The amount of loading is dependent on the level of water inside the housing. Some water brakes vary the load by controlling the inlet water volume only and have a set outlet orifice size depending on the desired horsepower to be absorbed; others control both input and output orifices at the same time, which allows greater control over outlet water temperatures. The housing is vented to the outside to allow air to displace the water as the water level in the unit rises and falls. The amount of torque that can be absorbed is defined by the equation T = kN²D⁵, where T is the torque, N the rotational speed in RPM, D the diameter of the rotor, and k a constant dependent on the size, shape, and angle of the rotor/stator pockets. Systems that require the torque of the system under test to be measured typically use a strain gauge mounted on a torque arm that is attached to the housing perpendicular to the input shaft. The housing/stator is mounted on roller bearings so that it can turn independently of the rotor and frame, and the rotor is mounted on roller bearings within the housing/stator. The strain gauge connects the torque arm to the frame assembly and keeps the housing from spinning as the housing tries to turn in the same direction as the turbine (Newton's third law). The amount of resistance can be varied by changing the amount of water in the enclosure at any one time. This is accomplished through manual or electronically controlled water valves. The higher the water level within the brake, the greater the loading. Water brakes are commonly used on some forms of dynamometer but have also been used on railway vehicles such as the British Advanced Passenger Train. Hydrokinetic construction (torque absorption) The Froude waterbrake is based on hydrokinetic construction (torque absorption). The machine consists of an impeller (rotor) which accelerates water outwards by its rotation. The water has its velocity changed by a stator, which causes the water to be returned to the inner diameter of the rotor. For a given mass of water, this velocity change yields a corresponding momentum change, and the rate of change of momentum is a force. This force acts at some distance from the shaft centerline within the rotor and stator, and a force multiplied by a distance produces torque. See also Torque converter References Dynamometers
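The torque relation given above, T = kN²D⁵, implies strong sensitivity to shaft speed and rotor diameter. The sketch below only illustrates that scaling; the constant k and the example figures are arbitrary, since k depends on the pocket geometry of a particular machine.

```python
# Scaling of absorbed torque with speed and rotor diameter, T = k * N**2 * D**5.
# k and the example figures are arbitrary; only the ratios are meaningful.

def torque(k: float, rpm: float, rotor_diameter_m: float) -> float:
    return k * rpm ** 2 * rotor_diameter_m ** 5

k = 1.0
base = torque(k, 3000, 0.30)

print(f"double the shaft speed: {torque(k, 6000, 0.30) / base:.1f}x torque")
print(f"20% larger rotor:       {torque(k, 3000, 0.36) / base:.1f}x torque")

# Doubling the speed quadruples the absorbed torque, while a 20% larger
# rotor absorbs roughly 2.5 times as much, reflecting the D**5 term.
```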
Water brake
[ "Technology", "Engineering" ]
700
[ "Dynamometers", "Measuring instruments" ]
11,015,920
https://en.wikipedia.org/wiki/Cornish%E2%80%93Windsor%20Covered%20Bridge
The Cornish–Windsor Covered Bridge is a -year-old, two-span, timber Town lattice-truss, , covered bridge that crosses the Connecticut River between Cornish, New Hampshire (on the east), and Windsor, Vermont (on the west). Until 2008, when the Smolen–Gulf Bridge opened in Ohio, it had been the longest covered bridge (still standing) in the United States. History Previous bridges There were three bridges previously built on this site—one each in 1796, 1824 and 1828. The 1824 and 1828 spans were constructed and operated by a group of businessmen which included Allen Wardner (1786–1877). 1866 bridge (current) The current bridge was built in 1866 by Bela Jenks Fletcher (1811–1877) of Claremont and James Frederick Tasker (1826–1903) of Cornish at a cost of $9,000 (). The bridge is approximately long and wide. The structure uses a lattice truss patented in 1820 and 1835 by Ithiel Town (1784–1844). From 1866 through 1943, it operated as a toll bridge. According to a 1966 report by the New Hampshire Division of Economic Development, the bridge was plenty long enough to earn the name "kissin' bridge", a vernacular of covered bridges referring to the brief moment of relative privacy while crossing. Other tolls, in 1866, ran as high as 20 cents () for a four-horse carriage. The span was purchased by the state of New Hampshire in 1936 and became toll-free in 1943. Landmark designation and restoration 1970: The American Society of Civil Engineers (ASCE) designated the bridge a National Historic Civil Engineering Landmark. 1976: The bridge was listed in the National Register of Historic Places. 1988: The Cornish–Windsor Covered Bridge was rehabilitated, funded by the Federal Highway Administration. Clarification of "longest bridge" status While the Old Blenheim Bridge had and Bridgeport Covered Bridge has longer clear spans, and the Smolen–Gulf Bridge is longer overall, with a longest single span of , the Cornish–Windsor Bridge is still the longest wooden covered bridge and has the longest single covered span to carry automobile traffic. (Blenheim was and Bridgeport is pedestrian only.) The Hartland Bridge in Hartland, New Brunswick, Canada, is longer than the Cornish-Windsor Bridge, and is currently open, but the claim that Cornish-Windsor was the longest was made when the Hartland was closed. Access From Vermont Vermont Route 44 in Windsor heading southeast, ends at Main Street. (Main Street is also US 5 and VT 12.) Continuing past Main, the road becomes Bridge Street. Traveling on Bridge Street from Main, the Windsor bridge approach is about 2 tenths of a mile or . After crossing the bridge, Bridge Street ends at New Hampshire Route 12A, which runs along the Connecticut River on the west and Cornish Wildlife Management Area on the east. Although the public sometimes perceives the bridge as being solely in Windsor, the bridge is mostly in Cornish, given that the New Hampshire-Vermont boundary runs along the western mean low-water mark of the Connecticut River. Put another way, when one enters the bridge from the Windsor side, one is immediately in New Hampshire. From New Hampshire On New Hampshire Route 12A (Town House Road) in Cornish, coming from the south, Bridge Road is a T intersection on the left (west). Traveling from the north, from West Lebanon, New Hampshire, New Hampshire Route 12A is a notably scenic route along the Connecticut River. 
Historical marker Traveling from Cornish, just before the bridge intersection (about south of the bridge intersection), on the left, there is a parking area (about ) for viewing the bridge, which includes a New Hampshire historical marker. The marker (number 158) is one of four in Cornish. See also Other covered bridges in Cornish Blow-Me-Down Covered Bridge, built by James Tasker Blacksmith Shop Covered Bridge, now only foot traffic, built by James Tasker Dingleton Hill Covered Bridge, built by James Tasker Covered bridges in West Windsor, Vermont Bowers Covered Bridge Best's Covered Bridge Other bridges elsewhere List of bridges documented by the Historic American Engineering Record in New Hampshire List of bridges documented by the Historic American Engineering Record in Vermont List of crossings of the Connecticut River List of covered bridges in New Hampshire List of covered bridges in Vermont Old Blenheim Bridge – previous claim of longest single covered span Bridgeport Covered Bridge – another claim of longest single covered span Hartland Bridge – The longest covered bridge in the world (located in Hartland, New Brunswick, Canada) List of bridges on the National Register of Historic Places in New Hampshire List of bridges on the National Register of Historic Places in Vermont References External links Cornish–Windsor Bridge, New Hampshire Division of Historical Resources Bridges completed in 1866 1866 establishments in Vermont Wooden bridges in Vermont Covered bridges in Windsor County, Vermont Tourist attractions in Windsor County, Vermont Covered bridges on the National Register of Historic Places in Vermont Road bridges on the National Register of Historic Places in Vermont National Register of Historic Places in Windsor County, Vermont Buildings and structures in Windsor, Vermont Windsor, Vermont 1866 establishments in New Hampshire Wooden bridges in New Hampshire Bridges in Sullivan County, New Hampshire Tourist attractions in Sullivan County, New Hampshire Covered bridges on the National Register of Historic Places in New Hampshire Road bridges on the National Register of Historic Places in New Hampshire National Register of Historic Places in Sullivan County, New Hampshire Cornish, New Hampshire Historic American Engineering Record in New Hampshire Historic American Engineering Record in Vermont Historic Civil Engineering Landmarks Bridges over the Connecticut River Lattice truss bridges in the United States Interstate vehicle bridges in the United States
Cornish–Windsor Covered Bridge
[ "Engineering" ]
1,135
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
11,016,148
https://en.wikipedia.org/wiki/Martin%20measure
In descriptive set theory, the Martin measure is a filter on the set of Turing degrees of sets of natural numbers, named after Donald A. Martin. Under the axiom of determinacy it can be shown to be an ultrafilter. Definition Let D be the set of Turing degrees of sets of natural numbers. Given a Turing degree a (an equivalence class of sets under Turing equivalence), we may define the cone (or upward cone) of a as the set of all Turing degrees b such that a ≤T b; that is, the set of Turing degrees that are "at least as complex" as a under Turing reduction. In order-theoretic terms, the cone of a is the upper set of a. Assuming the axiom of determinacy, the cone lemma states that if A is a set of Turing degrees, either A includes a cone or the complement of A contains a cone. It is similar to Wadge's lemma for Wadge degrees, and is important for the following result. We say that a set A of Turing degrees has measure 1 under the Martin measure exactly when A contains some cone. Since it is possible, for any set A of Turing degrees, to construct a game in which player I has a winning strategy exactly when A contains a cone and in which player II has a winning strategy exactly when the complement of A contains a cone, the axiom of determinacy implies that the measure-1 sets of Turing degrees form an ultrafilter. Consequences It is easy to show that a countable intersection of cones is itself a cone; the Martin measure is therefore a countably complete filter. This fact, combined with the fact that the Martin measure may be transferred to ω1 by a simple mapping, tells us that ω1 is measurable under the axiom of determinacy. This result shows part of the important connection between determinacy and large cardinals. References Descriptive set theory Determinacy Computability theory
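The definition can be restated compactly in LaTeX notation; this is only a paraphrase of the text above, writing \mathcal{D} for the set of Turing degrees and \le_T for Turing reducibility.

\[
\operatorname{Cone}(a) = \{\, b \in \mathcal{D} \mid a \le_T b \,\}, \qquad
\mu(A) = 1 \iff \exists a \in \mathcal{D}\; \operatorname{Cone}(a) \subseteq A .
\]

Under the axiom of determinacy the cone lemma gives, for every \(A \subseteq \mathcal{D}\), either \(\mu(A) = 1\) or \(\mu(\mathcal{D} \setminus A) = 1\), which is exactly the ultrafilter property.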
Martin measure
[ "Mathematics" ]
368
[ "Computability theory", "Mathematical logic", "Game theory", "Determinacy" ]
11,016,661
https://en.wikipedia.org/wiki/HD%20147506
HD 147506, also known as HAT-P-2 and formally named Hunor, is a magnitude 8.7 F8 dwarf star that is somewhat larger and hotter than the Sun. The star is approximately from Earth and is positioned near the keystone of Hercules. It is estimated to be 2 to 3 billion years old, towards the end of its main sequence life. There is one known transiting exoplanet, and a second planet not observed to transit. Nomenclature The designation HD 147506 comes from the Henry Draper Catalogue. When the star was found to have a planet by the HATNet Project, it was assigned the designation HAT-P-2, indicating that it was the second star found to have a planet by this project. The star HAT-P-2 has the proper name Hunor. The name was selected in the NameExoWorlds campaign by Hungary, during the 100th anniversary of the IAU. Hunor was a legendary ancestor of the Huns and the Hungarian nation, and brother of Magor (name of the planet HAT-P-2b). Variability In addition to being a planetary transit variable, there are also stellar pulsations induced by the planet. This is the first known instance of a planet inducing pulsations in its host star. The amplitude is very small at approximately 40 parts per million. These pulsations correspond to exact harmonics of the planet's orbital frequency, indicating they are of a tidal origin. Planetary system Orbiting the star is HAT-P-2b, later named Magor, which was at the time of its discovery the most massive transiting exoplanet. At around 9 times the mass of Jupiter and an estimated surface temperature of ~900 kelvins, on a 5.6 day orbit, this planet is unlike any previously discovered transiting planet. The planet has a large mass (nine times the mass of Jupiter), and a surface gravity 25 times that exerted by the Earth. Its orbital eccentricity is very large (e = 0.5). Since tidal forces should have reduced the orbital eccentricity of this planet, it was speculated that another massive planet found outside the orbit of HAT-P-2b is in orbital resonance with HAT-P-2b. The planet was discovered by the HATNet Project, and the researchers there found the planet to be 10-20% larger than Jupiter. This discovery is important as it provides further support for the existing theory of planetary structure. Additional measurements taken over six years show a long-term linear trend in the radial velocity data consistent with a companion of 15 Jupiter masses or greater. Adaptive optics images were taken at the Keck telescope, and when combined with the radial velocity data show the maximum mass of the companion is that of an M dwarf star. In 2023 the presence of an outer companion, HAT-P-2c, was confirmed, and its mass found to be planetary at around 11 times that of Jupiter. See also List of transiting exoplanets References BD+41 2693 147506 080076 Hercules (constellation) F-type main-sequence stars Planetary systems with two confirmed planets Planetary transit variables
HD 147506
[ "Astronomy" ]
649
[ "Hercules (constellation)", "Constellations" ]
11,016,935
https://en.wikipedia.org/wiki/Hypertranscendental%20number
A complex number is said to be hypertranscendental if it is not the value at an algebraic point of a function which is the solution of an algebraic differential equation with coefficients in Z (the integers) and with algebraic initial conditions. The term was introduced by D. D. Morduhai-Boltovskoi in "Hypertranscendental numbers and hypertranscendental functions" (1949). The term is related to transcendental numbers, which are numbers that are not a solution of a non-zero polynomial equation with rational coefficients. The number e is transcendental but not hypertranscendental, as it can be generated from the solution to the differential equation y′ = y. Any hypertranscendental number is also a transcendental number. See also Hypertranscendental function References Hiroshi Umemura, "On a class of numbers generated by differential equations related with algebraic groups", Nagoya Math. Journal. Volume 133 (1994), 1-55. (Downloadable from ProjectEuclid) Transcendental numbers Ordinary differential equations
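A worked instance in LaTeX, spelling out why e is excluded; this is a standard fact stated here as an illustration rather than drawn from the article's reference.

\[
y' = y, \quad y(0) = 1 \;\Longrightarrow\; y(x) = e^{x}, \qquad e = y(1),
\]

so \(e\) is the value at the algebraic point \(1\) of a solution of an algebraic differential equation with integer coefficients and an algebraic initial condition; hence \(e\) is transcendental but not hypertranscendental.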
Hypertranscendental number
[ "Mathematics" ]
213
[ "Mathematical objects", "Numbers", "Number stubs" ]
11,017,274
https://en.wikipedia.org/wiki/Ogataea%20polymorpha
Ogataea polymorpha is a methylotrophic yeast with unusual characteristics. It is used as a protein factory for pharmaceuticals. Ogataea polymorpha belongs to a limited number of methylotrophic yeast species – yeasts that can grow on methanol. The range of methylotrophic yeasts includes Candida boidinii, Pichia methanolica, Pichia pastoris and Ogataea polymorpha. O. polymorpha is taxonomically a species of the family Saccharomycetaceae. Strains Three O. polymorpha strains, identified in the 1950s, are known. They have unclear relationships and are of independent origins. They are found in soil samples, the gut of insects or in spoiled concentrated orange juice. They exhibit different features and are used in basic research and to recombinant protein production: strain CBS4732 (CCY38-22-2; ATCC34438, NRRL-Y-5445) strain DL-1 (NRRL-Y-7560; ATCC26012) strain NCYC495 (CBS1976; ATAA14754, NRLL-Y-1798) Strains CBS4732 and NCYY495 can be mated whereas strain DL-1 cannot be mated with the other two. Strains CBS4732 and DL-1 are employed for recombinant protein production, strain NCYC495 is mainly used for the study of nitrate assimilation. The entire genome of strain CBS4732 has completely been sequenced. Ogataea polymorpha is a thermo-tolerant microorganism with some strains growing at temperatures above . The organism is able to assimilate nitrate and can grow on a range of carbon sources in addition to methanol. Cells grown under conditions of elevated temperature accumulate a sugar named trehalose (this sugar is usually found in insects) as thermo-protective compound. It was shown that trehalose synthesis is not required for growth under these conditions, but for acquisition of thermotolerance. The synthetic steps for trehalose synthesis have been detailed for O. polymorpha, and TPS1, the key enzyme gene of this pathway, has been isolated and characterized. All methylotrophic yeasts share an identical methanol utilization pathway (Fig. 1). Growth on methanol is accompanied by a massive proliferation of cell organelles named peroxisomes in which the initial enzymatic steps of this pathway take place. O. polymorpha is model organism to study all aspects of peroxisomal functions and the underlying molecular biology. During growth on methanol key enzymes of the methanol metabolism are present in high amounts. An especially high abundance can be observed for enzymes called MOX (methanol oxidase), FMDH (formate dehydrogenase), and DHAS (dihydroxyacetone synthase). Their presence is regulated at the transcriptional level of the respective genes. In the related species C. boidinii, P. methanolica, and P. pastoris this gene expression strictly depends on the presence of methanol or methanol derivatives, whereas in O. polymorpha strong expression is elicited by appropriate levels of glycerol or under conditions of glucose starvation. O. polymorpha produces glycoproteins with two types of sugar chains, N- and O-linked glycans are attached to protein. Studies on the structure of N-linked chains have revealed a certain average length (Man8-12GlcNAc2) with terminal alpha-1,2-linked mannose residues, and not with allergenic terminal alpha-1,3-linked mannose residues as found in other yeasts, especially in the baker’s yeast Saccharomyces cerevisiae. 
Biotechnological applications Ogataea polymorpha with its unusual characteristics provides an excellent platform for the gene technological production of proteins, especially of pharmaceuticals like insulin for treatment of diabetes, hepatitis B vaccines or IFNalpha-2a for the treatment of hepatitis C. Derivatives of both CBS4732 and DL-1 are employed in the production of such recombinant compounds. Further yeasts employed for this application are Pichia pastoris, Arxula adeninivorans and Saccharomyces cerevisiae and others. Like other yeasts, O. polymorpha is a microorganism that can be cultured in large fermenters to high cell densities within a short time. It is a safe organism in not containing pyrogens, pathogens or viral inclusions. It can release compounds into a culture medium as it has all the components required for secretion (this is for instance not the case with bacteria like Escherichia coli). It can provide attractive genetic components for an efficient production of proteins. In Fig. 2 the general design of a vector (a genetic vehicle to transform a yeast strain into a genetically engineered protein producer). It must contain several genetic elements: 1. A selection marker, required to select a transformed strain from an untransformed background –this can be done if for instance such an element enables a deficient strain to grow under culturing conditions void of a certain compound like a particular amino acid that cannot be produced by the deficient strain). 2. Certain elements to propagate and to target the foreign DNA to the chromosome of the yeast (ARS and/or rDNA sequence). 3. A segment responsible for the production of the desired protein compound a so-called expression cassette. Such a cassette is made up by a sequence of regulatory elements, a promoter that controls, how much and under which circumstances a following gene sequence is transcribed and as a consequence how much protein is eventually made. This means that the segment following the promoter is variable depending on the desired product – it could be a sequence determining the amino acids for insulin, for hepatitis B vaccine or for interferon. The expression cassette is terminated by a following terminator sequence that provides a proper stop of the transcription. The promoter elements of the O. polymorpha system are derived from genes that are highly expressed, from instance from the MOX gene, the FMD gene or the TPS1 gene mentioned before. They are not only very strong, but can also be regulated by certain addition of carbon sources like sugar, methanol or glycerol. In 2000 an informal society of scientists was founded named HPWN (Hansenula polymorpha worldwide network) founded by Marten Veenhuis, Groningen, and Gerd Gellissen, Düsseldorf. Every two years meetings are held. The attractiveness of the O. polymorpha platform is commercially exploited by several biotech companies for the development of production processes, among others by PharmedArtis, located in Aachen, Germany and the Leibniz-Institut für Pflanzengenetik und Kulturpflanzenforschung (IPK). References Bibliography External links http://www.pharmedartis.de http://www.ipk-gatersleben.de/Internet/Forschung/MolekulareZellbiologie/ Yeasts Fungi described in 1959 Saccharomycetaceae Fungus species
Ogataea polymorpha
[ "Biology" ]
1,532
[ "Yeasts", "Fungi", "Fungus species" ]
11,017,352
https://en.wikipedia.org/wiki/Hypertranscendental%20function
A hypertranscendental function or transcendentally transcendental function is a transcendental analytic function which is not the solution of an algebraic differential equation with coefficients in Z (the integers) and with algebraic initial conditions. History The term 'transcendentally transcendental' was introduced by E. H. Moore in 1896; the term 'hypertranscendental' was introduced by D. D. Morduhai-Boltovskoi in 1914. Definition One standard definition (there are slight variants) defines solutions of differential equations of the form F(x, y, y′, ..., y^(n)) = 0, where F is a polynomial with constant coefficients, as algebraically transcendental or differentially algebraic. Transcendental functions which are not algebraically transcendental are transcendentally transcendental. Hölder's theorem shows that the gamma function is in this category. Hypertranscendental functions usually arise as the solutions to functional equations, for example the gamma function. Examples Hypertranscendental functions The zeta functions of algebraic number fields, in particular, the Riemann zeta function The gamma function (cf. Hölder's theorem) Transcendental but not hypertranscendental functions The exponential function, logarithm, and the trigonometric and hyperbolic functions. The generalized hypergeometric functions, including special cases such as Bessel functions (except some special cases which are algebraic). Non-transcendental (algebraic) functions All algebraic functions, in particular polynomials. See also Hypertranscendental number Notes References Loxton, J.H., Poorten, A.J. van der, "A class of hypertranscendental functions", Aequationes Mathematicae, Periodical volume 16 Mahler, K., "Arithmetische Eigenschaften einer Klasse transzendental-transzendenter Funktionen", Math. Z. 32 (1930) 545-585. Analytic functions Mathematical analysis Types of functions Ordinary differential equations
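For concreteness, Hölder's theorem mentioned above can be written in LaTeX as follows; this is the standard statement, not a result specific to this article.

\[
\text{For every nonzero polynomial } P(x, y_0, y_1, \dots, y_n) \text{ with complex coefficients:}\quad
P\bigl(x, \Gamma(x), \Gamma'(x), \dots, \Gamma^{(n)}(x)\bigr) \not\equiv 0 .
\]

Thus \(\Gamma\), although it satisfies the functional equation \(\Gamma(x+1) = x\,\Gamma(x)\), satisfies no algebraic differential equation, which is what places it among the hypertranscendental functions, whereas \(e^{x}\) (a solution of \(y' = y\)) does not qualify.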
Hypertranscendental function
[ "Mathematics" ]
408
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Mathematical relations", "Types of functions" ]
11,017,369
https://en.wikipedia.org/wiki/ISO/IEEE%2011073
CEN ISO/IEEE 11073 Health informatics - Medical / health device communication standards enable communication between medical, health care and wellness devices and external computer systems. They provide automatic and detailed electronic data capture of client-related and vital signs information, and of device operational data. Background Goals Real-time plug-and-play interoperability for citizen-related medical, healthcare and wellness devices; Efficient exchange of care device data, acquired at the point-of-care, in all care environments. "Real-time" means that data from multiple devices can be retrieved, time correlated, and displayed or processed in fractions of a second. "Plug-and-play" means that all a user has to do is make the connection – the systems automatically detect, configure, and communicate without any other human interaction "Efficient exchange of care device data" means that information that is captured at the point-of-care (e.g., personal vital signs data) can be archived, retrieved, and processed by many different types of applications without extensive software and equipment support, and without needless loss of information. The standards are targeted at both point-of-care devices (ventilators, infusion pumps, ECG, etc.) and personal health and fitness devices (such as glucose monitors, pulse oximeters, weighing scales, medication dispensers and activity monitors) and at continuing and acute care devices (such as pulse oximeters, ventilators and infusion pumps). They comprise a family of standards that can be layered together to provide connectivity optimized for the specific devices being interfaced. There are four main partitions to the standards: Device data, including a nomenclature, or terminology, optimized for vital signs information representation based on an object-oriented data model, and device specialisations; General application services (e.g., polled vs. "event driven" services); Internetworking and gateway standards (e.g., an observation reporting interface from CEN ISO/IEEE 11073-based messaging and data representation to HL7 or DICOM); Transports (e.g., cable connected or wireless). Problems In the absence of standards for these devices, (a) data is captured either manually or at considerable expense (using specialized equipment), or (b) it is not captured at all, which is most often the case. Manually captured data is labour-intensive, recorded infrequently (e.g., written down hourly by a nurse clinician), and prone to human error. Use of expensive custom connectivity equipment (a) drives up the cost of care delivery; (b) is only used for patients with the highest acuity; and (c) tends to lock care providers into single companies or partnerships that provide "complete" information system solutions, making it difficult to choose best-of-breed technologies to meet client needs, or the most cost effective systems. Development and deployment of advanced care delivery systems are hindered. For example, systems that collect real-time data from multiple devices and use the information to detect safety problems (e.g., adverse drug events), or to quickly determine a client's condition and automatically, or with minimal carer involvement, optimally adjust a device's operation (e.g., for insulin delivery based on glucose level information) cannot operate without these standards. 
With no standardisation in this area, even when similar devices do provide communications, there is no consistency in the information and services that are provided, thus inhibiting the development of advanced care delivery systems or even consistent health records. In short: appropriate use of 11073 device communication standards can help deliver better health, fitness and care, more quickly, safely, and at a lower cost. Motivation These are the only standards addressing this area of connectivity. They provide a complete solution for medical device connectivity, starting at the physical cable or wireless connection up through the abstract representation of information and the services used for its management and exchange. They can enable full disclosure of device-mediated information. So measurement modalities be declared in detail and the associated metrics & alerts communicated, together with any user-made changes to settings. In addition, the device can communicate its manufacturer, model, serial number, configuration, operating status and network location – all in real time if required. The IEEE 11073 standards have been developed with a high level of international participation. They have been, and continue to be, adopted as International Organization for Standardization (ISO) standards through ISO TC215 Health Informatics and as European standards through the European Committee for Standardization (CEN) TC251 Health Informatics, specifically as the CEN ISO/IEEE 11073 series. The end result is a single set of internationally harmonized standards that have been developed and adopted by ISO and CEN member countries. These CEN ISO/IEEE 11073 standards have been developed in close coordination with other standards development organisations, including IEEE 802, IHTSDO, IrDA, HL7, DICOM, and CLSI. Memoranda of Understanding with IHE, IHTSDO, and HL7; and (through ISO) close liaison with Continua Health Alliance assist still wider integration. The CEN ISO/IEEE 11073 nomenclature is now being used to populate, and to establish equivalence, within SNOMED CT - the most widely used clinical terminology. A liaison between the IEEE 11073 standards group and the USA Food and Drug Administration (FDA) Center for Devices and Radiological Health (CDRH) in the USA helps ensure that patient safety and efficacy concerns are fully addressed in the standards. The Continua Health Alliance and the English NHS National Programme for Information Technology (NPfIT) both specify use of the standards for device communication. The standards have been included in the USA National Committee on Vital and Health Statistics recommendations to the Department of Health and Human Services related to patient medical record information message formats supporting Health Insurance Portability and Accountability Act (HIPAA) compliant implementations. The cost of integrating innovative technologies into established product lines is reduced — and a barrier to new companies is lowered. Availability 11073 standards are available freely to those actively involved in their development, others may purchase them. For published and draft standards search for '11073' at: IEEE, ISO or CEN. Standards may be purchased from the national standards organisation or bookstore (e.g. AFNOR, BSI, DIN, JIS, UNI, etc.). Overview The ISO/IEEE 11073 Medical / Health Device Communication Standards are a family of ISO, IEEE, and CEN joint standards addressing the interoperability of medical devices. 
The ISO/IEEE 11073 standard family defines parts of a system with which it is possible to exchange and evaluate vital signs data between different medical devices, as well as to remotely control these devices. Point-of-care medical device The 'core' standards are: 11073-10101, 11073-10201, 11073-20101 and 11073-30200 Personal health device ISO/IEEE 11073 personal health device (PHD) standards are a group of standards addressing the interoperability of personal health devices (PHDs) such as weighing scales, blood pressure monitors, blood glucose monitors and the like. The standards draw upon earlier IEEE 11073 standards work, but differ from this earlier work due to an emphasis on devices for personal use (rather than hospital use) and a simpler communications model. These are described in more detail at ISO/IEEE 11073 Personal Health Data (PHD) Standards Core standard Nomenclature Within this standard, nomenclature codes are defined; these make it possible to identify objects and attributes unambiguously in relation to the so-called OID code. The nomenclature is divided into partitions to demarcate codes with regard to content and function. Programmatically these codes are defined as constants, which can then be referred to by a symbolic name. example in C: #define MDC_PART_OBJ 1 /* Definition for the Partition Object Infrastructure */ #define MDC_MOC_VMS_MDS_SIMP 37 /* Define the Object Simple Medical Device System */ Domain information model This standard is the "heart" of VITAL. Within it, objects and their arrangement in a Domain Information Model for vital signs data transmission are defined. Beyond this, the standard defines a service model for the standardized communication. Base standard The common background for the assembly and transmission of objects and their attributes is defined in this standard. It is subdivided into a communication model and an information model. The communication model describes layers 5 to 7 of the OSI 7-layer model. The information model defines the modeling, formatting and the syntax for the transmission coding of the objects. Agent/manager principle All defined parts of this standard family are designed to allow communication according to this principle. The basic idea of the principle is the arrangement of two or more medical devices as a system, so that the components can understand one another and interact. The agent is the part of the principle that is connected to the medical devices. It provides the data. The manager keeps a copy of the agent data, reacts to update events from the agent, and triggers events on the agent. In most use cases the manager is only used to remotely monitor and display agent data, but in some cases it may also remotely control the agents. Agents and managers are built with the same structure. This enables an agent to act as a manager and vice versa. Besides the plain agent-manager application, hybrid systems over multiple stages are possible. Agent application process(es) This module is the interface between a proprietary (possibly native) protocol and the ISO/IEEE (VITAL) object world. It is not defined within the standard and as a result it can be implemented freely. Medical data information base MMOs (Managed Medical Objects) are stored hierarchically within a tree structure in a form named the Domain Information Model (DIM). These MMOs and their arrangement in the DIM are defined within this standard. The implementation of the MDIB (Medical Device Information Base) and its functionality is outside the scope of the standard.
Association service control element This module is subject to the standards ISO/IEC 15953 and ISO/IEC 15954. It provides services that control the assembly and disassembly of associations. A possible association and its conditions are negotiated here; no MMOs are transmitted over this module. It is an element of the application layer which is responsible for the establishment, termination and control of associations between two or more communicating parties (programs). Common medical device information service element Services for the data exchange of MMOs (Managed Medical Objects) between agent-manager systems are defined in this module. This data exchange is highly dynamic. Objects are created, changed or deleted by services named CREATE, UPDATE and DELETE. Through reports, which can be defined in detail down to the single object attribute, it is possible to trigger complex operations in the agent or the manager through these services. Presentation layer This layer contains the encoding of object data. Objects, groups of object attributes or single attributes are encoded using ASN.1 representations, or more precisely the specialization MDER (Medical Device Encoding Rules). Session layer This layer controls the connection at the session level. Domain information model The central core of the standard is the so-called Domain Information Model. Objects containing vital-signs data representations and their relationships are defined in this model. Objects for additional services around the vital signs data objects are also defined here. For a content-sensitive classification of the objects, they are divided into packages. Medical package The package that defines objects to map medical vital signs data. There are different objects to store vital signs data in different ways. As an example, the RealTimeSampleArray object for the management of e.g. ECG data may be mentioned. Alert package This small package is related to the medical package. It is used for setting and administering alert parameters on objects from the medical package. System package A representation of a medical device can be achieved with objects of this package. It contains concrete derivations of the abstract MDS (Medical Device System) object. One of these concrete derivations is always the root object of a DIM tree. The Battery object and the Clock object are further objects in this package. The latter can be used for time synchronization of medical device data. Control package Inside the control package, objects for the remote control of a medical device are defined. There are objects used for influencing the modality of measuring (for example the SetRangeOperation object) and objects for direct remote control of medical devices (for example the ActivateOperation object). Extended services package Despite what the name suggests, this package defines essential and frequently used objects. This package is built on so-called scanner objects in different derivations. The purpose of these objects is to scan data in other objects and to generate event reports, which can then be sent. The scanner objects have a wide range of different attributes (e.g. scan interval, scan lists, scan period, etc.) for a wide range of applications of the DIM. As an example, the FastPeriCfgScanner object (Fast Periodic Configurable Scanner) is specially constructed for the requirements of real-time data exchange in conjunction with the RealTimeSampleArray object to transmit live data from ECG devices.
Communication package The objects in this package contain information responsible for the basic communication profiles. The package is kept very open, so that different communication profiles and interfaces to proprietary device interfaces can be built. Annotation by the author: from a historical point of view, the standard was first developed in the early 1990s, and this package has to be reconstructed. Archival package Storing patient-related data in online or offline archives is the idea behind the objects in the archival package. For example, the Patient Archive object can store vital signs data, demographic data and treatment data in one object. Patient package The patient package contains only one object, the Patient Demographics object. This object contains patient-related data and can be set in relationship to an MDS object or one of the objects from the archive package, to give anonymous data a reference to patient data. Communication model The complete communication sequence can be very complex. This article only provides basic information, which may be described in more detail at a later time in a separate article. Finite-state machine The finite-state machine regulates the synchronization of an agent-manager system across different states. A complete session round trip starts in the disconnected state, passes through multiple stages to the initialized state, in which the actual data transfer takes place, and ends in the disconnected state. Initializing the MDIB During the association phase, the configuring state is reached. In this state, agent and manager exchange object data for the first time. In the process an MDSCreateEvent in the form of a report is triggered. This report creates a copy of the MDS root object from the agent MDIB in the manager MDIB. Afterwards a Contextscanner object is created in the agent MDIB. This scanner object scans the complete MDIB and generates a report containing the complete agent MDIB representation, except for the MDS root object. The manager evaluates this report and creates the objects defined there in its own MDIB copy. At this point the manager has an exact copy of the agent MDIB. Both are now in the configured state. Data exchange through services The Common Medical Device Information Service Element (CMDISE) provides a GET service to deliver data requested by the manager. The agent's GET service receives a list of attribute IDs. These IDs identify specific values within the agent's MDIB. The agent then creates a report containing the requested values. This report is sent back to the manager. Data exchange through scanner objects Additional objects can be created in an MDIB through the CREATE service of CMDISE. The manager requests the agent through this service to create a scanner object itself, and to fix the scanner object on one or more values. Optionally, for example, the scan interval for the data delivery can be set. The agent creates the scanner object in its own MDIB and sends the manager a response message. The manager then creates a copy of the scanner object in its MDIB. Data updates from agent to manager now occur automatically through the scanner object. Through CMDISE's DELETE service, the scanner object can be deleted, like all other MDIB objects.
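As a rough illustration of the nomenclature-style constants and the agent/manager exchange described above, the following C sketch models a manager-side GET request being answered by an agent from a toy MDIB. It is a simplification for illustration only: the attribute IDs, structures and lookup logic are invented for this example and do not reproduce the actual CMDISE message encodings, which use ASN.1/MDER.

#include <stdio.h>

/* Toy model of an agent answering a manager GET request.
   The constants below are invented for illustration; real code would use
   the nomenclature codes defined in ISO/IEEE 11073-10101. */
#define EX_ATTR_HEART_RATE   1001   /* hypothetical attribute ID */
#define EX_ATTR_BATTERY_PCT  1002   /* hypothetical attribute ID */

struct attribute { int id; double value; };

/* A miniature "MDIB": just a flat list of attributes held by the agent. */
static struct attribute agent_mdib[] = {
    { EX_ATTR_HEART_RATE,  72.0 },
    { EX_ATTR_BATTERY_PCT, 93.0 },
};

/* Agent side: build a report for the attribute IDs requested by the manager. */
static void agent_get(const int *ids, int n) {
    for (int i = 0; i < n; i++) {
        int found = 0;
        for (unsigned j = 0; j < sizeof agent_mdib / sizeof agent_mdib[0]; j++) {
            if (agent_mdib[j].id == ids[i]) {
                printf("report: attribute %d = %.1f\n", ids[i], agent_mdib[j].value);
                found = 1;
            }
        }
        if (!found)
            printf("report: attribute %d not present in MDIB\n", ids[i]);
    }
}

int main(void) {
    /* Manager side: issue a GET for two attributes. */
    int request[] = { EX_ATTR_HEART_RATE, EX_ATTR_BATTERY_PCT };
    agent_get(request, 2);
    return 0;
}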
IEEE 11073 Service-oriented Device Connectivity (SDC) See also H.810 References Android 4.0 implements support for IEEE 11073 via the BluetoothHealth class NIST Standard Conformance Tools 11073 Web site ZigBee provides support for IEEE 1073 via the ZigBee Health Care Profile (ZHCP) Computing in medical imaging 11073 IEEE standards Health standards
ISO/IEEE 11073
[ "Technology" ]
3,473
[ "Computer standards", "IEEE standards" ]
11,017,777
https://en.wikipedia.org/wiki/Defence%20CBRN%20Centre
The Defence Chemical, Biological, Radiological and Nuclear Centre (the Defence CBRN Centre or DCBRNC for short) is a United Kingdom military facility at Winterbourne Gunner in Wiltshire, south of Porton Down and about north-east of Salisbury. It is a tri-service location, with the Army being the lead service. The centre is responsible for all training issues relating to chemical, biological, radiological and nuclear (CBRN) defence and warfare for the UK's armed forces. It is also the home of the National Ambulance Resilience Unit's Training & Education Centre which, among other things, is responsible for training the NHS ambulance service's Hazardous Area Response Teams (HART). History The site was established as an element of the Porton Down research facility in 1917. Known as Porton South Camp, it served as a trench mortar experimental site. Reduced in scale immediately following the cessation of hostilities in 1918, research into chemical weapons and defence recommenced in 1921, with South Camp becoming the Chemical Warfare School in 1926. In 1931 the site became part of the Small Arms School as the Anti-Gas Wing. It would later become an independent entity, in 1939, as the Army Gas School, later the Army School of Chemical Warfare. The site was operated by the Army until 1947, when it became a joint Army and Royal Air Force establishment. The emergence of a nuclear weapons threat led to the inclusion of radiological defence into the portfolio. In 1964 the biological threat was included in the scope of the centre and it became the Defence Nuclear, Biological and Chemical School. In 1999 the RAF took over the operation of the site, following the 1997 decision that they became the lead service for NBC training. A full refurbishment of the site was completed in 2005, with the World War I accommodation replaced by a modernised training facility, used by all three services. Around this time the centre was the home of the Police National CBRN Centre, until it moved to NPIA facilities at Ryton, Warwickshire. On 21 July 2005, the name of the site was changed from the Defence Nuclear Biological and Chemical Centre (DNBCC) to the Defence Chemical, Biological, Radiation and Nuclear Centre (DCBRNC). In April 2019, the Army took over control of the centre from the RAF regiment. DCBRNC courses The Defence CBRN (Chemical, Biological, Radiological and Nuclear) School is the instructional element of the centre. Its mission is to deliver the UK's CBRN Defence Training for Operations on land. CBRN Defence Advisors' course The 10-day CBRN Defence Advisors' is aimed at military officers within a battlegroup or unit who have responsibility to assist/ advice the commander in the planning and execution of CBRN measures and unit CBRN training, or who fill CBRN staff appointments. The course trains CBRN Defence Advisors operating at battlegroup or deployed at an operating base at Staff Officer level. CBRN Defence Senior Officers' symposium The CBRN Defence Senior Officers' Symposium has three parts: the UK's CBRN defence capabilities, the threat and countering the threat. CBRN Defence Cell Controller This is a course for military personnel who manage and carry out the functions of CBRN Warning & Reporting and Collection Centres in line with Allied / NATO standards. This task includes dealing with CBRN data, interpreting that information and issuing subsequent reports on the threat. The emphasis of the course is on the automated plotting of threats. 
GSR Conversion course This 5-day course trains CBRN defence instructors to fit, test and maintain the General Service Respirator and operate the Advanced Respirator Testing System. CBRN Defence Trainer course This course provides the knowledge and skills to conduct and deliver instruction and testing on MATT 4 / CCS. It incorporates instruction on the GSR. Individuals can apply for the course alone or together with the Defence Operational Instructor course. CBRN Defence Operational Instructor course This course provides the knowledge and skills to conduct unit instruction in CBRN incident response. CBRN Defence Equipment Manager's course This course is intended for civilian and military stores staff who are responsible for storage, maintenance and management of CBRN defence equipment. CBRN Defence Casualty Decontamination Area course This course trains military band personnel (who in war are stretcher bearers) to perform casualty decontamination in a CBRN environment. CBRN Medical The Defence CBRN Centre is the home of the Joint CBRN Medical Faculty. The centre provides CBRN medical training to all medical officers in the UK Armed forces and courses are available to NATO/Allied Nations. As well as military training, Defence CBRN Centre also supports civilian response in partnership with the Health Protection Agency. The Joint CBRN Medical Faculty supports CBRN medical doctrine development, training and curriculum development and SME support to defence research programmes working closely with partners in the health sector. CBRN medical centre The Joint CBRN medical centre supports the medical response to a CBRN incident and the management of CBRN casualties. It is a cross-government group under the remit under the Surgeon-General to develop CBRN clinical guidance, medical training and research. The clinical training objectives are to: "manage any CBRN casualties including trauma, manage the medical aspects of a CBRN incident, treat chemical casualties, treat biological casualties including sepsis, treat radiological casualties including nuclear. CBRN Emergency Medical Treatment (Medical Officer) course The Emergency Medical Treatment course is a 3-day course to provide military doctors with an awareness of the effects of CBRN agents and teach the competencies to provide Role 1 CBRN casualty management. CBRN Clinical course The CBRN Clinical course trains Role 1 (pre-hospital), 2 (hospital) and 3 (medical, nursing and allied health) professionals in the recognition and treatment of all casualties in a CBRN environment, through to Role 3 advanced medical care including critical care. This course supports the military competencies for emergency medicine, acute medicine, intensive care and specialist nurse training. Defence Medic CBRN course The Defence Medic CBRN course trains Role 1 (pre-hospital) medics in the recognition and treatment of all casualties in a CBRN environment. This course supports includes advanced first aid in the hot zone, emergency medical treatment and casualty decontamination. Technical Support Group The DCBRNC Technical Support Group provides an external training and trials function. The TSG External Training Team provides all three services with their CBRN defence training, inspecting the training at unit level. Secondly, the Trials part of TSG helps the development of Joint Service CBRN defence equipment and procedures, supporting the CBRN Delivery Team and DES. The team has contributed to Light Role Teams, G.S.R. and the ARTS system. 
The Defence CBRN Centre assists with the military's annual chemical warfare exercise, Exercise TOXIC DAGGER, which in 2018 took place on Salisbury Plain and involved over 300 military personnel, including 40 Commando Royal Marine, the RAF Regiment and the Royal Marines Band Service for casualty treatment. References External links British Army – Defence Chemical, Biological, Radiological and Nuclear Centre Defence CBRN Centre at GOV.UK Chemical warfare facilities in the United Kingdom Military installations in England Military training establishments of the United Kingdom Research institutes in Wiltshire Chemical, biological, radiological and nuclear defense
Defence CBRN Centre
[ "Chemistry", "Biology" ]
1,469
[ "Chemical", " biological", " radiological and nuclear defense", "Biological warfare" ]
11,017,807
https://en.wikipedia.org/wiki/Morrie%27s%20law
Morrie's law is a special trigonometric identity: cos(20°) · cos(40°) · cos(80°) = 1/8. Its name is due to the physicist Richard Feynman, who used to refer to the identity under that name. Feynman picked that name because he learned it during his childhood from a boy with the name Morrie Jacobs and afterwards remembered it for all of his life. Identity and generalisation It is a special case of the more general identity cos(α) · cos(2α) · cos(4α) · ... · cos(2^(n−1)α) = sin(2^n α) / (2^n sin α) with n = 3 and α = 20°, together with the fact that sin(160°) = sin(20°), since sin(180° − x) = sin(x). Similar identities A similar identity for the sine function also holds: sin(20°) · sin(40°) · sin(80°) = √3/8. Moreover, dividing the second identity by the first, the following identity is evident: tan(20°) · tan(40°) · tan(80°) = √3 = tan(60°). Proof Geometric proof of Morrie's law Consider a regular nonagon with side length and let be the midpoint of , the midpoint and the midpoint of . The inner angles of the nonagon equal 140° and furthermore , and (see graphic). Applying the cosine definition in the right angle triangles , and then yields the proof for Morrie's law. Algebraic proof of the generalised identity Recall the double angle formula for the sine function, sin(2α) = 2 sin(α) cos(α). Solve for cos(α) = sin(2α) / (2 sin(α)). It follows that: cos(2α) = sin(4α) / (2 sin(2α)), cos(4α) = sin(8α) / (2 sin(4α)), ..., cos(2^(n−1)α) = sin(2^n α) / (2 sin(2^(n−1)α)). Multiplying all of these expressions together yields a product whose intermediate numerators and denominators cancel, leaving only the first denominator, a power of 2 and the final numerator. Note that there are n terms on both sides of the expression. Thus cos(α) · cos(2α) · ... · cos(2^(n−1)α) = sin(2^n α) / (2^n sin α), which is equivalent to the generalization of Morrie's law. See also Viète's formula, same identity taking on Morrie's law List of trigonometric identities References Further reading Glen Van Brummelen: Trigonometry: A Very Short Introduction. Oxford University Press, 2020, , pp. 79–83 Ernest C. Anderson: Morrie's Law and Experimental Mathematics. In: Journal of recreational mathematics, 1998 External links Mathematical identities Trigonometry Articles containing proofs
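The identities above can be checked numerically; the following short C program (an illustrative sketch, not part of the article) verifies Morrie's law and the general product formula for a few values of n.

#include <stdio.h>
#include <math.h>

/* Numerical check of cos(20°)cos(40°)cos(80°) = 1/8 and of the general
   identity  prod_{k=0}^{n-1} cos(2^k a) = sin(2^n a) / (2^n sin a). */
int main(void) {
    const double deg = 3.14159265358979323846 / 180.0;
    double morrie = cos(20*deg) * cos(40*deg) * cos(80*deg);
    printf("cos20*cos40*cos80 = %.10f (expected 0.125)\n", morrie);

    double a = 20*deg;
    for (int n = 2; n <= 5; n++) {
        double prod = 1.0;
        for (int k = 0; k < n; k++)
            prod *= cos(ldexp(1.0, k) * a);               /* cos(2^k * a) */
        double rhs = sin(ldexp(1.0, n) * a) / (ldexp(1.0, n) * sin(a));
        printf("n=%d  product=%.10f  rhs=%.10f\n", n, prod, rhs);
    }
    return 0;
}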
Morrie's law
[ "Mathematics" ]
385
[ "Mathematical problems", "Articles containing proofs", "Mathematical identities", "Mathematical theorems", "Algebra" ]
11,018,045
https://en.wikipedia.org/wiki/249th%20Engineer%20Battalion%20%28United%20States%29
The 249th Engineer Battalion (United States) is a versatile power generation battalion assigned to the U.S. Army Corps of Engineers that provides commercial-level power to military units and federal relief organizations during full-spectrum operations. Additionally, the commander serves as the Commandant of the U.S. Army Prime Power School, the institution responsible for the development of Army and Navy power generation specialists. Motto The battalion's motto is "Build, Support, Sustain!". Units Headquarters and Headquarters Company – Fort Belvoir, Virginia Heavy Maintenance Section – Fort Belvoir, Virginia A Company – Schofield Barracks, Hawaii B Company – Fort Liberty, North Carolina C Company – Fort Belvoir, Virginia D Company – (USAR) – Providence, Rhode Island 1st Platoon – Cranston, Rhode Island 2nd Platoon – Cranston, Rhode Island 3rd Platoon – Cranston, Rhode Island 4th Platoon – Fort Belvoir, Virginia U.S. Army Prime Power School – Fort Leonard Wood, Missouri. Mission On order, deploy worldwide to provide prime electrical power and electrical systems expertise in support of military operations and the National Response Framework. The 249th Engineer Battalion also supports other missions: Operation Enduring Freedom Operation Iraqi Freedom THAAD Power Support JLENS Power Support Intelligence and Security Command (INSCOM) (Korea generator maintenance) Operation Bright Star (Egypt) Chinhae generator maintenance Limited Installation support missions Task Force SAFE U.S. Army Corps of Engineers support to presidentially declared disasters History As a combat engineer battalion World War II The 249th Engineer Combat Battalion was constituted on 5 May 1943 at Camp Bowie, Texas. The battalion was organized and under the command of only three captains. The other officers that were supplied to the unit were second lieutenants from the 1943 class of West Point. Shortly after, the battalion participated in two maneuvers in Louisiana, known as the "Louisiana Maneuvers"; there the battalion and its soldiers learned valuable lessons for war. The 249th sailed from the United States to England in May 1944, after equipping and preparing for combat, the Unit landed on Utah Beach in August 1944 under the 1137th Engineer Combat Group commanded by Colonel George A. Morris. In October through November 1944, the soldiers were specially trained on using the Bailey bridge in Trier, France. Later that year on 18 December 1944, the Black Lions were ordered to move from the Saar River, where the unit was building a bridge, to the Ardennes, commonly called the Battle of the Bulge. Upon arriving to the front, the 249th was assigned to the 26th Infantry Division, already engaged and in defensive positions along the southeast corner of the Bulge. The battalion was used in an effort to block the German advance by deploying landmines, obstacles and establishing roadblocks. On 24 December 1944, Brigadier General Harlan Harkness, the assistant division commander, ordered the battalion to advance and secure the towns of Arsdorf and Bigonville to the north of the 26th Infantry Division, near the area of operations of the 4th Armored Division, in order to relieve the occupied towns so the division could advance and attack the enemy line. Companies A and C were ordered into the town of Arsdorf where the battalion was engaged in fierce combat for two days. It was later learned that the town had never been secured by the 4th Armored Division. 
In February 1945, the battalion was selected for the special task of crossing the Rhine River. On 19 March 1945, the unit was assigned to the engineer task force charged with crossing the Rhine at Oppenheim. The main thrust of the effort was to use assault boats to get troops from 5th Infantry Division across and later to construct a more stable pontoon bridge. The battalion met little resistance across the river and quickly began constructing the bridge. After an accident resulting in a raft being sunk, the Battalion moved downriver to Mainz. After this bridge site was secure, the 249th was detached from the 1137th Engineer Group and was given the mission to secure and maintain the bridges on the Rhine River. In May 1945, when the war ended in Europe, the battalion was moved to Plattling, Germany where they built a camp for displaced refugees. In November 1945, the 249th Engineers were sent on their final orders to Camp Lucky Strike, near Marseilles, France and then redeployed back to the United States. The division was inactivated at Camp Patrick Henry, Virginia on 27 November 1945. Post World War II In late 1954, the Black Lion Battalion was withdrawn from the Reserves and assigned to the Regular Army. In February 1955, it was activated and assigned to USAREUR and an Engineer Battalion (Combat Heavy). From 1955 until 1960, the 249th Engineer Battalion (Construction) was stationed at Kleber Kaserne, ((Kaiserslautern, Germany)). Then it was dispatched to Etain, France for a time. Then the battalion was stationed at Gerszewski Barracks, Knielingen, Karlsruhe, Germany, under the command of the 18th Engineer Brigade, where it provided construction support to USAREUR elements stationed in Germany for the Cold War. As a prime power battalion In 1994, the battalion was reactivated and designated as the 249th Engineer Battalion (Prime Power), stationed at Fort Belvoir, VA. 9/11 Immediately after the attacks on the World Trade Center on 11 September 2001, elements of the 249th were deployed to New York City and were instrumental in restoring power to Wall Street enabling the financial district to resume operations within a week of the attack. Global War on Terrorism The 249th Engineer Battalion (Prime Power) provides oversight on all coalition operating base power projects in Iraq (Operation Iraqi Freedom) and Afghanistan (Operation Enduring Freedom). Hurricane Katrina The 249th deployed teams to the Gulf Region under Joint Task Force Katrina, working with contractors, and local and state entities to assess, they helped install and maintain emergency generators at critical facilities. By 5 September 2005, the 17th Street Canal breach was closed. Blackhawk and Chinook helicopters had dropped over 200 sand bags, with approximately 125 sandbags breaking the surface of the water. After the emergency was over, plans called for the canal to be drained and the wall repaired. There were three 42" mobile pumps staged and two 42" and two 30" pumps were placed at the sheet pile closure. Sewer & water board, electric utility and the 249th Engineer Battalion (Prime Power) were completing pump house inspection. When the pumps began operation, a 40-foot-wide opening was made in the sheet piling to allow water to flow out of the canal. Worldwide Through the United States Army Corps of Engineers, the 249th soldiers provide contracting officer technical representation on projects throughout the world. 
Lineage Constituted 25 February 1943 in the Army of the United States as the 249th Engineer Combat Battalion Activated 5 May 1943 at Camp Bowie, Texas Inactivated 28 November 1945 at Camp Patrick Henry, Virginia Redesignated 23 March 1948 as the 442d Engineer Construction Battalion and allotted to the Organized Reserves Activated 8 April 1948 with headquarters at Ames, Iowa (Organized Reserves redesignated 25 March 1948 as the Organized Reserve Corps; redesignated 9 July 1952 as the Army Reserve) Inactivated 22 May 1950 at Ames and Council Bluffs, Iowa Redesignated 25 June 1952 as the 249th Engineer Construction Battalion Redesignated 9 December 1954 as the 249th Engineer Battalion; concurrently withdrawn from the Army Reserve and allotted to the Regular Army Activated 9 February 1955 in Germany Inactivated 15 October 1991 in Germany Activated 16 November 1994 at Fort Belvoir, Virginia Honors Campaign participation credit World War II Northern France Rhineland Ardennes-Alsace Central Europe Southwest Asia Defense of Saudi Arabia Liberation and Defense of Kuwait Cease-Fire Decorations Cited in the Order of the Day of the Belgian Army for actions in the Ardennes Meritorious Unit Commendation (Army) for SOUTHWEST ASIA 1990–1991 Army Superior Unit Award for 25 Aug 92 – 28 Oct 92 Army Superior Unit Award for 1994–1995 Army Superior Unit Award for 1995–1996 Army Superior Unit Award for 2005 (Hurricanes Katrina, Rita, & Wilma) Army Superior Unit Award for 2011-2012 See also United States Army Corps of Engineers Civil engineering and infrastructure repair in New Orleans after Hurricane Katrina Army Nuclear Power Program References External links Official 249th Engineer Battalion website Official U.S. Army Prime Power School website Description of the Coat of Arms and Distinctive Unit Insignia Bridge to the Past: 249th Engineer Battalion from Combat to Prime Power by COL John K. Addison, Retired Prime-Power Considerations for Engineer Planners, by Captain Geoff Van Epps Reflections on Building Great Engineers, COL Paul B. Olsen 249 Military units and formations established in 1943 249 United States Army Corps of Engineers
249th Engineer Battalion (United States)
[ "Engineering" ]
1,738
[ "Engineering units and formations", "United States Army Corps of Engineers" ]
11,018,121
https://en.wikipedia.org/wiki/Quaternion-K%C3%A4hler%20symmetric%20space
In differential geometry, a quaternion-Kähler symmetric space or Wolf space is a quaternion-Kähler manifold which, as a Riemannian manifold, is a Riemannian symmetric space. Any quaternion-Kähler symmetric space with positive Ricci curvature is compact and simply connected, and is a Riemannian product of quaternion-Kähler symmetric spaces associated to compact simple Lie groups. For any compact simple Lie group G, there is a unique G/H obtained as a quotient of G by a subgroup H = K · Sp(1). Here, Sp(1) is the compact form of the SL(2)-triple associated with the highest root of G, and K its centralizer in G. These are classified as follows. The twistor spaces of quaternion-Kähler symmetric spaces are the homogeneous holomorphic contact manifolds, classified by Boothby: they are the adjoint varieties of the complex semisimple Lie groups. These spaces can be obtained by taking a projectivization of a minimal nilpotent orbit of the respective complex Lie group. The holomorphic contact structure is apparent, because the nilpotent orbits of semisimple Lie groups are equipped with the Kirillov-Kostant holomorphic symplectic form. This argument also explains how one can associate a unique Wolf space to each of the simple complex Lie groups. See also Quaternionic discrete series representation References Reprint of the 1987 edition. Differential geometry Structures on manifolds Riemannian geometry Homogeneous spaces Lie groups
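The classification referred to above is the standard list of compact Wolf spaces, one for each compact simple Lie group. As an orienting sketch only (the presentations below follow common conventions and should be checked against the cited references), the list is usually written as:

```latex
% Commonly cited compact Wolf spaces G/(K·Sp(1)), one per compact simple Lie group G
% (an orienting sketch; see the references for the authoritative classification).
\begin{gather*}
\mathrm{Sp}(n+1)/\mathrm{Sp}(n)\,\mathrm{Sp}(1) \cong \mathbb{HP}^{n}, \qquad
\mathrm{SU}(n+2)/\mathrm{S}(\mathrm{U}(n)\times\mathrm{U}(2)) \cong \mathrm{Gr}_{2}(\mathbb{C}^{n+2}),\\
\mathrm{SO}(n+4)/\mathrm{SO}(n)\,\mathrm{SO}(4) \cong \widetilde{\mathrm{Gr}}_{4}(\mathbb{R}^{n+4}),\\
G_2/\mathrm{SO}(4), \quad F_4/\mathrm{Sp}(3)\,\mathrm{Sp}(1), \quad E_6/\mathrm{SU}(6)\,\mathrm{Sp}(1), \quad
E_7/\mathrm{Spin}(12)\,\mathrm{Sp}(1), \quad E_8/E_7\,\mathrm{Sp}(1).
\end{gather*}
```

In each case the Sp(1) factor is the one associated with the highest root, matching the quotient construction described above.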
Quaternion-Kähler symmetric space
[ "Physics", "Mathematics" ]
325
[ "Lie groups", "Mathematical structures", "Group actions", "Homogeneous spaces", "Space (mathematics)", "Topological spaces", "Algebraic structures", "Geometry", "Symmetry" ]
11,018,224
https://en.wikipedia.org/wiki/Comparative%20Tracking%20Index
The Comparative Tracking Index (CTI) is used to measure the electrical breakdown (tracking) properties of an insulating material. Tracking is an electrical breakdown on the surface of an insulating material wherein an initial exposure to electrical arcing heat carbonizes the material. The carbonized areas are more conductive than the pristine insulator, increasing current flow, resulting in increased heat generation, and eventually the insulation becomes completely conductive. Details A large voltage difference gradually creates a conductive leakage path across the surface of the material by forming a carbonized track. The testing method is specified in IEC standard 60112 and ASTM D3638. To measure the tracking, 50 drops of 0.1% ammonium chloride solution are dropped on the material, and the voltage measured for a 3 mm thickness is considered representative of the material performance. The term PTI (Proof Tracking Index) is also used: it is the voltage at which all five tested samples pass the test with no failures. Performance Level Categories (PLC) were introduced to avoid excessive implied precision and bias. The CTI value is used for electrical safety assessment of electrical apparatus, as for instance carried out by testing and certification laboratories. The minimum required creepage distances over an insulating material between electrically conducting parts in apparatus, especially between parts with a high voltage and parts that can be touched by human users, depend on the insulator's CTI value. Maintaining CTI-based distances for internal spacings in an apparatus also reduces the risk of fire. The creepage distance requirement depends on the CTI. Materials whose CTI is unknown are classified in group IIIb. There is no CTI requirement for glass, ceramic, and other inorganic materials which do not break down on the surface. The better the insulation, the higher the CTI (positive relationship). A higher CTI value means a lower minimum creepage distance is required, so two conductive parts can be placed closer together. In the design of medical products, the CTI is treated differently. Material groups are classified per IEC 60601-1:2005, an International Standard published by the International Electrotechnical Commission (IEC). The test method does not work well for voltages below 125 VAC as the solution does not evaporate between successive drops. The test method has an upper limit of 600 VAC; higher voltages are currently not covered by the standard. In the recent version of the standard the evaluation of the test method at higher voltages (above 600 V) is stated as a target for the future. In principle, tests at higher voltages should be possible as the breakdown voltage of air at 50 Hz is larger than 40 kV at 4 mm. However, arcing between the electrodes might increase at higher voltages and might have an impact on the test result. Therefore, this needs to be further evaluated before the maximum voltage in the standard can be increased. In addition, dependent standards such as IEC 60664 would need to be changed as well in case a new material class for higher voltages is introduced in IEC 60112. References External links Underwriters Laboratory definition of CTI IEC 60112 Method for the determination of the proof and the comparative tracking indices of solid insulating materials IEC 60601-1 Medical electrical equipment - Part 1: General requirements for basic safety and essential performance Electrical breakdown Units of measurement
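To make the material-group classification concrete, here is a minimal sketch that maps a measured CTI value to an insulation material group. The numeric thresholds are the commonly quoted IEC 60664-1 boundaries and are stated here as an assumption; they should be checked against the applicable edition of the standards cited above, and the function name is purely illustrative.

```python
# Minimal sketch: map a measured CTI value (in volts) to an insulation material group.
# Thresholds below are the commonly quoted IEC 60664-1 boundaries
# (Group I >= 600 V, II 400-599 V, IIIa 175-399 V, IIIb 100-174 V);
# they are assumptions here and must be checked against the applicable standard.

def material_group(cti_volts: float) -> str:
    """Return the insulation material group for a given comparative tracking index."""
    if cti_volts >= 600:
        return "I"
    if cti_volts >= 400:
        return "II"
    if cti_volts >= 175:
        return "IIIa"
    if cti_volts >= 100:
        return "IIIb"
    return "not classified (below 100 V)"

if __name__ == "__main__":
    for cti in (650, 450, 250, 120):
        print(f"CTI {cti} V -> group {material_group(cti)}")
```

A material in a higher group (Group I being the best) is permitted shorter creepage distances for a given working voltage, which is the practical consequence described above; materials of unknown CTI default to group IIIb, as the article states.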
Comparative Tracking Index
[ "Physics", "Mathematics" ]
687
[ "Physical phenomena", "Quantity", "Electrical phenomena", "Electrical breakdown", "Units of measurement" ]
11,020,249
https://en.wikipedia.org/wiki/Multiplicity%20%28chemistry%29
In spectroscopy and quantum chemistry, the multiplicity of an energy level is defined as 2S+1, where S is the total spin angular momentum. States with multiplicity 1, 2, 3, 4, 5 are respectively called singlets, doublets, triplets, quartets and quintets. In the ground state of an atom or molecule, the unpaired electrons usually all have parallel spin. In this case the multiplicity is also equal to the number of unpaired electrons plus one. Atoms The multiplicity is often equal to the number of possible orientations of the total spin relative to the total orbital angular momentum L, and therefore to the number of near–degenerate levels that differ only in their spin–orbit interaction energy. For example, the ground state of a carbon atom is 3P (Term symbol). The superscript three (read as triplet) indicates that the multiplicity 2S+1 = 3, so that the total spin S = 1. This spin is due to two unpaired electrons, as a result of Hund's rule which favors the single filling of degenerate orbitals. The triplet consists of three states with spin components +1, 0 and –1 along the direction of the total orbital angular momentum, which is also 1 as indicated by the letter P. The total angular momentum quantum number J can vary from L+S = 2 to L–S = 0 in integer steps, so that J = 2, 1 or 0. However the multiplicity equals the number of spin orientations only if S ≤ L. When S > L there are only 2L+1 orientations of total angular momentum possible, ranging from S+L to S-L. The ground state of the nitrogen atom is a 4S state, for which 2S + 1 = 4 in a quartet state, S = 3/2 due to three unpaired electrons. For an S state, L = 0 so that J can only be 3/2 and there is only one level even though the multiplicity is 4. Molecules Most stable organic molecules have complete electron shells with no unpaired electrons and therefore have singlet ground states. This is true also for inorganic molecules containing only main-group elements. Important exceptions are dioxygen (O2) as well as methylene (CH2) and other carbenes. However, higher spin ground states are very common in coordination complexes of transition metals. A simple explanation of the spin states of such complexes is provided by crystal field theory. Dioxygen The highest occupied orbital energy level of dioxygen is a pair of antibonding π* orbitals. In the ground state of dioxygen, this energy level is occupied by two electrons of the same spin, as shown in the molecular orbital diagram. The molecule, therefore, has two unpaired electrons and is in a triplet state. In contrast, the first and second excited states of dioxygen are both states of singlet oxygen. Each has two electrons of opposite spin in the π* level so that S = 0 and the multiplicity is 2S + 1 = 1 in consequence. In the first excited state, the two π* electrons are paired in the same orbital, so that there are no unpaired electrons. In the second excited state, however, the two π* electrons occupy different orbitals with opposite spin. Each is therefore an unpaired electron, but the total spin is zero and the multiplicity is 2S + 1 = 1 despite the two unpaired electrons. The multiplicity of the second excited state is therefore not equal to the number of its unpaired electrons plus one, and the rule which is usually true for ground states is invalid for this excited state. Carbenes In organic chemistry, carbenes are molecules which have carbon atoms with only six electrons in their valence shells and therefore disobey the octet rule. 
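The ground-state counting rule stated above (multiplicity equals the number of unpaired electrons plus one, since the unpaired spins are parallel) can be illustrated with a short, hypothetical Python sketch; it deliberately encodes only that rule, so it does not cover cases such as the second excited state of dioxygen, where the rule fails.

```python
# Minimal sketch of the ground-state rule: with n unpaired electrons of parallel spin,
# S = n/2 and the multiplicity is 2S + 1 = n + 1.
# (This deliberately does NOT apply to excited states such as the second excited
# singlet of O2, where the two unpaired spins cancel.)
from fractions import Fraction

NAMES = {1: "singlet", 2: "doublet", 3: "triplet", 4: "quartet", 5: "quintet"}

def ground_state_multiplicity(unpaired_electrons: int):
    S = Fraction(unpaired_electrons, 2)   # total spin quantum number
    multiplicity = int(2 * S + 1)         # 2S + 1
    return S, multiplicity, NAMES.get(multiplicity, f"{multiplicity}-fold")

# Examples from this article: carbon (2 unpaired, 3P), nitrogen (3 unpaired, 4S),
# and ground-state dioxygen (2 unpaired, triplet).
for species, n in [("C", 2), ("N", 3), ("O2", 2)]:
    S, m, name = ground_state_multiplicity(n)
    print(f"{species}: S = {S}, multiplicity = {m} ({name})")
```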
Carbenes generally split into singlet carbenes and triplet carbenes, named for their spin multiplicities. Both have two non-bonding electrons; in singlet carbenes these exist as a lone pair and have opposite spins so that there is no net spin, while in triplet carbenes these electrons have parallel spins. See also Quantum numbers Principal quantum number Azimuthal quantum number Magnetic quantum number Spin quantum number Exchange interaction Term symbol Slater's rules Effective nuclear charge Shielding effect References Bibliography Quantum chemistry
Multiplicity (chemistry)
[ "Physics", "Chemistry" ]
917
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
11,020,357
https://en.wikipedia.org/wiki/List%20of%20Japanese%20World%20War%20II%20explosives
This is a complete list of Japanese explosives used during the Second World War. It is sorted according to application. References Explosives World War II explosives
List of Japanese World War II explosives
[ "Chemistry" ]
30
[ "Explosives", "Explosions" ]
11,021,015
https://en.wikipedia.org/wiki/Petroleum%20Administration%20for%20Defense%20Districts
The United States is divided into five Petroleum Administration for Defense Districts, or PADDs. These were created during World War II under the Petroleum Administration for War to help organize the allocation of fuels derived from petroleum products, including gasoline and diesel (or "distillate") fuel. Today, these regions are still used for data collection purposes. The Petroleum Administration for War was established in 1942 by executive order, and abolished in 1946. The districts are now named for the later Petroleum Administration for Defense which existed during the Korean War. It was established by the Defense Production Act of 1950, then abolished in 1954, with its role taken over by the United States Department of the Interior's Oil and Gas Division. PAD Districts PADD I (East Coast) is composed of the following three subdistricts: Subdistrict A (New England): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont. Subdistrict B (Central Atlantic): Delaware, District of Columbia, Maryland, New Jersey, New York, and Pennsylvania. Subdistrict C (Lower Atlantic): Florida, Georgia, North Carolina, South Carolina, Virginia, and West Virginia. PADD II (Midwest): Illinois, Indiana, Iowa, Kansas, Kentucky, Michigan, Minnesota, Missouri, Nebraska, North Dakota, South Dakota, Ohio, Oklahoma, Tennessee, and Wisconsin. PADD III (Gulf Coast): Alabama, Arkansas, Louisiana, Mississippi, New Mexico, and Texas. PADD IV (Rocky Mountain): Colorado, Idaho, Montana, Utah, and Wyoming. PADD V (West Coast): Alaska, Arizona, California, Hawaii, Nevada, Oregon, and Washington. PADD VI (Caribbean): US Virgin Islands and Puerto Rico. PADD VII (Pacific): Guam, American Samoa, and the Northern Mariana Islands. See also Oil and gas law in the United States References PADD Definitions. Energy Information Administration. Records of the Petroleum Administration for Defense. National Archives and Records Administration. Records of the Petroleum Administration for War. National Archives and Records Administration. Specific Energy policy United States Department of the Interior Fossil fuels in the United States Petroleum in the United States Agencies of the United States government during World War II
Petroleum Administration for Defense Districts
[ "Environmental_science" ]
495
[ "Environmental social science", "Energy policy" ]
11,021,964
https://en.wikipedia.org/wiki/Bolection
A bolection is a decorative moulding which projects beyond the face of a panel or frame in raised panel walls, doors, and fireplaces. It is commonly used when the meeting surfaces are at different levels, especially to hold floating panels in place while allowing them to expand and contract with changes in temperature and humidity. Also sometimes called balection or bilection, the term is of uncertain etymology and was first used in the early 18th century. Bolection was used to great effect by Christopher Wren. Bolection mouldings are usually rabbeted on their underside to the depth of the lower element, but attached to the upper (when panels are allowed to float) or both (when merely decorative). Bolection mouldings were also regularly used in the 16th and 17th century for picture frames. On frames of this type, the highest point of the moulding is close to the sight edge (the inner edge of the frame which meets the picture), from which the profile descends towards its lowest point at its outer edge. Such frames are sometimes described as bolection frames but more often as "reverse profile" frames. References Architectural elements
Bolection
[ "Technology", "Engineering" ]
237
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
11,021,990
https://en.wikipedia.org/wiki/Phytoecdysteroid
Phytoecdysteroids are plant-derived ecdysteroids. Phytoecdysteroids are a class of chemicals that plants synthesize for defense against phytophagous (plant-eating) insects. These compounds are mimics of hormones used by arthropods in the molting process known as ecdysis. It is presumed that these chemicals act as endocrine disruptors for insects, so that when insects eat the plants with these chemicals they may prematurely molt, lose weight, or suffer other metabolic damage and die. Chemically, phytoecdysteroids are classed as triterpenoids, the group of compounds that includes triterpene saponins, phytosterols, and phytoecdysteroids. Plants, but not animals, synthesize phytoecdysteroids from mevalonic acid in the mevalonate pathway of the plant cell using acetyl-CoA as a precursor. Some ecdysteroids, including ecdysone and 20-hydroxyecdysone (20E), are produced by both plants and arthropods. Besides those, over 250 ecdysteroid analogs have been identified so far in plants, and it has been theorized that there are over 1,000 possible structures which might occur in nature. Many more plants have the ability to "turn on" the production of phytoecdysteroids when under stress, animal attack or other conditions. The term phytoecdysteroid can also apply to ecdysteroids found in fungi, even though fungi are not plants. The more precise term mycoecdysteroid has been applied to these chemicals. Some plants and fungi that produce phytoecdysteroids include Achyranthes bidentata, Tinospora cordifolia, Pfaffia paniculata, Leuzea carthamoides, Rhaponticum uniflorum, Serratula coronata, Cordyceps, and Asparagus. Effect on arthropods It is generally believed that phytoecdysteroids exert a negative effect on pests. Indeed, phytoecdysteroids sprayed onto plants have been shown to reduce the infestation of nematodes and insects. However, in very limited scenarios, phytoecdysteroids may end up becoming beneficial for the insect. For example, ginsenosides are able to activate the ecdysteroid receptor in fruit flies, but this activation happens to compensate for age-related reduction in 20E levels. Effect on plants Phytoecdysteroids have also been reported to influence the germination of other plants, making them allelochemicals. The plant producing phytoecdysteroids may also be affected by ecdysteroids, mainly by increasing the rate of photosynthesis. Effect on mammals They are not toxic to mammals and occur in the human diet. 20-hydroxyecdysone is a drug candidate, but this does not mean dietary amounts have any effect. See also Phytoandrogen Phytoestrogen Plant defense against herbivory References Phytochemicals Steroids Insect ecology Chemical ecology
Phytoecdysteroid
[ "Chemistry", "Biology" ]
669
[ "Biochemistry", "Chemical ecology" ]
11,022,628
https://en.wikipedia.org/wiki/List%20of%20space%20groups
There are 230 space groups in three dimensions, given by a number index, a full name in Hermann–Mauguin notation, and a short name (international short symbol). The long names are given with spaces for readability. The groups each have a point group of the unit cell. Symbols In Hermann–Mauguin notation, space groups are named by a symbol combining the point group identifier with the uppercase letters describing the lattice type. Translations within the lattice in the form of screw axes and glide planes are also noted, giving a complete crystallographic space group. These are the Bravais lattices in three dimensions: P primitive I body centered (from the German Innenzentriert) F face centered (from the German Flächenzentriert) A centered on A faces only B centered on B faces only C centered on C faces only R rhombohedral A reflection plane m within the point groups can be replaced by a glide plane, labeled as a, b, or c depending on which axis the glide is along. There is also the n glide, which is a glide along half of a face diagonal, and the d glide, which is along a quarter of either a face or space diagonal of the unit cell. The d glide is often called the diamond glide plane as it features in the diamond structure. a, b, or c: glide translation along half the lattice vector of this face. n: glide translation along half the diagonal of this face. d: glide planes with translation along a quarter of a face diagonal. e: two glides with the same glide plane and translation along two (different) half-lattice vectors. A gyration point can be replaced by a screw axis denoted by a number, n, where the angle of rotation is 360°/n. The degree of translation is then added as a subscript showing how far along the axis the translation is, as a portion of the parallel lattice vector. For example, 21 is a 180° (twofold) rotation followed by a translation of 1/2 of the lattice vector. 31 is a 120° (threefold) rotation followed by a translation of 1/3 of the lattice vector. The possible screw axes are: 21, 31, 32, 41, 42, 43, 61, 62, 63, 64, and 65. Wherever there is both a rotation or screw axis n and a mirror or glide plane m along the same crystallographic direction, they are represented as a fraction n/m. For example, 41/a means that the crystallographic axis in question contains both a 41 screw axis as well as a glide plane along a. In Schoenflies notation, the symbol of a space group is represented by the symbol of the corresponding point group with an additional superscript. The superscript doesn't give any additional information about symmetry elements of the space group, but is instead related to the order in which Schoenflies derived the space groups. This is sometimes supplemented with a lattice symbol which specifies the Bravais lattice by giving the lattice system and the centering type. In the Fedorov symbol, the type of space group is denoted as s (symmorphic), h (hemisymmorphic), or a (asymmorphic). The number is related to the order in which Fedorov derived space groups. There are 73 symmorphic, 54 hemisymmorphic, and 103 asymmorphic space groups. Symmorphic The 73 symmorphic space groups can be obtained as combinations of Bravais lattices with the corresponding point groups. These groups contain the same symmetry elements as the corresponding point groups. Example for point group 4/mmm: the symmorphic space groups are P4/mmm (36s) and I4/mmm (37s). Hemisymmorphic The 54 hemisymmorphic space groups contain only an axial combination of symmetry elements from the corresponding point groups. 
Example for point group 4/mmm: hemisymmorphic space groups contain the axial combination 422, but at least one mirror plane m will be substituted with a glide plane, for example P4/mcc (35h), P4/nbm (36h), P4/nnc (37h), and I4/mcm (38h). Asymmorphic The remaining 103 space groups are asymmorphic. Example for point group 4/mmm: P4/mbm (54a), P42/mmc (60a), I41/acd (58a) - none of these groups contains the axial combination 422. List of triclinic List of monoclinic List of orthorhombic List of tetragonal List of trigonal List of hexagonal List of cubic Notes References External links International Union of Crystallography Point Groups and Bravais Lattices Full list of 230 crystallographic space groups Conway et al. on fibrifold notation Symmetry Crystallography
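As a small illustration of the screw-axis part of the Hermann–Mauguin symbols described in the Symbols section above, the following sketch decodes a screw-axis symbol into its rotation angle and fractional translation. The two-character string format is a simplification assumed only for this example.

```python
# Minimal sketch: decode a screw-axis symbol "n_m" (written here as two characters,
# e.g. "21", "31", "42", "65") into the rotation angle 360/n degrees and the
# translation m/n of the lattice vector parallel to the axis.
from fractions import Fraction

def decode_screw_axis(symbol: str):
    n, m = int(symbol[0]), int(symbol[1])
    angle = 360.0 / n                 # rotation angle in degrees
    translation = Fraction(m, n)      # fraction of the parallel lattice vector
    return angle, translation

for s in ["21", "31", "42", "63", "65"]:
    angle, t = decode_screw_axis(s)
    print(f"{s}: {angle:g} degree rotation, translation {t} of the lattice vector")
```

Running this reproduces the examples in the text: 21 is a 180° rotation with a 1/2 translation, and 31 is a 120° rotation with a 1/3 translation.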
List of space groups
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,027
[ "Materials science", "Crystallography", "Condensed matter physics", "Geometry", "Symmetry" ]
9,527,250
https://en.wikipedia.org/wiki/Knowledge%20organization
Knowledge organization (KO), organization of knowledge, organization of information, or information organization is an intellectual discipline concerned with activities such as document description, indexing, and classification that serve to provide systems of representation and order for knowledge and information objects. According to The Organization of Information by Joudrey and Taylor, information organization: Issues related to knowledge sharing can be said to have been an important part of knowledge management for a long time. Knowledge sharing has received a lot of attention in research and business practice both within and outside organizations and its different levels. Sharing knowledge is not only about giving it to others, but it also includes searching, locating, and absorbing knowledge. Unawareness of the employees' work and duties tends to provoke the repetition of mistakes, the waste of resources, and duplication of the same projects. Motivating co-workers to share their knowledge is called knowledge enabling. It leads to trust among individuals and encourages a more open and proactive relationship that grants the exchange of information easily. Knowledge sharing is part of the three-phase knowledge management process which is a continuous process model. The three parts are knowledge creation, knowledge implementation, and knowledge sharing. The process is continuous, which is why the parts cannot be fully separated. Knowledge creation is the consequence of individuals' minds, interactions, and activities. Developing new ideas and arrangements alludes to the process of knowledge creation. Using the knowledge which is present at the company in the most effective manner stands for the implementation of knowledge. Knowledge sharing, the most essential part of the process for our topic, takes place when two or more people benefit by learning from each other. Traditional human-based approaches performed by librarians, archivists, and subject specialists are increasingly challenged by computational (big data) algorithmic techniques. KO as a field of study is concerned with the nature and quality of such knowledge-organizing processes (KOP) (such as taxonomy and ontology) as well as the resulting knowledge organizing systems (KOS). Theoretical approaches Traditional approaches Among the major figures in the history of KO are Melvil Dewey (1851–1931) and Henry Bliss (1870–1955). Dewey's goal was an efficient way to manage library collections; not an optimal system to support users of libraries. His system was meant to be used in many libraries as a standardized way to manage collections. The first version of this system was created in 1876. An important characteristic in Henry Bliss' (and many contemporary thinkers of KO) was that the sciences tend to reflect the order of Nature and that library classification should reflect the order of knowledge as uncovered by science: Natural order → Scientific classification → Library classification (KO) The implication is that librarians, in order to classify books, should know about scientific developments. 
This should also be reflected in their education: Among the other principles, which may be attributed to the traditional approach to KO are: Principle of controlled vocabulary Cutter's rule about specificity Hulme's principle of literary warrant (1911) Principle of organizing from the general to the specific Today, after more than 100 years of research and development in LIS, the "traditional" approach still has a strong position in KO and in many ways its principles still dominate. Facet analytic approaches The date of the foundation of this approach may be chosen as the publication of S. R. Ranganathan's colon classification in 1933. The approach has been further developed by, in particular, the British Classification Research Group. The best way to explain this approach is probably to explain its analytico-synthetic methodology. The meaning of the term "analysis" is: breaking down each subject into its basic concepts. The meaning of the term synthesis is: combining the relevant units and concepts to describe the subject matter of the information package in hand. Given subjects (as they appear in, for example, book titles) are first analyzed into a few common categories, which are termed "facets". Ranganathan proposed his PMEST formula: Personality, Matter, Energy, Space and Time: Personality is the distinguishing characteristic of a subject. Matter is the physical material of which a subject may be composed. Energy is any action that occurs with respect to the subject. Space is the geographic component of the location of a subject. Time is the period associated with a subject. The information retrieval tradition (IR) Important in the IR-tradition have been, among others, the Cranfield experiments, which were founded in the 1950s, and the TREC experiments (Text Retrieval Conferences) starting in 1992. It was the Cranfield experiments, which introduced the measures "recall" and "precision" as evaluation criteria for systems efficiency. The Cranfield experiments found that classification systems like UDC and facet-analytic systems were less efficient compared to free-text searches or low level indexing systems ("UNITERM"). The Cranfield I test found, according to Ellis (1996, 3–6) the following results: Although these results have been criticized and questioned, the IR-tradition became much more influential while library classification research lost influence. The dominant trend has been to regard only statistical averages. What has largely been neglected is to ask: Are there certain kinds of questions in relation to which other kinds of representation, for example, controlled vocabularies, may improve recall and precision? User-oriented and cognitive views The best way to define this approach is probably by method: Systems based upon user-oriented approaches must specify how the design of a system is made on the basis of empirical studies of users. User studies demonstrated very early that users prefer verbal search systems as opposed to systems based on classification notations. This is one example of a principle derived from empirical studies of users. Adherents of classification notations may, of course, still have an argument: That notations are well-defined and that users may miss important information by not considering them. Folksonomies is a recent kind of KO based on users' rather than on librarians' or subject specialists' indexing. 
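To make the Cranfield evaluation measures mentioned above concrete, here is a minimal sketch of how recall and precision are computed for a single query; the document identifiers and variable names are illustrative only.

```python
# Minimal sketch of the Cranfield-style evaluation measures:
#   precision = relevant documents retrieved / all documents retrieved
#   recall    = relevant documents retrieved / all relevant documents
def precision_recall(retrieved: set, relevant: set):
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example: a query retrieves 8 documents, 5 of which are among the
# 10 documents judged relevant.
retrieved = set(range(1, 9))
relevant = {1, 2, 3, 4, 5, 20, 21, 22, 23, 24}
p, r = precision_recall(retrieved, relevant)
print(f"precision = {p:.2f}, recall = {r:.2f}")   # precision = 0.62, recall = 0.50
```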
Bibliometric approaches These approaches are primarily based on using bibliographical references to organize networks of papers, mainly by bibliographic coupling (introduced by Kessler 1963) or co-citation analysis ( independently suggested by Marshakova 1973 and Small 1973). In recent years it has become a popular activity to construe bibliometric maps as structures of research fields. Two considerations are important in considering bibliometric approaches to KO: The level of indexing depth is partly determined by the number of terms assigned to each document. In citation indexing this corresponds to the number of references in a given paper. On the average, scientific papers contain 10–15 references, which provide quite a high level of depth. The references, which function as access points, are provided by the highest subject-expertise: The experts writing in the leading journals. This expertise is much higher than that which library catalogs or bibliographical databases typically are able to draw on. The domain analytic approach Domain analysis is a sociological-epistemological standpoint that advocates that the indexing of a given document should reflect the needs of a given group of users or a given ideal purpose. In other words, any description or representation of a given document is more or less suited to the fulfillment of certain tasks. A description is never objective or neutral, and the goal is not to standardize descriptions or make one description once and for all for different target groups. The development of the Danish library "KVINFO" may serve as an example that explains the domain-analytic point of view. KVINFO was founded by the librarian and writer Nynne Koch and its history goes back to 1965. Nynne Koch was employed at the Royal Library in Copenhagen in a position without influence on book selection. She was interested in women's studies and began personally to collect printed catalog cards of books in the Royal Library, which were considered relevant for women's studies. She developed a classification system for this subject. Later she became the head of KVINFO and got a budget for buying books and journals, and still later, KVINFO became an independent library. The important theoretical point of view is that the Royal Library had an official systematic catalog of a high standard. Normally it is assumed that such a catalog is able to identify relevant books for users whatever their theoretical orientation. This example demonstrates, however, that for a specific user group (feminist scholars), an alternative way of organizing catalog cards was important. In other words: Different points of view need different systems of organization. Domain analysis has examined epistemological issues in the field, i.e. comparing the assumptions made in different approaches to KO and examining the questions regarding subjectivity and objectivity in KO. Subjectivity is not just about individual differences. Such differences are of minor interest because they cannot be used as guidelines for KO. What seems important are collective views shared by many users. A kind of subjectivity about many users is related to philosophical positions. In any field of knowledge different views are always at play. In arts, for example, different views of art are always present. Such views determine views on art works, writing on art works, how art works are organized in exhibitions and how writings on art are organized in libraries. 
In general it can be stated that different philosophical positions on any issue have implications for relevance criteria, information needs and for criteria of organizing knowledge. Other approaches One widely used analysis of information-organizational principles, attributed to Richard Saul Wurman, summarizes them as Location, Alphabet, Time, Category, Hierarchy (LATCH). See also Automatic document classification Body of knowledge Classification (general theory) Dewey decimal classification Discipline (academia) Document classification Growth of knowledge Information ecology Knowledge organization systems Knowledge representation and reasoning Library and information science Library classification List of academic fields Outline of academic disciplines Personal information management References Social epistemology Information science Information science by discipline Knowledge representation Organization Library cataloging and classification Library science
Knowledge organization
[ "Technology" ]
2,022
[ "Social epistemology", "Science and technology studies" ]
9,527,696
https://en.wikipedia.org/wiki/Abell%20S740
The Abell S740 is a cluster of galaxies identified in the Abell catalogue of southern rich clusters of galaxies. It is over 450 Mly away in the constellation Centaurus. Its redshift corresponds to a recessional velocity of 10,073 km/s. See also Abell catalogue List of Abell clusters References Abell S0740 Abell S0740 S0740 Abell richness class 0
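As a rough cross-check of these figures, the recessional velocity can be converted to a distance with Hubble's law; the Hubble constant used below is an assumed round value of about 70 km/s/Mpc, not a figure taken from this article.

```python
# Rough consistency check using Hubble's law, d = v / H0.
# H0 = 70 km/s/Mpc is an assumed round value; 1 Mpc is roughly 3.26 million light-years.
v = 10_073          # recessional velocity in km/s (from the article)
H0 = 70             # assumed Hubble constant, km/s per Mpc
d_mpc = v / H0
d_mly = d_mpc * 3.26
print(f"{d_mpc:.0f} Mpc, about {d_mly:.0f} million light-years")  # ~144 Mpc, ~470 Mly
```

The result of roughly 470 million light-years is consistent with the "over 450 Mly" distance quoted above.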
Abell S740
[ "Astronomy" ]
87
[ "Galaxy clusters", "Astronomical objects", "Centaurus", "Constellations" ]
9,527,920
https://en.wikipedia.org/wiki/Evolutionary%20physiology
Evolutionary physiology is the study of the biological evolution of physiological structures and processes; that is, the manner in which the functional characteristics of organisms have responded to natural selection or sexual selection or changed by random genetic drift across multiple generations during the history of a population or species. It is a sub-discipline of both physiology and evolutionary biology. Practitioners in the field come from a variety of backgrounds, including physiology, evolutionary biology, ecology, and genetics. Accordingly, the range of phenotypes studied by evolutionary physiologists is broad, including life history traits, behavior, whole-organism performance, functional morphology, biomechanics, anatomy, classical physiology, endocrinology, biochemistry, and molecular evolution. The field is closely related to comparative physiology, ecophysiology, and environmental physiology, and its findings are a major concern of evolutionary medicine. One definition that has been offered is "the study of the physiological basis of fitness, namely, correlated evolution (including constraints and trade-offs) of physiological form and function associated with the environment, diet, homeostasis, energy management, longevity, and mortality and life history characteristics". History As the name implies, evolutionary physiology is the product of a merger between two distinct scientific disciplines. According to Garland and Carter, evolutionary physiology arose in the late 1970s, following debates concerning the metabolic and thermoregulatory status of dinosaurs (see physiology of dinosaurs) and mammal-like reptiles. This period was followed by attempts in the early 1980s to integrate quantitative genetics into evolutionary biology, which had spillover effects on other fields, such as behavioral ecology and ecophysiology. In the mid- to late 1980s, phylogenetic comparative methods started to become popular in many fields, including physiological ecology and comparative physiology. A 1987 volume titled New Directions in Ecological Physiology had little ecology but a considerable emphasis on evolutionary topics. It generated vigorous debate, and within a few years the National Science Foundation had developed a panel titled Ecological and Evolutionary Physiology. Shortly thereafter, selection experiments and experimental evolution became increasingly common in evolutionary physiology. Macrophysiology has emerged as a sub-discipline, in which practitioners attempt to identify large-scale patterns in physiological traits (e.g. patterns of co-variation with latitude) and their ecological implications. More recently, the importance of evolutionary physiology has been argued from the perspective of functional analyses, epigenetics, and an extended evolutionary synthesis. The growth of evolutionary physiology is also reflected in the emergence of sub-disciplines, such as evolutionary biomechanics and evolutionary endocrinology, which addresses such hybrid questions as "What are the most common endocrine mechanisms that respond to selection on behavior or life-history traits?" Emergent properties As a hybrid scientific discipline, evolutionary physiology provides some unique perspectives. For example, an understanding of physiological mechanisms can help in determining whether a particular pattern of phenotypic variation or co-variation (such as an allometric relationship) represents what could possibly exist or just what selection has allowed. 
Similarly, a thorough knowledge of physiological mechanisms can greatly enhance understanding of possible reasons for evolutionary correlations and constraints than is possible for many of the traits typically studied by evolutionary biologists (such as morphology). Areas of research Important areas of current research include: Organismal performance as a central phenotype (e.g., measures of speed or stamina in animal locomotion) Role of behavior in physiological evolution Physiological and endocrinological basis of variation in life history traits (e.g., clutch size) Functional significance of molecular evolution Genomic basis of adaptation Extent to which species differences are adaptive Physiological underpinnings of limits to geographic ranges Geographic variation in physiology Role of sexual selection in shaping physiological evolution Magnitude of "phylogenetic signal" in physiological traits Role of pathogens and parasites in physiological evolution and immunity Application of optimality modeling to elucidate the degree of adaptation Role of phenotypic plasticity in accounting for individual, population, and species differences Mechanistic basis of trade-offs and constraints on evolution (e.g., putative Carrier's constraint on running and breathing) Limits on sustained metabolic rate Origin of allometric scaling relations or allometric laws (and the so-called metabolic theory of ecology) Individual variation (see also Individual differences psychology) Functional significance of biochemical polymorphisms Analysis of physiological variation via quantitative genetics Paleophysiology and the evolution of endothermy Human adaptational physiology Darwinian medicine Evolution of dietary antioxidants Techniques Artificial selection and experimental evolution mouse wheel running video Genetic analyses and manipulations Measurement of selection in the wild Phenotypic plasticity and manipulation Phylogenetically based comparisons Doubly labeled water measurements of free-living energy demands of animals Funding and societies In the United States, research in evolutionary physiology is funded mainly by the National Science Foundation. A number of scientific societies feature sections that encompass evolutionary physiology, including: American Physiological Society "integrating the life sciences from molecule to organism" Society for Integrative and Comparative Biology Society for Experimental Biology Journals that frequently publish articles about evolutionary physiology American Naturalist Comparative Biochemistry and Physiology Comprehensive Physiology Ecology Evolution Functional Ecology Integrative and Comparative Biology Journal of Comparative Physiology Journal of Evolutionary Biochemistry and Physiology Journal of Evolutionary Biology Journal of Experimental Biology Ecological and Evolutionary Physiology (formerly Physiological and Biochemical Zoology) See also References External links People, Labs, and Programs in Evolutionary Physiology Evolutionary Systems Biology - Some Important Papers Physiological and Biochemical Zoology Focused Collection: Trade-Offs in Ecological and Evolutionary Physiology Physiology Evolutionary biology
Evolutionary physiology
[ "Biology" ]
1,111
[ "Evolutionary biology", "Physiology" ]
9,528,025
https://en.wikipedia.org/wiki/Greenhouse%20gas%20emissions
Greenhouse gas (GHG) emissions from human activities intensify the greenhouse effect. This contributes to climate change. Carbon dioxide (CO2), from burning fossil fuels such as coal, oil, and natural gas, is the main cause of climate change. The largest annual emissions are from China followed by the United States. The United States has higher emissions per capita. The main producers fueling the emissions globally are large oil and gas companies. Emissions from human activities have increased atmospheric carbon dioxide by about 50% over pre-industrial levels. The growth in emissions has varied over time but has occurred across all greenhouse gases. Emissions in the 2010s averaged 56 billion tons a year, higher than any decade before. Total cumulative emissions from 1870 to 2022 were 703 GtC (2575 GtCO2), of which 484±20 GtC (1773±73 GtCO2) from fossil fuels and industry, and 219±60 GtC (802±220 GtCO2) from land use change. Land-use change, such as deforestation, caused about 31% of cumulative emissions over 1870–2022, coal 32%, oil 24%, and gas 10%. Carbon dioxide is the main greenhouse gas resulting from human activities. It accounts for more than half of warming. Methane (CH4) emissions have almost the same short-term impact. Nitrous oxide (N2O) and fluorinated gases (F-gases) play a lesser role in comparison. Emissions of carbon dioxide, methane and nitrous oxide in 2023 were all higher than ever before. Electricity generation, heat and transport are major emitters; overall energy is responsible for around 73% of emissions. Deforestation and other changes in land use also emit carbon dioxide and methane. The largest source of anthropogenic methane emissions is agriculture, closely followed by gas venting and fugitive emissions from the fossil-fuel industry. The largest agricultural methane source is livestock. Agricultural soils emit nitrous oxide partly due to fertilizers. Similarly, fluorinated gases from refrigerants play an outsized role in total human emissions. The current CO2-equivalent emission rates, averaging 6.6 tonnes per person per year, are well over twice the estimated rate of 2.3 tonnes required to stay within the Paris Agreement's 2030 limit of a 1.5 °C (2.7 °F) increase over pre-industrial levels. Annual per capita emissions in the industrialized countries are typically as much as ten times the average in developing countries. The carbon footprint (or greenhouse gas footprint) serves as an indicator to compare the amount of greenhouse gases emitted over the entire life cycle from the production of a good or service along the supply chain to its final consumption. Carbon accounting (or greenhouse gas accounting) is a framework of methods to measure and track how much greenhouse gas an organization emits. Relevance for greenhouse effect and global warming Overview of main sources Relevant greenhouse gases The major anthropogenic (human origin) sources of greenhouse gases are carbon dioxide (CO2), nitrous oxide (N2O), methane (CH4) and fluorinated gases (hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6), and nitrogen trifluoride (NF3)). Though the greenhouse effect is heavily driven by water vapor, human emissions of water vapor are not a significant contributor to warming. Although CFCs are greenhouse gases, they are regulated by the Montreal Protocol which was motivated by CFCs' contribution to ozone depletion rather than by their contribution to global warming. 
Ozone depletion has only a minor role in greenhouse warming, though the two processes are sometimes confused in the media. In 2016, negotiators from over 170 nations meeting at the summit of the United Nations Environment Programme reached a legally binding accord to phase out hydrofluorocarbons (HFCs) in the Kigali Amendment to the Montreal Protocol. The use of CFC-12 (except some essential uses) has been phased out due to its ozone depleting properties. The phasing-out of less active HCFC-compounds will be completed in 2030. Human activities Starting about 1750, industrial activity powered by fossil fuels began to significantly increase the concentration of carbon dioxide and other greenhouse gases. Emissions have grown rapidly since about 1950 with ongoing expansions in global population and economic activity following World War II. As of 2021, measured atmospheric concentrations of carbon dioxide were almost 50% higher than pre-industrial levels. The main sources of greenhouse gases due to human activity (also called carbon sources) are: Burning fossil fuels: Burning oil, coal and gas is estimated to have emitted 37.4 billion tonnes of CO2-eq in 2023. The largest single source is coal-fired power stations, with 20% of greenhouse gases (GHG) as of 2021. Land use change (mainly deforestation in the tropics) accounts for about a quarter of total anthropogenic GHG emissions. Livestock enteric fermentation and manure management, paddy rice farming, land use and wetland changes, human-made lakes, pipeline losses, and covered vented landfill emissions leading to higher methane atmospheric concentrations. Many of the newer style fully vented septic systems that enhance and target the fermentation process also are sources of atmospheric methane. Use of chlorofluorocarbons (CFCs) in refrigeration systems, and use of CFCs and halons in fire suppression systems and manufacturing processes. Agricultural soils emit nitrous oxide (N2O) partly due to application of fertilizers. The largest source of anthropogenic methane emissions is agriculture, closely followed by gas venting and fugitive emissions from the fossil-fuel industry. The largest agricultural methane source is livestock. Cattle (raised for both beef and milk, as well as for inedible outputs like manure and draft power) are the animal species responsible for the most emissions, representing about 65% of the livestock sector's emissions. Global estimates Global greenhouse gas emissions are about 50 Gt per year and for 2019 have been estimated at 57 Gt CO2-eq including 5 Gt due to land use change. In 2019, approximately 34% [20 Gt CO2-eq] of total net anthropogenic GHG emissions came from the energy supply sector, 24% [14 Gt CO2-eq] from industry, 22% [13 Gt CO2-eq] from agriculture, forestry and other land use (AFOLU), 15% [8.7 Gt CO2-eq] from transport and 6% [3.3 Gt CO2-eq] from buildings. The current CO2-equivalent emission rates, averaging 6.6 tonnes per person per year, are well over twice the estimated rate of 2.3 tonnes required to stay within the Paris Agreement's 2030 limit of a 1.5 °C (2.7 °F) increase over pre-industrial levels. While cities are sometimes considered to be disproportionate contributors to emissions, per-capita emissions tend to be lower for cities than the averages in their countries. A 2017 survey of corporations responsible for global emissions found that 100 companies were responsible for 71% of global direct and indirect emissions, and that state-owned companies were responsible for 59% of their emissions. 
China is, by a significant margin, Asia's and the world's largest emitter: it emits nearly 10 billion tonnes each year, more than one-quarter of global emissions. Other countries with fast growing emissions are South Korea, Iran, and Australia (which, apart from the oil-rich Persian Gulf states, now has the highest per capita emission rate in the world). On the other hand, annual per capita emissions of the EU-15 and the US are gradually decreasing over time. Emissions in Russia and Ukraine have decreased fastest since 1990 due to economic restructuring in these countries. 2015 was the first year to see both total global economic growth and a reduction of carbon emissions. High income countries compared to low income countries Annual per capita emissions in the industrialized countries are typically as much as ten times the average in developing countries. Due to China's fast economic development, its annual per capita emissions are quickly approaching the levels of those in the Annex I group of the Kyoto Protocol (i.e., the developed countries excluding the US). Africa and South America are both fairly small emitters, accounting for 3-4% of global emissions each. Both have emissions almost equal to international aviation and shipping. Calculations and reporting Variables There are several ways of measuring greenhouse gas emissions. Some variables that have been reported include: Definition of measurement boundaries: Emissions can be attributed geographically, to the area where they were emitted (the territory principle) or by the activity principle to the territory that produced the emissions. These two principles result in different totals when measuring, for example, electricity importation from one country to another, or emissions at an international airport. Time horizon of different gases: The contribution of a given greenhouse gas is reported as a CO2 equivalent. The calculation to determine this takes into account how long that gas remains in the atmosphere. This is not always known accurately and calculations must be regularly updated to reflect new information. The measurement protocol itself: This may be via direct measurement or estimation. The four main methods are the emission factor-based method, mass balance method, predictive emissions monitoring systems, and continuous emissions monitoring systems. These methods differ in accuracy, cost, and usability. Public information from space-based measurements of carbon dioxide by Climate Trace is expected to reveal individual large plants before the 2021 United Nations Climate Change Conference. These measures are sometimes used by countries to assert various policy/ethical positions on climate change. The use of different measures leads to a lack of comparability, which is problematic when monitoring progress towards targets. There are arguments for the adoption of a common measurement tool, or at least the development of communication between different tools. Reporting Emissions may be tracked over long time periods, known as historical or cumulative emissions measurements. Cumulative emissions provide some indicators of what is responsible for greenhouse gas atmospheric concentration build-up. National accounts balance The national accounts balance tracks emissions based on the difference between a country's exports and imports. For many richer nations, the balance is negative because more goods are imported than they are exported. 
This result is mostly due to the fact that it is cheaper to produce goods outside of developed countries, leading developed countries to become increasingly dependent on services and not goods. A positive account balance would mean that more production was occurring within a country, so more operational factories would increase carbon emission levels. Emissions may also be measured across shorter time periods. Emissions changes may, for example, be measured against the base year of 1990. 1990 was used in the United Nations Framework Convention on Climate Change (UNFCCC) as the base year for emissions, and is also used in the Kyoto Protocol (some gases are also measured from the year 1995). A country's emissions may also be reported as a proportion of global emissions for a particular year. Another measurement is of per capita emissions. This divides a country's total annual emissions by its mid-year population. Per capita emissions may be based on historical or annual emissions. Embedded emissions One way of attributing greenhouse gas emissions is to measure the embedded emissions (also referred to as "embodied emissions") of goods that are being consumed. Emissions are usually measured according to production, rather than consumption. For example, in the main international treaty on climate change (the UNFCCC), countries report on emissions produced within their borders, e.g., the emissions produced from burning fossil fuels. Under a production-based accounting of emissions, embedded emissions on imported goods are attributed to the exporting, rather than the importing, country. Under a consumption-based accounting of emissions, embedded emissions on imported goods are attributed to the importing country, rather than the exporting, country. A substantial proportion of emissions is traded internationally. The net effect of trade was to export emissions from China and other emerging markets to consumers in the US, Japan, and Western Europe. Carbon footprint Emission intensity Emission intensity is a ratio between greenhouse gas emissions and another metric, e.g., gross domestic product (GDP) or energy use. The terms "carbon intensity" and "emissions intensity" are also sometimes used. Emission intensities may be calculated using market exchange rates (MER) or purchasing power parity (PPP). Example tools and websites Carbon accounting (or greenhouse gas accounting) is a framework of methods to measure and track how much greenhouse gas an organization emits. Climate TRACE Historical trends Cumulative and historical emissions Cumulative anthropogenic (i.e., human-emitted) emissions of CO2 from fossil fuel use are a major cause of global warming, and give some indication of which countries have contributed most to human-induced climate change. In particular, CO2 stays in the atmosphere for at least 150 years and up to 1000 years, whilst methane disappears within a decade or so, and nitrous oxides last about 100 years. The graph gives some indication of which regions have contributed most to human-induced climate change. When these numbers are calculated as per capita cumulative emissions based on then-current population, the situation is shown even more clearly. The ratio in per capita emissions between industrialized countries and developing countries was estimated at more than 10 to 1. Non-OECD countries accounted for 42% of cumulative energy-related emissions between 1890 and 2007. 
Over this time period, the US accounted for 28% of emissions; the EU, 23%; Japan, 4%; other OECD countries 5%; Russia, 11%; China, 9%; India, 3%; and the rest of the world, 18%.The European Commission adopted a set of legislative proposals targeting a reduction of the emissions by 55% by 2030. Overall, developed countries accounted for 83.8% of industrial emissions over this time period, and 67.8% of total emissions. Developing countries accounted for industrial emissions of 16.2% over this time period, and 32.2% of total emissions. However, what becomes clear when we look at emissions across the world today is that the countries with the highest emissions over history are not always the biggest emitters today. For example, in 2017, the UK accounted for just 1% of global emissions. In comparison, humans have emitted more greenhouse gases than the Chicxulub meteorite impact event which caused the extinction of the dinosaurs. Transport, together with electricity generation, is the major source of greenhouse gas emissions in the EU. Greenhouse gas emissions from the transportation sector continue to rise, in contrast to power generation and nearly all other sectors. Since 1990, transportation emissions have increased by 30%. The transportation sector accounts for around 70% of these emissions. The majority of these emissions are caused by passenger vehicles and vans. Road travel is the first major source of greenhouse gas emissions from transportation, followed by aircraft and maritime. Waterborne transportation is still the least carbon-intensive mode of transportation on average, and it is an essential link in sustainable multimodal freight supply chains. Buildings, like industry, are directly responsible for around one-fifth of greenhouse gas emissions, primarily from space heating and hot water consumption. When combined with power consumption within buildings, this figure climbs to more than one-third. Within the EU, the agricultural sector presently accounts for roughly 10% of total greenhouse gas emissions, with methane from livestock accounting for slightly more than half of 10%. Estimates of total emissions do include biotic carbon emissions, mainly from deforestation. Including biotic emissions brings about the same controversy mentioned earlier regarding carbon sinks and land-use change. The actual calculation of net emissions is very complex, and is affected by how carbon sinks are allocated between regions and the dynamics of the climate system. The graphic shows the logarithm of 1850–2019 fossil fuel emissions; natural log on left, actual value of Gigatons per year on right. Although emissions increased during the 170-year period by about 3% per year overall, intervals of distinctly different growth rates (broken at 1913, 1945, and 1973) can be detected. The regression lines suggest that emissions can rapidly shift from one growth regime to another and then persist for long periods of time. The most recent drop in emissions growth – by almost 3 percentage points – was at about the time of the 1970s energy crisis. Percent changes per year were estimated by piecewise linear regression on the log data and are shown on the plot; the data are from The Integrated Carbon Observation system. 
Changes since a particular base year The sharp acceleration in CO2 emissions since 2000 to more than a 3% increase per year (more than 2 ppm per year) from 1.1% per year during the 1990s is attributable to the lapse of formerly declining trends in carbon intensity of both developing and developed nations. China was responsible for most of global growth in emissions during this period. Localised plummeting emissions associated with the collapse of the Soviet Union have been followed by slow emissions growth in this region due to more efficient energy use, made necessary by the increasing proportion of it that is exported. In comparison, methane has not increased appreciably, and N2O by only 0.25% per year. Using different base years for measuring emissions has an effect on estimates of national contributions to global warming. This can be calculated by dividing a country's highest contribution to global warming starting from a particular base year, by that country's minimum contribution to global warming starting from a particular base year. Choosing between base years of 1750, 1900, 1950, and 1990 has a significant effect for most countries. Data from Global Carbon Project The Global Carbon Project continuously releases data about emissions, budgets and concentrations. Emissions by type of greenhouse gas Carbon dioxide (CO2) is the dominant emitted greenhouse gas, while methane (CH4) emissions have almost the same short-term impact. Nitrous oxide (N2O) and fluorinated gases (F-gases) play a lesser role in comparison. Greenhouse gas emissions are measured in CO2 equivalents determined by their global warming potential (GWP), which depends on their lifetime in the atmosphere. Estimations largely depend on the ability of oceans and land sinks to absorb these gases. Short-lived climate pollutants (SLCPs) including methane, hydrofluorocarbons (HFCs), tropospheric ozone and black carbon persist in the atmosphere for a period ranging from days to 15 years, whereas carbon dioxide can remain in the atmosphere for millennia. Reducing SLCP emissions can cut the ongoing rate of global warming by almost half and reduce the projected Arctic warming by two-thirds. Greenhouse gas emissions in 2019 were estimated at 57.4 GtCO2e, while CO2 emissions alone made up 42.5 Gt including land-use change (LUC). While mitigation measures for decarbonization are essential on the longer term, they could result in weak near-term warming because sources of carbon emissions often also co-emit air pollution. Hence, pairing measures that target carbon dioxide with measures targeting non-CO2 pollutants – short-lived climate pollutants, which have faster effects on the climate – is essential for climate goals. Carbon dioxide (CO2) Fossil fuel (use for energy generation, transport, heating and machinery in industrial plants): oil, gas and coal (89%) are the major driver of anthropogenic global warming, with annual CO2 emissions of 35.6 Gt in 2019. Cement production (burning of fossil fuels) (4%) is estimated at 1.42 Gt. Land-use change (LUC) is the imbalance of deforestation and reforestation. Estimations are very uncertain at 4.5 Gt. Wildfires alone cause annual emissions of about 7 Gt. Non-energy use of fuels, carbon losses in coke ovens, and flaring in crude oil production. Methane (CH4) Methane has a high immediate impact with a 5-year global warming potential of up to 100. Given this, the current 389 Mt of methane emissions has about the same short-term global warming effect as CO2 emissions, with a risk of triggering irreversible changes in climate and ecosystems. 
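As a minimal sketch of how such comparisons are made, converting to CO2 equivalents simply multiplies the emitted mass of each gas by its global warming potential over a chosen time horizon. The GWP values below are the IPCC AR5 figures (without climate-carbon feedbacks) and are used here only for illustration; they differ between assessment reports and horizons.

```python
# Illustrative GWP values (IPCC AR5, without climate-carbon feedbacks).
GWP = {
    ("CH4", 100): 28,   # 100-year horizon
    ("CH4", 20): 84,    # 20-year horizon
    ("N2O", 100): 265,
}

def co2_equivalent_mt(mass_mt, gas, horizon_years=100):
    """Convert a mass of a greenhouse gas (in Mt) into Mt of CO2-equivalent."""
    return mass_mt * GWP[(gas, horizon_years)]

methane_mt = 389  # annual methane emissions cited above, in Mt
print(co2_equivalent_mt(methane_mt, "CH4", 100) / 1000)  # ~10.9 Gt CO2e on a 100-year horizon
print(co2_equivalent_mt(methane_mt, "CH4", 20) / 1000)   # ~32.7 Gt CO2e on a 20-year horizon
```

On the short 20-year horizon the result is of the same order as annual CO2 emissions, which is the comparison made above.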
For methane, a reduction of about 30% below current emission levels would lead to a stabilization in its atmospheric concentration. Fossil fuels (32%) (emissions due to losses during production and transport) are the largest source of methane emissions, including coal mining (12% of the methane total), gas distribution and leakages (11%) as well as gas venting in oil production (9%). Livestock (28%), with cattle (21%) as the dominant source, followed by buffalo (3%), sheep (2%), and goats (1.5%). Human waste and wastewater (21%): When biomass waste in landfills and organic substances in domestic and industrial wastewater is decomposed by bacteria in anaerobic conditions, substantial amounts of methane are generated. Rice cultivation (10%) on flooded rice fields is another agricultural source, where anaerobic decomposition of organic material produces methane. Nitrous oxide (N2O) N2O has a high GWP and significant ozone depleting potential. It is estimated that the global warming potential of N2O over 100 years is 265 times that of CO2. For N2O, a reduction of more than 50% would be required for a stabilization. Most emissions (56%) of nitrous oxide come from agriculture, especially meat production: cattle (droppings on pasture), fertilizers, and animal manure. Further contributions come from combustion of fossil fuels (18%) and biofuels as well as industrial production of adipic acid and nitric acid. F-gases Fluorinated gases include hydrofluorocarbons (HFC), perfluorocarbons (PFC), sulfur hexafluoride (SF6), and nitrogen trifluoride (NF3). They are used in switchgear in the power sector, in semiconductor manufacture and in aluminum production, and there is a largely unknown source of SF6 emissions. Continued phase-down of manufacture and use of HFCs under the Kigali Amendment to the Montreal Protocol will help reduce HFC emissions and concurrently improve the energy efficiency of appliances that use HFCs, like air conditioners, freezers and other refrigeration devices. Hydrogen Hydrogen leakages contribute to indirect global warming. When hydrogen is oxidized in the atmosphere, the result is an increase in concentrations of greenhouse gases in both the troposphere and the stratosphere. Hydrogen can leak from hydrogen production facilities as well as any infrastructure in which hydrogen is transported, stored, or consumed. Black carbon Black carbon is formed through the incomplete combustion of fossil fuels, biofuel, and biomass. It is not a greenhouse gas but a climate forcing agent. Black carbon can absorb sunlight and reduce albedo when deposited on snow and ice. Indirect heating can be caused by the interaction with clouds. Black carbon stays in the atmosphere for only several days to weeks. Emissions may be mitigated by upgrading coke ovens, installing particulate filters on diesel engines, reducing routine flaring, and minimizing open burning of biomass. 
Thus emissions arising from energy production may be categorized according to where they are emitted, or where the resulting energy is consumed. If emissions are attributed at the point of production, then electricity generators contribute about 25% of global greenhouse gas emissions. If these emissions are attributed to the final consumer, then 24% of total emissions arise from manufacturing and construction, 17% from transportation, 11% from domestic consumers, and 7% from commercial consumers. Around 4% of emissions arise from the energy consumed by the energy and fuel industry itself. The remaining third of emissions arise from processes other than energy production. 12% of total emissions arise from agriculture, 7% from land use change and forestry, 6% from industrial processes, and 3% from waste. Electricity generation Coal-fired power stations are the single largest emitter, with over 20% of global greenhouse gas emissions in 2018. Although much less polluting than coal plants, natural gas-fired power plants are also major emitters, bringing electricity generation as a whole to over 25% of global emissions in 2018. Notably, just 5% of the world's power plants account for almost three-quarters of carbon emissions from electricity generation, based on an inventory of more than 29,000 fossil-fuel power plants across 221 countries. In the 2022 IPCC report, it is noted that providing modern energy services universally would only increase greenhouse gas emissions by a few percent at most. This slight increase means that the additional energy demand that comes from supporting decent living standards for all would be far lower than current average energy consumption. In March 2024, the International Energy Agency (IEA) reported that in 2023, global CO2 emissions from energy sources increased by 1.1%, rising by 410 million tonnes to a record 37.4 billion tonnes, primarily due to coal. Drought-related decreases in hydropower output contributed a 170 million tonne rise in emissions; without this shortfall, the electricity sector's emissions would have fallen. The implementation of clean energy technologies like solar, wind, nuclear, heat pumps, and electric vehicles since 2019 has significantly tempered emissions growth, which would otherwise have been three times as large. Agriculture, forestry and land use Agriculture Deforestation Deforestation is a major source of greenhouse gas emissions. A study shows annual carbon emissions (or carbon loss) from tropical deforestation have doubled during the last two decades and continue to increase (from 0.97 ± 0.16 PgC per year in 2001–2005 to 1.99 ± 0.13 PgC per year in 2015–2019). Land-use change Land-use change, e.g., the clearing of forests for agricultural use, can affect the concentration of greenhouse gases in the atmosphere by altering how much carbon flows out of the atmosphere into carbon sinks. Accounting for land-use change can be understood as an attempt to measure "net" emissions, i.e., gross emissions from all sources minus the removal of emissions from the atmosphere by carbon sinks. There are substantial uncertainties in the measurement of net carbon emissions. Additionally, there is controversy over how carbon sinks should be allocated between different regions and over time. For instance, concentrating on more recent changes in carbon sinks is likely to favour those regions that have deforested earlier, e.g., Europe. 
In 1997, human-caused Indonesian peat fires were estimated to have released between 13% and 40% of the average annual global carbon emissions caused by the burning of fossil fuels. Transport of people and goods Transportation accounts for 15% of emissions worldwide. Over a quarter of global transport emissions are from road freight, so many countries are further restricting truck emissions to help limit climate change. Maritime transport accounts for 3.5% to 4% of all greenhouse gas emissions, primarily carbon dioxide. In 2022, the shipping industry's 3% of global greenhouse gas emissions made it "the sixth largest greenhouse gas emitter worldwide, ranking between Japan and Germany." Aviation Jet airliners contribute to climate change by emitting carbon dioxide (CO2), nitrogen oxides, contrails and particulates. In 2018, global commercial operations generated 2.4% of all CO2 emissions. As of 2020, approximately 3.5% of the overall human impact on climate comes from the aviation sector. The sector's impact on climate doubled over the last 20 years, but its share relative to other sectors did not change because other sectors grew as well. Some representative figures for average direct emissions (not accounting for high-altitude radiative effects) of airliners, expressed as CO2 and CO2 equivalent per passenger kilometre: Domestic, short-distance flights: 257 g/km CO2 or 259 g/km (14.7 oz/mile) CO2e Long-distance flights: 113 g/km CO2 or 114 g/km (6.5 oz/mile) CO2e Buildings and construction In 2018, manufacturing construction materials and maintaining buildings accounted for 39% of energy and process-related carbon dioxide emissions. Manufacture of glass, cement, and steel accounted for 11% of energy and process-related emissions. Because building construction is a significant investment, more than two-thirds of buildings in existence will still exist in 2050. Retrofitting existing buildings to become more efficient will be necessary to meet the targets of the Paris Agreement; it will be insufficient to only apply low-emission standards to new construction. Buildings that produce as much energy as they consume are called zero-energy buildings, while buildings that produce more than they consume are energy-plus. Low-energy buildings are designed to be highly efficient with low total energy consumption and carbon emissions; a popular type is the passive house. The construction industry has seen marked advances in building performance and energy efficiency over recent decades. Green building practices that avoid emissions or capture carbon already present in the environment, such as the use of hempcrete, cellulose fiber insulation, and landscaping, allow the construction industry to reduce its footprint. In 2019, the building sector was responsible for 12 GtCO2-eq of emissions. More than 95% of these emissions were CO2, and the remaining 5% were CH4, N2O, and halocarbons. The largest contributor to building sector emissions (49% of total) is the production of electricity for use in buildings. Of global building sector GHG emissions, 28% are produced during the manufacturing process of building materials such as steel, cement (a key component of concrete), and glass. The conventional processes used to produce steel and cement inherently result in large amounts of CO2 being emitted. For example, the production of steel in 2018 was responsible for 7 to 9% of global CO2 emissions. 
The remaining 23% of global building sector GHG emissions are produced directly on site during building operations. Embodied carbon emissions in construction sector Embodied carbon emissions, or upfront carbon emissions (UCE), are the result of creating and maintaining the materials that form a building. As of 2018, "Embodied carbon is responsible for 11% of global greenhouse gas emissions and 28% of global building sector emissions ... Embodied carbon will be responsible for almost half of total new construction emissions between now and 2050." GHG emissions which are produced during the mining, processing, manufacturing, transportation and installation of building materials are referred to as the embodied carbon of a material. The embodied carbon of a construction project can be reduced by using low-carbon materials for building structures and finishes, reducing demolition, and reusing buildings and construction materials whenever possible. Industrial processes Secunda CTL is the world's largest single CO2 emitter, at 56.5 million tonnes a year. Mining Flaring and venting of natural gas in oil wells is a significant source of greenhouse gas emissions. Its contribution to greenhouse gases has declined by three-quarters in absolute terms since a peak in the 1970s of approximately 110 million metric tons/year, and in 2004 accounted for about one-half of one percent of all anthropogenic carbon dioxide emissions. The World Bank estimates that 134 billion cubic meters of natural gas are flared or vented annually (2010 datum), an amount equivalent to the combined annual gas consumption of Germany and France or enough to supply the entire world with gas for 16 days. This flaring is highly concentrated: 10 countries account for 70% of emissions, and twenty for 85%. Steel and aluminum Steel and aluminum are key economic sectors where CO2 is produced. According to a 2013 study, "in 2004, the steel industry alone emits about 590M tons of CO2, which accounts for 5.2% of the global anthropogenic GHG emissions. CO2 emitted from steel production primarily comes from energy consumption of fossil fuel as well as the use of limestone to purify iron oxides." Plastics Plastics are produced mainly from fossil fuels. It was estimated that between 3% and 4% of global GHG emissions are associated with plastics' life cycles. The EPA estimates that as many as five mass units of carbon dioxide are emitted for each mass unit of polyethylene terephthalate (PET) produced (the type of plastic most commonly used for beverage bottles), and transporting the plastic also produces greenhouse gases. Plastic waste emits carbon dioxide when it degrades. In 2018 research claimed that some of the most common plastics in the environment release the greenhouse gases methane and ethylene when exposed to sunlight in an amount that can affect the Earth's climate. Due to the lightness of plastic versus glass or metal, plastic may reduce energy consumption. For example, packaging beverages in PET plastic rather than glass or metal is estimated to save 52% in transportation energy, assuming the glass or metal package is single-use. In 2019 a new report, "Plastic and Climate", was published. According to the report, the production and incineration of plastics will contribute the equivalent of 850 million tonnes of carbon dioxide (CO2) to the atmosphere in 2019. With the current trend, annual life cycle greenhouse gas emissions of plastics will grow to 1.34 billion tonnes by 2030. 
By 2050, the life cycle emissions of plastics could reach 56 billion tonnes, as much as 14 percent of the Earth's remaining carbon budget. The report says that only solutions which involve a reduction in consumption can solve the problem, while other approaches, such as biodegradable plastics, ocean cleanup, and the use of renewable energy in the plastics industry, can do little and in some cases may even worsen it. Pulp and paper The global print and paper industry accounts for about 1% of global carbon dioxide emissions. Greenhouse gas emissions from the pulp and paper industry are generated from the combustion of fossil fuels required for raw material production and transportation, wastewater treatment facilities, purchased power, paper transportation, printed product transportation, disposal and recycling. Various services Digital services In 2020, data centers (excluding cryptocurrency mining) and data transmission each used about 1% of world electricity. The digital sector produces between 2% and 4% of global GHG emissions, a large part of which is from chipmaking. However, the sector reduces emissions from other sectors which have a larger global share, such as transport of people, and possibly buildings and industry. Mining for proof-of-work cryptocurrencies requires enormous amounts of electricity and consequently comes with a large carbon footprint. Proof-of-work blockchains such as Bitcoin, Ethereum, Litecoin, and Monero were estimated to have added between 3 million and 15 million tonnes of carbon dioxide (CO2) to the atmosphere in the period from 1 January 2016 to 30 June 2017. By the end of 2021, Bitcoin was estimated to produce 65.4 million tonnes of CO2, as much as Greece, and consume between 91 and 177 terawatt-hours annually. Bitcoin is the least energy-efficient cryptocurrency, using 707.6 kilowatt-hours of electricity per transaction. A study in 2015 investigated the global electricity usage that can be ascribed to Communication Technology (CT) between 2010 and 2030. Electricity usage from CT was divided into four principal categories: (i) consumer devices, including personal computers, mobile phones, TVs and home entertainment systems; (ii) network infrastructure; (iii) data center computation and storage; and lastly (iv) production of the above categories. The study estimated that, in the worst-case scenario, CT electricity usage could contribute up to 23% of globally released greenhouse gas emissions in 2030. Health care The healthcare sector produces 4.4–4.6% of global greenhouse gas emissions. Based on the 2013 life cycle emissions in the health care sector, it is estimated that the GHG emissions associated with US health care activities may cause an additional 123,000 to 381,000 DALYs annually. Water supply and sanitation Tourism According to UNEP, global tourism is a significant contributor to the increasing concentrations of greenhouse gases in the atmosphere. Emissions by other characteristics The responsibility for anthropogenic climate change differs substantially among individuals, e.g. between groups or cohorts. By type of energy source By socio-economic class and age Fueled by the consumptive lifestyle of wealthy people, the wealthiest 5% of the global population have been responsible for 37% of the absolute increase in greenhouse gas emissions worldwide. There is a strong relationship between income and per capita carbon dioxide emissions. Almost half of the increase in absolute global emissions has been caused by the richest 10% of the population. 
The 2022 IPCC report states that lifestyle consumption by the poor and middle class in emerging economies produces roughly 5–50 times fewer emissions than that of high-income groups in developed countries. Variations in regional and national per capita emissions partly reflect different development stages, but they also vary widely at similar income levels. The 10% of households with the highest per capita emissions contribute a disproportionately large share of global household greenhouse gas emissions. Studies find that the most affluent citizens of the world are responsible for most environmental impacts, and robust action by them is necessary for prospects of moving towards safer environmental conditions. According to a 2020 report by Oxfam and the Stockholm Environment Institute, the richest 1% of the global population have caused twice as much carbon emissions as the poorest 50% over the 25 years from 1990 to 2015. Over that period, these groups accounted for 15% and 7% of cumulative emissions, respectively. The bottom half of the population is directly responsible for less than 20% of energy footprints and consumes less than the top 5% in terms of trade-corrected energy. The largest disproportionality was in the domain of transport, where, for example, the top 10% consume 56% of vehicle fuel and conduct 70% of vehicle purchases. However, wealthy individuals are also often shareholders and typically have more influence and, especially in the case of billionaires, may also direct lobbying efforts, direct financial decisions, and/or control companies. Based on a study in 32 developed countries, researchers found that "seniors in the United States and Australia have the highest per capita footprint, twice the Western average. The trend is mainly due to changes in expenditure patterns of seniors". Methods for reducing greenhouse gas emissions Governments have taken action to reduce greenhouse gas emissions to mitigate climate change. Countries and regions listed in Annex I of the United Nations Framework Convention on Climate Change (UNFCCC) (i.e., the OECD and former planned economies of the Soviet Union) are required to submit periodic assessments to the UNFCCC of actions they are taking to address climate change. Policies implemented by governments include, for example, national and regional targets to reduce emissions, promoting energy efficiency, and support for an energy transition. Projections for future emissions In October 2023, the US Energy Information Administration (EIA) released a series of projections out to 2050 based on current ascertainable policy interventions. Unlike many integrated systems models in this field, emissions are allowed to float rather than be pinned to net zero in 2050. A sensitivity analysis varied key parameters, primarily future GDP growth (2.6% per year as the reference case, with 1.8% and 3.4% variants) and secondarily technological learning rates, future crude oil prices, and similar exogenous inputs. The model results are far from encouraging. In no case did aggregate energy-related carbon emissions dip below 2022 levels. The IEO2023 exploration provides a benchmark and suggests that far stronger action is needed. 
By country List of countries In 2019, China, the United States, India, the EU27+UK, Russia, and Japan - the world's largest emitters - together accounted for 51% of the population, 62.5% of global gross domestic product, 62% of total global fossil fuel consumption, and emitted 67% of total global fossil CO2 emissions. Emissions from these five countries and the EU28 show different changes in 2019 compared to 2018: the largest relative increase is found for China (+3.4%), followed by India (+1.6%). In contrast, the EU27+UK (-3.8%), the United States (-2.6%), Japan (-2.1%) and Russia (-0.8%) reduced their fossil CO2 emissions. United States China India Society and culture Impacts of the COVID-19 pandemic In 2020, carbon dioxide emissions fell by 6.4% or 2.3 billion tonnes globally. In April 2020, emissions fell by up to 30%. In China, lockdowns and other measures resulted in a 26% decrease in coal consumption, and a 50% reduction in nitrogen oxide emissions. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions, with the direct impact of pandemic policies having a negligible long-term impact on climate change. See also References External links Latest official greenhouse gas emissions data of developed countries from the UNFCCC Earlier official greenhouse gas emissions data of developed countries from the UNFCCC Annual Greenhouse Gas Index (AGGI) from NOAA NOAA CMDL CCGG – Interactive Atmospheric Data Visualization NOAA data IPCC Website Official IPCC Sixth Assessment Report website Articles containing video clips Climate forcing Climate change
Greenhouse gas emissions
[ "Chemistry" ]
8,536
[ "Greenhouse gases", "Greenhouse gas emissions" ]
9,528,907
https://en.wikipedia.org/wiki/Nucleic%20acid%20quantitation
In molecular biology, quantitation of nucleic acids is commonly performed to determine the average concentrations of DNA or RNA present in a mixture, as well as their purity. Reactions that use nucleic acids often require particular amounts and purity for optimum performance. To date, there are two main approaches used by scientists to quantitate, or establish the concentration, of nucleic acids (such as DNA or RNA) in a solution. These are spectrophotometric quantification and UV fluorescence tagging in the presence of a DNA dye. Spectrophotometric analysis One of the most commonly used methods to quantitate DNA or RNA is spectrophotometric analysis using a spectrophotometer. A spectrophotometer is able to determine the average concentrations of the nucleic acids DNA or RNA present in a mixture, as well as their purity. Spectrophotometric analysis is based on the principle that nucleic acids absorb ultraviolet light in a specific pattern. In the case of DNA and RNA, a sample is exposed to ultraviolet light at a wavelength of 260 nanometres (nm) and a photo-detector measures the light that passes through the sample. Some of the ultraviolet light will pass through and some will be absorbed by the DNA / RNA. The more light absorbed by the sample, the higher the nucleic acid concentration in the sample. The resulting effect is that less light will strike the photodetector, and this will produce a higher optical density (OD). Using the Beer–Lambert law it is possible to relate the amount of light absorbed to the concentration of the absorbing molecule. At a wavelength of 260 nm, the average extinction coefficient for double-stranded DNA is 0.020 (μg/mL)−1 cm−1, for single-stranded DNA it is 0.027 (μg/mL)−1 cm−1, for single-stranded RNA it is 0.025 (μg/mL)−1 cm−1 and for short single-stranded oligonucleotides it is dependent on the length and base composition. Thus, an absorbance (A) of 1 corresponds to a concentration of 50 μg/mL for double-stranded DNA. This method of calculation is valid up to an A of at least 2. A more accurate extinction coefficient may be needed for oligonucleotides; these can be predicted using the nearest-neighbor model. Calculations The optical density is generated from the equation: Optical density = log (intensity of incident light / intensity of transmitted light). In practical terms, a sample that contains no DNA or RNA should not absorb any of the ultraviolet light and therefore produces an OD of 0: Optical density = log (100/100) = 0. When using spectrophotometric analysis to determine the concentration of DNA or RNA, the Beer–Lambert law is used to determine unknown concentrations without the need for standard curves. In essence, the Beer–Lambert law makes it possible to relate the amount of light absorbed to the concentration of the absorbing molecule. The following absorbance units to nucleic acid concentration conversion factors are used to convert OD to concentration of unknown nucleic acid samples: A260 dsDNA = 50 μg/mL A260 ssDNA = 33 μg/mL A260 ssRNA = 40 μg/mL Conversion factors When using a 10 mm path length, simply multiply the OD by the conversion factor to determine the concentration. For example, a dsDNA sample with an OD of 2.0 corresponds to a concentration of 100 μg/mL. When using a path length that is shorter than 10 mm, the resultant OD will be reduced by a factor of 10/path length. Using the example above with a 3 mm path length, the OD for the 100 μg/mL sample would be reduced to 0.6. 
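The arithmetic above is simple enough to script. The following is a minimal sketch in Python; the function names and the purity-ratio example absorbances are illustrative, not taken from the source.

```python
CONVERSION_FACTORS = {"dsDNA": 50.0, "ssDNA": 33.0, "ssRNA": 40.0}  # ug/mL per A260 unit

def nucleic_acid_concentration(a260, sample_type="dsDNA", path_length_mm=10.0):
    """Estimate concentration (ug/mL) from A260, normalizing to a 10 mm path length."""
    a260_normalized = a260 * (10.0 / path_length_mm)
    return a260_normalized * CONVERSION_FACTORS[sample_type]

def purity_ratios(a230, a260, a280):
    """Return the A260/A280 and A260/A230 ratios used to judge sample purity."""
    return a260 / a280, a260 / a230

# An OD of 0.6 read through a 3 mm path corresponds to 100 ug/mL of dsDNA.
print(nucleic_acid_concentration(0.6, "dsDNA", path_length_mm=3.0))  # -> 100.0
# A pure DNA sample should give ratios of roughly 1.8 (illustrative absorbance values).
print(purity_ratios(a230=0.55, a260=1.0, a280=0.55))
```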
Worked through by hand, normalizing the concentration to a 10 mm equivalent gives: 0.6 OD × (10/3) × 50 μg/mL = 100 μg/mL. Most spectrophotometers allow selection of the nucleic acid type and path length such that the resultant concentration is normalized to the 10 mm path length, based on the principles of the Beer–Lambert law. A260 as quantity measurement The "A260 unit" is used as a quantity measure for nucleic acids. One A260 unit is the amount of nucleic acid contained in 1 mL and producing an OD of 1. The same conversion factors apply, and therefore, in such contexts: 1 A260 unit dsDNA = 50 μg 1 A260 unit ssDNA = 33 μg 1 A260 unit ssRNA = 40 μg Sample purity (260:280 / 260:230 ratios) It is common for nucleic acid samples to be contaminated with other molecules (i.e. proteins, organic compounds, other). The secondary benefit of using spectrophotometric analysis for nucleic acid quantitation is the ability to determine sample purity using the 260 nm:280 nm calculation. The ratio of the absorbance at 260 and 280 nm (A260/280) is used to assess the purity of nucleic acids. For pure DNA, A260/280 is widely considered ~1.8, but it has been argued that, due to numeric errors in the original Warburg paper, this value actually corresponds to a mix of 60% protein and 40% DNA. For pure RNA, the A260/280 ratio is ~2.0. These ratios are commonly used to assess the amount of protein contamination that is left from the nucleic acid isolation process, since proteins absorb at 280 nm. The ratio of absorbance at 260 nm vs 280 nm is commonly used to assess DNA contamination of protein solutions, since proteins (in particular, the aromatic amino acids) absorb light at 280 nm. The reverse, however, is not true: it takes a relatively large amount of protein contamination to significantly affect the 260:280 ratio in a nucleic acid solution. The 260:280 ratio is therefore highly sensitive to nucleic acid contamination in protein samples, but it lacks sensitivity for protein contamination in nucleic acid samples (for RNA; a 100% DNA sample gives a ratio of approximately 1.8). This difference is due to the much higher mass attenuation coefficient nucleic acids have at 260 nm and 280 nm, compared to that of proteins. Because of this, even for relatively high concentrations of protein, the protein contributes relatively little to the 260 and 280 absorbance. While the protein contamination cannot be reliably assessed with a 260:280 ratio, this also means that it contributes little error to DNA quantity estimation. Contamination identification Examination of sample spectra may be useful in identifying problems with sample purity. Other common contaminants Contamination by phenol, which is commonly used in nucleic acid purification, can significantly throw off quantification estimates. Phenol absorbs with a peak at 270 nm and an A260/280 of 1.2. Nucleic acid preparations uncontaminated by phenol should have an A260/280 of around 2. Contamination by phenol can significantly contribute to overestimation of DNA concentration. Absorption at 230 nm can be caused by contamination by phenolate ion, thiocyanates, and other organic compounds. For a pure RNA sample, the A230:260:280 should be around 1:2:1, and for a pure DNA sample, the A230:260:280 should be around 1:1.8:1. Absorption at 330 nm and higher indicates particulates contaminating the solution, causing scattering of light in the visible range. The value in a pure nucleic acid sample should be zero. Negative values could result if an incorrect solution was used as blank. 
Alternatively, these values could arise due to fluorescence of a dye in the solution. Analysis with fluorescent dye tagging An alternative method to assess DNA and RNA concentration is to tag the sample with a fluorescent dye that binds to nucleic acids and selectively fluoresces when bound (e.g. ethidium bromide); the intensity of the resulting fluorescence is then measured. This method is useful for cases where the concentration is too low to accurately assess with spectrophotometry and in cases where contaminants absorbing at 260 nm make accurate quantitation by that method impossible. The benefit of fluorescence quantitation of DNA and RNA is the improved sensitivity over spectrophotometric analysis. That increase in sensitivity, however, comes at the cost of a higher price per sample and a lengthier sample preparation process. There are two main ways to approach this. "Spotting" involves placing a sample directly onto an agarose gel or plastic wrap. The fluorescent dye is either present in the agarose gel, or is added in appropriate concentrations to the samples on the plastic film. A set of samples with known concentrations is spotted alongside the sample. The concentration of the unknown sample is then estimated by comparison with the fluorescence of these known concentrations. Alternatively, one may run the sample through an agarose or polyacrylamide gel, alongside some samples of known concentration. As with the spot test, concentration is estimated through comparison of fluorescent intensity with the known samples. If the sample volumes are large enough to use microplates or cuvettes, the dye-loaded samples can also be quantified with a fluorescence photometer. Minimum sample volume starts at 0.3 μL. To date there is no fluorescence method to determine protein contamination of a DNA sample that is similar to the 260 nm/280 nm spectrophotometric version. See also Nucleic acid methods Phenol–chloroform extraction Column purification Protein methods References External links IDT online tool for predicting nucleotide UV absorption spectrum Ambion guide to RNA quantitation Hillary Luebbehusen, The significance of 260/230 Ratio in Determining Nucleic Acid Purity (pdf document) double stranded, single stranded DNA and RNA quantification by 260nm absorption, Sauer lab at OpenWetWare Absorbance to Concentration Web App @ DNA.UTAH.EDU Nucleic Acid Quantification Accuracy and Reproducibility Spectroscopy Biochemistry methods Nucleic acids
Nucleic acid quantitation
[ "Physics", "Chemistry", "Biology" ]
2,092
[ "Biochemistry methods", "Biomolecules by chemical classification", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Biochemistry", "Spectroscopy", "Nucleic acids" ]
9,529,092
https://en.wikipedia.org/wiki/Nuri%20Killigil
Nuri Killigil, also known as Nuri Pasha (1889–1949), was a general in the Ottoman Army. He was the half-brother of the Ottoman Minister of War, Enver Pasha. Military career Libya Infantry Machine-Gun Captain Nuri Efendi was sent to Libya on a Greek ship with Major Jafar al-Askari Bey and 10,000 pieces of gold. His mission was to organize and coordinate operations of Teşkilat-ı Mahsusa forces with local forces against Italian and British forces. They landed on the shore between Tobruk and Sallum on February 21, 1915, and then went to Ahmed Sharif es Senussi in Sallum. In 1917, in an attempt to reorganize the forces that had been dispersed by the British, the Ottoman General Staff established the "Africa Groups Command" (Afrika Grupları Komutanlığı), whose primary objective was the coastal regions of Libya. Lieutenant Colonel Nuri Bey was appointed its first commander, and his chief of staff was Staff Major Abdurrahman Nafiz Bey (Gürman). Caucasus Nuri Bey's elder brother Enver Pasha, commander of the Ottoman Army, who saw an opportunity in the Caucasus after the Russian Revolution took Russia out of World War I, called Nuri Bey back from Libya. He was promoted to Mirliva Fahri (honorary) Ferik and given the mission to form and command the volunteer-based Islamic Army of the Caucasus. Nuri Bey came to Yelizavetpol (present-day Ganja) on May 25, 1918, and began to organize his forces. The Army of Islam was formed officially on July 10, 1918. The campaign for the liberation of the Caucasus began, and fierce fighting took place between the Bolshevik Baku Commune and Armenian Dashnaktsutyun forces and the Islamic Army of the Caucasus. The Islamic Army of the Caucasus led by Nuri Pasha took control of the whole of Azerbaijan and the capital Baku on 15 September 1918. During this time, Nuri presided over the massacre of 30,000 Armenian civilians in the city of Baku. At the end of the war, Nuri was arrested by British troops and held in detention in Batum, awaiting trial for wartime crimes. In August 1919, his supporters ambushed guards escorting him and helped him escape to Erzurum. Later life In 1938, Killigil bought a coal mining plant in Turkey. He began to organize the production of guns, bullets, gas masks, and other war equipment. After some time, he announced the end of the production of weapons, but still secretly continued production. Killigil established contact with Franz von Papen, the Nazi ambassador in Ankara, in 1941 in order to win German support for the Pan-Turkic cause. With his assistance, the Turkestan Legion was formed by the Schutzstaffel. During World War II, Killigil was in Germany, attempting to develop strong relationships between Nazi Germany and Turkey and achieve the recognition of the independence of Azerbaijan. He was unsuccessful. In September 1941, Killigil offered to organize an anti-Soviet, pan-Turkish uprising in the Caucasus, but the Germans declined the offer. Death Killigil was killed on 2 March 1949 by an explosion in his factory in Istanbul that also killed 28 other people. He was buried without a proper funeral ceremony at the time, as holding one for a dismembered corpse was viewed as contrary to religious beliefs. A formal funeral service, attended by the Azerbaijani politician Ganira Pashayeva and representatives from the Municipality of Istanbul, was only held in 2016. 
Sources 1889 births 1949 deaths People from Bitola Ottoman Military Academy alumni Ottoman military personnel of World War I Ottoman Army generals Armenian genocide perpetrators Turkish collaborators with Nazi Germany Enver Pasha 20th-century Turkish businesspeople Deaths from explosion Industrial accident deaths Pan-Turkists Turkish anti-communists Turkish escapees Accidental deaths in Turkey Escapees from British military detention
Nuri Killigil
[ "Chemistry" ]
794
[ "Deaths from explosion", "Explosions" ]
9,529,253
https://en.wikipedia.org/wiki/Regenerative%20heat%20exchanger
A regenerative heat exchanger, or more commonly a regenerator, is a type of heat exchanger where heat from the hot fluid is intermittently stored in a thermal storage medium before it is transferred to the cold fluid. To accomplish this the hot fluid is brought into contact with the heat storage medium, then the fluid is displaced with the cold fluid, which absorbs the heat. In regenerative heat exchangers, the fluid on either side of the heat exchanger can be the same fluid. The fluid may go through an external processing step, and then it is flowed back through the heat exchanger in the opposite direction for further processing. Usually the application will use this process cyclically or repetitively. Regenerative heating was one of the most important technologies developed during the Industrial Revolution when it was used in the hot blast process on blast furnaces. It was later used in glass melting furnaces and steel making, to increase the efficiency of open hearth furnaces, and in high pressure boilers and chemical and other applications, where it continues to be important today. History The first regenerator was invented by Rev. Robert Stirling in 1816, and is also found as a component of some examples of his Stirling engine. The simplest Stirling engines, including most models, use the walls of the cylinder and displacer as a rudimentary regenerator, which is simpler and cheaper to construct but far less efficient. Later applications included the blast furnace process known as hot blast and the open hearth furnace also called the Siemens regenerative furnace (which was used for making glass), where the hot exhaust gases from combustion are passed through firebrick regenerative chambers, which are thus heated. The flow is then reversed, so that the heated bricks preheat the fuel. Edward Alfred Cowper applied the regeneration principle to blast furnaces, in the form of the "Cowper stove", patented in 1857. This is almost invariably used with blast furnaces to this day. Types of regenerators Regenerators exchange heat from one process fluid to an intermediate solid heat storage medium, then that medium exchanges heat with a second process fluid flow. The two flows are either separated in time, alternately circulating through the storage medium, or are separated in space and the heat storage medium is moved between the two flows. In rotary regenerators, or thermal wheels, the heat storage "matrix" in the form of a wheel or drum, that rotates continuously through two counter-flowing streams of fluid. In this way, the two streams are mostly separated. Only one stream flows through each section of the matrix at a time; however, over the course of a rotation, both streams eventually flow through all sections of the matrix in succession. The heat storage medium can be a relatively fine-grained set of metal plates or wire mesh, made of some resistant alloy or coated to resist chemical attack by the process fluids, or made of ceramics in high temperature applications. A large amount of heat transfer area can be provided in each unit volume of the rotary regenerator, compared to a shell-and-tube heat exchanger - up to 1000 square feet of surface can be contained in each cubic foot of regenerator matrix, compared to about 30 square feet in each cubic foot of a shell-and-tube exchanger. Each portion of the matrix will be nearly isothermal, since the rotation is perpendicular to both the temperature gradient and flow direction, and not through them. The two fluid streams flow counter-current. 
The fluid temperatures vary across the flow area; however the local stream temperatures are not a function of time. The seals between the two streams are not perfect, so some cross contamination will occur. The allowable pressure level of a rotary regenerator is relatively low, compared to heat exchangers. In a fixed matrix regenerator, a single fluid stream has cyclical, reversible flow; it is said to flow "counter-current". This regenerator may be part of a valveless system, such as a Stirling engine. In another configuration, the fluid is ducted through valves to different matrices in alternate operating periods resulting in outlet temperatures that vary with time. For example, a blast furnace may have several "stoves" or "checkers" full of refractory fire brick. The hot gas from the furnace is ducted through the brickwork for some interval, say one hour, until the brick reaches a high temperature. Valves then operate and switch the cold intake air through the brick, recovering the heat for use in the furnace. Practical installations will have multiple stoves and arrangements of valves to gradually transfer flow between a "hot" stove and an adjacent "cold" stove, so that the variations in the outlet air temperature are reduced. Another type of regenerator is called a micro scale regenerative heat exchanger. It has a multilayer grating structure in which each layer is offset from the adjacent layer by half a cell which has an opening along both axes perpendicular to the flow axis. Each layer is a composite structure of two sublayers, one of a high thermal conductivity material and another of a low thermal conductivity material. When a hot fluid flows through the cell, heat from the fluid is transferred to the cell walls, and stored there. When the fluid flow reverses direction, heat is transferred from the cell walls back to the fluid. A third type of regenerator is called a "Rothemühle" regenerator. This type has a fixed matrix in a disk shape, and streams of fluid are ducted through rotating hoods. The Rothemühle regenerator is used as an air preheater in power generating plants. The thermal design of this regenerator is the same as of other types of regenerators. Biology The nose and throat work as regenerative heat exchangers during breathing. The cooler air coming in is warmed, so that it reaches the lungs as warm air. On the way back out, this warmed air deposits much of its heat back onto the sides of the nasal passages, so that these passages are then ready to warm the next batch of air coming in. Some animals, including humans, have curled sheets of bone inside the nose called nasal turbinates to increase the surface area for heat exchange. Cryogenics Regenerative heat exchangers are made up of materials with high volumetric heat capacity and low thermal conductivity in the longitudinal (flow) direction. At cryogenic (very low) temperatures around 20 K, the specific heat of metals is low, and so a regenerator must be larger for a given heat load. Advantages of regenerators The advantages of a regenerator over a recuperating (counter-flowing) heat exchanger is that it has a much higher surface area for a given volume, which provides a reduced exchanger volume for a given energy density, effectiveness and pressure drop. This makes a regenerator more economical in terms of materials and manufacturing, compared to an equivalent recuperator. 
The design of inlet and outlet headers used to distribute hot and cold fluids in the matrix is much simpler in counter flow regenerators than recuperators. The reason behind this is that both streams flow in different sections for a rotary regenerator and one fluid enters and leaves one matrix at a time in a fixed-matrix regenerator. Furthermore, flow sectors for hot and cold fluids in rotary regenerators can be designed to optimize pressure drop in the fluids. The matrix surfaces of regenerators also have self-cleaning characteristics, reducing fluid-side fouling and corrosion. Finally properties such as small surface density and counter-flow arrangement of regenerators make it ideal for gas-gas heat exchange applications requiring effectiveness exceeding 85%. The heat transfer coefficient is much lower for gases than for liquids, thus the enormous surface area in a regenerator greatly increases heat transfer. Disadvantages of regenerators The major disadvantage of rotary and fixed-matrix regenerators is that there is always some mixing of the fluid streams, and they can not be completely separated. There is an unavoidable carryover of a small fraction of one fluid stream into the other. In the rotary regenerator, the carryover fluid is trapped inside the radial seal and in the matrix, and in a fixed-matrix regenerator, the carryover fluid is the fluid that remains in the void volume of the matrix. This small fraction will mix with the other stream in the following half-cycle. Therefore, rotary and fixed-matrix regenerators are only used when it is acceptable for the two fluid streams to be mixed. Mixed flow is common for gas-to-gas heat and/or energy transfer applications, and less common in liquid or phase-changing fluids since fluid contamination is often prohibited with liquid flows. The constant alternation of heating and cooling that takes place in regenerative heat exchangers puts a lot of stress on the components of the heat exchanger, which can cause cracking or breakdown of materials. See also Countercurrent exchange Economizer Heat exchanger Hot blast Recuperator desalination – some thermal desalination plants use regenerative heat exchangers Thermal wheel, a regenerative heat exchanger where the heated medium is rotated continuously between the two gasflows. References Bibliography https://books.google.com/books?id=beSXNAZblWQC&pg=PA8&dq=fluid+heat+exchangers&sig=v3NF11puSFyQiUfPV2VbWjOEHik#PPA51,M1 Heat exchangers Energy recovery
Regenerative heat exchanger
[ "Chemistry", "Engineering" ]
1,972
[ "Chemical equipment", "Heat exchangers" ]
9,529,645
https://en.wikipedia.org/wiki/Internet%20Routing%20Registry
An Internet Routing Registry (IRR) is a database of Internet route objects used for determining and sharing routes and related information for configuring routers, with a view to avoiding problematic issues between Internet service providers. The Internet routing registry works by providing an interlinked hierarchy of objects designed to facilitate the organization of IP routing between organizations, and also to provide data in an appropriate format for automatic programming of routers. Network engineers from participating organizations are authorized to modify the Routing Policy Specification Language (RPSL) objects in the registry for their own networks. Any network engineer, or member of the public, is then able to query the route registry for particular information of interest. Relevant objects AUT-NUM INETNUM6 ROUTE INETNUM ROUTE6 AS-SET Status of implementation In some RIR regions, objects such as AUT-NUM (which represents, for example, an Autonomous System) are only created or updated when the record is first registered by the RIR; as long as nobody reports issues, the records remain in their original, potentially unreliable state. Most global ASNs provide valid information about their resources in objects such as their AS-SETs. Peering between networks is highly automated, so inaccurate registry data can be very harmful for the ASNs involved. See also Resource Public Key Infrastructure Autonomous system (Internet) References Representation of IP Routing Policies in a Routing Registry (ripe-81++) Internet Routing Registry Tutorial External links RFC 2622, Routing Policy Specification Language RFC 2650, Using RPSL in Practice IRR LIST, A list of routing registries with links to databases and general information IRR accuracy, BGPmon.net - How accurate are the Internet Route Registries (IRR) Internet architecture Routing
Internet Routing Registry
[ "Technology" ]
364
[ "Internet architecture", "IT infrastructure" ]
9,529,754
https://en.wikipedia.org/wiki/Operation%20Popeye
Operation Popeye / Sober Popeye (Project Controlled Weather Popeye / Motorpool / Intermediary-Compatriot) was a military cloud-seeding project carried out by the U.S. Air Force during the Vietnam War in 1967–1972. The highly classified program attempted to extend the monsoon season over specific areas of the Ho Chi Minh Trail, to disrupt North Vietnamese military supplies by softening road surfaces and causing landslides. The chemical weather modification program was conducted from Thailand over Cambodia, Laos, and Vietnam and allegedly sponsored by Secretary of State Henry Kissinger and the CIA without the authorization of then Secretary of Defense Melvin Laird, who had categorically denied to Congress that a program for modification of the weather for use as a tactical weapon even existed. Build up A report titled Rainmaking in SEASIA outlines use of lead iodide and silver iodide deployed by aircraft in a program that was developed in California at Naval Air Weapons Station China Lake and tested in Okinawa, Guam, the Philippines, Texas, and Florida in a hurricane study program called Project Stormfury. Objectives Operation Popeye's goal was to increase rainfall in carefully selected areas to deny the Vietnamese enemy, namely military supply trucks, the use of roads by: Softening road surfaces Causing landslides along roadways Washing out river crossings Maintaining saturated soil conditions beyond the normal time span. The goal of the operation was to extend days of rainfall by about 30 to 45 days each monsoon season. Implementation The 54th Weather Reconnaissance Squadron carried out the operation using the slogan "make mud, not war." Starting on 20 March 1967, and continuing through every rainy season (March to November) in Southeast Asia until 1972, operational cloud seeding missions were flown. Three C-130 Hercules aircraft and two F-4C Phantom aircraft based at Udon Thani Royal Thai Air Force Base in Thailand flew two sorties per day. The aircraft were officially on weather reconnaissance missions and the aircraft crews as part of their normal duty also generated weather report data. The crews, all from the 54th Weather Reconnaissance Squadron, were rotated into the operation on a regular basis from Guam. Inside the squadron, the rainmaking operations were code-named "Motorpool". Public revelation Reporter Jack Anderson published a story in March 1971 concerning Operation Popeye (though in his column, it was called Intermediary-Compatriot). The name Operation Popeye (Pop Eye) entered the public space through a brief mention in the Pentagon Papers and a 3 July 1972, article in the New York Times. See also Project Stormfury Weather warfare Weather modification WC-130 Hercules Sources Weather Modification Hearing, United States Senate Subcommittee on Oceans and International Environment of the Committee on Foreign Relations, March 20, 1974 Published government documents Keefer, Edward C. Foreign Relations of the United States 1964–1968, Volume XXVII, Laos United States Government Printing Office, 1998. References External links Transcript of the US Senate Hearing on Weather Modification of March 20, 1974 Operation Motorpool Gallery Dr. 
Edwin X Berry's 1969 trip to Philippines for Operation Popeye (Berry now at www.climatephysics.com) Further reading Weather modification Climate of Asia Battles and operations of the Vietnam War in 1967 Battles and operations of the Vietnam War in 1968 Battles and operations of the Vietnam War in 1969 Battles and operations of the Vietnam War in 1970 Battles and operations of the Vietnam War in 1971 Battles and operations of the Vietnam War in 1972
Operation Popeye
[ "Engineering" ]
697
[ "Planetary engineering", "Weather modification" ]
9,530,099
https://en.wikipedia.org/wiki/Mesoamerican%20Long%20Count%20calendar
The Mesoamerican Long Count calendar is a non-repeating base-20 and base-18 calendar used by pre-Columbian Mesoamerican cultures, most notably the Maya. For this reason, it is often known as the Maya Long Count calendar. Using a modified vigesimal tally, the Long Count calendar identifies a day by counting the number of days passed since a mythical creation date that corresponds to August 11, 3114 BCE in the proleptic Gregorian calendar. The Long Count calendar was widely used on monuments. Background The two most widely used calendars in pre-Columbian Mesoamerica were the 260-day Tzolkʼin and the 365-day Haabʼ. The equivalent Aztec calendars are known in Nahuatl as the Tonalpohualli and Xiuhpohualli, respectively. The combination of a Haabʼ and a Tzolkʼin date identifies a day in a combination which does not occur again for 18,980 days (52 Haabʼ cycles of 365 days equals 73 Tzolkʼin cycles of 260 days, approximately 52 years), a period known as the Calendar Round. To identify days over periods longer than this, Mesoamericans used the Long Count calendar. Long Count periods The Long Count calendar identifies a date by counting the number of days from a starting date that is generally calculated to be August 11, 3114 BCE in the proleptic Gregorian calendar or September 6 in the Julian calendar (or −3113 in astronomical year numbering). There has been much debate over the precise correlation between the Western calendars and the Long Count calendars. The August 11 date is based on the GMT correlation. The completion of 13 bʼakʼtuns (August 11, 3114 BCE) marks the Creation of the world of human beings according to the Maya. On this day, Raised-up-Sky-Lord caused three stones to be set by associated gods at Lying-Down-Sky, First-Three-Stone-Place. Because the sky still lay on the primordial sea, it was black. The setting of the three stones centered the cosmos, which allowed the sky to be raised. Rather than using a base-10 scheme, the Long Count days were tallied in a modified base-20 scheme. In a pure base-20 scheme, 0.0.0.1.5 is equal to 25 and 0.0.0.2.0 is equal to 40. The Long Count is not pure base-20, however, since the second digit from the right (and only that digit) rolls over to zero when it reaches 18. Thus 0.0.1.0.0 does not represent 400 days, but rather only 360 days, and 0.0.0.17.19 represents 359 days. The name bʼakʼtun was invented by modern scholars. The numbered Long Count was no longer in use by the time the Spanish arrived in the Yucatán Peninsula, although unnumbered kʼatuns and tuns were still in use. Instead the Maya were using an abbreviated Short Count. Mesoamerican numerals Long Count dates are written with Mesoamerican numerals. A dot represents 1 while a bar equals 5. The shell glyph was used to represent the zero concept. The Long Count calendar required the use of zero as a place-holder and presents one of the earliest uses of the zero concept in history. On Maya monuments, the Long Count syntax is more complex. The date sequence is given once, at the beginning of the inscription, and opens with the so-called ISIG (Introductory Series Initial Glyph) which reads tzik-a(h) habʼ [patron of Haabʼ month] ("revered was the year-count with the patron [of the month]"). Next come the 5 digits of the Long Count, followed by the Calendar Round (tzolkʼin and Haabʼ) and supplementary series. 
The supplementary series is optional and contains lunar data, for example, the age of the moon on the day and the calculated length of current lunation. The text then continues with whatever activity occurred on that date. A drawing of a full Maya Long Count inscription is shown below. Earliest Long Counts The earliest contemporaneous Long Count inscription yet discovered is on Stela 2 at Chiapa de Corzo, Chiapas, Mexico, showing a date of 36 BCE, although Stela 2 from Takalik Abaj, Guatemala might be earlier. Takalik Abaj Stela 2's highly battered Long Count inscription shows 7 bak'tuns, followed by k'atuns with a tentative 6 coefficient, but that could also be 11 or 16, giving the range of possible dates to fall between 236 and 19 BCE. Although Takalik Abaj Stela 2 remains controversial, this table includes it, as well as six other artifacts with the eight oldest Long Count inscriptions according to Dartmouth professor Vincent H. Malmström (two of the artifacts contain two dates and Malmström does not include Takalik Abaj Stela 2). Interpretations of inscriptions on some artifacts differ. Of the six sites, three are on the western edge of the Maya homeland and three are several hundred kilometers further west, leading some researchers to believe that the Long Count calendar predates the Maya. La Mojarra Stela 1, the Tuxtla Statuette, Tres Zapotes Stela C and Chiapa Stela 2 are all inscribed in an Epi-Olmec, not Maya, style. El Baúl Stela 2, on the other hand, was created in the Izapan style. The first unequivocally Maya artifact is Stela 29 from Tikal, with the Long Count date of 292 CE (8.12.14.8.15), more than 300 years after Stela 2 from Chiapa de Corzo. More recently, with the discovery in Guatemala of the San Bartolo (Maya site) stone block text ( 300 BCE), it has been argued that this text celebrates an upcoming time period ending celebration. This time period may have been projected to end sometime between 7.3.0.0.0 (295 BCE) and 7.5.0.0.0 (256 BCE). Besides being the earliest Maya hieroglyphic text so far uncovered, this would arguably be the earliest evidence to date of Long Count notation in Mesoamerica. Correlations between Western calendars and the Long Count The Maya and Western calendars are correlated by using a Julian day number (JDN) of the starting date of the current creation — 13.0.0.0.0, 4 Ajaw, 8 Kumkʼu. This is referred to as a "correlation constant". The generally accepted correlation constant is the Modified Thompson 2, "Goodman–Martinez–Thompson", or GMT correlation of 584,283 days. Using the GMT correlation, the current creation started on September 6, −3113 (Julian astronomical) – August 11, 3114 BCE in the Proleptic Gregorian calendar. The study of correlating the Maya and western calendar is referred to as the correlation question. The GMT correlation is also called the 11.16 correlation. In Breaking the Maya Code, Michael D. Coe writes: "In spite of oceans of ink that have been spilled on the subject, there now is not the slightest chance that these three scholars (conflated to G-M-T when talking about the correlation) were not right ...". The evidence for the GMT correlation is historical, astronomical and archaeological: Historical: Calendar Round dates with a corresponding Julian date are recorded in Diego de Landa's Relación de las cosas de Yucatán (written circa 1566), the Chronicle of Oxcutzkab and the books of Chilam Balam. De Landa records a date that is a Tun ending in the Short Count. Oxkutzcab contains 12 Tun endings. 
Bricker and Bricker find that only the GMT correlation is consistent with these dates. The Book of Chilam Balam of Chumayel contains the only colonial reference to classic long-count dates. The Julian calendar date of 11.16.0.0.0 (November 2, 1539) confirms the GMT correlation. The Annals of the Cakchiquels contains numerous Tzolkʼin dates correlated with European dates. These confirm the GMT correlation. Weeks, Sachse and Prager transcribed three divinatory calendars from highland Guatemala. They found that the 1772 calendar confirms the GMT correlation. The fall of the capital city of the Aztec Empire, Tenochtitlan, occurred on August 13, 1521. A number of different chroniclers wrote that the Tzolkʼin (Tonalpohualli) date of the event was 1 Snake. Post-conquest scholars such as Sahagún and Durán recorded Tonalpohualli dates with a calendar date. Many indigenous communities in the Mexican states of Veracruz, Oaxaca and Chiapas and in Guatemala, principally those speaking the Mayan languages Ixil, Mam, Pokomchí and Quiché, keep the Tzolkʼin and in many cases the Haabʼ. These are all consistent with the GMT correlation. Munro Edmonsen studied 60 Mesoamerican calendars, 20 of which have known correlations to European calendars, and found remarkable consistency among them and that only the GMT correlation fits the historical, ethnographic and astronomical evidence. Astronomical: Any correct correlation must match the astronomical content of classic inscriptions. The GMT correlation does an excellent job of matching lunar data in the supplementary series. For example: An inscription at the Temple of the Sun at Palenque records that on Long Count 9.16.4.10.8 there were 26 days completed in a 30-day lunation. This Long Count is also the entry date for the eclipse table of the Dresden Codex. Using the third method, the Palenque system, the new moon would have been the first evening when one could look to the west after sunset and see the thin crescent moon. Given our modern ability to know exactly where to look, when the crescent Moon is favorably located, from an excellent site, on rare occasions, using binoculars or a telescope, observers can see and photograph the crescent moon less than one day after conjunction. Generally, most observers cannot see the new Moon with the naked eye until the first evening when the lunar phase day is at least 1.5. If one assumes that the new moon is the first day when the lunar phase day is at least 1.5 at six in the evening in time zone UTC−6 (the time zone of the Maya area), the GMT correlation will match many lunar inscriptions exactly. In this example the lunar phase day was 27.7 (26 days counting from zero) at 6 pm after a conjunction at 1:25 am and a new Moon when the lunar phase day was 1.7 at 6 pm on (Julian calendar). This works well for many but not all lunar inscriptions. Modern astronomers refer to the conjunction of the Sun and Moon (the time when the Sun and Moon have the same ecliptic longitude) as the new moon. But Mesoamerican astronomy was observational, not theoretical. The people of Mesoamerica did not know about the Copernican nature of the solar system — they had no theoretical understanding of the orbital nature of the heavenly bodies. Some authors analyze the lunar inscriptions based on this modern understanding of the motions of the Moon but there is no evidence that the Mesoamericans shared it. The first method seems to have been used for other inscriptions such as Quirgua stela E (9.17.0.0.0). 
By the third method, that stela should show a moon age of 26 days, but in fact it records a new moon. Using the GMT correlation at six AM in the time zone UTC−6, this would be 2.25 days before conjunction, so it could record the first day when one could not see the waning moon. Fuls analysed these inscriptions and found strong evidence for the Palenque system and the GMT correlation; however, he cautioned: "Analysis of the Lunar Series shows that at least two different methods and formulas were used to calculate the moon's age and position in the six-month cycle ..." which gives eclipse seasons when the Moon is near its ascending or descending node and an eclipse is likely to occur. Dates converted using the GMT correlation agree closely with the Dresden Codex eclipse tables. The Dresden Codex contains a Venus table which records the heliacal risings of Venus. Using the GMT correlation these agree closely with modern astronomical calculations. Archaeological: Various items that can be associated with specific Long Count dates have been isotope dated. In 1959 the University of Pennsylvania carbon dated samples from ten wood lintels from Tikal. These were carved with a date equivalent to 741 AD, using the GMT correlation. The average carbon date was 746±34 years. Recently one of these, Lintel 3 from Temple I, was analyzed again using more accurate methods and found to agree closely with the GMT correlation. In 2012, using modern AMS radiocarbon dating, a single beam from Tikal was dated, also strongly supporting the GMT. If a proposed correlation only has to agree with one of these lines of evidence there could be numerous other possibilities. Astronomers have proposed many correlations, for example: Lounsbury, Fuls, et al., Böhm and Böhm and Stock. Today, (UTC), in the Long Count is (using GMT correlation). 2012 and the Long Count According to the Popol Vuh, a book compiling details of creation accounts known to the Kʼicheʼ Maya of the Colonial-era highlands, humankind lives in the fourth world. The Popol Vuh describes the first three creations that the gods failed in making and the creation of the successful fourth world where men were placed. In the Maya Long Count, the previous creation ended at the end of a 13th bʼakʼtun. The previous creation ended on a Long Count of 12.19.19.17.19. Another 12.19.19.17.19 occurred on December 20, 2012 (Gregorian Calendar), followed by the start of the 14th bʼakʼtun, 13.0.0.0.0, on December 21, 2012. There are only two references to the current creation's 13th bʼakʼtun in the fragmentary Mayan corpus: Tortuguero Monument 6, part of a ruler's inscription and the recently discovered La Corona Hieroglyphic Stairway 2, Block V. Maya inscriptions occasionally reference future predicted events or commemorations that would occur on dates that lie beyond 2012 (that is, beyond the completion of the 13th bʼakʼtun of the current era). Most of these are in the form of "distance dates" where some Long Count date is given, together with a Distance Number that is to be added to the Long Count date to arrive at this future date. For example, on the west panel at the Temple of Inscriptions in Palenque, a section of the text projects into the future to the 80th Calendar Round (CR) 'anniversary' of the famous Palenque ruler Kʼinich Janaabʼ Pakal's accession to the throne (Pakal's accession occurred on a Calendar Round date 5 Lamat 1 Mol, at Long Count 9.9.2.4.8 equivalent to 27 July 615 CE in the proleptic Gregorian calendar). 
It does this by commencing with Pakal's birthdate 9.8.9.13.0   8 Ajaw 13 Pop (24 March ) and adding to it the Distance Number 10.11.10.5.8. This calculation arrives at the 80th Calendar Round since his accession, a day that also has a CR date of , but which lies over 4,000 years in the future from Pakal's time—the day 21 October in the year 4772. The inscription notes that this day would fall eight days after the completion of the 1st piktun (since the creation or zero date of the Long Count system), where the piktun is the next-highest order above the bʼakʼtun in the Long Count. If the completion date of that piktun—13 October 4772—were to be written out in Long Count notation, it could be represented as 1.0.0.0.0.0. The 80th CR anniversary date, eight days later, would be 1.0.0.0.0.8   5 Lamat 1 Mol. Despite the publicity generated by the 2012 date, Susan Milbrath, curator of Latin American Art and Archaeology at the Florida Museum of Natural History, stated that "We have no record or knowledge that [the Maya] would think the world would come to an end" in 2012. USA Today writes For the ancient Maya, it was a huge celebration to make it to the end of a whole cycle,' says Sandra Noble, executive director of the Foundation for the Advancement of Mesoamerican Studies in Crystal River, Florida. To render December 21, 2012, as a doomsday event or moment of cosmic shifting, she says, is 'a complete fabrication and a chance for a lot of people to cash in. "There will be another cycle," says E. Wyllys Andrews V, director of the Tulane University Middle American Research Institute (MARI). "We know the Maya thought there was one before this, and that implies they were comfortable with the idea of another one after this." Converting between the Long Count and western calendars Calculating a Western calendar date from a Long Count It is important to know the difference between the Julian and Gregorian calendars when calculating a Western calendar date from a Long Count date. Using as an example the Long Count date of 9.10.11.17.0 (Long Count date mentioned on the Palenque Palace Tablet), first calculate the number of days that have passed since the zero date (August 11, 3114 BCE; GMT correlation, in the Proleptic Gregorian calendar, September 6, −3113 Julian astronomical). Then add the GMT correlation to the total number of days. 1,372,300 + 584,283 = 1,956,583 This number is a Julian day. To convert a Julian day to a Proleptic Gregorian calendar date: From this number, subtract the nearest smaller Julian Day Number (in the table below), in this case 1,940,206, which corresponds to the year 600 CE. 1,956,583 – 1,940,206 = 16,377 Next, divide this number by 365 days (vague year). 16,377 / 365 = 44.86849 The remainder is 44.86849 years, which is 44 years and 317 days. The full year date is 644 CE. Now calculate the month and day number, taking into account leap days over the 44 years. In the Gregorian Calendar, every fourth year is a leap year with the exception of centuries not evenly divisible by 400 (e.g. 100, 200, 300). When the year is divisible by 400 (e.g. 400, 800, etc.), do not add an extra day. The calculated year is 644 CE. The number of leap days, keeping in mind that the year 600 is not a leap year, is 10. Subtracting that from 317 remainder days is 307; in other words, the 307th day of the year 644 CE, which is November 3. To summarize: the Long Count date 9.10.11.17.0 corresponds to November 3, 644 CE, in the Proleptic Gregorian calendar. 
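The worked example above can be reproduced with a short Python sketch. This is not from the article: it uses the Fliegel-Van Flandern integer algorithm rather than the table-based method described above, the function names are invented for illustration, and the correlation constant 584,283 is the GMT value quoted in the text.

GMT_CORRELATION = 584283  # Julian day number of the era base 13.0.0.0.0, 4 Ajaw 8 Kumk'u

def long_count_to_jdn(text):
    """Julian day number for a Long Count such as '9.10.11.17.0' (GMT correlation)."""
    baktun, katun, tun, winal, kin = (int(part) for part in text.split("."))
    days = kin + 20 * winal + 360 * tun + 7200 * katun + 144000 * baktun
    return days + GMT_CORRELATION

def jdn_to_proleptic_gregorian(jdn):
    """Proleptic Gregorian (year, month, day); the year is astronomical, so 0 = 1 BCE."""
    l = jdn + 68569
    n = 4 * l // 146097
    l = l - (146097 * n + 3) // 4
    i = 4000 * (l + 1) // 1461001
    l = l - 1461 * i // 4 + 31
    j = 80 * l // 2447
    day = l - 2447 * j // 80
    l = j // 11
    month = j + 2 - 12 * l
    year = 100 * (n - 49) + i + l
    return year, month, day

# The example above: 9.10.11.17.0 gives 1,372,300 + 584,283 = 1,956,583,
# which converts to (644, 11, 3), i.e. November 3, 644 CE.
print(jdn_to_proleptic_gregorian(long_count_to_jdn("9.10.11.17.0")))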
To convert a Julian day to a Julian/Gregorian astronomical date (Proleptic Julian calendar before 46 BCE): Use an astronomical algorithm such as the Method of Meeus to convert the Julian day to a Julian/Gregorian date with astronomical dating of negative years: In this example: input: Julian day J J = J + 0.5 // 1,956,583.5 Z = integer part of J // 1,956,583 F = fraction part of J // 0.5 if Z < 2,299,161 then // Julian? A = Z else alpha = floor((Z - 1,867,216.25) / 36,524.25) // 15 A = Z + 1 + alpha - floor(alpha / 4.0) // 2,436,129 // The floor operation rounds a decimal number down to the next lowest integer. // For example, floor(1.5) = 1 and floor(−1.5) = -2 end if B = A + 1524 // 1,958,107 C = floor((B - 122.1) / 365.25) // 5,360 D = floor(365.25 × C) // 1,957,740 E = floor((B - D) / 30.6001) // 11 day = B - D - floor(30.6001 × E) + F // 31.5 if E < 14 then month = E - 1 // 10 else month = E - 13 end if if month > 2 then year = C - 4716 // 644 else year = C - 4715 end if return (year, month, day) In this example the Julian date is noon October 31, 644. The Method of Meeus is not valid for negative year numbers (astronomical), so another method such as the method of Peter Baum should be used. Calculating a full Long Count date A full Long Count date not only includes the five digits of the Long Count, but the 2 character Tzolkʼin and the two-character Haabʼ dates as well. The five digit Long Count can therefore be confirmed with the other four characters (the "calendar round date"). Taking as an example a Calendar Round date of 9.12.2.0.16 (Long Count) 5 Kibʼ (Tzolkʼin) 14 Yaxkʼin (Haabʼ). One can check whether this date is correct by the following calculation. It is perhaps easier to find out how many days there are since 4 Ajaw 8 Kumkʼu and show how the date 5 Kibʼ 14 Yaxkʼin is derived. Calculating the Tzolkʼin date portion The Tzolkʼin date is counted forward from 4 Ajaw. To calculate the numerical portion of the Tzolkʼin date, add 4 to the total number of days given by the date and then divide total number of days by 13. (4 + 1,383,136) / 13 = 106,395 (and 5/13) This means that 106,395 whole 13 day cycles have been completed and the numerical portion of the Tzolkʼin date is 5. To calculate the day, divide the total number of days in the long count by 20 since there are twenty day names. 1,383,136 / 20 = 69,156 (and 16/20) This means 16 day names must be counted from Ajaw. This gives Kibʼ. Therefore, the Tzolkʼin date is 5 Kibʼ. Calculating the Haabʼ date portion The Haabʼ date 8 Kumkʼu is the ninth day of the eighteenth month. There are 17 days to the start of the next year. Subtract 17 days from the total, to find how many complete Haabʼ years are contained. 1,383,136 − 17 = 1,383,119 by 365 1,383,119 / 365 = 3,789 and (134/365) Therefore, 3,789 complete Haabʼ have passed and the remainder 134 is the 135th day in the new Haabʼ, since a remainder of 0 would indicate the first day. Find which month the day is in. Dividing the remainder 134 by 20, is six complete months and a remainder of 14, indicating the 15th day. So, the date in the Haabʼ lies in the seventh month, which is Yaxkʼin. The fifteenth day of Yaxkʼin is 14, thus the Haabʼ date is 14 Yaxkʼin. So the date of the long count date 9.12.2.0.16   5 Kibʼ 14 Yaxkʼin is confirmed. Piktuns and higher orders There are also four rarely used higher-order periods above the bʼakʼtun: piktun, kalabtun, kʼinchiltun and alautun. All of these words are inventions of Mayanists. Each one consists of 20 of the lesser units. 
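The calendar-round check worked through above (1,383,136 days giving 5 Kibʼ 14 Yaxkʼin) can likewise be sketched in Python. This is not from the article; the name spellings in the two lists vary between sources, and the offsets 4 and 19 (for 4 Ajaw) and 348 (for 8 Kumkʼu as day 348 of the Haabʼ) simply encode the era base date described in the text.

TZOLKIN_NAMES = ["Imix", "Ik'", "Ak'b'al", "K'an", "Chikchan", "Kimi", "Manik'",
                 "Lamat", "Muluk", "Ok", "Chuwen", "Eb'", "B'en", "Ix", "Men",
                 "Kib'", "Kab'an", "Etz'nab'", "Kawak", "Ajaw"]
HAAB_MONTHS = ["Pop", "Wo'", "Sip", "Sotz'", "Sek", "Xul", "Yaxk'in", "Mol", "Ch'en",
               "Yax", "Sak'", "Keh", "Mak", "K'ank'in", "Muwan", "Pax", "K'ayab",
               "Kumk'u", "Wayeb'"]

def calendar_round(days):
    """Tzolk'in and Haab' date for a count of days since the era base 4 Ajaw 8 Kumk'u."""
    number = (4 - 1 + days) % 13 + 1          # Tzolk'in numbers cycle 1..13, starting at 4
    name = TZOLKIN_NAMES[(19 + days) % 20]    # Ajaw is index 19 in the 20-name cycle
    position = (348 + days) % 365             # 8 Kumk'u is day 348 of the 365-day Haab'
    month, day = position // 20, position % 20
    return "{} {} {} {}".format(number, name, day, HAAB_MONTHS[month])

# Example from the text: 9.12.2.0.16 corresponds to 1,383,136 days.
print(calendar_round(1383136))  # 5 Kib' 14 Yaxk'in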
Many inscriptions give the date of the current creation as a large number of 13s preceding 13.0.0.0.0   4 Ahau 8 Kumkʼu. For example, a Late Classic monument from Coba, Stela 1. The date of creation is expressed as 13.13.13.13.13.13.13.13.13.13.13.13.13.13.13.13.13.13.13.13.0.0.0.0, where the units are 13s in the nineteen places larger than the bʼakʼtun. Some authors think that the 13s were symbolic of a completion and did not represent an actual number. Most inscriptions that use these are in the form of distance dates and Long Reckonings – they give a base date, a distance number that is added or subtracted and the resulting Long Count. The first example below is from Schele (1987). The second is from Stuart (2005 pp. 60, 77) Palenque Temple of the Cross, tablet, Schele (1987 p.) 12.19.13.4.0   8 Ajaw 18 Tzek in the prior era 6.14.0 Distance number linking to the "era date" 13.0.0.0.0   4 Ajaw 8 Kumkʼu Palenque Temple XIX, South Panel G2-H6 Stuart (2005 pp. 60, 77) 12.10.1.13.2   9 Ikʼ 5 Mol   (seating of GI in the prior era) 2.8.3.8.0 1.18.5.3.2   9 Ikʼ 15 Keh   (rebirth of GI, this date also in Temple of the Cross) The tablet of the inscriptions contains this inscription: 9.8.9.13.0   8 Ajaw 13 Pop 10.11.10.5.8 1.0.0.0.0.8 The Dresden codex contains another method for writing distance numbers. These are Ring Numbers. Specific dates within the Dresden codex are often given by calculations involving Ring Numbers. Förstemann identified these, but Wilson (1924) later clarified the way in which they operate. Ring Numbers are intervals of days between the Era Base date 4 Ajaw 8 Kumkʼu and an earlier Ring Base date, where the place-holder for the numeral of days in the interval is circled by an image of a tied red band. Added to this earlier Ring Base date is another count of days forward, which Thompson refers to as a Long Round, leading to a final date within the Long Count that is given as an entry date to be used within a specific table in the codex. Ring number     (12) 12.12.17.3.1   13 Imix 9 Wo (7.2.14.19 before (13) 13.0.0.0.0) distance number (0) 10.13.13.3.2 Long Count              10.6.10.6.3   13 Akʼbal 1 Kankʼin Ring number (portion of the DN preceding era date) 7.2.14.19 Add Ring number to the ring number date to reach 13.0.0.0.0 Thompson contains a table of typical long reckonings after Satterwaite. The "Serpent Numbers" in the Dresden codex pp. 61–69 is a table of dates using a base date of 1.18.1.8.0.16 in the prior era (5,482,096 days). See also Aztec calendar Maya astronomy Maya calendar Maya codices Mesoamerican calendars Notes References Bibliography External links Coba Stela 1 (Schele #4087), partial illustration from the Linda Schele Drawings Collection of the monument from Coba with an expanded Long Count date Maya calendar on michielb.nl, with conversion applet from Gregorian calendar to Maya date (Uses the proleptic Gregorian calendar.) The Dresden Codex Lunar Series and Sidereal Astronomy 1897 text by Cyrus Thomas. Long Count Maya Classic Period Specific calendars Chronology Obsolete calendars 2012 phenomenon it:Calendario maya#Il Lungo computo
Mesoamerican Long Count calendar
[ "Physics" ]
6,082
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
9,530,784
https://en.wikipedia.org/wiki/Bachelor%20of%20Industrial%20Design
The Bachelor of Industrial Design (B.I.D.) is an undergraduate academic degree awarded by a university for a four-year course of study that specializes in the design of industrial products. Some colleges also offer four-year B.I.D. programs. References Industrial Design Industrial design
Bachelor of Industrial Design
[ "Engineering" ]
61
[ "Industrial design", "Design engineering", "Design" ]
9,531,733
https://en.wikipedia.org/wiki/Atlas%20wild%20ass
The Atlas wild ass (Equus africanus atlanticus), also known as Algerian wild ass, is a purported extinct subspecies of the African wild ass that was once found across North Africa and parts of the Sahara. It was last represented in a villa mural ca. 300 AD in Bona, Algeria, and may have become extinct as a result of Roman sport hunting. Taxonomy Purported bones have been found in a number of rock shelters across Morocco and Algeria by paleontologists including Alfred Romer (1928, 1935) and Camille Arambourg (1931). While the existence of numerous prehistoric rock art depictions, and Roman mosaics leave no doubt about the former existence of African wild asses in North Africa, it has been claimed that the original bones that were used to describe the subspecies atlanticus actually belonged to a fossil zebra. Therefore, the name E. a. atlanticus would be "unavailable" to the Atlas wild ass. It was also hypothesized that the appearance of Nubian and Somali wild asses were clinal and that they appeared different as an artifact of the recent extinction of intermediate-looking populations. This would make the living African wild ass a monotypic species with no subspecies, and at least question the existence of extinct subspecies like the Atlas wild ass. However, genetic studies have shown since that Nubian and Somali wild asses are different enough to warrant subspecies status. Additionally, domestic donkeys carry two different haplotypes, one shared with the Nubian wild ass, and another of unknown origin that is not found in the Somali wild ass. The presence of the extinct Atlas wild ass in the Ancient Mediterranean makes it a plausible source for the second haplotype. Description Ancient art consistently depicts the African wild asses of North Africa as similar to, but darker colored than, the Nubian and Somali wild ass subspecies. The general color was gray, with marked black and white stripes on the legs, and a black shoulder cross (sometimes doubled). In comparison, the Nubian wild ass is gray with shoulder cross but no stripes, and the Somali wild ass is sandy with black stripes, but no shoulder cross. One or both features appear occasionally in domestic donkeys. Wild and primitive domestic asses are indistinguishable from their bones, which complicates their identification in archaeological sites. Range and ecology The Atlas wild ass was found in the region around the Atlas Mountains, across modern day Algeria, Tunisia and Morocco. It might also have occurred in rocky areas of the Saharan Desert, but not in sands which are avoided by wild asses. However, the 20th century reports of wild asses from northern Chad and the Hoggar Massif in the central Sahara are doubtful. References Harper, F. (1944.5). Extinct and Vanishing Mammals of the Old World, QL707.H37, p. 352 Ziswiler, V. (1967). Extinct and Vanishing Animals, QL88.Z513, p. 113 African wild ass Extinct mammals of Africa Holocene extinctions Species made extinct by human activities Controversial mammal taxa Mammals described in 1884
Atlas wild ass
[ "Biology" ]
634
[ "Biological hypotheses", "Controversial mammal taxa", "Controversial taxa" ]
9,534,603
https://en.wikipedia.org/wiki/PMB%20%28software%29
PMB is a fully featured open source integrated library system. It is continuously developed and maintained by the French company PMB Services. Features PMB follows the rules of library science. The software provides four essential functions: library management; the watch function and documentary products; the publication of editorial content; and electronic document management. It provides an integrated portal for news and the management of Web 2.0 content, and it is the only ILS that does not rely on a third-party CMS for managing the portal. It is multilingual (100% English and French, 80% Spanish and Italian) and has supported Arabic (translation and UTF-8 support) since version 3.0.5 of November 2006. The latest version, 4.2 (July 2015), includes a watch unit (Watch&Share), allows geo-referencing of the collections and brings several other improvements to the software. Size The software is used with collections of up to around 500,000 records. Tests have been run with 2 million records to show its capacity to manage bigger collections. It is regularly installed in public library networks of 10 to 15 sites. Interoperability PMB supports the Z39.50 protocol, allowing bibliographic records to be imported from different servers and integrated directly into the database. It manages the UNIMARC cataloguing format and the ISO 2709 record exchange format, and it also handles XML data. PMB acts as both an OAI server and client. The user database can be connected to an LDAP directory or to any other user store reachable through web services. It has an API allowing it to be integrated into an existing information system. The integration of PMB into a Virtual Learning Environment (VLE) is operational in many French academies. Units/Modules PMB is divided into two modules: the management module and the portal module (or OPAC). The management module includes functions specific to the librarian: circulation (loan/return), catalogue, authorities, editions, SDI (Selective Dissemination of Information, together with the watch module Watch&Share), acquisitions, CMS and administration. PMB has included a user request management feature since 2009. It can also be extended with an optional extensions module. Initially, the software provided a specialized user interface to the catalogue: the OPAC. An improvement to the software in 2012 added a CMS feature offering the ability to build highly customizable portals. Requirements PMB is a web application based on a web server platform (Apache, Microsoft IIS) with PHP and MySQL or MariaDB, and it can therefore run on Linux, Mac OS X or Microsoft Windows. PMB has its own search engine, supporting phonetic searches without needing any complementary search engine. PMB is written in the PHP programming language. It requires: PHP Apache web server MySQL database Web browser Documentary languages PMB can integrate different classification schemes: Dewey, UDC, PCDM or any other custom scheme. It includes the management of several thesauri, a feature in use with thesauri such as PRISME, BDSP, MOTBIS, DELPHES, Thesaurus du Management and Vie culturelle. It comes with concept management that allows it to comply with the ISO 25964-1 standard and therefore to use indexing languages such as RAMEAU or MeSH. It also allows full implementation of the FRBR model. 
Users PMB is used by large institutions such as local governments, ministries, the Constitutional Council, regional councils, metropolitan authorities, the Academy of Rennes and every documentation center in Brittany. Many public library networks, secondary schools, ONISEP (a French careers and education information institute) and INSEE have chosen PMB. The Radio France group joined the community of PMB users in 2015 for an FRBR migration of its library (including musical scores). PMB also equips private organizations such as law offices, grandes écoles and internationally known fashion groups. According to the annual survey by Livres-Hebdo, PMB was the third integrated library system in France in 2005 in terms of number of installations, with 165 installations. Surveys in the following years showed the rapid progress of the free software in many organizations. Since 2011, PMB Services has declined to take part in this survey: the figures given to Marc Maisonneuve were not reused correctly, and PMB was conflated with the software BCDI, which is not free software. On 1 January 2015, there were more than 6,000 operational installations in the world, for collections ranging from 300 to 500,000 records. Large companies such as Alstom and Orange S.A. now use free solutions such as PMB. Since 2012, the software has been running in a higher-education network in Belgium, HENAM-HENALLUX, with more than 400,000 online searchable records. In France, PMB Services claims more than 1,800 clients. The company's official website lists most of them and links to their online catalogues. History In its early beta versions, the software was called PhpMyBibli. It was launched by François Lemarchand (director of the Library of Agneaux) in October 2002. The cataloguing functions and the application's core were created during the autumn of 2002, followed a little later by the serials management module. In 2003, Eric Robert, an IT engineer and free-software advocate, joined PMB founder François Lemarchand. He then developed the loan module, UNIMARC imports, the statistics functions and the Z39.50 client. Version 1.0 was released in December 2003 for a presentation at an international conference in Rabat. PMB then officially became an integrated library system (ILS). The OPAC (the user interface) also appeared that year, developed by Gautier Michelin and Christophe Bliard. The most involved developers at that time (Eric Robert, Gautier Michelin and Florent Tétart) then created the company PMB Services to professionalize the software and to offer the services needed by interested libraries or companies. The company provides services such as training and installation of the software (local or hosted), along with all the support services needed to implement it: local installation or SaaS mode, migration or recovery of data, set-up, training, and construction and design of the portal. The first library to be equipped with PMB was the library of Bueil-en-Touraine (in France). Version 4.1 has been downloaded more than 38,000 times. Version 4.2 has been available since 24 July 2015 and had been downloaded 2,272 times as of 15 September. Development PMB was initially licensed under the GNU General Public License, which ensures the free availability of the software. A wiki, mailing lists and BerliOS hosting facilities allow communication between PMB developers and users. PMB is now licensed under the CeCILL free licence, which ensures legal security in France and other countries with similar legal systems. 
See also List of free and open source software packages Notes References External links PMB Website PMB Forge Website Bibliography Library automation PHP software Free library and information science software
PMB (software)
[ "Engineering" ]
1,481
[ "Library automation", "Automation" ]
9,534,983
https://en.wikipedia.org/wiki/Biocultural%20diversity
Biocultural diversity is defined by Luisa Maffi, co-founder and director of Terralingua, as "the diversity of life in all its manifestations: biological, cultural, and linguistic — which are interrelated (and possibly coevolved) within a complex socio-ecological adaptive system." "The diversity of life is made up not only of the diversity of plants and animal species, habitats and ecosystems found on the planet, but also of the diversity of human cultures and languages." Research has linked biocultural diversity to the resilience of social-ecological systems. Certain geographic areas have been positively correlated with high levels of biocultural diversity, including those of low latitudes, higher rainfalls, higher temperatures, coastlines, and high altitudes. A negative correlation is found with areas of high latitudes, plains, and drier climates. Positive correlations can also be found between biological diversity and linguistic diversity, illustrated in the overlap between the distribution of plant diverse and language diverse zones. Social factors, such as modes of subsistence, have also been found to affect biocultural diversity. Measuring biocultural diversity Biocultural diversity can be quantified using QCUs (quantum co-evolution units), and can be monitored through time to quantify biocultural evolution (a form of coevolution). This methodology can be used to study the role that biocultural diversity plays in the resilience of social-ecological systems. It can also be applied on a landscape scale to identify critical cultural habitat for Indigenous peoples. Linguistic diversity Cultural traditions are passed down through language, making language an important factor in the existence of biocultural diversity. There has been a decline in the number of languages globally. The Linguistic Diversity Index has recorded that between 1970 and 2005, the number of languages spoken globally has decreased by 20%. This decline has been especially observed in indigenous languages, with a 60% decline in the Americas, 30% in the Pacific, and 20% in Africa. Currently, there are 7,000 languages being spoken in the world. Half the population speaks only 25 of these languages, the top 5 in order being Mandarin, Spanish, English, Hindi, and Bengali. The remaining 6975 languages are divided between the other half of the population. Because languages develop in a given community of speakers as that society adapts to its environment, languages reflect and express the biodiversity of that area. In areas of high biodiversity, language diversity is also higher, suggesting that a greater diversity in culture can be found in these areas. In fact, many of the areas of the world inhabited by smaller, isolated communities are also home to large numbers of endemic plant and animal species. As these people are often considered to be "stewards" of their environments, loss of language diversity means a disappearance of traditional ecological knowledge (TEK), an important factor in the conservation of biodiversity. Declaration of Belem Awareness about the balance between biological and cultural diversity has been increasing for a few decades. At the first international congress on ethnobiology in 1988, scientists met with indigenous peoples to discuss ways to better manage the use of natural resources and protect vulnerable communities around the world. 
They developed the Declaration of Belem, named after the city where the congress was held, which outlined eight steps to ensure conservation efforts would be implemented effectively. (This is not to be confused with the 2023 Belem Declaration by the eight Amazon basin countries which tackles deforestation, see 2023 Amazon Cooperation Treaty Organization Summit) Hotspots of biocultural diversity There are three areas which have been identified as hotspots of biocultural diversity: The Amazon Basin, Central Africa, and Indomalaysia/Melanesia. Hot spots of biocultural diversity can be calculated by averaging a countries biological diversity and cultural diversity. Cultural diversity is scored based on "a country's language diversity, religion diversity, and ethnic group diversity". Recent programs in the Eastern Himalayas have also engaged this concept to promote conservation. Biocultural conservation In 2000, Ricardo Rozzi coined the term biocultural conservation to emphasize that “1) conservation biology issues involve [ontologically, epistemologically, and ethically] both humans and other living beings, 2) biological and cultural diversity are inextricably integrated, and 3) social welfare and biocultural conservation go together” (p. 10). Then, Rozzi and collaborators proposed participatory approaches to biocultural conservation, identifying ten principles: 1) interinstitutional cooperation, (2) a participatory approach, (3) an interdisciplinary approach, (4) networking and international cooperation, (5) communication through the media, (6) identification of a flagship species, (7) outdoor formal and informal education, (8) economic sustainability and ecotourism, (9) administrative sustainability, and (10) research and conceptual sustainability for conservation. These principles were effective for establishing the Cape Horn Biosphere Reserve, Chile, at the southern end of the Americas, involving multiple actors, disciplines, and scales. Biocultural restoration Biocultural restoration endeavors to revive the many connections between cultures and the biodiversity they are founded on. This can be done in a larger effort to restore resilience in social-ecological systems. While some have questioned the conservation value of biocultural restoration, recent research has shown that such approaches can be in alignment with core conservation goals. The Hawaiian renaissance in Hawaii is held up as a global model for biocultural restoration within the scholarly literature on the topic. See also Biocultural anthropology Biocultural evolution Conservation movement Cultural landscape Ecology Ecosystem Environmental protection Terralingua References External links UNESCO web page on biocultural diversity Terralingua Biocultural Diversity Conservation Biodiversity Anthropology Indigenous peoples and the environment
Biocultural diversity
[ "Biology" ]
1,156
[ "Biodiversity" ]
9,535,267
https://en.wikipedia.org/wiki/List%20of%20weather%20instruments
This is a list of devices used for recording and give output readings of various aspects of the weather. Typical instruments Weather stations typically have these following instruments: Thermometer for measuring air and sea surface temperature Barometer for measuring atmospheric pressure Hygrometer for measuring humidity Anemometer for measuring wind speed Pyranometer for measuring solar radiation Rain gauge for measuring liquid precipitation over a set period of time Wind sock for measuring general wind speed and wind direction Wind vane (also called a weather vane or a weathercock) for showing the wind direction Present Weather/Precipitation Identification Sensor for identifying falling precipitation Disdrometer for measuring drop size distribution Transmissometer for measuring visibility Ceilometer for measuring cloud ceiling Observation systems Argo Global Atmosphere Watch Automatic weather station Remote Automated Weather Stations (RAWS) Automated Surface Observing System (ASOS) NEXRAD radar Global Sea Level Observing System SST buoys Hurricane Hunters Dropsonde SNOTEL Weather balloon Weather vane Windsock Thermometer Anemometer Hygrometer Automated Meteorological Data Acquisition System Obsolete observation systems WSR-57 WSR-74 Orbital instrumentation AIRS AMSU-A ASTER Aqua Aura AVHRR CALIPSO CloudSat CERES DMC Envisat EROS GOES GOMOS GRACE Hydros ICESat IKONOS Jason-1 Landsat MERIS MetOp Meteor Meteosat MLS MIPAS MISR MODIS MOPITT MTSAT NMP NOAA-N' NPOESS OMI OCO PARASOL QuickBird QuikSCAT RADARSAT-1 SCIAMACHY SeaWiFS SORCE SPOT TES Terra TRMM Obsolete orbital instrumentation ERS Nimbus program Project Vanguard Seasat TOPEX/Poseidon TIROS See also Automated Quality control of meteorological observations Convective storm detection Earth Observing System Environmental monitoring Geographic information system (GIS) Glossary of meteorology Mesonet Meteorology Radiosonde Rocketsonde Surface weather observation Timex Expedition WS4 Tropical cyclone observation Weather reconnaissance Weather radar Weather satellite Meteorological instrumentation and equipment
List of weather instruments
[ "Technology", "Engineering" ]
408
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
9,536,676
https://en.wikipedia.org/wiki/Kinematic%20pair
In classical mechanics, a kinematic pair is a connection between two physical objects that imposes constraints on their relative movement (kinematics). German engineer Franz Reuleaux introduced the kinematic pair as a new approach to the study of machines that provided an advance over the notion of elements consisting of simple machines. Description Kinematics is the branch of classical mechanics which describes the motion of points, bodies (objects) and systems of bodies (groups of objects) without consideration of the causes of motion. Kinematics as a field of study is often referred to as the "geometry of motion". For further detail, see Kinematics. Hartenberg & Denavit present the definition of a kinematic pair: In the matter of connections between rigid bodies, Reuleaux recognized two kinds; he called them higher and lower pairs (of elements). With higher pairs, the two elements are in contact at a point or along a line, as in a ball bearing or disk cam and follower; the relative motions of coincident points are dissimilar. Lower pairs are those for which area contact may be visualized, as in pin connections, crossheads, ball-and-socket joints and some others; the relative motion of coincident points of the elements, and hence of their links, are similar, and an exchange of elements from one link to the other does not alter the relative motion of the parts as it would with higher pairs. In kinematics, the two connected physical objects, forming a kinematic pair, are called 'rigid bodies'. In studies of mechanisms, manipulators or robots, the two objects are typically called 'links'. Lower pair A lower pair is an ideal joint that constrains contact between a surface in the moving body and a corresponding surface in the fixed body. A lower pair is one in which there occurs surface or area contact between two members, e.g. a nut and screw, or a universal joint used to connect two propeller shafts. Cases of lower pairs: A revolute R joint, or hinged joint, requires a line in the moving body to remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body to maintain contact with a similar perpendicular plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom. A prismatic P joint, or slider, requires that a line in the moving body remain co-linear with a line in the fixed body, and a plane parallel to this line in the moving body maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom. A screw joint or helical H joint requires cut threads in two links, so that there is a turning as well as sliding motion between them. This joint has one degree of freedom. A cylindrical C joint requires that a line in the moving body remain co-linear with a line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two degrees of freedom. A universal U joint consists of two intersecting, mutually orthogonal revolute joints connecting rigid links whose axes are inclined to each other. This joint has two degrees of freedom. A spherical S joint or ball and socket joint requires that a point in the moving body remain stationary in the fixed body. This joint has three degrees of freedom, corresponding to rotations around orthogonal axes. A planar joint requires that a plane in the moving body maintain contact with a plane in the fixed body. This joint has three degrees of freedom. 
The moving plane can slide in two dimensions along the fixed plane, and it can rotate on an axis normal to the fixed plane. A parallelogram Pa joint is composed of four links connected together by four revolute joints at the corners of a parallelogram. Higher pairs Generally, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body. For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between the involute curves that form the meshing teeth of two gears is a cam joint, as is a wheel rolling on a surface. Higher pairs have point or line contact. Wrapping pair A wrapping pair is a constraint involving belts, chains, and similar devices; a belt-driven pulley is an example of this pair. It is very similar to a higher pair (which has point or line contact), but makes contact at multiple points. Joint notation Context Mechanisms, manipulators or robots are typically composed of links connected together by joints. Serial manipulators, like the SCARA robot, connect a moving platform to a base through a single chain of links and joints. In robotics the moving platform is called the 'end effector'. Multiple serial chains connect the moving platform to the base of parallel manipulators, like the Gough-Stewart mechanism. The individual serial chains of parallel manipulators are called 'limbs' or 'legs'. Topology refers to the arrangement of links and joints forming a manipulator or robot. Joint notation is a convenient way of defining the joint topology of mechanisms, manipulators or robots. Abbreviations Joints are abbreviated as follows: prismatic P, revolute R, universal U, cylindrical C, spherical S, parallelogram Pa. Actuated or active joints are identified by underlining the abbreviation, i.e., P, R, U, C, S, Pa. Notation Joint notation specifies the type and order of the joints forming a mechanism. It identifies the sequence of joints, starting from the abbreviation of the first joint at the base to the last abbreviation at the moving platform. For example, joint notation for the serial SCARA robot is RRP, indicating that it is composed of two active revolute joints RR followed by an active prismatic P joint. Repeated joints may be summarized by their number, so that joint notation for the SCARA robot can also be written 2RP, for example. Joint notation for the parallel Gough-Stewart mechanism is 6-UPS or 6(UPS), indicating that it is composed of six identical serial limbs, each one composed of a universal U, active prismatic P and spherical S joint. Parentheses () enclose the joints of individual serial limbs. See also Mechanism (engineering) Manipulator (device) Linkage (mechanical) References Hartenberg, R.S. & J. Denavit (1964) Kinematic synthesis of linkages, pp 17,18, New York: McGraw-Hill, online link from Cornell University. pair Rigid bodies
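The joint freedoms and the notation described above lend themselves to a small illustrative sketch, which is not part of the original article; the dictionary values follow the degrees of freedom listed in this article, the function name is invented for the example, and for a serial chain the freedom of the end link relative to the base is simply the sum of the joint freedoms.

# Degrees of freedom permitted by each pair type, as listed above.
JOINT_FREEDOM = {
    "R": 1,  # revolute (hinge)
    "P": 1,  # prismatic (slider)
    "H": 1,  # screw / helical
    "C": 2,  # cylindrical
    "U": 2,  # universal
    "S": 3,  # spherical (ball and socket)
}

def serial_chain_freedom(notation):
    """Total freedom of a serial chain written in joint notation, e.g. 'RRP' or 'UPS'."""
    return sum(JOINT_FREEDOM[joint] for joint in notation)

print(serial_chain_freedom("RRP"))  # SCARA example from the text: 1 + 1 + 1 = 3
print(serial_chain_freedom("UPS"))  # one limb of the Gough-Stewart mechanism: 2 + 1 + 3 = 6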
Kinematic pair
[ "Physics", "Technology" ]
1,392
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics", "Physical systems", "Motion (physics)", "Mechanics" ]
9,536,737
https://en.wikipedia.org/wiki/Kinematic%20chain
In mechanical engineering, a kinematic chain is an assembly of rigid bodies connected by joints to provide constrained motion that is the mathematical model for a mechanical system. As the word chain suggests, the rigid bodies, or links, are constrained by their connections to other links. An example is the simple open chain formed by links connected in series, like the usual chain, which is the kinematic model for a typical robot manipulator. Mathematical models of the connections, or joints, between two links are termed kinematic pairs. Kinematic pairs model the hinged and sliding joints fundamental to robotics, often called lower pairs, and the surface contact joints critical to cams and gearing, called higher pairs. These joints are generally modeled as holonomic constraints. A kinematic diagram is a schematic of the mechanical system that shows the kinematic chain. The modern use of kinematic chains includes compliance that arises from flexure joints in precision mechanisms, link compliance in compliant mechanisms and micro-electro-mechanical systems, and cable compliance in cable robotic and tensegrity systems. Mobility formula The degrees of freedom, or mobility, of a kinematic chain is the number of parameters that define the configuration of the chain. A system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. This frame is included in the count of bodies, so that mobility does not depend on the choice of the link that forms the fixed frame. This means the degree of freedom of this system is M = 6(N - 1), where N is the number of moving bodies plus the fixed body. Joints that connect bodies impose constraints. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 - f. In the case of a hinge or slider, which are one-degree-of-freedom joints, we have f = 1 and therefore c = 6 - 1 = 5. The result is that the mobility of a kinematic chain formed from n moving links and j joints, each with freedom fi, i = 1, ..., j, is given by M = 6n - Σ(6 - fi) = 6(N - 1 - j) + Σ fi, where the sums run over the j joints. Recall that N includes the fixed link. Analysis of kinematic chains The constraint equations of a kinematic chain couple the range of movement allowed at each joint to the dimensions of the links in the chain, and form algebraic equations that are solved to determine the configuration of the chain associated with specific values of input parameters, called degrees of freedom. The constraint equations for a kinematic chain are obtained using rigid transformations to characterize the relative movement allowed at each joint and separate rigid transformations to define the dimensions of each link. In the case of a serial open chain, the result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link. A chain of n links connected in series has the kinematic equations [T] = [Z1][X1][Z2][X2]...[Xn-1][Zn], where [T] is the transformation locating the end-link; notice that the chain includes a "zeroth" link consisting of the ground frame to which it is attached. These equations are called the forward kinematics equations of the serial chain. Kinematic chains of a wide range of complexity are analyzed by equating the kinematics equations of serial chains that form loops within the kinematic chain. These equations are often called loop equations. 
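The mobility formula above can be written out as a few lines of Python; this sketch is not part of the original article and the function and argument names are only illustrative.

def mobility(total_links, joint_freedoms):
    """Chebychev-Grubler-Kutzbach mobility M = 6(N - 1 - j) + sum(f_i) of a spatial chain.

    total_links     -- N, the number of links including the fixed frame
    joint_freedoms  -- the freedom f_i of each of the j joints
    """
    j = len(joint_freedoms)
    return 6 * (total_links - 1 - j) + sum(joint_freedoms)

# A serial arm with six revolute joints: six moving links plus the fixed base,
# six joints of freedom 1 each, so M = 6(7 - 1 - 6) + 6 = 6.
print(mobility(7, [1, 1, 1, 1, 1, 1]))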
The complexity (in terms of calculating the forward and inverse kinematics) of the chain is determined by the following factors: Its topology: a serial chain, a parallel manipulator, a tree structure, or a graph. Its geometrical form: how are neighbouring joints spatially connected to each other? Explanation Two or more rigid bodies in space are collectively called a rigid body system. We can hinder the motion of these independent rigid bodies with kinematic constraints. Kinematic constraints are constraints between rigid bodies that result in the decrease of the degrees of freedom of rigid body system. Synthesis of kinematic chains The constraint equations of a kinematic chain can be used in reverse to determine the dimensions of the links from a specification of the desired movement of the system. This is termed kinematic synthesis. Perhaps the most developed formulation of kinematic synthesis is for four-bar linkages, which is known as Burmester theory. Ferdinand Freudenstein is often called the father of modern kinematics for his contributions to the kinematic synthesis of linkages beginning in the 1950s. His use of the newly developed computer to solve Freudenstein's equation became the prototype of computer-aided design systems. This work has been generalized to the synthesis of spherical and spatial mechanisms. See also Assur group Denavit–Hartenberg parameters Chebychev–Grübler–Kutzbach criterion Configuration space Machine (mechanical) Mechanism (engineering) Six-bar linkage Simple machines Six degrees of freedom Superposition principle References Computer graphics 3D computer graphics Computational physics Robot kinematics Virtual reality Mechanisms (engineering) Diagrams Classical mechanics
Kinematic chain
[ "Physics", "Engineering" ]
1,007
[ "Robotics engineering", "Classical mechanics", "Computational physics", "Mechanics", "Robot kinematics", "Mechanical engineering", "Mechanisms (engineering)" ]
9,536,761
https://en.wikipedia.org/wiki/Phased-array%20optics
Phased-array optics is the technology of controlling the phase and amplitude of light waves transmitting, reflecting, or captured (received) by a two-dimensional surface using adjustable surface elements. An optical phased array (OPA) is the optical analog of a radio-wave phased array. By dynamically controlling the optical properties of a surface on a microscopic scale, it is possible to steer the direction of light beams (in an OPA transmitter), or the view direction of sensors (in an OPA receiver), without any moving parts. Phased-array beam steering is used for optical switching and multiplexing in optoelectronic devices and for aiming laser beams on a macroscopic scale. Complicated patterns of phase variation can be used to produce diffractive optical elements, such as dynamic virtual lenses, for beam focusing or splitting in addition to aiming. Dynamic phase variation can also produce real-time holograms. Devices permitting detailed addressable phase control over two dimensions are a type of spatial light modulator (SLM). Transmitter An optical phased-array transmitter includes a light source (laser), power splitters, phase shifters, and an array of radiating elements. The output light of the laser source is split into several branches using a power splitter tree. Each branch is then fed to a tunable phase shifter. The phase-shifted light is input to a radiating element (a nanophotonic antenna) that couples the light into free space. Light radiated by the elements is combined in the far-field and forms the far-field pattern of the array. By adjusting the relative phase shift between the elements, a beam can be formed and steered. Receiver In an optical phased-array receiver, the incident light (usually coherent light) on a surface is captured by a collection of nanophotonic antennas that are placed on a 1D or 2D array. The light received by each element is phase-shifted and amplitude-weighted on a chip. These signals are then added together in the optic or electronic domain to form a reception beam. By adjusting the phase shifts, the reception beam can be steered to different directions, and light incident from each direction is collected selectively. Applications In nanotechnology, phased-array optics refers to arrays of lasers or SLMs with addressable phase and amplitude elements smaller than a wavelength of light. While still theoretical, such high-resolution arrays would permit extremely realistic three-dimensional image display by dynamic holography with no unwanted orders of diffraction. Applications for weapons, space communications, and invisibility by optical camouflage have also been suggested. DARPA's Excalibur program aims to provide realtime correction of atmospheric turbulence for a laser weapon. The Breakthrough Starshot organisation has proposed to use phased arrays to precisely aim and steer propulsion lasers for a hypothetical gram-scale solar sail-based craft or fleet of crafts. See also Holography Phased array Spatial light modulator References External links Phased Array Optics Animation of beam steering using phased arrays on YouTube Optical devices Display technology Hypothetical technology
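Beam steering with a one-dimensional array can be illustrated with a short sketch. This is not from the article: the uniform-pitch geometry and the sign convention are assumptions, and the element pitch, wavelength and angle used in the example are arbitrary illustrative values.

import math

def steering_phases(num_elements, pitch, wavelength, angle_deg):
    """Per-element phase shifts (radians, modulo 2*pi) that point the main lobe of a
    uniform linear optical phased array at angle_deg from the array normal.
    pitch and wavelength must be given in the same length unit."""
    k = 2 * math.pi / wavelength                           # free-space wavenumber
    step = k * pitch * math.sin(math.radians(angle_deg))   # phase difference between neighbours
    return [(n * step) % (2 * math.pi) for n in range(num_elements)]

# Eight emitters spaced 2.0 um apart, 1.55 um light, steered 5 degrees off axis.
print(steering_phases(8, 2.0, 1.55, 5.0))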
Phased-array optics
[ "Materials_science", "Engineering" ]
615
[ "Glass engineering and science", "Electronic engineering", "Optical devices", "Display technology" ]
9,536,893
https://en.wikipedia.org/wiki/Armature%20%28computer%20animation%29
An armature is a kinematic chain used in computer animation to simulate the motions of virtual human or animal characters. In the context of animation, the inverse kinematics of the armature is the most relevant computational algorithm. There are two types of digital armatures: Keyframing (stop-motion) armatures and real-time (puppeteering) armatures. Keyframing armatures were initially developed to assist in animating digital characters without basing the movement on a live performance. The animator poses a device manually for each keyframe, while the character in the animation is set up with a mechanical structure equivalent to the armature. The device is connected to the animation software through a driver program and each move is recorded for a particular frame in time. Real-time armatures are similar, but they are puppeteered by one or more people and captured in real time. See also Linkages Skeletal animation References 3D graphics software Computational physics
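As a concrete illustration of the inverse-kinematics problem mentioned above, the following sketch solves a single two-bone planar armature analytically. It is not taken from any animation package; the function name, the choice of elbow solution and the example numbers are all assumptions made for the example.

import math

def two_bone_ik(x, y, upper_len, lower_len):
    """Joint angles (radians) placing the tip of a planar two-bone chain at (x, y).
    Returns one of the two mirror solutions; raises ValueError if the target is out of reach."""
    reach_sq = x * x + y * y
    cos_elbow = (reach_sq - upper_len ** 2 - lower_len ** 2) / (2 * upper_len * lower_len)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(lower_len * math.sin(elbow),
                                             upper_len + lower_len * math.cos(elbow))
    return shoulder, elbow

# Two bones of length 1.0 reaching for the point (1.2, 0.5).
print(two_bone_ik(1.2, 0.5, 1.0, 1.0))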
Armature (computer animation)
[ "Physics" ]
202
[ "Computational physics" ]
9,537,367
https://en.wikipedia.org/wiki/Cap%20of%20invisibility
In classical mythology, the Cap of Invisibility (Ἅϊδος κυνέη (H)aïdos kyneē in Greek, lit. dog-skin of Hades) is a helmet or cap that can turn the wearer invisible, also known as the Cap of Hades or Helm of Hades. Wearers of the cap in Greek myths include Athena, the goddess of wisdom, the messenger god Hermes, and the hero Perseus. Those wearing the Cap become invisible to other supernatural entities, much like the cloud of mist sometimes used to remain undetectable. Origins One ancient source that attributes a special helmet to the ruler of the underworld is the Bibliotheca (2nd/1st century BC), in which the Uranian Cyclopes give Zeus the lightning bolt, Poseidon the trident, and Hades (or Pluto) a helmet (kyneê) for their war against the Titans. In classical mythology the helmet is regularly said to belong to the god of the underworld. Rabelais calls it the Helmet of Pluto, and Erasmus the Helmet of Orcus. The helmet becomes proverbial for those who conceal their true nature by a cunning device: "the helmet of Pluto, which maketh the politic man go invisible, is secrecy in the counsel, and celerity in the execution." Users Hades As the name implies, Hades owned the helmet. It was forged for him by the Elder Cyclopes after he and his brothers Zeus and Poseidon freed them from Tartarus. He then used this helmet to great effect during the Titanomachy and was instrumental in routing the Titans. Athena Athena, the goddess of wisdom, battle, and handicrafts, wore the Cap of Invisibility in one instance during the Trojan War. She used it to become invisible to Ares when she aided Diomedes, his enemy. Her assistance even enabled Diomedes to injure the god of war with a spear. Hermes The messenger god Hermes wore the Cap during his battle with Hippolytus, the giant. Perseus In some stories, Perseus received the Cap of Invisibility (along with the Winged Sandals) from Athena when he went to slay the Gorgon Medusa, which helped him escape her sisters. In other myths, however, Perseus obtained these items from the Stygian nymphs. The Cap of Invisibility was not used to avoid the Gorgons' petrifying gazes, but rather to escape from the immortal Stheno and Euryale later on, after he had decapitated Medusa. In popular culture In the Percy Jackson & the Olympians series by Rick Riordan, Annabeth Chase (a daughter of Athena) received a New York Yankees baseball cap from her mother that was a disguised cap of invisibility. In the same series, the main antagonist, Luke Castellan, stole Hades' Helm of Darkness, as well as Zeus' master bolt. Hades also uses it in The Blood of Olympus, where he banishes Gaea and Tartarus's children, the giants, to Tartarus. The helmet also appears in the Italian mythological comedy Arrivano i titani, but its invisibility powers work in this version only at night. The helm plays a major role in Dan Simmons' novel Ilium, in which the scholiastic narrator Thomas Hockenberry acquires the artifact through Aphrodite in her scheme to have the scholiast spy on and eventually assassinate the goddess Athena. See also Bident – another mystical object associated with Hades Cloak of invisibility Cloaking device Mambrino – a fictional Moorish king who possessed a golden helmet that would make the wearer invulnerable Ring of Gyges Tarnhelm References Ancient helmets Athena Caps Deeds of Hermes Fiction about invisibility Magic items Mythological clothing Objects in Greek mythology Perseus Symbols of Hades
Cap of invisibility
[ "Physics" ]
806
[ "Magic items", "Physical objects", "Matter" ]
9,537,636
https://en.wikipedia.org/wiki/Alternative%20beta
Alternative beta is the concept of managing volatile "alternative investments", often through the use of hedge funds. Alternative beta is often also referred to as "alternative risk premia". Researcher Lars Jaeger says that the return from an investment mainly results from exposure to systematic risk factors. These exposures can take two basic forms: long-only "buy and hold" exposures, and exposures obtained through the use of alternative investment techniques such as long/short investing, the use of derivatives (non-linear payout profiles), or the employment of leverage. Background Alternative investments Although alternative investment is a general term (commonly defined as any investment other than stocks, bonds or cash), alternative beta relates to the use of hedge funds. At its most basic, a hedge fund is an investment vehicle that pools capital from a number of investors and invests in securities and other instruments. It is administered by a professional management firm, and often structured as a limited partnership, limited liability company, or similar vehicle. Volatility ("beta") For an investment that involves risk to be worthwhile, its returns must be higher than those of a risk-free investment. The risk is related to volatility. A measure of the factors influencing an investment's volatility is the beta. The beta is a measure of the risk arising from exposure to general market movements as opposed to idiosyncratic factors. A beta below 1 can indicate either an investment with lower volatility than the market, or a volatile investment whose price movements are not highly correlated with the market. An example of the first is a treasury bill: the price does not go up or down a lot, so it has a low beta. An example of the second is gold. The price of gold does go up and down a lot, but not in the same direction or at the same time as the market. A beta above 1 generally means that the asset both is volatile and tends to move up and down with the market. An example is a stock in a big technology company. Negative betas are possible for investments that tend to go down when the market goes up, and vice versa. There are few fundamental investments with consistent and significant negative betas, but some derivatives like equity put options can have large negative beta values. Investments with a high beta value are often called "beta investments", as opposed to "alpha investments" which typically have lower volatility and lower returns. Volatility and hedge funds Separating returns into alpha and beta can also be used to determine the amount and type of fees to charge. The consensus is to charge higher fees for alpha (including a performance fee), since it is mostly viewed as skill-based. The topic has received increasing levels of attention due to the very rapid growth of the hedge fund industry, where investment companies typically charge fees higher than those of mutual funds, based on the assumption that hedge funds are alpha investments. Investors have started to question whether hedge funds are actually alpha investments, or just some "new" form of beta (i.e. alternative beta). This issue was raised in the 1997 paper "Empirical Characteristics of Dynamic Trading Strategies: The Case of Hedge Funds" by William Fung and David Hsieh. Following this paper, several groups of academics (such as Thomas Schneeweis et al.) started to explain past hedge fund returns using various systematic risk factors (i.e. alternative betas).
Following this, a paper discussed whether investable strategies based on such factors can not only explain past returns, but also replicate future ones. Different betas based on different investment exposures Traditional betas can be seen as those related to investments the common investor would already be experienced with (examples include stocks and most bonds). They are typically represented through indexation, and the techniques employed here are what is called "long only". The definition of alternative beta, in contrast, requires the consideration of other investment techniques such as short selling, the use of derivatives and leverage - techniques which are often associated with the activities of hedge funds. The underlying non-traditional investment risks are often seen as riskier, as investors are less familiar with them. Alpha investments and beta investments Viewed from the implementation side, investment techniques and strategies are the means to either capture risk premia (beta) or to obtain excess returns (alpha). Whereas returns from beta are a result of exposing the portfolio to systematic risks (traditional or alternative), alpha is an exceptional return that an investor or portfolio manager earns due to his unique skill, i.e. exploiting market inefficiencies. Academic studies, as well as hedge funds' performance in recent years, strongly support the idea that the return from hedge funds mostly consists of (alternative) risk premia. This is the basis of the various approaches to replicate the return profile of hedge funds by direct exposures to alternative beta (hedge fund replication). Hedge fund replication There are currently two main approaches to replicate the return profile of hedge funds based on the idea of alternative betas: directly extracting the risk premia (bottom up), an approach developed and advocated by Lars Jaeger; and the factor-based approach, based on Sharpe's factor models and developed for hedge funds first by professors Bill Fung (London Business School), David Hsieh (Fuqua School of Business, Duke University) et al. See also Hedge fund replication Trading Strategy Index References Investment Mathematical finance Finance theories
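As a rough illustration of the factor-based replication idea mentioned above, the sketch below regresses a series of invented hedge fund returns on a few invented systematic factor return series; the estimated slope coefficients are the fund's (alternative) beta exposures and the intercept is the residual alpha. All numbers and factor names here are hypothetical and chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 120

# Hypothetical monthly returns for three systematic factors
# (say equity, credit and trend-following premia).
factors = rng.normal(0.0, 0.03, size=(n_months, 3))

# A hypothetical fund: mostly factor exposure plus a small alpha and noise.
true_betas = np.array([0.4, 0.3, 0.2])
fund = factors @ true_betas + 0.001 + rng.normal(0.0, 0.01, n_months)

# Ordinary least squares: fund_t = alpha + sum_i beta_i * factor_it + error_t
X = np.column_stack([np.ones(n_months), factors])
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
alpha, betas = coef[0], coef[1:]

print(f"estimated monthly alpha: {alpha:.4f}")
print("estimated factor betas:", np.round(betas, 2))

# A replication portfolio would simply hold the factors in these beta weights.
replicated = factors @ betas
tracking_error = np.std(fund - replicated)
print(f"tracking error vs. the fund: {tracking_error:.4f}")
```

In practice the factor set, the estimation window and the rebalancing rules are the contentious design choices; the regression itself is the easy part.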
Alternative beta
[ "Mathematics" ]
1,099
[ "Applied mathematics", "Mathematical finance" ]
9,538,066
https://en.wikipedia.org/wiki/Sir%20Peter%20Blake%20Marine%20Education%20and%20Recreation%20Centre
Sir Peter Blake Marine Education & Recreation Centre (MERC) is a not-for-profit charity (Registration Number CC29903) based in Long Bay, Auckland, New Zealand. MERC offers marine-based environmental education and outdoor recreational experiences to schools, not-for-profit organisations, and business and corporate groups. MERC can accommodate up to 85 people at any time for overnight stays and activities, and up to a total of 120 people per day. A day programme consists of 3 activities, with instruction given between 9am and 4pm. There is scope to organise short-duration, single-day and multi-day courses, and these are specifically developed to meet the needs of each group. MERC's facilities have separate teacher/leader accommodation, a multi-purpose hall, wheelchair access and large kitchen & dining facilities. The Centre's mission statement is to provide life-changing marine environmental education and outdoor experiences for young New Zealanders. History The Centre has been in operation since 1990. A group of local yachtsmen and teachers (Don St Clair Brown, Don Burfoot, Laurie Baxter, Ian Sage, Dr Ross Garrett and John Orams) responded to the vision of fellow yachtsman Dr. David Gray that a centre be built on the present site, which was designated “Conditional Use Reserve” land belonging to Takapuna City Council. In 1980 this site was littered with building debris, scraps of machinery and foundations of early buildings. These basic amenities had serviced the large Long Bay camping population as well as local holiday residents in their numerous beaches. There was a store with petrol pumps – started by Tom Vaughan about 1920 but taken over by Mr and Mrs Aston and demolished about 1950 (?), a butchery owned by Mr Lopes which later became a private beach, a holiday house built above the sea wall (1956) still tenanted in 1989. References External links Sir Peter Blake Marine Education and Recreation Centre - official site Education in the Auckland Region Buildings and structures in the Auckland Region Environmental education Environment of New Zealand Educational organisations based in New Zealand East Coast Bays
Sir Peter Blake Marine Education and Recreation Centre
[ "Environmental_science" ]
420
[ "Environmental education", "Environmental social science" ]
9,538,207
https://en.wikipedia.org/wiki/Mitigation%20of%20seismic%20motion
Mitigation of seismic motion is an important factor in earthquake engineering and construction in earthquake-prone areas. The destabilizing action of an earthquake on constructions may be direct (seismic motion of the ground) or indirect (earthquake-induced landslides, liquefaction of the foundation soils and tsunami waves). Knowledge of the local amplification of the seismic motion from the bedrock is very important in order to choose suitable design solutions. Local amplification can be anticipated from the presence of particular stratigraphic conditions, such as soft soil overlying the bedrock, or where morphological settings (e.g. crest zones, steep slopes, valleys, or endorheic basins) may produce focusing of the seismic motion. The identification of areas potentially affected by earthquake-induced landslides and by soil liquefaction can be made by geological survey and by analysis of historical documents. Even quiescent and stabilized landslide areas may be reactivated by a severe earthquake. Young soils may be particularly susceptible to liquefaction. See also Base isolation Seismic hazard Seismic performance Tuned mass damper Vibration control Crash testing References Building engineering Earthquake and seismic risk mitigation
Mitigation of seismic motion
[ "Engineering" ]
237
[ "Structural engineering", "Building engineering", "Civil engineering", "Earthquake and seismic risk mitigation", "Architecture" ]
9,538,394
https://en.wikipedia.org/wiki/Ecology%20Letters
Ecology Letters is a monthly peer-reviewed scientific journal published by Wiley and the French National Centre for Scientific Research. Peter H. Thrall is the current editor-in-chief, taking over from Tim Coulson (University of Oxford). The journal covers research on all aspects of ecology. Abstracting and indexing Ecology Letters is abstracted and indexed in Academic Search/Academic Search Premier, AGRICOLA, Aquatic Sciences and Fisheries Abstracts, Biological Abstracts, BIOSIS and BIOSIS Previews, CAB Abstracts, CAB Health/CABDirect, Cambridge Scientific Abstracts databases, Current Contents/Agriculture, Biology & Environmental Sciences, GEOBASE, GeoRef, Index Medicus/MEDLINE, InfoTrac, PubMed, Science Citation Index, Scopus, and The Zoological Record. According to the 2021 Journal Citation Reports, Ecology Letters is ranked sixth out of one hundred and seventy-three (6/173) journals in the category "Ecology," with a 2022 impact factor of 8.8. References External links Wiley-Blackwell academic journals Ecology journals Academic journals established in 1998 English-language journals Monthly journals
Ecology Letters
[ "Environmental_science" ]
228
[ "Environmental science journals", "Ecology journals" ]
9,538,788
https://en.wikipedia.org/wiki/Tensioner
A tensioner is a device that applies a force to create or maintain tension. The force may be applied parallel to the tension it creates, as in the case of a hydraulic bolt tensioner, or perpendicular to it, as in the case of a spring-loaded bicycle chain tensioner. The force may be generated by a fixed displacement, as in the case of an eccentric bicycle bottom bracket, which must be adjusted as parts wear; by stretching or compressing a spring, as in the case of a spring-loaded bicycle chain tensioner; by changing the volume of a gas, as in the case of a marine riser tensioner; by hydraulic pressure, as in the case of a hydraulic bolt tensioner; or by gravity acting on a suspended mass, as in the case of a chair lift cable tensioner. In the power sector, a tensioner is a machine that maintains constant tension on the conductors while they are being strung on a transmission network. Applications Bolt tensioners are devices designed to apply a specific tension to a bolt. The device may be either removed once the actual nut is threaded into place or left in place, in the case of a hydraulic nut. The belt or chain tension on a single-speed bicycle can be maintained either by setting the fixed horizontal position of the rear sprocket or the front chainring, or by a separate tensioner that pushes perpendicular to the chain with either a fixed position or spring tension. The serpentine belt and the timing belt or chain on an automobile engine may be guided by an idler pulley and/or a belt tensioner, which may be spring-loaded, hydraulic, or fixed. The chain tension of a chainsaw may be adjusted with a chain tensioner. A marine riser tensioner is a device used on an offshore drilling vessel that provides a near-constant upward force on the drilling riser independent of the movement of the floating drill vessel. A guideline tensioner is a hydropneumatic device used on an offshore drilling rig that keeps a positive pulling force on the guidelines from the platform to a template on the seabed. Overhead electrical wires may be kept in tension by springs or weights. Conveyor belts Chair lift and gondola lift cables Certain wood trusses, such as the beam tensioner truss pictured below. Fencing made of wire, such as electric fences, barbed-wire fences, and chainlink fences, often includes tensioning devices to keep it taut. Belt sanders have a mechanism, often a spring-loaded idler drum, to apply the proper tension to the sanding belt, which can be released to allow for changing belts. Gallery See also Turnbuckle Torque wrench References External links Hydraulic Tensioning Glossary Hydraulic Puller-tensioner Drilling technology Hardware (mechanical)
Tensioner
[ "Physics", "Technology", "Engineering" ]
561
[ "Physical systems", "Machines", "Hardware (mechanical)", "Construction" ]
9,538,835
https://en.wikipedia.org/wiki/Peak%20information%20rate
Peak information rate (PIR) is a burstable rate set on routers and/or switches that allows throughput overhead. It is related to the committed information rate (CIR), which is a guaranteed (or capped) committed rate. For example, a CIR of 10 Mbit/s with a PIR of 12 Mbit/s gives access to a guaranteed minimum of 10 Mbit/s, with burst/spike control allowing a throttle of an additional 2 Mbit/s; this allows data transmission to "settle" into a flow. PIR is defined in MEF Standard 10.4 Subscriber Ethernet Service Attributes. Excess information rate (EIR) is the magnitude of the burst above the CIR (PIR = EIR + CIR). Maximum information rate (MIR), in reference to broadband wireless, refers to the maximum bandwidth the subscriber unit will be delivered from the wireless access point in kbit/s. See also Maximum throughput Information rate References Network performance Computer network analysis Temporal rates
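As a toy illustration of how the CIR and PIR figures from the example above fit together, the sketch below classifies an observed flow rate as conforming (within CIR), excess (burst between CIR and PIR), or violating (above PIR). This is a simplification for illustration only; real devices meter traffic with token buckets (for example a two-rate three-color marker) rather than instantaneous rates.

```python
def classify_rate(rate_mbps, cir_mbps=10.0, pir_mbps=12.0):
    """Classify an observed flow rate against CIR and PIR thresholds."""
    if rate_mbps <= cir_mbps:
        return "conforming"   # within the committed rate
    if rate_mbps <= pir_mbps:
        return "excess"       # burst headroom up to the peak rate
    return "violating"        # above PIR; typically dropped or marked down

eir_mbps = 12.0 - 10.0        # EIR = PIR - CIR = 2 Mbit/s of burst headroom
print("EIR =", eir_mbps, "Mbit/s")
for rate_mbps in (8.0, 11.0, 13.0):
    print(rate_mbps, "Mbit/s ->", classify_rate(rate_mbps))
```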
Peak information rate
[ "Physics", "Technology" ]
203
[ "Temporal quantities", "Physical quantities", "Computer network stubs", "Temporal rates", "Computing stubs" ]
9,538,946
https://en.wikipedia.org/wiki/Suspension%20keel
A suspension keel is an extension pylon to the bodywork of single-seat, open wheel racing cars designed with a raised nose cone, to allow the lower suspension arms to be attached to the car approximately parallel to the road surface. In recent years the placing and design of a suspension keel, or the lack of such, has been one of the few distinct variables in Formula One chassis design. Traditional low nose cone designs (e.g. the McLaren MP4/4) allow the lower suspension arms to be directly attached to the main structural members of the car. However, since the move to high nose cone designs – which allow better use of airflow underneath the car, and to a lesser extent the front wing – location of these lower arms has proven problematic. For ideal suspension geometry, and hence maximum mechanical grip, the lower arms should be long and near parallel with the road. As there is no longer any structural bodywork in these low positions, extensions were developed to allow the suspension to be mounted with correct geometry. Since the advent of high nose designs in the early 1990s, pioneered on the Tyrrell 019 Formula One car, three major keel designs have emerged to solve this problem: Single-keel: Perhaps the simplest response, utilising a single, planar extension to the ventral surface of the car's nose cone, providing a plate onto which the proximal ends of the suspension arms can be mounted. Benefits include a simple construction design, and the flexibility of having a large surface, thus allowing the suspension geometry to be altered for fine tuning. A serious hindrance in the single-keel design is that the keel itself protrudes into the underbody airflow, thus reducing the benefits of the raised nose design. As a consequence of this the single keel design fell out of favour in the late 2000s. However, for the 2010 Formula One season both the Mercedes MGP W01 and Virgin VR-01 feature single keel front suspension. Twin-keel: As the name suggests, rather than one single keel, two shorter keel stubs are used. Each protrudes from the underside or lower corners of the nose cone, and the left and right suspension arms are mounted to the appropriate keel. This design reduces the disturbance to the airflow, but compromises the suspension set up and configuration flexibility, and introduces significant structural complexity and weight. The twin-keel concept was conceived by Harvey Postlethwaite during his time at Honda Racing Developments, before being introduced by Sauber during the 2000 Formula One season and swiftly copied by many other teams. In , only Red Bull Racing used a twin-keel chassis. V-keel: Used principally by the Renault F1 team, who introduced the design in , this variant uses two keel elements protruding downward in a V shape. The tips of the elements are fused, and at this point the suspension arms are mounted. Something of a compromise position, benefits include a reduction in disturbance to the underbody airflow in comparison to a single-keel design, with fewer geometry restrictions than with twin-keels. One limitation of any keel design is that, while the keel influence may vary, the suspension linkages themselves still disrupt the underbody airflow. This problem was exacerbated when the FIA introduced rule changes in that forced teams to mount their front wing in a more elevated position. In response to this, many F1 teams have developed zero-keel chassis designs. Here the keel is removed entirely, and the suspension is mounted directly to the chassis. 
As the nose cone is in a raised position, this entails that the suspension arms take a distinctly inclined angle with respect to the road surface, reducing suspension efficiency. However, with continued restrictions to aerodynamic downforce through the use of aerofoil wings, and the lighter V8 engines specified from 2006 onwards causing weight distribution to shift forward, many designers apparently consider this drawback to be less significant than the concomitant increase in venturi downforce generated underneath the car; except for Renault and Red Bull, all of the teams in the 2007 Formula One World Championship used a zero-keel design. References External links Craig Scarborough's technical information site: www.scarbsF1.com NASA: Aerodynamics in Car Racing Aerodynamics Motorsport terminology Automotive suspension technologies
Suspension keel
[ "Chemistry", "Engineering" ]
862
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
9,538,947
https://en.wikipedia.org/wiki/History%20of%20web%20syndication%20technology
Web syndication technologies were preceded by metadata standards such as the Meta Content Framework (MCF) and the Resource Description Framework (RDF), as well as by 'push' specifications such as Channel Definition Format (CDF). Early web syndication standards included Information and Content Exchange (ICE) and RSS. More recent specifications include Atom and GData. Predecessors Web syndication specifications were preceded by several formats in push and metadata technologies, few of which achieved widespread popularity, as many, such as Backweb and Pointcast, were intended to work only with a single service. Between 1995 and 1997, Ramanathan V. Guha and others at Apple Computer's Advanced Technology Group developed the Meta Content Framework (MCF). MCF was a specification for structuring metadata information about web sites and other data, implemented in HotSauce, a 3D flythrough visualizer for the web. When the research project was discontinued in 1997, Guha left Apple for Netscape. Guha and the XML co-creator Tim Bray extended MCF into an XML application that Netscape submitted to the World Wide Web Consortium (W3C) as a proposed web standard in June 1997. This submission contributed towards the emergence of the Resource Description Framework (RDF). In March 1997, Microsoft submitted a detailed specification for the 'push' technology Channel Definition Format (CDF) to the W3C. This format was designed for the Active Channel feature of Internet Explorer 4.0. CDF never became popular, perhaps because of the extensive resources it required at a time when people were mostly on dial-up. Backweb and Pointcast were geared towards news, much like a personal application programming interface (API) feed. Backweb later morphed into providing software updates, a precursor to the push update features used by various companies now. In September 1997, Netscape previewed a new, competing technology named "Aurora," based on RDF, a metadata model whose first public working draft would be posted the next month by a W3C working group that included representatives of many companies, including R.V. Guha of Netscape. In December 1997, Dave Winer designed his own XML format for use on his Scripting News weblog. Early web syndication: ICE and RSS The first standard created specifically for web syndication was Information and Content Exchange (ICE), which was proposed by Firefly Networks and Vignette in January 1998. The ICE Authoring Group included Microsoft, Adobe, Sun, CNET, National Semiconductor, Tribune Media Services, Ziff Davis and Reuters, amongst others, and was limited to thirteen companies. The ICE advisory council included nearly a hundred members. ICE was submitted to the World Wide Web Consortium standards body on 26 October 1998, and showcased in a press event the day after. The standard failed to benefit from the open-source implementation that W3C XML specifications often received. RDF Site Summary, the first web syndication format to be called "RSS", was offered by Netscape in March 1999 for use on the My Netscape portal. This version became known as RSS 0.9. In July 1999, responding to comments and suggestions, Dan Libby produced a prototype tentatively named RSS 0.91 (RSS standing for Rich Site Summary at that time), that simplified the format and incorporated parts of Winer's scripting news format. This they considered an interim measure, with Libby suggesting an RSS 1.0-like format through the so-called Futures Document. 
In April 2001, in the midst of AOL's acquisition and subsequent restructuring of Netscape properties, a re-design of the My Netscape portal removed RSS/XML support. The RSS 0.91 DTD was removed during this re-design, but in response to feedback, Dan Libby was able to restore the DTD, but not the RSS validator previously in place. In response to comments within the RSS community at the time, Lars Marius Garshol, to whom authorship of the original 0.9 DTD is sometimes attributed, commented, "What I don't understand is all this fuss over Netscape removing the DTD. A well-designed RSS tool, whether it validates or not, would not use the DTD at Netscape's site in any case. There are several mechanisms which can be used to control the dereferencing of references from XML documents to their DTDs. These should be used. If not the result will be as described in the article." Effectively, this left the format without an owner, just as it was becoming widely used. Initial adoption of RSS (2000–2003) A working group and mailing list, RSS-DEV, was set up by various users and XML notables to continue its development. At the same time, Winer unilaterally posted a modified version of the RSS 0.91 specification to the Userland website, since it was already in use in their products. He claimed the RSS 0.91 specification was the property of his company, UserLand Software. Since neither side had any official claim on the name or the format, arguments raged whenever either side claimed RSS as its own, creating what became known as the RSS fork. The RSS-DEV group went on to produce RSS 1.0 in December 2000. Like RSS 0.9 (but not 0.91) this was based on the RDF specifications, but was more modular, with many of the terms coming from standard metadata vocabularies such as Dublin Core. Nineteen days later, Winer released by himself RSS 0.92, a minor and supposedly compatible set of changes to RSS 0.91 based on the same proposal. In April 2001, he published a draft of RSS 0.93 which was almost identical to 0.92. A draft RSS 0.94 surfaced in August, reverting the changes made in 0.93, and adding a type attribute to the description element. In September 2002, Winer released a final successor to RSS 0.92, known as RSS 2.0 and emphasizing "Really Simple Syndication" as the meaning of the three-letter abbreviation. The RSS 2.0 spec removed the type attribute added in RSS 0.94 and allowed people to add extension elements using XML namespaces. Several versions of RSS 2.0 were released, but the version number of the document model was not changed. In November 2002, The New York Times began offering its readers the ability to subscribe to RSS news feeds related to various topics. In January 2003, Winer called the New York Times' adoption of RSS the "tipping point" in driving the RSS format's becoming a de facto standard. In July 2003, Winer and Userland Software assigned ownership of the RSS 2.0 specification to his then workplace, Harvard's Berkman Center for the Internet & Society. Development of Atom (2003) In 2003, the primary method of web content syndication was the RSS family of formats. Developers who wished to overcome the limitations of these formats were unable to make changes directly to RSS 2.0 because the specification was copyrighted by Harvard University and "frozen," stating that "no significant changes can be made and it is intended that future work be done under a different name". In June 2003, Sam Ruby set up a wiki to discuss what makes "a well-formed log entry." This posting acted as a rallying point. 
People quickly started using the wiki to discuss a new syndication format to address the shortcomings of RSS. It also became clear that the new format could also form the basis of a more robust replacement for blog editing protocols such as Blogger API and LiveJournal XML-RPC Client/Server Protocol. The project aimed to develop a web syndication format that was: "100% vendor neutral," "implemented by everybody," "freely extensible by anybody, and" "cleanly and thoroughly specified." In short order, a project road map was built. The effort quickly attracted more than 150 supporters including Dave Sifry of Technorati, Mena Trott of Six Apart, Brad Fitzpatrick of LiveJournal, Jason Shellen of Blogger, Jeremy Zawodny of Yahoo!, Timothy Appnel of the O'Reilly Network, Glenn Otis Brown of Creative Commons and Lawrence Lessig. Other notables supporting Atom include Mark Pilgrim, Tim Bray, Aaron Swartz, Joi Ito, and Jack Park. Also, Dave Winer, the key figure behind RSS 2.0, gave tentative support to the Atom endeavor (which at the time was called Echo.) After this point, discussion became chaotic, due to the lack of a decision-making process. The project also lacked a name, tentatively using "Pie," "Echo," and "Necho" before settling on Atom. After releasing a project snapshot known as Atom 0.2 in early July 2003, discussion was shifted off the wiki. The discussion then moved to a newly set up mailing list. The next and final snapshot during this phase was Atom 0.3, released in December 2003. This version gained widespread adoption in syndication tools, and in particular it was added to several Google-related services, such as Blogger, Google News, and Gmail. Google's Data APIs (Beta) GData are based on Atom 1.0 and RSS 2.0. Atom 1.0 and IETF standardization In 2004, discussions began about moving the Atom project to a standards body such as the W3C or the Internet Engineering Task Force (IETF). The group eventually chose the IETF and the Atompub working group was formally set up in June 2004, finally giving the project a charter and process. The Atompub working group is co-chaired by Tim Bray (the co-editor of the XML specification) and Paul Hoffman. Initial development was focused on the syndication format. The final draft of Atom 1.0 was published in July 2005 and was accepted by the IETF as a "proposed standard" in August 2005. Work then continued on the further development of the publishing protocol and various extensions to the syndication format. The Atom Syndication Format was issued as a proposed "internet official protocol standard" in IETF RFC 4287 in December 2005 with the help of the co-editors Mark Nottingham and Robert Sayre. Post-Atom technical developments related to web syndication In January 2005, Sean B. Palmer, Christopher Schmidt, and Cody Woodard produced a preliminary draft of RSS 1.1. It was intended as a bugfix for 1.0, removing little-used features, simplifying the syntax and improving the specification based on the more recent RDF specifications. As of July 2005, RSS 1.1 had amounted to little more than an academic exercise. In April 2005, Apple released Safari 2.0 with RSS Feed capabilities built in. Safari delivered the ability to read RSS feeds, and bookmark them, with built-in search features. Safari's RSS button is a blue rounded rectangle with "RSS" written inside in white. The favicon displayed defaults to a newspaper icon. In November 2005, Microsoft proposed its Simple Sharing Extensions to RSS. 
In December 2005, Microsoft announced in blogs that Internet Explorer 7 and Microsoft Outlook 12 (Outlook 2007) will adopt the feed icon first used in the Mozilla Firefox, effectively making the orange square with white radio waves the industry standard for both RSS and related formats such as Atom. Also in February 2006, Opera Software announced they too would add the orange square in their Opera 9 release. In January 2006, Rogers Cadenhead relaunched the RSS Advisory Board in order to move the RSS format forward. In January 2007, as part of a revitalization of Netscape by AOL, the FQDN for my.netscape.com was redirected to a holding page in preparation for an impending relaunch, and as a result some news feeders using RSS 0.91 stopped working. The DTD has again been restored. HTML5 In 2013 the Candidate Recommendation for HTML5 included explicit provision for syndication by introducing the 'article' element. See also History of the World Wide Web References External links Early RSS history from several different personal points of view History of RSS compiled in 2003 by Joseph Reagle History of RSS compiled in 2004 by Dave Winer History of the RSS fork compiled in 2002 by Mark Pilgrim History of ESS Feed in 2012 Web syndication technology Web syndication formats History of computing History by topic
History of web syndication technology
[ "Technology" ]
2,606
[ "Computers", "History of computing" ]
9,538,987
https://en.wikipedia.org/wiki/Faked%20death
A faked death, also called a staged death, is the act of an individual purposely deceiving other people into believing that the individual is dead, when the person is, in fact, still alive. The faking of one's own death by suicide is sometimes referred to as pseuicide or pseudocide. People who commit pseudocide can do so by leaving evidence, clues, or through other methods. Death hoaxes can also be created and spread solely by third-parties for various purposes. Committing pseudocide may be done for a variety of reasons, such as to fraudulently collect insurance money, to evade pursuit, to escape from captivity, to arouse false sympathy, or as a practical joke. While faking one's own death is not inherently illegal, it may be part of a fraudulent or illicit activity such as tax evasion, insurance fraud, or avoiding a criminal prosecution. History Deaths have been faked since ancient times, but the rate increased significantly in the middle of the 19th century, when life insurance, and insurance fraud, became more common. In the late 20th century, advancements in technology began to make it increasingly more difficult to simply disappear after faking a death. Such things as credit card purchases, social media, and mobile phone systems, among others, have made it harder to make a clean break with a past identity. Widespread use of facial recognition tools can connect new identities to old social media accounts. Other factors include a desire of fakers to observe the reactions of others to their deaths, which may prompt them to check websites for information about their disappearances, which in turn could lead to their discovery through Internet geolocation. Motivation While some people fake their deaths as a prank or self-promotion effort, or to get a clean start, the most common motivations are money or a need to escape an abusive relationship. Men are more likely to fake their deaths than women. People who fake their deaths often feel like they are trapped in a desperate situation. Because of this, an investigation may be triggered if the person disappears, no body is found, and the person is in significant financial difficulties. Many people who fake their deaths intend for the change to be temporary, until a problem is resolved. Methods People who fake their own deaths often do so by trying to pretend drowning, because it provides a plausible reason for the absence of a body. However, drowned bodies usually appear within a few days of a death, and when no body appears, a faked death is suspected. Outcome Although firm figures are impossible to identify, investigators can resolve nearly all of the cases they receive, and researchers believe that most people are caught. Most people are caught quickly, within hours or days. For example, Marcus Schrenker faked a plane crash to avoid prosecution and was captured two days later, after he sent an e-mail message to a friend about his plans. Faking a death is not a victimless act. The people who grieved what they believed was a real death are usually angry and sometimes see the offense as being unforgivable. Accomplices, such as romantic partners and children, may be asked to commit crimes, such as filing false insurance claims or making false reports to the police, which can result in criminal charges. Those who are unaware that the death is fake may feel emotionally abused or manipulated. Rather than being happy or relieved to discover that the faker is alive, they may be angry and refuse to have any further contact. 
On social media False claims of death, including false claims of suicide, are not uncommon in social media accounts. The people who do this are often trying to get an advantage for themselves, such as more attention or likes, and they lie about their deaths "without thinking about the fact that there are people who would be upset, hurt or psychologically affected by the news of their death". It may be an intentional effort to manipulate other people's emotions or to see how people would react if they had died. Online, people have claimed to be dead as a response to real or perceived mistreatment on social media, and posting news of their death, especially their suicide, is a way to punish the other users. Examples of faked deaths on social media include BethAnn McLaughlin, a white woman who claimed to be Native American under another name on Twitter, and whose deception was uncovered after she faked her death during the COVID-19 pandemic. Kaycee Nicole in 2001 represented not just a fake death on social media, but also a fake person; she was the fictional creation of a middle-aged woman, and one of the first internet hoaxes to pretend that a character was dying. Notable faked deaths 1st century Yohanan ben Zakkai faked his death to escape from the Roman army. 14th century Joan of Leeds was a nun who faked her death to escape from a convent. 18th century Timothy Dexter was an eccentric 18th-century New England businessman probably best known for his punctuationless book A Pickle for the Knowing Ones. However, he is also known for having faked his own death to see how people would react. He paid his wife and members of his family with instructions to act. After the funeral he caned his wife for her poor acting by not looking sufficiently saddened at his passing. 20th century Grace Oakeshott, British women's rights activist, faked her death in 1907 to get out of her marriage. She lived the remainder of her life in New Zealand and died in 1929. Violet Charlesworth, a British fraudster, faked her death in 1909 to escape payment of debts. She was sentenced to three years in prison and released in 1912. C. J. De Garis, an Australian aviator and entrepreneur, faked his death in 1925 and became the subject on an eight-day nationwide search, before being spotted on a ship in New Zealand. He committed suicide in 1926. Aleister Crowley, English occultist and author, faked his death in 1930 in Portugal aided by Portuguese poet Fernando Pessoa, and then appeared three weeks later publicly in Berlin. Crowley actually died in 1947. Alfred Rouse, an English murderer, set his own car on fire in 1930 with a different man inside, in an attempt to convince the police that Rouse had died in the vehicle. He was arrested and convicted, and executed in 1931. The identity of the victim remains unknown. Aleksandr Uspensky, Russian government official, faked his own suicide in 1938 in an attempt to avoid capture by Soviet authority during the Great Purge. He was captured in 1939 and executed in 1940. Ferdinand Waldo Demara, American fraudster, faked his death in 1942. He actually died in 1982. Horst Kopkow, German SS major and war criminal, was declared dead by his MI6 handlers in January 1948. In reality he had been relocated to West Germany, where he died in 1996. Juan Pujol García, Spanish spy, faked his death from malaria in Angola in 1949, with help from the British spy agency MI5. He lived the remainder of his life in Venezuela and died in 1988. 
Lawrence Joseph Bader, an American salesperson, disappeared in 1957 and was presumed dead. He was found alive five years later assuming the identity of "John 'Fritz' Johnson", working as a local TV personality in Omaha, Nebraska. He either had amnesia of his life or was a hoaxer. He actually died in 1966, aged 39. Ken Kesey, American novelist, faked his suicide in 1965. He died in 2001. John Allen, a British criminal and murderer, faked his own death in 1966 to avoid prosecution for crimes he had committed. Allen actually died in 2015. John Stonehouse, a British politician who in November 1974 faked his own suicide by drowning to escape financial difficulties and live with his mistress. One month later, he was discovered in Australia. Police there initially thought he might be Lord Lucan (who had disappeared only a few weeks earlier, after being suspected of murder) and jailed him. Sent back to Britain, he was convicted and sentenced to seven years in prison for fraud. Jerry Balisok, an American professional wrestler, successfully convinced the FBI that he had died in 1978 in the Jonestown Massacre to avoid fraud charges, assuming the identity "Ricky Allen Wetta". A decade later, Wetta was arrested and convicted for attempted murder, at which point he was determined to be Balisok. Balisok actually died in 2013 while in prison for an unrelated crime. Audrey Marie Hilley, an American murderer, jumped bail in 1979 and lived under the assumed identity of Robbi Hannon. In 1982, under a different alias, she announced the death of Hannon. She was captured and imprisoned, and died in 1987. Robert Lenkiewicz, a British artist, had his death falsely announced to the newspapers in 1981. In reality he was in hiding with his friend Peregrine Eliot, 10th Earl of St Germans. Lenkiewicz later stated that he engineered the stunt because it was the closest he could get to knowing what it was like to be dead. Sukumara Kurup, an Indian who faked his own death by placing the corpse of his murder victim in his car and setting it on fire in 1984. The face of victim was charred beforehand to prevent identification. He did it to collect the money insured on his name. The police identified the victim and his accomplices were put on trial. He evaded arrest and is in a fugitive list of Interpol and Kerala Police. David Friedland, a former New Jersey senator, faked his own death via scuba-diving accident in 1985 while awaiting trial on racketeering charges. In December 1987, he was arrested by officials in Maldives, where he had been working as a scuba dive master and had posed in scuba gear for a picture post card. He eventually was returned to the United States and served nine years in prison. Friedland died in 2022. Charles Peter Mule, a Louisiana policeman, was charged with 29 counts related to the rape and molestation of several young girls in 1988. After being released on bail, Mule left his truck alongside a bridge and sent a note to his police department. His claimed suicide was ruled inconclusive after police failed to find a corpse in the river, and a hiker reported to police that a man had opened fire on him without warning and whose description matched Mule's. After the case was profiled on the television show Unsolved Mysteries, Mule was captured. Philip Sessarego, British author, faked his death by car bomb in Croatia in 1991 for unknown reasons, and lived under an assumed name for the next 17 years, with his own family only learning he was alive when he appeared in a 2001 TV interview. 
He died of an accidental poisoning in 2008. Russell Causley, a British man, faked his death by jumping off a ferry off the coast of Guernsey in 1993 as part of an insurance scam. His scheme was soon uncovered and he was jailed for fraud; this led to the police re-opening an investigation into the disappearance of his partner Carole Packman, who Causley would be convicted of murdering in 1996. Francisco Paesa, an agent of Centro Nacional de Inteligencia, the Spanish secret service, faked a fatal cardiac arrest in 1998 in Thailand, after having tricked Luis Roldán, known for being the general of the Spanish Civil Guard when a big scandal of corruption arose in 1993, into stealing all the money that Roldán had previously stolen in that case. He appeared in 2004. During these years, he opened an offshore company, which was exposed thanks to the leaking of the Panama Papers. Friedrich Gulda, Austrian pianist, falsely announced his death in 1999 to create publicity for a following "resurrection concert". He died in 2000. 21st century John Darwin, a former teacher and prison officer from Hartlepool, England, faked his own death on 21 March 2002 by canoeing out to sea and disappearing. His ruse fell apart in 2006 when a simple Google search revealed a photo of him buying a house in Panama. Darwin and his wife, Anne, were arrested and charged with fraud, deception, and money laundering related to the life insurance payout of £250,000. They were each sentenced to more than six years in prison, and all their property sold, and all their money taken, including his pension, to repay. Clayton Counts, American musician, reported himself dead on his website in 2007 as a prank. He actually died in 2016. Samuel Israel III, an American hedge fund manager who was facing 22 years in prison for financial malfeasance and fraud, left his truck and a suicide note at a bridge in an attempted fake suicide in April 2008. Authorities suspected that his suicide was faked since, among other things, passersby reported that a car had picked someone up on the bridge from near Israel's abandoned car. Two years were added to Israel's sentence for obstruction of justice, which he is currently serving. Marcus Schrenker, a financial manager from Fishers, Indiana, US, was charged with defrauding clients, and in 2009 attempted to fake his own death in a plane crash to avoid prosecution. The plane crash was quickly discovered to be staged, and Schrenker was captured two days later, after he sent an e-mail message to a friend about his plans. In October 2010, after pleading guilty to state charges, Schrenker was sentenced to 10 years in prison and was fined $633,781. Luke Rhinehart, American author, an email was sent out in August 2012 to 25 of Rhinehart's friends, informing them of his death. This was actually a hoax and a prank played by Rhinehart himself. The reactions of Rhinehart's 25 friends ranged from sorrow to gratitude and amusement. Chandra Mohan Sharma, Indian activist, murdered a homeless man, placed the body in his own car, and set the car on fire, in an attempt at faking his death in 2014 to get out of his marriage. He was captured by police later that year. Arkady Babchenko, a Russian journalist living in Ukraine who in 2018 faked his own assassination, which was widely reported in the international press, as part of a sting operation aimed at exposing an agent sent to kill him. Babchenko's appearance at a press conference the day after his "death" caused an international sensation. 
Nicholas Alahverdian, an American child welfare advocate and convicted sex offender from Rhode Island, who purported to have died in February 2020, was found alive by police in Scotland in January 2022. Kim Avis, a busker and market trader from Inverness, Scotland, and a local celebrity there, was reported dead in California in 2019, but in the 2024 BBC Two documentary Disclosure: Dead Man Running, reporter and Inverness local Myles Bonnar uncovered evidence that Avis faked his death to evade charges of sexual assault. Conspiracy theories and false speculation On occasion, when a prominent public figure such as a singer or political leader dies, there are rumors that the figure in question did not actually die, but faked their death. These theories are all considered fringe theories. Suspected faked deaths include: Adolf Hitler, dictator of Nazi Germany (1933–1945), has been speculated (including by writer Emil Ludwig) to have faked his death and escaped Berlin in mid-1945, the time and place of his death as established by Western scholars. Hitler is claimed to have utilized established escape routes while leaving behind misleading evidence such as his dental remains (via dentures and a broken-off jawbone) as well as a body double. Harold Holt, former Prime Minister of Australia, disappeared from a beach in 1967, with the consensus being that he had drowned. Different theories emerged suggesting he had faked his death for any number of reasons, most famously that he was a Chinese spy who had been collected by a Chinese submarine, or that he feigned drowning to run away with his mistress. American singer Elvis Presley died in August 1977. Rumors claimed that he faked his death and went into hiding, and many fans have claimed to have sighted Elvis (whose face was well known) in various places around the world. The earliest known alleged sighting of Elvis after his death was at the Memphis International Airport, where a man who resembled Elvis gave the name "John Burrows", which was the same name Elvis used when booking hotels. In 1978, Gail Brewer-Giorgio published a book titled Orion, a novel about a fictional Presley-like singer called "Orion", who in the story faked his death to escape the pressures of fame. According to Brewer-Giorgio, her publisher inexplicably had her novel recalled from stores, which made her wonder if the real Elvis Presley had faked his death. She then began an investigation and wrote another book, The Most Incredible Elvis Presley Story Ever Told AKA Is Elvis Alive?, in which she claimed that Elvis was faking his death. In 2017, Elvis fans claimed to have seen the singer visit his home Graceland on his 82nd birthday. Towards the end of his reign, Alexander I, Emperor of Russia (1801–1825), grew increasingly suspicious of those around him and increasingly religious. He then caught typhus and died. Russian legends claim that the Tsar faked his death and left for Siberia, where he became a hermit and took on the name "Feodor Kuzmich". Such legends existed during Kuzmich's lifetime. When Kuzmich was on his deathbed in 1876, the priest there to perform the last rites asked him if he was Tsar Alexander. Kuzmich replied with a vague sentence that did not answer the question. Historians are skeptical of the claim that Tsar Alexander I was Feodor Kuzmich. After rapper Jarad Higgins, known as Juice WRLD, died from a drug overdose at the age of 21, many fans speculated that his lyrics suggested that he expected to die young and thus could have faked his death.
For example, in "Legends", he sings, "What's the 27 club? We ain't making it past 21," referring to a group of famous artists who died at the age of 27 (e.g. Kurt Cobain, Amy Winehouse, Jim Morrison, Janis Joplin, and Jimi Hendrix). Pseudocides in fiction Romeo and Juliet (Shakespeare) – To avoid a forced marriage, Juliet drinks a potion that causes her to appear dead for 42 hours. This backfires when Romeo hears of her death, unaware she was going to wake up, and kills himself, leading to Juliet also killing herself. In Gabrielle-Suzanne de Villeneuve's fairy tale Beauty and the Beast, Beauty's and her fairy mother's deaths were staged due to an evil fairy's plot to harm them and their family members, one of whom being the Prince she turned into a Beast for rejecting her marriage proposal. In The Adventure of the Empty House, Sherlock Holmes to Dr. Watson several years after his presumed death grappling with Professor Moriarty at the Reichenbach Falls. Holmes explains that he survived the fall where Moriarty did not, but had to remain "officially" dead while Moriarty's lieutenant, Sebastian Moran, was still at large. This event was loosely adapted by Steven Moffat for the 2010s television series Sherlock starring Benedict Cumberbatch and Martin Freeman in the episode "The Reichenbach Fall". Holmes is the subject of Jim Moriarty's work to undermine him in the public's view to drive Holmes to suicide. Moriarty instead kills himself and Holmes appears to kill himself to save his friends, but survives with the help of his brother Mycroft Holmes and returns to his work in the next episode, "The Empty Hearse". Adventures of Huckleberry Finn – to escape both his drunken father and his strict legal guardian, the main character fakes his own murder. The Fall and Rise of Reginald Perrin Gone Girl (2014): In the bestselling book and film, the Dunne marriage is falling apart after the husband is discovered to be having an affair and the wife commits pseudocide and travels to the western U.S. House, M.D.: Dr. Gregory House, the titular character of the television series, fakes his death in the series finale by switching dental records with a deceased patient. Gregory House, based on the character of Sherlock Holmes, commits pseudocide just as Holmes did in "The Adventure of the Empty House". The Outsider (1953) by Richard Wright tells the story of Cross Damon, who survives a subway accident but leaves his coat on another man's severely disfigured corpse. Investigators assume it is Cross' body, and he takes the opportunity to escape his previous life. In “What About Bob?” (1991), the title character, Bob Wiley (Bill Murray) is attempting to keep in touch with his psychiatrist Dr. Leo Marvin (Richard Dreyfuss) and poses as a detective to Dr. Marvin’s exchange staff to tell them that Bob committed suicide. While posing as a detective, Bob asks for postal information to Dr. Marvin’s residence in Lake Winnipesaukee. Pretty Little Liars (2010): A high-school student fakes her death in order to rid herself of a stalker in the episode "-A". Mona Vanderwaal, another character, also attempted to fake her own murder. Despicable Me 2 (2013): While Gru, Nefario and the girls are fighting the purple minions, Eduardo Perez reveals himself to be El Macho, a villain who faked his death by jumping out of a plane while standing on the back of a shark, having strapped two hundred and fifty pounds of dynamite to his chest, into the mouth of a volcano, which would end up killing both him and the shark. 
Big Hero 6 (2014): When Hiro manages to knock off the supervillain's mask at a teleportation research facility on an island, he expects the villain to be Krei, but the masked figure is revealed to be Professor Callaghan instead, who had faked his death: he escaped the burning building by using Hiro's microbots to shield himself from the flames, which instead killed his student, Hiro's brother Tadashi, who had rushed into the burning building to save him. The Simpsons: Homer Simpson fakes his death to take a day off from work in the episode "Mother Simpson". In another episode, "Bart the Fink", Krusty the Clown twice fakes his death. Grand Theft Auto V: This video game portrays a faked death. In the first mission, "Prologue", Michael Townley (one of the protagonists) robs a bank in North Yankton, uses a bullet-hit squib to fake his death, and moves to Los Santos under the fake name "Michael De Santa", claiming to be in witness protection. Alarm für Cobra 11 – Die Autobahnpolizei: At the end of the season 6 finale "Ein Einsamer Sieg" (English: A Lonely Victory), Andre Fux is injured by the antagonist at sea. Andre is later rescued by a fisherman, who agrees to keep the secret of his faked death, and begins a new life with a new family. Fourteen years later, in the episode "Auferstehung" (Resurrection), Andre is reunited with Semir Gerkhan, his partner, who is still in the police. Semir learns that Andre's family had been killed. In the climactic scene, there is a car crash in the mountains. Semir tries to save Andre, but Andre falls off a cliff and dies. Before dying, he gives Semir information about who killed his family. Kathy Beale in EastEnders faked her death and stayed away for 10 years, making a return during the show's 30th anniversary in 2015. Yakuza 6: Kazuma Kiryu faked his death to protect Haruka Sawamura, those around her, and his friends. While under the radar, he helped Ichiban Kasuga in Yakuza: Like a Dragon by giving him the information he needed after a duel. Who Killed Sara? – Faked deaths appear a few times. In the James Bond film The Living Daylights, Bond fakes General Leonid Pushkin's death during a conference in Tangier, making it appear that General Georgi Koskov and Brad Whitaker's plan to assassinate Pushkin had succeeded. In the James Bond film Spectre, Bond's adoptive brother Franz Oberhauser faked being killed in an avalanche alongside his father. In doing so, he took up the alias of Ernst Stavro Blofeld. Nora Prentiss, in which a man fakes his own death and is later charged with his own murder. In the South Park episode "Marjorine", the main characters fake Butters' death and have him disguise himself as the new girl, Marjorine, in order to steal a paper fortune teller from the girls. True-crime genre Several books and television shows are dedicated to the theme of faked deaths. These include the 2014 television show Nowhere to Hide on Investigation Discovery, hosted by private investigator Steve Rambam. See also References Further reading Suicide Deception Death Fraud Insurance fraud Practical jokes
Faked death
[ "Biology" ]
5,181
[ "Behavior", "Human behavior", "Suicide" ]
9,539,181
https://en.wikipedia.org/wiki/Hydrastine
Hydrastine is an isoquinoline alkaloid which was discovered in 1851 by Alfred P. Durand. Hydrolysis of hydrastine yields hydrastinine, which was patented by Bayer as a haemostatic drug during the 1910s. It is present in Hydrastis canadensis (thus the name) and other plants of the family Ranunculaceae. Total synthesis The first attempt at the total synthesis of hydrastine was reported by Sir Robert Robinson and co-workers in 1931. In the studies that followed, the synthesis of the key lactonic amide intermediate (structure 4 in the figure) proved the most troublesome step; the major breakthrough came in 1981, when J. R. Falck and co-workers reported a four-step total synthesis of hydrastine from simple starting materials. The key step in the Falck synthesis was the use of a Passerini reaction to construct the lactonic amide intermediate 4. Starting from a simple phenyl bromide variant 1, an alkylation reaction with lithium methylisocyanide gives the isocyanide intermediate 2. Reacting isocyanide intermediate 2 with opianic acid 3 initiates the intramolecular Passerini reaction to give the key lactonic amide intermediate 4. The tetrahydroisoquinoline ring was formed first by a ring-closure reaction under dehydration conditions using POCl3 and then by hydrogenation using PtO2 as the catalyst. Finally, hydrastine was obtained by installing the N-methyl group via a reductive amination reaction with formaldehyde. See also Bicuculline (very similar in structure) References External links Benzylisoquinoline alkaloids GABAA receptor antagonists Total synthesis 3-(5,6,7,8-tetrahydro-(1,3)dioxolo(4,5-g)isoquinolin-5-yl)-3H-2-benzofuran-1-ones
Hydrastine
[ "Chemistry" ]
414
[ "Total synthesis", "Alkaloids by chemical classification", "Tetrahydroisoquinoline alkaloids", "Chemical synthesis" ]
9,539,347
https://en.wikipedia.org/wiki/Allomerism
Allomerism is the similarity in the crystalline structure of substances of different chemical composition. References Penguin Science Dictionary 1994, Penguin Books Solid-state chemistry
Allomerism
[ "Physics", "Chemistry", "Materials_science" ]
32
[ "Physical chemistry stubs", "Condensed matter physics", "nan", "Solid-state chemistry" ]
9,539,477
https://en.wikipedia.org/wiki/Tethered%20balloon
A tethered, moored or captive balloon is a balloon that is restrained by one or more tethers attached to the ground so it cannot float freely. The base of the tether is wound around the drum of a winch, which may be fixed or mounted on a vehicle, and is used to raise and lower the balloon. A balloon is a form of aerostat, along with the powered free-flying airship, although the American GAO has used the term "aerostat" to describe a tethered balloon in contrast to the powered airship. Tethered balloons have been used for advertising, recreation, observation, and civil or military uses. Design principles Early balloons were simple round spheres, with a payload hung beneath. The round shape uses the minimum material to accommodate a given volume of lifting gas, making it the lightest construction. However, in any significant wind the round shape is aerodynamically unstable and will bob about, risking damage or the balloon breaking free. To avoid this problem, the kite balloon was developed. This form has an elongated shape to reduce wind resistance and some form of tail surface to stabilize it so that it always points into the wind. Like the powered airship, such balloons are often called blimps. A hybrid tethered balloon or kytoon is shaped to provide aerodynamic lift similar to a kite, as well as to reduce drag. History Designed by Albert Caquot, a French engineer, in 1914, the barrage balloons of World War I and World War II were early examples of tethered balloons. Military observation balloons were also used extensively in World War I. These early types used hydrogen as their lifting gas. Tethered balloons are used for lifting cameras, radio antennas, electro-optical sensors, radio-relay equipment and advertising banners – often for long durations. Tethered balloons are also used for position marking and bird control work. Typically, they use non-flammable helium gas to provide lift. Modern use Advertising Tethered balloons are often used for advertising, either by lifting advertisement signs, or by using a balloon with advertisements written on, or attached to it. Often both methods are combined. It is not uncommon to use specially designed balloons. Blimp-shaped balloons are especially popular for advertising use. By suspending a light source within the envelope, the balloon can be illuminated at night, drawing attention to its message. Earth sciences The United States Geological Survey uses tethered balloons to carry equipment to places where conventional aircraft cannot go, such as above an erupting volcano. Tethered balloons are ideal as they can easily remain more or less in one place, are less likely to be damaged by volcanic ash, and are less expensive to operate than a helicopter. Leisure Tethered balloons are frequently used as a recreational attraction. Telecommunications Tethered balloons can be used as temporary transmitters, instead of a radio mast, either by using the tether which holds the balloon as the antenna, or by carrying antennas on the balloon fed by a fiber optic or radio frequency cable contained inside the tether. The advantage of tethered balloons is that great antenna heights are easily attainable and they are significantly cheaper than erecting a temporary mast. This allows for more localized coverage with a high capacity within a radius of the balloon at an altitude between above ground level. Tethered balloons or blimps have been studied to overcome the limitations of terrestrial cell towers for telecommunications. 
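The coverage gain from raising an antenna can be illustrated with a simple geometric radio-horizon estimate. The short sketch below is illustrative only: it assumes a smooth spherical Earth and ignores atmospheric refraction and terrain, and it is not taken from any particular tethered-balloon deployment.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def horizon_distance_km(antenna_height_m: float) -> float:
    """Approximate line-of-sight distance to the horizon for an
    antenna at the given height, using d ~ sqrt(2 * R * h)."""
    return math.sqrt(2 * EARTH_RADIUS_M * antenna_height_m) / 1000.0

# Compare a typical mast height with altitudes reachable by a tethered balloon.
for height in (50, 300, 1000):
    print(f"antenna at {height:>4} m -> horizon ~ {horizon_distance_km(height):6.1f} km")
```

Under these simplifying assumptions, raising the antenna from a 50 m mast to a balloon at 1,000 m increases the geometric horizon from roughly 25 km to over 110 km, which is the effect described in the following sentences.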
Because of their higher elevation they can provide a larger coverage area and better line of sight than ground-based towers, while being more economical and offering lower latency than satellite systems. Security and defense During the 1990 Invasion of Kuwait, the first indication of the Iraqi ground advance came from a radar-equipped tethered balloon that detected Iraqi armor and air assets moving south. Tethered surveillance balloons were used in the 2004 American occupation of Iraq. They utilized a high-tech optics system to detect and observe enemies from miles away. They have been used to overwatch foot patrols and convoys in Baghdad and Afghanistan, and are permanently installed above US military bases in Kabul and Bagram. The US Drug Enforcement Administration has contracted with Lockheed Martin to operate a series of radar-equipped tethered balloons to detect low-flying aircraft attempting to enter the United States. A total of twelve tethered balloons, called the Tethered Aerostat Radar System (TARS), are positioned approximately apart, from California to Florida to Puerto Rico, providing unbroken radar coverage along the entire southern border of the US. The U.S. Army has developed a tethered aerostat to perform operational testing at Aberdeen Proving Ground beginning in 2015. The system, called JLENS, uses two moored balloons designed to provide over-the-horizon missile defense capability. An Israeli missile detection system called Sky Dew, similar to TARS and JLENS, has been in use since 2022. See also Aerostatics Barrage balloons Aerophile SA Raven Aerostar Worldwide Aeros Corp References Aircraft configurations Balloons (aeronautics) French inventions
Tethered balloon
[ "Engineering" ]
1,017
[ "Aircraft configurations", "Aerospace engineering" ]
13,548,775
https://en.wikipedia.org/wiki/Roman%20cement
Roman cement is a substance developed by James Parker in the 1780s and patented in 1796. The name is misleading, as it is nothing like any material used by the Romans, but was a "natural cement" made by burning septaria – nodules that are found in certain clay deposits, and that contain both clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder. This product, made into a mortar with sand, set in 5–15 minutes. The success of Roman cement led other manufacturers to develop rival products by burning artificial mixtures of clay and chalk. History There has been a recent resurgence of interest in natural cements and Roman cements, due mainly to the need to repair façades done in this material in the 19th century. Much of the confusion surrounding this subject arises from the terminology used. Roman cement was originally the name given by Parker to the cement he patented, which is a natural cement (i.e. it is a marl, or limestone containing integral clay, dug out of the ground, burnt and ground to a fine powder). In 1791, Parker was granted a patent "Method of Burning bricks, Tiles, Chalk". His second patent, in 1796, "A certain Cement or Terras to be used in Aquatic and other Buildings and Stucco Work", covers Roman cement, the term he used in a 1798 pamphlet advertising his cement. He set up his manufacturing plant on Northfleet Creek, Kent. Notably, the material was patented relatively late, yet James Parker still receives all the credit. Later, in the 1800s, various sources of the correct type of marl, also known as cement stone, were discovered across Europe, and so a range of natural cements (with varying properties) came into use across Europe. An Austrian standard from 1880, providing a contemporary definition of Roman cements, reads: "Roman cements are products obtained from argillaceous marlstones by burning below the sintering temperature. They do not slake in contact with water and must therefore be ground to a floury fineness." From around 1807, a number of people looked to make artificial versions of this cement (or, more strictly, hydraulic lime, as it was not burnt at fusion temperatures). Amongst these were James Frost, who had about twenty patents from 1811 to 1822, including one for "British Cement", and, in 1824, Joseph Aspdin, a British bricklayer from Leeds, with his now famous patent for a method of making a cement he called "Portland cement". This was done by adding various materials together to make an artificial version of natural cement. The name "Portland cement" is also recorded in a directory published in 1823, where it is associated with William Lockwood, Dave Stewart, and possibly others. There then followed a number of independently discovered or copied versions of this "Portland cement" (also referred to as proto-Portland cement). Proto-Portland cement had a different chemical makeup from other natural cements being produced at the same time: it was burnt at a higher temperature than other natural cements and thus crosses the barrier between traditional vertical-kiln-fired natural cement and the later horizontal-kiln-fired artificial cements. This cement is not, however, the same as modern ordinary Portland cement, which can be defined as an artificial cement. James Frost is reported to have erected a manufactory for making an artificial cement in 1826. In 1843, Aspdin's son William improved their cement, which was initially called "Patent Portland cement," although he had no patent. 
In 1848, William Aspdin further improved his cement, and in 1853 he moved to Germany, where he was involved in cement making. William Aspdin made what could be called meso-Portland cement (a mix of Portland cement and hydraulic lime). Development in the 1860s of rotating horizontal kiln technology brought dramatic changes in properties, arguably resulting in modern cement. It is certainly difficult to determine whether an old render was made with a natural cement (single-source marl) or an artificial one, but there is no doubt as to whether the cement was fired in a vertical or a horizontal kiln. The names natural cement and Roman cement thus define a cement coming from a single source rock. The terms early or proto-Portland cement can be used for early cements made from a number of sourced and mixed materials. There is no widely used terminology for these 19th-century cements. In order to rediscover this technology, two projects have been carried out by the European Union: ROCEM and subsequently ROCARE (an ongoing project). Both deal only with natural cement, referred to as Roman cement, without reference to the early artificial cements. References Notes Bibliography Thurston A P, Parker's "Roman" Cement, Transactions of the Newcomen Society 1939, pp. 193–206 (Newcomen Society) Major A J Francis, The Cement Industry 1796–1914: A History, 1977, Davis & Charles (Publishers) Ltd, Devon UK, North Pomfret Vermont US, North Vancouver Canada Weber J, Mayr N, Bayer K, Hughes D, Kozłowski R, Stillhammerova M, Ullrich D, Vyskocilova R (2007) Roman cement mortars in Europe’s architectural heritage of the 19th century. Journal of the American Society for Testing Materials International, Vol. 4, No. 8 Footnotes External links Parker's Roman Cement 1796 Building materials Cement
Roman cement
[ "Physics", "Engineering" ]
1,097
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
13,549,046
https://en.wikipedia.org/wiki/Micro%20T-Kernel
μT-Kernel is an open source real-time operating system (RTOS) designed for 16- and 8-bit microcontrollers. The "μ" in the name stands for "micro" and is pronounced as such; it is not pronounced as "mu". It is freely available under the T-License, and a list of supported CPUs is available. The latest version, μT-Kernel 3.0, is available from GitHub. μT-Kernel was standardized by the T-Engine Forum (now merged into the TRON Forum) and later became the basis of IEEE Standard 2050-2018, "IEEE Standard for a Real-Time Operating System (RTOS) for Small-Scale Embedded Systems", published by the Institute of Electrical and Electronics Engineers (IEEE) Standards Association (IEEE SA). Its specification is available in both English and Japanese. The source code is available from the TRON Forum website and GitHub. An article comparing nine RTOSs, in which μT-Kernel was evaluated and given favorable remarks, appeared in an IEEE publication. History μT-Kernel was developed as a smaller subset of T-Kernel, a full-featured real-time operating system. For example, unlike the original T-Kernel, it does not assume the use of an MMU. For more on its history and the overall philosophy behind the TRON real-time OS family, see the entry on T-Kernel. See also T-Kernel References External links , TRON Forum μT-Kernel specifications in English and Japanese IEEE Publishes Standard Addressing Real-Time Architecture for Embedded Systems Information about T-Engine, T-Kernel and μT-Kernel Programming Introducing the μT-Kernel μT-Kernel for M16C/62P source code and documentation Embedded operating systems IEEE standards Internet of things Operating system kernels TRON project
Micro T-Kernel
[ "Technology" ]
370
[ "Computing platforms", "Computer standards", "TRON project", "Operating system stubs", "Computing stubs", "IEEE standards" ]
13,549,049
https://en.wikipedia.org/wiki/Equinox%20%28celestial%20coordinates%29
In astronomy, an equinox is either of two places on the celestial sphere at which the ecliptic intersects the celestial equator. Although there are two such intersections, the equinox associated with the Sun's ascending node is used as the conventional origin of celestial coordinate systems and referred to simply as "the equinox". In contrast to the common usage of spring/vernal and autumnal equinoxes, the celestial coordinate system equinox is a direction in space rather than a moment in time. In a cycle of about 25,800 years, the equinox moves westward with respect to the celestial sphere because of perturbing forces; therefore, in order to define a coordinate system, it is necessary to specify the date for which the equinox is chosen. This date should not be confused with the epoch. Astronomical objects show real movements such as orbital and proper motions, and the epoch defines the date for which the position of an object applies. Therefore, a complete specification of the coordinates for an astronomical object requires both the date of the equinox and of the epoch. The currently used standard equinox and epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix "J" indicates that it is a Julian epoch. The previous standard equinox and epoch was B1950.0, with the prefix "B" indicating it was a Besselian epoch. Before 1984, Besselian equinoxes and epochs were used; since then, Julian equinoxes and epochs have been used. Motion of the equinox The equinox moves, in the sense that as time progresses it is in a different location with respect to the distant stars. Consequently, star catalogs over the years, even over the course of a few decades, will list different ephemerides. This is due to precession and nutation, both of which can be modeled, as well as other minor perturbing forces which can only be determined by observation and are thus tabulated in astronomical almanacs. Precession Precession of the equinox was first noted by Hipparchus in 129 BC, when he noted the location of Spica with respect to the equinox and compared it to the location observed by Timocharis in 273 BC. It is a long-term motion with a period of 25,800 years. Nutation Nutation is a small oscillation of the Earth's rotation axis, and hence of the celestial equator and the equinox. It was first observed by James Bradley as a variation in the declination of stars. Bradley published this discovery in 1748. Because he did not have an accurate enough clock, Bradley was unaware of the effect of nutation on the motion of the equinox along the celestial equator, although that is in the present day the more significant aspect of nutation. The period of oscillation of the nutation is 18.6 years. Equinoxes and epochs Besselian equinoxes and epochs A Besselian epoch, named after German mathematician and astronomer Friedrich Bessel (1784–1846), is an epoch that is based on a Besselian year of 365.242198781 days, which is a tropical year measured at the point where the Sun's longitude is exactly 280°. Since 1984, Besselian equinoxes and epochs have been superseded by Julian equinoxes and epochs. The current standard equinox and epoch is J2000.0, which is a Julian epoch. Besselian epochs are calculated according to: B = 1900.0 + (Julian date − 2415020.31352) / 365.242198781 The previous standard equinox and epoch were B1950.0, a Besselian epoch. Since the right ascension and declination of stars are constantly changing due to precession, astronomers always specify these with reference to a particular equinox. 
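The Besselian formula above, and the analogous Julian-epoch formula given in the next section, are straightforward to apply in code. The following minimal sketch simply restates those two formulas; the function names are illustrative and not part of any standard library.

```python
def besselian_epoch(julian_date: float) -> float:
    """Besselian epoch: B = 1900.0 + (JD - 2415020.31352) / 365.242198781"""
    return 1900.0 + (julian_date - 2415020.31352) / 365.242198781

def julian_epoch(julian_date: float) -> float:
    """Julian epoch: J = 2000.0 + (JD - 2451545.0) / 365.25"""
    return 2000.0 + (julian_date - 2451545.0) / 365.25

# JD 2451545.0 is J2000.0 exactly; expressed as a Besselian epoch it falls
# slightly after B2000.0 because the two year lengths differ.
jd = 2451545.0
print(julian_epoch(jd))     # 2000.0
print(besselian_epoch(jd))  # approximately 2000.0013
```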
Historically used Besselian equinoxes include B1875.0, B1900.0, B1925.0 and B1950.0. The official constellation boundaries were defined in 1930 using B1875.0. Julian equinoxes and epochs A Julian epoch is an epoch that is based on Julian years of exactly 365.25 days. Since 1984, Julian epochs are used in preference to the earlier Besselian epochs. Julian epochs are calculated according to: J = 2000.0 + (Julian date − 2451545.0)/365.25 The standard equinox and epoch currently in use are J2000.0, which corresponds to January 1, 2000, 12:00 Terrestrial Time. J2000.0 The J2000.0 epoch is precisely Julian date 2451545.0 TT (Terrestrial Time), or January 1, 2000, noon TT. This is equivalent to January 1, 2000, 11:59:27.816 TAI or January 1, 2000, 11:58:55.816 UTC. Since the right ascension and declination of stars are constantly changing due to precession, (and, for relatively nearby stars due to proper motion), astronomers always specify these with reference to a particular epoch. The earlier epoch that was in standard use was the B1950.0 epoch. When the mean equator and equinox of J2000 are used to define a celestial reference frame, that frame may also be denoted J2000 coordinates or simply J2000. This is different from the International Celestial Reference System (ICRS): the mean equator and equinox at J2000.0 are distinct from and of lower precision than ICRS, but agree with ICRS to the limited precision of the former. Use of the "mean" locations means that nutation is averaged out or omitted. This means that the Earth's rotational North pole does not point quite at the J2000 celestial pole at the epoch J2000.0; the true pole of epoch nutates away from the mean one. The same differences pertain to the equinox. The "J" in the prefix indicates that it is a Julian equinox or epoch rather than a Besselian equinox or epoch. Equinox of Date There is a special meaning of the expression "equinox (and ecliptic/equator) of date". This reference frame is defined by the positions of the ecliptic and the celestial equator as of the date/epoch on which the position of something else (typically a solar system object) is being specified. Other equinoxes and their corresponding epochs Other equinoxes and epochs that have been used include: The Bonner Durchmusterung started by Friedrich Wilhelm August Argelander uses B1855.0 The Henry Draper Catalog uses B1900.0 Constellation boundaries were defined in 1930 along lines of right ascension and declination for the B1875.0 epoch. Occasionally, non-standard equinoxes have been used, such as B1925.0 and B1970.0 The Hipparcos Catalog uses the International Celestial Reference System (ICRS) coordinate system (which is essentially equinox J2000.0) but uses an epoch of J1991.25. For objects with a significant proper motion, assuming that the epoch is J2000.0 leads to a large position error. Assuming that the equinox is J1991.25 leads to a large error for nearly all objects. Epochs and equinoxes for orbital elements are usually given in Terrestrial Time, in several different formats, including: Gregorian date with 24-hour time: 2000 January 1, 12:00 TT Gregorian date with fractional day: 2000 January 1.5 TT Julian day with fractional day: JDT 2451545.0 NASA/NORAD's Two-line elements format with fractional day: 00001.50000000 Sidereal time and the equation of the equinoxes Sidereal time is the hour angle of the equinox. 
However, there are two types: if the mean equinox is used (that which only includes precession), it is called mean sidereal time; if the true equinox is used (the actual location of the equinox at a given instant), it is called apparent sidereal time. The difference between these two is known as the equation of the equinoxes, and is tabulated in the Astronomical Almanac. A related concept is known as the equation of the origins, which is the arc length between the Celestial Intermediate Origin and the equinox. Alternatively, the equation of the origins is the difference between the Earth Rotation Angle and the apparent sidereal time at Greenwich. Diminishing role of the equinox in astronomy In modern astronomy the ecliptic and the equinox are diminishing in importance as required, or even convenient, reference concepts. (The equinox remains important in ordinary civil use, in defining the seasons, however.) This is for several reasons. One important reason is that it is difficult to be precise what the ecliptic is, and there is even some confusion in the literature about it. Should it be centered on the Earth's center of mass, or on the Earth-Moon barycenter? Also with the introduction of the International Celestial Reference Frame, all objects near and far are put fundamentally in relationship to a large frame based on very distant fixed radio sources, and the choice of the origin is arbitrary and defined for the convenience of the problem at hand. There are no significant problems in astronomy where the ecliptic and the equinox need to be defined. References External links Celestial Coordinate System, UTK Astronomical coordinate systems Equinoxes
Equinox (celestial coordinates)
[ "Astronomy", "Mathematics" ]
2,023
[ "Time in astronomy", "Astronomical coordinate systems", "Equinoxes", "Coordinate systems" ]
13,549,080
https://en.wikipedia.org/wiki/Cross-border%20region
A cross-border region is a territorial entity made up of several local or regional authorities that are co-located yet belong to different nation states. Cross-border regions exist to take advantage of geographical conditions to strengthen their competitiveness. Cross-border regions in Europe In Europe, there are a large number of cross-border regions. Some of them are often referred to as 'Euroregions', although this is an imprecise concept that is used for a number of different arrangements. European cross-border regions are most commonly constituted through co-operation among border municipalities, districts or regions. Many cross-border regions receive financial support from the European Commission via its Interreg programme. They vary in their legal and administrative set-up but have in common that they are not 'regions' in an administrative-constitutional sense. Many cross-border regions are based on some sort of civil-law agreements among the participating authorities. For instance, the classical form of a Euroregion is the ‘twin association’: On each side of the border, municipalities and districts form an association according to a legal form suitable within their own national legal systems. In a second step, the associations then join each other on the basis of a civil-law cross-border agreement to establish the cross-border entity. Many Euroregions along the Germany–Benelux border are established according to this model, following the initiative of the EUREGIO. History The first European cross-border region, the EUREGIO, was established in 1958 on the Dutch–German border, in the area of Enschede (NL) and Gronau (DE). Since then, Euroregions and other forms of cross-border co-operation have developed throughout Europe. For local and regional authorities, engaging in cross-border regions meant they entered a field long reserved for central state actors. In the 1960s and 1970s, various bilateral and multilateral governmental commissions were established to deal with issues such as local cross-border spatial planning and transport policy, without granting access to local authorities (Aykaç 1994). But over the last 30 years, the scope for non-central governments (NCGs) to co-operate across borders has widened considerably. To a large degree, this can be related to macro-regional integration in Europe. In particular, two supranational bodies, the Council of Europe and the European Union, were important for improving the conditions under which NCGs could co-operate across borders. Whereas the Council of Europe was in the past particularly active in improving the legal situation, the Commission of the European Union now provides substantial financial support for CBC initiatives. Legally, the first cross-border regions were based on agreements with varying degrees of formality and mostly relied on good will. In 1980, on the initiative of the Council of Europe, the so-called Madrid Convention (Outline Convention on Transfrontier Co-operation) was introduced as a first step towards CBC structures based on public law. The convention has been signed by 20 countries and was more recently updated with two Additional Protocols. It provides a legal framework for completing bi- and multinational agreements for public-law CBC among NCGs. Examples of such agreements are the Benelux Convention on Cross-border Relations of 1989 and the German-Dutch Treaty of Anholt of 1991. 
The Rhine-Waal Euroregion on the Dutch–German border has been such a cross-national public body since 1993. However, the regulations delivered by such agencies are binding only on the public authorities within the cross-border area concerned and not on civil subjects (Denters et al. 1998). Compared with the Council of Europe, the CBC-related activities of the EU are primarily financial. Many CBC initiatives are eligible for support under the Interreg community initiative launched by the European Commission in 1990; this policy was re-confirmed as Interreg II in 1994 and as Interreg III in 1999. Types of European cross-border regions There are several ways in which cross-border regions can be distinguished. First, they vary in geographic scope. Small-scale initiatives such as the EUREGIO can be distinguished from larger groupings, such as the 'Working Communities'. The latter – most of them were founded between 1975 and 1985 – usually comprise several regions forming large areas that can stretch over several nation states. Examples of Working Communities are the Arge Alp, the Alpes-Adria, the Working Community of the Western Alps (COTRAO), the Working Community of the Pyrenees or the Atlantic Arc. Their organizational structures usually consist of a general assembly, an executive committee, thematic working groups and secretariats (Aykaç, 1994: 12–14), but activities tend to be confined to common declarations and information exchange. However, some groupings, such as the Atlantic Arc, succeeded in obtaining European funds (Balme et al., 1996). Smaller initiatives are technically referred to as micro cross-border regions but for simplicity they can be called Euroregions. Euroregions have a long tradition in certain areas of post-war Europe, especially on the Germany–Benelux border, where the expressions Euroregion and Euregio were coined. Organizationally, Euroregions usually have a council, a presidency, subject-matter oriented working groups and a common secretariat. The term Euroregion can refer both to a territorial unit, comprising the territories of the participating authorities, and an organizational entity, usually the secretariat or management unit. Legally, the cooperation can take different forms, ranging from legally non-binding arrangements to public-law bodies. The spatial extension of micro-CBRs will usually range between 50 and 100 km in width; and they tend to be inhabited by a few million inhabitants. In most cases, the participating authorities are local authorities, although in other cases regional or district authorities are involved. Occasionally, third organizations, such as regional development agencies, interest associations and chambers of commerce, have become official members. The organizational set-up can also differ from the original model inspired by the Dutch–German EUREGIO. Cross-border regions also differ in terms of how closely the participating parties work with each other. While some initiatives hardly go beyond ceremonial contacts, others are engaged in enduring and effective collaboration. 
For estimating the co-operation intensity of existing CBC arrangements, a catalogue of criteria proposed by the AEBR can be used: co-operation based on some type of legal arrangement, common permanent secretariat controlling its own resources existence of an explicitly documented development strategy broad scope of co-operation in multiple policy areas, similar to conventional local or regional authorities A third way of distinguishing cross-border regions considers the nature of the participating authorities. Most of the small-scale initiatives involve local authorities as the driving protagonists whereas large-scale CBC is almost exclusively driven by regional authorities. There is variance in this respect, depending on the territorial organization of different European countries. For instance, in Germany, local administration comprises two levels, the municipalities and the Kreise, with the latter being self-governed groupings of municipalities. In most cases, the Kreise are the driving force behind cross-border initiatives. By contrast, in Italy, it is meso-level authorities, the 'province' (provinces), that are usually involved in cross-border cooperation initiatives while the municipalities play a minor role because of their relative fragmentation compared to the German Kreise. In Scandinavia, as for instance in the Øresund region, both counties and large urban municipalities (Greater Copenhagen) participate in the cooperation arrangement. In general, in countries with a strong role for inter-municipal associations, cross-border co-operation is often pursued by local actors. By contrast, in countries with a two-tier regional administration and a minor role for inter-local action (such as Italy or France), cross-border regions are a domain pursued by regional authorities. Notes and references External links M Perkmann (2003): Cross-Border Regions in Europe: Significance and Drivers of Regional Cross-Border Co-operation. European Urban and Regional Studies, Vol. 10, No. 2, 153–171. M Perkmann (2007): Policy entrepreneurship and multilevel governance: a comparative study of European cross-border regions, Environment and Planning C, Vol: 25, No. 6, 861 - 879. J W Scott (1999) European and North American Contexts for Cross-border Regionalism. Regional Studies, Vol. 33, No. 7, 605 - 617. Borders International relations Types of geographical division
Cross-border region
[ "Physics" ]
1,754
[ "Spacetime", "Borders", "Space" ]
13,549,701
https://en.wikipedia.org/wiki/Apo%20Reef
Apo Reef is a coral reef system in the Philippines situated in the western waters of Occidental Mindoro province in the Mindoro Strait. Encompassing , it is considered the world's second-largest contiguous coral reef system, and is the largest in the country. The reef and its surrounding waters are protected areas administered as the Apo Reef Natural Park (ARNP). It is one of the best known and most popular diving regions in the country, and is on the tentative list for UNESCO World Heritage Sites. Geography and environment Apo Reef is about west of the coast of Mindoro. It is separated from the main island by the Apo East Pass of the Mindoro Strait. Politically, the reef lies within the jurisdiction of the Province of Occidental Mindoro in Region IV-B of the Philippines and, more specifically, of the Municipality of Sablayan. Important Bird Area The park has been designated an Important Bird Area (IBA) by BirdLife International because it supports a significant seabird population, with at least 10,000 breeding pairs recorded. Reef system Apo Reef is a roughly triangular coral atoll formation approximately from the north to the south tip, and from east to west. It is divided into two lagoon systems, the north and south lagoons, which are bounded by narrow reef platforms. It consists of almost triangular northern and southern atoll-like reefs separated by a deep channel that is open to the west. The channel runs east to west from deep, with a fine white sand bottom and numerous mounds and patches of branching corals under the deep blue water. The north lagoon is an enclosed triangular coral reef platform partly exposed during low tide. It is relatively shallow, with depths of about . The south lagoon is an inverted triangular coral platform enclosed on two sides and is about in depth. Likewise, reef limestone and coralline sand on the eastern and south-eastern sides dominantly underlie the area. Studies indicate that the modern reef has grown over an older reef formation dating from the last glacial cycle, approximately 19,000 years ago. The surface morphology of the modern reef is a product of the fluctuating sea levels as the crust under Apo Reef subsides towards the Manila Trench. Islands The main geographical feature of Apo Reef is submerged, but three islands mark it on the surface: Apo Island, Apo Menor (locally known as Binangaan) and Cayos del Bajo ("Keys of the bank", locally known as Tinangkapan). The islands are uninhabited. Since the declaration of the "no-take zone" policy at Apo Reef Natural Park in 2007, only protected area personnel and members of the Task Force MARLEN (Marine and Apo Reef Law Enforcement for Nature), who are tasked with protection and conservation work at the park, stay in the protected area on weekly shifts. Apo Island The largest is Apo Island at with mangroves and beach vegetation. The reef surrounding the island extends to in places. Outside the lagoonal mangroves in the eastern and southern sides of Apo Island, the soil is sandy-to-sandy loam that has less silt and clay particles, while the lagoonal mangroves have a sandy loam to clay loam soil, underlain by decomposed plant residues or coarse materials. Apo Island is separated from Apo Reef by a narrow, deep channel. The island is about from Mindoro and about from Nanga and Tara Islands, the nearest of the islands off Busuanga Island on the western side of the Mindoro Strait. The Apo Reef Light, situated on the north-east part of the island, warns ships about the location of this navigational hazard. 
The island is situated at a two-and-a-half-hour navigation 240° from Sablayan by pump boat (banka). The island houses a permanent ranger base which monitors the national park. An administrative desk collects the environmental fees. It is possible to stay overnight in tents subject to certain conditions. Very limited facilities are available onshore to protect the island's fragile ecosystem. Apo Menor Islet Apo Menor is located near the western end of Apo Reef, about east of Apo Island. It is a rocky limestone island with relatively little vegetation. Cayos del Bajo Cayos del Bajo are flat coralline rock formations with no vegetation on the northern lagoon near the eastern edge of the reef. At low tide, many small rocks are dry on the reef, particularly along its northern side. Conservation history The Apo Reef is a protected area of the Philippines classified as a Natural park encompassing . Of the total area, comprises the Apo Reef Natural Park while the remaining constitute a buffer zone surrounding the protected area. Prior to its declaration as a protected area, Apo Reef was first officially declared a "Marine Park" by then Philippine president Ferdinand Marcos in 1980. This was followed with the local government of Sablayan declaring the reef a special "Tourism Zone and Marine Reserve" three years later. In 1996, the entire reef was declared a protected natural park by then-president Fidel Ramos. In 2006, the Protected Areas and Wildlife Bureau of the Philippine Department of Environment and Natural Resources submitted the reef to the UNESCO World Heritage Centre for consideration as a World Heritage Site. Following a survey by the local chapter of the World Wide Fund for Nature, fishing within the reef was banned by the Philippine government in September 2007. The marine park opened for tourists to help generate funds for its protection as well as provide an alternative livelihood for hundreds of fishermen in the area. Apo Reef was declared a national park under Republic Act No. 11038 (Expanded National Integrated Protected Areas System Act of 2018) signed by President Rodrigo Duterte in July 2018. Tourism All people accessing Apo Reef need to pay an environmental fee. Tourist activities are administered by the local government of Sablayan and the local office of the Department of Environment and Natural Resources (DENR). Scuba diving The main activity of the reef relates to its underwater quality. Scuba diving and snorkeling in Apo Reef area are exceptional due to the quality of the flora, the fauna and the clarity of the water and white sand. Many species can be observed in deep or shallow waters in particular, sharks, giant napoleons, and manta rays. Marine gallery See also Apo Reef Light List of natural parks of the Philippines List of protected areas of the Philippines List of reefs List of World Heritage Sites in the Philippines Tubbataha Reef Verde Island Passage References Bibliography Department of Environment and Natural Resources, Conservation of Priority Protected Areas Project, Apo Reef Natural Park Brochure. Sablayan, Occidental Mindoro; List of Proclaimed Marine Protected Areas; Protected Areas And Wildlife Bureau, 2004. 
External links Apo Reef Natural Park Official Website (DENR) Travel Guide: Sablayan (Apo Reef/Pandan Island) Coral reefs Underwater diving sites in the Philippines Reefs of the Philippines Natural parks of the Philippines Important Bird Areas of the Philippines Landforms of Occidental Mindoro Islands of Occidental Mindoro Tourist attractions in Occidental Mindoro World Heritage Tentative List for the Philippines ASEAN heritage parks
Apo Reef
[ "Biology" ]
1,467
[ "Biogeomorphology", "Coral reefs" ]
13,550,431
https://en.wikipedia.org/wiki/Max%20Volmer
Max Volmer (3 May 1885 – 3 June 1965) was a German physical chemist who made important contributions to electrochemistry, in particular to electrode kinetics. He derived and co-developed the Butler–Volmer equation, and was a professor of chemistry at the Technical University of Berlin. After World War II, in 1945, Volmer was taken into Soviet custody in Russia and was one of many leading German nuclear scientists in the Soviet program of nuclear weapons, where he headed a design bureau for the production of heavy water. Upon his return to East Germany in 1955, he became a professor at the University of Berlin and was president of the East German Academy of Sciences. He later directed the "Physical Chemistry and Electrochemistry Institute" of the Technical University of Berlin until his death, aged 80, in 1965. Biography From 1905 to 1909, Volmer studied chemistry at the University of Marburg and earned his PhD in chemistry from the University of Leipzig in 1910–11, based on his work on photochemical reactions in high vacuums. In 1912, Volmer became an assistant professor of chemistry at the University of Leipzig and, after completing his Habilitation (required under German university rules) there in 1914, he qualified as a privatdozent of physical chemistry at the University of Leipzig. Career Early years In 1916, Volmer went to work on military-related research at the Physical Chemistry Institute of the University of Berlin (today the Humboldt University of Berlin). From 1918 to 1920, he found employment as a senior staff scientist at the Auergesellschaft AG in Berlin. In 1919, he invented the mercury steam ejector, and he published a paper with Otto Stern, which resulted in the attribution of the Stern–Volmer equation and constant. The Volmer isotherm is also attributed to his work during this time. In 1920, Volmer was appointed extraordinarius professor of physical chemistry and electrochemistry at the University of Hamburg. In 1922, he was appointed ordinarius professor and director of the Physical Chemistry and Electrochemistry Institute of Technical University Berlin (Berlin-Charlottenburg); the position was previously held by Walther Nernst. It was during his time there that he discovered the migration of adsorbed molecules, known as Volmer diffusion. In 1930, he published the paper from which the Butler–Volmer equation takes its name, building on earlier work by John Alfred Valentine Butler. This work formed the basis of phenomenological kinetic electrochemistry. In Russia During the final months of World War II, Max Volmer, Manfred von Ardenne, director of his private laboratory, the Forschungslaboratorium für Elektronenphysik, Gustav Hertz, director of Research Laboratory II at Siemens, and Peter Adolf Thiessen, ordinarius professor at the University of Berlin, made a pact. The pact was a pledge that whoever first made contact with the Soviets would speak for the rest. The objectives of their pact were threefold: (1) Prevent plunder of their institutes, (2) Continue their work with minimal interruption, and (3) Protect themselves from prosecution for any political acts of the past. Before the end of World War II, Thiessen, a member of the Nazi Party, had Communist contacts. On 27 April 1945, Thiessen arrived at von Ardenne's institute in an armored vehicle with a major of the Soviet Army, who was also a leading Soviet chemist. All four of the pact members were taken to the Soviet Union. 
Hertz was made head of Institute G, in Agudseri (Agudzery), about 10 km southeast of Sukhumi and a suburb of Gul’rips (Gulrip’shi); Volmer was initially assigned to Hertz's institute. Topics assigned to Gustav Hertz's Institute G included: (1) Separation of isotopes by diffusion in a flow of inert gases, for which Gustav Hertz was the leader, (2) Development of a condensation pump, for which Justus Mühlenpfordt was the leader, (3) Design and construction of a mass spectrometer for determining the isotopic composition of uranium, for which Werner Schütze was the leader, (4) Development of frameless (ceramic) diffusion partitions for filters, for which Reinhold Reichmann was the leader, and (5) Development of a theory of stability and control of a diffusion cascade, for which Heinz Barwich was the leader; Barwich had been deputy to Hertz at Siemens. Von Ardenne was made head of Institute A, in Sinop, a suburb of Sukhumi. Late in January 1946, Volmer was assigned to the Nauchno-Issledovatel’skij Institut-9 (NII-9, Scientific Research Institute No. 9), in Moscow. Volmer was given a design bureau to work on the production of heavy water; Robert Döpel also worked at NII-9. Volmer's group, with Victor Bayerl, a physical chemist, and Gustav Richter, a physicist, worked under Alexander Mikailovich Rosen, and they designed a heavy water production process and facility based on the counterflow of ammonia. The installation was constructed at Norilsk and completed in 1948, after which Volmer's organization was transferred to Zinaida Yershova’s group, which worked on plutonium extraction from fission products. Return to Germany In March 1955, Volmer returned to Germany, and he was honored with the National Prize. On 1 May 1955, he became an ordinarius professor at the Humboldt University of Berlin. On 10 November 1955, he became a member of the Wissenschaftlicher Rat für die friedliche Anwendung der Atomenergie (Scientific Council for the Peaceful Application of Atomic Energy) of the Council of Ministers of the German Democratic Republic (GDR). From 8 December 1955 to 1959, he was president of the German Academy of Sciences, after which he was vice-president until 1961. From 27 August 1957, he was a founding member of the Forschungsrat (Research Council) of the GDR. At Technische Universität Berlin, where Volmer worked for many years, the Max Volmer Laboratory for Biophysical Chemistry was named in his honor. Also in Volmer's honor, streets named Volmerstrasse exist in Berlin-Adlershof, Potsdam, and Hilden. Personal Volmer was married to the physical chemist Lotte Pusch, who later quit her career to support Volmer. Max and Lotte had known and socialized with the physicist Lise Meitner and the chemist Otto Hahn since the 1920s. Selected bibliography Articles O. Stern and M. Volmer Über die Abklingzeit der Fluoreszenz, Physik. Zeitschr. 20 183-188 (1919) as cited in Mehra and Rechenberg, Volume 1, Part 2, 2001, 849. T. Erdey-Grúz and M. Volmer Z. Phys. Chem. 150 (A) 203-213 (1930) Books Max Volmer, Kinetik der Phasenbildung (1939) Max Volmer, Zur Kinetik der Phasenbildung und der Elektrodenreaktionen. Acht Arbeiten. (Akademische Verlagsgesellschaft Geest & Portig K.-G., 1983) Max Volmer und L. Dunsch, Zur Kinetik der Phasenbildung und Elektrodenreaktion. Acht Arbeiten. 
(Deutsch Harri GmbH, 1983) See also Butler–Volmer equation Stern–Volmer equation and constant Notes References Heinemann-Grüder, Andreas Keinerlei Untergang: German Armaments Engineers during the Second World War and in the Service of the Victorious Powers in Monika Renneberg and Mark Walker (editors) Science, Technology and National Socialism 30–50 (Cambridge, 2002 paperback edition) Hentschel, Klaus (editor) and Ann M. Hentschel (editorial assistant and translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) Kruglov, Arkadii The History of the Soviet Atomic Industry (Taylor and Francis, 2002) Mehra, Jagdish, and Helmut Rechenberg The Historical Development of Quantum Theory. Volume 1 Part 2 The Quantum Theory of Planck, Einstein, Bohr and Sommerfeld 1900–1925: Its Foundation and the Rise of Its Difficulties. (Springer, 2001) Naimark, Norman M. The Russians in Germany: A History of the Soviet Zone of Occupation, 1945–1949 (Hardcover – Aug 11, 1995) Belknap Sime, Ruth Lewin Lise Meitner: A Life in Physics (University of California, First Paperback Edition, 1997) Oleynikov, Pavel V. German Scientists in the Soviet Atomic Project, The Nonproliferation Review Volume 7, Number 2, 1 – 30 (2000). The author has been a group leader at the Institute of Technical Physics of the Russian Federal Nuclear Center in Snezhinsk (Chelyabinsk-70). External links Butler-Volmer Equation – Encyclopædia Britannica MVL – Max Volmer Laboratory for Biophysical Chemistry at the TU Berlin Stern–Volmer Constant – Kutztown University Stern–Volmer Equation – International Union of Pure and Applied Chemistry Two Internet sources with the same wording: Volmer – Institute of Chemistry, University of Jerusalem and Volmer – Incredible People Volmer – Technische Universität Berlin] Volmer – Volmer.biz Volmer Isotherm – Biophysical Journal SIPT - Sukhumi Institute of Physics and Technology, on the website are published the photographs of the German nuclear physicists who had been working for the Soviet nuclear program 1885 births 1965 deaths People from Hilden University of Marburg alumni Leipzig University alumni German chemists German physical chemists 20th-century German chemists Scientists from the Rhine Province Electrochemists Academic staff of Leipzig University Academic staff of the University of Hamburg Academic staff of Technische Universität Berlin Academic staff of the Humboldt University of Berlin Members of the Prussian Academy of Sciences German expatriates in the Soviet Union Nuclear weapons program of the Soviet Union people East German scientists Members of the German Academy of Sciences at Berlin Recipients of the Patriotic Order of Merit Recipients of the National Prize of East Germany
Max Volmer
[ "Chemistry" ]
2,156
[ "Electrochemistry", "Electrochemists" ]
13,550,433
https://en.wikipedia.org/wiki/International%20VELUX%20Award%20for%20Students%20of%20Architecture
The International VELUX Award is for students of architecture in the theme of sunlight and daylight. The award is biennial and was first presented in 2004. The award is for completed works on any scale from a small scale component to large urban contexts or abstract concepts and experimentation. The award is presented by VELUX in close cooperation with the International Union of Architects (UIA) and the European Association for Architectural Education (EAAE). Description “Light of Tomorrow” is the theme of the International VELUX Award. The award wants to challenge the future of daylight in the built environment. The award contains no specific categories and is in no way restricted to the use of VELUX products. The jury of the International VELUX Award comprises internationally recognized architects and other building professionals. Any registered student of architecture – individual or team – from all over the world may participate in the award. The award wants to acknowledge not only the students but their teachers as well. Therefore, all students must be backed and granted submission by a teacher from a school of architecture. History The first International VELUX Award took place in 2004. 760 students from 194 schools in 34 countries in Europe registered, and 258 students from 106 schools in 27 countries submitted their projects. The international jury led by Glenn Murcutt selected three winners and eight honourable mentions, who were announced at the Award event held in Paris. 2004 winners In 2004 the first prize went to Norwegian student Claes Heske Ekornås for his project “Light as matter”. In 2004, ten winners were announced at the Award event in Paris. 2006 winners In 2006, the award went global – inviting students from all over the world to participate. The number of submissions more than doubled reaching 557 projects from 225 schools in 53 countries. The international jury led by Per Olaf Fjeld decided to award three winners and 17 honourable mentions, and they were all celebrated at the Award event at the Guggenheim Museum in Bilbao, Spain. Louise Groenlund of Denmark won the International VELUX Award for her project ”A museum of photography”. Twenty winners and honourable mentions were announced at the Award event at the Guggenheim Bilbao in November 2006. 2008 winners 2010 winners 2016 winners The continental winners of Daylight in Buildings The continental winners of Daylight Investigations 2018 winners The continental winners of Daylight Investigations The continental winners of Daylight in Buildings 2020 winners The continental winners of Daylight Investigations The continental winners of Daylight in Buildings 2022 Winners The continental winners of Daylight Investigations The continental winners of Daylight in Buildings Architecture awards Architectural competitions Architectural education References
International VELUX Award for Students of Architecture
[ "Engineering" ]
507
[ "Architectural education", "Architectural competitions", "Architecture" ]
13,550,684
https://en.wikipedia.org/wiki/Shock%20pulse%20method
Shock pulse method (SPM) is a technique for using signals from rotating rolling bearings as the basis for efficient condition monitoring of machines. Since its introduction in 1969, the method has been further developed and broadened, and it is now a worldwide accepted approach to condition monitoring of rolling bearings and machine maintenance. Difference between shock pulse and vibration Consider a metal ball hitting a metal bar. At the moment of impact, a pressure wave spreads through the material of both bodies (1). The wave is transient (quickly damps out). When the wave front hits the shock pulse transducer, it causes a damped oscillation of the transducer's reference mass. The peak amplitude is a function of the impact velocity (v). During the next phase of the collision, both bodies start to vibrate (2). The frequency of this vibration is a function of the mass and the shape of the colliding bodies. Processing shock pulse signals A shock pulse transducer reacts with a large amplitude oscillation to the weak shock pulses, because it is excited at its resonance frequency of 32 kHz. Machine vibration, of a much lower frequency, is filtered out. The first frame shows the symbol for a transducer and, below, the vibration signal from the machine, with superimposed transients at the resonance frequency, caused by shock pulses. The second frame shows the electric filter which passes a train of transients at 32 kHz. Their amplitudes depend on the energy of the shock pulses. The transients are converted into analogue electric pulses. The last frame shows the converted shock pulse signal from the bearing, now consisting of a rapid sequence of stronger electric pulses. Shock pulse patterns The filtered transducer signal reflects the pressure variation in the rolling interface of the bearing. When the oil film in the bearing is thick, the shock pulse level is low, without distinctive peaks. The level increases when the oil film is reduced, but there are still no distinctive peaks. Damage causes strong pulses at irregular intervals. Measuring operating condition The shock pulse meters measure the shock signal on a decibel scale, at two levels. A microprocessor evaluates the signal. It needs input data defining the bearing type (ISO number) and the rolling velocity (RPM and bearing diameter). Surface damage in bearings causes a large increase in shock pulse strength, combined with a notable change in the characteristics between stronger and weaker pulses. Shock values are thus immediately translated into measurements of relative oil film thickness or surface damage, whichever applies. See also Condition monitoring Machining vibrations Spectrum analyzer References Further reading BS ISO 18431-4: "Mechanical vibration and shock. Signal processing – Shock response spectrum analysis" (2007) Maintenance
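The signal chain described above, a resonant transducer response band-pass filtered around 32 kHz and then converted to a train of pulses, can be sketched with standard signal-processing tools. The example below is a generic illustration on synthetic data using a Butterworth band-pass filter and an envelope detector; it assumes NumPy and SciPy are available and does not reproduce SPM's actual proprietary processing, calibration, or dB scale.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250_000                      # sample rate in Hz (well above 2 x 32 kHz)
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal

# Low-frequency machine vibration plus weak, short bursts at the
# transducer resonance (32 kHz), standing in for shock pulses.
vibration = 1.0 * np.sin(2 * np.pi * 120 * t)
shocks = np.zeros_like(t)
for start in (0.010, 0.025, 0.040):          # three simulated impacts
    idx = (t >= start) & (t < start + 0.0005)
    shocks[idx] = 0.2 * np.sin(2 * np.pi * 32_000 * t[idx]) * np.exp(-8000 * (t[idx] - start))
signal = vibration + shocks

# Band-pass filter around 32 kHz to reject the machine vibration.
b, a = butter(4, [28_000, 36_000], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)

# The envelope of the filtered signal approximates the train of electric pulses.
envelope = np.abs(hilbert(filtered))
peak_db = 20 * np.log10(envelope.max() / 1e-3)   # dB relative to an arbitrary 1 mV reference
print(f"peak shock level: {peak_db:.1f} dB (relative units)")
```

In this toy version, the 120 Hz vibration is almost entirely rejected by the filter, so the envelope peaks line up with the three simulated impacts, which is the separation of shock pulses from machine vibration that the method relies on.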
Shock pulse method
[ "Engineering" ]
548
[ "Maintenance", "Mechanical engineering" ]
13,550,861
https://en.wikipedia.org/wiki/M%C3%A9xico%20Ind%C3%ADgena
México Indígena is a project of the American Geographical Society to organize teams of geographers to research the geography of indigenous populations in Mexico. The project's stated objective is to map "changes in the cultural landscape and conservation of natural resources" that result from large-scale land privatization initiatives underway in Mexico. The project is led by Peter Herlihy at the University of Kansas and is funded by the U.S. Department of Defense through its Foreign Military Studies Office. The project has been the subject of criticism by various groups, including groups representing indigenous peoples. Critics allege that the project was not forthcoming about its U.S. military funding, and that the project has various ulterior motives besides gathering information for research purposes. The project began in 2005 and lasted through 2008. Project and objectives México Indígena was the first in a series of planned projects to enhance United States government geographical data around the world. The stated objective is to produce maps of the "digital human terrain" of the region's indigenous peoples. To accomplish this, the American Geographical Society sent geographers to several regions to gather cultural and GIS data and build relationships with local institutions. México Indígena was led by a team of geographers who specialize in Latin America, including Peter Herlihy of the University of Kansas, as well as Jeremy Dobson and Miguel Aguilar Robledo. Project methods México Indígena's primary method for obtaining and understanding geographic data is participatory research mapping (PRM). In PRM, local investigators, chosen by the communities, are trained by the formal researcher in geographic data gathering techniques. Cognitive mental (individual) maps are converted to consensual (community) maps, including only features whose nature, name, and coordinates have been verified. These are then converted to standardized maps, which the communities may choose to use for educational, political, legal, or other purposes. Participatory maps of resource-use areas, for example, have been used successfully for indigenous territorial claims in Panama (Herlihy 2003) and elsewhere. México Indígena's primary tool for joining data from different sources to produce maps and to analyze trends is geographic information systems (GIS). Sponsoring and collaborating institutions and participants of the México Indígena research project have included the University of Kansas (US), the Autonomous University of San Luis Potosí (Mexico), the Foreign Military Studies Office, Radiance Technologies (US), and the Mexican federal environmental ministry SEMARNAT. "FMSO's goal is to help increase an understanding of the world's cultural terrain." Funding The National Association of State Universities and Land-Grant Colleges' International Development Project Database Survey notes that the México Indígena research project has received between $751,000 and $1,000,000 in external funds from all sources, including: U.S. Department of Defense, Foreign Military Studies Office; U.S. Department of State, Fulbright-Garcia Robles; the American Geographical Society; the University of Kansas' Center of Latin American Studies; Mexico's Secretaria de Medio Ambiente y Recursos Naturales; and the Universidad Autónoma de San Luis Potosí. 
Further reading Davies, Nancy 2009 "Geographic Survey Project of the Sierra Juarez Mountains Stirs Protests: In Oaxaca, Geographers Deny Surveillance Charges" The Narco News Bulletin February 21. References External links Links on Bowman Expeditions controversy compiled by Evergreen State College geography professor Zoltan Grossman Mexico Indigena research project website Open Anthropology website KUWatch website describing México Indígena Human geography Indigenous peoples in Mexico Geographic information systems Imperialism Oaxaca Zapotec civilization
México Indígena
[ "Technology", "Environmental_science" ]
752
[ "Environmental social science", "Information systems", "Geographic information systems", "Human geography" ]
13,550,938
https://en.wikipedia.org/wiki/CLARION%20%28cognitive%20architecture%29
Connectionist Learning with Adaptive Rule Induction On-line (CLARION) is a computational cognitive architecture that has been used to simulate many domains and tasks in cognitive psychology and social psychology, as well as to implement intelligent systems in artificial intelligence applications. An important feature of CLARION is its distinction between implicit and explicit processes and its focus on capturing the interaction between these two types of processes. The system was created by the research group led by Ron Sun. Overview CLARION is an integrative cognitive architecture; it is used to explain and simulate cognitive-psychological phenomena, which could potentially lead to a unified explanation of psychological phenomena. There are three layers to the CLARION theory. The first layer is the core theory of mind. The core theory consists of a number of distinct subsystems, which are the essential structures of CLARION, with a dual representational structure in each subsystem (implicit versus explicit representations). Its subsystems include the action-centered subsystem, the non-action-centered subsystem, the motivational subsystem, and the meta-cognitive subsystem. The second layer consists of the computational models that implement the basic theory; it is more detailed than the first-level theory but is still general. The third layer consists of the specific implemented models and simulations of the psychological processes or phenomena. The models of this layer arise from the basic theory and the general computational models. Dual Representational Structure The distinction between implicit and explicit processes is fundamental to the Clarion cognitive architecture. This distinction is primarily motivated by evidence supporting implicit memory and implicit learning. Clarion captures the implicit-explicit distinction independently from the distinction between procedural memory and declarative memory. To capture the implicit-explicit distinction, Clarion postulates two parallel and interacting representational systems capturing implicit and explicit knowledge, respectively. Explicit knowledge is associated with localist representation and implicit knowledge with distributed representation. Explicit knowledge resides in the top level of the architecture, whereas implicit knowledge resides in the bottom level. In both levels, the basic representational units are connectionist nodes, and the two levels differ with respect to the type of encoding. In the top level, knowledge is encoded using localist chunk nodes whereas, in the bottom level, knowledge is encoded in a distributed manner through collections of (micro)feature nodes. Knowledge may be encoded redundantly between the two levels and may be processed in parallel within the two levels. In the top level, information processing involves passing activations among chunk nodes by means of rules and, in the bottom level, information processing involves propagating (micro)feature activations through artificial neural networks. Top-down and bottom-up information flows are enabled by links between the two levels. Such links are established by Clarion chunks, each of which consists of a single chunk node, a collection of (micro)feature nodes, and links between the chunk node and the (micro)feature nodes. In this way a single chunk of knowledge may be expressed in both explicit (i.e., localist) and implicit (i.e., distributed) form, though such dual expression is not always required.
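A minimal sketch of the chunk structure just described, with a localist chunk node linked to a set of distributed (micro)feature nodes and activation able to flow both top-down and bottom-up, might look like the following Python fragment. It is an illustration of the idea only; the class and field names are invented here and do not reflect the actual CLARION software libraries.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A single piece of knowledge, expressed at both levels.

    name           -- the localist chunk node (top level)
    microfeatures  -- distributed (micro)feature nodes (bottom level),
                      each with a link weight to the chunk node
    """
    name: str
    microfeatures: dict[str, float] = field(default_factory=dict)

    def top_down(self, chunk_activation: float) -> dict[str, float]:
        # Top-down flow: chunk-node activation drives its (micro)features.
        return {f: chunk_activation * w for f, w in self.microfeatures.items()}

    def bottom_up(self, feature_activations: dict[str, float]) -> float:
        # Bottom-up flow: (micro)feature activations are pooled to
        # activate the chunk node (a simple weighted average here).
        if not self.microfeatures:
            return 0.0
        total = sum(feature_activations.get(f, 0.0) * w
                    for f, w in self.microfeatures.items())
        return total / sum(self.microfeatures.values())

# Example: a chunk for "apple" grounded in three (micro)features.
apple = Chunk("apple", {"red": 1.0, "round": 0.8, "sweet": 0.6})
print(apple.top_down(1.0))
print(apple.bottom_up({"red": 0.9, "round": 1.0}))
```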
The dual representational structure allows implicit and explicit processes to communicate and, potentially, to encode content redundantly. As a result, Clarion theory can account for various phenomena, such as speed-up effects in learning, verbalization-related performance gains, performance gains in transfer tasks, and the ability to perform similarity-based reasoning, in terms of synergistic interaction between implicit and explicit processes. These interactions involve both the flow of activations within the architecture (e.g., similarity-based reasoning is supported by spreading activation among chunks through shared (micro)features) and bottom-up, top-down and parallel learning processes. In bottom-up learning, associations among (micro)features in the bottom level are extracted and encoded as explicit rules. In top-down learning, rules in the top level guide the development of implicit associations in the bottom level. Additionally, learning may be carried out in parallel, touching both implicit and explicit processes simultaneously. Through these learning processes, knowledge may be encoded redundantly or in complementary fashion, as dictated by agent history. Synergy effects arise, in part, from the interaction of these learning processes. Another important mechanism for explaining synergy effects is the combination and relative balance of signals from different levels of the architecture. For instance, in one Clarion-based modeling study, it has been proposed that an anxiety-driven imbalance in the relative contributions of implicit versus explicit processes may be the mechanism responsible for performance degradation under pressure. Subsystems The Clarion cognitive architecture consists of four subsystems. Action-centered subsystem The role of the action-centered subsystem is to control both external and internal actions. The implicit layer is made of neural networks called Action Neural Networks, while the explicit layer is made up of action rules. There can be synergy between the two layers; for example, learning a skill can be expedited when the agent has to make explicit rules for the procedure at hand. It has been argued that implicit knowledge alone cannot optimize as well as the combination of both explicit and implicit knowledge. Non-action-centered subsystem The role of the non-action-centered subsystem is to maintain general knowledge. The implicit layer is made of Associative Neural Networks, while the explicit layer is made up of associative rules. Knowledge is further divided into semantic and episodic, where semantic is generalized knowledge, and episodic is knowledge applicable to more specific situations. It is also important to note that, since there is an implicit layer, not all declarative knowledge has to be explicit. Motivational subsystem The role of the motivational subsystem is to provide underlying motivations for perception, action, and cognition. The motivational system in CLARION is made up of drives on the bottom level, and each drive can have varying strengths. There are low level drives, and also high level drives aimed at keeping an agent sustained, purposeful, focused, and adaptive. The explicit layer of the motivational system is composed of goals. Explicit goals are used because they are more stable than implicit motivational states. The CLARION framework holds that human motivational processes are highly complex and cannot be captured through explicit representation alone.
Examples of some low level drives include: food, water, reproduction, and avoiding unpleasant stimuli (not mutually exclusive of other low level drives, but kept separate to allow for more specific stimuli). Examples of some high level drives include: affiliation and belongingness, recognition and achievement, dominance and power, and fairness. There is also the possibility of derived drives (usually arising from attempts to satisfy primary drives) that can be created either by conditioning or through external instructions. Each drive has a strength proportional to the underlying need, and opportunity is also taken into account. Meta-cognitive subsystem The role of the meta-cognitive subsystem is to monitor, direct, and modify the operations of all the other subsystems. Actions in the meta-cognitive subsystem include: setting goals for the action-centred subsystem, setting parameters for the action and non-action subsystems, and changing an ongoing process in both the action and non-action subsystems. Learning Learning can be represented with both explicit and implicit knowledge individually, while also representing bottom-up and top-down learning. Learning with implicit knowledge is represented through Q-learning, while learning with just explicit knowledge is represented with one-shot learning such as hypothesis testing. Bottom-up learning is represented through a neural network propagating up to the explicit layer through the Rule-Extraction-Refinement algorithm (RER), while top-down learning can be represented in a variety of ways. Comparison with other cognitive architectures To compare with a few other cognitive architectures: ACT-R employs a division between procedural and declarative memory that is somewhat similar to CLARION's distinction between the Action-Centered Subsystem and the Non-Action-Centered Subsystem. However, ACT-R does not have a clear-cut (process-based or representation-based) distinction between implicit and explicit processes, which is a fundamental assumption in the CLARION theory. Soar does not include a clear representation-based or process-based difference between implicit and explicit cognition, or between procedural and declarative memory; it is based on the ideas of problem spaces, states, and operators. When there is an outstanding goal on the goal stack, different productions propose different operators and operator preferences for accomplishing the goal. EPIC adopts a production system similar to ACT-R's. However, it does not include the dichotomy of implicit and explicit processes, which is essential in CLARION. Theoretical applications CLARION has been used to account for a variety of psychological data, such as the serial reaction time task, the artificial grammar learning task, the process control task, a categorical inference task, an alphabetical arithmetic task, and the Tower of Hanoi task. The serial reaction time and process control tasks are typical implicit learning tasks (mainly involving implicit reactive routines), while the Tower of Hanoi and alphabetical arithmetic are high-level cognitive skill acquisition tasks (with a significant presence of explicit processes). In addition, extensive work has been done on a complex minefield navigation task, which involves complex sequential decision-making. Work on organizational decision tasks and other social simulation tasks, as well as meta-cognitive tasks, has also been initiated.
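The learning section above mentions Q-learning at the bottom level and bottom-up extraction of explicit rules from successful implicit behavior (the Rule-Extraction-Refinement idea). The following Python sketch shows the general flavor of that interaction on a toy task; it is a simplified illustration under assumed parameters, not the actual RER algorithm or the CLARION implementation.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9     # assumed learning rate and discount
RULE_THRESHOLD = 0.5        # assumed criterion for extracting a rule

q = defaultdict(float)      # bottom level: implicit state-action values
rules = {}                  # top level: explicit state -> action rules

def choose_action(state, actions):
    # Prefer an explicit rule if one exists; otherwise use implicit values.
    if state in rules:
        return rules[state]
    return max(actions, key=lambda a: q[(state, a)])

def learn(state, action, reward, next_state, actions):
    # Bottom level: standard Q-learning update.
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    # Bottom-up step: once an implicit association is strong enough,
    # encode it as an explicit rule (a crude stand-in for RER).
    if q[(state, action)] > RULE_THRESHOLD:
        rules[state] = action

# Toy task: in state "s", action "right" is rewarded.
actions = ["left", "right"]
for _ in range(200):
    a = choose_action("s", actions) if random.random() > 0.2 else random.choice(actions)
    r = 1.0 if a == "right" else 0.0
    learn("s", a, r, "s", actions)

print(rules)   # typically {'s': 'right'} once the association is strong
```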
Other applications of the cognitive architecture include simulation of creativity and addressing the computational basis of consciousness (or artificial consciousness). References External links The CLARION project The CLARION project page at sites.google The CogArch Lab The CogArch Lab at sites.google pyClarion Cognitive architecture
CLARION (cognitive architecture)
[ "Engineering" ]
1,996
[ "Artificial intelligence engineering", "Cognitive architecture" ]
13,551,131
https://en.wikipedia.org/wiki/Helen%20Lines
Helen Chambliss Williams Lines (July 13, 1918 – January 29, 2001) was an American amateur astronomer. She began as a deep-sky observer and astrophotographer. Astronomy In 1969, Lines was one of the early members of the Phoenix Astronomical Society. Lines was a member of the American Association of Variable Star Observers. She and her husband, civil engineer Richard D. Lines, built a small observatory in Mayer, Arizona, and wrote about its construction for Sky & Telescope. In 1992 they won the Amateur Achievement Award of the Astronomical Society of the Pacific for their work in the field of photoelectric photometry of variable stars. She was a co-author on two scientific papers published in the mid-1990s. Publications "A New Amateur Observatory in Central Arizona" (1968, with Richard D. Lines) "UBVRI photometry of the recurrent nova T coronae borealis" (1988, with Richard D. Lines and Thomas G. McFaul) "Evolution of starspots in the long-period RS CVN binary V1817 Cygni = HR 7428" (1990, with Richard D. Lines, Douglas S. Hall, and Susan E. Gessner) "The Two Variables in The Triple System HR 6469=V819 Her: One Eclipsing, One Spotted" (1994, with 16 other authors) "Starspots Found on the Ellipsoidal Variable V350 Lacertae = HR 8575" (1995, with 5 other authors) Personal life Helen Chambliss Williams was born in Forrest City, Arkansas, the daughter of Russell Williams and Sadie Borden Williams. Her father was the chief of police in Forrest City. She married Richard Damon Lines in 1936. They had a daughter, Chambliss. Richard Lines died in 1992, and Helen Lines died in 2001, aged 82 years, in Searcy, Arkansas. References External links Saguaro Astronomy Club: The Passing of Richard Lines Amateur Achievement Award winners History of Phoenix Astronomical Society 1918 births 2001 deaths People from Forrest City, Arkansas American women astronomers Amateur astronomers 20th-century American astronomers 20th-century American women scientists
Helen Lines
[ "Astronomy" ]
443
[ "Astronomers", "Amateur astronomers" ]
13,551,667
https://en.wikipedia.org/wiki/Social%20and%20Environmental%20Responsibility%20World%20Forum
The Social and Environmental Responsibility World Forum is an initiative organized by the Réseau Alliances, a network founded in 1993, in partnership with private and public organizations to encourage social and economic responsibility among businesses. The initiative was planned for four years (2007–2010), with the goal of creating a permanent cycle of continuous exchanges and communication between actors from all parts of the world. The work of the Forum was based on concrete projects and actions intended to generalize social and environmental responsibility throughout the world. During that period, major events were to take place in Lille to bring together actors and experts in social and environmental responsibility of all nationalities, especially during the international meetings scheduled for October each year. Developed on the initiative of economic leaders, the Réseau Alliances federates and helps businesses seeking to improve their performance while being more respectful of people and the environment. Topics discussed A socially and environmentally responsible economy open to the diversity of cultures and origins Diversity and equal opportunity The place and role of the world's women at all levels of responsibility Equal opportunities and territories: the conditions for fairer economic development Identifying, evaluating, and developing good practices in the company What social and environmental responsibility is about References External links Site officiel de l'association Alliances Site officiel du Forum Mondial de l'Economie Responsable Site de référence sur la RSE et l' ISR en France Environmental ethics Environmental organizations based in France Social responsibility organizations Organizations based in Hauts-de-France
Social and Environmental Responsibility World Forum
[ "Environmental_science" ]
297
[ "Environmental ethics" ]
13,551,670
https://en.wikipedia.org/wiki/Electroacoustic%20phenomena
Electroacoustic phenomena arise when ultrasound propagates through a fluid containing ions. The associated particle motion generates electric signals because ions have electric charge. This coupling between ultrasound and electric field is called electroacoustic phenomena. The fluid might be a simple Newtonian liquid, or complex heterogeneous dispersion, emulsion or even a porous body. There are several different electroacoustic effects depending on the nature of the fluid. Ion vibration current (IVI) and potential, an electric signal that arises when an acoustic wave propagates through a homogeneous fluid. Streaming vibration current (SVI) and potential, an electric signal that arises when an acoustic wave propagates through a porous body in which the pores are filled with fluid. Colloid vibration current (CVI) and potential, an electric signal that arises when ultrasound propagates through a heterogeneous fluid, such as a dispersion or emulsion. Electric sonic amplitude (ESA), the inverse of the CVI effect, in which an acoustic field arises when an electric field propagates through a heterogeneous fluid. Ion vibration current Historically, the IVI was the first known electroacoustic effect. It was predicted by Debye in 1933. Streaming vibration current The streaming vibration current was experimentally observed in 1948 by Williams. A theoretical model was developed some 30 years later by Dukhin and others. This effect opens another possibility for characterizing the electric properties of the surfaces in porous bodies. A similar effect can be observed at a non-porous surface, when sound is bounced off at an oblique angle. The incident and reflected waves superimpose to cause oscillatory fluid motion in the plane of the interface, thereby generating an AC streaming current at the frequency of the sound waves. Double layer compression The electrical double layer can be regarded as behaving like a parallel plate capacitor with a compressible dielectric filling. When sound waves induce a local pressure variation, the spacing of the plates varies at the frequency of the excitation, generating an AC displacement current normal to the interface. For practical reasons this is most readily observed at a conducting surface. It is therefore possible to use an electrode immersed in a conducting electrolyte as a microphone, or indeed as a loudspeaker when the effect is applied in reverse. Colloid vibration potential and current Colloid vibration potential measures the AC potential difference generated between two identical relaxed electrodes, placed in the dispersion, if the latter is subjected to an ultrasonic field. When a sound wave travels through a colloidal suspension of particles whose density differs from that of the surrounding medium, inertial forces induced by the vibration of the suspension give rise to a motion of the charged particles relative to the liquid, causing an alternating electromotive force. The manifestations of this electromotive force may be measured, depending on the relation between the impedance of the suspension and that of the measuring instrument, either as colloid vibration potential or as colloid vibration current. Colloid vibration potential and current was first reported by Hermans and then independently by Rutgers in 1938. It is widely used for characterizing the ζ-potential of various dispersions and emulsions. The effect, theory, experimental verification and multiple applications are discussed in the book by Dukhin and Goetz. 
Electric sonic amplitude Electric sonic amplitude was experimentally discovered by Cannon and co-authors in the early 1980s. It is also widely used for characterizing the ζ-potential in dispersions and emulsions. A review of the theory of this effect, its experimental verification and its multiple applications has been published by Hunter. Theory of CVI and ESA With regard to the theory of CVI and ESA, an important observation was made by O'Brien, who linked these measured parameters with the dynamic electrophoretic mobility μd through a relation of the form CVI (or ESA) ∝ A(ω)·φ·μd·(ρp − ρm)/ρm, where A is a calibration constant, depending on frequency but not on particle properties; ρp is the particle density, ρm the density of the fluid, and φ the volume fraction of the dispersed phase. The dynamic electrophoretic mobility is similar to the electrophoretic mobility that appears in electrophoresis theory. They are identical at low frequencies and/or for sufficiently small particles. There are several theories of the dynamic electrophoretic mobility. An overview is given in Ref. 5. Two of them are the most important. The first one corresponds to the Smoluchowski limit. For sufficiently small particles, with negligible frequency dependence of the CVI, it yields a simple expression for CVI in terms of the following quantities: ε0, the vacuum dielectric permittivity; εm, the fluid dielectric permittivity; ζ, the electrokinetic potential; η, the dynamic viscosity of the fluid; Ks, the conductivity of the system; Km, the conductivity of the fluid; and ρs, the density of the system. This remarkably simple expression has the same wide range of applicability as the Smoluchowski equation for electrophoresis: it is independent of the shape of the particles and of their concentration. The validity of this expression is restricted by the following two requirements. First, it is valid only for a thin double layer, when the Debye length κ−1 is much smaller than the particle radius a, that is, when κa ≫ 1. Secondly, it neglects the contribution of the surface conductivity, which assumes a small Dukhin number, Du ≪ 1. The thin double layer restriction limits the applicability of this Smoluchowski-type theory to aqueous systems with sufficiently large particles and not very low ionic strength. The theory does not work well for nano-colloids, including proteins and polymers at low ionic strength, and it is not valid for low-polarity or non-polar fluids. There is another theory that is applicable in the opposite extreme case of a thick double layer, when κa ≪ 1. This theory takes into consideration the double layer overlap that inevitably occurs in concentrated systems with a thick double layer. This allows the introduction of a so-called "quasi-homogeneous" approach, in which the overlapped diffuse layers of the particles cover the complete interparticle space. The theory becomes much simpler in this extreme case, as shown by Shilov and others. Their derivation predicts that the surface charge density σ is a better parameter than the ζ-potential for characterizing electroacoustic phenomena in such systems, and it also yields a simplified expression for CVI for small particles in this limit. See also Interface and colloid science References Chemical mixtures Colloidal chemistry Condensed matter physics Matter Soft matter Ultrasound
Electroacoustic phenomena
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,311
[ "Colloidal chemistry", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan", "Matter" ]
13,551,904
https://en.wikipedia.org/wiki/Propiconazole
Propiconazole is a triazole fungicide, also known as a DMI (demethylation-inhibiting) fungicide, because it binds to and inhibits the enzyme 14-alpha demethylase, preventing it from demethylating a precursor of ergosterol. Without this demethylation step, the ergosterols are not incorporated into the growing fungal cell membranes, and cellular growth is stopped. Agriculture Propiconazole is used agriculturally as a systemic fungicide on turfgrasses grown for seed and aesthetic or athletic value, wheat, mushrooms, corn, wild rice, peanuts, almonds, sorghum, oats, pecans, apricots, peaches, nectarines, plums, prunes and lemons. It is also used in combination with permethrin in formulations of wood preserver. Propiconazole is a mixture of four stereoisomers and was first developed in 1979 by Janssen Pharmaceutica. Propiconazole exhibits strong anti-feeding properties against the keratin-digesting Australian carpet beetle Anthrenocerus australis. References External links Non-CCA Wood Preservatives: Guide to Selected Resources - National Pesticide Information Center Aromatase inhibitors Fungicides Lanosterol 14α-demethylase inhibitors Triazoles
Propiconazole
[ "Biology" ]
278
[ "Fungicides", "Biocides" ]
13,552,288
https://en.wikipedia.org/wiki/Silver%20tetrafluoroborate
Silver tetrafluoroborate is an inorganic compound with the chemical formula AgBF4. It is a white solid that dissolves in polar organic solvents as well as water. In its solid state, the Ag+ centers are bound to fluoride. Preparation Silver tetrafluoroborate is prepared by the reaction between boron trifluoride and silver oxide in the presence of benzene. Laboratory uses In the inorganic and organometallic chemistry laboratory, silver tetrafluoroborate, sometimes referred to as "silver BF-4", is a useful reagent. In dichloromethane, silver tetrafluoroborate is a moderately strong oxidant. Similar to silver hexafluorophosphate, it is commonly used to replace halide anions or ligands with the weakly coordinating tetrafluoroborate anion. The abstraction of the halide is driven by the precipitation of the corresponding silver halide. References Tetrafluoroborates Silver compounds Oxidizing agents
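The halide abstraction described above follows a generic pattern, which can be sketched schematically for an arbitrary metal chloride complex LnM–Cl (the specific complex is left unspecified here; this is an illustration of the general reaction type rather than any particular literature example):

```latex
\mathrm{L_nM{-}Cl} \;+\; \mathrm{AgBF_4}
  \;\longrightarrow\;
  [\mathrm{L_nM}]^{+}[\mathrm{BF_4}]^{-} \;+\; \mathrm{AgCl}\!\downarrow
```

The precipitation of the insoluble silver chloride pulls the exchange to completion, leaving the cationic complex paired with the weakly coordinating tetrafluoroborate anion.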
Silver tetrafluoroborate
[ "Chemistry" ]
220
[ "Redox", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs" ]
13,552,465
https://en.wikipedia.org/wiki/Hereditary%20property
In mathematics, a hereditary property is a property of an object that is inherited by all of its subobjects, where the meaning of subobject depends on the context. These properties are particularly considered in topology and graph theory, but also in set theory. In topology In topology, a topological property is said to be hereditary if whenever a topological space has that property, then so does every subspace of it. If the latter is true only for closed subspaces, then the property is called weakly hereditary or closed-hereditary. For example, second countability and metrisability are hereditary properties. Sequentiality and Hausdorff compactness are weakly hereditary, but not hereditary. Connectivity is not weakly hereditary. If P is a property of a topological space X and every subspace also has property P, then X is said to be "hereditarily P". In combinatorics and graph theory Hereditary properties occur throughout combinatorics and graph theory, although they are known by a variety of names. For example, in the context of permutation patterns, hereditary properties are typically called permutation classes. In graph theory In graph theory, a hereditary property usually means a property of a graph which also holds for (is "inherited" by) its induced subgraphs. Equivalently, a hereditary property is preserved by the removal of vertices. A graph class is called hereditary if it is closed under induced subgraphs. Examples of hereditary graph classes are independent graphs (graphs with no edges), which is a special case (with c = 1) of being c-colorable for some number c, being forests, planar, complete, complete multipartite etc. Sometimes the term "hereditary" has been defined with reference to graph minors; then it may be called a minor-hereditary property. The Robertson–Seymour theorem implies that a minor-hereditary property may be characterized in terms of a finite set of forbidden minors. The term "hereditary" has been also used for graph properties that are closed with respect to taking subgraphs. In such a case, properties that are closed with respect to taking induced subgraphs, are called induced-hereditary. The language of hereditary properties and induced-hereditary properties provides a powerful tool for study of structural properties of various types of generalized colourings. The most important result from this area is the unique factorization theorem. Monotone property There is no consensus for the meaning of "monotone property" in graph theory. Examples of definitions are: Preserved by the removal of edges. Preserved by the removal of edges and vertices (i.e., the property should hold for all subgraphs). Preserved by the addition of edges and vertices (i.e., the property should hold for all supergraphs). Preserved by the addition of edges. (This meaning is used in the statement of the Aanderaa–Karp–Rosenberg conjecture.) The complementary property of a property that is preserved by the removal of edges is preserved under the addition of edges. Hence some authors avoid this ambiguity by saying a property A is monotone if A or AC (the complement of A) is monotone. Some authors choose to resolve this by using the term increasing monotone for properties preserved under the addition of some object, and decreasing monotone for those preserved under the removal of the same object. In matroid theory In a matroid, every subset of an independent set is again independent. This is a hereditary property of sets. A family of matroids may have a hereditary property. 
For instance, a family that is closed under taking matroid minors may be called "hereditary". In problem solving In planning and problem solving, or more formally one-person games, the search space is seen as a directed graph with states as nodes, and transitions as edges. States can have properties, and such a property P is hereditary if for each state S that has P, each state that can be reached from S also has P. The subset of all states that have P plus the subset of all states that have ~P form a partition of the set of states called a hereditary partition. This notion can trivially be extended to more discriminating partitions by considering, instead of properties, aspects of states and their domains. If states have an aspect A, with di ⊂ D a partition of the domain D of A, then the subsets of states for which A ∈ di form a hereditary partition of the total set of states iff ∀i, from any state where A ∈ di only other states where A ∈ di can be reached. If the current state and the goal state are in different elements of a hereditary partition, there is no path from the current state to the goal state: the problem has no solution. Can a checkerboard be covered with domino tiles, each of which covers exactly two adjacent fields? Yes. What if we remove the top left and the bottom right field? Then no covering is possible any more, because the difference between the number of uncovered white fields and the number of uncovered black fields is 2, and adding a domino tile (which covers one white and one black field) keeps that number at 2. For a total covering the number is 0, so a total covering cannot be reached from the start position. This notion was first introduced by Laurent Siklóssy and Roach. In model theory In model theory and universal algebra, a class K of structures of a given signature is said to have the hereditary property if every substructure of a structure in K is again in K. A variant of this definition is used in connection with Fraïssé's theorem: A class K of finitely generated structures has the hereditary property if every finitely generated substructure is again in K. See age. In set theory Recursive definitions using the adjective "hereditary" are often encountered in set theory. A set is said to be hereditary (or pure) if all of its elements are hereditary sets. It is vacuously true that the empty set is a hereditary set, and thus the set containing only the empty set is a hereditary set, and recursively so is {∅, {∅}}, for example. In formulations of set theory that are intended to be interpreted in the von Neumann universe or to express the content of Zermelo–Fraenkel set theory, all sets are hereditary, because the only sort of object that is even a candidate to be an element of a set is another set. Thus the notion of hereditary set is interesting only in a context in which there may be urelements. A couple of notions are defined analogously: A hereditarily finite set is defined as a finite set consisting of zero or more hereditarily finite sets. Equivalently, a set is hereditarily finite if and only if its transitive closure is finite. A hereditarily countable set is a countable set of hereditarily countable sets. Assuming the axiom of countable choice, then a set is hereditarily countable if and only if its transitive closure is countable. Based on the above, it follows that in ZFC a more general notion can be defined for any predicate Φ(x). A set x is said to have hereditarily the property Φ if x itself and all members of its transitive closure satisfy Φ, i.e. x ∪ tc(x) ⊆ {y : Φ(y)}.
Equivalently, x hereditarily satisfies Φ if and only if it is a member of a transitive subset of {y : Φ(y)}. A property (of a set) is thus said to be hereditary if it is inherited by every subset. For example, being well-ordered is a hereditary property, and so is being finite. If we instantiate Φ in the above schema with "x has cardinality less than κ", we obtain the more general notion of a set being hereditarily of cardinality less than κ, usually denoted by Hκ or H(κ). We regain the two simple notions we introduced above, with H(ω) being the set of hereditarily finite sets and H(ω1) being the set of hereditarily countable sets (ω1 is the first uncountable ordinal). References Graph theory Set theory Model theory Matroid theory
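As a concrete illustration of the induced-subgraph definition used in the graph-theory section above, the following Python sketch checks whether a property holds for every induced subgraph of a small graph. The graph and the example property (triangle-freeness) are chosen here purely for illustration; checking one graph demonstrates the definition but of course does not prove the property hereditary in general.

```python
from itertools import combinations

def induced_subgraph(edges, subset):
    """Edges of the subgraph induced by the vertex subset."""
    s = set(subset)
    return {e for e in edges if e[0] in s and e[1] in s}

def holds_for_all_induced_subgraphs(vertices, edges, prop):
    """True if `prop` holds for every induced subgraph (every vertex subset)."""
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if not prop(subset, induced_subgraph(edges, subset)):
                return False
    return True

def triangle_free(vertices, edges):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # A triangle exists iff some edge's endpoints share a neighbour.
    return all(not (adj[a] & adj[b]) for a, b in edges)

# A 4-cycle is triangle-free, and so is every induced subgraph of it,
# consistent with triangle-freeness being an induced-hereditary property.
V = [1, 2, 3, 4]
E = {(1, 2), (2, 3), (3, 4), (1, 4)}
print(holds_for_all_induced_subgraphs(V, E, triangle_free))   # True
```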
Hereditary property
[ "Mathematics" ]
1,651
[ "Discrete mathematics", "Set theory", "Mathematical logic", "Graph theory", "Combinatorics", "Mathematical relations", "Model theory", "Matroid theory" ]
13,552,556
https://en.wikipedia.org/wiki/Trudeau%20Institute
The Trudeau Institute is an independent, not-for-profit, biomedical research center located on a campus in Saranac Lake, New York. Its scientific mission is to make breakthrough discoveries that lead to improved human health. Its current president is Bill Reiley. As of 2024, the institute employed 64 staff. History Trudeau Institute is named for tuberculosis researcher Edward Livingston Trudeau, who founded the Adirondack Cottage Sanitarium in 1884 in Saranac Lake, New York to serve as a tuberculosis treatment and research facility. Trudeau, who had himself survived a tuberculosis infection during a stay at Paul Smith's Hotel in the Adirondacks, believed that the clean air in the region would improve health outcomes. Several other cure cottages opened in the town after the sanitarium. In 1894, after a fire destroyed his small laboratory, Trudeau built the Saranac Laboratory for the Study of Tuberculosis, the first laboratory in the United States dedicated to the study of tuberculosis. He was later elected the first president of the National Association for the Study and Prevention of Tuberculosis, the predecessor of the American Lung Association. He died in 1915 at age 67. Following Trudeau's death, the sanitarium's name was changed to the Trudeau Sanatorium, and a foundation and school were established to train physicians and healthcare professionals in the latest tuberculosis treatment methods. By 1947, more than 15,000 patients had received treatment there. The sanatorium closed in 1954, after the discovery of effective antibiotic treatments for tuberculosis. In 1957 Trudeau's grandson, Francis B. Trudeau Jr., sold the property to the American Management Association. The proceeds were invested in a new medical research facility built on Lower Saranac Lake, which became the Trudeau Institute and opened in 1964. Trudeau's Saranac Laboratory has been restored by Historic Saranac Lake and now serves as its headquarters and a museum. Trudeau Institute was traditionally reliant on research grants from the National Institutes of Health to fund research, but following declines in grants, the institution lost staff and received offers to relocate out of New York. Beginning in 2016, the institute was revitalized under the leadership of Atsuo Kuki, who reoriented projects from pure research to applied science, a strategy he labeled "Trudeau 3.0". The institute hosted a summit on infectious disease in 2018. During the COVID-19 pandemic, the institute collaborated with several academic institutions and bio-pharma companies on research projects related to the disease, including vaccine development as part of Operation Warp Speed. In July 2020, due to their role in assisting the regional community during the COVID-19 pandemic, the Saranac Lake Chamber of Commerce honored Adirondack Health and the Trudeau Institute as joint partners of the year. The institute also partnered with the Walter Reed Army Institute of Research to develop a vaccine for the Zika virus. References External links Official site Historic Saranac Lake - the Saranac Laboratory Adirondacks Biotechnology organizations Medical research institutes in the United States Saranac Lake, New York Organizations established in 1884 Non-profit organizations based in New York (state) Research institutes in New York (state)
Trudeau Institute
[ "Engineering", "Biology" ]
654
[ "Biotechnology organizations" ]
13,552,928
https://en.wikipedia.org/wiki/Water%20stop
A water stop or water station on a railroad is a place where steam trains stop to replenish water. The stopping of the train itself is also referred to as a "water stop". The term originates from the times of steam engines when large amounts of water were essential. Also known as wood and water stops or coal and water stops, since it was reasonable to replenish engines with fuel as well when adding water to the tender. United States During the very early days of steam locomotives, water stops were necessary every 7–10 miles (11–16 km) and consumed much travel time. With the introduction of tenders (a special car containing water and fuel), trains could run 100–150 miles (160–240 km) without a refill. To accumulate the water, water stops employed water tanks, water towers and tank ponds. The water was initially pumped by windmills, watermills, or by hand pumps often by the train crew themselves. Later, small steam and gasoline engines were used. As the U.S. railroad system expanded, large numbers of tank ponds were built by damming various small creeks that intersected the tracks in order to provide water for water stops. Largemouth bass were often stocked in tank ponds. Many water stops along new railways evolved into new settlements. When a train stopped for water and was positioned by a water tower, a member of the engine crew, usually the fireman, swung out the spigot arm over the water tender and "jerked" the chain to begin watering. This gave rise to a 19th-century slang term "Jerkwater town" for towns too insignificant to have a regular train station. Some water stops grew into established settlements: for example, the town of Coalinga, California, formerly, Coaling Station A, gets its name from the original coal stop at this location. On the other hand, with the replacement of steam engines by diesel locomotives many of the then obsolete water stops, especially in deserted areas, became ghost towns. During the days of the Wild West, isolated water stops were among the favorite ambush places for train robbers. See also Track pan - a water trough Water crane Notes References Rail infrastructure Water supply Steam locomotive technologies
Water stop
[ "Chemistry", "Engineering", "Environmental_science" ]
448
[ "Hydrology", "Water supply", "Environmental engineering" ]
13,552,973
https://en.wikipedia.org/wiki/John%20Kao
John Kao (born 1950) is an author and strategic advisor based in Connecticut. His work concentrates on issues of innovation and organizational transformation. Life and career Kao was born in 1950 to Chinese immigrant parents. An accomplished jazz pianist, he spent the summer of 1969 playing keyboards for Frank Zappa. Kao studied philosophy at Yale College, received an MD from Yale Medical School, and an MBA from Harvard Business School. He taught at Harvard Business School from 1982 to 1996, where he specialized in innovation and entrepreneurship. He has also held faculty appointments at the Massachusetts Institute of Technology Media Lab, Yale College, and the Naval Postgraduate School. His advisory work for Senator Hillary Clinton, including his ideas on innovation and transformation, was described in The New York Times as "out of the box". In 2000, Kao became CEO of Ealing Films. He also founded Kao Ventures, and San Francisco-based The Idea Factory, working with Internet-related startups. He also shared producer credits for Sex, Lies, and Videotape and Mr. Baseball. Publications Key publications include: References External links Official website 1950 births Living people American business writers American people of Chinese descent Harvard Business School alumni Harvard University faculty MIT School of Architecture and Planning faculty Naval Postgraduate School faculty Science and technology studies scholars Writers from California Yale School of Medicine alumni Yale University faculty Yale College alumni American chief executives
John Kao
[ "Technology" ]
278
[ "Science and technology studies", "Science and technology studies scholars" ]
13,552,978
https://en.wikipedia.org/wiki/Behavioral%20confirmation
Behavioral confirmation is a type of self-fulfilling prophecy whereby people's social expectations lead them to behave in ways that cause others to confirm their expectations. The phenomenon of belief creating reality is known by several names in literature: self-fulfilling prophecy, expectancy confirmation, and behavioral confirmation, which was first coined by social psychologist Mark Snyder in 1984. Snyder preferred this term because it emphasizes that it is the target's actual behavior that confirms the perceiver's beliefs. Self-fulfilling prophecy Preconceived beliefs and expectations are used by human beings when they interact with others, as guides to action. Their actions may then guide the interacting partner to behave in a way that confirms the individual's initial beliefs. The self-fulfilling prophecy is essentially the idea that beliefs and expectations can and do create their own reality. Sociologist Robert K. Merton defined a self-fulfilling prophecy as, in the beginning, a false definition of the situation evoking a new behavior which makes the originally false conception come true. Self-fulfilling prophecy focuses on the behavior of the perceiver in electing expected behavior from the target, whereas behavioral confirmation focuses on the role of the target's behavior in confirming the perceiver's beliefs. Research Research has shown that a person (referred to as a perceiver) who possesses beliefs about another person (referred to as a target) will often act on these beliefs in ways that lead the target to actually behave in ways that confirm the perceiver's original beliefs. In one demonstration of behavioral confirmation in social interaction, Snyder and colleagues had previously unacquainted male and female partners get acquainted through a telephone-like intercom system. The male participants were referred to as the perceivers, and the female participants were referred to as the targets. Prior to their conversations, the experimenter gave the male participants a Polaroid picture and led them to believe that it depicted their female partners. The male participants were unaware that, in fact, the pictures were not of their partners. The experimenter gave the perceivers pictures which portrayed either physically attractive or physically unattractive women in order to activate the perceiver's stereotypes that they may possess concerning attractive and unattractive people. The perceiver-target dyads engaged in a 10-minute, unstructured conversation, which was initiated by the perceivers. Individuals, identified as the raters, listened in on only the targets' contributions to the conversations and rated their impressions of the targets. Results showed that targets whose partners believed them to be physically attractive came to behave in a more sociable, warm, and outgoing manner than targets whose partners believed them to be physically unattractive. Consequently, targets behaviorally confirmed the perceivers' beliefs, thus turning the perceivers' beliefs into self-fulfilling prophecies. The study also supported and displayed the physical attractiveness stereotype. These findings suggest that human beings, who are the targets of many perceivers in everyday life, may routinely act in ways which are consistent not with their own attitudes, beliefs, or feelings; but rather with the perceptions and stereotypes which others hold of them and their attributes. This seems to suggest that the power of others' beliefs over one's behaviour is extremely strong. 
Mechanisms Snyder proposed a four-step sequence in which behavioral confirmation occurs: The perceiver adopts beliefs about the target The perceiver acts as if these beliefs were true and treats the target accordingly The target assimilates his or her behavior to the perceiver's overtures The perceiver interprets the target's behavior as confirmation of his or her original beliefs. Motivational foundations The perceiver and the target have a common goal of getting acquainted with one other, and they do so in different functions. Behavioral confirmation occurs from the combination of a perceiver who is acting in the service of the knowledge function and a target whose behaviors serve an adjustive function. The perceiver uses knowledge motivations in order to get a stable and predictable view of those whom one interacts, eliciting behavioral confirmation. Perceivers use knowledge-oriented strategies, which occur when perceivers view their interactions with targets as opportunities to find out about their targets' personality and to check their impressions of targets, leading perceivers to ask belief-confirming questions. The perceiver asks the target questions in order to form stable and predictable impressions of their partner, and perceivers tend to confidently assume that possession of even the limited information gathered about the other person gives them the ability to predict that that person's future will be consistent with the impressions gathered. When the target is motivated by adjustive functions, they are motivated to try to get along with their partners and to have a smooth and pleasant conversation with the perceiver. The adjustive function motivates the targets to reciprocate perceivers' overtures and thereby to behaviorally confirm perceivers' erroneous beliefs. Without the adjustive function, this may lead to behavioral disconfirmation. Examples Physical attractiveness – When one interacts with another person of high or low physical attractiveness, they influence that person's social prowess. When a target (unbeknownst to themselves) is tagged physically attractive, that target, through interaction with the perceiver, in turn comes to behave in a friendlier manner than do those tagged unattractive. Race – In a 1997 study by Chen and Bargh, it was shown that participants who were subliminally primed with an African-American stereotype observed more hostility from the target they interacted with than those who were in the control condition. This study suggests that behavioral confirmation caused targets to become more hostile when their perceiver had been negatively primed. Gender – When participants were made aware of their targets' gender in a division of labour task, targets fell into their gender-specific roles through behavioral confirmation. Loneliness – Adults who were presented with a hypothetically lonely peer and a non-lonely hypothetical peer were found to report greater rejection of the lonely peer, with evidence that this was due to individuals stigmatizing loneliness as a discredited attribute. Critique The principle objection to the idea of behavioral confirmation is that the laboratory situations that are used in the research often do not map onto real-world social interaction easily. In addition, it is argued that behavioral disconfirmation is just as likely to develop out of expectancies as are self-fulfilling expectations. 
A strong criticism by Lee Jussim is the allegation that, in all previous behavioral confirmation studies, the participants have been misled about the targets' characteristics, whereas in real life people's expectations are generally correct. In response to such critiques, behavioral confirmation research has adapted by introducing a non-conscious element. Even though there are clear pitfalls to the phenomenon, it has been studied continuously over the past few decades, highlighting its importance in psychology. References Human behavior
Behavioral confirmation
[ "Biology" ]
1,390
[ "Behavior", "Human behavior" ]
13,553,180
https://en.wikipedia.org/wiki/Photothermal%20microspectroscopy
Photothermal microspectroscopy (PTMS), alternatively known as photothermal temperature fluctuation (PTTF), is derived from two parent instrumental techniques: infrared spectroscopy and atomic force microscopy (AFM). In one particular type of AFM, known as scanning thermal microscopy (SThM), the imaging probe is a sub-miniature temperature sensor, which may be a thermocouple or a resistance thermometer. This same type of detector is employed in a PTMS instrument, enabling it to provide AFM/SThM images. However, the chief additional use of PTMS is to yield infrared spectra from sample regions below a micrometer, as outlined below. Technique The AFM is interfaced with an infrared spectrometer. For work using Fourier transform infrared spectroscopy (FTIR), the spectrometer is equipped with a conventional black body infrared source. A particular region of the sample may first be chosen on the basis of the image obtained using the AFM imaging mode of operation. Then, when material at this location absorbs the electromagnetic radiation, heat is generated, which diffuses, giving rise to a decaying temperature profile. The thermal probe then detects the photothermal response of this region of the sample. The resultant measured temperature fluctuations provide an interferogram that replaces the interferogram obtained by a conventional FTIR setup, e.g., by direct detection of the radiation transmitted by a sample. The temperature profile can be made sharp by modulating the excitation beam. This results in the generation of thermal waves whose diffusion length is inversely proportional to the square root of the modulation frequency. An important advantage of the thermal approach is that it permits depth-sensitive subsurface information to be obtained from surface measurements, thanks to the dependence of the thermal diffusion length on modulation frequency. Applications The two particular features of PTMS that have determined its applications so far are that 1) spectroscopic mapping may be performed at a spatial resolution well below the diffraction limit of IR radiation, ultimately at a scale of 20–30 nm. In principle, this opens the way to sub-wavelength IR microscopy (see scanning probe microscopy) where the image contrast is to be determined by the thermal response of individual sample regions to particular spectral wavelengths and 2) in general, no special preparation technique is required when solid samples are to be studied. For most standard FTIR methods, this is not the case. Related technique This spectroscopic technique complements another recently developed method of chemical characterisation or fingerprinting, namely micro-thermal analysis (micro-TA). This also uses an "active" SThM probe, which acts as a heater as well as a thermometer, so as to inject evanescent temperature waves into a sample and to allow sub-surface imaging of polymers and other materials. The sub-surface detail detected corresponds to variations in heat capacity or thermal conductivity. Ramping the temperature of the probe, and thus the temperature of the small sample region in contact with it, allows localized thermal analysis and/or thermomechanometry to be performed. References Further reading Scanning probe microscopy Spectroscopy
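The inverse-square-root dependence of the thermal diffusion length on modulation frequency, mentioned above, is what makes depth profiling possible: higher modulation frequencies confine the thermal wave closer to the surface. A small Python sketch, using the standard relation μ = sqrt(α/(π·f)) and an assumed, merely illustrative thermal diffusivity of order that of a polymer, shows the trend:

```python
import math

def thermal_diffusion_length(alpha_m2_per_s: float, f_hz: float) -> float:
    """Thermal diffusion length mu = sqrt(alpha / (pi * f)), in metres."""
    return math.sqrt(alpha_m2_per_s / (math.pi * f_hz))

# Assumed thermal diffusivity of order 1e-7 m^2/s (roughly typical of
# many polymers; used here only as an illustrative number).
alpha = 1.0e-7
for f in (10, 100, 1_000, 10_000):
    mu = thermal_diffusion_length(alpha, f)
    print(f"{f:>6} Hz -> diffusion length ~ {mu * 1e6:.1f} micrometres")
```

Raising the modulation frequency by a factor of 100 thus shortens the probed depth by a factor of 10, which is the basis of the depth-sensitive measurements described above.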
Photothermal microspectroscopy
[ "Physics", "Chemistry", "Materials_science" ]
656
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Spectroscopy" ]
13,553,280
https://en.wikipedia.org/wiki/BIOS%20Centre%20for%20the%20Study%20of%20Bioscience%2C%20Biomedicine%2C%20Biotechnology%20and%20Society
The BIOS Centre for the Study of Bioscience, Biomedicine, Biotechnology and Society was an international centre for research and policy on social aspects of the life sciences and biomedicine located at the London School of Economics (LSE), England. It was founded in 2002 by Professor Nikolas Rose, a prominent British sociologist. About BIOS BIOS was a multidisciplinary centre at LSE for research into contemporary developments in the life sciences, biomedicine and biotechnology. It was an initiative of the Department of Sociology, with the support of the Departments of Social Psychology, Government and Law and the Centre for Philosophy of Natural and Social Sciences. BIOS supported post-doctoral researchers, visiting fellows and professors, and postgraduate students. It had an infrastructure which encouraged and hosted research supported by bodies such as the Economic and Social Research Council (ESRC), the Wellcome Trust, the Medical Research Council and other major funding bodies. BIOS's ethos was one of empirically grounded and conceptually sophisticated research, conducted in close relation with life scientists, clinicians and policy makers. Among the issues addressed were justice, power and inequality, geopolitics, and social and individual identity. Major research projects focused on regenerative medicine, social aspects of the neurosciences and psychopharmacology, biosecurity, biopolitics, bioeconomics, translational biology and bioethics. BIOS ran an innovative Master's programme in Biomedicine and Society, attracting students from backgrounds in many disciplines in the social and life sciences. The BIOS community of over 40 researchers included a large number of doctoral students, postdoctoral fellows, research staff, visiting fellows and professors and associated faculty. The founder and Director of BIOS was the Martin White Professor of Sociology at LSE, Professor Nikolas Rose, and the Centre closed in 2012 when he moved to King's College London to found the Department of Global Health and Social Medicine, which continues to develop the work initiated at BIOS. References London School of Economics BIOS External links European Neuroscience and Society Network ENSN European Science Foundation ESF Biotechnology in the United Kingdom Research institutes in London London School of Economics Biological research institutes in the United Kingdom Research institutes established in 2002 2002 establishments in England
BIOS Centre for the Study of Bioscience, Biomedicine, Biotechnology and Society
[ "Biology" ]
470
[ "Biotechnology in the United Kingdom", "Biotechnology by country" ]
13,553,672
https://en.wikipedia.org/wiki/Multilateral%20Interoperability%20Programme
The Multilateral Interoperability Programme (MIP) is an effort to deliver an assured capability for information interoperability to support multinational, combined and joint military operations. The programme aims to support all levels from corps to battalion and focuses on command and control (C2) systems. MIP is a consortium of 27 NATO and non-NATO nations that meet quarterly to work on the next iteration of its products. It has standing collaborations with NATO Allied Command Transformation (ACT) and the European Defence Agency (EDA). Overview The Multilateral Interoperability Programme, referred to as MIP, is an interoperability standards consortium, established by national C2IS developers, with a requirement to share relevant C2 information in a multinational or coalition environment. As a result of collaboration within the programme, MIP produces a set of specifications which, when implemented by the nations, provide the required interoperability capability. MIP provides a venue for system-level interoperability testing of national MIP implementations as well as providing a forum for exchanging information relevant to national implementation and fielding plans to enable synchronisation. MIP is not empowered to direct how nations develop their own C2IS. NATO and MIP share a common interest in building, testing, verifying and improving information exchange models and derived specifications. Reasons for establishing MIP Interoperability among allied armed forces is more important today than it ever was. Warfare, and the need for timely information exchange between allies, has changed tremendously over the last 100 years. If technology, terminology and command structures are not harmonized, a joint force is able neither to act fast nor to act as one unit. To improve interoperability, alliances around the globe, such as NATO, the EU and the UN, are continuously working on standardizing their command and control information systems. MIP works outside their bureaucratic frameworks to spearhead and test solutions. Many of MIP's efforts end up being covered by NATO in STANAGs. History The need for interoperability between command and control information systems became readily apparent toward the end of the 20th century. MIP as we know it today is the result of three separate C2 initiatives: - Army Tactical Command and Control Information System (ATCCIS, 1980) - Battlefield Interoperability Programme for Lower Echelon Command and Control Systems (BIP, 1995) - Quadrilateral Interoperability Programme (QIP, 1998) The MIP was formed in 1998 by project managers of Canada, Germany, France, Great Britain, Italy and the US, as a merger of BIP and QIP. In 2004 it went on to combine all data modeling activities, including the ATCCIS. Among the successes of the MIP are the C2IEDM and the JC3IEDM. Both are covered by NATO STANAGs and both are predecessors of the MIM. Products NATO STANAG 5523 (C2IEDM) The Command and Control Information Exchange Data Model (C2IEDM, predecessor to the JC3IEDM) is a data model that is managed by the MIP. It originated with experts from various NATO partners and from the Partnership for Peace nations. NATO STANAG 5525 (JC3IEDM) The Joint Command, Control and Consultation Information Exchange Data Model (JC3IEDM) is first and foremost an information exchange data model. The model can also serve as a coherent basis for other information exchange mechanisms, such as message formats, currently lacking a unified information structure.
It supports data exchange over XML and is the most successful evolution in a long line of data models. JC3IEDM is intended to represent the core of the data identified for exchange across multiple functional areas and multiple views of the requirements. Toward that end, it lays down a common approach to describing the information to be exchanged. MIM - MIP Information Model MIM is a reference model that provides the taxonomic and semantic foundation for information exchange in the C2 domain. The goal is to harmonize current information exchange concepts and create a common ontology tailored to the needs of joint/combined military operations. MIP has submitted a proposal to the NATO Digital Policy Committee (DPC) (formerly known as the C3 Board) to cover its information model in STANAG 5643. MIM is the baseline for the MIP 4.4 Information Exchange Specification (MIP4.4-IES), which offers a service-oriented architecture and expands on the data exchange capabilities of its predecessor, the JC3IEDM. The current version of MIM is 5.2, released in July 2023. MIP IES – MIP Information Exchange Specification The IES is the actual product of MIP to be implemented in C2 systems. MIP 4.4 IES is the newest released version (October 2023) and is part of Federated Mission Networking (FMN) Spirals 4 and 5. Implementation guidance and documentation are available through member nations. MIP 4.5 IES is scheduled for release in September 2025. MIP Members The full members are: France, Germany, the Netherlands, Spain, Türkiye, the United Kingdom and the United States of America. The associated members are: Canada, Denmark, Austria, Belgium, the Czech Republic, Greece, Hungary, Lithuania, Norway, New Zealand, Poland, Romania, Switzerland and Ukraine, as well as Allied Command Transformation. Full members commit to supporting the collaborative development of MIP solutions (with at least three persons) and must express the intention to field those solutions. Associated members can take part in the development process (with at least one person) and in the fielding of MIP solutions, but have no voting rights at meetings. References External links Bundeswehr-internes Wiki tidepedia Website MIM NATO standardisation United States Department of Defense agencies Data modeling
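To give a rough, purely illustrative sense of what an information exchange data model looks like in practice, the following minimal Python sketch defines a hypothetical "unit report" record and serializes it to XML for exchange between two C2 systems. The class, field and element names here are invented for this example; they are not taken from the actual JC3IEDM or MIP 4.4 IES specifications.

```python
# Hypothetical illustration only: the record and element names below are
# invented for this sketch and are NOT the real JC3IEDM / MIP 4.4 IES schema.
# The point is simply to show the idea of a shared data model serialized to
# XML so that different national C2 systems can exchange the same structure.
from dataclasses import dataclass
from xml.etree import ElementTree as ET


@dataclass
class UnitReport:
    """A minimal, made-up 'unit status' record shared between C2 systems."""
    unit_id: str        # nationally assigned identifier (format is invented)
    affiliation: str    # e.g. "FRIEND", "NEUTRAL", "HOSTILE"
    latitude: float     # WGS84 decimal degrees
    longitude: float
    reported_at: str    # ISO 8601 timestamp

    def to_xml(self) -> ET.Element:
        """Serialize the record to a simple XML element for exchange."""
        elem = ET.Element("UnitReport", attrib={"id": self.unit_id})
        ET.SubElement(elem, "Affiliation").text = self.affiliation
        pos = ET.SubElement(elem, "Position")
        ET.SubElement(pos, "Latitude").text = str(self.latitude)
        ET.SubElement(pos, "Longitude").text = str(self.longitude)
        ET.SubElement(elem, "ReportedAt").text = self.reported_at
        return elem


if __name__ == "__main__":
    report = UnitReport("DEU-2-BTN-031", "FRIEND", 49.018, 12.095,
                        "2023-10-05T14:30:00Z")
    print(ET.tostring(report.to_xml(), encoding="unicode"))
```

Because both sender and receiver agree on the same (here fictitious) structure and vocabulary, each nation can map the exchanged record into its own internal C2IS representation, which is the basic interoperability idea the MIP specifications formalize at much larger scale.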
Multilateral Interoperability Programme
[ "Engineering" ]
1,187
[ "Data modeling", "Data engineering" ]
13,554,247
https://en.wikipedia.org/wiki/Cadet%27s%20fuming%20liquid
Cadet's fuming liquid was a red-brown oily liquid prepared in 1760 by the French chemist Louis Claude Cadet de Gassicourt (1731-1799) by the reaction of potassium acetate with arsenic trioxide. It consisted mostly of dicacodyl (((CH3)2As)2) and cacodyl oxide (((CH3)2As)2O). The overall reaction (mass balance) corresponding to the formation of the oxide is: 4 CH3CO2K + As2O3 → ((CH3)2As)2O + 2 K2CO3 + 2 CO2. These were the first organometallic substances prepared; as such, Cadet has been regarded as the father of organometallic chemistry. The liquid develops white fumes when exposed to air, resulting in a pale flame that produces carbon dioxide, water, and arsenic trioxide. It has a nauseating and very disagreeable garlic-like odor. Around 1840, Robert Bunsen did much work on characterizing the compounds in the liquid and its derivatives. His research was important in the development of radical theory. References Organometallic chemistry Organoarsenic compounds Chemical mixtures
Cadet's fuming liquid
[ "Chemistry" ]
218
[ "Organometallic chemistry", "Chemical mixtures", "nan" ]
13,554,417
https://en.wikipedia.org/wiki/FlexiScale
FlexiScale is a utility computing platform launched by XCalibre Communications in the summer of 2007, and subsequently acquired by Flexiant. Launched shortly after Amazon's EC2 service, it was Europe's first and the world's second cloud computing platform. Users are able to create, start, and stop servers as they require, allowing rapid deployment where needed. Both Windows and Linux are supported on the FlexiScale platform. FlexiScale uses the open-source Xen hypervisor. Backend storage comes from a highly redundant SAN, although the level of redundancy was called into question in August 2008, when more than two days of downtime resulted from an engineering mistake. The storage system previously consisted of a dual-head NetApp storage array with the disk shelves connected to both heads. This was replaced with the launch of FlexiScale 2.0 in June 2010 by an "Amber Road"-based storage solution from Sun Microsystems. There have been a few revisions of the FlexiScale platform. According to publicly available information, the first version was based on the Virtual Iron VM management software. The second revision of the platform (called FlexiScale v1.5) was based on an in-house developed VM control system, now known as Extility. The most recent release (called FlexiScale v2.0) introduced an extensively revised user interface. References External links FlexiScale main website Web services Cloud infrastructure
FlexiScale
[ "Technology" ]
301
[ "Cloud infrastructure", "IT infrastructure" ]
13,554,721
https://en.wikipedia.org/wiki/I%CE%BAB%CE%B1
IκBα (nuclear factor of kappa light polypeptide gene enhancer in B-cells inhibitor alpha; NFKBIA) is one member of a family of cellular proteins that function to inhibit the NF-κB transcription factor. IκBα inhibits NF-κB by masking the nuclear localization signals (NLS) of NF-κB proteins and keeping them sequestered in an inactive state in the cytoplasm. In addition, IκBα blocks the ability of NF-κB transcription factors to bind to DNA, which is required for NF-κB's proper functioning. Disease linkage The gene encoding the IκBα protein is mutated in some Hodgkin's lymphoma cells; such mutations inactivate the IκBα protein, thus causing NF-κB to be chronically active in the lymphoma tumor cells and this activity contributes to the malignant state of these tumor cells. Interactions IκBα has been shown to interact with: BTRC, C22orf25, CHUK, DYNLL1, G3BP2, Heterogeneous nuclear ribonucleoprotein A1, IKK2, NFKB1, P53, RELA, RPS6KA1, SUMO4, and Valosin-containing protein. References Further reading External links OMIM entries on Ectodermal Dysplasia, Anhidrotic, with T-cell Immunodeficiency Transcription factors
IκBα
[ "Chemistry", "Biology" ]
325
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,556,060
https://en.wikipedia.org/wiki/Salsola
Salsola is a genus of the subfamily Salsoloideae in the family Amaranthaceae. The genus sensu stricto is distributed in Australia, central and southwestern Asia, North Africa, and the Mediterranean. Common names of various members of this genus and related genera are saltwort (for their salt tolerance) and tumbleweed or roly-poly. The genus name Salsola is from the Latin salsus, meaning "salty". Description The species of Salsola are mostly subshrubs, shrubs, small trees, and rarely annuals. The leaves are mostly alternate, rarely opposite, simple, and entire. The bisexual flowers have five tepals and five stamens. The pistil ends in two stigmata. The fruit is spherical with a spiral embryo and no perisperm. Systematics The genus name Salsola was first published in 1753 by Linnaeus in Species Plantarum. The type species is Salsola soda L. The genus Salsola belongs to the tribe Salsoleae s.s. of the subfamily Salsoloideae in the family Amaranthaceae. The genus was recircumscribed in 2007 based on molecular phylogenetic research, greatly reducing the number of species. Synonyms of Salsola sensu stricto are: Darniella Maire & Weiller, Fadenia Aellen & Townsend, Neocaspia Tzvelev and Hypocylix Wol. Plants of the World Online includes: Salsola acanthoclada Salsola africana Salsola algeriensis Salsola angusta Salsola arbusculiformis Salsola australis Salsola austrotibetica Salsola baranovii Salsola basaltica Salsola brevifolia Salsola chellalensis Salsola chinghaiensis Salsola collina Salsola cruciata Salsola divaricata Salsola drummondii Salsola euryphylla Salsola glomerata Salsola × gobicola Salsola griffithii Salsola gymnomaschala Salsola gypsacea Salsola halimocnemis Salsola hartmannii Salsola ikonnikovii Salsola intramongolica Salsola jacquemontii Salsola junatovii Salsola kali Salsola komarovii Salsola laricifolia Salsola mairei Salsola masclansii Salsola melitensis Salsola monoptera Salsola pachyphylla Salsola papillosa Salsola paulsenii Salsola pontica Salsola praecox Salsola praemontana Salsola ryanii Salsola sabrinae Salsola sinkiangensis Salsola squarrosa Salsola strobilifera Salsola subglabra Salsola tamamschjanae Salsola tamariscina Salsola tragus (sometimes placed in Kali) Salsola tunetana Salsola turcica Salsola verticillata Salsola webbii Salsola zaidamica Salsola zygophylla Excluded species: Many species formerly grouped in Salsola were excluded by Akhani et al. (2007). Some may now be classified in separate genera: Turania (for Salsola sect. Sogdiana) Xylosalsola (for Salsola sect. Coccosalsola subsect. Arbuscula) Caroxylon (for Salsola sect. Caroxylon) Caroxylon imbricatum (Forssk.) Moq. (Syn. Salsola imbricata Forssk.) Caroxylon vermiculatum (L.) Akhani & Roalson (Syn. Salsola vermiculata L.) Kaviria (for Salsola sect. Belanthera) Uses The leaves and shoots of S. soda, known in Italy as barba di frate or agretti, are cooked and used as vegetables. The species is also used for the production of potash. In Namibia, where the plant is called gannabos, it is a valuable fodder plant. References Amaranthaceae Halophytes Barilla plants Drought-tolerant plants Amaranthaceae genera
Salsola
[ "Chemistry" ]
924
[ "Halophytes", "Salts" ]
13,556,959
https://en.wikipedia.org/wiki/Perpendicular%20distance
In geometry, the perpendicular distance between two objects is the distance from one to the other, measured along a line that is perpendicular to one or both. The distance from a point to a line is the distance to the nearest point on that line. That is the point at which a segment from it to the given point is perpendicular to the line. Likewise, the distance from a point to a curve is measured by a line segment that is perpendicular to a tangent line to the curve at the nearest point on the curve. The distance from a point to a plane is measured as the length from the point along a segment that is perpendicular to the plane, meaning that it is perpendicular to all lines in the plane that pass through the nearest point in the plane to the given point. Other instances include: Point on plane closest to origin, for the perpendicular distance from the origin to a plane in three-dimensional space Nearest distance between skew lines, for the perpendicular distance between two non-parallel lines in three-dimensional space Perpendicular regression fits a line to data points by minimizing the sum of squared perpendicular distances from the data points to the line. Other geometric curve fitting methods using perpendicular distance to measure the quality of a fit exist, as in total least squares. The concept of perpendicular distance may be generalized to orthogonal distance, between more abstract non-geometric orthogonal objects, as in linear algebra (e.g., principal components analysis); normal distance, involving a surface normal, between an arbitrary point and its foot on the surface. It can be used for surface fitting and for defining offset surfaces. See also Distance between sets Hypercycle (geometry) Moment of inertia Signed distance References Orthogonality Distance
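For the point-to-line case in the plane, the perpendicular distance admits a simple closed-form expression. The short Python sketch below (the function name and sample values are chosen only for illustration) computes it for a line given in general form ax + by + c = 0.

```python
# Minimal sketch: perpendicular distance from a point to a line in the plane.
# For the line a*x + b*y + c = 0 the standard formula is
#     d = |a*x0 + b*y0 + c| / sqrt(a^2 + b^2).
import math


def point_line_distance(a: float, b: float, c: float,
                        x0: float, y0: float) -> float:
    """Perpendicular distance from the point (x0, y0) to the line a*x + b*y + c = 0."""
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)


# Example: the distance from the origin to the line x + y - 2 = 0 is sqrt(2).
print(point_line_distance(1.0, 1.0, -2.0, 0.0, 0.0))  # ~1.4142
```

The same idea extends directly to the point-to-plane case in three dimensions, where the denominator becomes the norm of the plane's normal vector.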
Perpendicular distance
[ "Physics", "Mathematics" ]
342
[ "Physical quantities", "Distance", "Quantity", "Size", "Space", "Spacetime", "Wikipedia categories named after physical quantities" ]
13,558,237
https://en.wikipedia.org/wiki/Mathematics%2C%20Form%20and%20Function
Mathematics, Form and Function, a book published in 1986 by Springer-Verlag, is a survey of the whole of mathematics, including its origins and deep structure, by the American mathematician Saunders Mac Lane. Mathematics and human activities Throughout his book, and especially in chapter I.11, Mac Lane informally discusses how mathematics is grounded in more ordinary concrete and abstract human activities. The following table is adapted from one given on p. 35 of Mac Lane (1986). The rows are very roughly ordered from most to least fundamental. For a bullet list that can be compared and contrasted with this table, see section 3 of Where Mathematics Comes From. Also see the related diagrams appearing on the following pages of Mac Lane (1986): 149, 184, 306, 408, 416, 422-28. Mac Lane (1986) cites a related monograph by Lars Gårding (1977). Mac Lane's relevance to the philosophy of mathematics Mac Lane cofounded category theory with Samuel Eilenberg, which enables a unified treatment of mathematical structures and of the relations among them, at the cost of breaking away from their cognitive grounding. Nevertheless, his views—however informal—are a valuable contribution to the philosophy and anthropology of mathematics. His views anticipate, in some respects, the more detailed account of the cognitive basis of mathematics given by George Lakoff and Rafael E. Núñez in their Where Mathematics Comes From. Lakoff and Núñez argue that mathematics emerges via conceptual metaphors grounded in the human body, its motion through space and time, and in human sense perceptions. See also 1986 in philosophy Notes References Gårding, Lars, 1977. Encounter with Mathematics. Springer-Verlag. Reuben Hersh, 1997. What Is Mathematics, Really? Oxford Univ. Press. George Lakoff and Rafael E. Núñez, 2000. Where Mathematics Comes From. Basic Books. Leslie White, 1947, "The Locus of Mathematical Reality: An Anthropological Footnote," Philosophy of Science 14: 289-303. Reprinted in Hersh, R., ed., 2006. 18 Unconventional Essays on the Nature of Mathematics. Springer: 304–19. 1986 non-fiction books Mathematics books Philosophy of mathematics literature Cognitive science literature
Mathematics, Form and Function
[ "Mathematics" ]
446
[ "Philosophy of mathematics literature" ]
13,561,452
https://en.wikipedia.org/wiki/Philosophy%20of%20desire
In philosophy, desire has been identified as a recurring philosophical problem. It has been variously interpreted as what compels someone towards the highest state of human nature or consciousness, as well as being posited as either something to be eliminated or a powerful source of potential. In Plato's The Republic, Socrates argued that individual desires must be postponed in the name of a higher ideal. Similarly, within the teachings of Buddhism, craving, identified as the most potent form of desire, is thought to be the cause of all suffering, which can be eliminated to attain greater happiness (Nirvana). While on the path to liberation, a practitioner is advised to "generate desire" for skillful ends. History Ancient Greece Plato uses the term epithumia to refer to both desire as a broad category and as a specific kind of desire. Aristotle clarifies the varying notions by specifying that overarching category is orexis. Within that category, epithumia is a type of desire along with boulêsis (wish) and thumos (spirited thinking). In Aristotle's De Anima the soul is seen to be involved in motion, because animals desire things and in their desire, they acquire locomotion. Aristotle argued that desire is implicated in animal interactions and the propensity of animals to motion. But Aristotle acknowledges that desire cannot account for all purposive movement towards a goal. He brackets the problem by positing that perhaps reason, in conjunction with desire and by way of the imagination, makes it possible for one to apprehend an object of desire, to see it as desirable. In this way reason and desire work together to determine what is a good object of desire. This resonates with desire in the chariots of Plato's Phaedrus, for in the Phaedrus the soul is guided by two horses, a dark horse of passion and a white horse of reason. Here passion and reason, as in Aristotle, are also together. Socrates does not suggest the dark horse be done away with, since its passions make possible a movement towards the objects of desire, but he qualifies desire and places it in a relation to reason so that the object of desire can be discerned correctly, so that we may have the right desire. Aristotle distinguishes desire into two aspects of appetition, and volition. Appetition, or appetite, is a longing for or seeking after something; a craving. Aristotle makes the distinction as follows: Everything, too, is pleasant for which we have the desire within us, since desire is the craving for pleasure. Of the desires some are irrational, some associated with reason. By irrational I mean those which do not arise from any opinion held by the mind. Of this kind are those known as ‘natural’; for instance, those originating in the body, such as the desire for nourishment, namely hunger and thirst, and a separate kind of desire answering to each kind of nourishment; and the desires connected with taste and sex and sensations of touch in general; and those of smell, hearing, and vision. Rational desires are those which we are induced to have; there are many things we desire to see or get because we have been told of them and induced to believe them good. Western philosophers In Passions of the Soul, René Descartes writes of the passion of desire as an agitation of the soul that projects desire, for what it represents as agreeable, into the future. Desire in Immanuel Kant can represent things that are absent and not only objects at hand. 
Desire is also the preservation of objects already present, as well as the desire that certain effects not appear, that what affects one adversely be curtailed and prevented in the future. Moral and temporal values attach to desire in that objects which enhance one's future are considered more desirable than those that do not, and it introduces the possibility, or even necessity, of postponing desire in anticipation of some future event, anticipating Sigmund Freud's text Beyond the Pleasure Principle. See also, the pleasure principle in psychology. In his Ethics, Baruch Spinoza declares desire to be "the very essence of man," in the "Definitions of the Affects" at the end of Part III. An early example of desire as an ontological principle, it applies to all things or "modes" in the world, each of which has a particular vital "striving" (sometimes expressed with the Latin "conatus") to persist in existence (Part III, Proposition 7). Different striving beings have different levels of power, depending on their capacity to persevere in being. Affects, or emotions which are divided into the joyful and the sad, alter our level of power or striving: joy is a passage "from a lesser to a greater perfection" or degree of power (III Prop. 11 Schol.), just as sadness is the opposite. Desire, qualified by the imagination and the intellect, is an attempt to maximize power, to "strive to imagine those things that increase or aid the body's power of acting." (III Prop. 12). Spinoza ends the Ethics by a proposition that both moral virtue and spiritual blessedness are a direct result of essential power to exist, i.e. desire (Part V Prop. 42). In A Treatise on Human Nature, David Hume suggests that reason is subject to passion. Motion is put into effect by desire, passions, and inclinations. It is desire, along with belief, that motivates action. Immanuel Kant establishes a relation between the beautiful and pleasure in Critique of Judgment. He says "I can say of every representation that it is at least possible (as a cognition) it should be bound up with a pleasure. Of representation that I call pleasant I say that it actually excites pleasure in me. But the beautiful we think as having a necessary reference to satisfaction." Desire is found in the representation of the object. Georg Wilhelm Friedrich Hegel begins his exposition of desire in Phenomenology of Spirit with the assertion that "self-consciousness is the state of desire () in general." It is in the restless movement of the negative that desire removes the antithesis between itself and its object, "and the object of immediate desire is a living thing," an object that forever remains an independent existence, something other. Hegel's inflection of desire via stoicism becomes important in understanding desire as it appears in Marquis de Sade. Stoicism in this view has a negative attitude towards "otherness, to desire, and work." Reading Maurice Blanchot in this regard, in his essay Sade's Reason, the libertine is one of a type that sometimes intersects with a Sadean man, who finds in stoicism, solitude, and apathy the proper conditions. Blanchot writes, "the libertine is thoughtful, self-contained, incapable of being moved by just anything." Apathy in de Sade is opposition not to desire but to its spontaneity. Blanchot writes that in Sade, "for passion to become energy, it is necessary that it be constricted, that it be mediated by passing through a necessary moment of insensibility, then it will be the greatest passion possible." 
Here is stoicism, as a form of discipline, through which the passions pass. Blanchot says, "Apathy is the spirit of negation, applied to the man who has chosen to be sovereign." Dispersed, uncontrolled passion does not augment one's creative force but it gets diminished. In his Principia Ethica, British philosopher G. E. Moore argued that two theories of desire should be clearly distinguished. The hedonistic theory of John Stuart Mill states that pleasure is the sole object of all desire. Mill suggests that a desire for an object is caused by an idea of the possible pleasure that would result from the attainment of the object. The desire is fulfilled when this pleasure is achieved. On this view, the pleasure is the sole motivating factor of the desire. Moore proposes an alternative theory in which an actual pleasure is already present in the desire for the object and that the desire is then for that object and only indirectly for any pleasure that results from attaining it. "In the first place, plainly, we are not always conscious of expecting pleasure, when we desire a thing. We may only be conscious of the thing which we desire, and may be impelled to make for it at once, without any calculation as to whether it will bring us pleasure or pain. In the second place, even when we do expect pleasure, it can certainly be very rarely pleasure only which we desire. On Moore's view, Mill's theory is too non-specific as to the objects of desire. Moore provides the following example: "For instance, granted that, when I desire my glass of port wine, I have also an idea of the pleasure I expect from it, plainly that pleasure cannot be the only object of my desire; the port wine must be included in my object, else I might be led by my desire to take wormwood instead of wine . . . If the desire is to take a definite direction, it is absolutely necessary that the idea of the object, from which the pleasure is expected, should also be present and should control my activity." For Charles Fourier, following desires (like passions or in Fourier's own words 'attractions') is a means to attain harmony. Buddhism Within the teachings of Siddhartha Gautama (Buddhism), craving is thought to be the cause of all suffering that one experiences in human existence. The extinction of this craving leads one to ultimate happiness, or Nirvana. Nirvana means "cessation", "extinction" (of suffering) or "extinguished", "quieted", "calmed"; it is also known as "Awakening" or "Enlightenment" in the West. The Four Noble Truths were the first teaching of Gautama Buddha after attaining Nirvana. They state that suffering is an inevitable part of life as we know it. The cause of this suffering is attachment to, or craving for worldly pleasures of all kinds and clinging to this very existence, our "self" and the things or people we—due to our delusions—deem the cause of our respective happiness or unhappiness. The suffering ends when the craving and desire ends, or one is freed from all desires by eliminating the delusions, reaches "Enlightenment". While greed and lust are always unskillful, desire is ethically variable—it can be skillful, unskillful, or neutral. In the Buddhist perspective, the enemy to be defeated is craving rather than desire in general. Psychoanalysis Jacques Lacan's désir follows Freud's concept of Wunsch and it is central to Lacanian theories. 
For the aim of the talking cure—psychoanalysis—is precisely to lead the analysand or patient to uncover the truth about their desire, but this is only possible if that desire is articulated, or spoken. Lacan said that "it is only once it is formulated, named in the presence of the other, that desire appears in the full sense of the term." "That the subject should come to recognize and to name his/her desire, that is the efficacious action of analysis. But it is not a question of recognizing something which would be entirely given. In naming it, the subject creates, brings forth, a new presence in the world." "[W]hat is important is to teach the subject to name, to articulate, to bring desire into existence." Now, although the truth about desire is somehow present in discourse, discourse can never articulate the whole truth about desire: whenever discourse attempts to articulate desire, there is always a leftover, a surplus. In The Signification of the Phallus Lacan distinguishes desire from need and demand. Need is a biological instinct that is articulated in demand, yet demand has a double function: on one hand it articulates need, and on the other it acts as a demand for love. So, even after the need articulated in demand is satisfied, the demand for love remains unsatisfied, and this leftover is desire. For Lacan "desire is neither the appetite for satisfaction nor the demand for love, but the difference that results from the subtraction of the first from the second" (article cited). Desire then is the surplus produced by the articulation of need in demand. Lacan adds that "desire begins to take shape in the margin in which demand becomes separated from need." Hence desire can never be satisfied, or as Slavoj Žižek puts it, "desire's raison d'être is not to realize its goal, to find full satisfaction, but to reproduce itself as desire." It is also important to distinguish between desire and the drives. Even though they both belong to the field of the Other (as opposed to love), desire is one, whereas the drives are many. The drives are the partial manifestations of a single force called desire (see "The Four Fundamental Concepts of Psychoanalysis"). If one can surmise that objet petit a is the object of desire, it is not the object towards which desire tends, but the cause of desire. For desire is not a relation to an object but a relation to a lack (manque). Desire then appears as a social construct, since it is always constituted in a dialectical relationship. Deleuze and Guattari French philosophers and critical theorists Gilles Deleuze and Félix Guattari's 1972 book Anti-Oedipus has been widely credited as a landmark work tackling philosophical and psychoanalytical conceptions of desire, and proposing a new theory of desire in the form of schizoanalysis. Deleuze and Guattari regard desire as a productive force, not as originating from lack as Lacan does. See also Desire Hedonism Passions (philosophy) References Further reading Middendorf Ulrike, Resexualizing the desexualized. The language of desire and erotic love in the classic of odes, Fabrizio Serra Editore. Nicolosi M. Grazia, Mixing memories and desire. Postmodern erotics of writing in the speculative fiction of Angela Carter, CUECM. Jadranka Skorin-Kapov, The Aesthetics of Desire and Surprise: Phenomenology and Speculation, Lexington Books 2015 Propositional attitudes Motivation
Philosophy of desire
[ "Biology" ]
2,996
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
13,561,682
https://en.wikipedia.org/wiki/Kinzua%20Bridge
The Kinzua Bridge or the Kinzua Viaduct (, ) was a railroad trestle that spanned Kinzua Creek in McKean County in the U.S. state of Pennsylvania. The bridge was tall and long. Most of its structure collapsed during a tornado in July 2003. Billed as the "Eighth Wonder of the World", the wrought iron original 1882 structure held the record for the tallest railroad bridge in the world for two years. In 1900, the bridge was dismantled and simultaneously rebuilt out of steel to allow it to accommodate heavier trains. It stayed in commercial service until 1959, when it was sold to a salvage company. In 1963 the Commonwealth of Pennsylvania purchased the bridge as the centerpiece of a state park. Restoration of the bridge began in 2002, but before it was finished a tornado struck the bridge in 2003, causing a large portion of the bridge to collapse. Corroded anchor bolts holding the bridge to its foundations failed, contributing to the collapse. Before its collapse, the Kinzua Bridge was ranked as the fourth-tallest railway bridge in the United States. It was listed on the National Register of Historic Places in 1977 and as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1982. The ruins of the Kinzua Bridge are in Kinzua Bridge State Park off U.S. Route 6 near the borough of Mount Jewett, Pennsylvania. Original construction and service In 1882, Thomas L. Kane, president of the New York, Lake Erie and Western Railway (NYLE&W), was faced with the challenge of building a branch line off the main line in Pennsylvania, from Bradford south to the coalfields in Elk County. The fastest way to do so was to build a bridge across the Kinzua Valley. The only other alternative would have been to lay an additional of track over rough terrain. When built, the bridge was larger than any ever attempted and over twice as large as the largest similar structure at the time, the Portage Bridge over the Genesee River in western New York. The first Kinzua Bridge was built by a crew of 40 from of wrought iron in just 94 working days, between May 10 and August 29, 1882. The reason for the short construction time was that scaffolding was not used in the bridge's construction; instead a gin pole was used to build the first tower, then a traveling crane was built atop it and used in building the second tower. The process was then repeated across all 20 towers. The bridge was designed by the engineer Octave Chanute and was built by the Phoenix Iron Works, which specialized in producing patented, hollow iron tubes called "Phoenix columns". Because of the design of these columns, it was often mistakenly believed that the bridge had been built out of wooden poles. The bridge's 110 sandstone masonry piers were quarried from the hillside used for the foundation of the bridge. The tallest tower had a base that was wide. The bridge was designed to support a load of , and was estimated to cost between $167,000 and $275,000. On completion, the bridge was the tallest railroad bridge in the world and was advertised as the "Eighth Wonder of the World". Six of the bridge's 20 towers were taller than the Brooklyn Bridge. Excursion trains from as far away as Buffalo, New York, and Pittsburgh would come just to cross the Kinzua Bridge, which held the height record until the Garabit viaduct, tall, was completed in France in 1884. Trains crossing the bridge were restricted to a speed of because the locomotive, and sometimes the wind, caused the bridge to vibrate. 
People sometimes visited the bridge in hopes of finding the loot of a bank robber, who supposedly hid $40,000 in gold and currency under or near it. Reconstruction and service By 1893, the NYLE&W had gone bankrupt and was merged with the Erie Railroad, which became the owner of the bridge. By the start of the 20th century, locomotives were almost 85 percent heavier and the iron bridge could no longer safely carry trains. The last traffic crossed the old bridge on May 14, 1900, and removal of the old iron began on May 24. The new bridge was designed by C.R. Grimm and was built by the Elmira Bridge Company out of of steel, at a cost of $275,000. Construction began on May 26, starting from both ends of the old bridge. A crew of between 100 and 150 worked 10-hour days for almost four months to complete the new steel frame. Two Howe truss "timber travelers", each long and deep, were used to build the towers. Each "traveler" was supported by a pair of the original wrought-iron towers, separated by the one that was to be replaced. After the middle tower was demolished and a new steel one built in its place, the traveler was moved down the line by one tower and the process was repeated. Construction of each new tower and the spans adjoining it took one week to complete. The bolts used to hold the towers to the anchor blocks were reused from the first bridge, which would eventually play a major role in the bridge's demise. Grimm, the designer of the bridge, later admitted that the bolts should have been replaced. The Kinzua Viaduct reopened to traffic on September 25, 1900. The new bridge was able to safely accommodate Erie's heavy 2-8-2 Mikados. The Erie Railroad maintained a station at the Kinzua Viaduct. Constructed between 1911 and 1916, the station was not manned by an agent. The station was closed sometime between 1923 and 1927. Train crews would sometimes play a trick on a brakeman on his first journey on the line. When the train was a short distance from the bridge, the crew would send the brakeman over the rooftops of the cars to check on a small supposed problem. As the train crossed the bridge, the rookie "suddenly found himself terrified, staring down from the roof of a rocking boxcar". Even after being reconstructed, the bridge still had a speed limit of . As the bridge aged, heavy trains pulled by two steam locomotives had to stop so the engines could cross the bridge one at a time. Diesel locomotives were lighter and did not face that limit; the last steam locomotive for commercial service crossed on October 5, 1950. The Erie Railroad obtained trackage rights on the nearby Baltimore and Ohio Railroad (B&O) line in the late 1950s, allowing it to bypass the aging Kinzua Bridge. Regular commercial service ended on June 21, 1959, and the Erie sold the bridge to the Kovalchick Salvage Company of Indiana, Pennsylvania, for $76,000. The bridge was reopened for one day in October 1959 when a wreck on the B&O line forced trains to be rerouted across it. According to the American Society of Civil Engineers, the Kinzua Bridge "was a critical structure in facilitating the transport of coal from Northwestern Pennsylvania to the Eastern Great Lakes region, and is credited with causing an increase in coal mining that led to significant economic growth." Creation of state park Nick Kovalchick, head of the Kovalchick Salvage Company, which then owned the bridge, was reluctant to dismantle it. On seeing it for the first time he is supposed to have said "There will never be another bridge like this." 
Kovalchick worked with local groups who wanted to save the structure, and Pennsylvania Governor William Scranton signed a bill into law on August 12, 1963, to purchase the bridge and nearby land for $50,000 and create Kinzua Bridge State Park. The deed for the park's was recorded on January 20, 1965, and the park was opened to the public in 1970. An access road to the park was built in 1974, and new facilities there included a parking lot, drinking water and toilets, and installation of a fence on the bridge deck. On July 5, 1975, there was an official ribbon cutting ceremony for the park, which "was and is unique in the park system" since "its centerpiece is a man-made structure". The bridge was listed on the National Register of Historic Places on August 29, 1977, and was named to the National Register of Historic Civil Engineering Landmarks by the American Society of Civil Engineers on June 26, 1982. The Knox and Kane Railroad (KKRR) operated sightseeing trips from Kane through the Allegheny National Forest and over the Kinzua Bridge from 1987 until the bridge was closed in 2002. In 1988 it operated the longest steam train excursion in the United States, a round trip to the bridge from the village of Marienville in Forest County, with a stop in Kane. The New York Times described being on the bridge as "more akin to ballooning than railroading" and noted "You stare straight out with nothing between you and an immense sea of verdure a hundred yards [91 m] below." The railroad still operated excursions through the forest and stopped at the bridge's western approach until October 2004. As of 2009, Kinzua Bridge State Park is a Pennsylvania state park surrounding the bridge and the Kinzua Valley. The park is located off of U.S. Route 6 north of Mount Jewett in Hamlin and Keating Townships. A scenic overlook within the park allows views of the fallen bridge and of the valley, and is also a prime location to view the fall foliage in mid-October. The park has a shaded picnic area with a centrally-located modern restroom. Before the bridge's collapse, visitors were allowed on or under the bridge and hiking was allowed in the valley around the bridge. In September 2002 the bridge was closed even to pedestrian traffic. About of Kinzua Bridge State Park are open to hunting. Common game species are turkey, bear and deer. Bridge collapse Since 2002, the Kinzua Bridge had been closed to all "recreational pedestrian and railroad usage" after it was determined that the structure was at risk to high winds. Engineers had determined that during high winds, the bridge's center of gravity could shift, putting weight onto only one side of the bridge and causing it to fail. An Ohio-based bridge construction and repair company had started work on restoring the Kinzua Bridge in February 2003. On July 21, 2003, construction workers had packed up and were starting to leave for the day when a storm arrived. A tornado spawned by the storm struck the Kinzua Bridge, snapping and uprooting nearby trees, as well as causing 11 of the 20 bridge towers to collapse. There were no deaths or injuries. The tornado was produced by a mesoscale convective system (MCS), a complex of strong thunderstorms, that had formed over an area that included eastern Ohio, western Pennsylvania, western New York, and southern Ontario. The MCS traveled east at around . As the MCS crossed northwestern Pennsylvania, it formed into a distinctive comma shape. 
The northern portion of the MCS contained a long-lived mesocyclone, a thunderstorm with a rotating updraft that is often conducive to tornados. At approximately 15:20 EDT (19:20 UTC), the tornado touched down in Kinzua Bridge State Park, from the Kinzua Bridge. The tornado, classified as F1 on the Fujita scale, passed by the bridge and continued another before it lifted. It touched down again from Smethport and traveled another before finally dissipating. It was estimated to have been wide and it left a path long. The same storm also spawned an F3 tornado in nearby Potter County. When the tornado touched down, the winds had increased to at least and were coming from the east, perpendicular to the bridge, which ran north–south. An investigation determined that Towers 10 and 11 had collapsed first, in a westerly direction. Meanwhile, Towers 12 through 14 had actually been picked up off their foundations, moved slightly to the northwest and set back down intact and upright, held together by only the railroad tracks on the bridge. Next, towers four through nine collapsed to the west, twisting clockwise, as the tornado started to move northward. As it moved north, inflow winds came in from the south and caused Towers 12, 13, and 14 to finally collapse towards the north, twisting counterclockwise. The failures were caused by the badly-rusted iron base bolts holding the bases of the towers to concrete anchor blocks embedded into the ground. An investigation determined that the tornado had a wind speed of at least , which applied an estimated of lateral force against the bridge. The investigation also hypothesized that the whole structure oscillated laterally four to five times before fatigue started to cause the base bolts to fail. The towers fell intact in sections and suffered damage upon impact with the ground. The century-old bridge was destroyed in less than 30 seconds. Aftermath The state decided not to rebuild the Kinzua Bridge, which would have cost an estimated $45 million. Instead, it was proposed that the ruins be used as a visitor attraction to show the forces of nature at work. Kinzua Bridge State Park had attracted 215,000 visitors annually before the bridge collapsed, and was chosen by the Pennsylvania Bureau of Parks for its list of "Twenty Must-See Pennsylvania State Parks". The viaduct and its collapse were featured in the History Channel's Life After People as an example of how corrosion and high winds would eventually lead to the collapse of any steel structure. The bridge was removed from the National Register of Historic Places on July 21, 2004. The Knox and Kane Railroad was forced to suspend operations in October 2006 after a 75 percent decline in the number of passengers, possibly brought about by the collapse of the Kinzua Bridge. The Kovalchick Corporation bought the Knox and Kane's tracks and all other property owned by the railroad, including the locomotives and rolling stock. The Kovalchick Corporation also owns the East Broad Top Railroad and was the company that owned the Kinzua Bridge before selling it to the state in 1963. The company disclosed plans in 2008 to remove the tracks and sell them for scrap. The right-of-way would then be used to establish a rail trail. Sky Walk The state of Pennsylvania reimagined the Kinzua State Park as one anchored by a "sky walk" viewing platform and network of hiking trails. It released $700,000 to design repairs on the remaining towers and plan development of the new park facilities in June 2005. 
In late 2005, the Pennsylvania Department of Conservation and Natural Resources (DCNR) put forward an $8 million proposal for a new observation deck and visitors' center, with plans to allow access to the bridge and a hiking trail giving views of the fallen towers. The Kinzua Sky Walk was opened on September 15, 2011, in a ribbon-cutting ceremony. The Sky Walk consists of a pedestrian walkway to an observation deck with a glass floor at the end of the bridge that allows views of the bridge and the valley directly below. The walkway cost $4.3 million to construct, but in 2011 a local tourism expert estimated it could eventually bring in $11.5 million of tourism revenue each year. See also List of bridges documented by the Historic American Engineering Record in Pennsylvania List of Erie Railroad structures documented by the Historic American Engineering Record National Register of Historic Places listings in McKean County, Pennsylvania Tornadoes of 2003 References Sources External links Kinzua Railway Viaduct Historic Civil landmark at the American Society of Civil Engineers site "Collapse at Kinzua" (Open University) 1882 establishments in Pennsylvania Bridges completed in 1882 Transportation buildings and structures in McKean County, Pennsylvania Demolished buildings and structures in Pennsylvania Erie Railroad bridges Former railway bridges in the United States Historic Civil Engineering Landmarks Historic American Engineering Record in Pennsylvania Railroad bridges in Pennsylvania Towers in Pennsylvania Viaducts in the United States Steel bridges in the United States Trestle bridges in the United States
Kinzua Bridge
[ "Engineering" ]
3,240
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
13,561,780
https://en.wikipedia.org/wiki/Seychelles%20sheath-tailed%20bat
The Seychelles sheath-tailed bat (Coleura seychellensis) is a sac-winged bat found in the central granitic islands of the Seychelles. They are nocturnal insectivores that roost communally in caves. The species was previously abundant across much of the archipelago, but has since seen a substantial loss of habitat. The International Union for Conservation of Nature has listed the species as being critically endangered, due to population decline. This is mainly due to an increase in land development and the introduction of invasive species. Ecology The weight of Seychelles sheath-tailed bats averages about . Bats in this genus generally roost in caves and houses, in crevices and cracks. In the 1860s, the Seychelles sheath-tailed bat was reported to fly around clumps of bamboo towards twilight, and in the daytime to be found roosting in the clefts of the mountainside facing the sea and with a more or less northern aspect. These hiding places were generally covered over with the large fronds of endemic palms. The Seychelles sheath-tailed bat is insectivorous. It feeds predominantly on marsh-associated Ceratopogonidae, in contrast to Curlionidae in palm woodland. Its colonies are apparently divided into harem groups. It has been the focus of recent intensive research, which has determined that it is a species associated with small clearings in forests where it feeds on a wide variety of insect species. Observations of coastal or marsh feeding are thought to be bats that have been forced into feeding in unusual situations due to habitat deterioration. Although the species is not a specialist and has a high reproductive potential, it is very vulnerable to disturbance and requires several roost sites within healthy habitat. Status It was probably abundant throughout the Seychelles in the past, but it has declined drastically and is now extinct on most islands. The International Union for Conservation of Nature lists this bat as being critically endangered. In 2013, Bat Conservation International listed this species as one of the 35 species of its worldwide priority list of conservation. It is one of the most endangered animals, fewer than 100 are believed to exist in the world. The Seychelles sheath-tailed bat has suffered from habitat deterioration due to the effects of cultivation of coconut plantations and the introduction of the kudzu vine, both of which have reduced the incidence of scrub and the availability of insect prey. The largest surviving roost is on Silhouette Island, although small roosts do exist in Mahé and also Praslin and La Digue islands. Its lifespan is 20 years; its length is . It finds its mates by fighting with another male bat in front of the females. Echolocation Echolocation in bats is the combination of producing sound waves via a bat's vocalization, using echoes from an environment, and highly evolved ears in bats. These sound waves are projected from an origin (the individual bat) until they come upon an object and are promptly bounced back to the origin at a lesser frequency and received by the source individual. The variation in the return frequency can then be used by the individual to make a "visual" map of the environment in order for the bat to find their food or perform other tasks such as navigation as well as just communicating with other individuals in the colony. There are a few different and important variables that can affect acoustic signals and soundwaves, first of which includes time/length (temporal character) of the calls by an individual. 
Temporal control can be important in perceptual organization of echoes as they are returning back to a source individual from various directions and distances. A second variable is speed/rate (spectral character) at which an individual calls and is important in helping an individual to perceptually visualize their surroundings when they are flying through terrain. The speed (frequency) that an individual calls also can become faster as they close the distance between themselves and a food source, which is how they hunt flying insects such as Lepidoptera. The speed at which calls can be adjusted from an individual also is important to keep track of each echo in more complex audio terrain. In order to use echolocation effectively bats have gone through much evolution to specialize in this method of movement and hunting. The two evolutionary pathways of echolocation in the two current suborders of bats, Yinpterochiroptera and Yangochiroptera are first, that echolocation has evolved separately between the two suborders, and second, that echolocation evolved from a single point in bat ancestral history and later was lost in some, but not all, Yinpterochiroptera suborder species. There are two different structures that can be utilized for the inner ear of bats, the wall-less canal or the fenestral canal. The wall-less canal allows for ganglion axons to cluster together without restrictions and allows for more space for more neurons. The fenestral canal is more restrictive and does not allow for the increase of space for more neurons and also does not allow the clustering of ganglion axons which makes this structure more restrictive when concerning the variation of ganglion. The highly derived spiral ganglion structure of the inner ear in Yangochiroptera, the suborder of Coleura seychellensis, is referred to as a trans-otic ganglion with a wall-less Rosenthal canal and is what makes echolocation work so well in bats of a similar evolutionary pathway. Vocalization Within echolocation, there is vocalization which can be best described as how a frequency is altered for different purposes and needs. At a family level (Emballonuridae) four call structures have been described, broadband FM (Frequency modulation), narrowband FM, long multi-harmonic calls and short multi-harmonic calls. Broadband FM is simply an FM sweep, narrowband FM is a downwards FM sweep that is then followed by a more narrow band tail. Long multi-harmonic calls are calls that have a minimum of 4 narrowband FM harmonics in a 2 ms period. Similarly, short multi-harmonic calls are ones that also have 4 narrowband FM harmonics but in under 2 ms. The structure of calls can be altered for a specific need, they can be faster, slower, louder, or quieter from a source individual depending if they are hunting, navigating, communicating, protecting territory, or courting a mate. To date, there are 21 simple syllables and 62 composite syllables in Saccopteryx bilineata males which are in the same family, Emballonuridae, as Coleura seychellensis. Using these syllables there are seven vocalization types in the species S. bilineata, identified as pulses, barks, chatter, whistles, screeches, territorial songs, and courtship songs. Pulses have a CF (constant frequency), start with an upward FM hook, end with a downward FM hook, and last about 7.4 ms. Barks are similar to pulses but are longer at about 10.5 ms and mainly come from males. 
Chatter calls are in sequences of up to 50 calls in about 5.5 ms, a single chatter call can resemble pulses but usually has a higher degree of FM. Whistles are very loud and tonal vocalizations by males hovering in front of females and last about 66.7 ms, start with an FM upstroke, increasing in fundamental frequency, and end with an FM downstroke. At the same time females vocalize a screech that can last up to 300 ms and are related to territorial conflicts and response to males hovering, these calls, also vocalized by males, typically have a duration of about 97 ms. Territorial songs are the most noticeable vocalizations in a colony with 10-50 tonal calls that first have an upward FM, then a V-shaped call in the middle, ending with a lower fundamental frequency that is headed by a noisy buzz. The territorial calls can last anywhere from 10 ms to 100 ms and their structure can vary throughout the day. Complex songs in mammals are rare and uncommon in S. bilineata but are used as courtship songs by males. The complex courtship songs also allow for individual identification of males by females in the species S. bilineata. These courtship calls will only happen after territorial calls are finished in the morning and before territorial calls start in the evening and require ultrasound recording systems because they are above 20 kHz, that is, out of human hearing range (About 20 Hz - 20 kHz). These courtship songs also can last for up to 1 hour while being directed at a single female. As mentioned, S. bilineata are in the same family as C. seychellensis and while there is not as much known for C. seychellensis, 4 types of calls have been categorized specifically for C. seychellensis: complex calls, orientation calls, orientation calls in open areas, and foraging calls. Complex social calls have a wide frequency range and are mainly directed at other bats with no repetition in call structure. Orientation calls are used for orientation in various terrains, usually in confined spaces. Orientation calls in open areas involve no changes in frequency or amplitude. Last, foraging calls are similar to orientation calls in open areas but include two alternating CF pulses. Lower frequencies are usually used for navigation in C. seychellensis while higher frequencies are used for prey detection when an individual is in a more clustered environment. References Further reading External links ARKive - images and movies of the Seychelles sheath-tailed bat (Coleura seychellensis) Animal Info Conservation of the Seychelles sheath-tailed bat Coleura Bats of Africa Endemic fauna of Seychelles EDGE species Mammals of Seychelles Critically endangered fauna of Africa Mammals described in 1868 Taxa named by Wilhelm Peters
Seychelles sheath-tailed bat
[ "Biology" ]
1,972
[ "EDGE species", "Biodiversity" ]
13,561,805
https://en.wikipedia.org/wiki/Peters%27s%20tube-nosed%20bat
Peters's tube-nosed bat (Harpiola grisea) is a species of vesper bat in the family Vespertilionidae, found in the Indian Subcontinent, mainly in the Western Himalayas. They have tube-shaped nostrils (hence the name) which assist them with their feeding. They are brown with white-yellow underparts and have specks of orange around their necks. While they are roosting, their fur, which resembles dead plant matter, camouflages them from predators. They are 3.3-6.0 cm in length and have round heads, large eyes and soft fur. This bat is found in India. They are endangered due to the clearing of the rain forests in which they live and are not protected by the World Conservation Union. They feed on rain forest fruit and blossoms. References External links ITIS.gov: Harpiola grisea (Peters's tube-nosed bat) Edgeofexistence.org: Harpiola grisea (Peters's tube-nosed bat) Murininae Bats of Asia Endemic fauna of India Mammals of India Western Himalayan broadleaf forests EDGE species Mammals described in 1872 Taxa named by Wilhelm Peters Bats of India
Peters's tube-nosed bat
[ "Biology" ]
250
[ "EDGE species", "Biodiversity" ]
13,561,807
https://en.wikipedia.org/wiki/Wroughton%27s%20free-tailed%20bat
Wroughton's free-tailed bat (Otomops wroughtoni) is a free-tailed bat formerly considered to be confined to the Western Ghats area of India, though it has also recently been discovered in northeast India and in a remote part of Cambodia. It is classified as a Data Deficient species as little is known about their habitat, ecology, or foraging range. Distribution In India, the species is found in two locations in the southern Indian state of Karnataka and in Meghalaya in northeast India. In Karnataka, it is found in the Barapede Caves, located between Krishnapur and Talewadi, in Belgaum district, adjacent to the Bhimgad Wildlife Sanctuary near the state of Goa and was the only known location of this species for years. From 2012 – 2015, the average number of individuals in Barapede cave was 82. In 2000 it was reported from Cambodia. In Meghalaya, it was recently discovered in 2001 in Siju cave near Nongrai village, Shella confederacy proximately midway between the previous two locality records. There is a foraging record of a single specimen from Meghalaya collected in 2001. Since then, there is no record of sighting and/or collection of this species from that locality in Meghalaya in northeastern India. Therefore, J.R.B. Alfred states in 2006 that the distribution record of this species from Meghalaya needs confirmation and authentication. On the other hand, the sighting of the colony/collection records of this species were reported at different times by Topal and Ramkrishna (1980), Bates (1992), Mistry and Parab (2001), and Ramakrina and Pradhan (2003). Pending further confirmation and authentication of the distribution of this species from Meghalaya in northeastern India, the distribution of Otomops wroughtoni in India should be restricted to a single locality record from Karnataka state. In February 2014, Manuel Ruedi et al. discovered three new colonies of the species in Meghalaya. They report that these roosts represents near one hundred individuals, which doubles the known population size of this bat. Habitat Members of the family Molossidae roost in caves, hollow trees and human-made structures. Populations of this bat have been found in large natural caves, situated near forested areas. Description Wroughton's free-tailed bat has a forearm length of . Males weigh approximately and females . This species has large forward-pointing ears connected to each other by a membrane over the forehead. The face is naked and the nostril pad is large and prominent. The hair is short and velvety. It is a rich dark brown colour on the crown of the head, back and rump. There is a thin white border on each flank, extending from the armpit to the groin, and on the membranes of the forearms. The shoulders and the nape of the neck are a pale greyish white. The ventral surface is a dull brown, but with a contrasting grey collar, which extends onto the chin and upper chest. A small throat sac is present in both sexes. The tail projects far beyond the free edge of the narrow tail membrane, hence the common name "free-tailed bats" for this family. Habitat Very little is known about the ecology of this species. It is thought to be active throughout the year. Its diet is unknown, but probably consists of insects like that of other Molossids. The bats are active at night, and roost upside down in caves during the day. In India they live in small groups of usually five to seven individuals in narrow gaps and deep hollows in the roofs of the cave. 
Females are thought to have one litter per year, consisting of a single young. Specimens of this species collected in India in December had newborn young, while others were on the verge of delivery. Threats This species was considered to be one of the 15 most critically endangered bat species until the two new colonies were discovered. The new discoveries have given researchers cause to hope that the species could be distributed much more widely than is known today. However, the species is extremely vulnerable to habitat destruction and roost disturbance, and the Western Ghats population may be suffering as a result of encroachment from mining, timber and hydroelectric companies. Their habitat is threatened by limestone miners and timber contractors, and the Barapede cave could be submerged if a nearby Mahadeyi river were dammed for a hydroelectric plant as proposed by the Karnataka Government. Conservation efforts The species is listed on Schedule I of the Wildlife (Protection) Act of India, affording it the highest degree of protection. It has recently been proposed to receive the highest level of protection under Cambodian wildlife law. However, these listings will not protect the species from indirect threats resulting from habitat disturbance and human activities. Monitoring of the bats at all sites from which the species is known is recommended as a priority, followed by habitat management and public awareness programmes. The Bhimgad Forest in the Western Ghats, from which the original population is known, was first proposed as a national reserve more than eight years ago. However, despite repeated efforts by local organisations the area remains unprotected. References External links Wroughton's free-tailed bat at EDGE Otomops Bats of India Bats of South Asia Bats of Southeast Asia EDGE species Endemic fauna of the Western Ghats Taxa named by Oldfield Thomas
Wroughton's free-tailed bat
[ "Biology" ]
1,100
[ "EDGE species", "Biodiversity" ]