Dataset schema: id (int64, 39 to 79M) · url (string, 31–227 chars) · text (string, 6–334k chars) · source (string, 1–150 chars) · categories (list, 1–6 items) · token_count (int64, 3–71.8k) · subcategories (list, 0–30 items)
51,429,830
https://en.wikipedia.org/wiki/Acoziborole
Acoziborole (SCYX-7158) is an antiprotozoal drug invented by Anacor Pharmaceuticals in 2009, and now under development by the Drugs for Neglected Diseases Initiative for the treatment of African trypanosomiasis (sleeping sickness). It is a structurally novel drug described as a benzoxaborole derivative, and is a one-day, one-dose oral treatment. Phase I human clinical trials were completed successfully in 2015. A single-arm phase II/III trial, with no control group, was conducted from 2016 to 2019 in the Democratic Republic of the Congo and Guinea, involving 208 eligible patients with trypanosomiasis caused by Trypanosoma brucei gambiense. The results of the study, published in The Lancet on 29 November 2022, found the treatment regimen had an efficacy greater than 95%. Two follow-up studies, one comparing acoziborole to nifurtimox/eflornithine and the other a double-blind, randomized trial of the drug based on WHO recommendations with 1,200 total participants, are underway as of November 2022. As the regimen is significantly easier to administer than existing treatment options, some commentators expressed hope that acoziborole could significantly slow down or even eliminate the transmission of African trypanosomiasis in humans. See also Tavaborole References Trifluoromethyl compounds Boronate esters Fluoroarenes Oxaboroles
Acoziborole
[ "Biology" ]
304
[ "Antiprotozoal agents", "Biocides" ]
47,344,746
https://en.wikipedia.org/wiki/Zero-Force%20Evolutionary%20Law
The Zero-Force Evolutionary Law (ZFEL) is a theory proposed by Daniel McShea and Robert Brandon regarding the evolution of diversity and complexity. Under the ZFEL, diversity is understood as the variation among organisms and complexity as the variation among the parts within an organism. A part is understood as a system that is to some degree internally integrated and isolated from its surroundings. In a multicellular organism, for example, a cell is a part, and therefore complexity is the number of different cell types. Like the theory of relativity, the theory has a special and a general formulation. The special formulation states that in the absence of natural selection, an evolutionary system with variation and heredity will tend spontaneously to diversify and complexify. The general formulation states that evolutionary systems have a tendency to diversify and complexify, but that these processes may be amplified or constrained by other forces, including natural selection. The mechanism of the ZFEL is the inherently error-prone process of replication and reproduction. In the absence of selection, errors tend to accumulate, with the result that individuals within a population tend to become more different from each other (diversity) and parts within an individual tend to become more different from each other (complexity). Both of these tendencies can be overcome by selection, including stabilizing or negative selection, with the result that diversity or complexity often does not change, or even decreases. What the ZFEL offers is not so much a prediction as a null expectation, telling us what will happen in evolution when selection is absent. It is the analogue of Newton's first law of motion, which tells us the trajectory of a moving object in the absence of forces (a straight line). See also Constructive neutral evolution Evolution of biological complexity Neutral theory of molecular evolution References Evolutionary biology Neutral theory
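The ZFEL's null expectation can be illustrated with a toy simulation; the sketch below is mine, not from the source, and the parameter values are arbitrary. It replicates a population with small random copying errors and no selection step, and measures diversity as the variance among individuals:

```python
import random

def zfel_drift(n_individuals=100, generations=200, error_sd=0.05, seed=1):
    """Toy ZFEL illustration: heredity with copying error, no selection.

    Each individual is a single trait value; each generation, offspring
    copy the parent with Gaussian error. Diversity is measured as the
    population variance of the trait.
    """
    rng = random.Random(seed)
    traits = [0.0] * n_individuals  # start with zero diversity
    for _ in range(generations):
        # every individual reproduces once, with error; no selection
        traits = [t + rng.gauss(0.0, error_sd) for t in traits]
    mean = sum(traits) / len(traits)
    return sum((t - mean) ** 2 for t in traits) / len(traits)

# Variance grows roughly linearly with generations (about error_sd**2
# per step) -- the drifting "straight line" null trajectory that the
# ZFEL is compared to in the text.
print(zfel_drift(generations=50), zfel_drift(generations=200))
```

Adding a stabilizing-selection step that pulls trait values back toward the mean would hold the variance down, matching the general formulation's claim that selection can constrain the spontaneous tendency to diversify.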
Zero-Force Evolutionary Law
[ "Biology" ]
364
[ "Evolutionary biology", "Non-Darwinian evolution", "Neutral theory", "Biology theories" ]
47,346,791
https://en.wikipedia.org/wiki/S5.4
The S5.4 (also known as TDU-1; GRAU index 8D66) was a Russian liquid rocket engine burning TG-02 and AK20F in the gas generator cycle. It was originally used as the braking (deorbit) engine of the Vostok and Voskhod crewed spacecraft and of Zenit satellites, which later switched to solid motors. The engine produced thrust with a specific impulse of 266 seconds in vacuum, and burned for 45 seconds, enough for deorbit. It had a fixed main combustion chamber and four small verniers to supply thrust vector control. It was housed in the service module and had two toroidal tanks for pressurization. It was designed by OKB-2, the Design Bureau led by Aleksei Isaev, for the Vostok program. Developing the braking engine for the first crewed spacecraft was a difficult task that no design bureau wanted to take on. It was considered critical, as a failure would have left a cosmonaut stranded in space. A solid motor was considered, but the ballistic experts predicted a landing error roughly ten times larger than that of a liquid engine. It took the coordinated efforts of Boris Chertok and Sergei Korolev to convince Isaev to accept the task. References External links KB KhIMMASH Official Page (in Russian, Archived) Rocket engines of Russia Rocket engines of the Soviet Union Rocket engines using hypergolic propellant
S5.4
[ "Astronomy" ]
295
[ "Rocketry stubs", "Astronomy stubs" ]
47,348,340
https://en.wikipedia.org/wiki/Exidiopsis%20effusa
Exidiopsis effusa is a species of fungus in the family Auriculariaceae, and the type species of the genus Exidiopsis. It is associated with the formation of hair ice on dead wood. References External links Auriculariales Fungi described in 1888 Fungus species
Exidiopsis effusa
[ "Biology" ]
59
[ "Fungi", "Fungus species" ]
47,348,378
https://en.wikipedia.org/wiki/KTDU-35
The KTDU-35 (GRAU index 11D62) was a Soviet spacecraft propulsion system composed of two liquid rocket engines, the primary S5.60 (SKD) and the secondary S5.35 (DKD), fed from the same propellant tanks. Both engines burn UDMH and AK27I in the gas generator cycle. It was designed by OKB-2, the famous Isaev Design Bureau, for the original Soyuz programme. Within the Soyuz and Progress, the SKD is the primary engine and the DKD is the backup engine for main orbital correction and de-orbit operations. The engines generate thrust with specific impulses of 278 seconds (SKD) and 270 seconds (DKD), respectively. The SKD nozzle is fixed in the aft of the craft, and the dual DKD nozzles are on either side. The spacecraft attitude system (DPO) is responsible for pointing the vehicle in the correct direction and keeping it that way during SKD burns. Versions This engine has been used in three variants: S5.53: Orbital correction engine for the lunar version of the Soyuz. S5.60 (also known as KTDU-35; GRAU index 11D62): Version for the LEO version of the Soyuz. S5.66 (also known as KTDU-66): Maneuvering engine version for the Salyut 1 and Salyut 4 stations; it increased the burn time to 1,000 seconds and the number of starts, and was likewise composed of primary and secondary engines. See also Soyuz 7K-OK Soyuz 7K-OKS Soyuz 7K-T Soyuz 7K-TM Progress 7K-TG Isaev S5.4 References External links KB KhIMMASH Official Page (in Russian) Rocket engines of Russia Rocket engines of the Soviet Union Rocket engines using hypergolic propellant
KTDU-35
[ "Astronomy" ]
390
[ "Rocketry stubs", "Astronomy stubs" ]
47,349,249
https://en.wikipedia.org/wiki/MGC1
MGC1 is a globular cluster in the constellation of Pisces. It lies about 650,000 light-years (about 200 kpc) from the galactic center of the Andromeda Galaxy (M31). MGC1 is considered one of the most isolated globular clusters in the Local Group. The radial velocity of MGC1 is close to the systemic velocity of M31 and likely within its escape velocity; the cluster is therefore likely gravitationally bound to it. Its absolute magnitude is −8.5. In 2010, three astronomers (Charlie Conroy, Abraham Loeb and David Spergel) submitted an article to The Astrophysical Journal, presenting evidence that the two globular clusters MGC1 and NGC 2419, another globular cluster 90,000 light-years (30 kpc) away from the center of the Milky Way, do not have dark matter halos surrounding them. A news article in Science explained: "Using data obtained by other astronomers, the team created computer models of what globular clusters should look like in the presence and absence of dark matter halos. Over time, clusters without dark matter slowly lose their gravitational grip on the stars at their edges, the team found, whereas those with halos hold onto these stars. Both NGC 2419 and MGC1 are missing stars at their fringes, leading the researchers to conclude that they formed in the absence of dark matter halos. The same may be true of most globular clusters in the local universe." This challenges the earlier belief that dark matter halos help form all globular clusters. References Globular clusters Pisces (constellation) Andromeda Subgroup
MGC1
[ "Astronomy" ]
371
[ "Pisces (constellation)", "Constellations" ]
47,349,294
https://en.wikipedia.org/wiki/Subjective%20expected%20relative%20similarity
Subjective expected relative similarity (SERS) is a normative and descriptive theory that predicts and explains cooperation levels in a family of games termed Similarity Sensitive Games (SSG), among them the well-known Prisoner's Dilemma game (PD). SERS was originally developed in order to (i) provide a new rational solution to the PD game and (ii) predict human behavior in single-step PD games. It was further developed to account for: (i) repeated PD games, (ii) evolutionary perspectives and, as mentioned above, (iii) the SSG subgroup of 2×2 games. SERS predicts that individuals cooperate whenever their subjectively perceived similarity with their opponent exceeds a situational index derived from the game's payoffs, termed the similarity threshold of the game. SERS proposes a solution to the rational paradox associated with the single-step PD and provides accurate behavioral predictions. The theory was developed by Prof. Ilan Fischer at the University of Haifa. The Prisoner's Dilemma The dilemma is described by a 2 × 2 payoff matrix that allows each player to choose between a cooperative and a competitive (or defective) move. If both players cooperate, each player obtains the reward (R) payoff. If both defect, each player obtains the punishment (P) payoff. However, if one player defects while the other cooperates, the defector obtains the temptation (T) payoff and the cooperator obtains the sucker's (S) payoff, where T > R > P > S (and 2R > T + S, ensuring that sharing the payoffs awarded for uncoordinated choices does not exceed the payoffs obtained by mutual cooperation). Given the payoff structure of the game (see Table 1), each individual player has a dominant strategy of defection. This dominant strategy yields a better payoff regardless of the opponent's choice. By choosing to defect, players protect themselves from exploitation and retain the option to exploit a trusting opponent. Because this is the case for both players, mutual defection is the only Nash equilibrium of the game. However, this is a deficient equilibrium (since mutual cooperation results in a better payoff for both players). The PD game payoff matrix (Table 1; the row player's payoff is listed first):

              Cooperate   Defect
  Cooperate   R, R        S, T
  Defect      T, S        P, P

The repeated prisoner's dilemma Players that knowingly interact for several games (where the end point of the game is unknown), thus playing a repeated Prisoner's Dilemma game, may still be motivated to cooperate with their opponent while attempting to maximise their payoffs along the entire set of their repeated games. Such players face a different challenge of choosing an efficient and lucrative strategy for the repeated play. This challenge may become more complex when individuals are embedded in an ecology, having to face many opponents with various and unknown strategies. The SERS theory SERS assumes that the similarity between the players is subjectively and individually perceived (denoted p_s, where 0 ≤ p_s ≤ 1). Two players confronting each other may have either identical or different perceptions of their similarity to their opponent. In other words, similarity perceptions need neither be symmetric nor correspond to formal logic constraints. After perceiving p_s, each player chooses between cooperation and defection, attempting to maximize the expected outcome. This means that each player estimates his or her expected payoffs under each of two possible courses of action. The expected value of cooperation is given by EV(C) = p_s·R + (1 − p_s)·S and the expected payoff of defection is given by EV(D) = p_s·P + (1 − p_s)·T. Hence, cooperation provides a higher expected payoff whenever EV(C) > EV(D), which may also be expressed as: cooperate if p_s·R + (1 − p_s)·S > p_s·P + (1 − p_s)·T. 
Defining p* = (T − S) / ((T − S) + (R − P)), we obtain a simple decision rule: cooperate whenever p_s > p*, where p_s denotes the level of perceived similarity with the opponent, and p* denotes the similarity threshold derived from the payoff matrix. To illustrate, consider a PD payoff matrix with T − S = 5 and R − P = 2 (for example, the canonical values T = 5, R = 3, P = 1, S = 0). The similarity threshold calculated for this game is p* = 5 / (5 + 2) = 5/7 ≈ 0.71. Thus a player perceiving the similarity with the opponent, p_s, as exceeding 0.71 should cooperate in order to maximise his or her expected payoffs. Empirical evidence Several experiments were conducted to test whether SERS provides not only a normative theory but also a descriptive theory of human behaviour. For example, an experiment involving 215 university undergraduates revealed an average cooperation rate of 30% for one payoff matrix and an average cooperation rate of 46% for another. Participants cooperated 47% of the time under a high level of induced similarity and only 29% under a low level of induced similarity. Manipulating the perceived similarity with the opponent increased cooperation from 67% to 80% for the lower similarity threshold and from 40% to 70% for the higher similarity threshold. Other experiments with various similarity induction methods and payoff matrices further confirmed SERS's status as a descriptive theory of human behaviour. The SERS theory for Repeated PD Games Experiments on the impact of SERS on repeated games are presently being conducted and analysed at the University of Haifa and the Max Planck Institute for Research on Collective Goods in Bonn. Similarity sensitive games The PD game is not the only similarity sensitive game. Games for which the choice of the action with the higher expected value depends on the value of p_s are defined as Similarity Sensitive Games (SSGs), whereas others are nonsimilarity sensitive. Focusing only on the 24 completely rank-ordered and symmetric games, we can mark 12 SSGs. After eliminating games that reflect permutations of other games generated either by switching rows, columns, or both rows and columns, we are left with six basic (completely rank-ordered and symmetric) SSGs. These are games for which SERS provides a rational and payoff-maximizing strategy that recommends which alternative to choose for any given perception of similarity with the opponent. Mimicry and Relative Similarity (MaRS) Developing the SERS theory into an evolutionary strategy yields the Mimicry and Relative Similarity (MaRS) algorithm. Fusing enacted and expected mimicry generates a powerful and cooperative mechanism that enhances fitness and reduces the risks associated with trust and cooperation. When conflicts take the form of repeated PD games, individuals get the opportunity to learn and monitor the extent of similarity with their opponents. They can then react by choosing whether to enact, expect, or exclude mimicry. This rather simple behavior has the capacity to protect individuals from exploitation and drive the evolution of cooperation within entire populations. MaRS paves the way for the induction of cooperation and supports the survival of other cooperative strategies. The existence of MaRS in heterogeneous populations helps those cooperative strategies that do not have the capacity of MaRS to combat hostile and random opponents. Despite the fact that MaRS cannot prevail in a duel with an unconditional defector, interacting within heterogeneous populations allows MaRS to fight unpredictable and hostile strategies and cooperate with cooperative ones, including itself. 
The operation of MaRS promotes cooperation, minimizes the extent of exploitation, and accounts for high fitness levels. Testing the model in computer simulations of behavioral niches, populated with agents that enact various strategies and learning algorithms, shows how mimicry and relative similarity outperforms all the opponent strategies it was tested against, pushes noncooperative opponents toward extinction, and promotes the development of cooperative populations. See also Game theory Nash Equilibrium Tit for Tat Win stay lose shift References Game theory
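The SERS decision rule translates directly into code. The sketch below is mine, not from the source (function names are illustrative); it computes the similarity threshold p* = (T − S)/((T − S) + (R − P)) reconstructed above and the resulting choice:

```python
def similarity_threshold(T: float, R: float, P: float, S: float) -> float:
    """Similarity threshold p* of a PD-like game under SERS:
    cooperate iff perceived similarity exceeds this value."""
    return (T - S) / ((T - S) + (R - P))

def sers_choice(p_s: float, T: float, R: float, P: float, S: float) -> str:
    """Pick the action with the higher expected value, assuming the
    opponent mirrors the player's move with probability p_s."""
    ev_cooperate = p_s * R + (1 - p_s) * S
    ev_defect = p_s * P + (1 - p_s) * T
    return "cooperate" if ev_cooperate > ev_defect else "defect"

# Canonical PD payoffs T=5, R=3, P=1, S=0 give p* = 5/7 ~= 0.714,
# matching the 0.71 threshold quoted in the text.
print(similarity_threshold(5, 3, 1, 0))  # 0.714...
print(sers_choice(0.8, 5, 3, 1, 0))      # cooperate (0.8 > 5/7)
print(sers_choice(0.5, 5, 3, 1, 0))      # defect    (0.5 < 5/7)
```

The same helper also identifies Similarity Sensitive Games: a game is similarity sensitive exactly when `sers_choice` changes its answer somewhere on the interval 0 ≤ p_s ≤ 1.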
Subjective expected relative similarity
[ "Mathematics" ]
1,449
[ "Game theory" ]
47,349,607
https://en.wikipedia.org/wiki/Chlorine%20gas%20poisoning
Chlorine gas poisoning is an illness resulting from the effects of exposure to chlorine beyond the threshold limit value. Acute chlorine gas poisoning primarily affects the respiratory system, causing difficulty breathing, cough, irritation of the eyes, nose, and throat, and sometimes skin irritation. Higher exposures can lead to severe lung damage, such as toxic pneumonitis or pulmonary edema, with concentrations around 400 ppm and beyond potentially fatal. Chronic exposure to low levels can result in respiratory issues like asthma and chronic cough. Common exposure sources include occupational settings, accidental chemical mixing, and industrial accidents. Diagnosis involves tests like pulse oximetry, chest radiography, and pulmonary function tests. Treatment is supportive, with no antidote, and involves oxygen and bronchodilators for lung damage. Most individuals with mild exposure recover within a few days, though some may develop long-term respiratory issues. Signs and symptoms The signs of acute chlorine gas poisoning are primarily respiratory, and include difficulty breathing and cough; listening to the lungs will generally reveal crackles. There will generally be sneezing, nose irritation, burning sensations, and throat irritation. There may also be skin irritation or chemical burns and eye irritation or conjunctivitis. A person with chlorine gas poisoning may also have nausea, vomiting, or a headache. Chronic exposure to relatively low levels of chlorine gas may cause pulmonary problems like acute wheezing attacks, chronic cough with phlegm, and asthma. Causes Occupational exposures constitute the highest risk of toxicity, and common domestic exposures result from the mixing of chlorine bleach with acidic washing agents such as acetic, nitric or phosphoric acid. Exposures also occur as a result of the chlorination of tap water. Other exposure risks occur during industrial or transportation accidents. Wartime exposure is rare. Dose toxicity Humans can smell chlorine gas at concentrations ranging from 0.1 to 0.3 ppm. According to a review from 2010: "At 1–3 ppm, there is mild mucous membrane irritation that can usually be tolerated for about an hour. At 5–15 ppm, there is moderate mucous membrane irritation. At 30 ppm and beyond, there is immediate chest pain, shortness of breath, and cough. At approximately 40–60 ppm, a toxic pneumonitis and/or acute pulmonary edema can develop. Concentrations of about 400 ppm and beyond are generally fatal over 30 minutes, and at 1,000 ppm and above, fatality ensues within only a few minutes." Mechanism The concentration of the inhaled gas, the duration of exposure, and the water content of the tissues exposed are the key determinants of toxicity; moist tissues like the eyes, throat, and lungs are the most susceptible to damage. Once inhaled, chlorine gas diffuses into the epithelial lining fluid (ELF) of the respiratory epithelium and may directly interact with small molecules, proteins and lipids there and damage them, or may hydrolyze to hypochlorous acid and hydrochloric acid, which in turn generate chloride ions and reactive oxygen species; the dominant theory is that most damage is via the acids. Diagnosis Tests performed to confirm chlorine gas poisoning and monitor patients for supportive care include pulse oximetry, testing serum electrolyte, blood urea nitrogen (BUN), and creatinine levels, measuring arterial blood gases, chest radiography, electrocardiogram (ECG), pulmonary function testing, and laryngoscopy or bronchoscopy. 
Treatment There is no antidote for chlorine poisoning; management is supportive after evacuating people from the site of exposure and flushing exposed tissues. For lung damage caused by inhalation, oxygen and bronchodilators may be administered. Outcomes There is no way to predict outcomes. Most people with mild to moderate exposure generally recover fully in three to five days, but some develop chronic problems such as reactive airway disease. Smoking or pre-existing lung conditions like asthma appear to increase the risk of long-term complications. Epidemiology In 2014, the American Association of Poison Control Centers reported about 6,000 exposures to chlorine gas in the US in 2013, compared with 13,600 exposures to carbon monoxide, which was the most common poison gas exposure; the year before, they reported about 5,500 cases of chlorine gas poisoning compared with around 14,300 cases of carbon monoxide poisoning. Mass poisoning incidents Wartime In 1915, the German Army used chlorine against Allied soldiers in the Second Battle of Ypres. In 2007, chlorine was used by insurgents during the Iraqi insurgency (2003–11). In 2014, chlorine was allegedly used in Kafr Zita, Syria. Industrial accidents United States There have been many instances of mass chlorine gas poisonings in industrial accidents. In 2002 in Missouri, a flex hose ruptured during the unloading of a train car at a chemical plant, releasing chlorine gas; 67 persons were injured. In 2004 in Macdona, Texas, a freight train accident released chlorine gas and other toxic chemicals. At least 40 people were injured and three died, including two residents and the train conductor. In 2005 in South Carolina, a freight train derailed, releasing a large quantity of chlorine. Nine people died, and at least 529 persons sought medical care. Globally In 2015, in Nigeria, the explosion of a chlorine gas storage tank at a water treatment plant in Jos killed eight people. In 2017, chlorine gas was released in Fort McMurray, Alberta, Canada, after chemicals were mixed improperly at a water treatment plant; in 2020 the Regional Municipality of Wood Buffalo was fined $150,000 (CAD) for the incident. In 2017, in Iran, at least 475 people, including nine firemen, suffered respiratory and other symptoms after a chlorine gas leak in the southwestern province of Khuzestan. In 2020, on March 6, an incident occurred at EPCL (Engro Polymer and Chemicals Limited) Port Qasim, Karachi, where over 50 people were hospitalized as a result of a chlorine gas leak. No fatalities were reported. In 2022, on June 27, a tank holding chlorine gas fell and ruptured in the port of Aqaba, Jordan. 14 people were killed and more than 260 were injured. References Further reading External links Toxic effects of substances chiefly nonmedicinal as to source Gases Medical emergencies Industrial hygiene
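The exposure bands quoted from the 2010 review map naturally onto a small lookup helper. The sketch below is illustrative only, not a clinical tool: the thresholds are copied from the text above, and the edges between the review's quoted ranges (e.g., 3–5 ppm, 15–30 ppm) are my interpolation:

```python
def chlorine_exposure_effect(ppm: float) -> str:
    """Map a chlorine concentration (ppm) to the effect bands quoted
    in the 2010 review cited above. Band edges between the quoted
    ranges are interpolated; illustrative only."""
    if ppm < 0.1:
        return "below odor threshold"
    if ppm < 1:
        return "odor detectable (threshold 0.1-0.3 ppm)"
    if ppm < 5:
        return "mild mucous membrane irritation (~1 hour tolerable)"
    if ppm < 30:
        return "moderate mucous membrane irritation"
    if ppm < 40:
        return "immediate chest pain, shortness of breath, cough"
    if ppm < 400:
        return "risk of toxic pneumonitis and/or pulmonary edema"
    if ppm < 1000:
        return "generally fatal over ~30 minutes"
    return "fatal within a few minutes"

print(chlorine_exposure_effect(25))  # moderate irritation band
```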
Chlorine gas poisoning
[ "Physics", "Chemistry", "Environmental_science" ]
1,353
[ "Matter", "Toxicology", "Phases of matter", "Toxic effects of substances chiefly nonmedicinal as to source", "Statistical mechanics", "Gases" ]
47,349,966
https://en.wikipedia.org/wiki/UGC%204879
UGC 4879, which is also known as VV 124, is the most isolated dwarf galaxy in the periphery of the Local Group. It is an irregular galaxy at a distance of 1.38 Mpc. Low-resolution spectroscopy yielded inconsistent radial velocities for different components of the galaxy, hinting at the presence of a stellar disk. There is also evidence of this galaxy containing dark matter. Appearance UGC 4879 is a transition type galaxy, meaning it has no rings (denoted rs). It is also a spheroidal (dSph) galaxy, meaning it has a low luminosity. It has little to no gas or dust, and little recent star formation. It is also irregular, meaning it has no specific form. Gallery References External links C2.staticflickr.com Dwarf galaxies Local Group 04879 Ursa Major
UGC 4879
[ "Astronomy" ]
180
[ "Ursa Major", "Constellations" ]
47,350,023
https://en.wikipedia.org/wiki/RDH13
Retinol dehydrogenase 13 (all-trans/9-cis) is a protein that in humans is encoded by the RDH13 gene. This gene encodes a mitochondrial short-chain dehydrogenase/reductase, which catalyzes the reduction and oxidation of retinoids. The encoded enzyme may function in retinoic acid production and may also protect the mitochondria against oxidative stress. Alternatively spliced transcript variants have been described. Gene The human RDH13 gene is on chromosome 19, with its specific localization being 19q13.42. The gene contains 12 exons in total. Structure Analysis of the submitochondrial localization of RDH13 indicates its association with the inner mitochondrial membrane. The primary structure of RDH13 contains two hydrophobic segments, 2–21 and 242–261, which are sufficiently long to serve as transmembrane segments; however, alkaline extraction completely removes the protein from the membrane, indicating that RDH13 is a peripheral membrane protein. The peripheral association of RDH13 with the membrane further distinguishes this protein from the microsomal retinaldehyde reductases, which are integral membrane proteins that appear to be anchored in the membrane via their N-terminal hydrophobic segments. Function RDH13 is most closely related to the NADP+-dependent microsomal enzymes RDH11, RDH12 and RDH14. Purified RDH13 acts on retinoids in both the oxidative and the reductive direction, and strongly prefers the cofactor NADPH over NADH. Moreover, RDH13 has much more efficient reductase activity than dehydrogenase activity. As a retinaldehyde reductase, RDH13 is significantly less active than the related protein RDH11, primarily because of its much higher Km value for retinaldehyde. However, the kcat value of RDH13 for retinaldehyde reduction is comparable with that of RDH11, and the Km values of the two enzymes for NADPH are also very similar. Thus, consistent with its sequence similarity to RDH11, RDH12 and RDH14, RDH13 acts as an NADP+-dependent retinaldehyde reductase. RDH13 is localized in the mitochondria, which is different from the other members of this family, as they localize to the endoplasmic reticulum. The exact sequence targeting RDH13 to the mitochondria remains to be established. Clinical significance RDH13 is part of a subfamily of four retinol dehydrogenases, RDH11, RDH12, RDH13, and RDH14, that display dual-substrate specificity, uniquely metabolizing all-trans- and cis-retinols with C(15) pro-R specificity. The metabolites involved in these reactions are known as retinoids, which are chromophores involved in vision, transcriptional regulation, and cellular differentiation. RDH11-14 could be involved in the first step of all-trans- and 9-cis-retinoic acid production in many tissues. RDH11-14 fill a gap in our understanding of 11-cis-retinal and all-trans-retinal transformations in photoreceptor and retinal pigment epithelial cells. The dual-substrate specificity of this subfamily explains the minor phenotype associated with mutations in 11-cis-retinol dehydrogenase (RDH5) causing fundus albipunctatus in humans. References Further reading Proteins
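The kinetic comparison above (similar kcat, much higher Km for RDH13) can be made concrete with the Michaelis–Menten rate law. The constants below are hypothetical placeholders, not measured values; only the Km ratio matters for the point being illustrated:

```python
def michaelis_menten_rate(s: float, kcat: float, km: float,
                          e_total: float = 1.0) -> float:
    """Michaelis-Menten rate: v = kcat * [E]_total * [S] / (Km + [S])."""
    return kcat * e_total * s / (km + s)

# Hypothetical constants: equal kcat for both enzymes (as the text
# describes), with RDH13 given a 10x higher Km than RDH11.
KCAT = 1.0                        # arbitrary turnover number
KM_RDH11, KM_RDH13 = 1.0, 10.0    # arbitrary units; ratio is the point

for s in (0.5, 5.0, 50.0):        # low, medium, high substrate
    v11 = michaelis_menten_rate(s, KCAT, KM_RDH11)
    v13 = michaelis_menten_rate(s, KCAT, KM_RDH13)
    print(f"[S]={s:5.1f}  RDH11 rate={v11:.3f}  RDH13 rate={v13:.3f}")

# At sub-saturating [S] the higher-Km enzyme is several-fold slower,
# even though both approach the same maximal rate at saturation --
# which is why a comparable kcat can still mean much lower activity.
```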
RDH13
[ "Chemistry" ]
773
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
47,350,206
https://en.wikipedia.org/wiki/Andromeda%20XVIII
Andromeda XVIII, discovered in 2008, is a dwarf spheroidal galaxy (it has no rings, low luminosity, much dark matter, and little gas or dust) that is a satellite of the Andromeda Galaxy (M31). It is one of the 14 known dwarf galaxies orbiting M31. It is relatively isolated, lying about 1.8 million light-years (579 kpc) away. However, for an isolated dwarf galaxy it is also unusually quiescent. This suggests that Andromeda XVIII is a backsplash galaxy, a galaxy that once had a close orbital encounter with a more massive galaxy which stripped it of much of its star-forming matter. However, alternative hypotheses for Andromeda XVIII are also possible. It was announced in 2010 that M31's orbiting satellite galaxies lie close to a plane running through M31's center. See also List of Andromeda's satellite galaxies References Dwarf spheroidal galaxies Andromeda Subgroup Andromeda (constellation)
Andromeda XVIII
[ "Astronomy" ]
213
[ "Andromeda (constellation)", "Constellations" ]
47,350,333
https://en.wikipedia.org/wiki/KKh%20060
KKH 060 is an irregular galaxy and a low surface brightness galaxy located in the direction of the constellation Leo at a distance of 84.2 million light years from Earth. The galaxy was thought to be close to the Local Group, but is actually a field galaxy, very far from it. References External links Dwarf galaxies Irregular galaxies Leo (constellation)
KKh 060
[ "Astronomy" ]
72
[ "Leo (constellation)", "Constellations" ]
55,862,876
https://en.wikipedia.org/wiki/4OR
4OR – A Quarterly Journal of Operations Research is a peer-reviewed scientific journal that was established in 2003 and is published by Springer Science+Business Media. It is a joint official journal of the Belgian, French, and Italian Operations Research Societies. The journal publishes research papers and surveys on the theory and applications of operations research. The editors-in-chief are Yves Crama (University of Liège), Michel Grabisch (Pantheon-Sorbonne University), and Silvano Martello (University of Bologna). Abstracting and indexing The journal is abstracted and indexed in the following databases: DBLP EconLit EBSCO Information Services Google Scholar International Abstracts in Operations Research Journal Citation Reports Mathematical Reviews OCLC ProQuest Science Citation Index SCImago Journal Rank Scopus Zentralblatt Math According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.763. References External links Operations research Behavioural sciences English-language journals Springer Science+Business Media academic journals Academic journals established in 2003
4OR
[ "Mathematics", "Biology" ]
205
[ "Behavioural sciences", "Applied mathematics", "Behavior", "Operations research" ]
55,864,774
https://en.wikipedia.org/wiki/Facebook%20Watch
Facebook Watch (currently rebranding to Facebook Video) is a video on demand service operated by American company Meta Platforms (previously named Facebook, Inc.). The company announced the service in August 2017, and it was available to all U.S. users that month. Facebook Watch's original video content is produced for the company by others, who earn 55% of advertising revenue (Facebook keeps the other 45%). Facebook Watch offers tailored video recommendations and organizes content into categories based on metrics like popularity and user engagement. The platform hosts both short and long-form entertainment. In 2018, Facebook allocated a $1 billion budget for content creation. The company generates revenue from mid-roll ads and also explored the introduction of pre-roll ads in the same year. As of August 30, 2018, Facebook Watch became globally accessible to all Facebook users. As of September 2020, Facebook reported that Facebook Watch had more than 1.25 billion monthly visitors, 46% of its monthly active user base at that time. History On August 9, 2017, Facebook, Inc. announced that it would be launching its own video on demand service. During the same announcement it was stated that the new service would be called Facebook Watch. The video on demand service was launched for a small group of U.S. users a day later, with a rollout to all U.S. users beginning at the end of August. In May and June 2018, Facebook launched around six news programs from partners including BuzzFeed, Vox, CNN, and Fox News. These programs, developed by Facebook's head of news partnerships Campbell Brown, reportedly had an overall budget of US$90 million. On July 25, 2018, Facebook gave its first-ever presentation at the Television Critics Association's annual summer press tour. During Facebook's allotted time, Fidji Simo, the Vice President of Product for Video, and Ricky Van Veen, the Head of Global Creative Strategy, showcased Facebook's continuing ramp-up of original programming on Facebook Watch. On August 30, 2018, Facebook Watch became available internationally to all users of the social network worldwide. Budgets and monetization Content budgets For short-form videos, Facebook originally had a budget of roughly $10,000–$40,000 per episode, though renewal contracts have placed the budget in the $50,000–$70,000 range. Long-form TV-length series have budgets between $250,000 and over $1 million. The Wall Street Journal reported in September 2017 that the company was willing to spend up to $1 billion on original video content through 2018. Monetization Facebook keeps 45% of ad-break revenue for content shown on Facebook Watch, while its content-producing partners receive 55% of ad revenue. In January 2017, the company announced that it would be adding "mid-roll" advertising to its videos, in which ads will appear in videos after users have watched at least 20 seconds. In December 2017, Ad Age reported that Facebook was lifting a long-time ban on "pre-roll" ads, an advertising format that shows promotional content before users start the actual video. Facebook had resisted using pre-roll ads because the format has a "reputation for annoying viewers" who want to get to the desired content, though the report stated that the company would nevertheless try the format. 
Steve Ellis, CEO of WhoSay, a social influencer marketing company, told Ad Age that "YouTube already established that people will sit through and tolerate pre-roll" and that "It's proven that they haven't sent consumers fleeing, so it makes sense that Facebook would pursue a similar strategy as it builds out its original content experience". Two weeks after Ad Age's report, Facebook updated its blog to note that the pre-roll advertising format would begin testing in 2018, and that there were going to be changes to mid-roll ads; specifically, they cannot appear until a minute into a video, and are only available for videos that run for at least three minutes, as opposed to the original rule of appearing after 20 seconds on videos potentially as short as 90 seconds. Content In addition to original programming, Facebook Watch also distributes content licensed from other companies. On November 30, 2018, it was announced that the streaming service had struck a deal with 20th Century Fox Television to stream the television series Buffy the Vampire Slayer, Angel, and Firefly. As of 2024, licensed content included The Rise of Artist Dubose, a documentary film about the life and career of American musician A Boogie wit da Hoodie. Reception Morgan Stanley analyst Brian Nowak estimates that "Facebook Watch" can bring in $565 million in revenue to Facebook by the end of 2018. Jefferies analyst Brent Thill has predicted that the service has the potential to earn $12 billion in revenue by 2022. See also Facebook Reels References External links Video on demand services Video hosting Watch Streaming media systems Internet properties established in 2017
Facebook Watch
[ "Technology" ]
1,006
[ "Streaming media systems", "Telecommunications systems", "Computer systems" ]
55,865,352
https://en.wikipedia.org/wiki/Seismic%20intensity%20scales
Seismic intensity scales categorize the intensity or severity of ground shaking (quaking) at a given location, such as that resulting from an earthquake. They are distinguished from seismic magnitude scales, which measure the magnitude or overall strength of an earthquake, which may or may not cause perceptible shaking. Intensity scales are based on the observed effects of the shaking, such as the degree to which people or animals were alarmed, and the extent and severity of damage to different kinds of structures or natural features. The maximal intensity observed, and the extent of the area where shaking was felt (see isoseismal map, below), can be used to estimate the location and magnitude of the source earthquake; this is especially useful for historical earthquakes where there is no instrumental record. Ground shaking Ground shaking can be caused in various ways (volcanic tremors, avalanches, large explosions, etc.), but shaking intense enough to cause damage is usually due to rupturing of the Earth's crust known as earthquakes. The intensity of shaking depends on several factors: The "size" or strength of the source event, such as measured by various seismic magnitude scales. The type of seismic wave generated, and its orientation. The depth of the event. The distance from the source event. Site response due to local geology. Site response is especially important as certain conditions, such as unconsolidated sediments in a basin, can amplify ground motions as much as ten times. Where an earthquake is not recorded on seismographs, an isoseismal map showing the intensities felt at different areas can be used to estimate the location and magnitude of the quake. Such maps are also useful for estimating the shaking intensity, and thereby the likely level of damage, to be expected from a future earthquake of similar magnitude. In Japan this kind of information is used when an earthquake occurs to anticipate the severity of damage to be expected in different areas. The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source. At the same time, sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was far from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area. History The first simple classification of earthquake intensity was devised by Domenico Pignataro in the 1780s. The first recognizable intensity scale in the modern sense of the word was drawn up by the German mathematician Peter Caspar Nikolaus Egen in 1828. However, the first modern mapping of earthquake intensity was made by Robert Mallet, an Irish engineer who was sent by Imperial College, London, to research the December 1857 Basilicata earthquake, also known as The Great Neapolitan Earthquake of 1857. The first widely adopted intensity scale, the 10-grade Rossi–Forel scale, was introduced in the late 19th century. 
In 1902, Italian seismologist Giuseppe Mercalli created a new 12-grade scale, the Mercalli scale. Significant improvements were achieved, mainly by Charles Francis Richter during the 1950s, when (1) a correlation was found between seismic intensity and peak ground acceleration (PGA; see the equation that Richter found for California) and (2) a definition of the strength of buildings and their subdivision into groups (called types of buildings) was made. The seismic intensity could then be evaluated based on the degree of damage to a given type of structure. That gave the Mercalli scale, as well as the European MSK-64 scale that followed, a quantitative element representing the vulnerability of the building's type. Since then, that scale has been called the Modified Mercalli intensity scale (MMS), and evaluations of seismic intensity are more reliable. In addition, more intensity scales have been developed and are used in different parts of the world. See also Earthquake engineering Peak ground acceleration Seismic performance Spectral acceleration Notes Sources Further reading External links USGS ShakeMap Providing near-real-time maps of ground motion and shaking intensity following significant earthquakes. Seismology measurement Seismology Earthquake engineering
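The California intensity–PGA correlation alluded to above was lost in extraction. A commonly cited form of Richter's empirical relation is log10(a) = I/3 − 1/2, with a the peak ground acceleration in cm/s²; treat the formula and the sketch below as an assumption of this rewrite, not as the article's own equation:

```python
import math

def intensity_from_pga(pga_cm_s2: float) -> float:
    """Invert the assumed Richter California relation
    log10(a) = I/3 - 1/2 (a in cm/s^2) to estimate Modified
    Mercalli intensity I. Verify against a primary source."""
    return 3.0 * (math.log10(pga_cm_s2) + 0.5)

# Example: a PGA of ~0.1 g (~98 cm/s^2) maps to I ~ 7.5, i.e.
# roughly MM VII-VIII, consistent with typical shaking tables.
print(intensity_from_pga(98.0))
```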
Seismic intensity scales
[ "Engineering" ]
949
[ "Earthquake engineering", "Civil engineering", "Structural engineering" ]
55,865,407
https://en.wikipedia.org/wiki/Net%20Neutrality%20%28Last%20Week%20Tonight%20with%20John%20Oliver%29
"Net Neutrality" is the first segment devoted to net neutrality in the United States of the HBO news satire television series Last Week Tonight with John Oliver. It aired for 13 minutes on June 1, 2014, as part of the fifth episode of Last Week Tonight's first season. During this segment, as well Oliver's follow-up segment entitled "Net Neutrality II", comedian John Oliver discusses the threats to net neutrality. Under the administration of President Barack Obama, the Federal Communications Commission (FCC) was considering two options for net neutrality in early 2014. The FCC proposed permitting fast and slow broadband lanes, which would compromise net neutrality, but was also considering reclassifying broadband as a telecommunication service, which would preserve net neutrality. After a surge of comments supporting net neutrality that were inspired by Oliver's episode, the FCC voted to reclassify broadband as a utility in 2015. Context Last Week Tonight Prior to the 2014 segment about net neutrality, Last Week Tonight had only aired four episodes, all of which were complex investigations of obscure problems. Bloomberg News called Last Week Tonight's approach "hardly a tried-and-true recipe for TV success." The late New York Times columnist David Carr commented that prior to the net neutrality segment, he thought Oliver's comedic style would "never work." 2014 fast-lane proposal In January 2014, the United States Circuit Court of the District of Columbia provided a ruling in the case of Verizon v. FCC, in which Verizon Communications, an internet service provider (ISP), sued the Federal Communications Commission for violating its rights under the United States Constitution. The FCC had passed the Open Internet Order in 2010 following the outcome of Comcast Corp. v. FCC, where it was found that the FCC could not censure Comcast's interference with their customers' peer-to-peer traffic. The order was meant as a further step toward ensuring net neutrality in the sense that ISPs could not block or discriminate against lawfully operated websites, apps, or web services. The ruling in Verizon v. FCC was that the FCC could not enforce net neutrality rules as long as service providers were not identified as "common carriers". However, the FCC was given permission to regulate broadband and craft more specific rules that stop short of identifying service providers as common carriers. The ruling created a dispute as to whether net neutrality could be guaranteed under existing law, or if reclassification of ISPs was needed to ensure net neutrality. FCC chair Tom Wheeler stated that the FCC had the authority under Section 706 of the Telecommunications Act of 1996 to regulate ISPs. However, others including President Barack Obama supported reclassifying ISPs using the Communications Act of 1934. Their reclassification would move ISPs from being a general provision, which fell under the act's Title I, to a common carrier, which fell under the act's Title II. Critics of Section 706 pointed out that the section has no clear mandate to guarantee equal access to content provided over the internet, while subsection 202(a) of the Communications Act stated that common carriers cannot "make any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services." Advocates of net neutrality generally supported reclassifying ISPs under Title II, while FCC leadership and ISPs generally opposed such reclassification. 
The FCC stated that if it reclassified ISPs as common carriers, the commission would selectively enforce Title II, so that only sections relating to broadband would apply to ISPs. In April 2014, the FCC proposed a set of new regulations that, among other things, would allow ISPs to levy charges on websites in exchange for faster connection speeds. The "fast lane", as the proposal was called, would prioritize that website's internet connection over those of other websites that did not pay, although the ISP could not outright block web users from accessing websites that did not pay for "fast lanes". In addition, in enacting these "fast lanes", ISPs had to divulge whether they were promoting the content of sponsors or affiliates. This was at least the FCC's third attempt to create internet fast lanes. By May 2014, the FCC was considering two options: permitting fast and slow broadband lanes, thereby compromising net neutrality; or reclassifying broadband as a telecommunication service, thereby preserving net neutrality. Draft plans for the "fast-lane" option were approved, with three Democratic FCC commissioners voting to have the public review the proposal, and two Republican commissioners voting against public feedback. The FCC's proposal was heavily criticized for its two-tier, preferential system, whose very core would go against the principle of net neutrality. The director of the Common Cause organization's Media and Democracy Reform Initiative compared the FCC proposal to "toll roads" that "represent Washington at its worst." A reporter for The Verge wrote that these regulations "would destroy net neutrality" precisely because it slowed down traffic. In response, Wheeler said that any statements saying that the proposed regulations would restrict the open Internet were "flat out wrong". Episode Description Oliver delivered his 13-minute segment about net neutrality on June 1, 2014, as part of the show's main segment. He introduces the subject by praising "the internet, a.k.a. the electronic cat database," and noting how easy it is to buy merchandise such as coyote urine on the internet compared to if these items were bought in person. Oliver uses the coyote-urine analogy as a way to segue into a discussion of Wheeler's net-neutrality proposal. He pans "net neutrality" as a seemingly uninteresting topic, saying that videotaped FCC meetings about the issue might seem very boring "even by C-SPAN standards." Oliver then introduces the concept of net neutrality as something where all data is given the same priority regardless of its creator. He states that the Internet's relative equality, up to that point, had allowed startup companies to supersede bigger companies. Oliver introduces the topic of how "the Internet is not broken, and the FCC is taking steps to fix that". The segment then displays some news clippings and broadcasts that explain the FCC's priority-lane proposal. Oliver then returns and protests vehemently against the proposed rules, jokingly stating that the rules would ensure "my startup video streaming service, Nutflix, a one-stop resource for videos of men getting hit in the nuts", would not be able to compete with larger companies like Netflix. He then takes a more serious approach, stating that the proposal would make it easier for large ISPs such as Verizon and Comcast to buy the "fast-lane" data compared to smaller ISPs with fewer funds. 
Oliver rebuts a telecommunication lawyer's claim that it would be a "fast-lane-versus-hyperspeed-lane" contrast, stating that the proposed rules were more comparable to Olympic gold medalist sprinter Usain Bolt versus "Usain bolted to an anchor". The comedian refutes telecommunications companies' claims that they would not slow down other web traffic to get more internet users to subscribe to their services instead. Oliver points out an example in which Comcast slowed down Netflix download speeds in 2013 and 2014 unless Netflix paid Comcast a smooth-streaming fee. From October 2013 until Netflix finally agreed to pay in February 2014, Netflix download speeds for Comcast customers had slowed by up to 25%, while download speeds on other ISPs had consistently increased over the same period. Oliver compared it to a "mob shakedown." The comedian then says that the fight to keep net neutrality is so important that pro-net-neutrality activists are on the same side as corporations like Google, Netflix, Amazon, and Facebook, an alliance which Oliver describes as very unlikely. He compares this to Lex Luthor knocking on his nemesis Superman's apartment door for an offer to team up to "get rid of the asshole in apartment 3B". Oliver then says that the only entities that would benefit from the rule change were the cable companies who are lobbying Congress, including Comcast, which is the second-largest congressional lobbyist. Oliver says that President Barack Obama had been seen golfing with Comcast CEO Brian Roberts and had invited Roberts to a fundraiser dinner. He also states that Obama's nomination of Tom Wheeler, a former cable and wireless lobbyist, for the FCC Chairman position was "the equivalent of needing a babysitter and hiring a dingo". Oliver cites a 2010 FCC report on broadband, and says that 96% of Americans have at most two cable broadband providers to choose from. The segment then displays a clip of Roberts saying that if Comcast were to merge with another major ISP like Time Warner, there would be no reduction in competition. Oliver responds, "you could not be describing a monopoly more clearly if you were wearing a metal top hat", a player token used in the game Monopoly. The segment then shows an Ookla Speed Test graphic listing countries sorted by their average broadband speed. The U.S., ranking 31st on the list, had an average speed slower than Estonia, a country Oliver described as "still worried about Shrek attacks". Oliver goes on to point out that Comcast and Time Warner had the lowest customer satisfaction ratings of any corporation in America, according to the quarterly American Customer Satisfaction Index that was released two weeks prior to the segment. He says that ISPs were not being truthful when they said they are committed to an open internet, and that representatives for the ISPs describe their plans in such a boring way that it goes unnoticed by many Americans. Oliver quips, "The cable companies have figured out the great truth of America: if you want to do something evil, put it inside something boring", comparing it to Apple Inc. putting Mein Kampf inside their user agreement. At the end of the segment, Oliver displays the web address for the FCC's comment section. He delivers an exhortation toward "the Internet commenters out there", saying that "we need you to get out there and, for once in your life, focus your indiscriminate rage in a useful direction. Seize your moment, my lovely trolls, turn on caps lock, and fly my pretties! 
Fly! Fly!" Aftermath The segment received 800,000 views on YouTube in two days, while the TV broadcast saw over 1 million views. The segment was thought to spur over 45,000 comments on the FCC's electronic filing page about the net neutrality proposal. The FCC also received an extra 300,000 comments in an email inbox designated specifically for the proposal. By comparison, the proposal with the second highest number of comments had 2,000 such responses. The day after the episode, the FCC comment page experienced a surge in traffic. Shortly after the first segment aired, the FCC website crashed, and Last Week Tonight viewers noted that the website's commenting function was not working. A spokeswoman for the FCC said that it was "unclear if the high volume was directly related to the John Oliver segment". Bloomberg News wrote that even though the segment was only a small part of the net-neutrality debate, as compared to the electronic mailing lists convincing tens of millions of people to vote against the proposed rules, it "gave a bump to a political movement" and ultimately helped to reverse the FCC's position in regards to net neutrality. Soraya Nadia McDonald of The Washington Post stated that Oliver "may be just the firebrand activist we’re looking for" in regards to the net-neutrality debate. Terrance F. Ross of The Atlantic wrote, "John Oliver’s segment on net neutrality this past June perfectly summed up what his HBO show Last Week Tonight is so good at: transcending apathy." Not all commentators had positive reviews of the segment. Jon Healey of the Los Angeles Times wrote that "Oliver misled his audience badly on a couple of key points", saying that the federal courts would not allow the FCC to unfairly discriminate between different forms of web traffic; that large ISPs would not need the new rules to implement a speed-tiered system; and that Wheeler had left open the possibility of outlawing the ISPs' promotion of certain websites for a fee. He stated that in the case of Netflix versus Comcast, the problem had been a third-party transit provider who had argued with Comcast over the price and amount of data that the ISP would provide. Robert McMillan of Wired said that "complaints about a fast-lane don't make much sense" because large websites like Google and Facebook already benefited from "fast lanes", albeit in the form of large servers embedded in the ISPs' Internet exchange points. He wrote that instead of advocating against a change that had already occurred, internet users should look for ways to increase ISPs' competitiveness. Chairman Wheeler himself responded to the segment, praising it as "creative" but saying "I am not a dingo". Wheeler said, "I think that it represents the high level of interest that exists in the topic in the country, and that's good." However, he also stated that the segment did not talk about the FCC's plan to reinstate the open-internet protections that had been halted in an appeals court earlier that year. The University of Delaware's Center for Political Communication conducted a study in which it concluded that viewers of late-night shows were generally more informed about the net neutrality issue than regular cable news viewers. The study found that knowledge of the net neutrality debate was highest among Last Week Tonight viewers and lowest among Fox News viewers. 
According to the study, 74% of Last Week Tonight watchers heard about net neutrality, of which 29% heard "a lot" about the issue, compared to 52% of Fox News watchers, only 7% of which heard "a lot". The "Net Neutrality" segment increased Last Week Tonight's viewership to approximately 4 million per episode by the end of the first season, and contributed to its popularity in U.S. late-night television. In November 2014, after the season had ended, David Carr of The New York Times wrote that the show had become "a smash" since the segment first aired. Carr stated that the "Net Neutrality" segment had helped convince FCC leadership to support net neutrality. Effect on net-neutrality debate In September 2014, the Pew Research Center found that the FCC filing page received 3,076 comments the week before the June 1 segment, and that there were another 79,838 comments posted the week immediately afterward. Google searches for the term "net neutrality" rose in popularity that week compared to the previous and following weeks. Two interns analyzing the data for the Pew Research Center wrote that the sudden rise in the number of comments on the FCC net-neutrality page could not be attributed to cable or printed news media, since these outlets' coverage of net neutrality was more infrequent than in previous weeks. Ultimately, less than 1 percent of the proposal's total 800,000 comments could be classified as "clearly opposed to net neutrality", with the majority either indicating support, taking no particular position, or being irrelevant comments. The Verge later requested that the FCC publish emails related to the Last Week Tonight episode under the Freedom of Information Act. Of the emails that were released, most were positive about the video. In one exchange, a CBS executive sent a link to FCC employees, who joked about "Nutflix" and Usain Bolt. One of the FCC employees said, "We had a good laugh about it. The cable companies... not so much." When one reporter satirically asked if Chairman Wheeler had commented on the "dingo" quip, an FCC spokesperson said "Hey John, no, no comment on that" with a smiley emoticon. This prompted Oliver to create a subsequent video parodying the FCC's response. A Twitter policy spokesman said, "We all agreed that John Oliver’s brilliant net neutrality segment explained a very complex policy issue in a simple, compelling way that had a wider reach than many expensive advocacy campaigns." On February 26, 2015, the FCC voted to apply the "common carrier" designation of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the internet. The decision was driven partly because most Americans only had one high-speed internet provider available in their areas. On the same day, the FCC also voted to preempt state laws in North Carolina and Tennessee that limited the ability of local governments in those states to provide broadband services to potential customers outside of their service areas. While the latter ruling affected only those two states, the FCC indicated that the agency would make similar rulings if it received petitions from localities in other states. In response to ISPs and opponents, FCC Chairman Wheeler said, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." 
On March 12, 2015, the FCC released the specific details of its new net neutrality rules, which prohibited content blocking, the slowing of connections to websites, and "fast and slow lanes". Oliver's segment was thought to have played a major role in the decision, which was the opposite of the FCC's original "lane" proposal. On April 13, 2015, the final rule was published. Updates since "Net Neutrality" After Donald Trump won the 2016 United States presidential election, he appointed Republican FCC board member Ajit Pai as chairman of the FCC. Shortly after his appointment, Pai announced proposals to scrap Title II on the grounds that heavier regulation of the internet led to decreased business investment. This marked a turnaround from the previous FCC's position under Chairman Wheeler. In May 2017, the FCC voted to proceed with a plan to remove the net neutrality rules enacted under the Obama administration. Like the 2014 proposal vote, this vote was also partisan, with one Democratic board member opposing the removal and two Republicans supporting it. The vote prompted John Oliver to release a second segment on the subject three years after the first, entitled "Net Neutrality II". See also 2014 in American television References 2014 in American television Net neutrality Last Week Tonight with John Oliver segments
Net Neutrality (Last Week Tonight with John Oliver)
[ "Engineering" ]
3,753
[ "Net neutrality", "Computer networks engineering" ]
55,865,649
https://en.wikipedia.org/wiki/Andr%C3%A9%20Lalande%20%28philosopher%29
André Lalande (19 July 1867, Dijon – 15 November 1964, Asnières) was a French philosopher. In 1904, he was appointed Professor of Philosophy at the University of Paris. While still at school in 1883–84 he was taught by Émile Durkheim, whom he greatly appreciated. His notes provided the basis for the 2004 publication Durkheim's Philosophy Lectures: Notes from the Lycée de Sens Course, 1883–1884. His doctoral thesis was entitled L'idée directrice de la dissolution opposée à celle de l'évolution. In 1901, he was one of the founders of the French Philosophical Society. Works 
1893: Lectures sur la philosophie des sciences, Paris 
1899: L'idée directrice de la dissolution opposée à celle de l'évolution, Paris (revised and reissued as Les illusions évolutionnistes, Paris, 1930) 
1899: Quid de Mathematica vel Rationali vel Naturali Senserit Baconus Verulamius, Paris (in Latin) 
Précis raisonné de morale pratique, Paris: Hachette 
1929: Les théories de l'induction et de l'expérimentation, Paris: Boivin 
1948: La raison et les normes, Paris 
1960: Vocabulaire technique et critique de la philosophie, Paris: Presses Universitaires de France 
References 1867 births 1964 deaths 19th-century French essayists 19th-century French non-fiction writers 19th-century French philosophers 20th-century French essayists 20th-century French philosophers French male essayists French male non-fiction writers French male writers French lecturers French philosophers of education Philosophers of language Philosophers of mathematics French philosophers of science French philosophy academics Philosophy writers
André Lalande (philosopher)
[ "Mathematics" ]
359
[ "Philosophers of mathematics" ]
55,865,809
https://en.wikipedia.org/wiki/One-third%20octave
A one-third octave is a logarithmic unit of frequency ratio equal to either one third of an octave (1200/3 = 400 cents: major third) or one tenth of a decade (3986.31/10 = 398.631 cents: M3). An alternative (unambiguous) term for one tenth of a decade is a decidecade. Definitions Base 2 ISO 18405:2017 defines a "one-third octave" (or "one-third octave (base 2)") as one third of an octave, corresponding to a frequency ratio of 2^(1/3) ≈ 1.2599. A one-third octave (base 2) is precisely 400 cents. Base 10 IEC 61260-1:2014 and ANSI S1.6-2016 define a "one-third octave" as one tenth of a decade, corresponding to a frequency ratio of 10^(1/10) ≈ 1.2589. This unit is referred to by ISO 18405 as a "decidecade" or "one-third octave (base 10)". One decidecade is equal to 100 savarts (approximately 398.631 cents). See also Decibel Octave band Pseudo-octave Tritonic scale References Further reading Intervals (music) Units of level
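The two definitions are easy to check numerically. The following is a minimal Python sketch; the formulas follow directly from the ratios and the cent conversion given above:

```python
import math

# One-third octave (base 2): one third of an octave.
ratio_base2 = 2 ** (1 / 3)                      # ~1.259921
cents_base2 = 1200 * math.log2(ratio_base2)     # exactly 400 cents

# Decidecade, i.e. one-third octave (base 10): one tenth of a decade.
ratio_base10 = 10 ** (1 / 10)                   # ~1.258925
cents_base10 = 1200 * math.log2(ratio_base10)   # ~398.631 cents

print(f"base 2:  ratio {ratio_base2:.6f} = {cents_base2:.3f} cents")
print(f"base 10: ratio {ratio_base10:.6f} = {cents_base10:.3f} cents")
```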
One-third octave
[ "Physics", "Mathematics" ]
251
[ "Physical quantities", "Units of level", "Quantity", "Logarithmic scales of measurement", "Units of measurement" ]
55,866,076
https://en.wikipedia.org/wiki/Acylindrically%20hyperbolic%20group
In the mathematical subject of geometric group theory, an acylindrically hyperbolic group is a group admitting a non-elementary 'acylindrical' isometric action on some geodesic hyperbolic metric space. This notion generalizes the notions of a hyperbolic group and of a relatively hyperbolic group and includes a significantly wider class of examples, such as mapping class groups and Out(Fn). Formal definition Acylindrical action Let G be a group with an isometric action on some geodesic hyperbolic metric space X with metric d. This action is called acylindrical if for every ε > 0 there exist R, N > 0 such that for every x, y ∈ X with d(x, y) ≥ R one has |{g ∈ G : d(x, gx) ≤ ε and d(y, gy) ≤ ε}| ≤ N. If the above property holds for a specific choice of the constant R, the action of G on X is called R-acylindrical. The notion of acylindricity provides a suitable substitute for properness in the more general context where non-proper actions are allowed. An acylindrical isometric action of a group G on a geodesic hyperbolic metric space X is non-elementary if G admits two independent hyperbolic isometries of X, that is, two loxodromic elements g, h ∈ G such that their fixed point sets on the Gromov boundary of X are disjoint. It is known (Theorem 1.1 in Osin (2016)) that an acylindrical action of a group G on a geodesic hyperbolic metric space X is non-elementary if and only if this action has unbounded orbits in X and the group G is not a finite extension of a cyclic group generated by a loxodromic isometry of X. Acylindrically hyperbolic group A group G is called acylindrically hyperbolic if G admits a non-elementary acylindrical isometric action on some geodesic hyperbolic metric space X. Equivalent characterizations It is known (Theorem 1.2 in Osin (2016)) that for a group G the following conditions are equivalent: The group G is acylindrically hyperbolic. There exists a (possibly infinite) generating set S for G such that the Cayley graph Γ(G, S) is hyperbolic and the natural translation action of G on Γ(G, S) is a non-elementary acylindrical action. The group G is not virtually cyclic, and there exists an isometric action of G on a geodesic hyperbolic metric space X such that at least one element of G acts on X with the WPD ('Weakly Properly Discontinuous') property. The group G contains a proper infinite 'hyperbolically embedded' subgroup. History Properties Every acylindrically hyperbolic group G is SQ-universal, that is, every countable group embeds as a subgroup in some quotient group of G. The class of acylindrically hyperbolic groups is closed under taking infinite normal subgroups, and, more generally, under taking 's-normal' subgroups. Here a subgroup H of G is called s-normal in G if for every g ∈ G the intersection of H with its conjugate gHg⁻¹ is infinite. If G is an acylindrically hyperbolic group and V = ℝ or V = ℓ^p(G) with 1 ≤ p < ∞, then the second bounded cohomology H^2_b(G, V) is infinite-dimensional. Every acylindrically hyperbolic group G admits a unique maximal normal finite subgroup denoted K(G). If G is an acylindrically hyperbolic group with K(G) = {1} then G has infinite conjugacy classes of nontrivial elements, G is not inner amenable, and the reduced C*-algebra of G is simple with unique trace. There is a version of small cancellation theory over acylindrically hyperbolic groups, allowing one to produce many quotients of such groups with prescribed properties. Every finitely generated acylindrically hyperbolic group has cut points in all of its asymptotic cones. For a finitely generated acylindrically hyperbolic group G, the probability that the simple random walk on G of length n produces a 'generalized loxodromic element' in G converges to 1 exponentially fast as n → ∞. 
Every finitely generated acylindrically hyperbolic group G has exponential conjugacy growth, meaning that the number of distinct conjugacy classes of elements of G coming from the ball of radius n in the Cayley graph of G grows exponentially in n. Examples and non-examples Finite groups, virtually nilpotent groups and virtually solvable groups are not acylindrically hyperbolic. Every non-elementary subgroup of a word-hyperbolic group is acylindrically hyperbolic. Every non-elementary relatively hyperbolic group is acylindrically hyperbolic. The mapping class group of a connected oriented surface of genus g with p punctures is acylindrically hyperbolic, except for the cases where g = 0 and p ≤ 3 (in those exceptional cases the mapping class group is finite). For n ≥ 2 the group Out(Fn) is acylindrically hyperbolic. By a result of Osin, every group G that is not virtually cyclic and admits a proper isometric action on a proper CAT(0) space, with at least one element of G acting as a rank-one isometry, is acylindrically hyperbolic. Caprace and Sageev proved that if G is a finitely generated group acting isometrically, properly discontinuously and cocompactly on a geodesically complete CAT(0) cube complex X, then either X splits as a direct product of two unbounded convex subcomplexes, or G contains a rank-one element. Every right-angled Artin group G that is not cyclic and is directly indecomposable is acylindrically hyperbolic. For n ≥ 3 the special linear group SL(n, Z) is not acylindrically hyperbolic (Example 7.5 in Osin (2016)). For n ≥ 2 the Baumslag–Solitar group BS(1, n) is not acylindrically hyperbolic (Example 7.4 in Osin (2016)). Many groups admitting nontrivial actions on simplicial trees (that is, admitting nontrivial splittings as fundamental groups of graphs of groups in the sense of Bass–Serre theory) are acylindrically hyperbolic. For example, all one-relator groups on at least three generators are acylindrically hyperbolic. Most 3-manifold groups are acylindrically hyperbolic. References Further reading Group theory Geometric group theory Geometric topology Geometry
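In symbols, the acylindricity condition from the Formal definition section above can be stated compactly as follows (a standard formulation; d denotes the metric on X):

```latex
\forall \varepsilon > 0 \ \exists R, N > 0 \ \forall x, y \in X :\quad
d(x, y) \ge R \;\Longrightarrow\;
\bigl|\{\, g \in G : d(x, gx) \le \varepsilon \ \text{and}\ d(y, gy) \le \varepsilon \,\}\bigr| \le N .
```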
Acylindrically hyperbolic group
[ "Physics", "Mathematics" ]
1,311
[ "Geometric group theory", "Group actions", "Geometric topology", "Group theory", "Fields of abstract algebra", "Topology", "Geometry", "Symmetry" ]
55,866,337
https://en.wikipedia.org/wiki/Animal%E2%80%93computer%20interaction
Animal–computer interaction (ACI) is a field of research for the design and use of technology with, for and by animals, covering wildlife, zoo and domesticated animals in a variety of roles. It emerged from, and was heavily influenced by, the discipline of human–computer interaction (HCI). As the field expanded, it has become increasingly multi-disciplinary, incorporating techniques and research from disciplines such as artificial intelligence (AI), requirements engineering (RE), and veterinary science. A central theme of ACI research is establishing how user-centred design approaches and methods from HCI can be adapted to design for animals. Accordingly, many studies seek to adopt 'animal-centred' approaches to design and research. In her ACI Manifesto (2011), Clara Mancini defines one aim of ACI as understanding "the interaction between animals and computing technology within the contexts in which animals habitually live, are active, and socialise with members of the same or other species, including humans". She additionally proposes three core design goals for the field: enhancing animals' quality of life and wellbeing; supporting animals in the functions assigned to them by humans; and supporting human-animal relationships. Accordingly, some ACI research has given considerable attention to questions of animal ethics, welfare, consent and power. Applications Much ACI work focuses on technologies to support communication and relationships between animals and humans. Researchers have investigated digital technologies for dogs, including systems for remote communication with dogs left at home, wearable interactive devices, and interfaces for working dogs. They have also explored technology for interactions with other domestic animals, including cats. An increasing focus in the ACI community is investigating the wider context of these technologies and the impact they have beyond the individual animals that use them: from the security and privacy considerations of pet wearables, their effects on the humans living with these animals, and the contexts they are deployed in, to support for veterinary science and animal behavior research. Animal internet technologies Recent work in ACI has focused on how internet-connected technologies, such as the Internet of Things (IoT), can support animals. This includes technologies such as remote video-call devices for dogs to call their owners, speculative technologies for dogs to sense their owners, and technologies to support dog-to-dog interactions mediated by the internet. Much of this work has focused on how to incorporate interspecies design into the design process, and on what the user experience and interactive internet systems look like for animal users. Conferences The ACI community has organised its flagship conference, the International Conference on Animal-Computer Interaction, as a yearly stand-alone event since 2016, with its proceedings published in the ACM Digital Library. It incorporates doctoral consortia for junior researchers to become acquainted with the field, and co-located workshops to stimulate collaboration on emerging topics. References External links International Conference on Animal–Computer Interaction ACM Digital Library Proceedings of the ACI Conference International Summer School on Animal-Centered Computing Human–computer interaction Animal intelligence
Animal–computer interaction
[ "Engineering" ]
607
[ "Human–computer interaction", "Human–machine interaction" ]
55,867,423
https://en.wikipedia.org/wiki/Gyoukou
Gyoukou is a supercomputer developed by ExaScaler and PEZY Computing, based around ExaScaler's ZettaScaler immersion cooling system. It was deployed at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) Yokohama Institute for Earth Sciences, on the same floor where the Earth Simulator is located. Amid the scandal regarding the development grant, it was removed from JAMSTEC in April 2018. System Gyoukou is based on ExaScaler's ZettaScaler-2.x technology, which features a liquid immersion cooling system using Fluorinert. Each immersion tank can contain 16 Bricks. A Brick consists of a backplane board, 32 PEZY-SC2 modules, 4 Intel Xeon D host processors, and 4 InfiniBand EDR cards. Modules inside a Brick are connected by hierarchical PCI Express fabric switches, and the Bricks are interconnected by InfiniBand. Each PEZY-SC2 module contains 2048 processing elements (1 GHz design), six MIPS64 control processors, and 4 DDR4 DIMMs (64 GB per module as of November 2017). Performance With a partial configuration, Gyoukou was ranked 69th at 1,677.1 teraflops on the June 2017 TOP500 ranking. After an upgrade to full scale (the equivalent of 19.5 immersion tanks) using the newer ZettaScaler-2.2 system, it ranked 4th at 19,135.8 teraflops on the November 2017 TOP500 ranking. At the time of benchmarking, 1984 out of the 2048 cores of each PEZY-SC2 were used, at a 700 MHz clock. Gyoukou has high energy efficiency: it ranked 5th at 14.173 gigaflops/watt on the November 2017 Green500 energy efficiency ranking, at which time the system was benchmarked as having 19,860,000 "cores" (after an upgrade). Notes External links ExaScaler Inc. PEZY Computing K.K. Supercomputers Supercomputing in Japan Japan Agency for Marine-Earth Science and Technology
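As a rough consistency check on the figures quoted above, the module count and the implied power draw at the benchmark can be derived directly. The following Python sketch assumes the stated topology (19.5 tanks × 16 Bricks × 32 modules); note that TOP500's core-counting convention differs slightly from this naive processing-element count:

```python
# Rough consistency check of the figures quoted above (assumed topology:
# 19.5 tanks x 16 Bricks per tank x 32 PEZY-SC2 modules per Brick).
tanks, bricks_per_tank, modules_per_brick = 19.5, 16, 32
modules = int(tanks * bricks_per_tank * modules_per_brick)  # 9,984 modules
pes = modules * 1984               # active processing elements, ~19.8 million

rmax_gflops = 19_135_800           # November 2017 TOP500 Rmax (19,135.8 TFLOPS)
gflops_per_watt = 14.173           # November 2017 Green500 efficiency
power_mw = rmax_gflops / gflops_per_watt / 1e6   # implied power in megawatts

print(f"{modules} modules, ~{pes / 1e6:.1f} million processing elements")
print(f"implied benchmark power draw: ~{power_mw:.2f} MW")
```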
Gyoukou
[ "Technology" ]
425
[ "Supercomputers", "Supercomputing" ]
55,867,494
https://en.wikipedia.org/wiki/Pottery%20in%20Azerbaijan
Pottery is one of the most ancient handicrafts in Azerbaijan. History Pottery is one of the oldest areas of handicraft in Azerbaijan. This art appeared in the Neolithic Age, and in ancient times women dominated the craft. In the Eneolithic Age it became an independent craft as a result of technical advances. The Ethnography of Azerbaijan encyclopedia shows that pottery development accelerated from the end of the Middle Ages. The invention of foot-powered wheels increased the range of pottery products and led to the establishment of pottery centers. Demand for pottery products created favourable conditions for earthenware production, and these products spread across Azerbaijan. Both unglazed and glazed products were made. At the end of the eighteenth and beginning of the nineteenth century, the use of domestic ceramics became more widespread. Potters primarily made clay products for use as water vessels, cookware and other household items. Domestic pottery products were divided into several groups according to their usage: water vessels such as pitchers, pots, jugs, doliums, carafes and mugs, and other earthenware products used for washing and performing wudu. Nakhchivan During the Nakhchivan Khanate the workshop belonging to Ehsan Khan of Nakhchivan produced earthenware pitchers. Earthenware products were also produced in Shamakhi, Ardabil, Tabriz, Ganja and other cities. Clay dishes were produced in several villages of the Sheki Khanate, especially in Nukha. Baku Pottery also developed in Baku. The outskirts of the city were rich in clay, creating favourable production conditions. Bowls, plates and other artefacts from the seventeenth century have been found there. Karabakh Pottery production in Karabakh reached a high level after the Early Middle Ages. Earthenware products of that period were much more developed than work from both earlier and later periods in terms of production techniques and decorative elements. The production of ceramic water pipes, tiles and decorative bricks began in that period. The Mongol invasions caused heavy damage to pottery production, along with other fields of handicraft, in Karabakh and in Azerbaijan as a whole. Modern times In modern times pottery has three main lines: the production of construction bricks; the production of various clay wares (tiles, clay pipes); and the production of faience and porcelain. Types Pottery patterns differ in size, shape, materials and technology. Types include Boralı ceramics, glossy ceramics, glazed ceramics, Basma-nakhıshlı ceramics and adhesion-patterned ceramics. The different types were put to different uses: Pottery without glaze – storage jars, boilers, milk containers, lamps and hookahs. Pottery with glaze – drums, vases, lamps and similar wares. In general, glazed dishes in Karabakh were fired in two stages: the piece was first fired in the ordinary way and removed from the kiln, then coated with glaze and fired again. Construction materials – glazed materials and ceramic mosaics were widely used in the construction of Karabakh's palaces, caravanserais and baths. References Handicrafts Ceramic materials Pottery by country Azerbaijan art
Pottery in Azerbaijan
[ "Engineering" ]
646
[ "Ceramic engineering", "Ceramic materials" ]
55,867,774
https://en.wikipedia.org/wiki/Environmental%20privilege
Environmental privilege is a concept in environmental sociology, referring to the ability of privileged groups to keep environmental amenities for themselves and deny them to less privileged groups. More broadly, it refers to the ability of privileged groups to keep an exclusive grip on the advantages of "social place," including non-ecological amenities. It has been characterized as "the other side of the coin" from environmental racism. Like other forms of racial privilege, it does not depend on personal racism, but rather structural racism. Environmental privilege is a consequence of both class and racial privilege with respect to access to the overall environment, influencing the social and economic realm. It is the result of cultural, economic, and political power being wielded, and it provides particular groups with exclusive access to environmental amenities such as elite neighborhoods containing rivers, parks, and open areas. These groups are more likely to participate in sustainability efforts and have access to premium amenities. Furthermore, during the COVID-19 pandemic, wealthy communities were better able to adhere to safety protocols. Background The concept of environmental privilege first developed from the historical scholarship of Dorceta Taylor, who led the shift in scholarship on environmental racism away from consideration of environmental disadvantage in isolation, and toward a more holistic approach that accounted for the discriminatory effects of restrictive zoning. In her book The Rise of the American Conservation Movement, Taylor describes how early conservation efforts in America set the stage for the reservation of natural resources and amenities for the wealthy. She describes how the conservation movement in the United States was begun in the middle of the nineteenth century by white American elites with Eurocentric ideologies that mirrored Manifest Destiny. Their chief aim was to preserve the wilderness and reserve the serene landscapes for themselves, displacing Indigenous communities in the West. The preservation of the wilderness, in turn, reserved the land for white America. The conservation movement had involvement in racism, sterilization, and eugenics, and ultimately resulted in the exclusivity of nature for white male recreation. Scholars state that people of color, Indigenous communities, and the working class are more likely to live near hazardous waste, emission-producing power plants, and mining operations, and to live in areas with a high probability of natural disasters. Environmental privilege is said to be the origin of this environmental polarity between the haves and the have-nots. Lisa Sun-Hee Park and David Pellow's book The Slums of Aspen: Immigrants vs. the Environment in America's Eden outlines these connections, specifying how environmental privilege is enjoyed by a small, wealthy population while the rest confront environmental burdens. This is studied on a local and global scale. Poor communities face ecological devastation from the exploitation of resources, including deforestation, intensive agriculture, fossil fuel mining, and the dumping of electronic waste, all of which occur among poor communities globally. Food Access Today's environmental movement is maintained predominantly by wealthy whites in urban centers; as a result, the city reflects the white perspective and mirrors their culture. 
Environmental privilege is often used in critiques of green gentrification, where environmental amenities such as urban agriculture cater largely to white or otherwise privileged urban groups. It has proven particularly illuminating in understanding the correlation between whiteness and participation in farmer's markets. Research shows that low- to middle-class African Americans are less likely to involve themselves in farmer's markets or other alternative food institutions, as opposed to conventional food resources. Alternative food institutions are often held in primarily white, affluent communities, thereby creating the exclusivity of healthy, organic food. High-priced organic foods and luxurious, energy-efficient infrastructure generate uneven development in cities, causing low-income families to concentrate in devalued regions. The process of demarcated devaluation in cities, as described by Nathan McClintock, results in food deserts. Housing and health inequities Environmental privilege provides benefits such as eco-friendly lifestyles, sustainable living, and green consumerism. Access to greater green space and cleaner air in neighborhoods, and energy-efficient, LEED-certified structures are just a few examples. In addition, there is access to alternative markets where sustainable apparel and food can be purchased, yet overall customary patterns of exclusion are reproduced. Historically, policy makers and city planners quarantined low-income and devalued centers from newly developed urban spaces. As a result, sustainable goods and services, along with new environmental projects, are reserved for the wealthy. Affluent people often pollute the most via greenhouse gas emissions, waste, and overconsumption, while low-income communities endure their negative externalities: landfills, superfund sites, city pollution, and toxic runoff are a few examples. Zoning policies reduce the amount of affordable housing available to migrant workers in Aspen. The wealthy resort town fought hard to prevent low-income families from moving in because it believed they would ruin its image. In turn, workers resort to living in dangerous spaces like flood plains and must drive up to one hundred miles to reach their places of employment. During the height of the COVID-19 pandemic, affluent individuals had better access to resources, medical treatment, and housing. Wealthy communities were able to leave the dense cities and travel to more rural areas, second homes, or vacation spots. Infection rates studied in Sweden revealed that low-income communities were six times more likely to catch the virus than affluent communities. In another analysis, African American and Latinx communities in the U.S. contracted COVID-19 at higher rates than white communities because many blue-collar jobs were considered "essential" during the pandemic. Unsafe interactions with other people in dense cities and neighborhoods created a higher probability of contracting the virus. Many wealthy whites, on the other hand, were able to work from home, go on vacation, or minimize their hours worked. Access to nature Author Justin Farrell in Billionaire Wilderness (2020) argues that there are powerful connections between nature and wealthy Americans, and that preservation of the environment is a tool utilized by affluent U.S. citizens to increase their earnings and establish exclusive pockets of the United States for themselves, often masking their influence as philanthropy. 
In Aspen, Colorado, American elites indulge in the picturesque scenery of the surrounding nature and enjoy luxurious amenities provided by migrant employees working in the tourist industry. It is the lower class who provide much of this opulent service to the wealthy while living in poverty. Billionaire Wilderness explores how the ultra-rich are buying up land and utilizing one of the world's most pristine ecosystems to climb even further up the socioeconomic ladder, weaving captivating storytelling with thought-provoking analysis. In Teton County, Wyoming, the well-off are tormented by stigmas, shame, and concern about their social standing, and they appropriate nature and rural people to create more virtuous and deserving versions of themselves. Billionaire Wilderness uncovers the hidden links between wealth concentration and the environment, two of the most serious and contested concerns of our day. Teton County, with a per capita income of $194,485, has the highest per capita income of all 3,144 counties in the United States, according to the US Department of Commerce. New York County (Manhattan) is a distant second at $148,002, and Wheeler County, Georgia is the lowest in the US at $15,787. Teton County has one of the highest median family incomes in the country, at $96,113, putting it in the top 2.6 percent of all counties in the country. Teton County was not always prosperous, but as time passed, the local economy improved. See also Environmental justice References Environmental sociology Social privilege
Environmental privilege
[ "Environmental_science" ]
1,553
[ "Environmental sociology", "Environmental social science" ]
55,868,631
https://en.wikipedia.org/wiki/List%20of%20antibiotic-resistant%20bacteria
A list of antibiotic-resistant bacteria is provided below. These bacteria have shown antibiotic resistance (or antimicrobial resistance). Gram positive Clostridioides difficile Clostridioides difficile is a nosocomial pathogen that causes diarrheal disease worldwide. Diarrhea caused by C. difficile can be life-threatening. Infections are most frequent in people who have had recent medical and/or antibiotic treatment. C. difficile infections commonly occur during hospitalization. According to a 2015 CDC report, C. difficile caused almost 500,000 infections in the United States per year. Associated with these infections were an estimated 15,000 deaths. The CDC estimates that C. difficile infection costs could amount to $3.8 billion over five years. C. difficile colitis is most strongly associated with fluoroquinolones, cephalosporins, carbapenems, and clindamycin. Some research suggests the overuse of antibiotics in the raising of livestock is contributing to outbreaks of bacterial infections such as C. difficile. Antibiotics, especially those with a broad activity spectrum (such as clindamycin), disrupt normal intestinal flora. This can lead to an overgrowth of C. difficile, which flourishes under these conditions. Pseudomembranous colitis can follow, creating generalized inflammation of the colon and the development of "pseudomembrane", a viscous collection of inflammatory cells, fibrin, and necrotic cells. Clindamycin-resistant C. difficile was reported as the causative agent of large outbreaks of diarrheal disease in hospitals in New York, Arizona, Florida, and Massachusetts between 1989 and 1992. Geographically dispersed outbreaks of C. difficile strains resistant to fluoroquinolone antibiotics, such as ciprofloxacin and levofloxacin, were also reported in North America in 2005. Enterococcus Multidrug-resistant Enterococcus faecalis and Enterococcus faecium are associated with nosocomial infections. These strains include penicillin-resistant Enterococcus, vancomycin-resistant Enterococcus, and linezolid-resistant Enterococcus. Mycobacterium tuberculosis Tuberculosis (TB) resistant to antibiotics is called MDR TB (multidrug-resistant TB). Globally, MDR TB causes 150,000 deaths annually. The rise of the HIV/AIDS epidemic has contributed to this. Mycobacterium tuberculosis is an obligate pathogen that has evolved to ensure its persistence in human populations. This is evident in that Mycobacterium tuberculosis must cause pulmonary disease in order to be successfully transmitted from one person to another. Tuberculosis, better known as TB, has one of the highest mortality rates among pathogens in the world. Mortality rates have not seen a significant decrease due to its growing resistance to certain antibiotics. Although years of research have been devoted to the creation of a vaccine, none yet exists. TB is extremely transmissible, contributing significantly to its very high level of virulence. TB was considered one of the most prevalent diseases, and did not have a cure until the discovery of streptomycin by Selman Waksman in 1943. However, the bacteria soon developed resistance. Since then, drugs such as isoniazid and rifampin have been used. M. tuberculosis develops resistance to drugs by spontaneous mutations in its genome. These types of mutations can lead to genotype and phenotype changes that can contribute to reproductive success, leading to the evolution of resistant bacteria. 
Resistance to one drug is common, and this is why treatment is usually done with more than one drug. Extensively drug-resistant TB (XDR TB) is TB that is also resistant to the second line of drugs. Resistance of Mycobacterium tuberculosis to isoniazid, rifampin, and other common treatments has become an increasingly relevant clinical challenge. Evidence is lacking on whether these bacteria have plasmids, and M. tuberculosis lacks the opportunity to interact with other bacteria in order to share plasmids. Mycoplasma genitalium Mycoplasma genitalium is a small pathogenic bacterium that lives on the ciliated epithelial cells of the urinary and genital tracts in humans. It is still controversial whether or not this bacterium is to be recognized as a sexually transmitted pathogen. Infection with Mycoplasma genitalium sometimes produces clinical symptoms, or a combination of symptoms, but can also be asymptomatic. It causes inflammation in the urethra (urethritis) in both men and women, which is associated with mucopurulent discharge in the urinary tract and burning while urinating. Treatment of Mycoplasma genitalium infections is becoming increasingly difficult due to rapidly developing multi-drug resistance, and diagnosis and treatment are further hampered by the fact that M. genitalium infections are not routinely detected. Azithromycin is the most common first-line treatment, but the commonly used single 1-gram dose of azithromycin can lead to the bacteria developing resistance to it. An alternative five-day treatment with azithromycin showed no development of antimicrobial resistance. The efficacy of azithromycin against M. genitalium has decreased substantially, which is thought to occur through SNPs in the 23S rRNA gene. The same SNPs are thought to be responsible for resistance against josamycin, which is prescribed in some countries. Moxifloxacin can be used as a second-line treatment in case azithromycin is not able to eradicate the infection. However, resistance against moxifloxacin has been observed since 2007, thought to be due to parC SNPs. Tetracyclines, including doxycycline, have a low clinical eradication rate for M. genitalium infections. A few cases have been described where doxycycline, azithromycin and moxifloxacin had all failed, but pristinamycin was still able to eradicate the infection. Staphylococcus aureus Staphylococcus aureus is one of the major resistant pathogens. It caused more than 100,000 deaths attributed to antimicrobial resistance (AMR) in 2019, and MRSA was present in 748,000 global deaths that year. Found on the mucous membranes and skin of around a third of the population, it is extremely adaptable to antibiotic pressure. It was one of the earlier bacteria in which penicillin resistance was found, in 1947, just four years after mass production began. Methicillin was then the antibiotic of choice, but it has since been replaced by oxacillin because of significant kidney toxicity. Methicillin-resistant Staphylococcus aureus (MRSA) was first detected in Britain in 1961, and it is now "quite common" in hospitals. MRSA was responsible for 37% of fatal cases of sepsis in the UK in 1999, up from 4% in 1991. Half of all S. aureus infections in the US are resistant to penicillin, methicillin, tetracycline, and erythromycin. Vancomycin-nonsusceptible isolates of S. aureus have been identified in Asia and the United States. 
A study performed in 1992 demonstrated that vancomycin resistance could be transferred from Enterococcus faecalis/faecium to S. aureus through the transfer of the vanA and vanB genes. Streptococcus Streptococcus pyogenes (Group A Streptococcus: GAS) infections can usually be treated with many different antibiotics. Strains of S. pyogenes resistant to macrolide antibiotics have emerged; however, all strains remain uniformly susceptible to penicillin. Resistance of Streptococcus pneumoniae to penicillin and other beta-lactams is increasing worldwide. It was identified as one of six leading pathogens for disease associated with resistance in 2019, a year in which there were 596,000 deaths globally of people with drug-resistant infections from the pathogen. The major mechanism of resistance involves the introduction of mutations in genes encoding penicillin-binding proteins. Selective pressure is thought to play an important role, and the use of beta-lactam antibiotics has been implicated as a risk factor for infection and colonization. S. pneumoniae is responsible for pneumonia, bacteremia, otitis media, meningitis, sinusitis, peritonitis and arthritis. Gram negative Campylobacter Campylobacter causes diarrhea (often bloody), fever, and abdominal cramps. Serious complications such as temporary paralysis can also occur. Physicians rely on ciprofloxacin and azithromycin for treating patients with severe disease, although Campylobacter is showing resistance to these antibiotics. Neisseria gonorrhoeae Neisseria gonorrhoeae is a sexually transmitted pathogen that causes gonorrhea, a sexually transmitted disease that can result in discharge and inflammation at the urethra, cervix, pharynx, or rectum. It can cause pelvic pain, pain on urination, penile and vaginal discharge, as well as systemic symptoms. It can also cause severe reproductive complications. Gamma proteobacteria Enterobacteriaceae As of 2013 hard-to-treat or untreatable infections of carbapenem-resistant Enterobacteriaceae (CRE), also known as carbapenemase-producing Enterobacteriaceae (CPE), were increasing among patients in medical facilities. CRE are resistant to nearly all available antibiotics. Almost half of hospital patients who get bloodstream CRE infections die from the infection. Klebsiella pneumoniae Klebsiella pneumoniae carbapenemase (KPC)-producing bacteria are a group of emerging highly drug-resistant Gram-negative bacilli causing infections associated with significant morbidity and mortality, whose incidence is rapidly increasing in a variety of clinical settings around the world. Klebsiella pneumoniae was identified as one of six leading pathogens for disease associated with resistance in 2019, a year in which there were 642,000 deaths globally of people with drug-resistant infections from the pathogen. Klebsiella pneumoniae includes numerous mechanisms for antibiotic resistance, many of which are located on highly mobile genetic elements. Carbapenem antibiotics (heretofore often the treatment of last resort for resistant infections) are generally not effective against KPC-producing organisms. Salmonella and E. coli Infection with Escherichia coli and Salmonella can result from the consumption of contaminated food and polluted water. Both of these bacteria are well known for causing nosocomial (hospital-linked) infections, and often, the strains found in hospitals are antibiotic resistant because of adaptations to widespread antibiotic use. When these bacteria spread, serious health conditions arise. 
Many people are hospitalized each year after becoming infected, with some dying as a result. Since 1993, some strains of E. coli have become resistant to multiple types of fluoroquinolone antibiotics. E. coli was identified as one of the six leading pathogens for deaths associated with resistance in 2019, a year in which there were 829,000 deaths globally of people with drug-resistant infections from the pathogen. Although mutation alone plays a huge role in the development of antibiotic resistance, a 2008 study found that high survival rates after exposure to antibiotics could not be accounted for by mutation alone. This study focused on the development of resistance in E. coli to three antibiotic drugs: ampicillin, tetracycline, and nalidixic acid. The researchers found that some antibiotic resistance in E. coli developed because of epigenetic inheritance rather than by direct inheritance of a mutated gene. This was further supported by data showing that reversion to antibiotic sensitivity was relatively common as well, which could only be explained by epigenetics. Epigenetics is a type of inheritance in which gene expression is altered rather than the genetic code itself. There are many modes by which this alteration of gene expression can occur, including methylation of DNA and histone modification; however, the important point is that both the inheritance of random mutations and epigenetic markers can result in the expression of antibiotic resistance genes. Resistance to polymyxins first appeared in 2011. A plasmid-borne gene that makes it easier for this resistance to spread, known as mcr-1, was discovered in 2015. Pseudomonadales Acinetobacter Acinetobacter is a genus of gram-negative bacteria that causes pneumonia or bloodstream infections in critically ill patients. Multidrug-resistant Acinetobacter strains have become very resistant to antibiotics. Acinetobacter baumannii was identified as one of the six leading pathogens for deaths associated with resistance in 2019, a year in which there were 423,000 deaths globally of people with drug-resistant infections from the pathogen. On November 5, 2004, the Centers for Disease Control and Prevention (CDC) reported an increasing number of Acinetobacter baumannii bloodstream infections in patients at military medical facilities in which service members injured in the Iraq/Kuwait region during Operation Iraqi Freedom and in Afghanistan during Operation Enduring Freedom were treated. Most of these showed multidrug resistance (MRAB), with a few isolates resistant to all drugs tested. Pseudomonas aeruginosa Pseudomonas aeruginosa is a highly prevalent opportunistic pathogen. It was identified as one of the six leading pathogens for deaths associated with resistance in 2019, a year in which there were 334,000 deaths globally of people with drug-resistant infections from the pathogen. One of the most worrisome characteristics of P. aeruginosa is its low antibiotic susceptibility, which is attributable to a concerted action of multidrug efflux pumps with chromosomally encoded antibiotic resistance genes (e.g., mexAB-oprM, mexXY) and the low permeability of the bacterial cellular envelopes. P. aeruginosa has the ability to produce 4-hydroxy-2-alkylquinolines (HAQs), and it has been found that HAQs have prooxidant effects, and that overexpressing them modestly increases susceptibility to antibiotics. One study experimented with P. 
aeruginosa biofilms and found that disruption of the relA and spoT genes inactivated the stringent response (SR) in cells under nutrient limitation, making the cells more susceptible to antibiotics. See also Antimicrobial resistance References External links Animation of Antibiotic Resistance CDC Guideline "Management of Multidrug-Resistant Organisms in Healthcare Settings, 2006" Antimicrobial Stewardship Project, at the Center for Infectious Disease Research and Policy (CIDRAP), University of Minnesota Evolutionary biology Health disasters Pharmaceuticals policy Veterinary medicine Global issues Infectious disease-related lists
List of antibiotic-resistant bacteria
[ "Biology" ]
3,185
[ "Evolutionary biology", "Bacteria", "Antibiotic-resistant bacteria" ]
55,869,410
https://en.wikipedia.org/wiki/Oracle%20Cloud%20Platform
Oracle Cloud Platform refers to the Platform as a Service (PaaS) offerings of Oracle Corporation, provided as part of Oracle Cloud Infrastructure. These offerings are used to build, deploy, integrate and extend applications in the cloud. The offerings support a variety of programming languages, databases, tools and frameworks, including Oracle-specific, open-source and third-party software and systems. Deployment models Oracle Cloud Platform offers public, private and hybrid cloud deployment models. Architecture Oracle Cloud Platform provides both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). The infrastructure is offered through a global network of Oracle-managed data centers. Oracle deploys its cloud in Regions. Inside each Region are at least three fault-independent Availability Domains. Each of these Availability Domains contains an independent data center with power, thermal and network isolation. Oracle Cloud is generally available in North America, EMEA, APAC and Japan, with announced South America and US government regions coming soon. See also Platform as a service Oracle Cloud (including Oracle Cloud Infrastructure) Oracle Advertising and Customer Experience (CX) Oracle Cloud Enterprise Resource Planning (ERP) Oracle Cloud Human Capital Management (HCM) Oracle Cloud Supply Chain Management (SCM) Further reading Ovum Decision Matrix: Selecting a Cloud Platform for Hybrid Integration Vendor, 2019-20 References External links Official website Oracle Corporation Cloud computing Cloud computing providers Cloud platforms Cloud infrastructure Oracle Cloud Services
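The Region/Availability Domain structure described above is visible programmatically. The following is a minimal sketch using the OCI Python SDK (the oci package), assuming a tenancy configured in the default ~/.oci/config profile:

```python
# Minimal sketch: list the Availability Domains visible to a tenancy,
# using the OCI Python SDK (pip install oci). Assumes a valid profile
# in ~/.oci/config; the tenancy OCID is read from that configuration.
import oci

config = oci.config.from_file()                 # default location and profile
identity = oci.identity.IdentityClient(config)

# Availability Domains are scoped to the Region the client talks to.
response = identity.list_availability_domains(config["tenancy"])
for ad in response.data:
    print(ad.name)  # e.g. three AD names in a multi-AD Region
```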
Oracle Cloud Platform
[ "Technology" ]
283
[ "Cloud infrastructure", "Cloud platforms", "Computing platforms", "IT infrastructure" ]
55,870,791
https://en.wikipedia.org/wiki/Functional%20accounts%20of%20emotion
A functional account of emotions posits that emotions facilitate adaptive responses to environmental challenges. In other words, emotions are systems that respond to environmental input, such as a social or physical challenge, and produce adaptive output, such as a particular behavior. Under such accounts, emotions can manifest in maladaptive feelings and behaviors, but they are largely beneficial insofar as they inform and prepare individuals to respond to environmental challenges, and play a crucial role in structuring social interactions and relationships. Researchers who subscribe to a functional perspective of emotions disagree as to whether to define emotions and their respective functions in terms of evolutionary adaptation or in terms of socially constructed concepts. However, the goal of a functional account of emotions is to describe why humans have specific emotions, rather than to explain what exactly constitutes an emotion. Thus, functionalists generally agree that in order to infer the functions of specific emotions, researchers should examine the causes, or input, and consequences, or output, of those emotions. The events that elicit specific emotions and the behavioral manifestations of those emotions can vary significantly based on individual and cultural context. Thus, researchers claim that a functional account of emotions should not be understood as a rigid input–output system, but rather as a flexible and dynamic system that interacts with an individual's goals, experiences, and environment to adaptively shape individuals' emotional processing and responding. History Historically, emotions were primarily understood and studied in terms of their maladaptive consequences. For example, Stoicism, an Ancient Greek tradition of philosophy, described how most emotions, particularly negative emotions like anger, are irrational and prevent people from achieving inner peace. Early psychologists followed this approach, often describing how emotions interfere with rational deliberation and can lead to reckless behaviors that risk well-being or relationships. Around the 1960s, however, the focus of emotions research began shifting towards the beneficial consequences of emotions, and a growing body of psychological research contributed to understanding emotions as functional. For example, emotions structure relationships by facilitating bonding that promotes survival. Additionally, the expression of emotions can coordinate group behavior, thus promoting cooperation and collaboration. Interdisciplinary research in fields such as cultural psychology, sociology, and anthropology found that sociocultural norms often interact with and even emerge from individual and collective emotional experiences, providing further support for the role of emotions in organizing social life. While some researchers maintained that emotions may once have been functional but are no longer necessary in the present environment, many researchers began to adopt the now-dominant view that emotions are systems that aim to provide solutions to problems in the present-day environment. Emotion functions A functional account of any system assesses its specific function in terms of the factors that elicit the activation of that system, and the changes that follow the activation of that system. Importantly, not every cause and consequence of a system pertains to its primary function; the primary function is the specific purpose that the system fulfills. 
For example, tools have specific functions that are defined in terms of why the tool has certain features and the problem that it typically solves. So, while a pair of scissors can be used as a weapon or a paperweight, the sharp blades of scissors were designed to cut, and the problem that scissors typically solve is the need to cut something. Thus, the primary function of scissors is to cut. Functional accounts of emotion similarly define the functions of specific emotions in terms of why those emotions are associated with certain features, such as particular bodily and cognitive changes, as well as the environmental problem that the emotion helps to solve. For example, why is anger typically associated with an increase in heart rate and the desire to approach the source of anger? When people become angry in response to an environmental problem, how does it help them change their environment in a way that benefits them? Emotion researchers attempt to answer such questions in relation to various prominent emotions, including negative emotions such as sadness, embarrassment, and fear, and positive emotions such as love, amusement, and awe. In order to identify the primary function of each emotion, researchers investigate its intrapersonal functions, or how emotions function at the level of the individual to help them navigate their surroundings, and interpersonal functions, or how emotions function at the group level to facilitate efficient communication, cooperation, and collaboration. Intrapersonal functions In investigating the intrapersonal functions of emotions, or how emotions help individuals navigate and respond to their environments, researchers typically document the physiological changes, subjective experiences, and behavioral motivations associated with different emotions. For example, anger is associated with high arousal, feelings of disapproval or dissatisfaction with some event, and the motivation to express that disapproval or take action against the source of dissatisfaction. Given how emotional responses affect individual experience and behavior, researchers describe the intrapersonal function of specific emotions in terms of how they inform and prepare individuals to respond to a particular environmental challenge. For example, feeling anger usually informs individuals of something unjust in the environment, such as betrayal from a loved one, threats of physical violence from a bully, or corruption. Anger is associated with blood flow in the body shifting away from internal organs towards the limbs, physiologically preparing individuals for movement towards the cause of anger. Even when locomotion or physical confrontation is not required to address an unjust actor or event, the high arousal and emotional sensitivity associated with anger tend to motivate individuals to confront the issue. Emotional responses tend to diminish once the emotion elicitor, or the environmental cause of the emotion, changes, suggesting that emotions at the individual level function to evoke some sort of action or behavior to address the elicitor. For example, anger typically diminishes following an apology or the perception that justice has been restored. Interpersonal functions A crucial aspect of how emotions help individuals adaptively navigate the world is tied to their interpersonal functions, or how they influence social interactions and relationships. 
Emotional expressions, such as a smile or a frown, are relatively involuntary, so they can provide a fairly reliable source of information about a person's emotions, beliefs, and intentions to those around them. The communication of such information is crucial for structuring social relationships, and for negotiation and cooperation within groups, because it conveys not only how people are thinking and feeling, but also how they are likely to behave. This information can in turn guide how other people think, feel, and behave towards those expressing their emotions. For example, emotional expressions can evoke complementary emotional responses, such as fear in response to anger, or guilt in response to disappointment. They can also evoke reciprocal emotions, such as empathy or love. Thus, emotions play a crucial role in conveying valuable information in social interactions that can rapidly coordinate group behavior even in the absence of explicit verbal communication. Given this communicative role of emotions, emotions facilitate learning by serving as incentives or deterrents for certain kinds of actions or behaviors. For example, when children see how their parents or friends emotionally respond to things they do, they learn what types of actions and behaviors are likely to lead to desirable outcomes, including positive emotional responses from those around them. This communicative role is important in informing how people behave in both professional and intimate adult human relationships, since emotions can convey how a particular relationship or interaction is evolving in positive or negative directions. For example, anger can signal that an individual or group has reached its limit within a negotiation, and can immediately structure the behavioral responses from the opposite party. Meanwhile, sadness can communicate the readiness to disengage from a goal, and the potential for social withdrawal from a person or group, thereby conveying that a potentially valuable relationship is at risk. Emotions have also been found to play a role in organizing group identity insofar as shared emotional experiences tend to strengthen communal identity, in-group solidarity, and cultural identity. Furthermore, emotions play a role in defining and identifying an individual's role within a group, such that the specific role that an individual assumes (e.g., nurturing, protecting, leading) is associated with the expression of particular emotions, such as sympathy, anger, fear, or embarrassment. Negative and positive emotion Researchers who adopt a functional perspective of emotions have devoted attention to several prevalent emotions. For example, research suggests that the function of anger is to correct injustice, the function of sadness is to disengage from an unattainable goal, the function of embarrassment is to appease others, and the function of fear is to avoid danger. The focus of emotions research for some time was on negative emotions, with positive emotions primarily being understood as "undoing" the arousing effects of negative emotion. In other words, while negative emotions increase arousal to help individuals address an environmental problem, positive emotions quell that arousal to return an individual to baseline. While positive emotions can return individuals to baseline following a negative emotional experience, for example joy after an angering event has been addressed, or amusement that distracts from sadness, positive emotions themselves can increase arousal from baseline. 
Thus, a growing body of literature describes the distinct functions of positive emotions. For example, research suggests that the function of romantic love is to facilitate mating, the function of amusement is to facilitate play, which encourages learning, and the function of awe is to accommodate new information. Variability Emotions are highly personal insofar as they play a critical role in defining an individual's subjective experiences and interact with how individuals think about and judge the world around them. Since individuals differ in their personal goals and past experiences, individuals within one society or group can vary greatly in how they experience and express specific emotions. Emotions are also highly social insofar as they facilitate communication and often arise in response to the actions or feelings of other people. Given their highly social nature, the ways that emotions are experienced and expressed, and the specific roles that they play in structuring interactions and relationships, can vary significantly according to social and cultural context. For example, research investigating cultural differences in facial expressions found that East Asian models of anger show characteristic early signs of emotional intensity with the eyes, which are under less voluntary control than the mouth, as compared with Western Caucasian models. Such findings suggest that contextual factors such as a particular society's display rules may directly modulate both how an emotion is expressed, and how it is perceived and responded to by others. Furthermore, some emotions are generally experienced less in certain societies. For example, anger is not frequently reported amongst Utku Eskimos. Given this immense variation in how individuals experience and express emotions, functionalists emphasize the dynamic quality of emotion systems. Under a functional account, emotion systems process feedback from the environment about when and how various emotions are likely to serve adaptive functions in a specific environment. In other words, emotion systems are flexible and can incorporate information that an individual learns across their lifespan to modify how the system operates. Furthermore, emotions interact with cognition such that how an individual learns and thinks about their own emotions can affect how they experience and express emotions. Relation to mental illness There are cases when an emotion, for example a constantly excessive level of anxiety, actually inhibits life functions rather than facilitating them. This is sometimes regarded as part of a mental illness. References Emotion Social psychology Subjective experience
Functional accounts of emotion
[ "Biology" ]
2,262
[ "Emotion", "Behavior", "Human behavior" ]
55,870,946
https://en.wikipedia.org/wiki/Counterfeit%20illegal%20drug%20selling
Selling counterfeit illegal drugs is a crime in many U.S. states' legal codes and in the federal law of the United States. The fake drugs are sometimes termed imitation controlled substances. Relation to drug-related crimes The law rarely punishes fraud among illicit drug traders; however, it is likely that informal social control among traders reduces the likelihood of fraud between illegal trade partners. For instance, a dealer's short-term increase in profits from fraudulent behavior may not justify the risk of being robbed or of losing a business contact. Legal status Selling counterfeit illicit drugs is illegal even if the substances used to make the imitation drug are not illegal in themselves. It is illegal to distribute or sell counterfeit illicit drugs in many U.S. states including Nevada, Ohio, Illinois, Florida, Michigan and Massachusetts. U.S. Federal Law Selling counterfeit illicit drugs is illegal under U.S. federal law. Relevant parts of U.S. federal law include 21 U.S.C. Section 331 and 18 U.S. Code § 1001. 21 U.S.C. Section 331 makes it illegal to sell an adulterated or misbranded drug in interstate commerce. 18 U.S. Code § 1001 bans falsifying, concealing or covering up a material fact; making any materially false, fictitious or fraudulent statement or representation; or making or using any false writing or document knowing that it contains materially false, fictitious or fraudulent statements. Deaths Europe Amsterdam On 25 November 2014 two British tourists aged 20 and 21 died in a hotel room in Amsterdam, after snorting white heroin that was sold as cocaine by a street dealer. The bodies were found less than a month after another British tourist died in similar circumstances. At least 17 other people have received medical treatment after taking the white heroin. Sweden Nine deaths occurred in Sweden during 2010–11 related to the use of Krypton, a mixture of kratom, caffeine and O-desmethyltramadol, a metabolite of the opioid analgesic tramadol. See also Unclean hands References Causes of death Crime Drug control law Drug policy History of drug control Illegal drug trade
Counterfeit illegal drug selling
[ "Chemistry" ]
449
[ "Drug control law", "Regulation of chemicals" ]
55,872,170
https://en.wikipedia.org/wiki/Michael%20Hengartner
Michael Otmar Hengartner (born 5 June 1966, St. Gallen, Switzerland) is a Swiss-Canadian biochemist and molecular biologist. Since February 2020 he has been president of the ETH Board. Before that he was the president of the University of Zurich and president of the Swiss Rectors' Conference, swissuniversities. Early life and education Hengartner was born in 1966, the son of a Swiss mathematics professor. The family moved first to Paris, later to Bloomington, Indiana and then to Quebec City, where he grew up. He studied biochemistry at the Université Laval in Quebec and graduated with a B.S. in 1988. He received a doctorate in 1994 from the Massachusetts Institute of Technology under H. Robert Horvitz. He then led a research group at the Cold Spring Harbor Laboratory. Career In 1997, he co-founded the biotech company Devgen. In 2001, he was appointed to the newly established Ernst Hadorn Endowed Professorship at the Institute of Molecular Biology of the University of Zurich. In 2008, he co-founded the scientific consultancy company Evaluescience. From 2009 to 2014, he was Dean of the Faculty of Mathematics and Natural Sciences of the University of Zurich; and from 2014 to 2019, rector of the University of Zurich. Since 2019, he has been director of the ETH Board and since 2009, a member of the National Academy of Sciences Leopoldina. On February 1, 2020, he took up his position as President of the ETH Board. In August 2020 Hengartner spoke about the Swiss lack of courage to "think big" despite heavy investment in education and basic research. He affirmed ETH Zurich's efforts to foster a culture of innovation and to bring innovative products to market maturity faster, especially in the areas of digitization and climate. In November 2021 Hengartner highlighted concerns about the consequences of brain drain from Switzerland. In January 2022, Hengartner drew attention to the first consequences of Switzerland's exclusion from Horizon Europe: top Swiss scientists were losing leadership roles in Horizon projects, young scientists were being denied internationally recognised grants, and attracting leading scientists was becoming harder as a result. He emphasised the urgency for the Swiss State Secretariat for Education, Research and Innovation to find alternative ways forward and for politicians to ensure its funding, but he also hoped for an EU agreement by the end of 2022. In February 2022, in the run-up to the Swiss referendum on animal testing, Hengartner stood his ground in a debate, emphasising that Swiss laws ranked human life higher than that of animals, a moral value people either do or do not share, but at the same time he also highlighted the resulting tightrope walk of balancing the benefit of research for humans against the suffering of animals. In October 2022 Hengartner discussed Switzerland's scientific advantage through the high number of well-known scientists residing there, making it an attractive choice for the upcoming generation, but also referred to his concerns over the EU's decision to relegate Switzerland to a "non-associated third country", robbing Switzerland of its position of influence in Horizon Europe, the EU's 7-year scientific research programme, as well as denying it access to future funding from the European Research Council. He is a member of the Swiss National Science Foundation. Private life Hengartner is married to biologist Denise Hengartner. The couple has six children.
Research interests Hengartner researches the molecular basis of apoptosis, primarily using the nematode Caenorhabditis elegans as a model organism. He is also investigating mechanisms of cancer, Alzheimer's disease and other age-related diseases. Awards 2003: Dr. Josef Steiner Cancer Research Prize 2006: National Latsis Prize of Switzerland 2006: Cloëtta Prize 2010: Credit Suisse Award for Best Teaching from the University of Zurich 2016: Honorary doctorate from Université Pierre et Marie Curie and the University of Paris-Sorbonne References External links The Hengartner Lab at the University of Zurich Curriculum vitae National Academy of Sciences Leopoldina Prof. Dr. Michael O. Hengartner, President of the ETH Board Hengartner as rector at the University of Zurich Swiss biochemists Canadian biochemists Molecular biologists 1966 births Université Laval alumni Massachusetts Institute of Technology alumni People associated with the University of Zurich People from St. Gallen (city) Swiss emigrants to Canada Living people
Michael Hengartner
[ "Chemistry" ]
894
[ "Molecular biologists", "Biochemists", "Molecular biology" ]
55,872,661
https://en.wikipedia.org/wiki/Category%20of%20representations
In representation theory, the category of representations of some algebraic structure A has the representations of A as objects and equivariant maps as morphisms between them. One of the basic thrusts of representation theory is to understand the conditions under which this category is semisimple; i.e., whether an object decomposes into simple objects (see Maschke's theorem for the case of finite groups). The Tannakian formalism gives conditions under which a group G may be recovered from the category of representations of it together with the forgetful functor to the category of vector spaces. The Grothendieck ring of the category of finite-dimensional representations of a group G is called the representation ring of G. Definitions Depending on the types of the representations one wants to consider, it is typical to use slightly different definitions. For a finite group G and a field F, the category of representations of G over F has Objects: Pairs (V, f) of vector spaces V over F and representations f of G on that vector space Morphisms: Equivariant maps Composition: The composition of equivariant maps Identities: The identity function (which is an equivariant map). The category is denoted by Rep(G) or Rep_F(G). For a Lie group, one typically requires the representations to be smooth or admissible. For the case of a Lie algebra, see Lie algebra representation. See also: category O. The category of modules over the group ring There is an isomorphism of categories between the category of representations of a group G over a field F (described above) and the category of modules over the group ring F[G], denoted F[G]-Mod. Category-theoretic definition Every group G can be viewed as a category with a single object, where morphisms in this category are the elements of G and composition is given by the group operation; so G is the automorphism group of the unique object. Given an arbitrary category C, a representation of G in C is a functor from G to C. Such a functor sends the unique object to an object, say X, in C and induces a group homomorphism G → Aut(X); see Automorphism group#In category theory for more. For example, a G-set is equivalent to a functor from G to Set, the category of sets, and a linear representation is equivalent to a functor to Vect_F, the category of vector spaces over a field F. In this setting, the category of linear representations of G over F is the functor category G → Vect_F, which has natural transformations as its morphisms. Properties The category of linear representations of a group has a monoidal structure given by the tensor product of representations, which is an important ingredient in Tannaka-Krein duality (see below). Maschke's theorem states that when the characteristic of F doesn't divide the order of G, the category of representations of G over F is semisimple. Restriction and induction Given a group G with a subgroup H, there are two fundamental functors between the categories of representations of G and H (over a fixed field): one is a forgetful functor called the restriction functor Res_H^G and the other, the induction functor Ind_H^G. When G and H are finite groups, they are adjoint to each other, a theorem called Frobenius reciprocity. The basic question is whether the decomposition into irreducible representations (simple objects of the category) behaves well under restriction or induction. The question may be attacked for instance by the Mackey theory. Tannaka-Krein duality Tannaka–Krein duality concerns the interaction of a compact topological group and its category of linear representations.
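The restriction–induction adjunction described in the section above can be written out explicitly. The following is a sketch in standard textbook notation (the symbols Hom, Ind and Res are conventional choices, not notation fixed by this article): for finite groups H ≤ G, a representation W of H and a representation U of G,

\operatorname{Hom}_G\bigl(\operatorname{Ind}_H^G W,\; U\bigr) \;\cong\; \operatorname{Hom}_H\bigl(W,\; \operatorname{Res}_H^G U\bigr)

The isomorphism is natural in both W and U, which is precisely the statement that induction is left adjoint to restriction; this is Frobenius reciprocity in its adjunction form.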
Tannaka's theorem describes the converse passage from the category of finite-dimensional representations of a group G back to the group G, allowing one to recover the group from its category of representations. Krein's theorem in effect completely characterizes all categories that can arise from a group in this fashion. These concepts can be applied to representations of several different structures, see the main article for details. Notes References External links https://ncatlab.org/nlab/show/category+of+representations Representation theory Category theory
Category of representations
[ "Mathematics" ]
834
[ "Mathematical structures", "Fields of abstract algebra", "Category theory", "Categories in category theory", "Representation theory" ]
60,682,233
https://en.wikipedia.org/wiki/Nikiel%27s%20conjecture
In mathematics, Nikiel's conjecture in general topology was a conjectural characterization of the continuous images of compact total orders. The conjecture was first formulated by Jacek Nikiel in 1986. The conjecture was proven by Mary Ellen Rudin in 1999. The conjecture states that a compact topological space is the continuous image of a compact total order if and only if it is a monotonically normal space. Notes Topology Conjectures that have been proved
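In symbols, the proved statement can be sketched as follows (a LaTeX rendering in my own shorthand, writing L for a compact linearly ordered topological space; the article itself fixes no notation):

X \text{ compact}: \quad \bigl(\exists\, L,\ \exists\, f\colon L \to X \text{ continuous and surjective}\bigr) \iff X \text{ is monotonically normal}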
Nikiel's conjecture
[ "Physics", "Mathematics" ]
87
[ "Mathematical theorems", "Topology stubs", "Topology", "Space", "Geometry", "Conjectures that have been proved", "Spacetime", "Mathematical problems" ]
60,682,609
https://en.wikipedia.org/wiki/Windows%20Terminal
Windows Terminal is a multi-tabbed terminal emulator developed by Microsoft for Windows 10 and later as a replacement for Windows Console. It can run any command-line app in a separate tab. It is preconfigured to run Command Prompt, PowerShell, WSL and Azure Cloud Shell Connector, and can also connect to SSH by manually configuring a profile. Windows Terminal comes with its own rendering back-end; starting with version 1.11 on Windows 11, command-line apps can run using this newer back-end instead of the old Windows Console. Since Windows 11 22H2 and Windows Terminal 1.15, Windows Terminal replaces Windows Console as the default. History Windows Terminal was announced at Microsoft's Build 2019 developer conference in May 2019 as a modern alternative for Windows Console, and Windows Terminal's source code first appeared on GitHub on May 3, 2019. The first preview release was version 0.2, which appeared on July 10, 2019. The first stable version of the project (version 1.0) was on May 19, 2020, at which point, Microsoft started releasing preview versions as the Windows Terminal Preview app, which could be installed side-by-side with the stable version. Features Terminal is a command-line front-end. It can run multiple command-line apps, including text-based shells in a multi-tabbed window. It has out-of-the-box support for Command Prompt, PowerShell, and Bash on Windows Subsystem for Linux (WSL). It can natively connect to Azure Cloud Shell. Terminal augments the text-based command experience by providing support for: Notebook tabs, to hold multiple instances in a single window ANSI VT sequence support UTF-8 and UTF-16 (including CJK ideograms and emojis) Hardware-accelerated text rendering via DirectWrite Modern font and font feature support (see below) 24-bit color Window transparency effects Themes, background images and tab color settings Different window modes (e.g. fullscreen mode, focus mode, always on top mode) Split panes Command palette Jump list support Microsoft Narrator compatibility via a User Interface Automation (UIA) tree Support for embedded hyperlinks Copying text to clipboard in HTML and RTF format Mouse input Customizable key bindings Incremental search Terminal shortcut Sixel support Cascadia Code Cascadia Code is a purpose-built monospaced font by Aaron Bell of Saja Typeworks for the new command-line interface. It includes programming ligatures and was designed to enhance the look and feel of Windows Terminal, terminal applications and text editors such as Visual Studio and Visual Studio Code. The font is open-source under the SIL Open Font License and available on GitHub. It is bundled with Windows Terminal since version 0.5.2762.0. See also List of terminal emulators Comparison of command shells Notes References Further reading External links Introductory post Windows Terminal overview Free terminal emulators Windows commands Windows components Microsoft free software Free software programmed in C++ Software using the MIT license 2019 software Windows-only free software
Windows Terminal
[ "Technology" ]
646
[ "Windows commands", "Computing commands" ]
60,682,613
https://en.wikipedia.org/wiki/T%20Leporis
T Leporis (T Lep / HD 32803 / HIP 23636) is a variable star in the constellation of Lepus, the Hare. It is located half a degree from ε Leporis in the sky; its distance is approximately 1,100 light years from the Solar System. It has the spectral type M6ev, and is a Mira variable — as is R Leporis, in the same constellation — whose apparent magnitude varies between +7.40 and +14.30 with a period of 368.13 days. The annual parallax of T Leporis was measured by the Hipparcos mission, but the results were hopelessly imprecise. The parallax from Gaia Data Release 2 is more accurate and yields a correspondingly more precise distance. The distance has also been measured using very-long-baseline interferometry. Mira variables are some of the major sources of molecules and dust in the Universe. With each pulsation, T Leporis expels matter into space, each year losing an amount equivalent to the mass of Earth. Images of T Leporis obtained with the Very Large Telescope interferometer of the European Southern Observatory (ESO) have revealed a shell of gas and dust surrounding the star, whose diameter is some 100 times larger than that of the Sun. Given the great distance at which this class of stars lies, the star's apparent angular diameter — despite its enormous size — is no more than a millionth of the solar apparent angular diameter. See also Mira R Leporis References Lepus (constellation) Mira variables Durchmusterung objects Leporis, T 032803 023636 M-type giants
T Leporis
[ "Astronomy" ]
350
[ "Lepus (constellation)", "Constellations" ]
60,682,761
https://en.wikipedia.org/wiki/Step%20migration
Step migration is a migration pattern conceptualized in 1885 by Ernst Georg Ravenstein, who observed migration as occurring stage by stage as rural inhabitants move closer to urban areas of growth. It is a migration pattern regarded by some scholars to be a widely popular form of international migration in the twenty-first century globalized world. There is a large breadth of study proving the existence of step migration in many international migration patterns, although there is a lack of consensus over its exact specification and measurement. Step migration scholars deem it to be an important international trend that has the power to aid in the design of policy development efforts in both rural and urban areas worldwide. According to Abrahm Lustgarten, Senior Environmental Reporter for ProPublica, in his May 2021 report, step migration – or "stepwise migration" – is a characteristic pattern of migration driven by climate change. Overview Dennis Conway has researched how migration scholars have conceptualized step migration and attempts to clarify these competing definitions into an operational and consistent definition: "...a process of human spatial behavior in which individuals or families embark on a migration path of acculturation which gradually takes them, by way of intermediate steps, from a traditional-rural environment to the modern-urban environment." In 'Laws of Migration', Ravenstein explained how migration could be gradual, occurring step by step geographically. According to Ravenstein, step migration occurred in short distance migration when individuals migrated from rural towns to an urban centre by stepping through intermediate-sized towns. Scholars today see step migration as occurring globally as individuals step up through hierarchies of countries toward their preferred destination. This idea evolved as scholars such as Conway saw step migration as characterised by movements up and down an urban hierarchy. Step migration has been explained as requiring multiple stages of migration due to a failure of the migrant to integrate into the country, such as failure to find work or cultural disconnect, which then prompts another migration. However, there has been confusion in the migration literature over step migration as a concept. The specification of step migration has been contested and there are inconsistencies in definitions by different scholars over the progression of step migration theory. It is still a contested issue as to how relevant step migration is and in what situations it occurs, although it has been stated that it is widespread in Third World migration. Todaro sees multistage migration as explaining labour migration patterns in less developed countries. These patterns are characterised by two stages: the first being the migration of unskilled rural workers to the city, and the second being the attainment of a more permanent job in the urban area. In the nineteenth century especially, the definition of step migration was contested and often inconsistent. Ravenstein's conceptualization focused on the spatial movements of migrants in the United Kingdom, where he did not see this pattern of migration as related to an urban hierarchy, as he saw urban cities as merely the focal point to which migrants were drawn. However, in the 20th and 21st centuries, the conception of step migration has grown to include the notions of step migration as related to an urban hierarchy and being a "spatial manifestation of a social process of adjustment."
Researchers have found that migration often occurs through a 'hierarchy of places' as people migrate in stepwise formation from rural areas up to urban, progressive areas with more opportunities. Other scholars such as Paul see step migration as an option for migrants to overcome structural barriers that prevent them from gaining legal entry to their preferred destinations, by stepping up the hierarchy and accumulating sufficient migrant capital. Scholars have found the existence of step migration in New Zealand, Australia, the Philippines and Arctic Alaska. Significance as a phenomenon Step migration is deemed an increasingly popular migration pattern among students and workers, and part of a wider circulatory transnational migratory movement. Stepwise migration is seen as relevant as a partial contributor to the increase in international migration and as shaping international labour migration, which in turn impacts world politics. Whether or not migration scholars use the exact term 'step migration', many have identified that migration in the globalised age is often no longer only one stage, but a multistage phenomenon. Countries such as Australia, New Zealand, and Canada are increasingly introducing study migration pathways, based on step migration, to attract international students with the aim that these students will later become skilled workers. Step and stage-like migration was also seen to exist in migration flows from Poland to the UK and Germany. Scholars emphasize the economic value that step migration can bring to local economies, and how it creates a reliance on skilled migrants. Around 130,000 international students currently studying in Australia are likely to permanently migrate to Australia after they graduate. In Canada, the number of international students becoming permanent residents is rising; in 2018, 99,410 international students became permanent residents, a significant increase from the previous year. These international students, who migrate to countries through step migration, are valuable as skilled migrants who can help countries address skill shortages and supply skills and expertise in key industries. The multistage nature of international labour migration flows is said to have boosted the economies of the United Kingdom and Ireland, as rising employment means the filling of skill shortages, the boosting of export income, and the lowering of inflation. Australia's economy relies on skilled migrants to form two thirds of its workforce, partly because of step migration through international students who migrate to study and ultimately end up becoming working citizens. Skilled migrants, who include international student graduates, participate in the labour force at higher rates than permanent residents of Australia. According to scholars, international students have become a highly demanded human capital resource, as they are both an education export and a skilled contributor to the economy. Hawthorne explains how the phenomenon of step migration may require nations to reform their policies around study migration pathways and improve the outcomes of international students going through these pathways. Countries are now said to compete for international students by promoting their step and study migration pathways in order to be economically competitive.
Current trends of Migration Step Migration of Filipino Workers Step migration was found by Paul to be used by domestic Filipino workers in the Philippines, Hong Kong and Singapore to gain migrant capital and work their way up a destination hierarchy of countries to gain legal entry into their ultimate preferred destinations, usually in the West. Step migration is a popular phenomenon in the developing world as it is a strategy to overcome barriers to migration. According to Paul, it is a necessary pattern as many of these migrants have low capital and face high cost barriers and immigration policy restrictions which prevent them from migrating to their preferred destinations. Step migration allows migrants to increase the likelihood of reaching the West by undertaking multistage migration, often starting with low-wage countries in the Middle East with few immigration restrictions and working their way through multiple countries toward their ultimate destination. Through this process, they increase their savings and gain work experience and educational qualifications which allow them to qualify for jobs in their preferred destination countries. Migrants use this accumulated cultural, financial and social capital to move on to preferred destinations, and these resources form the reasons given by scholars as to what motivates migration. Multinational mobility of migrant workers is increasingly popular in the global marketplace as migrants follow an upward pattern of destination hierarchies based on opportunities. Paul sees step-wise international labour migrants as those workers who take incremental steps in a hierarchy of labour migrations from country to country and are able to leverage the resources and experience gained. The workers Paul interviewed all started out with menial jobs in their home countries, such as working in fast food, and then left their home country to work as maids, with the ultimate aim of reaching Western countries. However, Western countries are not easily accessible from countries such as the Philippines, and so migrants often go through an intermediary country such as Hong Kong, which has high wage rates for migrant domestic workers and better labour protections than most Asian countries. The time in an intermediary country allows migrants to gain migrant capital, work experience, and good job references, which enable further migration to a Western country. After working for the required number of years in Western countries, migrants can apply for permanent residency, and the process of step migration is complete. Step migration of Chinese migrants in New Zealand Step migration is a popular pattern, and part of a wider circulatory transnational migratory movement, that describes the migration route of many Chinese immigrants. Step migration of Chinese migrants can be seen through New Zealand as a case study. Scholars have identified a pattern in which Chinese migrants use New Zealand as a step destination toward their preferred destination of Australia. The aim behind this particular pattern of step migration is that moving to New Zealand first helps these migrants overcome structural obstacles to migrating to Australia. There has been an increase in New Zealand departures to Australia for employment and lifestyle reasons.
Historically, owing to events such as China's military defeats by the Western world, the Chinese have regarded immigration to Western countries as a move toward personal advancement that can only be undertaken by those who possess a significant amount of human and cultural capital. Emigrating first to New Zealand and gaining permanent residence or citizenship gives migrants a safeguard and a stepping stone toward another destination with better financial and social prospects, such as Australia. There has been a 20 percent increase in migrants who originally moved to New Zealand to live but have since departed New Zealand to migrate elsewhere. In specific regions Step migration in Arctic Alaska Ravenstein's step migration hypothesis was found to exist among the Inupiat peoples in Arctic Alaska, according to scholars evaluating census data models. Arctic Alaska is also an example of a rural and remote location in the United States that migrants move in and out of. Berman, Huskey and Howe discovered that step migration occurred both upward and downward in an urban and rural hierarchy. Step migration is hierarchical in Alaska as people move from rural villages to regional centres. The key factors identified by scholars in the broader migration literature as influencing migration decisions are networks such as family, friends, and community ties. Migrants are said to choose to move because of household and personal characteristics and technological changes, which have significantly impacted migration patterns, as the gap between rural and urban areas can be bridged over networks of communication. Migration can also be impacted by the environment: some residents are undertaking step migration towards urban centres because of coastal erosion in the Arctic, which has increased the risk of storms to rural and coastal villages. According to Berman, Huskey and Howe, step migration is more likely to occur if family and friends are concentrated in an area, and the quality of information about a place declines with distance. Step migration to the European Union The enlargement of the European Union in May 2004 was said to have given migration involving the new member countries of the EU a multistage nature. Kraler and Iglicka analyzed the step migration pattern from Poland to Germany and the United Kingdom. The first stage was the migratory labour movement from Ukraine, a neighbouring country of the EU, to Poland; the second was the movement from Poland to the UK and Germany. Ukrainians migrating to Poland do so due to the growing GDP per person employed in Poland. At the second stage of migration, those who migrated from Poland to Germany did so due to its proximity, the unemployment level in Poland and the level of GDP per capita in Germany. They conclude that the multistage nature of migration seen in the Ukrainian migration to Poland, followed by migration to Germany and the UK, occurred due to the EU enlargement, the free movement of labour and economic factors. Step migration of African Americans post Civil War From the mid-1860s to the 1870s, Congress implemented various measures to reconstruct the South following the Civil War. This was in an effort to rebuild and reunify a deeply segregated society. After the former slaves became free following the Emancipation Proclamation, many ventured up from the South towards the Lower Midwest in search of opportunity.
Step migration was seen in 91.9% of cases of this new migration pre-1890. One reason for this surge in step migration was the lack of Northern newspapers and the common illiteracy in black communities, leading to a reliance on verbal communication for awareness of prospective destinations. Difficulties in transportation also made a single, direct journey a difficult task. During the American Civil War many railway systems were destroyed, which left many rebuilding projects to be undertaken. It wasn't until 1890 that the three states of the Lower Midwest had stable connections to the South. The journey to the North was also unmanageable for most in one trip due to poor conditions and high prices. The trip would take about 30 hours, sitting on wooden benches with only the food and drink one brought aboard. Instead, many African Americans elected to travel via steamboat, which was a cheaper alternative to its railroad counterpart. Even with cheaper prices, many had to work at shipping docks for extended periods of time to accumulate more money to continue the rest of their journey. Step migration was also more commonplace pre-1890 because of the mass numbers of African Americans heading towards the Lower Midwest compared to later time periods. References Anthropology Demographic economics Migration Genetic genealogy Demography Human migrations Urbanization
Step migration
[ "Environmental_science" ]
2,659
[ "Demography", "Environmental social science" ]
60,682,886
https://en.wikipedia.org/wiki/Yabiji
Yabiji, also known as Yaebiji, Yaebise, Yaebishi, and Yapiji and appearing on some historical Western nautical charts as Providence Reef, is the largest coral reef group in Japan. Located in the Miyako Islands, it contains over 100 coral reefs. Known for being above water level several times a year, most visibly around March 3 of the lunar calendar (around the beginning of April in the Gregorian calendar), it was made a Natural Monument of Japan in 2013. Geography Centered roughly around , Yabiji stretches across an area between and north of Ikema Island in the Miyako Islands in Okinawa Prefecture in the Ryukyu Islands. It measures about from north to south and from east to west. It is formed by a cluster of smaller coral reefs (reef groups) centered on eight large coral reefs that form what is known as the Main Reef (or Great Reef). The total number of coral reefs is over 100. Most of the reefs that make up Yabiji are named after various parts of the human body, such as , meaning head, meaning torso, or body in Ryukyuan, and meaning stomach. Other reefs in Yabiji are named for animals, for example, , meaning parrotfish. The reefs also have local names, such as , meaning "exile," and , meaning "flea," and there are about 130 of these names. The Main Reef extends from north to south with the largest of its eight reefs, Du ("Torso"), at its center, and parallel to it are two to three rows of table reefs, other reefs, and shoals. Yabiji's structure is geographically consistent with the direction of the geological structure of the Miyako Islands, and it is estimated that the lower part of Yabiji has the same geology as the Miyakos. There is a strong possibility that the lower part of Yabiji was dry land during the Last Glacial Period. Yabiji is usually below the sea surface, but some parts emerge above the surface during low spring tides. In particular, during the period from spring to summer when the tidal range is large, there are some reefs that appear on the sea surface like islands, leading to Yabiji sometimes being called the "Phantom Continent." The greatest tidal difference of the year is on March 3 of the lunar calendar (around early April in the Gregorian calendar). Yabiji's intermittent land area during low tides is approximately one-tenth that of Miyako Island, which has an area of . Including reefs and reef slopes that never rise above the sea surface, Yabiji's area is approximately one-third that of Miyako Island. Name There are various views on the origin of the name Yabiji. According to one explanation, the name is derived from Yabiji being made up of eight main coral reefs, while another claims that the name is based on Yabiji being formed from layers of reefs. There are variations of Yabiji's name in different localities. The reef group is called "Yabiji" on Ikema Island and in the Shimajiri area of Miyakojima, but in the Karimata area of Miyakojima it is called "Yapiji." It also sometimes is called or transliterated as "Yaebiji," "Yaebise," and "Yaebishi." In 1999, the Geospatial Information Authority of Japan called for standardization of the name upon the publication of a 1:25,000-scale topographic map of Yabiji, and the city of Hirara (which in 2005 became part of the city of Miyakojima) decided to make "Yabiji" the official name because that was the name used on Ikema Island, the island closest to Yabiji, and also because municipal events traditionally used the name "Yabiji" as well.
Culture and recreation Each year on March 3 of the lunar calendar (around the beginning of April in the Gregorian calendar), when the largest annual tidal difference takes place in Yabiji, a traditional event known in the Miyako Islands as , meaning "going down the beach," takes place in which women gather to drive off bad luck and ward off evil spirits. Yabiji is a good fishing ground because it is home to coral colonies. It is a popular spot for freediving, scuba diving, snorkeling, and recreational fishing. Flora and fauna Yabiji's reefs contain about 300 kinds of coral. History Fishing had apparently begun in Yabiji by 1350. A 1697 map depicted Yabiji with great precision and included its name and an explanation that its range is one and a half ri from east to west and five ri from north to south. On May 16, 1797, the British Royal Navy sloop-of-war HMS Providence was surveying East Asia under the command of William Robert Broughton when she ran aground on a coral reef at the northwestern tip of Yabiji, flooded from the bottom, and sank. After this, nautical charts began to refer to Yabiji as "Providence Reef." While cruising in the waters of the Miyako Islands on June 7, 1902, an Imperial Japanese Navy destroyer ran aground in Yabiji. Protected cruisers, including Saien, and a destroyer came to her assistance. Eventually refloated, she reached Sasebo, Japan on August 5, 1902. In 1983, the Miyako Ferry company and the Hayate Company began holding tours of Yabiji in conjunction with the annual Yabiji Festival in which their ferries landed tourists on the reefs that emerged above the sea surface during low tides for several days before and after Sanitsu (March 3 on the lunar calendar, around the beginning of April in the Gregorian calendar). During these tours between 2,000 and 3,000 people landed on Yabiji's emergent reefs every year through 2014. The tourist landings faced criticism that they would lead to the devastation of the Yabiji ecosystem, and countermeasures were taken such as the pilot introduction of coral reef guides and efforts to formulate guidelines for the landings. In response to a request from the city of Hirara (since 2005 a part of the city of Miyakojima), the Geospatial Information Authority of Japan published a 1:25,000 topographic map of Yabiji on December 1, 1999. It was the first map to color-code the coral reefs that appear above the sea surface at maximum low tide and the coral reefs that are below the sea surface. It sometimes is credited as the first topographic map in Japan that included only tidal flats, without dry land, but it also included Ikema Island and other land areas to clearly show Yabiji's geographic relationship with land areas. The creation of this map prompted the city of Hirara to choose the name "Yabiji" as the official name of the reef group in 1999 at the request of the Geospatial Information Authority of Japan. In December 2001, Japan's Ministry of the Environment listed Yabiji as one of Japan's 500 important wetlands. In 2008, a diving survey conducted by the Okinawa Prefectural Buried Cultural Properties Center found the remains of a foreign ship, believed to be HMS Providence. Iron ingots believed to be cargo or belongings of Providence's crew, ceramics from China and Europe, glass bottles, and glass beads have been discovered at the shipwreck site. On August 1, 2008, the Geospatial Information Authority of Japan published a re-edited version of the 1:25,000 topographic map of Yabiji that included Fude Rock.
Yabiji also is included in the 1:50,000 topographic map of northern Miyako Island. On March 27, 2013, the Government of Japan's Agency for Cultural Affairs recognized Yabiji as a place of great importance, an "excellent coastal scenic landscape that has been enjoyed for its connection with Miyakojima's unique lifestyle culture" and "the largest coral reef group in Japan," and designated it as a Place of Scenic Beauty (名勝, meishō) and a Natural Monument (記念物, kinenbutsu) of Japan. In 2014, Fude Rock and its surrounding waters were added to the Yabiji natural monument. The last landing tours of Yabiji took place in 2014. With the opening of the Irabu Ohashi Bridge between Miyako Island and Irabu Island in January 2015, both Miyako Ferries and Hayate discontinued their operation of regular ferry service between Hirara Port in Miyakojima on Miyako Island and Sarahama Port on Irabu and sold their ferries, bringing the landing tours to an end before the low-tide season of 2015. In April 2016, the Japanese Ministry of the Environment upgraded Yabiji's status on the list of Japan's wetlands, making it a wetland of high importance from a biodiversity perspective. Between 2008 and 2018, the living part of the reef decreased by an estimated 70% due to coral bleaching resulting from rising water temperatures in Yabiji. Access Private operators conduct snorkeling and recreational diving tours of Yabiji. In addition, from April to August of each year the Hayate Company cruise ship Mont Blanc conducts tours of Yabiji about twice a month, coinciding with the emergence of some of Yabiji's reefs above the sea surface during the spring and summer tides, although landing on Yabiji during these periods is not possible. References External links Coral reef deemed natural treasure dying at alarming rate (in Japanese) on YouTube Coral reefs Ryukyu Islands Natural monuments of Japan
Yabiji
[ "Biology" ]
1,990
[ "Biogeomorphology", "Coral reefs" ]
60,682,970
https://en.wikipedia.org/wiki/List%20of%20SMTP%20server%20return%20codes
This is a list of Simple Mail Transfer Protocol (SMTP) response status codes. Status codes are issued by a server in response to a client's request made to the server. Unless otherwise stated, all status codes described here are part of the current SMTP standard. The message phrases shown are typical, but any human-readable alternative may be provided. Basic status code A "Basic Status Code" SMTP reply consists of a three digit number (transmitted as three numeric characters) followed by some text. The number is for use by automata (e.g., email clients) to determine what state to enter next; the text ("Text Part") is for the human user. The first digit denotes whether the response is good, bad, or incomplete: 2yz (Positive Completion Reply): The requested action has been successfully completed. 3yz (Positive Intermediate Reply): The command has been accepted, but the requested action is being held in abeyance, pending receipt of further information. 4yz (Transient Negative Completion Reply): The command was not accepted, and the requested action did not occur. However, the error condition is temporary, and the action may be requested again. 5yz (Permanent Negative Completion Reply): The command was not accepted and the requested action did not occur. The SMTP client SHOULD NOT repeat the exact request (in the same sequence). The second digit encodes responses in specific categories: x0z (Syntax): These replies refer to syntax errors, syntactically correct commands that do not fit any functional category, and unimplemented or superfluous commands. x1z (Information): These are replies to requests for information. x2z (Connections): These are replies referring to the transmission channel. x3z : Unspecified. x4z : Unspecified. x5z (Mail system): These replies indicate the status of the receiver mail system. Enhanced status code The Basic Status Codes have been in SMTP from the beginning, with RFC 821 in 1982, but were extended rather extensively and haphazardly, so that by 2003 it was rather grumpily noted that: "SMTP suffers some scars from history, most notably the unfortunate damage to the reply code extension mechanism by uncontrolled use." A later standard therefore defines a separate series of enhanced mail system status codes which is intended to be better structured, consisting of three numerical fields separated by ".", as follows: class "." subject "." detail class = "2" / "4" / "5" subject = 1 to 3 digits detail = 1 to 3 digits The classes are defined as follows: 2.XXX.XXX Success: Report of a positive delivery action. 4.XXX.XXX Persistent Transient Failure: Message as sent is valid, but persistence of some temporary conditions has caused abandonment or delay. 5.XXX.XXX Permanent Failure: Not likely to be resolved by resending the message in current form. In general the class identifier MUST match the first digit of the Basic Status Code to which it applies. The subjects are defined as follows: X.0.XXX Other or Undefined Status X.1.XXX Addressing Status X.2.XXX Mailbox Status X.3.XXX Mail System Status X.4.XXX Network and Routing Status X.5.XXX Mail Delivery Protocol Status X.6.XXX Message Content or Media Status X.7.XXX Security or Policy Status The meaning of the "detail" field depends on the class and the subject; the defined combinations are listed in the relevant standards and the IANA registry. A server capable of replying with an Enhanced Status Code MUST preface (prepend) the Text Part of SMTP Server responses with the Enhanced Status Code followed by one or more spaces. For example, the "221 Bye" reply (after QUIT command) MUST be sent as "221 2.0.0 Bye" instead.
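Because the reply grammar above is mechanical (a three-digit basic code, optionally an enhanced class.subject.detail code, then free text), a client can classify server responses automatically. Below is a minimal illustrative sketch in Python; the regular expression and the function and field names are my own for illustration and are not part of any SMTP library or standard:

import re

# One SMTP reply line: basic code, then "-" for a continued multiline reply
# or " " for the final line, then an optional enhanced code, then the text.
REPLY_RE = re.compile(
    r"^(?P<basic>[2-5]\d{2})"                        # basic status code
    r"(?P<sep>[ -])"                                 # "-" means more lines follow
    r"(?:(?P<enhanced>[245]\.\d{1,3}\.\d{1,3}) )?"   # optional enhanced code
    r"(?P<text>.*)$"
)

def parse_reply_line(line):
    """Split one SMTP reply line into basic code, enhanced code, and text."""
    m = REPLY_RE.match(line)
    if m is None:
        raise ValueError("malformed SMTP reply line: %r" % line)
    basic = m.group("basic")
    return {
        "basic": basic,
        # First digit: 2 success, 3 intermediate, 4 transient failure
        # (may retry), 5 permanent failure (should not retry as-is).
        "retry": basic[0] == "4",
        "last_line": m.group("sep") == " ",
        "enhanced": m.group("enhanced"),  # None when no enhanced code is sent
        "text": m.group("text"),
    }

print(parse_reply_line("221 2.0.0 Bye"))
# -> {'basic': '221', 'retry': False, 'last_line': True, 'enhanced': '2.0.0', 'text': 'Bye'}

Run against a multiline reply such as "250-dbc.mtview.ca.us says hello", the same sketch reports last_line as False and no enhanced code, matching the grammar described above.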
The Internet Assigned Numbers Authority (IANA) maintains the official registry of these enhanced status codes. Common status codes This section lists some of the more commonly encountered SMTP Status Codes. This list is not exhaustive, and the actual text message (outside of the 3-field Enhanced Status Code) might be different. — 2yz Positive completion 211 System status, or system help reply 214 Help message (A response to the HELP command) 220 <domain> Service ready 221 <domain> Service closing transmission channel 221 2.0.0 Goodbye 235 2.7.0 Authentication succeeded 240 QUIT 250 Requested mail action okay, completed 251 User not local; will forward 252 Cannot verify the user, but it will try to deliver the message anyway — 3yz Positive intermediate 334 (Server challenge - the text part contains the Base64-encoded challenge) 354 Start mail input — 4yz Transient negative completion "Transient Negative" means the error condition is temporary, and the action may be requested again. The sender should return to the beginning of the command sequence (if any). The exact meaning of "transient" must be agreed upon by the two different sites (receiver- and sender-SMTP agents). Each reply in this category might have a different time value, but the SMTP client SHOULD try again. 421 Service not available, closing transmission channel (This may be a reply to any command if the service knows it must shut down) 432 4.7.12 A password transition is needed 450 Requested mail action not taken: mailbox unavailable (e.g., mailbox busy or temporarily blocked for policy reasons) 451 Requested action aborted: local error in processing 451 4.4.1 IMAP server unavailable 452 Requested action not taken: insufficient system storage 454 4.7.0 Temporary authentication failure 455 Server unable to accommodate parameters — 5yz Permanent negative completion The SMTP client SHOULD NOT repeat the exact request (in the same sequence). Even some "permanent" error conditions can be corrected, so the human user may want to direct the SMTP client to reinitiate the command sequence by direct action at some point in the future.
500 Syntax error, command unrecognized (This may include errors such as command line too long) 500 5.5.6 Authentication Exchange line is too long 501 Syntax error in parameters or arguments 501 5.5.2 Cannot Base64-decode Client responses 501 5.7.0 Client initiated Authentication Exchange (only when the SASL mechanism specified that client does not begin the authentication exchange) 502 Command not implemented 503 Bad sequence of commands 504 Command parameter is not implemented 504 5.5.4 Unrecognized authentication type 521 Server does not accept mail 523 Encryption Needed 530 5.7.0 Authentication required 534 5.7.9 Authentication mechanism is too weak 535 5.7.8 Authentication credentials invalid 538 5.7.11 Encryption required for requested authentication mechanism 550 Requested action not taken: mailbox unavailable (e.g., mailbox not found, no access, or command rejected for policy reasons) 551 User not local; please try <forward-path> 552 Requested mail action aborted: exceeded storage allocation 553 Requested action not taken: mailbox name not allowed 554 Transaction has failed (Or, in the case of a connection-opening response, "No SMTP service here") 554 5.3.4 Message too big for system 556 Domain does not accept mail Example Below is an example SMTP connection, where a client "C" is sending to server "S": S: 220 smtp.example.com ESMTP Postfix C: HELO relay.example.com S: 250 smtp.example.com, I am glad to meet you C: MAIL FROM:<bob@example.com> S: 250 Ok C: RCPT TO:<alice@example.com> S: 250 Ok C: RCPT TO:<theboss@example.com> S: 250 Ok C: DATA S: 354 End data with <CR><LF>.<CR><LF> C: From: "Bob Example" <bob@example.com> C: To: Alice Example <alice@example.com> C: Cc: theboss@example.com C: Date: Tue, 15 Jan 2008 16:02:43 -0500 C: Subject: Test message C: C: Hello Alice. C: This is a test message with 5 header fields and 4 lines in the message body. C: Your friend, C: Bob C: . S: 250 Ok: queued as 12345 C: QUIT S: 221 Bye {The server closes the connection} And below is an example of an SMTP connection in which the SMTP Server supports the Enhanced Status Code, taken from the specification that introduced the extension: S: 220 dbc.mtview.ca.us SMTP service ready C: EHLO ymir.claremont.edu S: 250-dbc.mtview.ca.us says hello S: 250 ENHANCEDSTATUSCODES C: MAIL FROM:<ned@ymir.claremont.edu> S: 250 2.1.0 Originator <ned@ymir.claremont.edu> ok C: RCPT TO:<mrose@dbc.mtview.ca.us> S: 250 2.1.5 Recipient <mrose@dbc.mtview.ca.us> ok C: RCPT TO:<nosuchuser@dbc.mtview.ca.us> S: 550 5.1.1 Mailbox "nosuchuser" does not exist C: RCPT TO:<remoteuser@isi.edu> S: 551-5.7.1 Forwarding to remote hosts disabled S: 551 5.7.1 Select another host to act as your forwarder C: DATA S: 354 Send message, ending in CRLF.CRLF. ... C: . S: 250 2.6.0 Message accepted C: QUIT S: 221 2.0.0 Goodbye {The server closes the connection} References Internet-related lists
List of SMTP server return codes
[ "Technology" ]
2,105
[ "Computing-related lists", "Internet-related lists" ]
60,691,225
https://en.wikipedia.org/wiki/Camouflage%20%28Chris%20Sievey%20song%29
"Camouflage" is a single released by the English musician and comedian Chris Sievey in 1983. The single is notable for its B-side, which rather than containing another song, contains the audio tones for three programs Sievey created for the Sinclair ZX81 computer. Two programs were for a video game Sievey created called Flying Train, and the other was the code for the music video to "Camouflage". The video claimed that this was "the world's first computer promo". Origin The song was released during a hiatus with Sievey's band The Freshies. The documentary film Being Frank: The Chris Sievey Story features clips from interviews Sievey gave at the time of the song's release, stating that it came about when he went to pay a telephone bill and instead purchased a ZX81. He then learned how to program and created the song, "which used the Cold War as a metaphor for love's frustrations." While the A-side was a conventional single, the B-side contained what sounds like random noise. This noise was actually a series of three programs for the ZX81. Two of these were the 16 kilobyte and 1 kilobyte versions of "Flying Train", a video game he created, which involved having to land a flying train onto a track while avoiding birds. The third program was an animated music video for the song "Camouflage". In order to play the video, the user had to record the B-side onto cassette tape, load the data from the tape into the ZX81, and then run the program while playing the A-side at the same time, so that the music synchronised with the video. Reception "Camouflage" failed to chart when it was released, but retrospectively critics have praised the creation of the single for how forward-thinking it was at the time. Bob Sorokanich writing for Gizmodo said that: "It's a graphic style that artists and musicians find fascinating today, and Sievey's experiment foreshadowed the Enhanced CDs that offered up all kinds of easter eggs when you'd pop them in your computer's CD-ROM drive". Aftermath In 1984, Sievey created the video game The Biz for the ZX Spectrum, a simulation set in the music industry. The B-side of The Biz featured eight songs by Sievey and the debut of his most famous creation, the comic character Frank Sidebottom. Rhys James Jones wrote in The Conversation that: "It's possible to see Sievey's retreat into his Frank disguise as a reaction against "Camouflages failure. But it's telling that the computer press, particularly the Spectrum magazine Crash, adored Frank. Home microcomputing, and the tools surrounding it, helped bring about one of the best-loved pop parodists of the past 30 years." Personnel Chris Sievey – Lead vocals, keyboard Paula and Winifred (The Vizzable Girls) – Backing vocals Mike Doherty – Drums References 1983 computer-animated films 1983 animated short films 1983 singles 1980s music videos Animated music videos British new wave songs EMI Records singles Home computer software Vinyl data 1983 songs
Camouflage (Chris Sievey song)
[ "Engineering" ]
670
[ "Audio engineering", "Vinyl data" ]
60,692,294
https://en.wikipedia.org/wiki/If%20Trees%20Could%20Talk
If Trees Could Talk: Life Lessons from the Wisdom of the Woods is a non-fiction book by American author and podcaster Holly Worton that offers spirituality and self-help through making contact with nature and talking to trees. Summary Part self-help and part spiritual, Worton's If Trees Could Talk is a guide to taking time out to connect with nature, talk to trees, and live a happier and more fulfilled life. The author, who lives in England, believes that "all trees are living, breathing organisms that humans can connect with and talk to on a deeper level through silent, telepathic communication." A druid, coach and healer, Worton writes about the individual trees she has encountered on her many nature walks – each with their own history, character, personality, and story; and she describes the different species of trees, and their place and reverence in pagan ways, such as that of the Order of Bards, Ovates and Druids, and the ancient ogham alphabet, which ascribes a tree to each letter. The book is structured around this description, the stories that each individual tree has to tell, and the advice they have to offer. Interviews On 6 May 2019, Worton was interviewed on ITV's programme This Morning by television presenters Eamonn Holmes and Rochelle Humes, to talk about her new book. In a garden outside the television studios, she also gave the presenters a practical demonstration of how she communicates with trees, with the aid of a sound engineer. On 26 February 2020, Worton was again interviewed on This Morning, by Alison Hammond and Phillip Schofield, discussing the Allerton Oak, the UK's nomination for the European Tree of the Year competition, and communicating with it. Reception The televised item on This Morning attracted a largely humorous and dismissive response on social media, and was reported in several newspapers, including the Daily Mirror, the Birmingham Mail, and Entertainment Daily. Writing in the Daily Express on 13 May 2019, life coach and columnist Carole Ann Rice is, however, more positive about the book. She describes If Trees Could Talk as "a wise and beautiful book coming at the right time for many of us." "Stop rushing, be patient, respect nature, stray from your normal path, be mindful, ask permission [to communicate with the trees]" – these, the reviewer says, are a few of the things that we can learn from the trees, and she advises the reader to "show love and respect to what grows around you" and to "learn some true lessons from the wild side of life." About the author Originally from California in the United States, Worton lived in Spain, Costa Rica, Mexico and Chile before moving to England. She is an author and podcaster, as well as a druid, coach and healer. See also Celtic sacred trees References External links Author's web site Interview on This Morning 2019 non-fiction books Self-help books Nature books Works about trees Books about spirituality Plant communication
If Trees Could Talk
[ "Biology" ]
624
[ "Plant communication", "Plants" ]
60,694,300
https://en.wikipedia.org/wiki/Lia%20Addadi
Lia Addadi (born 1950) is a professor of structural biology at the Weizmann Institute of Science. She works on crystallization in biology, including biomineralization, interactions with cells, and crystallization in cell membranes. She was elected a member of the National Academy of Sciences (NAS) in 2017 for “distinguished and continuing achievements in original research”, and a member of the American Philosophical Society in 2020. Early life and education Addadi was born in Padua. She earned her bachelor's and master's degrees in chemistry at the University of Padua and graduated in 1973. She moved to Rehovot for her PhD at the Weizmann Institute of Science, supervised by Meir Lahav and devoted to the synthesis of chiral polymers, which she completed in 1979. Research and career After her PhD Addadi joined Jeremy R. Knowles at Harvard University. She started to work on crystal growth during her PhD and, by chance, met Steve Weiner, who was working on biomineralization. Together they investigated many biominerals, including demonstrating the organic matrix sheets associated with the crystals in nacre (mother of pearl). Addadi returned to the Weizmann Institute of Science as an associate professor in 1988. She was promoted to full professor in 1993 and became head of the Department of Structural Biology in 1994. She works on ordered crystal arrays and mineralized tissues. She has investigated the relationship of acidic proteins with biominerals including calcite and apatite. She demonstrated that macromolecules in the shells of mollusks determine the polymorphism of aragonite and calcite. She went on to establish the role of amorphous calcium carbonate in biomineralization. Addadi identified that mollusks build their shells using hydrophobic silk gels, aspartic acid, acid-rich proteins, and an amorphous precursor. Addadi is interested in how macromolecules nucleate oriented growth and how morphology changes through interactions with surfaces. Addadi looks at the structures of crystal protein composites. She demonstrated that protein intercalation into the lattice can change the texture and mechanical properties of the material. She showed that immunoglobulins and serum albumins can selectively adhere to the surfaces of crystals and nucleate further crystal growth. This can help to understand how diseases such as gout, osteoarthritis, and atherosclerosis form crystals in body fluids. Addadi studies the formation pathways of these mineralized tissues in foraminifera and zebrafish bone. She was the first woman to win the ETH Zurich Prelog Medal, in 1989. She was appointed dean of the faculty of chemistry in 2001. Her work has considered molecular recognition at crystal interfaces. When introduced to an organism, crystals appear as highly structured, repetitive macromolecular substrates. She studies monoclonal antibodies that are sensitive to specific crystalline organisations. She also investigates cross-talk between crystals and the biological environments they exist in. Her inaugural-year article for Proceedings of the National Academy of Sciences of the United States of America (PNAS) considered the formation of cholesterol crystals in atherosclerosis. She demonstrated that in cell culture, crystals adopt a similar shape to the atherosclerotic plaque that forms in cells, because they are formed from the same cholesterol. The crystals adopt helical or tubular forms. Addadi used stochastic optical reconstruction microscopy and soft X-ray tomography to identify the cholesterol inside cells.
Awards and honours
1986 Weizmann Institute of Science Ernst David Bergmann Prize in Chemistry
1989 Israel Chemical Society Annual Award
1996 NIDR Prize for Distinguished Scientists
1998 ETH Zurich Prelog Medal in Stereochemistry
2006 Technion – Israel Institute of Technology Kolthof Prize
2009 Israel Chemical Society Prize for Excellence
2011 Royal Swedish Academy of Sciences Gregori Aminoff Prize for Crystallography
2017 Elected to the National Academy of Sciences
2018 ETH Zurich Honorary Doctorate
References 1950 births Living people Italian women scientists Italian women chemists Academic staff of Weizmann Institute of Science Weizmann Institute of Science alumni University of Padua alumni Women biochemists Members of the American Philosophical Society Foreign associates of the National Academy of Sciences Date of birth missing (living people)
Lia Addadi
[ "Chemistry" ]
858
[ "Biochemists", "Women biochemists" ]
60,696,089
https://en.wikipedia.org/wiki/Chemical%20industry%20in%20China
The chemical industry in China is one of China's main manufacturing industries. It was valued at around $1.44 trillion in 2014, and China is currently the largest chemicals manufacturing economy in the world. The chemical industry is central to modern China's economy. It uses special methods to alter the structure, composition or synthesis of substances to produce new products, such as steel, plastic, and ethanol. The chemical industry provides building materials for China's infrastructure, including subways, high-speed rail, and highways. Prior to 1978, most output came from state-owned enterprises; the state-owned share of output had declined by 2002. The Chinese chemical industry is also one of the world's largest producers of both controlled and non-controlled precursor chemicals used in the global illicit drug trade, particularly in the Golden Triangle, Mexico, Latin America and Europe, with large volumes of these substances being traded through the growing research chemical (RC) industry online, via social media, B2B platforms and the dark web. History The modern chemical industry was born after the Industrial Revolution, which took place from about 1760 to sometime between 1820 and 1840. This revolution included the change from hand production methods to machines, new iron production processes and new chemical manufacturing. Before that, China's chemical products were mainly produced by hand in workshops. Medical chemistry Shennong tested hundreds of herbs to find their medicinal value and wrote "The Divine Farmer's Herb-Root Classic". This book recorded the efficacy of 365 medicines derived from plants, animals, and minerals and gave rarity ratings and grades. Shennong's work led the way to Chinese medicine. In the Ming Dynasty, Li Shizhen wrote the "Compendium of Materia Medica", which contained more than 1,800 kinds of drugs. It also described the nature, flavor, form, type and use in disease treatment of over 1,000 herbs. The book is considered the primary reference work for herbal preparation. These works were significant to the development of traditional Chinese medicine, and they laid the foundation for modern Chinese medical chemistry. Tu Youyou is a Chinese pharmaceutical chemist. She discovered qinghaosu (artemisinin) and applied it to the treatment of malaria. Qinghaosu has saved millions of lives in South China, South America, Southeast Asia, and Africa. It was an important medical breakthrough of the last century, and Tu Youyou received the 2015 Nobel Prize in Physiology or Medicine and the Lasker Award in Clinical Medicine for her work. She is the first Chinese woman to receive a Nobel Prize in Physiology or Medicine. Agriculture chemistry China's agricultural production efficiency rose in the 20th century because of the application of chemical pesticides and fertilizers. In 1909, Franklin Hiram King, a US professor of agriculture, made a tour of China. His book "Farmers of Forty Centuries" described China's farming and inspired many Chinese farmers to conduct ecological farming and use fertilizers. Beginning in 1978, the Chinese government created the Family Production Responsibility System and encouraged farmers to use fertilizer. Chemical fertilizer can increase output by 50% to 80%. The chemical industry produces fertilizers containing nitrogen, phosphorus and potassium, which can meet the demands of different crops and soil structures. China is currently the largest consumer and producer of nitrogen fertilizers.
Chemical material According to statistics, by 1984 there were about 9 million chemical substances in the world, of which about 43% were materials. Although the number of materials is large, when classified according to chemical composition they can be summarized into three categories: metal materials, inorganic non-metal materials and composite materials. Metal Steel is an important metal material in the chemical industry of China. In 2016, world annual steel production was 1,621 million tons, of which 804 million tons were produced in China (49.6%), 105 million tons in Japan (6.5%), 89 million tons in India (5.5%), and 79 million tons in the US (4.9%). China's production of steel increased from 100 million tons in 2000 to 250 million tons in 2004. This caused rising demand for the raw materials necessary for steel production, including pig iron, iron ore, scrap metal, lime and dolomite, coke and coal. The price of iron ore increased by over 70% from 2004 to 2005. Thus, in December 2005, China decided to limit production of steel to not more than 400 million tons per year within five years, in order to slow the rise in raw material prices. Non-metal In 2016, China's ethyl alcohol and other basic organic chemicals market and its plastic materials and resins market were valued at $137 billion and $184 billion respectively, with growth rates of 9% and 10%. China is the world's largest producer and exporter of plastic materials. The main driver of this market is the expanded application of ethanol in China. Demand for ethanol in China is now about 2.3 million tons. China has a key operating division, the Chenguang Institute, which has developed a number of advanced epoxy resins, organic silicones, polymer materials and specialty engineering plastics. It signed a JV agreement with DuPont's high-performance polymer division to produce and sell premixed rubber and raw fluoro-rubber. The JV agreement included the establishment of an ultra-modern premixed rubber factory in Shanghai, which began to operate in 2011. Composite material Composite materials are new structural materials. They are characterized by advantages over metallic materials in strength, stiffness and corrosion resistance. A composite is composed of a matrix material such as synthetic resin, metal or ceramic, and a reinforcing material composed of inorganic or organic synthetic fibres. There are a variety of matrices and reinforcing materials, so a selective fit can be made to produce various composites with satisfactory performance, which gives chemical materials a broader prospect. Sinochem and the Shanghai Chemical Industry Institute have set up a laboratory for composite materials. The two sides will jointly develop technology, transform the results and apply them in the carbon fiber and curing resin industries, in order to promote the technologies and products of high-performance composite materials and facilitate their industrialization and marketization. At present, this laboratory has launched a project to research and develop spray-free carbon fiber composite material. At first, this material will be applied to new energy cars; it can not only reduce the weight of the cars but also reduce the cost of applying composite materials while improving production efficiency significantly. Companies China has a company among the top three chemical companies in the world: Sinopec, which had $43.8 billion in chemical sales in 2015.
A list of the top 20 Chinese chemical corporations by turnover in 2018 is shown below. Chinese companies plan to move into the specialties side of the market, and some have already become significant players, such as Zhejiang NHU, a vitamin maker; Yantai Wanhua, an isocyanates maker; and Bairun, the leader in the Chinese flavors-and-fragrances market. Development tendency Overview China's chemical market value has increased over the past 30 years. In 2015, it represented about 30% of global chemicals demand. China's chemical demand growth has slowed from the double-digit rates of the past 10 years, but the country still accounts for 60% of global demand growth from 2011 to 2020. As of the end of November 2011, there were 24,125 enterprises above designated size in China's chemical industry, with a total output value of 6.0 trillion yuan, a year-on-year increase of 35.2%, accounting for 58.61% of the total output value of the whole industry. In the first 11 months of 2011, fixed-asset investment in the chemical industry was 861.721 billion yuan, a year-on-year increase of 26.9%, which was 5.5 percentage points higher than the industry average, accounting for 70.12%. In the first 10 months of 2011, the total profit of the chemical industry was 320.88 billion yuan, a year-on-year increase of 44.4%, accounting for 47.1% of total industry profits. The annual output value of the chemical industry was expected to be about 6.58 trillion yuan, a year-on-year increase of 32%, with total profit of 350 billion yuan, an increase of 35%. In 2011, the added value of the chemical industry increased by 14.8% year-on-year, and the growth rate slowed by 1 percentage point year-on-year. A list of the China chemical industry's main products in 2011 is shown below. Government policy goals The Chinese government set up policy goals to reduce unemployment and boost the economy in response to the growing population. The government's policies and goals have progressed since the economy was opened up in 1978, and can be divided into three periods: 1978–1990: China's market was opened to the world in 1978, and the government, recognising the importance of the chemical industry, permitted foreign direct investment in the domestic market but controlled it heavily. Meanwhile, China's domestic chemical demand increased, so most companies decided to invest in production. 1990–2000: Multinationals were allowed to enter the Chinese market and to produce chemicals in cooperation with Chinese firms. 2000–2011: Foreign direct investment was not limited in this period, and multinationals boomed as China became a major world exporter of chemicals. Environmental pollution China's chemical industry has developed over the past 40 years from an economic backwater into the largest chemicals manufacturing economy, consuming large quantities of raw materials and energy. This change has helped hundreds of millions of Chinese out of poverty but has polluted China's air and water at the same time. The Chinese government has made efforts to fight the pollution. Free plastic shopping bags were banned in 2008; the production of plastic bags wastes resources and energy and causes environmental pollution because plastic bags are non-recyclable. Chemical industries in China are starting to research and develop green technologies at the recommendation of the government, such as the use of alternative fuels to produce chemical products.
Some industries are using carbon dioxide and other naturally available feedstocks to produce industrial products, fuels and other substances. For example, a specialty chemicals company called Elevance Renewable Sciences produces highly concentrated detergents by using the green technology of metathesis, which significantly lowers energy consumption and minimizes pollution. See also Chemical industry Chemical engineering Industry of China References Manufacturing in China
Chemical industry in China
[ "Chemistry" ]
2,193
[ "Chemical industry by country" ]
60,696,171
https://en.wikipedia.org/wiki/Custirsen
Custirsen, with aliases including custirsen sodium, OGX-011, and CC-8490, is an investigational drug that is under clinical testing for the treatment of cancer. It is an antisense oligonucleotide (ASO) targeting clusterin expression. In metastatic prostate cancer, Custirsen showed no benefit in improving overall survival. Custirsen was developed through a collaboration between OncoGenex Pharmaceuticals Inc. and Isis Pharmaceuticals. In 2009, OncoGenex Pharmaceuticals Inc. and Teva Pharmaceutical Industries Ltd. agreed to develop and commercialise Custirsen. Mechanism of action An antisense oligonucleotide (ASO) is a single-strand DNA sequence complementary to a desired messenger RNA (mRNA) sequence. Antisense therapy targets gene sequences using antisense oligonucleotides by binding the ASO to the mRNA strand. This creates an inhibitory complex that reduces plasma protein levels by preventing translation. Custirsen is a second-generation phosphorothioate antisense oligonucleotide. Phosphorothioates are oligonucleotides with a sulfur atom replacing a non-bridging oxygen atom in the backbone. They have high antisense activity due to their increased chirality, nuclease stability, and solubility. Second-generation oligonucleotides are highly specific to the target mRNA sequence, increasing the affinity of the compound. Custirsen acts as an anti-cancer drug by binding to the mRNA initiation site of the clusterin gene, reducing clusterin protein plasma concentrations. The synthetic addition of a 2'-methoxyethyl group to each nucleotide bookending the phosphorothioate backbone causes:
An increased affinity for the targeted RNA gene sequence
An increased resistance to digestive nucleases
Decreased toxicity
An increased tissue half-life (by approximately seven days)
Decreased adverse side effects, enabling more potent concentrations
Clusterin is upregulated in many tumours, including prostate, breast, non-small cell lung, ovary and colorectal. It has been linked with the development of aggressive tumours by protecting the cells from apoptosis. It is also upregulated in response to standard cancer treatments including chemotherapy, androgen deprivation therapy, and radiation therapy. This resistance is caused by the inhibition of the pro-apoptotic BCL2 gene, prevention of protein aggregation, and increased NF-κB. The anti-apoptotic activity of clusterin in aiding tumour growth is due to interactions with protein complexes. These include:
The binding of clusterin to a structurally altered Ku70–Bax complex
Regulation of Nuclear Factor (NF)-κB activity and signalling
ERK and AKT kinase activity
Promotion of epithelial-mesenchymal transition
Pharmacokinetics A meta-analysis evaluated 5,588 Custirsen plasma concentrations from 631 subjects over seven clinical studies. Subjects with cancer received multiple doses between 40 mg and 640 mg intravenously over two hours, whilst healthy subjects received either a single or double dose of 320 mg-640 mg. The pharmacokinetics of Custirsen was described by a three-compartment model with first-order elimination, where: the three-compartment model refers to the distribution of the drug (the body is divided into a central compartment and two peripheral compartments, with distribution rates from highest to lowest respectively), and first-order elimination describes the elimination rate of the drug (in first-order kinetics, elimination is proportional to the concentration of drug present in the body).
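To make the compartment description concrete, the following is a minimal simulation sketch of a three-compartment model with first-order elimination. The rate constants, dose, and time grid are illustrative assumptions for demonstration only, not fitted Custirsen parameters.

```python
# Minimal sketch of a three-compartment pharmacokinetic model with
# first-order elimination from the central compartment.
# All rate constants below are hypothetical, chosen for illustration.
import numpy as np
from scipy.integrate import odeint

def three_compartment(y, t, k10, k12, k21, k13, k31):
    """dy/dt for drug amounts in central (c) and two peripheral (p1, p2) compartments."""
    c, p1, p2 = y
    dc = -(k10 + k12 + k13) * c + k21 * p1 + k31 * p2  # first-order elimination via k10
    dp1 = k12 * c - k21 * p1
    dp2 = k13 * c - k31 * p2
    return [dc, dp1, dp2]

t = np.linspace(0, 48, 97)           # hours after the end of infusion
y0 = [640.0, 0.0, 0.0]               # 640 mg dose placed in the central compartment
rates = (0.4, 0.2, 0.1, 0.05, 0.01)  # hypothetical hr^-1 rate constants
amounts = odeint(three_compartment, y0, t, args=rates)
print(amounts[-1])                   # drug remaining in each compartment at 48 h
```

Because elimination is first order, halving the starting amount halves every subsequent amount, which is exactly the proportionality the model description above implies.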
For a representative sixty-six-year-old subject with a body mass of 82 kg and a blood Custirsen level of 0.933 mg/dL, the estimated parameter values were:
Clearance (CL) = 2.36
Central Volume of Distribution (V1) = 6.08
Peripheral Volume of Distribution (V2) = 1.13
Volume of the Second Peripheral Compartment (V3) = 15.8
Side effects A phase I study investigated the maximum tolerated dose of Custirsen in patients with recurrent or refractory high-grade gliomas. The recommended and safe dosing of Custirsen was determined in a Phase I dose-escalation scheme involving forty patients with metastatic or locally recurrent disease of tumour types known to upregulate clusterin (prostate, ovary, breast). Custirsen was infused intravenously on days 1, 3, and 5, with weekly dosing starting on day 8 for four weeks. The drug was administered in six escalating dose cohorts of 40 mg, 80 mg, 160 mg, 320 mg, 480 mg, and 640 mg. The results found that the recommended dose of Custirsen was 640 mg, with the maximum decrease in plasma clusterin occurring at this dose. Researchers found a statistically significant increase in the apoptotic index in prostatectomy specimens. This study provided the dosing framework for further studies into Custirsen, determining 640 mg as a tolerable and biologically active dose. In this study, no dose-limiting toxicities were reported at doses up to and including 480 mg. For patients who received the 640 mg dose, the following adverse reactions were present:
Thrombocytopenia
Anaemia
Leukopenia
Fever
Fatigue
Rigors
Alopecia
Anorexia
In patients who received combined treatment with Docetaxel, four of sixteen patients experienced dose-limiting toxicities at 640 mg:
Dyspnea
Pleural effusion
Neutropenia
Fatigue
Mucositis
Phase III Studies Phase III studies into the effectiveness of combined Custirsen and chemotherapy as a treatment for metastatic castration-resistant prostate cancer, and into the use of clusterin as a biomarker, are currently under evaluation. The Phase III SYNERGY trial looked into the addition of Custirsen to first-line Docetaxel and Prednisone chemotherapy, concluding that there was no marked increase in survival compared to the treatment group without Custirsen. Researchers also found no difference in the rates of cancer progression. Contrasting previous research, subjects in the Custirsen group had more adverse reactions to the treatment than the group without Custirsen. The findings of this clinical study contradict the results found in previous studies. However, researchers suggested further studies in patients with metastatic castration-resistant prostate cancer who have poor prognostic features, believing there is a therapeutic effect of Custirsen in individuals with this feature of the disease. References Antineoplastic and immunomodulating drugs Experimental cancer drugs Antisense RNA Therapeutic gene modulation
Custirsen
[ "Biology" ]
1,397
[ "Therapeutic gene modulation" ]
60,699,425
https://en.wikipedia.org/wiki/Metallo-ene%20reaction
The metallo-ene reaction is a chemical reaction employed within organic synthesis. Mechanistically similar to the classic ene reaction, the metallo-ene reaction involves a six-membered cyclic transition state that brings an allylic species and an alkene species together to undergo a rearrangement. The initial allylic group migrates to one terminus of the alkene reactant and a new carbon-carbon sigma bond is formed between the allylic species and the other terminus of the alkene reactant. In the metallo-ene reaction, a metal ion (Mg, Zn, Pd, Ni etc.) acts as the migrating group rather than a hydrogen atom as in the classic ene reaction. Initial Studies The metallo-ene reaction was first studied by Lehmkuhl et al., and since then it has gradually gained popularity in the synthetic community through better understanding of its mechanism and of its potential as a synthetic tool. Classification Generally speaking, the metallo-ene reaction has both an intramolecular and an intermolecular version. For the former, the reaction can be classified into two types by skeletal connectivity. In Type I, a carbon linkage connects the alkene fragment to the terminal carbon of the allyl fragment of the molecule, while in Type II the alkene fragment is connected to the internal carbon of the allyl fragment. Mechanism Historically, there has been long-standing uncertainty about the precise mechanism of the metallo-ene reaction. Three possible mechanisms (a concerted mechanism, a stepwise mechanism and a metal-catalyzed mechanism) have been postulated and studied over the past few decades. According to computational analyses, the metallo-ene reaction tends to proceed via a concerted six-membered transition state, although the exact mechanism was found to vary and depends on the metal. Selectivity Regioselectivity For the Type II reaction, two possible products can be expected if the two termini of the allyl piece are unsymmetrically substituted, depending on which carbon engages in the formation of a new sigma bond. Interestingly, Oppolzer et al. found that the more substituted terminus of the allyl piece participates in new sigma bond formation regardless of the length of the internal carbon linkage. Stereoselectivity Since a six-membered cyclic transition state is involved in the metallo-ene reaction, a high level of stereoselectivity can be expected due to the conservation of orbital symmetry. Indeed, this happens to be the case in many synthetic applications of this reaction. Felkin et al. found the cis product to be formed as the predominant product kinetically, while the trans product could also be obtained selectively under thermodynamic conditions. The fact that the stereochemical outcome of this metallo-ene reaction could be tuned by altering the reaction conditions, regardless of the geometry of the allyl fragment, reveals its reversible nature. Synthetic Application Asymmetric synthesis In 2016, Trost et al. developed a highly diastereo- and enantioselective intramolecular interrupted metallo-ene reaction using a chiral phosphoramidite ligand to achieve high levels of stereoselectivity. Starting from linear precursors, a wide range of vicinally disubstituted five-membered rings could be synthesized. An additional stereocenter is generated during the process by reaction with water. Sequential coupling In 2017, Liu et al. developed a highly efficient palladium-catalyzed cascade metallo-ene/Suzuki coupling reaction of allene-amides, delivering polyfunctionalized 2,3-dihydropyrrole derivatives in excellent yields.
Total synthesis In their synthetic efforts towards Coriolin, Oppolzer et al. devised a metallo-ene/carbonylation cascade reaction to construct the fused bicyclic core of Coriolin in an efficient fashion. They started with a simple aldehyde to which a propargyl alcohol appendage was attached via nucleophilic addition. Reduction followed by Appel and Finkelstein reactions yielded a key intermediate, which in the presence of a nickel catalyst under a CO atmosphere could be transformed into the target cyclopentanone in decent yield. References Chemical reactions
Metallo-ene reaction
[ "Chemistry" ]
883
[ "nan" ]
60,700,840
https://en.wikipedia.org/wiki/Unmitigated%20communion
In psychology, unmitigated communion is a focus on others to the exclusion of the self. It is opposed to unmitigated agency, which is a focus on the self to the exclusion of others. Unmitigated communion is portrayed as a way of being excessively concerned with others and placing other human beings' needs or wants before one's own. Unmitigated communion and unmitigated agency are also correlated with unusual behaviour and psychological problems. Background and history Unmitigated communion was first introduced by David Bakan in 1966. It originated from an analysis of two aspects of behaviour and personality: agency and communion. Bakan defined communion as a focus on relationship or interrelation with others, a focus that characterises women more than men in the culture, whereas agency indicates a focus on the self or autonomy. Communion was initially treated as a measure of psychological femininity; however, it is now claimed that communion reflects a particular part of female gender-related traits, a communal orientation. Bakan (1966) never explicitly pinpointed the construct of unmitigated communion, which was developed by other psychologists such as Vicki S. Helgeson and Heidi L. Fritz. Nevertheless, Bakan believed that high levels of communion could be mitigated by a personal sense of agency. Thus, in unmitigated communion, agency is entirely absent. Unmitigated communion is different from communion, as unmitigated communion is an exaggerated version of communion. Communion can be viewed as caring for other people in a positive way, whereas unmitigated communion can be seen as a form of psychological distress. In addition, communion is associated with the belief that other individuals are valuable, while unmitigated communion is not associated with any good or bad view of others. Causes of unmitigated communion A study from Carnegie Mellon University shows that there are four main causes of unmitigated communion behaviour: the way a person was raised, modelling of a family member (especially the mother), lack of self-esteem, and genetics. However, the most plausible cause of unmitigated communion is the combination of genetics, which might be inherited from parents, and socialisation, whereby the environment and surroundings affect the way a person behaves. For instance, poor and unsupportive family surroundings and social environments can lead to high unmitigated communion behaviour. It is claimed that adolescents with high unmitigated communion are more likely to have been raised in families with less expressive parents. In addition, an individual with high unmitigated communion tends to come from a less cohesive family. Roles of unmitigated communion Over-involvement with others Over-involvement with others means that people with high unmitigated communion are unreasonably involved in other people's problems and treat others' issues as their own. Additionally, individuals high in unmitigated communion tend to feel responsible for helping others and frequently have thoughts about other individuals' problems. Therefore, the higher a person scores on unmitigated communion, the more frequently that person becomes involved in and affected by others' problems. In addition, an individual with strong unmitigated communion tends to feel that more stressful events eventuate. There are three ways to explain over-involvement with others.
Caretaking Unmitigated communion is more strongly correlated with support provision than communion is. Therefore, individuals with high unmitigated communion tend to exhibit "helping behaviours" in an extreme way. In a study about adjustment to heart disease, partners who scored relatively high in unmitigated communion were more likely to be overprotective of the other partner. In addition, according to a study of college students and adults, unmitigated communion is associated with self-reports of interpersonal problems such as being intrusive, overly nurturant, and self-sacrificing. Imbalanced relationship An imbalanced relationship is a situation in which individuals with high unmitigated communion feel uncomfortable receiving support from others. One reason for the imbalanced relationship is that individuals with unmitigated communion assume that not receiving any help from others preserves their control over the relationship among friends. Individuals characterised by unmitigated communion do not expect others to support them, as they are afraid that others would not respond to their needs. Therefore, individuals with unmitigated communion are more likely to drop such expectations to minimise any disappointment. People with high unmitigated communion also feel that their problems should not burden others, in order to avoid damage to the relationship. Motives for helping The motives for helping are likely to be different for individuals with high unmitigated communion than for people with high communion. Although unmitigated communion and communion are similar in that both are correlated with providing support and related to empathy, the motives are entirely different. Individuals with high communion tend to help others genuinely. However, individuals who score high on unmitigated communion are more likely to help others in order to improve their self-worth in front of others. Neglect of the self Individuals with high unmitigated communion score high on self-neglect indicators, which include being exploitable, having difficulties in declaring one's needs, and engaging in self-effacement to keep away from conflict with others. The higher an individual's score on unmitigated communion, the stronger the feeling of responsibility for others. Thus, it is less likely that they will prioritise themselves. Externalised self-evaluation Externalised self-evaluation is defined as basing an individual's self-evaluation on what other people think. It is believed that the combination of externalised self-evaluation and the belief that others hold a negative opinion leads to low self-esteem as well as subsequent depressive symptoms. For instance, in a relationship, a person who scores high in unmitigated communion would be more emotional than a person who scores lower. Additionally, people with high unmitigated communion are more likely to evaluate themselves based on others' opinions, and they tend to be pessimistic if they cannot meet society's and others' opinions and expectations. Unmitigated communion in gender In 1966, Bakan claimed that there are two modes of existence: agency and communion. Self-enhancement and self-assertion are correlated with agency, whereas society and group cooperation are related to communion. It is perceived that individuals with unmitigated agency tend to isolate themselves from others, while individuals with unmitigated communion exclude themselves for others.
From the gender perspective, it is claimed that males tend to be more "agentic" than females. The existence of sex differences in communal and agentic orientation can trigger conflicts between males and females. For instance, a majority of females complain that they cannot communicate their inner feelings to their partners, while most males complain that their partners are too emotional. Moreover, males are associated with unmitigated agency whereas females tend to be associated with unmitigated communion. Therefore, males are more likely to engage in dominant acts, meaning they prefer to control tasks individually. This reflects that males tend to be more individualistic than females. Unlike males, females are reported to have a higher chance of developing unmitigated communion. Hence, it is probable that females are correlated with submissive acts; for example, females tend to apologise repeatedly. In addition, as females are likely to score high in unmitigated communion, they are more likely to be involved in relationships and to socialise more than males. Females are also more expressive than males. The negative thinking and views of females can also lead to lower self-confidence, which triggers higher unmitigated communion. Additionally, as women frequently make "internal attributions" for a particular event or tragedy, when it comes to a failure in fulfilling others' needs or solving others' problems, women tend to blame themselves, which is reflected in higher unmitigated communion. Direct implication to psychology High unmitigated communion can lead to psychological distress. A self-rating depression scale created by Zung showed that the higher the unmitigated communion, the higher the depression level. To explain psychological distress further, two types of psychological distress, general distress and situation-specific distress, will be distinguished. General psychological distress Psychological distress is correlated with unmitigated communion as it relates to interpersonal behaviours including over-involvement with others and self-neglect. This psychological distress includes anxiety and depression as symptoms, and it can occur in college students, adolescents, and adults. Unlike those with unmitigated communion, people with high communion do not have a risk of psychological distress such as anxiety and depression. Additionally, one factor in how unmitigated communion is related to psychological distress is the absence of self-worth, because the absence of self-worth or self-esteem is related to depression and anxiety. A study also reported that one of the main reasons for a high level of unmitigated communion is a negative self-view. Situation-specific distress Over-involvement in others' problems can result in further psychological distress, called situation-specific distress. This eventuates because individuals with high unmitigated communion tend to become involved in and take care of others' problems, so they are more likely to be exposed to stressful moments or events. Hence, it is probable that they become distressed. Additionally, a study from Carnegie Mellon University showed that people high in unmitigated communion are affected by events in other people's lives. Unmitigated communion is also correlated with intrusive thoughts.
For example, when a friend or family member is in trouble, people with high unmitigated communion are very likely to think repeatedly about and focus on that person's problem, which in turn can lead them to feel guilty for not helping, as they tend to think it is their responsibility to help their family or friends. Therefore, an individual with high unmitigated communion tends to be vulnerable to situation-specific distress resulting from other people's problems. The distress or suffering in situation-specific distress is greater than in generalised distress. Implication and relation to other factors Relation to physical health Physical health can be impacted by psychological distress. A person with high unmitigated communion tends to become involved in others' problems; therefore, when they are unable to meet others' needs and the problem is not solved, a person with high unmitigated communion tends to suffer from a variety of illnesses such as heart disease or diabetes. For instance, the stress and depression from failing to meet other people's needs can lead individuals with high unmitigated communion to have poor metabolic control, such as poor control over blood glucose levels, which could eventually result in diabetes. In addition, the anxiety and depression caused by unmitigated communion could lead to chronic diseases or illnesses such as cardiac disease, as well as breast cancer, which is more likely to affect women. According to numerous empirical studies, people with high unmitigated communion are also likely to suffer from rheumatoid arthritis. Relation to disturbed eating disorder in adolescents Eating disorders are correlated with unmitigated communion, especially in teenagers and adolescents. One reason the two are linked is that unmitigated communion can lead to low self-esteem; hence, the adolescent feels pressured to look fit and skinny in front of their peers. During adolescence, individuals with high unmitigated communion tend to have overly other-focused behaviour; thus, self-image from other people's perspective is tremendously important. As people with high unmitigated communion focus excessively on others and on how others view them, failing to meet their peers' criteria of being fit and skinny can trigger bulimic symptoms such as feeling insecure about one's own body. Relation to economic costs in distributive and integrative bargaining Unmitigated communion is related to economic costs in distributive and integrative bargaining. This is because unmitigated communion can apply in the business environment during negotiations, as it involves self-concern and relationships. The objective of distributive bargaining is to gain a large portion of a certain pie of value, whereas the goal of integrative bargaining is to increase the size of the pie. In order to understand the consequences of unmitigated communion in negotiation, two conflict situations, distributive and integrative, should be distinguished. Distributive conflict involves a few simple issues that occur during negotiation, while integrative conflict involves issues that occur in a complex business relationship.
From the perspective of distributive bargaining, the consequences of unmitigated communion are not as complex as in integrative bargaining: individuals with high unmitigated communion are more likely to agree to low monetary outcomes during business negotiations. Moreover, in integrative negotiation joint gains cannot be maximised, as the relationship concerns of unmitigated communion hinder the negotiator. See also Altruism vs. Egoism Collectivism vs. Individualism Interpersonal circumplex References Cognitive psychology Interpersonal relationships 1960s neologisms
Unmitigated communion
[ "Biology" ]
2,805
[ "Behavior", "Behavioural sciences", "Cognitive psychology", "Interpersonal relationships", "Human behavior" ]
60,705,165
https://en.wikipedia.org/wiki/%C5%8Cm%C4%81pere
Ōmāpere is a settlement on the south shore of Hokianga Harbour in Northland, New Zealand. State Highway 12 runs through Ōmāpere. Opononi is on the shore to the north of Ōmāpere. The New Zealand Ministry for Culture and Heritage gives a translation of "place of cutty grass" for Ōmāpere. History European settlement The first European settler in the Ōmāpere area was John Martin, who arrived in the Hokianga Harbour in 1827. In 1832 Martin purchased land on the flat area along the beach at Ōmāpere. In 1838 Martin extended his land purchase to the Hokianga Harbour's South Head, where he established a signal station to guide ships crossing the challenging harbour entrance. The signal station remained in operation until 1951. With permission from Ngāti Korokoro, the local hapū (sub-tribe), in 1838 John Whiteley established a Wesleyan mission at Pākanae on land purchased with blankets, tools and tobacco. In 1869, a bush licence was granted to Charles Bryers at Ōmāpere. In the mid 1870s, a liquor licence was given to the establishment called the 'Heads', which later became the 'Travellers Rest'. By 1876 the farm of John Martin had become the township of Pakia, home to a hotel, two stores, several houses and a school house. The name Ōmāpere began to be used more frequently and was adopted by residents' agreement in 1874. By the late 19th century, Ōmāpere had become an important location for the kauri gum digging trade. Marae Waiwhatawhata or Aotea Marae and the Te Kaiwaha meeting house are affiliated to Ngāti Korokoro and Ngāti Whārara. Demographics Statistics New Zealand describes Ōmāpere as a rural settlement, which is part of the larger Waipoua Forest statistical area. Ōmāpere had a population of 447 in the 2023 New Zealand census, an increase of 21 people (4.9%) since the 2018 census, and an increase of 84 people (23.1%) since the 2013 census. There were 213 males and 231 females in 183 dwellings. 3.4% of people identified as LGBTIQ+. The median age was 53.6 years (compared with 38.1 years nationally). There were 72 people (16.1%) aged under 15 years, 42 (9.4%) aged 15 to 29, 192 (43.0%) aged 30 to 64, and 144 (32.2%) aged 65 or older. People could identify as more than one ethnicity. The results were 57.7% European (Pākehā); 60.4% Māori; 3.4% Pasifika; 5.4% Asian; 0.7% Middle Eastern, Latin American and African New Zealanders (MELAA); and 2.7% other, which includes people giving their ethnicity as "New Zealander". English was spoken by 98.0%, Māori language by 28.9%, Samoan by 0.7% and other languages by 8.1%. No language could be spoken by 1.3% (e.g. too young to talk). The percentage of people born overseas was 12.8%, compared with 28.8% nationally. Religious affiliations were 43.6% Christian, 0.7% Hindu, 4.0% Māori religious beliefs, 0.7% Buddhist, and 1.3% other religions. People who answered that they had no religion were 39.6%, and 9.4% of people did not answer the census question. Of those at least 15 years old, 57 (15.2%) people had a bachelor's or higher degree, 204 (54.4%) had a post-high school certificate or diploma, and 96 (25.6%) people exclusively held high school qualifications. The median income was $25,600, compared with $41,500 nationally. 24 people (6.4%) earned over $100,000, compared to 12.1% nationally. The employment status of those at least 15 was that 126 (33.6%) people were employed full-time, 54 (14.4%) were part-time, and 12 (3.2%) were unemployed.
Notes References Hokianga Populated places in the Northland Region Kauri gum
Ōmāpere
[ "Physics" ]
925
[ "Amorphous solids", "Unsolved problems in physics", "Kauri gum" ]
60,705,768
https://en.wikipedia.org/wiki/Register%20of%20Antarctic%20Marine%20Species
The Register of Antarctic Marine Species, also known as RAMS, is a taxonomic database that provides a list of marine species found in the Southern Ocean surrounding Antarctica. Its purpose is to provide authoritative and comprehensive information on the diversity of marine life in the region, serving as a reference point for marine science, research, conservation and sustainable management. The database includes marine species found on the sea floor, in the water column, and around sea ice. RAMS is a regionally-focused database within the World Register of Marine Species. References Database
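As an illustration of how such a register might be queried programmatically, here is a hedged sketch against the public REST interface of the World Register of Marine Species (WoRMS), of which RAMS is a part. The endpoint path and parameters follow the documented WoRMS API as the author understands it, but should be treated as assumptions and checked against the current documentation.

```python
# Hedged sketch: look up a species name in the World Register of Marine
# Species (WoRMS), the parent database of RAMS. The endpoint shape is an
# assumption based on the public WoRMS REST documentation.
import requests

def lookup_species(name: str) -> None:
    url = f"https://www.marinespecies.org/rest/AphiaRecordsByName/{name}"
    resp = requests.get(url, params={"like": "false", "marine_only": "true"}, timeout=30)
    resp.raise_for_status()
    for record in resp.json():
        # Each Aphia record carries the scientific name and its taxonomic status.
        print(record.get("scientificname"), "-", record.get("status"))

lookup_species("Euphausia superba")  # Antarctic krill, a Southern Ocean species
```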
Register of Antarctic Marine Species
[ "Biology" ]
108
[ "Taxonomy (biology)" ]
60,706,181
https://en.wikipedia.org/wiki/Fe%20FET
A ferroelectric field-effect transistor (Fe FET) is a type of field-effect transistor that includes a ferroelectric material sandwiched between the gate electrode and the source-drain conduction region of the device (the channel). Permanent electrical field polarisation in the ferroelectric causes this type of device to retain the transistor's state (on or off) in the absence of any electrical bias. FeFET-based devices are used in FeFET memory, a type of single-transistor non-volatile memory. Description In 1955, Ian Munro Ross filed a patent for a FeFET or MFSFET. Its structure was like that of a modern inversion channel MOSFET, but ferroelectric material was used as a dielectric/insulator instead of oxide. Use of a ferroelectric (triglycine sulfate) in a solid state memory was proposed by Moll and Tarui in 1963, using a thin film transistor. Further research occurred in the 1960s, but the retention characteristics of the thin film based devices were unsatisfactory. Early field effect transistor based devices used bismuth titanate (Bi4Ti3O12) ferroelectric, or Pb1−xLnxTiO3 (PLT) and related mixed zirconate/titanates (PLZT). In the late 1980s Ferroelectric RAM was developed, using a ferroelectric thin film as the capacitor, connected to an addressing FET. FeFET based memory devices are read using voltages below the coercive voltage of the ferroelectric. Issues involved in realising a practical FeFET memory device include (as of 2006): choice of a high-permittivity, highly insulating layer between ferroelectric and gate; issues with high remanent polarisation of ferroelectrics; and limited retention time (c. a few days, cf. the required 10 years). Provided the ferroelectric layer can be scaled accordingly, FeFET based memory devices are expected to scale (shrink) as well as MOSFET devices; however a limit of ~20 nm laterally may exist (the superparaelectric limit, aka the ferroelectric limit). Other challenges to feature shrinks include: reduced film thickness causing additional (undesired) polarisation effects; charge injection; and leakage currents. Research and development In 2017 FeFET based non-volatile memory was reported as having been built at the 22 nm node using FDSOI CMOS (fully depleted silicon on insulator) with hafnium dioxide (HfO2) as the ferroelectric. The smallest FeFET cell size reported was 0.025 μm2; the devices were built as 32 Mbit arrays, using set/reset pulses of ~10 ns duration at 4.2 V, and showed endurance of 10^5 cycles and data retention up to 300 °C. The startup Ferroelectric Memory Company is attempting to develop FeFET memory into a commercial device, based on hafnium dioxide. The company's technology is claimed to scale to modern process node sizes and to integrate with contemporary production processes (i.e. HKMG), and is easily integrable into conventional CMOS processes, requiring only two additional masks. See also Ferroelectric RAM, RAM that uses a ferroelectric material in the capacitor of a conventional DRAM structure References Further reading Non-volatile memory Field-effect transistors Ferroelectric materials
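To make the memory behaviour described above concrete, here is a toy model (behavioural only, not device physics) of a single FeFET cell: the stored bit is the sign of the remanent polarisation, which shifts the transistor's threshold voltage, and only gate pulses above the coercive voltage change the state. The 4.2 V write level echoes the 22 nm report above; the threshold values are illustrative assumptions.

```python
# Toy behavioural model of a FeFET memory cell. The stored bit is the
# sign of the remanent polarisation; a read below the coercive voltage
# senses the state without rewriting it. Voltage values are illustrative.

V_COERCIVE = 4.2   # write pulses must exceed this magnitude (V)
VTH_BASE = 0.6     # intrinsic threshold voltage (V), assumed
VTH_SHIFT = 0.4    # threshold shift from remanent polarisation (V), assumed

class FeFETCell:
    def __init__(self):
        self.polarization = -1  # -1 or +1; persists with no applied bias

    def write(self, v_gate: float) -> None:
        if abs(v_gate) >= V_COERCIVE:  # only supra-coercive pulses switch the state
            self.polarization = 1 if v_gate > 0 else -1

    def read(self, v_read: float = 1.0) -> int:
        vth = VTH_BASE - self.polarization * VTH_SHIFT
        return 1 if v_read > vth else 0  # channel conducts -> logic 1

cell = FeFETCell()
cell.write(+4.5)
assert cell.read() == 1
cell.write(-4.5)
assert cell.read() == 0  # state retained with no bias between operations
```

Because the read voltage stays below the coercive voltage, the read is non-destructive, which is the property the article attributes to FeFET memory.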
Fe FET
[ "Physics", "Materials_science" ]
738
[ "Physical phenomena", "Ferroelectric materials", "Materials", "Electrical phenomena", "Hysteresis", "Matter" ]
44,244,084
https://en.wikipedia.org/wiki/Black%20Sea%20raid
The Black Sea raid was an Ottoman naval sortie against Russian ports in the Black Sea on 29 October 1914, supported by Germany, which led to the Ottoman entry into World War I. The attack was conceived by Ottoman War Minister Enver Pasha, German Admiral Wilhelm Souchon, and the German foreign ministry. The German government had been hoping that the Ottomans would enter the war to support them but the government in Istanbul was undecided. The Germanophile Ottoman War Minister, Enver Pasha, began conspiring with the German ambassador to bring the empire into the war. Attempts to secure widespread support in the government failed, so Enver decided to instigate conflict. With the help of the Ottoman naval minister and German Admiral Wilhelm Souchon, Enver arranged for the Ottoman fleet to go out to sea on 29 October supposedly to perform maneuvers. They were to provoke Russian vessels into opening fire and then accuse them of inciting war. Instead, Souchon raided the Russian coast in a flagrant display of hostility, causing little lasting damage but enraging the Russians. Enver impeded attempts by anti-war officials in Istanbul to apologise for the incident. The Russians declared war on the Ottoman Empire on 2 November, followed by the British and the French three days later; the British quickly initiated naval attacks in the Dardanelles. The Ottomans did not officially declare war until 11 November. Background In the months before the outbreak of World War I, officials of the Ottoman Empire vainly tried to secure an alliance with a great power. The Germanophile Ottoman War Minister Enver Pasha directly proposed an alliance on 22 July 1914 to the German ambassador in Istanbul, Hans Freiherr von Wangenheim, but he was rebuffed. Kaiser Wilhelm II overruled Wangenheim two days later, and an Ottoman draft for an alliance was delivered in Berlin on 28 July—the day World War I began. The July Crisis had climaxed and it appeared Germany would be fighting a two-front war with France and Russia. With the Germans hesitant to make any more significant military obligations, Wangenheim was authorised by German Chancellor Theobald von Bethmann Hollweg to sign the agreement only if the Ottoman Empire would "undertake action against Russia worthy of its name." On 1 August, Enver offered Wangenheim the new battleship Sultân Osmân-ı Evvel in exchange for German protection. This was likely a clever ploy; United Kingdom officials, in order to bolster the Royal Navy to wage war against Germany, had already seized Sultân Osmân-ı Evvel and the battleship Reşadiye, which were under construction in their shipyards. Wangenheim and the majority of the Ottoman government were unaware of this. Enver probably already knew of the seizure, since actually releasing the battleship to a foreign nation would have caused an uproar from the public and the government. Ambassador Wangenheim signed the treaty the next day, creating the secret Ottoman–German alliance. However, the alliance did not automatically bring the Ottomans into the war as Germany had hoped. The literal wording of the treaty obligated Germany to oppose any foreign infringements on Ottoman territory—particularly by Russia—but only required that the Ottoman Empire assist Germany as per the latter's own terms with Austria-Hungary. Since Germany had proactively declared war on Russia several days before Austria-Hungary, the Ottoman Empire was not compelled to join the conflict.
Grand Vizier Said Halim Pasha and Finance Minister Djavid Bey were opposed to Ottoman involvement in the war and viewed the alliance as a passive agreement. Other Ottoman officials were hesitant to rush into an armed conflict following the disastrous First Balkan War, especially considering the possibility that the Balkan states might attack the Empire should it become belligerent. Meanwhile, in the Mediterranean, the German battlecruiser Goeben and the light cruiser Breslau were cruising off French Algeria. Admiral Wilhelm Souchon, the squadron's commander, had been holding his position in order to interfere with Triple Entente troop convoys. He had received orders on 3 August that his ships should retreat to Ottoman waters, but chose to linger for one day and shell two ports. It had already been arranged on 1 August between the Germans and Enver that Souchon's squadron would be allowed safe passage. While coaling in Messina, Souchon received a telegram rescinding these orders, as other Ottoman officials, now learning of Enver's deal, objected to the plan. Despite the development, Souchon resolved to continue towards the Ottoman Empire, having concluded that an attempt to return to Germany would result in his ships' destruction at the hands of the British and the French and a withdrawal to the Austro-Hungarian coast would leave them trapped in the Adriatic Sea for the remainder of the war. With the Royal Navy in close pursuit, Souchon continued east, feinting a retreat towards Austria-Hungary in an attempt to confuse the British. To make the ruse more convincing, Austrian Admiral Anton Haus sortied south with a large fleet in a maneuver meant to appear like a rendezvous with Souchon. Once the latter reached Greek waters, the former returned to port. The Germans insisted that Haus follow Souchon to Istanbul so that his ships could support an anticipated campaign against the Russians in the Black Sea, but the Austrian admiral thought that the Ottoman capital would make for a poor base of operations and did not want to leave Austria-Hungary's coast undefended. Meanwhile, Souchon approached the Ottoman Empire, which still had not authorised his ships' entry to its waters. On 8 August, he decided to force the issue and dispatched a support vessel to Istanbul with a message for the German naval attaché to give to the Ottomans: he needed immediate passage through the Dardanelles on the grounds of "military necessity" and was prepared to enter them "without formal approval." On the morning of 10 August, Souchon was given permission to enter the straits. The day before, the Ottoman government had proposed to Wangenheim that a fictitious purchase of the German ships be arranged, so their presence would not compromise Ottoman neutrality. The next day the German Chancellor cabled Wangenheim, rejecting the idea and demanding that the Ottomans immediately join the war. The Grand Vizier confronted Wangenheim over the premature arrival of the ships and repeated the demand for a fictitious sale. The Ottoman government subsequently declared that it had purchased both ships for 80 million German marks. On 14 August, Wangenheim advised the German government that it would be best to go along with the sale, lest they risk angering the Ottomans. On 16 August, the ships were formally integrated into the Ottoman Navy while their crews were given new uniforms and formally reenlisted. The British had thought this action was meant to counterbalance their seizure of the Ottoman battleships, but this was not strictly the case.
The Ottomans feared the Entente, particularly Russia, would attempt to partition the empire if they won the war, whereas Germany and Austria-Hungary would not. Once the British became aware of this, they feared that the Ottomans were more likely to enter the conflict in Germany's favour. Following Russia's failures in its operations against Germany in late August, Russian incursion into Ottoman territory seemed unlikely. Meanwhile, Ottoman officials reached neutrality agreements with the governments of Greece and Romania while Bulgaria displayed pro-German tendencies, alleviating their fears of a Balkan threat. Enver then began to move his defensive policy towards an aggressive one. Prelude In a discussion over the weekend of 12–13 September, Enver gave Souchon permission to take his ships into the Black Sea to perform maneuvers. The Ottoman naval minister, Ahmed Djemal, discovered Souchon's plans and strictly forbade him from moving out of the Bosphorus. The Ottoman cabinet debated the matter over the next few days, and on 17 September Enver told Souchon that his authorisation to operate in the Black Sea was "withdrawn until further notice." Furious, Souchon went ashore the next day and berated Grand Vizier Halim for his government's "faithless and indecisive behavior" while threatening to take matters into his own hands and "behave as dictated by the conscience of a military officer." He subsequently demanded that Enver, at the very least, allow the German light cruiser to stage exercises near the mouth of the Bosphorus with several Ottoman destroyers. Here Souchon hoped the ships could engage the Russian Black Sea Fleet and bring the Ottoman Empire into the war. Enver promised to do what he could. On 24 September, Souchon was made vice admiral and commander-in-chief of the Ottoman Navy. Two days later Enver ordered the closing of the Dardanelles to foreign shipping without consulting his advisers. This had an immediate effect on the Russian economy, as nearly half of the country's exports traveled through the straits. On 9 October Enver told Ambassador Wangenheim that he had won the sympathy of Minister of the Interior Mehmed Talaat and President of the Chamber of Deputies Halil Menteşe and that he planned on securing the support of Djemal. If that failed, he would provoke a cabinet crisis and create a pro-war government. After gaining Djemal's sympathies, the conspiring Ottomans informed the Germans that they would go to war as soon as they received the equivalent of two million lira in gold, money the Germans knew the Ottoman Empire would need to fund a war. The money was shipped through neutral Romania, and the last of it arrived on 21 October. Informants working for the Russian ambassador in Istanbul, Mikhail Nikolayevich von Giers, forwarded the information about the payments to Russian Foreign Minister Sergey Sazonov. Sazonov had suspected the Ottomans' and Germans' intentions, and warned the Russian naval commanders in Sebastopol to be prepared for an attack. On 21 October, Admiral Kazimir Ketlinski assured the foreign minister that the Black Sea Fleet was "completely ready" for action. On 22 October 1914, Enver covertly presented a series of plans to Wangenheim on how to bring the country into the war. The Germans approved of an attack on Russian naval forces. At the last minute Talaat and Menteşe changed their minds and resolved that the Ottomans should keep the gold and remain neutral, though Talaat soon reverted to his old position.
Enver gave up on trying to unify the government to pass a declaration of war, and concluded that the Russians would need to be provoked into declaring war to instigate desirable action. He told the Germans this on 23 October, and assured them that he would only need Minister Djemal's support to achieve his goals. The next day Enver told Admiral Souchon he should take the fleet into the Black Sea and attack Russian ships if a "suitable opportunity presented itself." Djemal then secretly ordered all Ottoman naval officers to strictly follow Souchon's directives. On 25 October, Ambassador Giers forwarded one of his informants' predictions to Sazonov: the attack would take place on 29 October. Raid Sortie On 27 October, the Ottoman fleet put to sea under the guise of performing maneuvers. Enver had originally envisioned an encounter at sea in which the Ottomans would claim self-defence, but Admiral Souchon conceived a direct assault on Russian ports. He would later say his intention was "to force the Turks, even against their will, to spread the war." The German battlecruiser Goeben, now known as Yavuz Sultan Selim, was to sail with two destroyers and a gunboat to attack Sebastopol. The light cruiser Breslau, now known as Midilli, the protected cruiser Hamidieh, and the torpedo cruiser Berk-i Satvet were to attack Novorossiysk and Feodosia. Three destroyers were detailed for Odessa. On the way, one of these destroyers experienced engine trouble and was forced to turn back. Russian naval officers were under specific instruction not to fire first on the Ottomans in the event of a confrontation. The Russian government wanted to make it clear to any third party that the Ottomans would be the ones to instigate hostilities. Attacks Odessa Shortly after 03:00 on 29 October, the destroyers Muavenet and Gairet entered the harbour of Odessa. A torpedo was launched into the Russian gunboat Donetz, quickly sinking it. The two destroyers proceeded to damage merchant vessels, shore installations, five oil tanks, and a sugar refinery. The destroyers had conducted their raid earlier than Souchon had intended, and the Russians managed to radio a warning to the forces in Sebastopol. By the time Yavuz arrived, the coastal artillery was manned. Sebastopol Just before 06:30, Yavuz sighted Sebastopol and proceeded to bombard the port for 15 minutes. During this time she exchanged fire with a pre-dreadnought battleship and shore batteries. Three heavy caliber shells from the batteries managed to damage Yavuz before she withdrew. The loaded Russian minelayer Prut happened upon the attack and was scuttled by her crew to avoid being detonated. Since Prut's arrival had been expected, the defensive minefield around the port was inoperative. By the time it was activated 20 minutes later, the Ottomans had cleared the area. Three Russian destroyers attempted to pursue, but their attack dissolved after the lead ship was struck by a shell. Feodosia and Yalta At around the same time Hamidieh arrived off Feodosia. Seeing no signs of armed opposition, a German and a Turkish officer went ashore to warn the civilian population before bombarding the port two hours later. After attacking Feodosia, Hamidieh bombarded Yalta, setting several granaries on fire. Novorossiysk Shortly before 10:50, Berk-i Satvet sent a shore party to warn the defenceless population of Novorossiysk, before opening up with her guns. She was soon thereafter joined by Midilli, which had been busy laying mines in the Kerch Strait.
Midilli fired a total of 308 shells, sinking several Russian grain cargo ships and destroying about 50 oil tanks. On the way back to Ottoman territory, Midilli's crew attempted to cut the undersea telegraph cable linking Sebastopol with Varna, Bulgaria, but failed. Aftermath On the afternoon following the raid, Souchon radioed Istanbul that Russian ships had "shadowed all movements of the Turkish fleet and systematically disrupted all exercises," and as such had "opened hostilities." The Russians attempted to pursue the Ottoman fleet but were unable to do so. The raiding force returned to Ottoman waters on 1 November. The Ottoman press reported the action on 31 October, claiming that the Russians had planned on mining the Bosphorus and destroying the Ottoman fleet without a formal declaration of war, compelling the Ottoman navy to retaliate after an engagement at sea by bombarding the Russian coast. German military officers were disappointed by the limited extent of the attack, which ultimately achieved more political goals than strategic ones. Russia's Black Sea Fleet was not seriously damaged by the raid. The gunboat Donetz was later raised and returned to service. Ramifications A two-day political crisis followed the raid. It was obvious to the Ottoman government that Enver had allowed the attack to occur. As soon as the news of the event reached Istanbul, the Grand Vizier and the Cabinet forced Enver to wire a ceasefire order to Souchon. Several officials, including the Grand Vizier, threatened to resign in protest of the raid. Four later did, including Djavid Bey. Though many in the government thought it opportune to attack Russia, cabinet solidarity was regarded as vital and a letter of apology was soon drafted. On 31 October Enver informed the Germans of the planned apology and said there was nothing he could do. The British, ill-informed of the situation in Istanbul, believed the entire Ottoman government was conspiring with the Germans. The British Cabinet sent an ultimatum to the Ottomans, demanding that they remove Admiral Souchon and his German subordinates from their posts and expel Germany's military mission, which consisted of approximately 2,000 men. The Ottomans did not comply. On 31 October, First Lord of the Admiralty Winston Churchill, acting on his own initiative, ordered British forces in the Mediterranean to commence hostilities against the Ottoman Empire. This was not carried out immediately, so the Ottomans were unaware of what had transpired. The Russian Foreign Ministry withdrew Ambassador Giers from Istanbul. Meanwhile, Enver, still fearing that the Russians would accept the Ottoman apology, decided to interfere. Just before the message was sent, he inserted a passage that accused the Russians of instigating the conflict. On 1 November the message arrived in Petrograd. Foreign Minister Sazonov responded with an ultimatum, demanding that the Ottomans expel the German military mission. The Ottomans rejected this proposal. That same day British forces in the Mediterranean carried out Churchill's orders by attacking Ottoman merchant vessels off the port of İzmir. That night at an Ottoman Cabinet meeting, the Grand Vizier's anti-war faction was forced to accept that the Empire was at war, and that there was little it could do to avoid conflict. The Russians declared war on the Ottoman Empire on 2 November 1914. Admiral Andrei Eberhardt immediately ordered Russia's fleet to retaliate against the Ottomans for the raid. On 4 November, a Russian task force bombarded Zonguldak. 
On 3 November British warships bombarded the outer forts of the Dardanelles. Two days later the United Kingdom extended a declaration of war to the Ottoman Empire, as did France. Due to these attacks, there was a common impression in Britain that Churchill had brought the Ottomans into the war; David Lloyd George, who later became prime minister, held this belief for several years to come. In the meantime, Churchill tried to promote the advantages of the conflict, such as the possibility of territorial gains in the Middle East (the reason that would ultimately bring Italy and Balkan nations like Greece into the war). The Ottoman government finally declared war on the Triple Entente on 11 November. Three days later Ottoman Sultan Mehmed V called for a jihad campaign by Sunni and Shia Muslims against the Entente powers. Notes References Citations Bibliography Naval battles of World War I involving Russia Naval battles of World War I involving the Ottoman Empire Black Sea naval operations of World War I Conflicts in 1914 October 1914 False flag operations Military history of Odesa Military history of Sevastopol History of Yalta Novorossiysk History of Krasnodar Krai Naval bombing operations and battles of World War I Attacks on ports and harbours Attacks on military installations in Crimea Attacks on granaries Building bombings in Ukraine Building bombings in Russia Ship fires Industrial fires and explosions in Russia Industrial fires and explosions in Ukraine Building and structure fires in Ukraine Mine warfare 1914 fires 1910s fires in Europe Attacks on military installations in the 1910s
Black Sea raid
[ "Engineering" ]
3,886
[ "Military engineering", "Mine warfare" ]
44,244,186
https://en.wikipedia.org/wiki/Four-point%20flexural%20test
The four-point flexural test provides values for the modulus of elasticity in bending E_f, the flexural stress σ_f, the flexural strain ε_f, and the flexural stress–strain response of the material. This test is very similar to the three-point bending flexural test. The major difference is that, with the addition of a fourth bearing, the portion of the beam between the two loading points is put under maximum stress, as opposed to only the material directly under the central bearing in the case of three-point bending. This difference is of prime importance when studying brittle materials, where the number and severity of flaws exposed to the maximum stress is directly related to the flexural strength and crack initiation. Compared to the three-point bending flexural test, there are no shear forces in the four-point bending flexural test in the area between the two loading pins. The four-point bending test is therefore particularly suitable for brittle materials that cannot withstand shear stresses very well. It is one of the most widely used tests to characterize the fatigue and flexural stiffness of asphalt mixtures. Testing method The test method usually involves a specified test fixture on a universal testing machine. Details of the test preparation, conditioning, and conduct affect the test results. The sample is placed on two supporting pins a set distance apart, and two loading pins are placed at an equal distance around the center. These loading pins are lowered from above at a constant rate until sample failure. Calculation of the flexural stress
σ = 3FL/(4bd²) for a four-point bending test where the loading span is 1/2 of the support span (rectangular cross section)
σ = FL/(bd²) for a four-point bending test where the loading span is 1/3 of the support span (rectangular cross section)
σ = 3FL/(2bd²) for a three-point bending test (rectangular cross section)
In these formulas the following parameters are used:
σ = stress in outer fibers at midpoint (MPa)
F = load at a given point on the load deflection curve (N)
L = support span (mm)
b = width of test beam (mm)
d = depth or thickness of tested beam (mm)
Calculation of the Elastic modulus In the 4-point bending test, the specimen is placed on two supports and loaded in the middle by a test punch with two loading points. This results in a constant bending moment between the two supports. Consequently, a shear-free zone is created, where the specimen is subjected only to bending. This has the advantage that no additional shear force acts on the specimen, unlike in the 3-point bending test. 
The bending modulus for a flat specimen is calculated from the following quantities:
b: specimen width in mm
a: specimen thickness in mm
lA: span length (distance between a support point and the nearest loading point of the test punch) in mm
lB: length of the reference beam (between the loading points, symmetrically placed relative to the loading points) in mm
DL: distance between the reference beam and the main beam (centered between the loading points) in mm
E: bending modulus in kN/mm²
lv: span length in mm
XH: end of bending modulus determination in kN
XL: start of bending modulus determination in kN
DL: deflection in mm between XH and XL
Advantages and disadvantages Advantages of three-point and four-point bending tests over uniaxial tensile tests include: simpler sample geometries, minimal sample machining, a simple test fixture, and the possibility of using as-fabricated materials. Disadvantages include: more complex integral stress distributions through the sample. Application with different materials Ceramics Ceramics are usually very brittle, and their flexural strength depends on both their inherent toughness and the size and severity of flaws. Exposing a large volume of material to the maximum stress will reduce the measured flexural strength, because it increases the likelihood of cracks reaching critical length at a given applied load. Values for the flexural strength measured with four-point bending will be significantly lower than with three-point bending. Compared with the three-point bending test, this method is more suitable for strength evaluation of butt joint specimens. The advantage of the four-point bending test is that a larger portion of the specimen between the two inner loading pins is subjected to a constant bending moment, and therefore positioning the joint region is more repeatable. Composite materials Plastics Standards ASTM C1161: Standard Test Method for Flexural Strength of Advanced Ceramics at Ambient Temperature ASTM D6272: Standard Test Method for Flexural Properties of Unreinforced and Reinforced Plastics and Electrical Insulating Materials by Four-Point Bending ASTM C393: Standard Test Method for Core Shear Properties of Sandwich Constructions by Beam Flexure ASTM D7249: Standard Test Method for Facing Properties of Sandwich Constructions by Long Beam Flexure ASTM D7250: Standard Practice for Determining Sandwich Beam Flexural and Shear Stiffness See also Bending Euler–Bernoulli beam equation Flexural strength Three-point flexural test List of area moments of inertia Second moment of area References External links ASTM C1161: Standard Test Method for Flexural Strength of Advanced Ceramics at Ambient Temperature ASTM D6272: Standard Test Method for Flexural Properties of Unreinforced and Reinforced Plastics and Electrical Insulating Materials by Four-Point Bending ASTM C393: Standard Test Method for Core Shear Properties of Sandwich Constructions by Beam Flexure ASTM D7249: Standard Test Method for Facing Properties of Sandwich Constructions by Long Beam Flexure ASTM D7250: Standard Practice for Determining Sandwich Beam Flexural and Shear Stiffness Materials testing Mechanics
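To make the stress formulas above concrete, here is a minimal Python sketch (not part of the original article; the function name and the example numbers are illustrative only) that evaluates the outer-fiber stress for the three loading configurations:

```python
# Illustrative helper for the flexural stress formulas above (rectangular
# cross section). Inputs in N and mm, so the result comes out in MPa.
def flexural_stress(F, L, b, d, mode="four_point_half_span"):
    """Outer-fiber stress at midspan.

    mode: "four_point_half_span"  -> loading span = 1/2 support span
          "four_point_third_span" -> loading span = 1/3 support span
          "three_point"           -> single central loading pin
    """
    if mode == "four_point_half_span":
        return 3 * F * L / (4 * b * d**2)
    if mode == "four_point_third_span":
        return F * L / (b * d**2)
    if mode == "three_point":
        return 3 * F * L / (2 * b * d**2)
    raise ValueError(f"unknown mode: {mode}")

# Example: 1 kN load, 80 mm support span, 10 mm wide x 4 mm thick bar.
print(flexural_stress(1000, 80, 10, 4))  # 375.0 MPa
```

Note that for the same load and geometry the three-point formula gives exactly twice the four-point (half-span) value, since the maximum bending moment is concentrated under the single central pin rather than spread between two loading pins.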
Four-point flexural test
[ "Physics", "Materials_science", "Engineering" ]
1,151
[ "Materials testing", "Mechanics", "Materials science", "Mechanical engineering" ]
44,244,665
https://en.wikipedia.org/wiki/Trepipam
Trepipam (developmental code name SCH-12679) is a dopamine receptor agonist of the benzazepine group that was never marketed. It acts specifically as an agonist of the dopamine D1 receptor. It is closely related structurally to fenoldopam, a peripherally acting selective D1 receptor partial agonist which is used as an antihypertensive agent. References 1-Phenyl-2,3,4,5-tetrahydro-1H-3-benzazepines Abandoned drugs Benzazepines D1-receptor agonists Peripherally selective drugs
Trepipam
[ "Chemistry" ]
134
[ "Pharmacology", "Drug safety", "Medicinal chemistry stubs", "Pharmacology stubs", "Abandoned drugs" ]
44,245,314
https://en.wikipedia.org/wiki/Petrochemical%20Heritage%20Award
The Petrochemical Heritage Award was established in 1997, "to recognize individuals who made outstanding contributions to the petrochemical community." The award is intended to inspire achievement and to promote public understanding. The award winner is chosen annually by the Founders Club and the Science History Institute (formerly the Chemical Heritage Foundation). The award is traditionally presented at the International Petrochemical Conference hosted by the American Fuel and Petrochemical Manufacturers (AFPM), formerly known as NPRA, the National Petrochemical & Refiners Association. Recipients The following people have received the Petrochemical Heritage Award: 2020, Jim Teague, CEO of Enterprise Products 2019, James Y. Chao and Albert Chao, founders of the Westlake Chemical Corporation 2018, Gary K. Adams, former president, CEO, and chair of the board of Chemical Market Associates 2017, David N. Weidman, retired chair and CEO of Celanese Corporation 2016, Stephen D. Pryor, retired president of ExxonMobil Chemical Company 2015, James L. Gallogly, former chief executive officer of LyondellBasell 2014, Frank Popoff, retired chairman and chief executive officer of the Dow Chemical Company 2013, Jim Ratcliffe, INEOS Founder and CEO 2012, Marvin O. Schlanger, chairman of the supervisory board of LyondellBasell Industries 2011, Raj Gupta, former chairman and CEO of Rohm and Haas 2010, Hiromasa Yonekura, chairman of the Sumitomo Chemical Company 2009, Mohamed Al-Mady, vice chairman and CEO of the Saudi Basic Industries Corporation (SABIC) 2008, Peter R. Huntsman, president and CEO of the Huntsman Corporation 2008, Dave C. Swalm, founder of Texas Petrochemicals Company/Texas Olefins 2007, Dan L. Duncan, chairman, Enterprise Products Partners 2006, J. Virgil Waggoner, former president and CEO of Sterling Chemicals 2005, Ting Tsung Chao, founder of the Westlake Chemical Corporation 2004, William A. McMinn, Jr., former president of Cain Chemical 2003, Harold Sorgenti, former president and chief executive officer of ARCO Chemical Company 2002, Herbert D. "Ted" Doan, former chairman and CEO of the Dow Chemical Company 2001, Jon M. Huntsman, founder of Huntsman Corporation 2000, Ralph Landau, cofounder of Scientific Design Group and Halcon. 1999, John R. Hall, chemical engineer and former president, chairman, and CEO of Ashland 1998, John T. Files, founder and chairman of the Merichem Company 1997, Gordon Cain, chemical engineer and entrepreneur Photo gallery See also List of chemistry awards References Business and industry awards Petrochemical industry
Petrochemical Heritage Award
[ "Chemistry" ]
549
[ "Petrochemical industry" ]
44,245,492
https://en.wikipedia.org/wiki/Tara%20Shears
Tara Georgina Shears (born 1969) is a Professor of Physics at the University of Liverpool. Early life Shears was born in Salisbury in Wiltshire. She remained in Wiltshire, living in Wootton Rivers and attending the co-educational comprehensive school Pewsey Vale School, where she was inspired by her chemistry teacher. The school had no sixth form, and her parents moved to Wedhampton (near Urchfont), where she attended the co-educational independent school Dauntsey's School, which offered many state scholarships at the time; many of the pupils were state-funded. At A-level she studied Maths, Physics, Chemistry and English; she was the only female in her Physics class, not uncommon in British co-educational schools at the time, even independent ones. She obtained A grades in all her sixth form exams. Her experience of being the only female in the Physics class stood her in good stead when she attended Imperial College London to study Physics. She obtained a 1st Class honours degree in 1991. She went to the University of Cambridge to complete a PhD in Particle Physics at Corpus Christi College, Cambridge. She completed a PPARC (Particle Physics and Astronomy Research Council, merged into the Science and Technology Facilities Council in 2007) fellowship at the Victoria University of Manchester. Career Particle physics Shears was awarded a Royal Society Research Fellowship with the University of Liverpool in 2000 to continue her work at the Collider Detector at Fermilab (CDF) experimental collaboration at the Fermilab facility in the USA. In 2004 she joined the LHCb experiment at CERN's Large Hadron Collider (LHC) particle accelerator (the world's largest), for which she initiated and developed the electroweak and exotica physics working group. Physics professor Shears became the first female Professor of Physics at the University of Liverpool, where she researches the properties of bottom quarks using hadron colliders, testing the Standard Model in the electroweak sector and seeking to explain why there is so little antimatter in the universe. She also works as a science communicator and, as a role model, promotes women's interest in physics. She is Chair of the STFC's Education, Training and Careers Committee. Awards and Major Projects Shears was awarded a CERN fellowship to conduct research on the Large Electron–Positron Collider (LEP). In 1995 she conducted a project: A Measurement of the B+ and B0 Meson Lifetimes and Lifetime Ratio Using the OPAL Detector at LEP. See also Daphne Jackson, from Peterborough, the UK's first female professor of physics (University of Surrey at age 34) Gillian Gehring (née Murray), from Nottingham, the UK's second female professor of physics Women's Engineering Society References External links University of Liverpool Her academic page Dr. Tara Shears - The Large Hadron Collider in 10' (with English subtitles) Scientific publications of Tara Shears on INSPIRE-HEP 1969 births Academics of the University of Liverpool Alumni of Corpus Christi College, Cambridge Alumni of Imperial College London British women physicists English physicists Experimental physicists Particle physicists People associated with CERN People educated at Dauntsey's School People from Pewsey People from Salisbury Science education in the United Kingdom Living people
Tara Shears
[ "Physics" ]
684
[ "Particle physicists", "Experimental physics", "Experimental physicists", "Particle physics" ]
44,245,515
https://en.wikipedia.org/wiki/Clocental
Clocental (dolcental) is a carbamate-derived sedative hypnotic. Synthesis Clocental was first prepared by the acylation of 1-ethynylcyclohexanol with allophanyl chloride. See also Methylpentynol References Ethynyl compounds Carbamates Hypnotics Sedatives GABAA receptor positive allosteric modulators
Clocental
[ "Biology" ]
86
[ "Hypnotics", "Behavior", "Sleep" ]
44,245,765
https://en.wikipedia.org/wiki/Translational%20Health%20Science%20and%20Technology%20Institute
Translational Health Science and Technology Institute (THSTI) is an institute of the Biotechnology Research and Innovation Council (BRIC), Department of Biotechnology, Ministry of Science and Technology, Government of India. It was set up in 2009 and is located in the NCR Biotech Science Cluster, Faridabad. Envisioned by the former secretary of DBT, Dr. M. K. Bhan, the institute was created to enable a faster transition of lab research to market. Prof. Ganesan Karthikeyan is the Executive Director of THSTI. THSTI has developed capabilities in indigenous vaccines, monoclonal antibodies, in vitro diagnostic kits, biotherapeutics and drug discovery, and provides a scientific environment for clinical research that supports healthcare advances. THSTI fosters a collaborative research environment, bringing together diverse scientific minds - physicians, biologists, chemists, mathematicians, and more - to translate innovative concepts into tangible healthcare products. THSTI operates a network of specialized research centres addressing various healthcare areas. These centres include: • Centre for Maternal and Child Health • Centre for Virus Research, Therapeutics and Vaccines • Centre for Tuberculosis Research • Centre for Microbial Research • Centre for Immunobiology and Immunotherapy • Computational and Mathematical Biology Centre • Centre for Bio-design and Diagnostics • Centre for Drug Discovery • Clinical Development Services Agency Augmenting these centres are THSTI's facilities, viz. the Bioassay Laboratory, Biorepository, Biosafety Level-3 Lab, Data Management Centre, Immunology Core Laboratory, Multi-OMICS Facility, Experimental Animal Facility, Vaccine Design and Development Facility, and School of Innovation in Biodesign, which serve as major resources for the research programs of THSTI as well as for the National Capital Region Biotech Science Cluster and other academic and industrial partners. Committed to capacity building in the healthcare research sector, THSTI offers educational programs such as the Master of Science in Clinical Research and PhD programs. THSTI has more than 700 scientific publications in reputed scientific journals. It has also filed more than 100 patent applications in India and abroad. Scientists at THSTI Executive Director Prof. Ganesan Karthikeyan External links Translational Health Science and Technology Institute 2009 establishments in Haryana Biotechnology organizations Biotechnology in India Research institutes established in 2009
Translational Health Science and Technology Institute
[ "Engineering", "Biology" ]
475
[ "Biotechnology in India", "Biotechnology organizations", "Biotechnology by country" ]
44,245,768
https://en.wikipedia.org/wiki/Medrylamine
Medrylamine is an antihistamine related to diphenhydramine. References Dimethylamino compounds Ethers H1 receptor antagonists Muscarinic antagonists Muscle relaxants
Medrylamine
[ "Chemistry" ]
41
[ "Organic compounds", "Functional groups", "Ethers" ]
44,245,935
https://en.wikipedia.org/wiki/Pyroxamine
Pyroxamine (INN), also known as pyroxamine maleate (USAN) (developmental code names AHR-224, NSC-64540), is an antihistamine and anticholinergic related to diphenylpyraline. See also Benzatropine 2-Diphenylmethylpyrrolidine Difemetorex Diphenylprolinol Desoxypipradrol Pipradrol References H1 receptor antagonists Pyrrolidines Ethers Stimulants
Pyroxamine
[ "Chemistry" ]
114
[ "Organic compounds", "Functional groups", "Ethers" ]
44,246,644
https://en.wikipedia.org/wiki/Henny%20van%20der%20Windt
Hendrik Johannes (Henny) van der Windt (born 22 August 1955, in Vlaardingen) is a Dutch associate professor at the Rijksuniversiteit Groningen, specialized in the relationship between sustainability and science, in particular the relationship between nature conservation and ecology, and between energy technologies, local energy initiatives and the energy transition. Youth and study Van der Windt grew up in Vlaardingen, where he went to high school ('Hogere Burgerschool-B'). He was active in the regional environmental group Centraal Aksiekomitee Rijnmond and various student committees on environmental protection. After high school he studied biology at the Rijksuniversiteit Groningen (1972-1981). PhD and academic position He received his doctorate in 1995 with his PhD dissertation "En dan: wat is natuur nog in dit land?: natuurbescherming in Nederland 1880-1990" (After all, what is nature in this country? Nature conservation in the Netherlands 1880–1990), with chapters on the rise of nature conservation, tensions between agriculture and nature conservation, forestry, ecological restoration and the management of the Wadden Sea. At that time he worked as a junior scientist and lecturer at the Biology Department of the University of Groningen. After his doctorate, he worked several years as a researcher (post-doc) in Groningen within the Ethics and Policy research programme of NWO. Around 2000 he became associate professor at the Science & Society Group (later Integrated Research on Energy, Environment and Society (IREES)) of the University of Groningen. Research and education He studied science-society interactions concerning genomics, food, ecological restoration, energy and sustainability, combining approaches and insights from biology, environmental science, environmental history and science and technology studies. His teaching includes courses such as the second-year Bachelor course Science & Society, the minor Future Planet Innovation, courses in the master track Science, Business & Policy, and the master programme Energy and Environmental Sciences. Publications In addition to scientific papers, journalistic articles and policy reports, Van der Windt was author or editor of several books or chapters. A selection: 1995. En dan: wat is natuur nog in dit land?: natuurbescherming in Nederland 1880-1990. Boom. 2001. Een Spiegel der Wetenschap: 200 Jaar Koninklijk Natuurkundig Genootschap te Groningen. With Adriaan Blaauw, Bert Boekschoten, Ulco Kooystra, Dick Leijenaar, Franck Smit, Kees Wiese & Marten van Wijhe. Profiel. 2005. Harmony or diversity? In: Nature and Art: The Hoge Veluwe. Waanders. 2006. Een groene voorzitter, raadheer en bruggenbouwer: prof. H.J.L. Vonhoff als voorzitter van NP De Hoge Veluwe en de Natuurbeschermingsraad. With Elio Pelzers. Waanders. 2008. Tussen dierenliefde en milieubeleid. Academia Press. 2012. Knocking on Doors: Boundary Objects in Ecological Conservation and Restoration. With Sjaak Swart. In: Sustainability Science, The Emerging Paradigm and the Urban Environment, Springer. 2012. Parks without Wilderness, Wilderness without Parks? In: Civilizing Nature, National Parks in Global Historical Perspective. Berghahn. 2019. Community Energy Storage: Governance and Business Models. With Binod Koirala, Rudi Hakvoort & Ellen van Oost. In: Consumer, Prosumer, Prosumager, Elsevier. 2021. New Pathways for Community Energy and Storage. With Ellen van Oost, Binod Koirala & Esther van der Waal. MDPI. 
References External links Henny van der Windt Rijksuniversiteit Groningen profile Henny van der Windt NARCIS profile 1955 births Living people Dutch biologists Environmental scientists University of Groningen alumni Academic staff of the University of Groningen People from Vlaardingen
Henny van der Windt
[ "Environmental_science" ]
860
[ "Environmental scientists" ]
44,247,035
https://en.wikipedia.org/wiki/ZAP%20%28software%29
ZAP (Zed Attack Proxy) is a dynamic application security testing tool published under the Apache License. When used as a proxy server it allows the user to manipulate all of the traffic that passes through it, including HTTPS-encrypted traffic. It can also run in a daemon mode which is then controlled via a REST-based API. History ZAP was originally forked from Paros, which was developed by Chinotec Technologies Company. Simon Bennetts, the project lead, stated in 2014 that only 20% of ZAP's source code was still from Paros. The first release was announced on Bugtraq in September 2010, and ZAP became an OWASP project a few months later. In 2023, the ZAP developers moved to the Linux Foundation, where they became part of the Software Security Project. On September 24, 2024, all of the main developers joined Checkmarx as employees and ZAP was rebranded as ZAP by Checkmarx. ZAP was listed in InfoWorld's 2015 Bossie awards for the best open source networking and security software. Features Some of the built-in features include:
An intercepting proxy server
Traditional and AJAX web crawlers
An automated scanner
A passive scanner
Forced browsing
A fuzzer
WebSocket support
Scripting languages
Plug-n-Hack support
See also Web application security Burp suite W3af Fiddler (software) Further reading References External links Official website Computer security software Cross-platform free software Free security software Java platform software
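As an illustration of the daemon mode mentioned above, here is a minimal sketch using ZAP's official Python client (distributed on PyPI as "zaproxy", formerly "python-owasp-zap-v2.4"); the target URL and API key are placeholders, and this is only one plausible workflow, not the project's prescribed usage:

```python
# Minimal sketch: driving a ZAP daemon over its REST API with the official
# Python client. Assumes ZAP was started in daemon mode beforehand, e.g.:
#   zap.sh -daemon -port 8080 -config api.key=changeme
import time
from zapv2 import ZAPv2

target = "http://example.com"  # placeholder target you are authorised to test
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

scan_id = zap.spider.scan(target)          # crawl the target first
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)           # then run the active scanner
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=target):  # report findings
    print(alert["risk"], alert["alert"])
```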
ZAP (software)
[ "Engineering" ]
307
[ "Cybersecurity engineering", "Computer security software" ]
44,247,164
https://en.wikipedia.org/wiki/DnaD
DnaD is a 232 amino acid long protein that is part of the primosome involved in prokaryotic DNA replication. In Bacillus subtilis, genetic analysis has revealed three primosomal proteins, DnaB, DnaD, and DnaI, that have no obvious homologues in E. coli. They are involved in primosome function both at arrested replication forks and at the chromosomal origin. DnaB and DnaD proteins are both multimeric and bind individually to DNA. DnaD induces DnaB to bind. DnaD alone and the DnaD/DnaB complex then interact with PriA of Bacillus subtilis at several DNA sites. This suggests that the nucleoprotein assembly is sequential in the PriA, DnaD, DnaB order. References Bacterial proteins DNA replication
DnaD
[ "Chemistry", "Biology" ]
172
[ "Genetics techniques", "Molecular biology stubs", "DNA replication", "Molecular genetics", "Molecular biology" ]
44,247,756
https://en.wikipedia.org/wiki/Cryogenic%20Low-Energy%20Astrophysics%20with%20Neon
The Cryogenic Low-Energy Astrophysics with Noble liquids (CLEAN) experiment by the DEAP/CLEAN collaboration is searching for dark matter using noble gases at the SNOLAB underground facility. CLEAN has studied neon and argon in the MicroCLEAN prototype, and is running the MiniCLEAN detector to test a multi-ton design. Design Dark matter searches in isolated noble gas scintillators with xenon and argon have set limits on WIMP interactions, such as recent cross sections from LUX and XENON. Particles scattering in the target emit photons that are detected by PMTs and identified via pulse shape discrimination developed on DEAP results. Shielding reduces the cosmic and radiation background. Neon has been studied as a clear, dense, low-background scintillator. CLEAN can use neon or argon and plans runs with both to study the nuclear mass dependence of any WIMP signals. Status The MiniCLEAN detector will operate with argon in 2014. It will have 500 kg of noble cryogen in a spherical steel vessel with 92 PMTs, shielded in a water tank with muon rejection. References Experiments for dark matter search
Cryogenic Low-Energy Astrophysics with Neon
[ "Physics" ]
230
[ "Dark matter", "Unsolved problems in physics", "Experiments for dark matter search", "Particle physics", "Particle physics stubs" ]
44,248,016
https://en.wikipedia.org/wiki/Interface%20%28journal%29
Interface (also known as The Electrochemical Society Interface) is a quarterly open access scientific journal published by the Electrochemical Society covering developments in electrochemistry and solid-state chemistry, as well as news and information about and for members of the society. History The journal was established in 1992 because the Journal of the Electrochemical Society had become a purely technical publication. The new publication was intended to provide members with information on matters affecting their society interests. The first issue was published in the winter of 1992, with a cover that featured Nobel laureate Rudolph Marcus, who learned of his winning the prize while at the ECS fall meeting in Toronto. Indexing and abstracting The journal is indexed and abstracted in several bibliographic databases. References External links Electrochemistry journals Academic journals published by learned and professional societies Quarterly journals English-language journals Academic journals established in 1992 Electrochemical Society academic journals
Interface (journal)
[ "Chemistry" ]
181
[ "Electrochemistry journals", "Electrochemistry", "Electrochemistry stubs", "Physical chemistry journals", "Physical chemistry stubs" ]
44,249,205
https://en.wikipedia.org/wiki/Thomas%20W.%20Tucker
Thomas William Tucker (born July 15, 1945) is an American mathematician, the Charles Hetherington Professor of Mathematics at Colgate University, and an expert in the area of topological graph theory. Tucker did his undergraduate studies at Harvard University, graduating in 1967, and obtained his Ph.D. from Dartmouth College in 1971, under the supervision of Edward Martin Brown. Tucker's father, Albert W. Tucker, was also a professional mathematician, and his brother, Alan Tucker, and son, Thomas J. Tucker, are also professional mathematicians. References 20th-century American mathematicians 21st-century American mathematicians Harvard University alumni Dartmouth College alumni Colgate University faculty Graph theorists Living people 1945 births Mathematicians from New York (state)
Thomas W. Tucker
[ "Mathematics" ]
145
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
44,249,351
https://en.wikipedia.org/wiki/Network%20Termination%20Device%20%28NBN%29
A Network Termination Device (NTD), network termination (NT), or NTE (for network termination equipment) is a customer-side network interface device used by the Australian National Broadband Network (NBN). Network termination devices provide multiple bridges for customers to access the NBN. There are different types of NTDs for the various connection technologies encompassed by the NBN; all connection types except FTTN use NTDs on premises. Depending on the kind of link, NTDs typically provide two telephony and four data channels. An external power source is required, and an uninterruptible power supply (UPS) can be used to maintain the connection during power outages (battery backups are available for the FTTP NTD). FTTC requires power to be provided from the premises to the kerb (distribution point). Standard NBN installation is usually free, but additional charges may apply for special circumstances or equipment requirements. NTDs provide user–network interface (UNI) connections for connection of in-premises devices. They typically have multiple RJ45 jacks for the UNI-D (data) connection, and some models have RJ11 jacks for the UNI-V (voice) connection. All NTDs are capable of passing VoIP traffic. FTTN requires premises to have a compatible VDSL2 modem. Each UNI-D port can be activated by retail service providers for different NBN services. The NTD cannot be used as a Layer 3 router for in-premises networking. Most devices used in the NBN are produced by Alcatel-Lucent, currently a division of Nokia Corporation. In FTTC networks, hardware from the domestic manufacturers CASA Systems (formerly NetComm) and Adtran is used (noting that the device is formally called an NCD), and in HFC networks hardware from Arris, currently a division of CommScope. FTTN networks, based on VDSL2 technology, can be accessed with any compatible modem or router that supports VDSL2. NBN itself doesn't sell any VDSL2-compatible hardware; it must be supplied by the end user of the network. The network termination used in the specific case of an ISDN Basic Rate Interface is called an NT1. Network technologies Gallery References National Broadband Network Networking hardware
Network Termination Device (NBN)
[ "Engineering" ]
483
[ "Computer networks engineering", "Networking hardware" ]
44,249,505
https://en.wikipedia.org/wiki/Microsoft%20Band
Microsoft Band is a discontinued smart band with smartwatch and activity tracker/fitness tracker features, created and developed by Microsoft. It was announced on October 29, 2014. The Microsoft Band incorporated fitness tracking and health-oriented capabilities and integrated with Windows Phone, iOS, and Android smartphones through a Bluetooth connection. On October 3, 2016, Microsoft stopped sales and development of the line of devices. On May 31, 2019, the Band's companion app was decommissioned, and Microsoft offered a refund to customers who were long-term active platform users. History The Microsoft Band was announced by Microsoft on October 29, 2014 and released in limited quantities in the US the following day. The Band was initially sold exclusively on the Microsoft Store's website and retail locations; due to its unexpected popularity, it sold out on the first day it was released and was in short supply over the 2014 holiday shopping season. Production was ramped up in March 2015 to increase availability, several months after the release of Android Wear but ahead of the Apple Watch. Availability was expanded in the US to include the retailers Amazon, Best Buy, and Target. On April 15, 2015, the Microsoft Band was released in the UK priced at £169.99, available for purchase through the Microsoft Store or from select partners. Features The Microsoft Band incorporated ten sensors, though only eight were documented on Microsoft's product page:
Optical heart rate monitor
Three-axis accelerometer
Gyrometer
GPS
Microphone
Ambient light sensor
Galvanic skin response sensors
UV sensor
Skin temperature sensor
Capacitive sensor
The Band's battery was designed to run for two days on a full charge, and the device partly relied on its companion app Microsoft Health, which was available for Windows Phone 8.1, Android 4.3+, and iOS 7.1+, provided Bluetooth was available. Despite being designed as a fitness tracker, the Band had numerous smartwatch-like features, such as built-in apps (called tiles) like Exercise, UV, Alarm & Timer, Calls, Messages, Calendar, Facebook, Weather and more. The Band worked with any Windows Phone 8.1 device. If paired with a device running Windows Phone 8.1 Update 1, Cortana could also be available, although some features still required direct use of the paired phone. This Update 1 was included with the Lumia Denim firmware for Microsoft Lumia phones. Users could view their latest notifications from their phone by using the Notifications Center Tile. The device also functioned as a way to promote Microsoft software and license it to developers and OEMs. See also Smart Personal Objects Technology Apple Watch Android Wear Microsoft Band 2 Fitbit Garmin References External links Microsoft hardware Products introduced in 2014 Discontinued Microsoft products Smartwatches Smart bands
Microsoft Band
[ "Technology" ]
560
[ "Smartwatches" ]
44,249,810
https://en.wikipedia.org/wiki/S5%200014%2B81
S5 0014+81 is a distant, compact, hyperluminous, broad-absorption-line quasar, or blazar, located near the high declination region of the constellation Cepheus, near the North Equatorial Pole. Characteristics The object is an OVV (optically violent variable) quasar, a type of blazar. It belongs to the most energetic subclass of active galactic nuclei, which are produced by the rapid accretion of matter by a central supermassive black hole, converting gravitational energy into light that can be visible across cosmic distances. S5 0014+81 is one of the most luminous quasars known, with a total luminosity of over 10^41 watts, equal to an absolute bolometric magnitude of −31.5. If the quasar were at a distance of 280 light-years from Earth, it would give out as much energy per square meter as the Sun does at Earth, despite being 18 million times more distant. The quasar's luminosity is therefore about 3 × 10^14 (300 trillion) times that of the Sun, or over 25,000 times as luminous as all the 100 to 400 billion stars of the Milky Way combined, making it one of the most energetic objects in the observable universe. However, because of its huge distance of 12.1 billion light-years, it can only be studied by spectroscopy. The central black hole of the quasar devours an enormous amount of matter, equivalent to 4,000 solar masses of material every year. The quasar is also a very strong source of radiation, from gamma rays and X-rays down to radio waves. The quasar's designation, S5, is from the Fifth Survey of Strong Radio Sources; 0014+81 gives its coordinates in epoch B1950.0. It also has the designation 6C B0014+8120, from the Sixth Cambridge Survey of Radio Sources by the University of Cambridge. The host galaxy of S5 0014+81 is a giant elliptical starburst galaxy, with an apparent magnitude of 24. Supermassive black hole S5 0014+81 is an FSRQ (flat spectrum radio quasar) blazar whose host, a giant elliptical galaxy, harbors a supermassive black hole at its center. In 2009, a team of astronomers working with the Swift spacecraft used the luminosity of S5 0014+81 to measure the mass of its black hole. They found it to be about 10,000 times more massive than the black hole at the center of our galaxy, or equivalent to 40 billion solar masses. This makes it one of the most massive black holes ever discovered: more than six times the mass of the black hole of Messier 87, which for 60 years was the largest known black hole, and it has been dubbed an "ultramassive" black hole. The Schwarzschild radius of this black hole is 120 billion kilometers, giving a diameter of 240 billion kilometers, or 1,600 astronomical units, about 40 times the radius of Pluto's orbit; the black hole has a mass equivalent to four Large Magellanic Clouds. The fact that such a large black hole existed so early in the universe, at only 1.6 billion years after the Big Bang, suggests that supermassive black holes can form very quickly. Evolution models based on the mass of S5 0014+81's supermassive black hole predict that it will live for roughly 10^99 years (near the end of the Black Hole Era of the universe, when it will be more than 10^88 times its current age) before it dissipates via Hawking radiation. See also List of most massive black holes References External links QSO S5 0014+81 Beobachtungen zu Eduard's Astropage, 29th Oct. 2009 Cepheus (constellation) Active galaxies Quasars Supermassive black holes
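As a plausibility check on the figures above (not from the article; the constants are standard values and the script is only illustrative), the Schwarzschild radius and the luminosity ratio can be recomputed in a few lines of Python:

```python
# Back-of-the-envelope checks for the figures quoted above.
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8     # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
AU    = 1.496e11    # astronomical unit, m
LY    = 9.461e15    # light-year, m

# Schwarzschild radius of a 40-billion-solar-mass black hole: r_s = 2GM/c^2
M   = 4e10 * M_sun
r_s = 2 * G * M / c**2
print(r_s / 1e12)         # ~118 (billion km), matching the quoted ~120 billion km
print(2 * r_s / AU)       # diameter ~1,580 AU, matching the quoted ~1,600 AU

# Inverse-square law: a source matching the Sun's flux at Earth from
# 280 light-years away must be (280 ly / 1 AU)^2 times as luminous.
print((280 * LY / AU) ** 2)  # ~3.1e14, matching the quoted 3 x 10^14
```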
S5 0014+81
[ "Physics", "Astronomy" ]
819
[ "Black holes", "Unsolved problems in physics", "Supermassive black holes", "Constellations", "Cepheus (constellation)" ]
44,250,724
https://en.wikipedia.org/wiki/Foucault%27s%20lectures%20at%20the%20Coll%C3%A8ge%20de%20France
On the proposal of Jules Vuillemin, a chair in the department of Philosophy and History was created at the Collège de France to replace that of the late Jean Hyppolite. The title of the new chair was The History of Systems of Thought, and it was created on November 30, 1969. Vuillemin put forward Michel Foucault to the general assembly of professors, and Foucault was duly elected on 12 April 1970. He was 43 years old and at the time relatively unknown beyond the borders of his native France. As required by this appointment, he held a series of public lectures from 1970 until his death in 1984 (excepting a sabbatical year in 1976–1977). These lectures, in which he further advanced his work, were transcribed from audio recordings and edited by Michel Senellart. They were subsequently translated into English, further edited by Graham Burchell, and published posthumously by St Martin's Press. Lectures On The Will To Know (1970–1971) This was an important time for Foucault and marks a significant switch of methodology from 'archaeology' to 'genealogy' (though according to Foucault he never abandoned the archaeological method). This was also a period of transition in Foucault's thought: the debate between Foucault and Noam Chomsky on human nature ("Human Nature: Justice versus Power"), held at the Eindhoven University of Technology in November 1971 and broadcast on Dutch television, falls within this same period. Foucault's inaugural lecture at the Collège de France, "The Order of Discourse", was delivered on 2 December 1970 (translated and published in English as "The Discourse on Language"), and a week later (9 December 1970) his first full lecture course began. In "The Will to Know" Foucault promised to explore, "fragment by fragment," the "morphology of the will to knowledge," through alternating historical periods, inquiries and theoretical questioning. The lectures produced were called "Lectures on the Will to Know"; all of this within the space of a year. The first phase of Foucault's thought is characterized by the construction of knowledge of various types, and by how the threads of these knowledge systems combine to produce a series of networks (Foucault uses the term 'grille') that yield a fully functional 'subject' and a workable, fully functional human society. Foucault uses the terms epistemological indicators and epistemological breaks to show, contrary to popular opinion, that such "indicators" and "breaks" require a skilled, trained group of 'specialists' in the various knowledge fields and a rigorous, professionalized regulatory body with the know-how to act on behalf of those who use the terms (discourse formations or "speech/discourse"), a professional body that can make the terms used stand up to further rational scrutiny. For Foucault, scientific knowledge is not an advancement of human progress, as it is so often portrayed by the human sciences (such as the humanities and the social sciences), but much more a subtle method of organizing and producing, firstly, an individual subject and, secondly, a fully functional society that operates as a self-replicating control apparatus: not a group of 'free', atomized individuals but a collective, organised (or drilled) societal unit, both in terms of industrial production and labour power and as a militarily organized unit (in the guise of armies). Such a unit is beneficial for the production of "epistemological indicators" or "breaks", enabling society to "control itself" rather than have external factors (such as the state, for example) do the job. 
In the inaugural lecture course "The Will to Know" Foucault goes into detail on how the 'natural order of things' of the 16th century developed into a fully organised human society, one which includes a "governmentality" apparatus and a complex machine as a rational organizing principle (by "governmentality", Foucault means a state apparatus which is conceived as a scientific machine). This was the first time (contrary to the popular opinion that this was a rather late development in his thought) that Foucault started to go into the Greek dimensions of his work, to which he would return in later lectures towards the end of his life. First of all, a few points should be made explicit. Foucault mentions the western notions of money, production and trade (in Greek society) starting in about 800 to 700 BCE. However, other 'non-western' societies also had these very same problems, though it is automatically assumed by some historians that these were entirely western inventions. This isn't entirely true: China and India, for example, had the most sophisticated trading and monetary institutions by the 6th century BCE; indeed, the concept of a corporation existed in India from at least 800 BCE and lasted until at least 1000 CE. Most importantly, there was a social security system in India at this time. Foucault begins these lectures with the very notion of truth and the 'will to knowledge', and the challenge opens when Foucault questions an assumption of the entire western philosophical and political tradition: namely, that knowledge (at least scientific knowledge) and its close association with truth are entirely desirable and are politically and philosophically natural and neutral. Foucault puts these notions (at least their political aspects) to a thorough test. He first asks the politically 'neutral' question of the very first appearance of money, which became not only an important economic symbol but, above all else, a measure of value and a unit of account. Money, once established as a social process and social reality, had (if one could put it that way) an extremely rocky and precarious history. While it had a social reality, the actual social authority to use money didn't develop a standard practice or knowledge of how to use it; it was rather undisciplined. Kings and emperors could squander large taxation revenues with impunity regardless of the consequences. They could default on repayments of loans, as witnessed during the Hundred Years' War and during the Anglo-French War (1627–1629). Above all else, kings and monarchs could take out forced loans and get others (their subjects) to pay for these forced loans and, to add insult to injury, get them to pay interest on the loans at extortionate rates, because they and their advisers regarded the loans as their own 'income'. However, whole societies were dependent on money, particularly when the whole of society had to use it and be ready for its function. 
Money took at least 3,000 years of history to acquire a more disciplined approach, and it became the sole fiscal prerogative of the state only after the medieval 'order of things' was entirely dismantled 'to get it right', namely to achieve the ruthlessness and rigorous efficiency needed for its proper function. It wasn't until the 16th century, with the advent of modern political economy and its analysis of production, labour and trade, that one gets a sense of why money, particularly in its relationship with capital and its complex relationship with the rest of society (the conversion of labour power into money via the essential route of surplus value), became a much-maligned and misunderstood category and a political hot potato. Foucault now asks how it is that modern western political economy, together with political philosophy and political science, came to ask the question concerning money but was utterly perplexed by it (this is a question that particularly irritated and irked Karl Marx throughout his life). That money had various associations with production, labour, government and trade was beyond doubt, but its exact relationship with the rest of society was entirely missed by economists; yet their version of events was entirely accepted as true. Foucault begins to go into the whole production of truth (both philosophical and political), its "breaks", "discontinuity", 'epistemological unconscious' and theoretical splitting ("episteme"). From this Greek period starting from 800 BCE, Foucault pursues the path of scientific and political knowledge and the emergence and conditions of possibility of philosophical knowledge, and ends up with "the problem of political knowledge (i.e. Aristotelian notions of the political animal) of what is necessary in order to govern the city and put it right." He then divided his work on the history of systems of thought into three interrelated parts, the "re-examination of knowledge, the conditions of knowledge, and the knowing subject." Penal Theories and Institutions (1971–1972) In these lectures, to be published in English in 2020 and a first precursor of Discipline and Punish, Foucault studied the foundations of what he calls "disciplinary institutions" (punitive power) and the productive dimensions of penalty. The Punitive Society (1972–1973) These lectures, published in English in 2015, continued the investigation of power and penal institutions begun in 1971–72. Foucault spent a lot of time during this period trying to make intelligible the internal and external dynamics of what we call the prison. He asked, "What are the relations of power which made possible the historical emergence of something like the prison?". This was correlated with three terms: firstly, 'measure', "a means of establishing or restoring order, the right order, in the combat of men or the elements; but also a matrix of mathematical and physical knowledge" (treated in more detail in the Will to Knowledge lectures of 1971); secondly, the 'inquiry', "a means of establishing or restoring facts, events, actions, properties, rights; but also a matrix of empirical knowledge and natural sciences" (from the 1972 lectures Theories on Punishment and Penal Theories and Institutions); and thirdly, 'the examination', treated as "the permanent control of the individual, like a permanent test with no endpoint". Foucault links the examination with 18th-century political economy, and the productive labourers with the wealth they produce and the forces of production. 
Abnormal (1974–1975) Influenced by the work of Georges Canguilhem, in these lectures (first published in English in 2003) Foucault explored how power defined the categories of "normality" and "abnormality" in modern psychiatry. "Society Must Be Defended" (1975–1976) This series of lectures forms a trilogy with Security, Territory, Population and The Birth of Biopolitics, and it contains Foucault's first discussion of biopower. It also contains an explanation of the term "civil war", in the form of a rigorous treatment of a working definition. Foucault goes into great detail on how power (as he saw it) becomes a battleground, drifting from civil war to a generalized pacification of the individual and, particularly, of the systems he (the individual) relies upon and to which he gives loyalty: "According to this hypothesis, the role of political power is perpetually to use a sort of silent war to re-inscribe that relationship of force, and to re-inscribe it in institutions, economic inequalities, language, and even the bodies of individuals." Foucault begins to explain that this generalized form of power is not only rooted in disciplinary institutions but is also concentrated in "political sovereignty, the military, and war," so it is in turn spread evenly throughout modern society as a network of domination. Foucault then discusses what lies behind the "academic chestnut" which could not be deciphered by his historical predecessors: namely, the disjointed and discontinuous movement of history and power (biopower). What is meant by this? For Foucault's predecessors, history was concerned with the deeds of monarchs and full lists of their accomplishments, in which the sovereign is presented in the text as doing all things 'great'; moreover, this 'greatness' of deeds was presented as accomplished by the sovereign himself, without any help. Monument building, allegedly carried out by the monarch without any help from skilled and trained professionals, serves as a perfectly good example of such sovereign "greatness". However, for Foucault, this is not the case. Foucault's genealogy comes into play here, where he tries to build a bridge between two theoretical notions: disciplinary power (disciplinary institutions) and biopower. He investigates the constant shift throughout history between these two 'paradigms', and how developments from these two 'paradigms' produced new subjects. The previous historical dimension so often portrayed by historians, Foucault argues, was sovereign history, which acts as a ceremonial tool for sovereign power: "It glorifies and adds lustre to power. History performs this function in two modes: (1) in a "genealogical" mode (understood in the simple sense of that term) that traces the lineage of the sovereign." By the 17th century, with the development of mercantilism, statistics (mathematical statistics) and political economy, this reaches its most vitriolic and vicious form in what would later be called nation states, where whole populations were involved (in the guise of armies, both industrial and military), and in which a continuous war is enacted not amongst ourselves (the population) but as a struggle for the state's very existence, ultimately leading to a "thanatopolitics" (a philosophical term for the politics of organizing who should live and who should die, and how, in a given form of society) of the population on a large industrial scale. This is where Foucault discusses a "counterhistory" of "race struggle or race war." 
According to Foucault, Marx and Engels borrowed the term "race" and transposed it into a new term, "class struggle", which later Marxists accepted and began to use. This is partly to do with Marx's relationship with Carl Vogt, who for his time was a convinced polygenist, a belief which Marx and Engels had inherited from Vogt despite their antagonism towards him. Foucault quotes letters written by Marx to Engels in 1854 and to Joseph Weydemeyer in 1852. Foucault challenges the traditional notions of racism in explaining the operation of the modern state. When Foucault talks of racism he is not talking about what we might traditionally understand it to be: an ideology, a mutual hatred. In Foucault's reckoning modern racism is tied to power, making it something far more profound than traditionally assumed. Tracing the genealogy of racism, Foucault proposes that 'race', previously used to describe the division between two opposing societal groups distinguished from one another for example by religion or language, came to be conceived in the late 18th century in biological terms. The concept of "race war", which referred to conflict over the legitimacy of the power of the established sovereign, was "reformulated" into a struggle for existence driven by concern about the biopolitical purity of the population as a single race that could be threatened from within its own body. For Foucault "racism is born at the point when the theme of racial purity replaces that of race struggle" (p. 81). For Foucault, racism "is an expression of a schism within society ... provoked by the idea of an ongoing and always incomplete cleaning of the social body…it structures social fields of action, guides political practice, and is realized through state apparatuses…it is concerned with biological purity and conformity with the norm" (pp. 43–44). In modern states, racism is not defined by the action of individuals; rather, it is vested in the State and finds form in its structures and operation: it is state racism. State racism serves two functions. Firstly, it makes it possible to divide the population into biological groups, "good and bad" or "superior or inferior" 'races'. Fragmented into subspecies, the population can be brought under State control. Secondly, it facilitates a dynamic relationship between the life of one person and the death of another. Foucault is clear that this relationship is not one of warlike confrontation but rather a biological one, based not on the individual but on life in general: "the more inferior species die out, the more abnormal individuals are eliminated the fewer degenerates there will be in the species as a whole, and the more I – as species rather than individual – can live, the stronger I will be, the more vigorous I will be, I will be able to proliferate" (p. 255). In effect race, defined in biological terms, "furnished the ideological foundation for identifying, excluding, combating, and even murdering others, all in the name of improving life not of an individual but of life in general" (p. 42). What is important here is that racism, inscribed as one of the modern state's basic techniques of power, allows enemies to be treated as threats, not political adversaries. But through what mechanism are these threats treated? Here the technologies of power described by Foucault become important. 
Foucault argues that new technologies of power emerged in the second half of the 18th century, which he termed biopolitics and biopower (Foucault uses the two terms synonymously); these technologies focused on man-as-species and were concerned with optimising the state of life, with taking control of life and intervening to "make live and let die". Importantly, Foucault argues, these technologies did not replace the technologies of sovereign power, with their exclusive focus on disciplining the individual body to be more productive by punishing or killing individuals, but embedded themselves into them. It was in exploring how this new power, with life as its object, could come to include the power to kill that Foucault theorizes the emergence of state racism. Foucault argues that the modern state must at some point become involved with racism in order to function, since once a State functions in a biopolitical mode it is racism alone that can justify killing. Once a group is determined to be a threat to the population, the State can take action to kill in the name of keeping the population safe and thriving, healthy and pure. It is racism that allows the right to kill to be squared with a power that seeks to improve life; state racism delivers actions that, while appearing to derive from altruistic intentions, veil the murder of the "Other". Following this argument to its logical end, it is only when there is never a need for the State to claim the right to kill or to let die that state racism will disappear. Since killing is predicated on racism, it follows that the "most murderous states are also the most racist" (p. 258). Foucault refers to the way in which Nazism and the state socialism of the Soviet Union dealt with ethnic or social groups and their political adversaries as examples of this. Threats, however, can change over time, and here the utility of 'race' as a concept comes into its own. While never defining 'race', Foucault suggests that the word is "not pinned to a stable biological meaning" (p. 77), with the implication that it is a socially and historically constructed concept through which a discourse of truth is enabled. This makes 'race' something that is easy for the State to adopt and exploit for its own purposes: 'race' becomes a technology used by the state to structure threats and to make decisions over the life and death of sub-populations. In this way it helps to explain how the idea of 'race' or cultural difference is used to wage wars such as the "war on terror" or the "humanitarian war" in East Timor.
Security, Territory, Population (1977–1978)
The course deals with the genesis of a political knowledge that placed at the centre of its concerns the notion of population and the mechanisms capable of ensuring its regulation, as well as the procedures and means employed to ensure, in a given society, "the government of men". A transition from a "territorial state" to a "population state" (nation state)? Foucault examines the notions of biopolitics and biopower as a new technology of power over populations, distinct from punitive disciplinary systems, by tracing the history of governmentality from the first centuries of the Christian era to the emergence of the modern nation state. These lectures illustrate a radical turning point in Foucault's work, at which a shift to the problematic of the government of self and others occurred.
Foucault's challenge to himself in this series of lectures is to decipher the genealogical split between power in ancient and medieval society and power in late modern society such as our own; by "split" Foucault means power as a force for the manipulation of the human body. Previous notions of power failed to account for the historical subject and for general shifts in techniques of power; on those older accounts, according to Foucault's genealogy of power, it was simply denied that manipulation of the human body by unforeseen, outside forces had ever existed. On that view, human ingenuity and man's capacity for increasing rationalisation were the primary motors behind social phenomena and the human subject, and change was the result of increasing human reason, conscience and ingenuity. Foucault denies that any such process is to be found in the historical record and insists that this kind of thought is a misleading abstraction. Foucault identifies the main driving force behind this set of accelerated changes as the modern human sciences, together with the technologies available to skilled professionals from the 16th century onward and a whole set of clever techniques used to shift the old social order into the new order of things. What was significant, however, was the notion of population, practised upon the entire human species on a global mass scale rather than in separate, locally defined areas. By population, Foucault means something fluid and malleable; he refers to 'a multiplicity of men, not to the extent that they are nothing more than individual bodies, but to the extent that they form, on the contrary, a global mass that is affected by overall processes of birth, death, production, taxation, illness and so forth'. One should also note that Foucault does not treat population as a singular event but as a means of circulation tied to factors of security. Also significant was the idea of the population's "freedom" under the new modern nation state, and the 'neo-discourse' erected around such notions as freedom, work and liberalism, the ideological stance of the state (mass popular democracy and the voting franchise); the state was only too willing to recognize and grant freedom, for example as the object of security. Population, in Foucault's understanding, is a self-regulating mass: an agglomeration or circulation of people and things which co-operate and co-produce order free from heavy state regulation; the state governs less, allowing the population to "govern itself". For Foucault, the freedom of a population is grasped at the level of how its elements circulate, and techniques of security enact themselves through, and upon, the circulation that occurs at the level of population. In Foucault's opinion the modern concept of population, as opposed to the ancient and medieval version of "populousness", whose roots go back as far as the period of the Book of Numbers in the Old Testament, did real work in both political theory and practice; at the least, the construction of the concept of population is central to the creation of new orders of knowledge, new objects of intervention and new forms of subjectivity. However, in order to understand fully what Foucault is trying to convey, a few things should be said about the techniques of alteration that Foucault discusses in this series of lectures.
The ancient and medieval version of political power was centred on a single figure, a king, emperor, prince or other ruler (in some cases the pope), whose rule over his principal territory was considered absolute (absolute monarchy) by both the political philosophy and the political theory of the day; such notions still survive in our own time. Foucault uses the term "population state" to designate a newly founded technology, based on the principles of security and territory, under which there is a "population" to govern on a global mass scale, each population having its own territorial integrity (a separate nation) mapped out by experts in treaty negotiations and by the emerging profession of cartography, which benefited from 15th-century advances in map-making technologies and eventually produced, by the 18th century, what we now know as nation states. These technologies take place at the level of "population", Foucault argues, and involve the shifting aside of the body of the king or territorial ruler. By the end of the medieval period the body (or persona) of the territorial ruler had come under increasing financial pressure; a cursory look at medieval financial records tends to show that the monarch could not repay all the debts due to his creditors, and would readily default on loans, causing financial ruin to those creditors. Foucault notes that by the 18th century several changes had begun to take place: the re-organization of armies; the appearance of an emerging working population, both military and industrial; and the emergence of the mathematical, biological and physical sciences, which gave birth to what Foucault calls biopower and to a political apparatus (machine) to take care of biological life (in the form of medicine and health) and political life (mass democracy and the voting franchise for the population). An apparatus, both economic and political, was required that was far more sophisticated than anything previous social organisations had at their disposal. Banks, for example, which function as financial intermediaries tied to the apparatus of the new 'state' machine, can service large-scale debts, such as a national debt or the cost of a modern army (sums that in modern terms can run to trillions of US dollars), which a king could never pay out of his limited personal resources; that would be both impracticable and impossible.
The Birth of Biopolitics (1978–1979)
The Birth of Biopolitics develops further the notion of biopolitics that Foucault introduced in his lectures on "Society Must Be Defended". It traces how eighteenth-century political economy marked the birth of a new governmental rationality and raises questions of political philosophy and social policy about the role and status of neo-liberalism in twentieth-century politics. Over the course of many centuries the association between biological phenomena and human political behaviour has received a great deal of attention, and in roughly the last sixty years the study of political and biological behaviour has developed within academic fields and journals. In his Collège de France lecture course of January 1978 Foucault used the term biopolitics (not for the first time) to denote political power over every aspect of human life.
Why did Foucault use the term 'biopolitics' in the first place? The term means many different things to many different people, and to understand it as Foucault saw, used and understood it, we have to consider its very different meanings. For Foucault the term denotes the association between biological phenomena and human political behaviour: the maximizing and augmenting of the human "abilities machine". Over the course of evolutionary time this abilities machine of man becomes species-specific, comprising language capabilities, neuronal and cognitive capabilities, and so forth. Through the history of the discursive technologies of scientific knowledge, Foucault argues, this becomes a field of knowledge established by groups of experts in disciplines such as astronomy, biology, chemistry, geoscience, physics, anthropology, archaeology, linguistics, psychology, sociology and history. The study of a new and rigorous discipline, allied with a new language (discourse technologies) whose mastery is required, develops into a powerful force in the political realm as well as in biological evolution; the two, biology and politics, become powerful allies. Genetics describes the change that develops, over time, in the course of the human organism's existence. The two become conjoined almost unwittingly, yet both political philosophy and political science face a specific problem: neither can lay claim to independent knowledge, which is problematic for both lines of thought; not in the sense of ideology (as in Marxism) but in the sense of discursive technologies. Foucault insists that the scientific knowledge presented by historians is not an endeavour of the whole of humankind, particularly when written about by historians who claim that 'man' invented the sciences, any more than the Nazis, the ultimate embodiment of evil, represented the whole of humankind, or the whole of humankind was to blame for the Nazi atrocities. It is, for all intents and purposes, a collaborative enterprise by groups of specially trained specialists forming a scientific community, who have unfettered access to the whole of society through their scientific knowledge and expertise. Change does indeed happen both within the organism and in the organism's properties; the species is unable to correct such changes directly, and biological change moves beyond any individual or single member of the species. These changes are aimed at the species as a whole, and characteristics and traits are retained at the biological, ecological and environmental levels. In the human sciences (biology and genetics) these changes happen at a genetic and biological level; they are unalterable and pass from one generation to the next, not at the level of the individual member of the species. This is the core of the theory of Charles Darwin and his proponents: the theory of evolution by natural selection. Foucault's analysis tries to show, contrary to previous thought, that the modern human sciences were not some obscure, universal, objective source lacking any lineage; rather, they took over the role of the Christian church in disciplining the body, replacing the soul and the confession of the Catholic church, together with the specific director of the process, in that case the deity (God), with indefinite supervision and discipline.
However, these new techniques required new 'directors' or 'editors', who replaced the priestly and pharaonic figures of much similar past vintage. These new governmental mechanisms, based upon the right of sovereignty and law, supported the fixed hierarchical organisation of the previous, feudal mode of government, but stripped the modern human subject of any kind of self-autonomy, rendering him not only fully fit for indoctrination, work and education, a fully conversant subject, but also vulnerable: he faces a permanent examination which he (the ordinary individual) has no chance of passing, which he is supposed to fail, and which has no end point. Foucault maintains that these techniques were deliberate, cold, calculating and ruthless; the human sciences, far from being merely "a way of looking at the world", constituted in the knowledge/power paradigm a 'cheap', efficient and cost-effective method of producing a subjugated and docile human subject (not only a citizen, but a political and productive citizen) as an instrument of administrative control and of concern, through the state, for the well-being of the population (and a constant aid to the spread of biopower), with the help of scientific classifications and new disciplinary technologies, including the polity, readily applicable to the human body and mind. Here are a few examples of what Foucault means by this type of "biopower" and bio-history of man. As the recent discovery of mirror neurons has demonstrated, Foucault (although these techniques used in psychiatry and psychology are not usually mentioned alongside his name) hit on something that rigorous research methods may prove beyond reasonable doubt: that manipulation of social phenomena (which includes the human body and the mind) is certainly possible. Techniques developed during the First and Second World Wars, which started out as field experiments among military personnel, were then extended into ordinary civilian life; techniques borrowed from the human cognitive sciences found their way into psychoanalysis, psychiatry, psychology, clinical psychology (Lightner Witmer) and clinical psychiatry (see this encyclopedia's article on political abuse of psychiatry): "Mobilisation and manipulation of human needs as they exist in the consumer". Ernest Dichter "was the first to coin the term focus group and to stress the importance of image and persuasion in advertising"; in Vance Packard's book The Hidden Persuaders, Dichter's name is mentioned extensively. Subjectivation is a term Foucault coined for this purpose, in which biological life itself is given over to constant testing and research (an examination) without end. One could ask: to whom are these new experts answerable? Foucault argues that they are answerable to absolutely no one. Just as, under past notions of absolute monarchy and the divine right of kings, the sovereign was answerable to nobody, these new experts are replacements for their predecessors, now democratised. Man's body, his soul and his mind can be manipulated and altered, and are liable to be vulnerable; every single aspect of the human subject is ripe for 'subjectification', and the technology, as it stands today, is unknown to us. This biological allegory of man carries with it endless possibilities from the perspective of the biological and physical sciences.
The above extracts show clearly that this "biopower" of man requires man himself to administer its sophisticated technologies, whereby one group of experts or professionals (the enquiry) can completely subjugate another, producing new human subjects (and new experts) through their expertise at manipulating social phenomena. In these examples, and according to this view, "the criminal is treated like a cancer", whereas human nature itself does not change; society is the only thing that ever gets produced, past, present or future.
On The Government Of The Living (1979–1980)
In the On The Government Of The Living lectures, delivered in the early months of 1980, Foucault begins to ask questions about Western man's unreserved obedience to power structures and about the pressing question of government: "Government of children, government of souls and consciences, government of a household, of a state, or of oneself." Or governmentality, as Foucault prefers to call it, although he fleshes out the development of that concept in his earlier lectures titled "Security, Territory, Population". Foucault tries to trace the kernel of "the genealogy of obedience" in Western society. The 1980 lectures attempt to relate the historical foundations of "our obedience", which must be understood as the obedience of the Western subject. Foucault argues that confessional techniques are an innovation of the Christian West intended to guarantee men's obedience to structures of power in return, so the belief goes, for Christian salvation. In his summary of the course Foucault asks: "How is it that within Western Christian culture, the government of men requires, on the part of those who are led, in addition to acts of obedience and submission, 'acts of truth,' which have this particular character that not only is the subject required to speak truthfully but to speak truthfully about himself?" The reader should note that much of this kind of work had been done before, albeit in what is best described as brilliant, lost and forgotten scholarship by scholars such as Ernst Kantorowicz (his work on the body politic and the king's two bodies), Percy Ernst Schramm, Carl Erdmann, Hermann Kantorowicz, Frederick Pollock and Frederic Maitland. However, Foucault was after the genealogical dynamics, and his main thrust was "regimes of truth" and the emergence and gradual development of "reflexive acts of truth". Foucault locates the very beginning of this act of obedience to power structures, and of the truth that they bring, in the first Christian institutions between the 2nd and the 5th century CE. This is where Foucault starts to use his main tool, genealogy, as his main focus, and it is with this genealogical tool that one finally comes to understand fully what genealogy actually means. Foucault goes into painstaking detail on Christian baptism, its contingency and discontinuity, in order to find "the genealogy of confession"; this is an attempt, Foucault argues, to write a "political history of the truth".
Subjectivity and Truth (1980–1981)
In Subjectivity and Truth, Foucault undertakes a deep analysis of sexuality, sexual ethics, and marriage. He looks at the evolving concepts of relationships, marriage, and spouses as historical constructs.
The Hermeneutics of the Subject (1981–1982)
In these lectures, Foucault develops notions on the ability of the concept of truth to shift through time, as described by the modern human sciences (for example ethnology), in contrast to ancient society (Aristotelian notions).
It discusses how these notions are accepted as truth and produce the self as true. This is followed by a discussion of the existence of this truth and of the discourse of truth for the experience of the self.
The Government of Self and Others (1982–1983)
The final two years of lectures deal with the concept of parrhesia, translated by Foucault as 'frank speech', and the relationship between the political and the self.
The Courage of Truth (1983–1984)
The last course Foucault gave at the Collège de France was delayed by illness, for which Foucault received treatment in January 1984. The lectures were ultimately delivered over nine consecutive Wednesdays in February and March of that year. In several of the lectures, Foucault complains of suffering from a bad flu and apologizes for his diminished strength. Although relatively little was known about AIDS at the time, there are several indications that Foucault already suspected he had contracted the virus. The content of the course expands on the analysis of parrhesia Foucault developed during the previous year, with renewed focus on Plato, Socrates, Cynicism, and Stoicism. On February 15, Foucault delivered a moving lecture on the death of Socrates and the meaning of Socrates' last words. On March 28, twelve weeks before he succumbed to AIDS-related complications, Foucault delivered his final lecture. His last words at the lectern were: References External links Michel Foucault Audio Archive Guide Michel Foucault Biopolitics Political philosophy Political science University and college lecture series Recurring events established in 1970 Books of lectures
Foucault's lectures at the Collège de France
[ "Engineering", "Biology" ]
8,032
[ "Biopolitics", "Genetic engineering" ]
44,250,808
https://en.wikipedia.org/wiki/Psychological%20autopsy
Psychological autopsy in suicidology (also called psychiatric autopsy) is a systematic procedure for evaluating suicidal intention in equivocal cases. It was invented by American psychologists Norman Farberow and Edwin S. Shneidman during their time working at the Los Angeles Suicide Prevention Center, which they founded in 1958. The method entails collecting all available information on a deceased individual through forensic examinations, examination of health records, and interviews with relatives and friends. This information is then used to assess the individual's risk factors and psychological state before death, in order to help establish the cause of death.
History
Farberow and Shneidman pioneered the psychological autopsy while working at the Los Angeles Suicide Prevention Center in the 1950s. They developed the procedure after being asked by the coroner to help identify the cause of death in equivocal suicides, and it was influenced by their time studying suicide notes from the Los Angeles County Coroner's Office. The psychological autopsy method was first used when Coroner Theodore J. Curphey asked for the Suicide Prevention Center's help in investigating a high number of drug-induced deaths. The procedure was also used after Curphey enlisted psychiatrist Robert E. Litman and Farberow to help determine the mental state of Marilyn Monroe before her death; Farberow ruled Monroe's death a probable suicide after the investigation. The psychological autopsy method has been adopted by the United States Department of Defense, and in 2002 psychological autopsies became a part of its training curriculum. The psychological autopsy has also been used to help determine the likelihood of suicide in criminal cases such as Jackson v. State and U.S. v. St. Jean, and in civil cases such as Mutual Life Insurance Company v. Terry.
Processes
The psychological autopsy was developed to help clarify equivocal deaths: deaths without a clear or appropriate mode. Examples of equivocal death scenarios include drug-related deaths, autoerotic and self-induced asphyxia, vehicular deaths, and drownings. When conducting psychological autopsies, investigators attempt to identify the decedent's intention with regard to their death. Psychological autopsies first attempt to answer how the individual died, why they died at a specific time, and what the most probable cause of death was. If the cause of death is clear, investigators attempt to determine the reasons for the individual's actions that led to death. Suicidal intent is assessed from factors such as the means of death, prior threats to commit suicide, and the settling of financial accounts; in psychological autopsies, mental disorders are also strongly associated with suicide. Intent is determined by analyzing information about the decedent collected from interviews with friends and family, along with information gained from the related forensic examination into the decedent's death. Information from the decedent's health records is also examined, including any illnesses, treatment and therapy, and family history of death. Investigators usually look for details such as behavioral patterns in response to stress, recent changes in behavior, suicidal ideation, use of alcohol and/or drugs, and recent traumatic events.
Ovenstone criteria
The European Union Agency for Railways uses the so-called Ovenstone criteria to distinguish a death as a deliberate act.
The British College of Policing also advises using these criteria, named after Irene Ovenstone, to determine a suspected suicide. Irene Ovenstone introduced the criteria in 1973 and applied the method in a review of verdicts in Edinburgh, which revealed a potential under-reporting of suicide of 40.67%. These criteria are:
A suicide note, written or oral, in which the intention is communicated and the traffic incident supports a suicide
A traffic incident that indicates a suicide, in combination with knowledge of:
recent suicide attempts
recent indirect suicidal communication
communication about committing suicide or having no reason to live
ongoing mental illness or prolonged depression
a previous major traumatic life event
A traffic incident that strongly suggests a suicide
References Psychiatric research Interdisciplinary subfields of sociology Suicide
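Taken together, the criteria form a disjunctive decision rule: a supported note suffices on its own, an indicative incident must be combined with at least one background indicator, and a strongly suggestive incident suffices by itself. The minimal Python sketch below encodes that reading; the case representation and all field names are hypothetical, invented purely for illustration.

# Hypothetical sketch of the Ovenstone decision rule described above.
# All field names are illustrative inventions, not an official schema.
BACKGROUND_INDICATORS = {
    "recent_suicide_attempts",
    "recent_indirect_suicidal_communication",
    "suicidal_communication_or_no_reason_to_live",
    "ongoing_mental_illness_or_prolonged_depression",
    "previous_major_traumatic_life_event",
}

def meets_ovenstone_criteria(case: dict) -> bool:
    # Branch 1: a written or oral note communicating intent,
    # with an incident that supports suicide.
    if case.get("suicide_note") and case.get("incident_supports_suicide"):
        return True
    # Branch 2: an incident indicating suicide, combined with at least
    # one known background indicator.
    if case.get("incident_indicates_suicide") and \
            BACKGROUND_INDICATORS & set(case.get("indicators", [])):
        return True
    # Branch 3: an incident that by itself strongly suggests suicide.
    return bool(case.get("incident_strongly_suggests_suicide"))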
Psychological autopsy
[ "Biology" ]
805
[ "Behavior", "Human behavior", "Suicide" ]
44,251,177
https://en.wikipedia.org/wiki/AP%20Computer%20Science%20Principles
Advanced Placement (AP) Computer Science Principles (also known as AP CSP) is an AP Computer Science course and examination offered by the College Board under the Advanced Placement program. The course is designed as an equivalent to a first-semester course in computing. Assessment for AP Computer Science Principles is divided into two parts: a Create Performance Task completed during the course and an AP exam. AP Computer Science Principles examines a variety of computing topics on a largely conceptual level, and teaches procedural programming. In the Create "Through-Course Assessment", students must develop a program, demonstrated in a video and a written reflection. The course may be taught in any programming language with procedures, mathematical expressions, variables, lists, conditionals, and loops. Coding portions of the AP exam are based on both text-based and block-based pseudocode, as defined by the provided reference sheet. The AP Computer Science Principles Exam was administered for the first time on May 5, 2017.
Course
The framework focuses on computational thinking practices, which are applied throughout the curriculum. The concept outline included in the curriculum is divided into seven units called "Big Ideas". Each unit contains a series of "Learning Objectives". Each "Learning Objective" is a general benchmark of student performance or understanding which has an associated "Enduring Understanding", a core comprehension which students should retain well after completing the course. Each "Learning Objective" is split into multiple "Essential Knowledge" standards: specific facts or content which the student must know to demonstrate mastery of the learning objective when assessed.
Curriculum providers
Through-Course Assessment
Task 1: Create – Applications from Ideas
Task Description: Students create computational artifacts through the design and development of programs.
Task Time Limit: 12 hours of class time
Task Response Format:
Individual Program: source code PDF and video
Individual Reflection: 300 words
Evaluate, Archive and Present Task
Prior to 2021, the first task was the Explore task, which was removed before the 2021 exam. The pre-2021 task is described as follows:
Task 1: Explore – Implications of Computing Innovations
Task Description: In the classroom, students explore the impacts of computing on social, economic, and cultural areas of our lives.
Task Time Limit: 8 hours of class time
Task Response Format:
Written Response: Innovation: 400 words max
Written Response: Population and Impact: 300 words max
Visual Artifact: visualization or graphic
Visual Artifact Summary: 50 words
Evaluate, Archive and Present Task
Exam
The AP exam uses paper and pencil. (As exceptions, in 2020 only the Create and Explore tasks were assessed, and in 2021 only the Create task and the multiple-choice section were administered.) It lasts 180 minutes and includes approximately 76 questions. The exam is composed of two sections:
74 multiple-choice questions:
Single-select multiple choice: select 1 answer from among 4 options.
Multiple-select multiple choice: select 2 answers from among 4 options.
2 written responses
References External links AP Central AP Students Computer science education Advanced Placement
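Because the course may be taught in any language that supports procedures, mathematical expressions, variables, lists, conditionals, and loops, a few lines suffice to illustrate all of the required constructs. The following sketch uses Python and is purely illustrative; it is not drawn from any official College Board material.

# Illustrative program touching each construct the framework requires.
def average_above_threshold(scores, threshold):   # a procedure
    selected = []                                 # a list
    for s in scores:                              # a loop
        if s > threshold:                         # a conditional
            selected.append(s)
    if not selected:
        return 0
    return sum(selected) / len(selected)          # a mathematical expression

print(average_above_threshold([55, 72, 90, 48, 81], 60))  # prints 81.0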
AP Computer Science Principles
[ "Technology" ]
600
[ "Computer science education", "Computer science" ]
44,251,626
https://en.wikipedia.org/wiki/Vector%20NTI
Vector NTI was a commercial bioinformatics software package used by many life scientists in the early 2000s to work, among other things, with nucleic acids and proteins in silico. It allowed researchers, for example, to plan a DNA cloning experiment on the computer before actually performing it in the lab. It was originally created by InforMax Inc, North Bethesda, MD, in 1993, and versions in the early 2000s were well reviewed at the time. However, in 2008 it was licence-locked and became paid commercial software, which created problems for locked-in users, who were forced to buy the software to continue accessing their data on newer computers. What was previously a single software package was subsequently split into Vector NTI Express, Advanced, and Express Designer. Vector NTI was discontinued by its corporate parent Thermo Fisher at the end of 2019, and support ceased a year later.
Features
create, annotate, analyse, and share DNA/protein sequences
perform and save BLAST searches
design primers for PCR, cloning, sequencing or hybridisation experiments
plan cloning and run gels in silico
align multiple protein or DNA sequences
search NCBI's Entrez, view, and save DNAs, proteins, and citations
edit chromatogram data, assemble into contigs
See also Bioinformatics Cloning vector Computational biology Expression vector List of open source bioinformatics software Restriction map Vector (molecular biology) Vector DNA References External links Description of software Vector NTI homepage at Invitrogen.com Vector NTI at openwetware.org Vector NTI v10 (only PC) Tutorials Vector NTI tutorial at NorthWestern.edu Other description of Vector NTI Viewer Bioinformatics software
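Although Vector NTI itself is discontinued, the kind of in-silico sequence manipulation it offered can be sketched with the open-source Biopython library (compare the list of open source bioinformatics software above). This is a minimal, illustrative sketch; the example sequence is arbitrary.

# Basic sequence manipulation of the kind Vector NTI offered,
# sketched with the open-source Biopython library.
from Bio.Seq import Seq

seq = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")  # arbitrary example

print(seq.reverse_complement())  # the complementary strand, reversed
print(seq.translate())           # conceptual translation (stops shown as *)
gc = 100 * (seq.count("G") + seq.count("C")) / len(seq)
print(f"GC content: {gc:.1f}%")  # a rough proxy for melting behaviour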
Vector NTI
[ "Biology" ]
359
[ "Bioinformatics", "Bioinformatics software" ]
44,252,081
https://en.wikipedia.org/wiki/List%20of%20largest%20exoplanets
Below is a list of the largest exoplanets so far discovered, in terms of physical size, ordered by radius.
Limitations
This list of extrasolar objects will change over time because of diverging measurements published in different scientific journals, the varying methods used to examine these objects, and the notably difficult task of discovering extrasolar objects in general. These objects are not stars and are quite small on a universal or even stellar scale; moreover, some of them might be brown dwarfs, sub-brown dwarfs, or might not exist at all. Because of this, the list only cites the most certain measurements to date and is prone to change.
List
The sizes are listed in units of Jupiter radii (RJ, 71,492 km). This list is designed to include all planets that are larger than 1.6 times the size of Jupiter. Some well-known planets below this cut-off have been included for the sake of comparison.
Notes
Candidates for largest exoplanets
Unconfirmed exoplanets
These planets are also larger than 1.6 times the size of the largest planet in the Solar System, Jupiter, but have yet to be confirmed or are disputed. Note: some data may be unreliable or incorrect due to unit or conversion errors.
Exoplanets with uncertain radii
This list contains planets with uncertain radii that could be below or above the adopted cut-off of 1.6 RJ, depending on the estimate.
Notes
Chronological list of largest exoplanets
These exoplanets were the largest known at the time of their discovery.
Notes
See also Lists of planets List of smallest exoplanets List of largest cosmic structures List of largest galaxies List of largest nebulae List of largest known stars Lists of astronomical objects List of most massive stars References Largest Exoplanets, largest
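Converting between the Jupiter radii used by the list and kilometres is straightforward arithmetic; the short sketch below illustrates it using the list's own 1.6 RJ cut-off as the example value.

R_JUPITER_KM = 71_492  # the Jupiter radius adopted by the list

def jupiter_radii_to_km(r):
    return r * R_JUPITER_KM

# The list's 1.6 RJ cut-off expressed in kilometres:
print(f"{jupiter_radii_to_km(1.6):,.0f} km")  # 114,387 km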
List of largest exoplanets
[ "Astronomy" ]
367
[ "Astronomy-related lists", "Lists of superlatives in astronomy" ]
44,252,271
https://en.wikipedia.org/wiki/List%20of%20largest%20galaxies
This is a list of the largest known galaxies, sorted in order of increasing major-axis diameter. The unit of measurement used is the light-year (approximately 9.46 trillion kilometers).
Overview
Galaxies are vast collections of stars, planets, nebulae and other objects that are surrounded by an interstellar medium and held together by gravity. They do not have a definite boundary by nature, and are characterized by gradually decreasing stellar density as a function of increasing distance from their centers. Because of this, measuring the sizes of galaxies can often be difficult and can yield a wide range of results depending on the sensitivity of the detection equipment and the methodology used. Some galaxies emit more strongly at wavelengths outside the visible spectrum, depending on their stellar populations, whose stars may emit more strongly at wavelengths beyond the detection range. It is also important to consider the morphology of the galaxy when attempting to measure its size, an issue raised by the Russian astrophysicist B. A. Vorontsov-Vel'yaminov in 1961, who considered separate determination methods for measuring the sizes of spiral and elliptical galaxies. For a full context about how the diameters of galaxies are measured, including the estimation methods used in this list, see section Galaxy#Physical diameters.
List
Listed below are galaxies with diameters greater than 700,000 light-years. This list uses the mean cosmological parameters of the Lambda-CDM model based on results from the 2015 Planck collaboration, where H0 = 67.74 km/s/Mpc, ΩΛ = 0.6911, and Ωm = 0.3089. Because of the different techniques used, the figures listed for the galaxies carry varying degrees of confidence. References for those sizes, plus further details, can be accessed by clicking the link for the NASA/IPAC Extragalactic Database (NED) on the right-hand side of the table. Also listed below are some notable galaxies under 700,000 light-years in diameter, for the purpose of comparison. All links to NED are available, except for the Milky Way, which is linked to the relevant paper detailing its size.
See also List of largest known stars List of most massive stars List of most massive black holes List of largest cosmic structures List of largest nebulae Notes References Further reading Galaxies Lists of superlatives in astronomy Lists of extreme points Lists of galaxies
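A common step in estimating a galaxy's physical diameter is converting an observed angular size and an assumed distance via the small-angle approximation. The sketch below illustrates this for a nearby galaxy, where cosmological corrections are negligible; the input numbers are invented for illustration.

import math

def physical_diameter_ly(angular_size_arcsec, distance_ly):
    # Small-angle approximation: diameter ≈ distance × angle (in radians).
    theta_rad = math.radians(angular_size_arcsec / 3600.0)
    return distance_ly * theta_rad

# A galaxy subtending 190 arcseconds at 60 million light-years:
print(f"{physical_diameter_ly(190, 60e6):,.0f} light-years")  # ≈ 55,000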
List of largest galaxies
[ "Astronomy" ]
485
[ "Astronomy-related lists", "Lists of superlatives in astronomy" ]
44,252,795
https://en.wikipedia.org/wiki/Thomas%27%20cyclically%20symmetric%20attractor
In dynamical systems theory, Thomas' cyclically symmetric attractor is a 3D strange attractor originally proposed by René Thomas. It has a simple form which is cyclically symmetric in the x, y, and z variables and can be viewed as the trajectory of a frictionally damped particle moving in a 3D lattice of forces. The simple form has made it a popular example. It is described by the differential equations
dx/dt = sin(y) − bx
dy/dt = sin(z) − by
dz/dt = sin(x) − bz
where b is a constant. b corresponds to how dissipative the system is, and acts as a bifurcation parameter. For b > 1 the origin is the single stable equilibrium. At b = 1 it undergoes a pitchfork bifurcation, splitting into two attractive fixed points. As the parameter is decreased further they undergo a Hopf bifurcation at b ≈ 0.32899, creating a stable limit cycle. The limit cycle then undergoes a period-doubling cascade and becomes chaotic at b ≈ 0.208186. Beyond this the attractor expands, undergoing a series of crises (up to six separate attractors can coexist for certain values). The fractal dimension of the attractor increases towards 3. In the limit b = 0 the system lacks dissipation and the trajectory ergodically wanders the entire space (with an exception for 1.67% of initial conditions, where it drifts parallel to one of the coordinate axes: this corresponds to quasiperiodic tori). The dynamics has been described as deterministic fractional Brownian motion, and exhibits anomalous diffusion. References Nonlinear systems Dynamical systems Chaotic maps
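The attractor is straightforward to explore numerically. Below is a minimal integration sketch using SciPy; the choice b = 0.19 (inside the chaotic regime, below the onset at b ≈ 0.208186), the time span, and the initial condition are illustrative choices, not canonical values.

import numpy as np
from scipy.integrate import solve_ivp

def thomas(t, state, b=0.19):
    # The three cyclically symmetric equations given above.
    x, y, z = state
    return [np.sin(y) - b * x,
            np.sin(z) - b * y,
            np.sin(x) - b * z]

sol = solve_ivp(thomas, (0, 500), [1.0, 0.0, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-8)
x, y, z = sol.sol(np.linspace(0, 500, 20_000))  # points along the trajectory
print(x[-1], y[-1], z[-1])  # a point on (or near) the attractor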
Thomas' cyclically symmetric attractor
[ "Physics", "Mathematics" ]
298
[ "Functions and mappings", "Mathematical objects", "Nonlinear systems", "Mechanics", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
44,253,213
https://en.wikipedia.org/wiki/Conservation%20Plan
The Conservation Plan is an important publication written by James Semple Kerr in 1982 and revised many times; it was a landmark in Australian conservation. The document "outlines the logical processes of the Burra Charter, and how to prepare a Conservation Plan to guide and manage change to a heritage item appropriately". Subtitled "a guide to the preparation of conservation plans for places of European cultural significance", it has guided building conservation in Australia and around the world. The Conservation Plan is widely used by heritage practitioners and property owners in Australia and worldwide as a primary guide to the process of researching, documenting and managing historic places in accordance with the Burra Charter, through a logical process. First published by the National Trust of Australia (NSW) in 1982, it has subsequently been reprinted in expanded form over seven editions and twelve printing impressions. The concept has been adopted worldwide as a critical process for conserving heritage places, for example in the British Heritage Lottery Fund guidance note Conservation Plans for Historic Places, and in Wales and British Columbia.
References
External links Conservation Plan online edition, Australia ICOMOS > Publications, 2013 Understanding The Burra Charter Excerpts from an Australia ICOMOS brochure explaining the principles of heritage conservation. Retrieved 15 August 2011. Architectural history Cultural heritage of Australia Conservation and restoration of cultural heritage Nature conservation in Australia
Conservation Plan
[ "Engineering" ]
267
[ "Architectural history", "Architecture" ]
67,146,822
https://en.wikipedia.org/wiki/1st%20Military%20Intelligence%20Brigade%20%28United%20Kingdom%29
1st Military Intelligence Brigade (1 MI Bde) was a formation of the British Army created following the Future Army Structure review; in 2014 it was absorbed into the new 1st Intelligence, Surveillance and Reconnaissance Brigade.
History
After the 2003 Iraq War, code-named Operation Telic by the British Army, a thorough reorganisation of the combat service support forces took place, known as the Future Army Structure. As part of this reorganisation, new 'support brigades' were formed. One of the new formations created was the 1st Military Intelligence Brigade, which commanded the military intelligence and psychological operations troops. Other new formations included the 8th Engineer Brigade and 2nd Medical Brigade. The brigade's mission was "to command troops, and to prepare, deliver & sustain MI (military intelligence) & PSYOPS (psychological operations) formations in order to conduct land operations in support of Land Command and Defence tasks". Even though the brigade itself never deployed, detachments from its sub-units served during Operation Telic and Operation Herrick. Under the Army 2020 programme announced in 2010, intelligence, surveillance, and reconnaissance formations were to be grouped together to form a new unit under Force Troops Command. Therefore, by September 2014 the brigade was disbanded; its military intelligence battalions went to form part of 1st Intelligence, Surveillance and Reconnaissance Brigade, while 15 Psychological Operations Group moved to 77th Brigade.
Organisation
Brigade organisation was:
Headquarters 1st Military Intelligence Brigade, at Chicksands Station
1st Military Intelligence Battalion, at Joint Headquarters Rheindahlen, Germany (supporting 1st (United Kingdom) Armoured Division)
2nd Military Intelligence Battalion, at AAC Netheravon (Intelligence Exploitation)
3rd Military Intelligence Battalion (TA), HQ in London (supporting HQ ARRC, Permanent Joint Headquarters, and Defence Intelligence Staff)
4th Military Intelligence Battalion, at Ward Barracks, Bulford Camp (supporting 3rd (United Kingdom) Mechanised Division)
5th Military Intelligence Battalion (TA), HQ in Edinburgh – formed on 1 April 2008
15 Psychological Operations Group, at Chicksands Station (Joint Tri-Services)
Footnotes References Military units and formations disestablished in 2014 Military intelligence units and formations of the United Kingdom Brigades of the British Army Military units and formations established in 2003 British Army Landmark programme
1st Military Intelligence Brigade (United Kingdom)
[ "Engineering" ]
446
[ "British Army Landmark programme", "Military projects" ]
67,149,242
https://en.wikipedia.org/wiki/Biswajeet%20Pradhan
Biswajeet Pradhan (born 1975) is a spatial scientist, modeller and author who works as a Distinguished Professor and the founding Director of the Centre for Advanced Modelling and Geo-spatial Information Systems (CAMGIS), Faculty of Engineering and IT, at the University of Technology Sydney, Australia. He works primarily in the fields of remote sensing, geographic information systems (GIS), complex modelling, and machine learning and artificial intelligence (AI) based algorithms, and their application to natural hazards, natural resources and environmental problems. Many of his research outputs have been put into practice. His research platform is mainly Asia and Australia, and he shares his findings worldwide. He is a permanent resident of Australia and Malaysia.
Education
Biswajeet Pradhan received his Bachelor of Science (Hons.) from Berhampur University in Odisha, India, in 1995. He then furthered his studies at the Indian Institute of Technology (IIT) Bombay, India, where he was awarded a Master of Science in Remote Sensing/Applied Geology in 1998. Later, he was awarded a Master of Technology in Civil Engineering from the Indian Institute of Technology (IIT) Kanpur, India, and Dresden University of Technology, Germany. He was awarded a PhD in Geographical Information Systems (GIS) and Geomatics from Universiti Putra Malaysia (UPM) in 2006. In 2011, he was awarded a Habilitation qualification in Remote Sensing from Dresden University of Technology (Germany), the highest academic qualification in several European countries.
Early career (1998–2010)
Pradhan started his career as a research scientist at the Indian Institute of Technology (IIT) Kanpur and at Dresden University of Technology, Germany, working from 1998 to 2002 and spending two consecutive years at each institution. In 2002, he was employed as a Senior Lecturer in the Department of Civil Engineering, Asian Institute of Medicine, Science and Technology, Malaysia, where he was involved in teaching and in research on new compression algorithms for laser scanning data. From 2005 to 2007, he was the Senior Manager at Cilix Corporation, Malaysia, where he undertook research and supervised R&D projects for the Malaysian Center for Remote Sensing under the Ministry of Science and Technology. Concurrently, in 2007, he joined UPM as a research associate at the Institute of Advanced Technology. In 2008, he was awarded an Alexander von Humboldt Fellowship and spent two years at TU Dresden (Germany), where he undertook research to develop geospatial modelling tools for landslide hazard and risk assessment.
Career in Malaysia and Australia (2011–present)
Pradhan has been an active researcher and full university professor, supervising postgraduate students and projects. In 2010, Pradhan joined Universiti Putra Malaysia (UPM) as an associate professor, and he was promoted to full professor in 2017. He was also the principal researcher at the Geo-spatial Information Science Research Centre (GISRC), UPM. Since September 2017, Pradhan has been a research-only academic at the University of Technology Sydney, where he is the founding director of the Centre for Advanced Modelling and Geo-spatial Information Systems (CAMGIS). He leads multi-disciplinary research at CAMGIS focusing on advanced spatial modelling, remote sensing, and artificial intelligence (AI) applied to natural hazards and other environmental and urban problems.
Achievements
He has been named a Highly Cited Researcher in the field of computer science for four years, and has an h-index of over 95. He was awarded the 2018 World Class Professor Award by the Indonesian Ministry of Research, Technology and Higher Education. In 2017, he received the Research Star Award from the Malaysian Ministry of Education. In recognition of his research excellence and important contributions to remote sensing and GIS, he was nominated as an Ambassador Scientist for the Alexander von Humboldt Foundation (Germany), overseeing research and networking opportunities between Germany and Southeast Asian countries. He has been an invited keynote speaker and forum panel member at national and international remote sensing events on several occasions.
Selected works Journals Books References External links 1975 births Living people Indian scientists Indian environmental scientists University of Putra Malaysia alumni IIT Kanpur alumni Berhampur University alumni TU Dresden alumni IIT Bombay alumni
Biswajeet Pradhan
[ "Environmental_science" ]
866
[ "Indian environmental scientists", "Environmental scientists" ]
67,149,288
https://en.wikipedia.org/wiki/NGTS-13b
NGTS-13b is an exoplanet discovered by the Next Generation Transit Survey (NGTS). It takes 4.12 days to orbit its host star, and its discovery was announced in January 2021.
Discovery
The planet was discovered by the Next Generation Transit Survey; the discovery paper notes that exoplanets are rarely found around giant and subgiant stars, because such hosts tend to engulf their planets.
Properties
NGTS-13b has about four times the mass of Jupiter but a radius similar to Jupiter's. The planet has the roughly four-day orbit typical of a hot Jupiter, an average temperature of 1,605 K, and a hotter dayside temperature of 1,828 K.
References Hot Jupiters Centaurus Exoplanets discovered in 2021
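The quoted mass and radius imply a bulk density several times Jupiter's; the following short sketch checks the arithmetic, using rounded literature values for Jupiter's mass and radius.

import math

M_JUP = 1.898e27  # kg, Jupiter's mass (rounded)
R_JUP = 7.1492e7  # m, Jupiter's equatorial radius (rounded)

mass = 4 * M_JUP       # roughly four Jupiter masses, as stated above
radius = 1.0 * R_JUP   # a radius similar to Jupiter's
density = mass / ((4 / 3) * math.pi * radius ** 3)
print(f"{density:,.0f} kg/m^3")  # ≈ 5,000 kg/m^3, vs. ~1,330 for Jupiter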
NGTS-13b
[ "Astronomy" ]
158
[ "Centaurus", "Constellations" ]
67,150,624
https://en.wikipedia.org/wiki/Online%20fair%20division
Online fair division is a class of fair division problems in which the resources, or the people to whom they should be allocated, or both, are not all available when the allocation decision is made. Some situations in which not all resources are available include:
Allocating food donations to charities (the "food bank" problem). Each donation must be allocated immediately when it arrives, before future donations arrive.
Allocating donated blood or organs to patients. Again, each donation must be allocated immediately, and it is not known when and what future donations will be.
Some situations in which not all participants are available include:
Dividing a cake among people at a party. Some people come early and want to get a piece when they arrive, but other people may come later.
Dividing the rent and rooms among tenants in a rented apartment, when one or more of them are not available during the allocation.
The online nature of the problem requires different techniques and fairness criteria than the classic, offline setting of fair division.
Online arrival of people
The party cake-cutting problem
Walsh studies an online variant of fair cake-cutting in which agents arrive and depart during the division process, as at a party. Well-known fair division procedures like divide and choose and the Dubins–Spanier moving-knife procedure can be adapted to this setting; they guarantee online variants of proportionality and envy-freeness. The online version of divide-and-choose is more robust to collusion and has better empirical performance.
The sequential fair allocation problem
Sinclair, Jain, Banerjee and Yu study the allocation of divisible resources when individuals arrive randomly over time. They present an algorithm that attains the optimal fairness-efficiency threshold.
The secretive agent problem
Several authors have studied fair division problems in which one agent is "secretive", i.e., unavailable during the division process. When this agent arrives, he is allowed to choose any part of the resource, and the remaining n−1 parts should be divided among the remaining n−1 agents such that the division is fair. Note that divide and choose satisfies these requirements for n = 2 agents, but extending this to 3 or more agents is non-trivial. The following extensions are known:
Meunier and Su show that an envy-free cake division always exists for any number of agents when there is a single secretive agent.
Frick, Houston-Edwards and Meunier show that an envy-free allocation of rooms and rent (also called rental harmony) always exists when there is a single secretive agent. The result holds for a very general class of tenants' preferences, including quasilinear valuations, "miserly tenants", and more.
Cheze shows a polynomial-time algorithm for connected proportional cake-cutting among any number of agents when there is a single secretive agent. The algorithm is based on the Even–Paz protocol and uses O(n log n) queries.
Arunachaleswaran, Barman and Rathi show a polynomial-time algorithm for rental harmony when there are n−1 agents with quasilinear utilities and the n-th agent is secretive. They also show efficient algorithms for almost envy-free (EF1) item allocation and ε-approximate envy-free cake-cutting.
The cake redivision problem
The cake redivision problem is a variant of fair cake-cutting in which the cake is already divided in an unfair way (e.g.
among a subset of the agents), and it should be re-divided in a fair way (among all the agents) while letting the incumbent owners keep a substantial fraction of their present value. The model problem is land reform.
Online arrival of resources
The food bank problem
The food bank problem is an online variant of fair allocation of indivisible goods. Each time, a single item arrives; each agent declares his or her value for the item; and the mechanism must decide which agent receives it. The model application is a central food bank, which receives food donations and has to allocate each donation to one of the charities that want it. The donations are consumed immediately, and it is not known what donations will come next, so the decision must be made based only on the previous donations.
Binary valuations
Working with Foodbank Australia, Aleksandrov, Aziz, Gaspers and Walsh initiated the study of the food bank problem in the case where all agents have binary valuations {0,1}; that is, for each arriving item, every agent states whether he likes the item or not, and the mechanism should decide which of the agents who like the item receives it. They study two simple mechanisms for this setting:
LIKE: each item is given uniformly at random to one of the agents who likes it. It is strategyproof and envy-free ex-ante, but does not guarantee even approximate envy-freeness ex-post. With binary valuations, it attains the optimal egalitarian and utilitarian social welfare. With additive valuations, its expected egalitarian and utilitarian social welfare are at least 1/n of the optimal values attainable by an offline algorithm. The same is true whether the agents are sincere or strategic (i.e., its price of anarchy is n).
BALANCED-LIKE: each item is given uniformly at random to one of the agents who likes it, from among those who have received the fewest items so far. It is envy-free ex-ante, and guarantees EF1 ex-post when all agents have binary valuations. It is strategyproof for two agents with binary valuations, but not strategyproof for three or more agents, even with binary valuations. When agents bid sincerely, with binary valuations, it attains the optimal egalitarian and utilitarian social welfare. With additive valuations, its expected egalitarian and utilitarian social welfare do not attain any constant-factor approximation of the offline optimal values, even with two agents. When agents bid strategically, even with binary valuations, its price of anarchy is n.
Additive valuations
In a more general case of the food bank problem, agents can have additive valuations, normalized to [0,1]. Due to the online nature of the problem, it may be impossible to attain some fairness and efficiency guarantees that are possible in the offline setting. In particular, Kahana and Hazon prove that no online algorithm always finds a PROP1 (proportional up to at most one good) allocation, even for two agents with additive valuations. Moreover, no online algorithm always finds any positive approximation of RRS (round-robin share). Benade, Kazachkov, Procaccia and Psomas study another fairness criterion: envy-freeness. Define the envy of agent i towards agent j as the amount by which i believes that j's bundle is better, that is, vi(Aj) − vi(Ai), where Ak denotes the bundle of agent k and vi denotes agent i's valuation. The max-envy of an allocation is the maximum of the envy over all ordered pairs of agents. Suppose the values of all items are normalized to [0,1].
Then, in the offline setting, it is easy to attain an allocation in which the max-envy is at most 1, for example by round-robin item allocation (this condition is called EF1). However, in the online setting, the envy might grow with the number of items (T). Therefore, instead of EF1, they aim to attain vanishing envy: the expected value of the max-envy of the allocation of T items should be sublinear in T (assuming the value of every item is between 0 and 1). They show that:
The LIKE algorithm (allocating each item uniformly at random) attains vanishing envy; the envy after T items is of order √(T/n), up to logarithmic factors.
There is a deterministic algorithm with a similar envy bound, using the method of pessimistic estimators.
For any n ≥ 2 and r < 1, there exists an adaptive adversary (an adversary who chooses the valuations of item t after seeing the allocation at time t−1) such that any algorithm must have envy after T rounds in Ω((T/n)^(r/2)). In particular, in contrast to the case of binary valuations, no algorithm can guarantee EF1.
They also study a more general setting in which the items come in batches of m at a time, rather than one at a time; in this case there is a deterministic algorithm with a correspondingly sublinear envy bound.
Jiang, Kulkarni and Singla improve the bound for the case of n = 2 agents when the values are random (rather than adversarial). They reduce the problem to the problem of Online Stripe Discrepancy, which is a special case of discrepancy of permutations, with two permutations and online item arrival. They show that their algorithm for Online Stripe Discrepancy attains envy polylogarithmic in T (of order log^c T for some universal constant c), with high probability (the probability guarantee depends on c). Their algorithm even bounds a stronger notion of envy, which they call ordinal envy: the worst possible cardinal envy consistent with the item ranking.
Zeng and Psomas study the trade-off between efficiency and fairness under five adversary models, from weak to strong. Below, vi,t denotes the value, to agent i, of the item arriving at time t.
Identical independent agents: the adversary picks a probability distribution D0. On each round t, vi,t is drawn independently from D0.
Different independent agents: the adversary picks a probability distribution Di for each agent i. On each round t, vi,t is drawn independently from Di.
Correlated agents: the adversary picks a joint probability distribution D. On each round t, the vector (v1,t, ..., vn,t) is drawn independently from D.
Non-adaptive adversary: after seeing the algorithm code, the adversary picks the valuations of all n agents for all T items.
Adaptive adversary: after seeing the algorithm code, and after seeing the allocation of the first t−1 items, the adversary picks the valuations of item t (this is the adversary model assumed in the vanishing-envy results described above).
For adversary 3 (hence also 2 and 1), they show an allocation strategy that guarantees, to each pair of agents, either EF1, or EF with high probability, and in addition guarantees ex-post Pareto efficiency. They show that the "EF1 or EF w.h.p." guarantee cannot be improved even for adversary 1 (hence also for 2 and 3). For adversary 4 (hence also 5), they show that every algorithm attaining vanishing envy can be at most 1/n ex-ante Pareto-efficient. See also Benade, Kazachkov, Procaccia, Psomas and Zeng.
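As a concrete illustration of the LIKE and BALANCED-LIKE mechanisms described above, here is a minimal simulation sketch for binary valuations; the item stream and the number of agents are invented for the example.

import random
from collections import Counter

def like(likes_per_item, n_agents, rng=random):
    # Each arriving item goes uniformly at random to an agent who likes it.
    bundles = Counter({i: 0 for i in range(n_agents)})
    for likers in likes_per_item:  # likers: the agents valuing this item at 1
        if likers:
            bundles[rng.choice(sorted(likers))] += 1
    return bundles

def balanced_like(likes_per_item, n_agents, rng=random):
    # As LIKE, but only likers holding the fewest items so far are eligible.
    bundles = Counter({i: 0 for i in range(n_agents)})
    for likers in likes_per_item:
        if likers:
            fewest = min(bundles[i] for i in likers)
            eligible = sorted(i for i in likers if bundles[i] == fewest)
            bundles[rng.choice(eligible)] += 1
    return bundles

items = [{0, 1}, {1, 2}, {0, 2}, {1}, {0, 1, 2}]  # who likes each arrival
print(like(items, 3), balanced_like(items, 3))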
The costly reallocation problem In some cases, items that were previously allocated may be reallocated, but reallocation is costly, so the number of adjustments should be as small as possible. An example is the allocation of expensive scientific equipment among different university departments. Each piece of equipment is allocated as soon as it arrives, but some previously allocated equipment may be reallocated in order to attain a fairer overall allocation. He, Procaccia and Psomas show that, with two agents, algorithms that are informed about the values of future items can attain EF1 without any reallocations, whereas uninformed algorithms require Θ(T) reallocations. With three or more agents, even informed algorithms must use Ω(T) reallocations, and there is an uninformed algorithm that attains EF1 with O(T^(3/2)) reallocations. Uncertain supply In many fair division problems, such as production of energy from solar cells, the exact amount of the available resource may not be known at the time the allocation is decided. Buermann, Gerding and Rastegari study fair division of a homogeneous divisible resource, such as electricity, where the available amount is given by a probability distribution and the agents' valuations are not linear (for example, each agent has a cap on the amount of the resource he can use; above this cap, his utility does not increase by getting more of the resource). They compare two fairness criteria: ex-post envy-freeness and ex-ante envy-freeness. The latter criterion is weaker (since envy-freeness holds only in expectation), but it allows a higher social welfare. The price of ex-ante envy-freeness is still high: it is at least Ω(n), where n is the number of agents. Moreover, maximizing ex-ante social welfare subject to ex-ante envy-freeness is strongly NP-hard, but there is an integer program to calculate the optimal ex-ante envy-free allocation for a special class of valuation functions: linear functions with a saturation cap. Uncertain demand In many fair division problems, there are agents or groups of agents whose demand for resources is not known when the resources are allocated. For example, suppose there are two villages who are susceptible to power outages. Each village has a different probability distribution over storms: in village A, with probability 40%, two houses are hit; in village B, with probability 70%, three houses are hit. The government has two generators, each of which can supply electricity to a single house. It has to decide how to allocate the generators between the villages. Two important considerations are utilization and fairness: Utilization is defined as the expected number of houses that need a generator and get one. The utilization is maximized (at 1.4) when both generators are given to village B. Fairness is defined in terms of the difference between the fractions of hit houses that are served in the two villages. Here, the fairest allocation is giving a single generator to each village; the fraction is 1/2 in village A and 1/3 in village B, so the difference is 1/6 (these numbers are verified in the sketch below). Donahue and Kleinberg prove upper and lower bounds on the price of fairness, i.e., the maximum possible utilization divided by the maximum utilization of a fair allocation. The bounds are weak in general, but stronger bounds are possible for some specific probability distributions that are commonly used to model demand. Other applications with uncertain demands are allocation of orders in service supply chains, allocation of aircraft to routes, allocation of doctors to surgeries, and more.
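The two-village generator example can be checked with a few lines of arithmetic. The following sketch simply enumerates the three possible allocations of the two generators under the probability model stated above; it is illustrative only.

```python
# Two villages: (storm probability, number of houses hit when a storm occurs).
villages = [(0.4, 2), (0.7, 3)]  # village A, village B

def utilization(alloc):
    """Expected number of hit houses that receive a generator."""
    return sum(p * min(g, hit) for (p, hit), g in zip(villages, alloc))

def unfairness(alloc):
    """Gap between the fractions of hit houses served in the two villages."""
    f = [min(g, hit) / hit for (p, hit), g in zip(villages, alloc)]
    return abs(f[0] - f[1])

for alloc in [(2, 0), (1, 1), (0, 2)]:
    print(alloc, utilization(alloc), round(unfairness(alloc), 3))
# (0, 2) maximizes utilization at 1.4; (1, 1) is fairest: |1/2 - 1/3| = 1/6.
```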
Uncertain value Morgan studies a partnership dissolution setting, where the partnership assets have the same value for all partners, but this value is not known. Each partner has a noisy signal about the value, but the signals are different. He shows that divide and choose is not fair in this setting, since it favors the chooser. He presents another mechanism that can be considered fair in this setting. See also Fair resource allocation in a volatile marketplace. Resource monotonicity—a property of division rules, guaranteeing that if the same rule is applied to a larger cake and the same population, then all agents are better off. Population monotonicity—a property of division rules, guaranteeing that if the same rule is applied to a smaller population and the same cake, then all agents are better off. References Fair division
Online fair division
[ "Mathematics" ]
3,125
[ "Recreational mathematics", "Game theory", "Fair division" ]
67,150,672
https://en.wikipedia.org/wiki/N41%20%28nebula%29
N41 (also known as LMC N41, LHA 120-N 41) is an emission nebula in the northern part of the Large Magellanic Cloud, in the constellation Dorado. Originally catalogued in Karl Henize's 1956 "Catalogue of H-alpha emission stars and nebulae in the Magellanic Clouds", it is approximately 100 light-years wide and 160,000-170,000 light-years distant. References Dorado Large Magellanic Cloud Emission nebulae Star-forming regions
N41 (nebula)
[ "Astronomy" ]
107
[ "Nebula stubs", "Dorado", "Astronomy stubs", "Constellations" ]
67,151,159
https://en.wikipedia.org/wiki/HD%20155448
HD 155448 is a quintuple star system consisting of five young B-type stars. With an apparent magnitude of 8.72, it is too dim to be visible with the naked eye. Parallax measurements from the Hipparcos spacecraft in 1997 give the system a distance of 1,976 light years, with a margin of error larger than the parallax value itself. The New Hipparcos Reduction gives a distance of 6,272 light years, but still with a statistical margin of error larger than the parallax value. Gaia parallaxes are available for the visible components. For component C, the Gaia Data Release 2 and Gaia Early Data Release 3 (EDR3) parallaxes are both negative and therefore not physically meaningful. For components A, B, and D, the Gaia EDR3 parallaxes imply a distance of around 4,000 light years. Before 2011, this star was mistaken for either a Herbig Ae/Be star or a post-AGB object. Until 2011 the system was believed to contain only four stars (or at least more than two), designated HD 155448 A, B, C, and D. A study conducted in 2011 at the European Southern Observatory in Chile concluded that the "B" star is actually a binary star, thus reclassifying the system as quintuple. Periods have been estimated at 27,000 years for Bab, 59,000 years for AB, 111,000 years for AC, and 327,000 years for AD. However, the 2011 analysis states that the stars are not gravitationally bound to each other. All of the stars are currently on the zero-age main sequence (ZAMS). At present the primary star has a mass greater than 7 solar masses and an effective temperature of 25,000 K, while the companions have masses ranging from 3 to 6 times the mass of the Sun, and temperatures ranging from 10,000 to 16,000 K. References Star systems B-type main-sequence stars Scorpius 155448 Be stars 084228 Durchmusterung objects 5
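As a side note on the distance figures quoted above, the conversion from parallax to distance follows the standard relation d [parsec] = 1 / p [arcsec]. A minimal sketch follows; the parallax value used here is illustrative, chosen only to match the roughly 4,000-light-year figure, and is not taken from the catalogues.

```python
LY_PER_PARSEC = 3.2616  # light years per parsec

def parallax_to_light_years(parallax_mas):
    """Distance from parallax: d [pc] = 1 / p [arcsec], then convert to ly."""
    return (1.0 / (parallax_mas / 1000.0)) * LY_PER_PARSEC

# An illustrative parallax of ~0.8 milliarcseconds gives ~4,000 light years.
print(round(parallax_to_light_years(0.8)))  # ≈ 4077
```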
HD 155448
[ "Astronomy" ]
441
[ "Scorpius", "Constellations" ]
67,151,652
https://en.wikipedia.org/wiki/Piezoelectrochemical%20transducer%20effect
The piezoelectrochemical transducer effect (PECT) is a coupling between the electrochemical potential and the mechanical strain in ion-insertion-based electrode materials. It is similar to the piezoelectric effect, in that both exhibit a voltage-strain coupling, although the PECT effect relies on the movement of ions within a material's microstructure rather than on charge accumulation from the polarization of electric dipole moments. Many different materials have been shown to exhibit a PECT effect, including lithiated graphite; carbon fibers inserted with lithium, sodium, and potassium; sodiated black phosphorus; lithiated aluminium; lithium cobalt oxide; vanadium oxide nanofibers inserted with lithium and sodium; and lithiated silicon. These materials all exhibit a voltage-strain coupling, whereby the material expands when it is charged with ions and contracts when it is discharged. The reverse is also true: applying a mechanical strain changes the electrical potential. This has led to various proposed applications of the PECT effect, with research focusing on actuators, strain sensors, and energy harvesters. Origins The PECT effect was first reported by Dr. F. Lincoln Vogel in 1981 when studying how intercalation voltages could be used to provide an actuation force in graphitized carbon fibers. The research used sulphate (SO4) ions from sulfuric acid to intercalate into the microstructure of carbon fibers, forming graphite intercalation compounds (GICs). It was hypothesized that an axial strain of up to 2% should be possible; however, only 0.2% was observed, owing to experimental limitations. The effect is often explained by the theories of Larché and Cahn, who derived mathematical formulations for the equilibrium relationships between the electric potential, chemical potential, and mechanical stress in solid materials. In summary, the theory states that solid materials under mechanical stress undergo a change in chemical potential, which in turn affects their electrical potential. Applications Actuation Since PECT materials expand and contract upon ion insertion, it is possible to use this effect for actuation. Several different materials have been proposed for this, including carbon fibers inserted with lithium, sodium, and potassium; lithium cobalt oxide; and vanadium oxide nanofibers inserted with lithium and sodium. Applications for PECT-based actuation range from microelectromechanical systems (MEMS) to large morphing structures. Different materials exhibit different amounts of expansion/contraction, with a response that depends on the type of ion as well as the amount of charge. For example, silicon expands by more than 300% when inserted with lithium, whereas graphite expands by around 13%. Carbon fibers expand by up to 1% when inserted with lithium, but only around 0.2% when inserted with potassium. Strain-sensing As PECT materials exhibit a change in voltage upon application of strain, it is possible to calibrate this voltage change against the level of strain in the material. This has been proposed for applications in battery health monitoring, as well as structural health monitoring. Electricity production When mechanical strain is applied to a PECT material it changes the chemical potential, and therefore the electric potential, of that material. Since current flows from more negative materials to more positive materials, it is possible to induce a current flow between two ionically connected materials by simply applying a mechanical strain.
It is therefore possible to harness and convert mechanical energy into electrical energy. A number of materials have been demonstrated to be capable of PECT-based energy harvesting, including carbon fibers inserted with lithium; sodiated black phosphorus; lithiated aluminium; and lithiated silicon. A structural carbon fiber composite has also been shown to be capable of harvesting energy using the PECT effect. Conventional lithium-ion batteries have also been shown to be capable of PECT-based energy harvesting. This effect has most often been demonstrated using a two-electrode bending setup: two electrodes of the same material are connected ionically through an electrolyte, and electrically via an outer circuit. A bending deformation is applied, causing tension in one electrode and compression in the other. The resulting change in chemical potential results in current flow in the outer circuit, which can be used to power an external device. PECT energy harvesting is limited by the rate of ionic diffusion, and is therefore only efficient at low frequency (typically below around 1 Hz). Figures of merit for comparing different PECT-based energy harvesters were formulated by Preimesberger et al. (an order-of-magnitude sketch of a harvesting cycle follows below). Implications for batteries The PECT effect is also present in typical ion-insertion-based battery electrodes (e.g. Li-ion). The electrodes expand and contract when inserted with ions, which is one of the issues that leads to battery ageing and capacity loss over time. The PECT effect in battery electrodes could be an issue in situations where battery electrodes are mechanically stressed (e.g. in structural batteries), causing a change in electrical potential when the stress state changes. It has been proposed that the PECT effect in Li-ion batteries could be exploited to measure battery health, and to harvest mechanical energy. References Electrochemistry Piezoelectric materials
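As a rough illustration of the two-electrode bending setup described above, the energy harvested per cycle can be estimated as the transferred charge times the stress-induced potential difference. This is not the figure-of-merit formulation of Preimesberger et al.; every number below is hypothetical and chosen only for illustration.

```python
# Order-of-magnitude estimate for a PECT bending harvester (hypothetical values).
coupling_v_per_strain = 0.1   # stress-potential coupling [V per unit strain]
strain_amplitude = 0.005      # peak strain applied to each electrode
charge_per_cycle = 0.02       # charge driven through the outer circuit [C]

# Tension on one electrode and compression on the other doubles the
# potential difference driving the current.
delta_v = 2 * coupling_v_per_strain * strain_amplitude   # [V]
energy_per_cycle = delta_v * charge_per_cycle            # E ~ dV * Q  [J]

frequency = 0.5  # [Hz]; ionic diffusion limits harvesting to ~1 Hz or below
print(f"dV ≈ {delta_v * 1e3:.1f} mV, "
      f"mean power ≈ {energy_per_cycle * frequency * 1e6:.0f} µW")
```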
Piezoelectrochemical transducer effect
[ "Physics", "Chemistry" ]
1,070
[ "Physical phenomena", "Materials", "Electrical phenomena", "Electrochemistry", "Piezoelectric materials", "Matter" ]
67,151,959
https://en.wikipedia.org/wiki/H1%20Inc.
H1 Inc. is a global healthcare data technology company founded in 2017 and headquartered in New York City. The company's database is used by healthcare and pharmaceutical companies and related organizations to identify healthcare professionals to partner with on research in order to accelerate development of drugs and other treatments. The company has over 400 employees worldwide and about 100 clients, including pharmaceutical companies Novartis and AstraZeneca, as of November 2021. History The company was founded by Ariel Katz and Ian Sax in November 2017 as H1 Insights. The company started by helping biotech and pharma companies connect with Key Opinion Leaders (KOLs) to advance research, drug development, and other treatments. According to Katz, the company is named for "the statistical representation of a true hypothesis", often expressed as H1, as opposed to a null hypothesis, expressed as H0. In 2018, the company launched its first market offering, Da Vinci, and subsequently participated in Y Combinator's Winter 2020 batch. In August 2021, H1 acquired Portland, Oregon startup Carevoyance. By November 2021, H1 had over 100 clients, including Novartis and AstraZeneca, and over 400 employees worldwide. In February 2022, H1 acquired London-based Faculty Opinions, a discovery tool for finding relevant published medical research and assessing its quality. Subject matter experts on the platform recommend and share their opinion on the top 1% of the biomedical literature indexed in PubMed. Faculty Opinions currently has over 190,000 individual recommended articles from over 4,000 journals. Products H1's primary product is its database, which includes 160 million peer-reviewed publications, 350,000 clinical trials, 8 billion medical claims, as well as related data points. Data sources include public databases and contributions by its clients and healthcare providers. Clients include pharmaceutical, biotech, financial, data and healthcare organizations. A companion product, H1 Explorer, was introduced in 2021, allowing healthcare professionals to manage their own profiles in the H1 network. Its first market offering, Da Vinci, launched in 2018 and is intended to help pharma companies accelerate the market research phase of drug development. H1's Trial Landscape product was introduced in September 2021 to help pharmaceutical companies identify the right sites and physician investigators for a trial. Funding H1 has raised more than $200M in funding in three rounds: Series C: $100 million led by Altimeter Capital in November 2021, and $33 million in an extension round led by Goldman Sachs Asset Management, Menlo Ventures, Transformation Capital and Novartis Pharma AG in 2022. Series B: $58 million co-led by IVP and Menlo Ventures in December 2020 Series A: $12.9 million led by Menlo Ventures in April 2020 Other investors include Joe Montana, Novartis, Baron Davis, Y Combinator, and Underscore VC. References External links American companies established in 2017 Drug discovery companies Health care companies based in New York (state) Medical databases Medical technology companies of the United States
H1 Inc.
[ "Chemistry" ]
610
[ "Drug discovery companies", "Drug discovery" ]
67,153,627
https://en.wikipedia.org/wiki/Tissue%20clearing
Tissue clearing refers to a group of chemical techniques used to turn tissues transparent. By turning tissues transparent to certain wavelengths of light, clearing provides optical access to a tissue: light can pass into and out of the cleared tissue freely, allowing one to see the structures deep within it without physically cutting it open. Many tissue clearing methods exist, each with different strengths and weaknesses. Some are generally applicable, while others are designed for specific applications. Tissue clearing is usually useful only when combined with one or more fluorescent labeling techniques, such as immunolabeling, after which the tissue is imaged, most often by optical sectioning microscopy. Tissue clearing has been applied to many areas in biological research. It is one of the more efficient ways to perform three-dimensional histology. History In the early 1900s, Werner Spalteholz developed a technique that allowed the clarification of large tissues, using Wintergrünöl (methyl salicylate) and benzyl benzoate. Various scientists then introduced their own variations on Spalteholz's technique. Tuchin et al. introduced tissue optical clearing (TOC) in 1997, adding a new, hydrophilic branch of tissue clearing, in contrast to hydrophobic techniques like Spalteholz's. In the 1980s, Andrew Murray and Marc Kirschner developed a two-step process, wherein tissues were first dehydrated with alcohol and subsequently made transparent by immersion in a mixture of benzyl alcohol and benzyl benzoate (BABB), a technique they coupled with light sheet fluorescence microscopy, and which remains the method with the highest clearing efficacy to date, regardless of any tissue pre-processing step. In the most extreme case, it allows the clearing of a whole mouse or even a whole human brain. Principles Tissue opacity is thought to be the result of light scattering due to heterogeneous refractive indices. Tissue clearing methods chemically homogenize refractive indices, resulting in almost completely transparent tissue. Classifications While there are multiple class names for tissue-clearing methods, they are all classified based on the final state of the tissue at the end of the clearing process. These include hydrophobic clearing methods, which may also be known as organic, solvent-based, organic solvent-based, or dehydration clearing methods; hydrophilic clearing methods, which may also be known as aqueous-based or water-based methods; and hydrogel-based clearing methods. Labeling Tissue clearing methods have varying compatibility with different methods of fluorescent labeling. Some are better suited to genetic labelling by endogenously expressed fluorescent proteins, while others suit externally delivered probes such as immunolabeling and chemical dye labeling. The latter approach is more general and applicable to all tissues, notably human tissues, but the penetration of the probes becomes a critical problem. Imaging After clearing and labeling, tissues are typically imaged using confocal microscopy, two-photon microscopy, or one of the many variants of light-sheet fluorescence microscopy. Other less commonly used methods include optical projection tomography and stimulated Raman scattering. As long as the tissue allows for the unobstructed passing of light, the optical resolution is fundamentally limited by the Abbe diffraction limit. The compatibility of any tissue clearing method with any microscopy system is, therefore, configurational rather than optical.
Data Tissue clearing is one of the more efficient ways to facilitate 3D imaging of tissues, and hence generates massive volumes of complex data, which requires powerful computational hardware and software to store, process, analyze, and visualize. A single mouse brain can generate terabytes of data. Both commercial and open-source software exists to address this need, some of it adapted from solutions for two-dimensional images and some of it designed specifically for the three-dimensional images produced by imaging of cleared tissues. Applications Tissue clearing has been applied to the nervous system, bones (including teeth), skeletal muscles, hearts and vasculature, gastrointestinal organs, urogenital organs, skin, lymph nodes, mammary glands, lungs, eyes, tumors, and adipose tissues. Whole-body clearing is less common, but has been done in smaller animals, including rodents. Tissue clearing has also been applied to human cancer tissues. For some techniques, bone tissue must be decalcified to remove light-scattering hydroxyapatite crystals, leaving behind a protein matrix suitable for clearing. References Tissue engineering
Tissue clearing
[ "Chemistry", "Engineering", "Biology" ]
906
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
67,154,389
https://en.wikipedia.org/wiki/British%20unmanned%20aerial%20vehicles%20of%20World%20War%20I
Unmanned Aerial Vehicles (UAVs) include both autonomous (capable of operating without human input) drones and remotely piloted vehicles (RPVs). The UAVs used in World War I were RPVs. Soon after its re-purposing from the Army Balloon Factory to the Royal Aircraft Factory in 1912, designers at this Farnborough base turned their thoughts to flying an unmanned aircraft. During the First World War this pioneering work resulted in trials of remotely controlled aircraft for the Royal Flying Corps and unmanned boats for the Royal Navy that were controlled from 'mother' aircraft. By the end of the war in 1918, Britain had successfully flown a radio-controlled drone aircraft and had operated a number of fast unmanned motor boats in close flotilla formation, each individually controlled by radio from operators flying in "mother" aircraft. This work then continued in the interwar years. The Factory's 1914 design There is a Royal Aircraft Factory engineering drawing, dated October 1914, of an unmanned powered monoplane. This was developed as a possible defence to counter the threat of aerial bombing from German dirigible airships. This new potential weapon was called "Aerial Target" (AT), a misnomer to fool the Germans into thinking it was a drone plane to test anti-aircraft capabilities. Ruston Proctor Henry Folland at the Royal Aircraft Factory designed an AT powered by an ABC Gnat engine, which was built by Ruston Proctor of Lincoln in 1916/1917. Sopwith With Harry Hawker, Sopwith at Kingston upon Thames built a single-bay biplane AT, which was to carry an explosive charge. Stability came from pronounced dihedral, and there was a four-wheel undercarriage. The aircraft was damaged during erection at Feltham and was never tested. The design was later reworked into the Sopwith Sparrow. 1917 Aerial Target The history of UAV target drones started when the Royal Flying Corps developed their prototype remote-controlled aircraft and gave it the cover name "Aerial Target" (AT). All the 1917 "Aerial Target" aircraft from the various designers used the radio control system devised by Archibald Low at the RFC's Experimental Works in Feltham. One of Geoffrey de Havilland's "AT" aircraft, powered by a Gnat engine and launched from a pneumatically powered ramp in the RFC trials at Upavon on 21 March 1917, became the world's first powered drone aircraft to fly under radio control. The engine-driven actuator applied a progressively increasing deflection of the selected control (elevators or rudder), up to its limit, until the selection lever was released by the ground operator. With no control demanded, the control surface was returned to its trim position by springs. The mechanism was later exhibited by the IWM as "The original model receiving set installed in the radio controlled monoplane used in the trial flight.", along with the Selective Transmitter, which the operator on the ground used to send the control signals. Low's system encoded the command transmissions as a countermeasure against enemy interference. These codes could be changed daily. By July 1917 six Aerial Targets designed by the Factory had been built and were tested at Northolt. Attempts were made to launch the first three from rails laid on the ground, but all three crashed in various ways during launch, and these trials were terminated.
Nevertheless, the Aerial Target was later acknowledged as a viable weapon; a post-war report stated that "aircraft carrying high explosive charges are capable of being controlled by wireless." The "AT" project was transferred to Biggin Hill, to what became the Wireless Experimental Establishment. By 1922 this work had all transferred to the R.A.E., back at Farnborough where it had all begun in 1914. The Royal Flying Corps 1917 Guided Rocket Archibald Low stated "in 1917 the Experimental Works designed an electrically steered rocket... Rocket experiments were conducted under my own patents with the help of Cdr. Brock". Like Low, Brock was an experimental officer. Brock commanded the Royal Navy Experimental Station at Stratford. Pertinent to these rocket experiments, Brock was also a director of the C.T. Brock & Co. fireworks manufacturers. The patent "Improvements in Rockets" was filed in July 1918, by then referring to the Royal Air Force. It was not published until February 1923, for security reasons. Firing and guidance controls could be either wired or wireless. The propulsion and guidance rocket efflux emerged from the deflecting cowl at the nose. The 1950s IWM exhibition label states "Later in 1917, an electrically steered rocket was designed…. with the designed purpose of pursuing a hostile airman." A model of this dirigible rocket was included in this exhibition. The model was accompanied by a note: "Exhibit that is part of Professor AM Low's exhibits. Model of the wireless controlled dirigible rocket missile designed to pursue a hostile airman." Radio guidance and the Feltham Unit During World War One, work started on radio-guided weapons at various establishments, such as the experiments of Capt. Cyril Percy Ryan at Hawkcraig Experimental Station (H.M.S. Tarlair). However, as control by the Munitions Inventions Department over military research was introduced, a centre for Royal Flying Corps radio-guided weapons was established. This was the secret Experimental Works in Feltham. The focus of their work was radio-guided systems, but the unit also assisted with other tasks. They were involved in testing the Pomeroy bullet and George Constantinescu's synchronization gear. They provided 'distractions' for the Zeebrugge raid, and its commanding officer, Archibald Low, travelled to France and into neutral Spain during the war to debunk reports of 'fantastic' weapons. Low had at least 30 specialists under his command at Feltham, supported by their contractors and suppliers. They had motor transport assets and military and police security. Their balloon facility was used to conduct radio reception experiments, and they tested their equipment using aircraft with trailing aerials. Low was a qualified RFC Observer. His officers included his second in command Henry Jeffrey Poole, his radio engineer George William Mahoney Whitton, the talented inventor Ernest Windsor Bowen and the carburation specialist Louis Mantell. Low was commended for this work by a number of senior officers, including Sir David Henderson (the wartime commander of the RFC) and Admiral Edward Stafford Fitzherbert (Director of Mines and Torpedoes). Sir Henry Norman, 1st Baronet (Chairman of the War Office Committee on Wireless Telegraphy and at this time the Munitions Inventions Department's permanent attaché to the French Ministry of Inventions) wrote to Low in March 1918, saying "I know of no man who has more extensive and more profound scientific knowledge, combined with a greater gift on imaginative invention than yourself."
Their work had started in 1915 in the commercial motor garage business owned by Henry Poole in Chiswick. During 1916 the development showed such promise that the RFC established their secret Experimental Works in premises commandeered from the Davis Paraffin Carburettor Company and the Duval Composition Company, which were situated in the old Ivory Works in Feltham. Later these Experimental Works were moved to Archibald Low's own premises at 86 High Street, Feltham, where all the Navy work was also carried out in 1918. The Adjutant-General investigation Details of the Feltham Experimental Works have survived in the records of a legal claim against Archibald Low. On 5 December 1917 he was accused of plagiarism and abuse of office by a civilian inventor, Clifton West. On 26 January 1918 Colonel Ernest Swinton provided his friend J. H. Morgan with an assessment of Clifton West as "...a clever man and very ingenious, but tends towards the type of inventions 'crank'. He is also the most perfect mug in the world, as I have told him and is like a bit of toasted cheese to all the rats and crooks within a hundred miles: they smell him coming and get out their Bowie knives." The case against Archibald Low was not pursued. West's plagiarism claim involved his Land Torpedo, a rolling cable-drum device to snag and destroy barbed-wire defences, similar to one patented by Archibald Low on behalf of the RFC under instruction from his superiors. 1918 aircraft-controlled unmanned boats In 1917 the priority for Low's control system changed, the new imperative being to counter the submarine threat. Low and Ernest Bowen transferred into the Royal Navy to adapt the AT system to the airborne control of Royal Navy Distance Control Boats (DCBs), a variant of the Coastal Motor Boat to be filled with an explosive charge. Thornycroft were contracted to design these new DCBs (and the conversions of some of the existing CMBs) to carry this large and heavy explosive payload in the bow. The resulting craft was considered to be fragile though seaworthy (but only in fair weather). The AT work was documented and transferred to the Royal Flying Corps radio unit at Biggin Hill. The Feltham Works were still under Low's command, and this is where the redevelopment and production of equipment was carried out, clock-driven impulse senders for DCBs being ordered on 13 March 1918. The port/starboard demand from the controller's sender units in the aircraft caused a gyroscope on the boat to change the direction of its axis by "precession" to the "new" required heading. Any "difference" between the boat's current heading and the required heading (i.e. the gyroscope's alignment) started an electric motor driving a worm gear in the appropriate direction to turn the rudder. This reduced the "difference" as the boat responded and acquired the new required heading. Thus any difference caused the boat to manoeuvre to keep it on the gyroscope's "required" heading, whether that difference occurred due to wave, wind or tide deflecting the boat, or to control-signal demands from the "mother" aircraft precessing the gyroscope (a modern rendering of this control loop is sketched below). Converted 40-ft CMBs Number 3, 9 and 13 were three of the five DCBs built. The extensive trials were successful and the DCB weapon was acknowledged to be "capable of control up to the moment of hitting."
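The gyroscope-and-worm-gear steering arrangement described above is, in modern terms, a simple heading-hold feedback loop. The sketch below is a loose present-day rendering of that logic, not a reconstruction of the actual 1918 mechanism; the gain value and rudder limit are hypothetical.

```python
def rudder_step(heading, required_heading, rudder, gain=0.1):
    """One control step: drive the rudder in the direction that reduces the
    difference between the boat's heading and the gyroscope's required axis."""
    # Signed heading error in degrees, wrapped to [-180, 180).
    difference = (required_heading - heading + 180.0) % 360.0 - 180.0
    # The electric motor turns the worm gear for as long as a difference exists.
    rudder += gain * difference
    return max(-30.0, min(30.0, rudder))  # rudder travel limit [degrees]

# A control pulse from the 'mother' aircraft precesses the gyroscope (changing
# required_heading); wind, wave, or tide disturbs heading. Either way, repeated
# calls to rudder_step steer the boat back onto the required heading.
```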
Admiral Edward Stafford Fitzherbert (Director of Mines and Torpedoes) stated on 18 March 1918, in a letter concerning Archibald Low's achievements during his Navy tour of duty, that "Captain Low was gazetted as Lieut. Commander as from 2 October 1917 recommended by Sir David Henderson, Brig. General Caddell, Brig. General Pitcher and Major, Sir Henry Norman, M.P., P.C.", ... "He has assigned about 14 complete Patent to Services", ... "He has voluntarily lent his entire laboratory and staff to Admiralty etc. where manufacturing is now carried out." and "Three distinct inventions have now been accepted into service after being tested, namely...1. Complete sending control gear for D.C.B. 2. Electrical Gun Timing Apparatus 3. Gun Silencer audiometer Measuring Device". The Royal Navy's D.C.B. Section The secret Distantly Controlled Boat (D.C.B.) Section of the Royal Navy's Signals School, Portsmouth, was set up to develop aerial radio systems for the control of unmanned naval vessels from 'mother' aircraft. This D.C.B. Section was based at Calshot under the command of Eric Gascoigne Robinson VC. On 1 September 1917 George Callaghan, Commander-in-Chief, The Nore, was informed that significant shore and mooring facilities were to be made available for D.C.B. trials and rehearsals in the Thames Estuary. Airfield facilities were also requested in the area for the 'mother' aircraft. On 9 October 1917 the Deputy Director of Naval Construction, William Henry Gard, assessed HMS Carron for use as a D.C.B. blocking ship, but it was considered more suitable as a parent ship and floating repair depot for the D.C.B. Section. By then this D.C.B. Section had access to many vessels, including the submarine HMS C4, and to the necessary support of aircraft, pilots and trained radio-control operators. They had conducted trials guiding unmanned boats into the busy waters around Portsmouth Harbour. Then, between 28 May and 31 May 1918, trials were undertaken by the Royal Navy Dover Command, using operators in Armstrong Whitworth F.K.8 aircraft Nos. 5082 and 5117, under Captain Tate, R.A.F., to control the boats. During these trials Acting Vice-Admiral Sir Roger J. B. Keyes and Rear-Admiral Cecil Frederick Dampier were on board one of these D.C.B.s while it was remotely controlled. Trials included steering them through 'gates' created by motor launches anchored 60 yards apart. A significant number of high-ranking and senior Admiralty, Naval and political officials are referenced in the surviving records. Harbour blocking and shipping attacks were considered prime targets for this new weapon. Considerable resources would have been required (and put at risk) to get DCBs within range to launch attacks, and this had to be balanced against the chances of success. The launch range was based upon running for 2 hours at 30 knots, the time that the DCB engines could be operated without attention. Capt. Dudley Pound's Admiralty Plans Division report of 6 April 1918 on operations with D.C.B.s controlled from aircraft began "It is considered that the time is now ripe to formulate concrete plans. It is assumed that 60–80 miles should be considered the maximum range possible at present under normal conditions. Owing to limitations of wave-lengths, four D.C.B's would be used at a time, but further relays of four boats could be sent at intervals of not less than 5 miles. The D.C.B.s. will contain an explosive charge considerably heavier than any modern Torpedo." The targets he evaluated in detail were enemy vessels at Emden,
Zeebrugge and Ostend, at enemy harbours in the Adriatic, at Constantinople and its vicinity, and at sea. The report states, "As regards lock-gates, wharfs, piers, etc. These can be found at Emden, Zeebrugge, Ostend, etc. Targets on the Elbe are, at present, at rather long range unless it is feasible to employ aircraft in relays." He states, "These boats, with their heavy load of explosive, will tide over the time until suitable aircraft are produced which can carry a torpedo with a head capable of creating a decisive effect on capital ships", and "If 3 or 4 Flotillas (of four each) of these boats were prepared, a continued attack might be made on Ostend". Following a request from the Commander-in-Chief Grand Fleet on 22 July 1918, the report of the Dover Trials assessed the employment of these boats in the Bight or for fleet operations, and this report of 27 September 1918 began with the declaration that "Wireless controlling gear for steering a vessel from an aircraft, ship or shore station, is an accomplished fact, and can probably be fitted to any type of vessel. Successful experiments have already been carried out with submarines, motor launches, and 40-foot coastal motor boats." Their main sources of radio control developments were Captain Ryan at the Hawkcraig Experimental Station and Captain Low in the Feltham Experimental Works. However, the DCB Section also accessed the work of others, such as the Birmingham inventor George Joseph Dallison and the Russian Air Force officer Sergey Alekseevich Oulianine, who was based in Paris at this time. As an indication of the extent and urgency of the D.C.B. Section's work, Captain Low recorded the supply of aircraft radio-control sender units for trials with DCB Nos. 20, 21, 22, and 24, and in one letter stated "... it has meant a very large amount of over time and night work I think it will be necessary to give my men at least two days' rest when once this complete device has been delivered to you." Preparation for D.C.B. Operations Charles Penrose Rushton Coode, the Director of Operations Division (Foreign), suggested that operations would be impeded in northern areas during the coming winter season. Commanders of the areas covering the targets assessed in the Plans Division report were advised of the capabilities of Distantly Controlled Boats, including, on 7 October 1918, Admiral of the Fleet Sir Somerset Arthur Gough-Calthorpe, C-in-C Mediterranean. No D.C.B. operations were mounted before the Armistice. Post War assessment Before the Feltham Experimental Works were closed, John Knowles Im Thurn, who was at this time the assistant director of Electrical Torpedo and Mining, wrote to Archibald Low on 19 May 1919 stating "It is a matter of great regret to me that the Armistice and consequent demobilisation came too soon for your enlarged establishment to fill the important place we had assigned to it, as an experimental offshoot of the Signal School, Portsmouth.....Your extraordinary ability and originality as a designer, combined with your sound scientific training will be a great loss to us.." The Works closed on 13 October 1919. The Final Report of the Post-War Questions Committee, dated 27 March 1920, stated: "We have heard evidence that aircraft carrying high explosive charges are capable of being controlled by wireless as are the Distant Control Boats, but we do not consider that they will be a real menace to Capital Ships."
The Questions Committee said on the subject of the DCBs that "it is difficult, if not impossible, for an enemy to interfere with the control by wireless jambing, since each boat works on a different wave length and the discovery of the wave length is a delicate operation" and "these weapons are already capable of being handled in numbers: two of them can be controlled by one aircraft, three of them have been manoeuvred close to one another simultaneously without mutual interference, and probably as many as eight can be handled in a group if the groups are not within about four miles of one another." The committee concluded that the DCB weapon "is in a different category from all others in that it is capable of control up to the moment of hitting, and this fact alone justifies close attention to development", with its ultimate form envisaged as "a shallow or surface-running torpedo of great size". While they thought that "In its present state of development...that it is not a great menace to the Capital Ship", they said it merited "uninterrupted research both in the perfection of the weapon itself and in the preparation of counter measures". The RFC links to subsequent UAV developments In 1921 the R.A.E. resumed unmanned aircraft development, setting up the Radio Controlled Aircraft Committee. Initially they used their '1917 Type Aerial Target' aircraft refitted with a more powerful 45 h.p. Armstrong Siddeley Ounce engine. In 1925 they developed the 'Larynx'. By January 1933 a Fairey Queen IIIF drone target survived unscathed through a major RN gunnery trial. Following further demonstrations using the Queen IIIF ('Faerie Queen') aircraft, the world's first fleet of drones was developed, and these entered service in 1935. They were the de Havilland DH.82 Queen Bees. Their control system came out of the same First World War / R.A.E. stable as the original de Havilland 1917 Aerial Target, and they were also launched from a pneumatically powered ramp. Over 400 of these were in service before WWII. They were used to test anti-aircraft defences. A 1939 article on the Queen Bee concluded "Twenty years is a long time, but the men who have designed and developed the radio-controlled target aircraft have made full use of that time. Furthermore, the experiments of those twenty years cannot be imitated in a matter of weeks. Not only is Great Britain many years in advance of every other country in this sphere, but she is also likely to remain ahead." The next major development was the first US fleets of target drones during the Second World War. Four veterans of the RFC (and its successor, the Royal Air Force) link the 1917 Aerial Target to these subsequent US drone developments. Archibald Low's commanding officer on the RFC Aerial Target project was Duncan Pitcher. In 1921 he was Robert Loraine's best man. Loraine had a great deal in common with Reginald Denny, who founded the Radioplane Company in California. Denny and Loraine were both British actors who had successful careers in the USA. They had been in a West End production together in London in 1902. They were both veterans of the RFC, they both visited close relatives living on the boundaries of Richmond Park in London, and they were both flying and making films in Hollywood in the 1930s when Denny became interested in radio-controlled aircraft. Denny's Radioplane, the 1940s company that made the first mass-produced drones for the US Army and Navy, was eventually acquired by Northrop Grumman, who make the RQ-4 Global Hawk drone.
The Royal Navy also continued to develop their remote radio-control assets. The pre-dreadnought battleship HMS Agamemnon was converted into a remote-control target ship in 1920. Imperial War Museum exhibition On 29 June 1955 Low and Lord Brabazon presented a model of the AT and the various artefacts from the Feltham Unit to the Imperial War Museum for their planned exhibition. These included the control system that flew in March 1917. Surviving artefacts and their photographs The Royal Flying Corps' Aerial Target was the world's first unmanned aircraft (UAV) to fly under control from the ground. A photograph of this 1917 Aerial Target aircraft exists. Parts of it were saved by Low and these still exist, as do contemporary photographs, although they are not on public display. One of the 1918 Distance Control Boats, CMB9/DCB1, has been saved and carefully restored. Recent archival research Until 2016 the RFC Aerial Target project was deemed by most sources to have failed and been terminated. The on-line images of the Imperial War Museum Feltham artefacts were not presented as a collection. Prior to 2019 no known source had published details of the Royal Flying Corps secret patents or demonstrated that they matched and described the items in this IWM collection. The Feltham Works' re-application of their system to control the Royal Navy DCBs had not been established. Details of the mysterious Feltham Works were in the National Archives but not published. References to the post-war influence of the Feltham Works' success, as it passed via Biggin Hill to the Royal Aircraft Establishment, have now been researched. The suspected influence of Pitcher and Loraine on Denny's involvement with UAVs was recognised in 2019. The Imperial War Museum now state... "The Aerial Target... became the first drone to fly under control when it was tested in March 1917. The pilot (in control of the flight from the ground) on this occasion was the future world speed record holder Henry Segrave". Historical significance During the First World War the Aerial Targets and subsequent DCBs were developed as ripostes to the Central Powers' aerial bombing and naval blockade of Britain. The ATs involved the industrial efforts of at least three of the country's major aircraft companies, along with the novel development of the "Gnat" engine by ABC Motors, the control-system development by the Feltham Works, and the integration and trials facilities of other RFC bases. The project was sustained over the worst years of the war, when continued Munitions Inventions Department approval was required for such projects. The unit also provided radio controls for floating mines. The Feltham Works were one of the precursors of the R.A.E., which inherited the Feltham patents and AT hardware and resumed development of remotely piloted vehicles through the interwar years, leading to the fleet of Queen Bee RPVs. In 1976 Low was inducted into the International Space Hall of Fame and has been called the "Father of Radio Guidance Systems". References Sources Royal Flying Corps British Empire in World War I Drone warfare Science and technology during World War I Radio-controlled aircraft Inventions
British unmanned aerial vehicles of World War I
[ "Technology" ]
4,929
[ "Science and technology during World War I", "Science and technology by war" ]
42,799,166
https://en.wikipedia.org/wiki/Text%20graph
In natural language processing (NLP), a text graph is a graph representation of a text item (document, passage or sentence). It is typically created as a preprocessing step to support NLP tasks such as text condensation, term disambiguation, topic-based text summarization, relation extraction and textual entailment. Representation The semantics of what a text graph's nodes and edges represent can vary widely. Nodes, for example, can correspond to tokenized words, to domain-specific terms, or to entities mentioned in the text. Edges, in turn, can connect these text-based tokens, or can link them to a knowledge base (a minimal construction is sketched at the end of this entry). TextGraphs Workshop series The TextGraphs Workshop series is a series of regular academic workshops intended to encourage synergy between the fields of natural language processing (NLP) and graph theory. The mix between the two started small, with graph-theoretical frameworks providing efficient and elegant solutions for single-document NLP applications such as part-of-speech tagging, word-sense disambiguation and semantic role labelling, and grew progressively larger with ontology learning and information extraction from large text collections. The 11th edition of the workshop (TextGraphs-11) was co-located with the Annual Meeting of the Association for Computational Linguistics (ACL 2017) in Vancouver, BC, Canada. Areas of interest Graph-based methods for providing reasoning and interpretation of deep learning methods Graph-based methods for reasoning and interpreting deep processing by neural networks, Explorations of the capabilities and limits of graph-based methods applied to neural networks in general Investigation of which aspects of neural networks are not susceptible to graph-based methods. Graph-based methods for Information Retrieval, Information Extraction, and Text Mining Graph-based methods for word sense disambiguation, Graph-based representations for ontology learning, Graph-based strategies for semantic relations identification, Encoding semantic distances in graphs, Graph-based techniques for text summarization, simplification, and paraphrasing Graph-based techniques for document navigation and visualization Reranking with graphs Applications of label propagation algorithms, etc. New graph-based methods for NLP applications Random walk methods in graphs Spectral graph clustering Semi-supervised graph-based methods Methods and analyses for statistical networks Small world graphs Dynamic graph representations Topological and pretopological analysis of graphs Graph kernels, etc. Graph-based methods for applications on social networks Rumor proliferation E-reputation Multiple identity detection Language dynamics studies Surveillance systems, etc. Graph-based methods for NLP and Semantic Web Representation learning methods for knowledge graphs (i.e., knowledge graph embedding) Using graph-based methods to populate ontologies using textual data, Inducing knowledge of ontologies into NLP applications using graphs, Merging ontologies with graph-based methods using NLP techniques. See also Bag-of-words model Document classification Document-term matrix Hyperlinking Graph database Wiki References External links Gabor Melli's page on text graphs Description of text graphs from a semantic processing perspective. Natural language processing
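As an illustration of the representations described above, the sketch below builds a simple co-occurrence text graph over tokenized words using the networkx library. This is just one of many possible node and edge semantics, not a canonical construction.

```python
import networkx as nx

def cooccurrence_graph(text, window=2):
    """Build a text graph whose nodes are word tokens and whose edges link
    tokens that co-occur within a sliding window of the given size."""
    tokens = text.lower().split()
    g = nx.Graph()
    g.add_nodes_from(tokens)
    for i, tok in enumerate(tokens):
        for other in tokens[i + 1:i + window]:
            if other != tok:
                g.add_edge(tok, other)
    return g

g = cooccurrence_graph("graph based methods support graph based ranking")
print(sorted(g.edges()))
```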
Text graph
[ "Technology" ]
633
[ "Natural language processing", "Natural language and computing" ]
42,800,254
https://en.wikipedia.org/wiki/Novena%20%28computing%20platform%29
Novena is an open-source computing hardware project designed by Andrew "bunnie" Huang and Sean "Xobs" Cross. The initial design of Novena started in 2012. It was developed by Sutajio Ko-usagi Pte. Ltd. and funded by a crowdfunding campaign which began on April 15, 2014. The first offering was a 1.2 GHz Freescale Semiconductor i.MX6 quad-core ARM architecture computer closely coupled with a Xilinx FPGA. It was offered in "desktop", "laptop", or "heirloom laptop" form, or as a standalone motherboard. On May 19, 2014, the crowdfunding campaign concluded, having raised just over 280% of its target. The extra funding allowed the project to achieve the following four "stretch goals", with the three hardware stretch goals being shipped in the form of add-on boards that use the Novena's special high-speed I/O expansion header, as seen in the upper-left of the Novena board: Development of free and open source graphics drivers for the on-board video accelerator (etnaviv) Inclusion of a general-purpose breakout board providing 16 FPGA outputs and eight FPGA inputs (3.3 or 5 V, gang-selectable via software), six 10-bit analog inputs (up to 200 ksps sample rate) and two 10-bit analog outputs (~100 ksps maximum rate) Inclusion of a "ROMulator" breakout board capable of emulating TSOP NAND flash devices Inclusion of a MyriadRF software-defined radio at all hardware-purchasing backing levels. The Novena shipped with a screwdriver, as users are required to install the battery themselves, screw on the LCD bezel of their choice, and obtain speakers as a kit instead of pre-assembled speaker boxes. Owners of a 3D printer can make and fine-tune their own speaker box. The mainboards were manufactured by AQS, an electronics manufacturing services provider. See also Open-source hardware Modular smartphone References External links Novena page on Sutajio Ko-Usagi's wiki Building an Open Source Laptop The Novena Open Laptop Novena Five-Year Anniversary: End-of-Life Laptops Open-source hardware American inventions Computer-related introductions in 2014 Modular design
Novena (computing platform)
[ "Engineering" ]
478
[ "Systems engineering", "Design", "Modular design" ]
42,800,959
https://en.wikipedia.org/wiki/Roofed%20pole
Roofed pole or roofed pillar (plural: stogastulpiai; from stogas – 'roof' and stulpas – 'pole, pillar') is a traditional Lithuanian wooden shrine. They may have anywhere between one and three layers of stylized roofs. Roofed poles can be simple or richly decorated. Nowadays the most common ornamentation is a distinctive blend of Christian symbolism and traditional solar, celestial, and nature motifs. Stogastulpiai, together with Lithuanian crosses, are common throughout Lithuania, and can be found in churchyards, village/town squares, cemeteries, farms, parks, in fields and woods, at cross-roads, and as wayside shrines. See also Dievdirbys Lithuanian cross crafting References Architectural elements Architecture in Lithuania Lithuanian folk art
Roofed pole
[ "Technology", "Engineering" ]
163
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
42,801,222
https://en.wikipedia.org/wiki/Breit%E2%80%93Wheeler%20process
The Breit–Wheeler process or Breit–Wheeler pair production is a proposed physical process in which a positron–electron pair is created from the collision of two photons. It is the simplest mechanism by which pure light can potentially be transformed into matter. The process can take the form γ γ′ → e+ e−, where γ and γ′ are two light quanta (for example, gamma photons). The multiphoton Breit–Wheeler process, also referred to as the nonlinear or strong-field Breit–Wheeler process in the literature, occurs when a high-energy probe photon decays into pairs while propagating through a strong electromagnetic field (for example, a laser pulse). In contrast with the linear process, this can take the form γ + n ω → e+ e−, where n represents the number of photons and ω represents the coherent laser field. The inverse process, e+ e− → γ γ′, in which an electron and a positron collide and annihilate to generate a pair of gamma photons, is known as electron–positron annihilation, or the Dirac process after the physicist who first described it theoretically and anticipated the Breit–Wheeler process. This mechanism is theoretically characterized by a very weak probability, so producing a significant number of pairs requires two extremely bright, collimated sources of photons having photon energy close to or above the electron and positron rest mass energy. Manufacturing such a source, for instance a gamma-ray laser, is still a technological challenge. In many experimental configurations, pure Breit–Wheeler pair production is dominated by other, more efficient pair-creation processes that screen pairs produced via this mechanism. The Dirac process (pair annihilation) has, on the other hand, been extensively verified. This is also the case for the multiphoton Breit–Wheeler process, which was observed at the Stanford Linear Accelerator Center in 1997 by colliding high-energy electrons with a counter-propagating terawatt laser pulse. Although this mechanism is still one of the most difficult to observe experimentally on Earth, it is of considerable importance for the absorption of high-energy photons travelling cosmic distances. The photon–photon and the multiphoton Breit–Wheeler processes are described theoretically by the theory of quantum electrodynamics. History The photon–photon Breit–Wheeler process was described theoretically by Gregory Breit and John A. Wheeler in 1934 in Physical Review. It followed previous theoretical work by Paul Dirac on antimatter and pair annihilation. In 1928, Paul Dirac's work proposed that electrons could have positive and negative energy states, following the framework of relativistic quantum theory, but did not explicitly predict the existence of a new particle. Experimental observations Photon–photon Breit–Wheeler possible experimental configurations Although the process is one of the manifestations of mass–energy equivalence, as of 2017 the pure Breit–Wheeler process had never been observed in practice, because of the difficulty of preparing colliding gamma-ray beams and the very weak probability of this mechanism. Recently, different teams have proposed novel theoretical studies on possible experimental configurations to finally observe it on Earth. In 2014, physicists at Imperial College London proposed a relatively simple way to physically demonstrate the Breit–Wheeler process. The collider experiment that the physicists proposed involves two key steps. First, they would use an extremely powerful high-intensity laser to accelerate electrons to nearly the speed of light.
They would then fire these electrons into a slab of gold to create a beam of photons a billion times more energetic than those of visible light. The next stage of the experiment involves a tiny gold can called a hohlraum (German for 'empty room' or 'cavity'). Scientists would fire a high-energy laser at the inner surface of this hohlraum to create a thermal radiation field. They would then direct the photon beam from the first stage of the experiment through the centre of the hohlraum, causing the photons from the two sources to collide and form electrons and positrons. It would then be possible to detect the formation of the electrons and positrons when they exited the can. Monte Carlo simulations suggest that this technique is capable of producing of the order of 10^5 Breit–Wheeler pairs in a single shot. In 2016, a second novel experimental setup was proposed theoretically to demonstrate and study the Breit–Wheeler process by colliding two high-energy photon sources (composed of non-coherent hard x-ray and gamma-ray photons) generated from the interaction of two extremely intense lasers with solid thin foils or gas jets. With the forthcoming short-pulse, extremely intense lasers, laser interaction with solid targets will be the site of strong radiative effects driven by nonlinear inverse Compton scattering. This effect, negligible so far, will become a dominant cooling mechanism for the extremely relativistic electrons accelerated above the 100 MeV level at the laser-solid interface via different mechanisms. Multiphoton Breit–Wheeler experiments The multiphoton Breit–Wheeler process has already been observed and studied experimentally. One of the most efficient configurations to maximize multiphoton Breit–Wheeler pair production consists of colliding a bunch of gamma photons head-on with a counter-propagating ultra-high-intensity laser pulse (a slight collision angle is also possible; the co-propagating configuration is the least efficient). To first create the photons and then have the pair production in an all-in-one setup, a similar configuration can be used, colliding GeV electrons with the laser pulse. Depending on the laser intensity, these electrons will first radiate gamma photons via the so-called nonlinear inverse Compton scattering mechanism when interacting with the laser pulse. Still interacting with the laser, the photons then turn into multiphoton Breit–Wheeler electron–positron pairs. This method was used in 1997 at the Stanford Linear Accelerator Center. Researchers were able to conduct the multiphoton Breit–Wheeler process using electrons to first create high-energy photons, which then underwent multiple collisions to produce electrons and positrons, all within the same chamber. Electrons were accelerated in the linear accelerator to an energy of 46.6 GeV before being sent head-on into a neodymium-glass (Nd:glass) linearly polarized laser of intensity 10^18 W/cm^2 (maximal electric field amplitude of around 6×10^9 V/m), with a wavelength of 527 nanometers and a duration of 1.6 picoseconds. In this configuration, it has been estimated that photons of energy up to 29 GeV were generated. This led to a yield of 106 ± 14 positrons with a broad energy spectrum at the GeV level (peaking around 13 GeV). The aforementioned experiment may be reproduced in the future at SLAC with more powerful laser technologies.
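The requirement that the photon energies be close to or above the electron rest mass energy can be made precise kinematically: for a head-on collision of two photons with energies E1 and E2, pair production is possible only if E1·E2 ≥ (m_e c^2)^2. The sketch below evaluates this standard threshold condition; it is illustrative and not tied to any particular experiment's analysis.

```python
ME_C2_MEV = 0.511  # electron rest energy, m_e c^2, in MeV

def min_partner_energy_mev(e1_mev):
    """Minimum energy of a head-on partner photon for gamma gamma -> e+ e-.

    Threshold for a head-on collision: E1 * E2 >= (m_e c^2)^2.
    """
    return ME_C2_MEV ** 2 / e1_mev

# Two 0.511 MeV photons are exactly at threshold:
print(min_partner_energy_mev(0.511))          # 0.511 (MeV)
# A 29 GeV photon, as generated in the SLAC experiment, needs only a ~9 eV
# partner; a single ~2.4 eV laser photon is below this, which is why the
# process observed there was the multiphoton (nonlinear) variant.
print(min_partner_energy_mev(29_000) * 1e6)   # ≈ 9.0 (eV)
```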
The use of higher laser intensities (10²⁰ W/cm²) is now easily achievable with short-pulse titanium-sapphire laser systems and would significantly enhance the efficiency of both processes (nonlinear inverse Compton scattering and nonlinear Breit–Wheeler pair creation), leading to antimatter production several orders of magnitude higher and enabling higher-resolution measurements as well as studies of mass-shift, nonlinear and spin effects. The extreme intensities expected to be available in future multi-petawatt laser systems will allow all-optical, laser–electron collision experiments in which the electron beam is generated from direct laser interaction with a gas jet in the so-called laser wakefield acceleration regime. The resulting electron bunch is then made to interact with a second high-power laser in order to study QED processes. The feasibility of an all-optical multiphoton Breit–Wheeler pair-production scheme was first proposed theoretically. Implementation of this scheme is restricted to multi-beam, short-pulse, extreme-intensity laser facilities such as the CILEX-Apollon and ELI systems (CPA titanium-sapphire technology at 0.8 micrometers, duration of 15–30 femtoseconds). The generation of electron beams of a few GeV and a few nanocoulombs is possible with a first laser of 1 petawatt combined with tuned and optimized gas-jet density profiles such as two-step profiles. Strong pair generation can be achieved by colliding this electron beam head-on with a second laser of intensity above 10²² W/cm². In this configuration, at this level of intensity, theoretical studies predict that several hundred picocoulombs of antimatter could be produced; this experimental setup could even be one of the most prolific positron factories. This all-optical scenario may first be tested with lower laser intensities of the order of 10²¹ W/cm². In July 2021, evidence consistent with the process was reported by the STAR detector, one of the four experiments at the Relativistic Heavy Ion Collider, although it was unclear whether it was due to massless photons or massive virtual photons; vacuum birefringence was also studied, and enough evidence was obtained to claim the first known observation of the process. See also Two-photon physics References Photonics Hypothetical processes Quantum electrodynamics
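The rest-energy threshold mentioned at the start of this article can be made explicit with standard two-photon kinematics (a textbook relation, not tied to any particular experiment described here):

% Invariant-mass condition for \gamma \gamma' \to e^+ e^-: the photon pair
% must carry at least the rest energy of the produced pair.
\[
  s = 2 E_{\gamma} E_{\gamma'} (1 - \cos\theta) \ \ge\ (2 m_e c^2)^2,
\]
% which for a head-on collision (\cos\theta = -1) reduces to
\[
  E_{\gamma} E_{\gamma'} \ \ge\ (m_e c^2)^2 \approx (0.511\ \text{MeV})^2,
\]
% i.e. two equal-energy photons must each exceed 0.511 MeV.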
Breit–Wheeler process
[ "Physics" ]
1,903
[ "Theoretical physics", "Hypotheses in physics" ]
42,801,231
https://en.wikipedia.org/wiki/AMBAO
AMBAO is a certification mark for chocolate created by the Belgian Ministry of Economic Affairs. The mark certifies that the product has been made without any vegetable fats other than cocoa fat, and without artificial additives. The AMBAO scheme was designed to resist the effects of the European Cocoa and Chocolate Directive, which allowed the use of up to 5% non-cocoa vegetable fats in chocolate. References Chocolate industry Certification marks Belgian legislation
AMBAO
[ "Mathematics" ]
97
[ "Symbols", "Certification marks" ]
42,801,873
https://en.wikipedia.org/wiki/Raffaelea%20quercivora
Raffaelea quercivora is a species of fungus in the family Ophiostomataceae. It causes Japanese oak wilt disease, and is spread by the ambrosia beetle (Platypus quercivorus). It has small obovoid to pyriform sympodioconidia and slender, long conidiophores. The fungus has been isolated from the body surfaces and mycangia of the beetle. References Further reading External links Fungi described in 2002 Ophiostomatales Fungus species
Raffaelea quercivora
[ "Biology" ]
113
[ "Fungi", "Fungus species" ]
42,801,895
https://en.wikipedia.org/wiki/Ghana%20Planetarium
The Ghana Planetarium is located behind the Ghana Police Headquarters in Cantonments, Accra. It is open throughout the year. The planetarium was founded by Dr. Jacob and Jane Ashong and was built in 2009, funded from their pension. It was officially opened on Thursday, January 22, 2009 by the British High Commissioner, Nicholas Westcott. Also attending were the British Council Director, the French Ambassador, and the Chief of Nungua with his entourage. References 2009 establishments in Ghana Planetaria Accra
Ghana Planetarium
[ "Astronomy" ]
102
[ "Astronomy education", "Astronomy organization stubs", "Astronomy stubs", "Astronomy organizations", "Planetaria" ]
42,801,977
https://en.wikipedia.org/wiki/Desulfohalobium%20retbaense
Desulfohalobium retbaense is a bacterium and serves as the type species of its genus. It is halophilic, sulfate-reducing, motile, nonsporulating and rod-shaped with polar flagella. The type strain is DSM 5692. Its genome has been sequenced. References Further reading External links LPSN Type strain of Desulfohalobium retbaense at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 1991 Desulfovibrionales
Desulfohalobium retbaense
[ "Biology" ]
110
[ "Bacteria stubs", "Bacteria" ]
42,802,029
https://en.wikipedia.org/wiki/Geobacter%20bemidjiensis
Geobacter bemidjiensis is an Fe(III)-reducing bacterium. It is Gram-negative, slightly curved, rod-shaped and motile by means of a single polar (monotrichous) flagellum. Its type strain is BemT (=ATCC BAA-1014T =DSM 16622T =JCM 12645T). See also List of bacterial orders List of bacteria genera References Further reading External links LPSN Type strain of Geobacter bemidjiensis at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 2005 Thermodesulfobacteriota
Geobacter bemidjiensis
[ "Biology" ]
126
[ "Bacteria stubs", "Bacteria" ]
42,802,042
https://en.wikipedia.org/wiki/Geobacter%20psychrophilus
Geobacter psychrophilus is an Fe(III)-reducing bacterium. It is Gram-negative, slightly curved, rod-shaped and motile by means of a single polar (monotrichous) flagellum. Its type strain is P35T (=ATCC BAA-1013T =DSM 16674T =JCM 12644T). References Further reading External links Type strain of Geobacter psychrophilus at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 2005 Thermodesulfobacteriota
Geobacter psychrophilus
[ "Biology" ]
114
[ "Bacteria stubs", "Bacteria" ]
42,802,053
https://en.wikipedia.org/wiki/Desulfacinum%20infernum
Desulfacinum infernum is a thermophilic sulfate-reducing bacterium, the type species of its genus. Its cells are oval, 1.5 by 2.5–3 μm, non-motile and Gram-negative. References Further reading Barton, Larry L., and W. Allan Hamilton, eds. Sulphate-reducing bacteria: Environmental and engineered systems. Cambridge University Press, 2007. Vos, P., et al. "Bergey's Manual of Systematic Bacteriology, Volume 3: The Firmicutes." (2009). External links LPSN Type strain of Desulfacinum infernum at BacDive - the Bacterial Diversity Metadatabase Thermodesulfobacteriota Bacteria described in 1995
Desulfacinum infernum
[ "Biology" ]
162
[ "Bacteria stubs", "Bacteria" ]
42,802,057
https://en.wikipedia.org/wiki/Desulfacinum%20hydrothermale
Desulfacinum hydrothermale is a thermophilic sulfate-reducing bacterium. Its cells are oval-shaped, 0.8–1 μm in width and 1.5–2.5 μm in length, motile and Gram-negative. The type strain is MT-96T (=DSM 13146). References Further reading Barton, Larry L., and W. Allan Hamilton, eds. Sulphate-reducing bacteria: Environmental and engineered systems. Cambridge University Press, 2007. Vos, P., et al. "Bergey's Manual of Systematic Bacteriology, Volume 3: The Firmicutes." (2009). External links LPSN WORMS Type strain of Desulfacinum hydrothermale at BacDive - the Bacterial Diversity Metadatabase Thermodesulfobacteriota Bacteria described in 2000
Desulfacinum hydrothermale
[ "Biology" ]
183
[ "Bacteria stubs", "Bacteria" ]
42,802,066
https://en.wikipedia.org/wiki/Desulfovibrio%20halophilus
Desulfovibrio halophilus is a halophilic sulfate-reducing bacterium. References Further reading Staley, James T., et al. "Bergey's manual of systematic bacteriology, vol. 3." Williams and Wilkins, Baltimore, MD (2012). Alsharhan, Abdulrahman S., and Christopher G. St C. Kendall. "Introduction to Quaternary carbonate and evaporite sedimentary facies and their ancient analogues." Int. Assoc. Sedimentol. Spec. Publ 43 (2011): 1–10. Barton, Larry L., and W. Allan Hamilton, eds. Sulphate-reducing bacteria: Environmental and engineered systems. Cambridge University Press, 2007. External links LPSN Type strain of Desulfovibrio halophilus at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 1991 Desulfovibrio
Desulfovibrio halophilus
[ "Biology" ]
188
[ "Bacteria stubs", "Bacteria" ]
42,802,068
https://en.wikipedia.org/wiki/Desulfovibrio%20gabonensis
Desulfovibrio gabonensis is a moderately halophilic sulfate-reducing bacterium. Its cells are motile curved rods that have a single polar flagellum. Its type strain is SEBR 2840 (= DSM 10636). References Further reading Staley, James T., et al. "Bergey's manual of systematic bacteriology, vol. 3." Williams and Wilkins, Baltimore, MD (2012). Alsharhan, Abdulrahman S., and Christopher G. St C. Kendall. "Introduction to Quaternary carbonate and evaporite sedimentary facies and their ancient analogues." Int. Assoc. Sedimentol. Spec. Publ 43 (2011): 1–10. Barton, Larry L., and W. Allan Hamilton, eds. Sulphate-reducing bacteria: Environmental and engineered systems. Cambridge University Press, 2007. External links Type strain of Desulfovibrio gabonensis at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 1996 Desulfovibrio
Desulfovibrio gabonensis
[ "Biology" ]
216
[ "Bacteria stubs", "Bacteria" ]
42,802,159
https://en.wikipedia.org/wiki/Deinococcus%20saxicola
Deinococcus saxicola is a species of low-temperature- and drought-tolerant, UV-resistant bacteria from Antarctica. It is Gram-positive, non-motile and coccoid-shaped. Its type strain is AA-1444T (DSM 15974T). References Further reading Bej, Asim K., Jackie Aislabie, and Ronald M. Atlas, eds. Polar microbiology: the ecology, biodiversity and bioremediation potential of microorganisms in extremely cold environments. CRC Press, 2009. Staley, James T., et al. "Bergey's manual of systematic bacteriology, vol. 3." Williams and Wilkins, Baltimore, MD (1989): 2250–2251. External links LPSN Type strain of Deinococcus saxicola at BacDive - the Bacterial Diversity Metadatabase Polyextremophiles Deinococcales Bacteria described in 2006
Deinococcus saxicola
[ "Biology" ]
203
[ "Bacteria stubs", "Bacteria" ]
42,802,461
https://en.wikipedia.org/wiki/Doubling%20%28textiles%29
Doubling is a textile industry term synonymous with combining. It can be used for various processes during spinning. During the carding stage, several sources of roving are doubled together and drawn, to remove variations in thickness. After spinning, yarn is doubled for many reasons. Yarn may be doubled to produce warp for weaving, or to make cotton for lace, crochet and knitting. It is used for embroidery threads and sewing threads; for example, sewing thread is usually 6-cable thread. Two threads of spun 60s cotton are twisted together, and three of these doubled threads are twisted into a cable, of what is now 5s yarn. This is mercerised, gassed (also known as flamed) and wound onto a bobbin. Processing of cotton Doubling in the carding process In a wider sense carding can refer to the four processes of willowing, lapping, carding and drawing. During willowing the fibres are loosened; in lapping the dust is removed to create a flat sheet or lap of fibres. Carding combs the tangled lap into a thick rope or sliver of 1/2 inch in diameter, and removes the shorter fibres, creating a stronger yarn. During the carding process the staples are separated and then assembled into a loose strand (sliver or tow). The carders line up the staples to prepare them for spinning. The carding machine consists mainly of one big roller with smaller ones surrounding it. All of the rollers are covered in small teeth, and as the cotton progresses further on, the teeth get finer (i.e. closer together). The cotton leaves the carding machine in the form of a sliver: a large rope of fibres. In drawing, four slivers are combined into one. Repeated drawing decreases the quality of the sliver drastically, preventing finer counts from being spun. Each sliver will have thin and thick spots. By combining, or doubling, several slivers together, a more consistent size can be reached. The slivers are separated into rovings. These rovings (or slubbings) are then used in the spinning process. For machine processing, a roving is about the width of a pencil. The rovings are collected in a drum and proceed to the slubbing frame, which adds twist and winds onto bobbins. Intermediate frames are used to repeat the slubbing process to produce a finer yarn, and then the roving frame reduces it to a finer thread, adds more twist, makes it more regular and even in thickness, and winds it onto a smaller tube. Over time the processes were refined. At Masson Mill and at Helmshore, there are Derby Doublers. The Masson machine, built by Platts of Oldham in 1902, doubled rovings from the breaker cards into card lap. This was then passed through the finisher card to produce the rovings. This process was known as double carding. The Derby Doubler was patented by Evan Leigh of Ashton-under-Lyne (21 December 1810 – 2 February 1876) and, though superseded, continued in service for condensing coarse counts. The doubling process In doubling, two or more strands or ends of yarn are twisted together. Doubled yarn is used for many purposes. Sometimes thread is doubled to make warp, and it is invariably used for the manufacture of knitting yarn, crochet yarn and sewing yarn. All these yarns must be smooth and free from knots. In a sewing thread, the threads are doubled in two phases: two or three strands are twisted together, then three of these threads are twisted together to form a six- or nine-cord thread.
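As a rough orientation on the counts quoted above (an idealised sketch: the formula neglects twist contraction, so finished cabled counts come out coarser than this figure), the resultant count of a folded yarn in the indirect cotton system is the single count divided by the number of strands:

% Idealised resultant count of a folded yarn (indirect/cotton count system),
% neglecting twist contraction -- an assumption, not a measured value:
\[
  N_{\text{resultant}} = \frac{N_{\text{single}}}{n}
                       = \frac{60}{2 \times 3} = 10\text{s},
\]
% any difference from the finished count reflects the take-up (contraction)
% added at each of the two twisting stages.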
The spun yarn is wound onto a bobbin using a doubling winding machine, and two or more of these bobbins are placed on a doubling frame. The ends pass through a series of rollers and are twisted together onto one bobbin using a spindle and flyer. The process here is similar to that found in one of Arkwright's water frames, though the size of the ring, spindle and traveller is predictably larger. Alternatively a 'twiner' is used: this is a modified spinning mule and is mainly consigned to the doubling of warp thread. High-quality doubling depended on keeping the tension correct and feeding the produced thread evenly and tightly onto a bobbin or flangeless paper tube. See also Fine Cotton Spinners and Doublers Association, a major UK cotton spinning company References Notes Bibliography Textiles Textile industry History of the textile industry Textile engineering Textile techniques
Doubling (textiles)
[ "Physics", "Engineering" ]
928
[ "Applied and interdisciplinary physics", "Textile engineering" ]
42,802,598
https://en.wikipedia.org/wiki/Rudolf%20Nadolny
Rudolf Nadolny (12 July 1873 – 18 May 1953) was a military intelligence officer under German Foreign Office cover. During the First World War, he worked in a branch of the German General Staff that experimented in biological warfare. He was the German Ambassador to Turkey (1924–1933) and the Soviet Union (1933–1934) and the head of the German delegation at the World Disarmament Conference (1932–1933). He sought to pursue close relations between the Weimar Republic and the Soviet Union. Nadolny left the diplomatic service in opposition to Hitler's policy towards the Soviets. Early life and career Nadolny was born in Groß Stürlack, East Prussia (now Sterławki Wielkie, Poland) to Heinrich (1847–1944) and Agnes Nadolny, née Trinker (1847–1910). His father's family had been landowners in East Prussia since the 14th century. His mother's ancestors were Protestant exiles from Salzburg. Nadolny passed his Abitur at the gymnasium (school) of Rastenburg in 1892 and studied law at the University of Königsberg. Nadolny joined the German diplomatic service in 1902 and was posted to St. Petersburg from 1903 to 1907, during which time he witnessed the Russian Revolution of 1905 and the Russo-Japanese War. Nadolny was then sent to Persia, Bosnia and Albania. First World War During the First World War, Nadolny led a political section of the German General Staff, the so-called Sektion Politik Berlin des Generalstabs. The group was responsible for acts of sabotage using explosives and biological warfare. In 1915, Nadolny shipped cultures of anthrax and glanders, a horse disease that is also deadly to humans, to the German embassy in Romania to use them to target animals traded with the Russian Empire. The operation lasted until August 1916. The bacteria used by Nadolny were prepared in Berlin, and from there Nadolny sent out the biological agents to Spain, the United States, Argentina and Romania. It was Nadolny who sent the infamous Anton Dilger to the still-neutral United States, where Dilger engaged in one of the first acts of state-sponsored bioterrorism in the 20th century. In July 1916, he became the German chargé d'affaires in Persia but returned to Germany in November 1917 to serve as the acting head of the Eastern Department of the German Foreign Office. As such, Nadolny took part in the negotiations that led to the Treaty of Brest-Litovsk. Interwar era After the end of the First World War, Nadolny was the Foreign Office's representative in the Office of the German President. From January 1920, he led the German legation in Stockholm and became the German ambassador to Turkey in May 1924. During the interwar era, Nadolny wrote that out of the mixing of German and "Slavic" blood a new species and race would be born, an "East-Elbian" race, and he attacked the Czechoslovak national leader Jan Masaryk for criticising the "Prussian Spirit", claiming that Czechs were merely relatives of Prussians. In November 1928, after the death of Ulrich von Brockdorff-Rantzau, the German ambassador in Moscow, Nadolny applied for this post, but his efforts were vetoed by Gustav Stresemann, who appointed Herbert von Dirksen instead. From February 1932 to October 1933, Nadolny was the head of the German delegation at the World Disarmament Conference in Geneva.
In the spring of 1932, when General Kurt von Schleicher brought down the government of Heinrich Brüning and had his friend Franz von Papen appointed chancellor, Nadolny was one of the three men interviewed by Schleicher as a possible foreign minister for the Papen government. The other two men interviewed were Baron Leopold von Hoesch, the German ambassador to France, and Baron Konstantin von Neurath, the German ambassador to the United Kingdom. Ultimately, Neurath was chosen by Schleicher to be the foreign minister in the "Cabinet of the President's Friends", as the Papen government was known, and he never forgot that Nadolny had been disappointed not to receive the portfolio he had greatly wanted. Neurath stayed on as Foreign Minister and served in the Papen, Schleicher and Hitler governments until 4 February 1938, when he was sacked by Hitler. Nadolny became the German ambassador to the Soviet Union in October 1933. Neurath, who saw Nadolny as a rival and knew of Hitler's anti-Soviet inclinations and Nadolny's advocacy of better relations with the Soviet Union, gave him the Moscow appointment to ruin his career. Nadolny's attempts to enhance German–Soviet relations on the basis of the Treaty of Rapallo (1922) were largely unsuccessful, as they contradicted Hitler's policy. Nadolny believed in 1933 that it was feasible for Nazi Germany to annex Polish territories in Pomerania in exchange for promising Lithuanian Memel to the Poles. Nadolny argued against the 1934 German–Polish Non-Aggression Pact because of its influence on German–Soviet relations and urged "decent treatment" of Soviet ambassador Maxim Litvinov "even if he is Jewish". In a conference with Hitler, Nadolny pointed out that in his view close ties with the Soviets were of essential interest, but Hitler rejected any compromise with Bolshevism. However, even Nadolny admitted that a totally friendly relationship with Russia was impossible. The meeting, which was described as a "stormy one", ended with Hitler declaring the conversation finished; Nadolny answered that "the conversation had just begun". On another occasion, he addressed Hitler as "Herr Reichskanzler", as opposed to the common "Mein Führer", and he refused to use the Nazi salute. Nadolny resigned on 16 June 1934 and worked as an administrator of an estate. Second World War During the Second World War, he served as a captain and later major at the High Command of the Wehrmacht and on the staff of Admiral Wilhelm Canaris. Postwar In 1945 Nadolny, who had no compromising Nazi Party affiliation, became the president of the German Red Cross and was active in the Society for German reunification and the "German Unity Association". With the growing tensions between the Western Allies and the Soviets, Nadolny was sometimes seen as a Soviet agent and was generally mistrusted. During the Berlin Blockade of 1948–1949, Nadolny moved to West Germany. He died in 1953 in Düsseldorf. Family Nadolny married Änny Matthiessen (1882–1977) in 1905. Burkard Nadolny (1905–68) was their son and Sten Nadolny their grandson. References External links 1873 births 1953 deaths People from Ryn People from East Prussia Ambassadors of Germany to the Soviet Union Ambassadors of Germany to Turkey University of Königsberg alumni German military personnel of World War I German Army officers of World War II People related to biological warfare
Rudolf Nadolny
[ "Biology" ]
1,461
[ "People related to biological warfare", "Biological warfare" ]
42,803,144
https://en.wikipedia.org/wiki/Uncertainties%20in%20building%20design%20and%20building%20energy%20assessment
The detailed design of buildings needs to take into account various external factors, which may be subject to uncertainties. Among these factors are prevailing weather and climate; the properties of the materials used and the standard of workmanship; and the behaviour of occupants of the building. Several studies have indicated that the behavioural factors are the most important among these. Methods have been developed to estimate the extent of variability in these factors and the resulting need to take this variability into account at the design stage. Sources of uncertainty Earlier work includes a paper by Gero and Dudnik (1978) presenting a methodology to solve the problem of designing heating, ventilation and air conditioning systems subjected to uncertain demands. Since then, other authors have shown an interest in the uncertainties that are present in building design. Ramallo-González (2013) classified the uncertainties in building energy assessment tools into three groups: Environmental. Uncertainty in weather prediction under a changing climate; and uncertain weather data due to the use of synthetic weather data files: (1) use of synthetic years that do not represent a real year, and (2) use of a synthetic year that has not been generated from data recorded at the exact location of the project but at the closest weather station. Workmanship and quality of building elements. Differences between the design and the real building: conductivity of thermal bridges, conductivity of insulation, value of infiltration (air leakage), or U-values of walls and windows. Behavioural. All other parameters linked to human behaviour, e.g. opening of doors and windows, use of appliances, occupancy patterns or cooking habits. Weather and climate Climate change Buildings have long life spans: for example, in England and Wales, around 40% of the office blocks existing in 2004 were built before 1940 (30% if considered by floor area), and 38.9% of English dwellings in 2007 were built before 1944. This long life span makes it likely that buildings will operate under climates that change due to global warming. De Wilde and Coley (2012) showed how important it is to design buildings that take climate change into consideration and are able to perform well under future weather. Weather data The use of synthetic weather data files may introduce further uncertainty. Wang et al. (2005) showed the impact that uncertainties in weather data (among others) may have on energy demand calculations. The deviations in calculated energy use due to variability in the weather data were found to differ between locations, from a range of −0.5% to 3% in San Francisco to a range of −4% to 6% in Washington, D.C. The ranges were calculated using a Typical Meteorological Year (TMY) as the reference. The spatial resolution of weather data files was the concern covered by Eames et al. (2011). Eames showed how a low spatial resolution of weather data files can be the cause of disparities of up to 40% in the heating demand. The reason is that this uncertainty is not understood as an aleatory parameter but as an epistemic uncertainty, which can be resolved with appropriate improvement of the data resources or with specific weather data acquisition for each project. Building materials and workmanship A large study was carried out by Leeds Metropolitan University at Stamford Brook in England. This project saw 700 dwellings built to high efficiency standards.
The results of this project show a significant gap between the energy use expected before construction and the actual energy use once the house is occupied. The workmanship is analysed in this work. The authors emphasise the importance of thermal bridges that were not considered in the calculations, and find that the thermal bridges with the largest impact on final energy use are those originating from the internal partitions that separate dwellings. The dwellings that were monitored in use in this study show a large difference between the real energy use and that estimated using the UK Standard Assessment Procedure (SAP), with one of them giving +176% of the expected value when in use. Hopfe has published several papers concerning uncertainties in building design. A 2007 publication looks into uncertainties of types 2 and 3. In this work the uncertainties are defined as normal distributions. The random parameters are sampled to generate 200 tests that are sent to the simulator (VA114), and the results are analysed to identify the uncertainties with the largest impact on the energy calculations (a toy illustration of this sampling approach is sketched below). This work showed that the uncertainty in the value used for infiltration is the factor likely to have the largest influence on cooling and heating demands. De Wilde and Tian (2009) agreed with Hopfe on the impact of uncertainties in infiltration upon energy calculations, but also introduced other factors. The work of Schnieders and Hermelink (2006) showed substantial variability in the energy demands of low-energy buildings designed under the same (Passivhaus) specification. Occupant behaviour Blight and Coley (2012) showed that substantial variability in energy use can arise from variance in occupant behaviour, including the use of windows and doors. Their paper also demonstrated that their method of modelling occupants' behaviour accurately reproduces the actual behavioural patterns of inhabitants. This modelling method was the one developed by Richardson et al. (2008), using the Time-Use Survey (TUS) of the United Kingdom as a source of real occupant behaviour, based on the activity of more than 6,000 occupants recorded in 24-hour diaries with a 10-minute resolution. Richardson's paper shows how the tool is able to generate behavioural patterns that correlate with the real data obtained from the TUS. Multifactorial studies In the work of Pettersen (1994), uncertainties of group 2 (workmanship and quality of elements) and group 3 (behaviour) of the previous grouping were considered. This work shows how important occupants' behaviour is to the calculation of the energy demand of a building. Pettersen showed that the total energy use follows a normal distribution with a standard deviation of around 7.6% when the uncertainties due to occupants are considered, and of around 4.0% when considering those generated by the properties of the building elements. Wang et al. (2005) showed that deviations in energy demand due to local variability in weather data were smaller than those due to operational parameters linked with occupants' behaviour. For those, the ranges were −29% to 79% for San Francisco and −28% to 57% for Washington, D.C. The conclusion of this paper is that occupants have a larger impact on energy calculations than the variability between synthetically generated weather data files.
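As a minimal sketch of the sampling approach described above, the following propagates normally distributed input uncertainties through a toy steady-state heat-loss model; the model and all distribution parameters are illustrative assumptions standing in for a full simulator such as VA114, not values from the cited studies.

import numpy as np

rng = np.random.default_rng(42)
n_runs = 200  # sample size comparable to Hopfe's 200 simulator tests

# Uncertain inputs drawn from normal distributions (all means and
# standard deviations here are illustrative assumptions):
infiltration = rng.normal(0.5, 0.1, n_runs)     # air changes per hour
u_wall = rng.normal(0.35, 0.05, n_runs)         # wall U-value, W/(m^2.K)
internal_gains = rng.normal(4.0, 1.0, n_runs)   # occupant/equipment gains, W/m^2

def heating_demand(ach, u_value, gains):
    """Toy steady-state heat balance standing in for a building simulator."""
    volume, wall_area, floor_area, delta_t = 250.0, 120.0, 100.0, 18.0
    ventilation_loss = 0.33 * ach * volume * delta_t   # W (0.33 Wh/(m^3.K) for air)
    fabric_loss = u_value * wall_area * delta_t        # W
    return ventilation_loss + fabric_loss - gains * floor_area

demand = heating_demand(infiltration, u_wall, internal_gains)
print(f"mean demand: {demand.mean():.0f} W, std: {demand.std():.0f} W "
      f"({100 * demand.std() / demand.mean():.1f}% relative spread)")

With these assumed spreads, the infiltration term dominates the output variance, which is consistent with the qualitative finding attributed to Hopfe above.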
Another study performed by de Wilde and Wei Tian (2009) compared the impact of most of the uncertainties affecting building energy calculations, including uncertainties in weather, the U-value of windows, and other variables related to occupants' behaviour (equipment and lighting), while also taking climate change into account. De Wilde and Tian used a two-dimensional Monte Carlo analysis to generate a database obtained from 7,280 runs of a building simulator. A sensitivity analysis was applied to this database to obtain the most significant factors in the variability of the energy demand calculations. Standardised regression coefficients and standardised rank regression coefficients were used to compare the impacts of the uncertainties (a self-contained toy illustration of these measures follows at the end of this entry). Their paper compares many of the uncertainties using a good-sized database, providing a realistic comparison within the scope of the sampling of the uncertainties. See also ASHRAE Building energy simulation References Building engineering
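A self-contained sketch of the regression-based sensitivity measures named above (standardised regression coefficients, SRCs, and standardised rank regression coefficients, SRRCs); the input distributions and the linear response model are illustrative assumptions, not those of de Wilde and Tian.

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n = 7280  # matching the number of simulator runs quoted above

# Illustrative uncertain inputs (assumed distributions):
X = np.column_stack([
    rng.normal(0.5, 0.1, n),     # infiltration, air changes per hour
    rng.normal(0.35, 0.05, n),   # wall U-value, W/(m^2.K)
    rng.normal(4.0, 1.0, n),     # internal gains, W/m^2
])
# Toy linear heat-balance response standing in for the simulator output:
y = 0.33 * X[:, 0] * 250 * 18 + X[:, 1] * 120 * 18 - X[:, 2] * 100

def standardised_coefficients(inputs, output):
    """Least-squares coefficients after standardising inputs and output."""
    xs = (inputs - inputs.mean(axis=0)) / inputs.std(axis=0)
    ys = (output - output.mean()) / output.std()
    beta, *_ = np.linalg.lstsq(xs, ys, rcond=None)
    return beta

src = standardised_coefficients(X, y)                 # SRCs on raw values
srrc = standardised_coefficients(                     # SRRCs on rank-transformed
    np.apply_along_axis(rankdata, 0, X), rankdata(y)  # values, robust to monotone
)                                                     # non-linearity
for name, a, b in zip(["infiltration", "u_wall", "gains"], src, srrc):
    print(f"{name:>12}: SRC {a:+.2f}  SRRC {b:+.2f}")

The magnitudes of the coefficients rank the inputs by their contribution to output variance; for a purely linear model, SRCs and SRRCs give essentially the same ranking.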
Uncertainties in building design and building energy assessment
[ "Engineering" ]
1,526
[ "Building engineering", "Civil engineering", "Architecture" ]
42,804,165
https://en.wikipedia.org/wiki/Gay-Lussac%E2%80%93Humboldt%20Prize
The Gay-Lussac–Humboldt Prize is a German–French science prize. It was created in 1981 by French President Valéry Giscard d'Estaing and German Chancellor Helmut Schmidt on the recommendation of the German and French research ministries. The prize money is €60,000. The prize is awarded to researchers who have made outstanding contributions in science, especially in cooperation between the two countries. Four to five German and French scientists from all research disciplines are honored with this award every year. The prize was originally named after Alexander von Humboldt and has carried the double name Gay-Lussac–Humboldt since 1997. The Gay-Lussac–Humboldt Award is granted by the French Ministry of Higher Education and Research to German researchers nominated by French researchers; conversely, it is awarded by the Alexander von Humboldt Foundation to French researchers nominated by German scientists. Prize winners References Sources Gay-Lussac–Humboldt Prize (PDF, in French) Laureates 1983–2010 (PDF, in French) 2012 Laureates (PDF, in French) 2013 Laureates (in French) 2014 Laureates (in French) 2015 Laureates (in French) 2016 Laureates (in French) Science and technology awards Alexander von Humboldt Valéry Giscard d'Estaing
Gay-Lussac–Humboldt Prize
[ "Technology" ]
261
[ "Science and technology awards" ]