id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
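The column summary above describes one record per article. A minimal sketch of that record shape as a typed structure (the dataclass itself is illustrative, not part of the dataset; field names come from the header above):

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Field names follow the column header; types are the natural Python
    # equivalents of the dataset's int64/string/list columns.
    id: int                   # article id, int64
    url: str                  # source URL, 31-227 chars
    text: str                 # article text, 6-334k chars
    source: str               # article title, 1-150 chars
    categories: list[str]     # 1-6 top-level labels
    token_count: int          # 3-71.8k tokens
    subcategories: list[str]  # 0-30 labels

# Example built from the first record below (id 54,632,457); text truncated.
r = Record(
    id=54_632_457,
    url="https://en.wikipedia.org/wiki/Halorubrum%20alkaliphilum",
    text="Halorubrum alkaliphilum is a halophilic archaeon...",
    source="Halorubrum alkaliphilum",
    categories=["Biology"],
    token_count=38,
    subcategories=["Archaea", "Archaea stubs"],
)
assert 1 <= len(r.categories) <= 6
```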
54,632,457
https://en.wikipedia.org/wiki/Halorubrum%20alkaliphilum
Halorubrum alkaliphilum is a halophilic archaeon in the family Halorubraceae. References Euryarchaeota Archaea described in 2005
Halorubrum alkaliphilum
[ "Biology" ]
38
[ "Archaea", "Archaea stubs" ]
54,632,461
https://en.wikipedia.org/wiki/Halorubrum%20coriense
Halorubrum coriense is a halophilic archaeon in the family Halorubraceae. References Euryarchaeota Archaea described in 1996
Halorubrum coriense
[ "Biology" ]
37
[ "Archaea", "Archaea stubs" ]
54,632,462
https://en.wikipedia.org/wiki/Halorubrum%20distributum
Halorubrum distributum is a halophilic archaeon in the family Halorubraceae. References Euryarchaeota Archaea described in 1989
Halorubrum distributum
[ "Biology" ]
39
[ "Archaea", "Archaea stubs" ]
54,632,469
https://en.wikipedia.org/wiki/Halorubrum%20lacusprofundi
Halorubrum lacusprofundi is a rod-shaped, halophilic archaeon in the family Halorubraceae. It was first isolated from Deep Lake in Antarctica in the 1980s. Genome Several strains of H. lacusprofundi have been discovered. The genome sequencing of the strain ACAM 32 was completed in 2008. The organism's genome consists of two circular chromosomes and a single circular plasmid. Chromosome I contains 2,735,295 base pairs encoding 2,801 genes and chromosome II contains 525,943 base pairs encoding 522 genes. The single plasmid contains 431,338 base pairs encoding 402 genes. At least one strain of H. lacusprofundi (R1S1) contains a plasmid (pR1SE) that enables horizontal gene transfer, which takes place via a mechanism that uses vesicle-enclosed virus-like particles. Research Its β-galactosidase enzyme has been extensively studied to understand how proteins function in low-temperature, high-saline environments. References Euryarchaeota
Halorubrum lacusprofundi
[ "Biology" ]
230
[ "Archaea", "Archaea stubs" ]
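The replicon sizes quoted in the H. lacusprofundi entry above can be cross-checked with a few lines of arithmetic (the per-replicon figures come directly from the text; the totals are derived, not stated in the source):

```python
# Replicon sizes (base pairs) and gene counts for H. lacusprofundi ACAM 32,
# as given in the entry above.
replicons = {
    "chromosome I":  (2_735_295, 2_801),
    "chromosome II": (525_943, 522),
    "plasmid":       (431_338, 402),
}

total_bp = sum(bp for bp, _ in replicons.values())
total_genes = sum(g for _, g in replicons.values())
print(total_bp, total_genes)  # 3692576 3725
```

So the full genome is about 3.69 Mbp encoding roughly 3,725 genes.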
54,632,483
https://en.wikipedia.org/wiki/Halorubrum%20vacuolatum
Halorubrum vacuolatum is a halophilic archaeon in the family Halorubraceae. It is an extremophile and is able to survive in water with high salt concentrations. References Euryarchaeota Archaea described in 1993
Halorubrum vacuolatum
[ "Biology" ]
57
[ "Archaea", "Archaea stubs" ]
54,632,635
https://en.wikipedia.org/wiki/Cholamine%20chloride%20hydrochloride
Cholamine chloride hydrochloride is one of Good's buffers, with a pKa in the physiological range. Its pKa at 20 °C is 7.10, making it useful in cell culture work. Its ΔpKa/°C is −0.027, and its solubility in water at 0 °C is 4.2 M. References Buffer solutions Quaternary ammonium compounds Amines Hydrochlorides
Cholamine chloride hydrochloride
[ "Chemistry", "Biology" ]
92
[ "Buffer solutions", "Biotechnology stubs", "Functional groups", "Biochemistry stubs", "Biochemistry", "Amines", "Bases (chemistry)" ]
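The ΔpKa/°C figures quoted for these buffers give a first-order temperature correction, pKa(T) ≈ pKa(20 °C) + (ΔpKa/°C)·(T − 20). A quick sketch using the cholamine values from the entry above (7.10 at 20 °C, −0.027 per °C); the linear extrapolation itself is a standard approximation, not something the entry states:

```python
def pka_at(pka_ref: float, dpka_dT: float, T: float, T_ref: float = 20.0) -> float:
    """Linear temperature correction for a buffer's pKa."""
    return pka_ref + dpka_dT * (T - T_ref)

# Cholamine chloride hydrochloride, values from the entry above.
pka_37 = pka_at(7.10, -0.027, 37.0)   # physiological temperature
print(round(pka_37, 3))  # 6.641
```

The same function applies to the glycinamide hydrochloride values further down (8.20 at 20 °C, ΔpKa/°C = −0.029).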
54,632,690
https://en.wikipedia.org/wiki/Glycinamide
Glycinamide is an organic compound with the molecular formula H2NCH2C(O)NH2. It is the amide derivative of the amino acid glycine. It is a water-soluble, white solid. Amino acid amides, such as glycinamide, are prepared by treating the amino acid ester with ammonia. It is a ligand for transition metals, related to amino acid complexes. As a neutral ligand, it binds through the amine. In some complexes, it binds through the amine and the carbonyl oxygen, forming a five-membered chelate ring. The hydrochloride salt of glycinamide, glycinamide hydrochloride, is one of Good's buffers, with a pKa in the physiological range (8.20 at 20 °C), making it useful in cell culture work. Its ΔpKa/°C is −0.029 and it has a solubility in water at 0 °C of 6.4 M. Glycinamide is a reagent used in the synthesis of glycinamide ribonucleotide (an intermediate in de novo purine biosynthesis). References Buffer solutions Carboxamides Amines
Glycinamide
[ "Chemistry", "Biology" ]
272
[ "Buffer solutions", "Biotechnology stubs", "Functional groups", "Biochemistry stubs", "Biochemistry", "Amines", "Bases (chemistry)" ]
54,632,788
https://en.wikipedia.org/wiki/Poppy-seed%20bagel%20theorem
In physics, the poppy-seed bagel theorem concerns interacting particles (e.g., electrons) confined to a bounded surface (or body) $A$ when the particles repel each other pairwise with a magnitude that is proportional to the inverse distance between them raised to some positive power $s$. In particular, this includes the Coulomb law observed in electrostatics and Riesz potentials extensively studied in potential theory. Other classes of potentials, which do not necessarily involve the Riesz kernel, for example nearest-neighbor interactions, are also described by this theorem in the macroscopic regime. For such particles, a stable equilibrium state, which depends on the parameter $s$, is attained when the associated potential energy of the system is minimal (the so-called generalized Thomson problem). For large numbers of points, these equilibrium configurations provide a discretization of $A$ which may or may not be nearly uniform with respect to the surface area (or volume) of $A$. The poppy-seed bagel theorem asserts that for a large class of sets $A$, the uniformity property holds when the parameter $s$ is larger than or equal to the dimension of the set $A$. For example, when the points ("poppy seeds") are confined to the 2-dimensional surface of a torus embedded in 3 dimensions (or "surface of a bagel"), one can create a large number of points that are nearly uniformly spread on the surface by imposing a repulsion proportional to the inverse square distance between the points, or any stronger repulsion ($s \ge 2$). From a culinary perspective, to create the nearly perfect poppy-seed bagel, where bites of equal size anywhere on the bagel would contain essentially the same number of poppy seeds, impose at least an inverse-square-distance repelling force on the seeds. Formal definitions For a parameter $s > 0$ and an $N$-point set $\omega_N = \{x_1, \ldots, x_N\} \subset \mathbb{R}^p$, the $s$-energy of $\omega_N$ is defined as follows: $$E_s(\omega_N) := \sum_{1 \le i \ne j \le N} \frac{1}{|x_i - x_j|^s}.$$ For a compact set $A \subset \mathbb{R}^p$ we define its minimal $N$-point $s$-energy as $$\mathcal{E}_s(A, N) := \min E_s(\omega_N),$$ where the minimum is taken over all $N$-point subsets of $A$; i.e., $\omega_N \subset A$.
Configurations that attain this infimum are called $N$-point $s$-equilibrium configurations. Poppy-seed bagel theorem for bodies We consider compact sets $A \subset \mathbb{R}^p$ with Lebesgue measure $\lambda(A) > 0$ and $s > p$. For every $N$ fix an $N$-point $s$-equilibrium configuration $\omega_N^* = \{x_1^*, \ldots, x_N^*\}$. Set $$\nu_N := \frac{1}{N} \sum_{i=1}^{N} \delta_{x_i^*},$$ where $\delta_x$ is a unit point mass at point $x$. Under these assumptions, in the sense of weak convergence of measures, $$\nu_N \to \frac{\lambda_A}{\lambda(A)}, \qquad N \to \infty,$$ where $\lambda_A$ is the Lebesgue measure restricted to $A$; i.e., $\lambda_A(B) = \lambda(A \cap B)$. Furthermore, it is true that $$\lim_{N\to\infty} \frac{\mathcal{E}_s(A, N)}{N^{1+s/p}} = \frac{C_{s,p}}{\lambda(A)^{s/p}},$$ where the constant $C_{s,p}$ does not depend on the set $A$ and, therefore, $$C_{s,p} = \lim_{N\to\infty} \frac{\mathcal{E}_s([0,1]^p, N)}{N^{1+s/p}},$$ where $[0,1]^p$ is the unit cube in $\mathbb{R}^p$. Poppy-seed bagel theorem for manifolds Consider a smooth $d$-dimensional manifold $A$ embedded in $\mathbb{R}^p$ and denote its surface measure by $\sigma$. We assume $\sigma(A) > 0$. Assume $s > d$. As before, for every $N$ fix an $N$-point $s$-equilibrium configuration $\omega_N^*$ and set $\nu_N := \frac{1}{N}\sum_{i=1}^{N} \delta_{x_i^*}$. Then, in the sense of weak convergence of measures, $$\nu_N \to \frac{\sigma}{\sigma(A)}, \qquad N \to \infty.$$ If $\mathcal{H}_d$ is the $d$-dimensional Hausdorff measure normalized so that $\mathcal{H}_d([0,1]^d) = 1$, then $$\lim_{N\to\infty} \frac{\mathcal{E}_s(A, N)}{N^{1+s/d}} = \frac{C_{s,d}}{\mathcal{H}_d(A)^{s/d}}$$ (this normalization differs from the standard diameter-based one by the factor $2^d/\beta_d$, where $\beta_d$ is the volume of the unit $d$-ball). The constant Cs,p For $p = 1$, it is known that $C_{s,1} = 2\zeta(s)$, where $\zeta$ is the Riemann zeta function. Using a modular form approach to linear programming, Viazovska together with coauthors established in a 2022 paper that in dimensions $p = 8$ and $p = 24$, the values of $C_{s,p}$, $s > p$, are given by the Epstein zeta function associated with the $E_8$ lattice and Leech lattice, respectively. It is conjectured that for $p = 2$, the value of $C_{s,2}$ is similarly determined as the value of the Epstein zeta function for the hexagonal lattice. Finally, in every dimension $p$ it is known that when $s = p$, the scaling of $\mathcal{E}_s(A, N)$ becomes $N^2 \log N$ rather than $N^{1+s/p}$, and the value of $C_{p,p}$ can be computed explicitly as the volume of the unit $p$-dimensional ball: $$C_{p,p} = \beta_p = \frac{\pi^{p/2}}{\Gamma(1 + p/2)}.$$ The following connection between the constant $C_{s,p}$ and the problem of sphere packing is known: $$\lim_{s\to\infty} C_{s,p}^{1/s} = \frac{1}{2}\left(\frac{\beta_p}{\Delta_p}\right)^{1/p},$$ where $\beta_p$ is the volume of a unit $p$-ball and $\Delta_p$ is the largest sphere-packing density in $\mathbb{R}^p$, where the supremum defining $\Delta_p$ is taken over all families of non-overlapping unit balls such that the limiting density exists. See also Hausdorff dimension Geometric measure theory Sphere packing Riemann zeta function References Physics theorems Potentials Dimension Bagels
Poppy-seed bagel theorem
[ "Physics" ]
807
[ "Geometric measurement", "Physical quantities", "Equations of physics", "Theory of relativity", "Dimension", "Physics theorems" ]
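The Riesz s-energy that the equilibrium configurations in the theorem minimize is easy to compute directly. The sketch below (illustrative only; the torus radii, point count, and random sampling are arbitrary choices, not from the article) evaluates it for points on a torus surface, with s = 2 being the inverse-square case the theorem singles out for a 2-dimensional surface:

```python
import math
import random

def torus_point(u, v, R=2.0, r=1.0):
    """Point on a torus with major radius R and minor radius r (illustrative values)."""
    return ((R + r * math.cos(v)) * math.cos(u),
            (R + r * math.cos(v)) * math.sin(u),
            r * math.sin(v))

def riesz_energy(points, s):
    """E_s = sum over ordered pairs i != j of 1 / |x_i - x_j|^s."""
    e = 0.0
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i != j:
                e += 1.0 / math.dist(p, q) ** s
    return e

random.seed(0)
pts = [torus_point(random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi))
       for _ in range(50)]
print(riesz_energy(pts, s=2))
```

Minimizing this quantity over point positions (e.g. by gradient descent) and letting N grow would, per the theorem, spread the points nearly uniformly over the surface for any s ≥ 2.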
54,633,509
https://en.wikipedia.org/wiki/Nidderdale%20Greenway
The Nidderdale Greenway is a path that runs between Harrogate and Ripley in North Yorkshire, England. It uses a former railway line that ran between Harrogate and Pateley Bridge as its course. The route connects to other cycle paths including the Way of the Roses. Route The former Nidd Valley Railway closed completely in 1964 and the Leeds–Thirsk railway line was closed in 1969. The Nidderdale Greenway makes use of both of these former railways to provide a traffic-free walking and cycling route that extends from Bilton (in north-eastern Harrogate) to the village of Ripley, which is further north. The Greenway was first proposed in the 1990s and, after land purchases, public inquiries and a lottery grant, was officially opened in May 2013. The route is very popular and is used by pedestrians, cyclists, runners and horse-riders. Starting at Bilton (which is on the southern link of the Way of the Roses cycle route), the route heads north-westerly on the former Leeds–Thirsk railway line. At Bilton Beck Wood, it crosses the River Nidd on the grade II-listed seven-arch Nidd Viaduct. The viaduct is at the western end of the Nidd Gorge, where the waters of the River Nidd are funneled into a steep ravine. Just west of the village of Nidd, the route diverges onto the former Nidd Valley Railway line until it reaches the A61 road at Killinghall Bridge. It then crosses the A61 by means of a Pegasus crossing and runs parallel to the road into Ripley on its own path. The last section into Ripley was donated by the owners of Ripley Castle to allow safe passage into the village without cyclists having to resort to using the A61. The greenway is part of National Cycle Route 67, which runs from Long Eaton to Northallerton, although parts of it are yet to be completed. At both ends, the path links into other paths and long-distance cycle routes to Brimham Rocks, Fountains Abbey, Knaresborough, and Starbeck.
The route is maintained by Sustrans Rangers, and in May 2017 a redundant Millennium Milepost carrying information about the route was installed on the Greenway. Harrogate Borough Council is working on extending the path from Bilton further to the south-west, which will connect that part of the route with the railway station. A bike sculpture made of stones was unveiled just south of Ripley alongside the greenway to celebrate the 2014 Tour de France, which passed by Ripley. There is also a portrait bench at the Bilton end of the Greenway which depicts local cycling heroes. The bench is part of a national scheme operated by Sustrans to promote local cycling heroes on the National Cycle Routes. The route may be under threat from a proposal to build a bypass for the A61, which would go from the east side of Harrogate and re-join the existing A61 east of Ripley. This would avoid the road going through Killinghall, but as yet the plans have not been published. The greenway is also under threat from a possible resurgent railway between Harrogate and Ripon, as Bilton Viaduct would need to be utilised in any re-opened railway line. Extension An extension to the greenway was added in 2014. This allows users at the north end to go beyond Ripley towards the village of Clint. A further extension on the trackbed of the Nidd Valley Railway to Pateley Bridge has been proposed, to avoid the necessity for cyclists to use the narrow and winding B6165. References External links Sustrans mapping of the route PDF map of the route and other cycle paths in the east Harrogate area Transport in Harrogate Cycleways in England Sustainable transport 2013 establishments in England Cycling in Yorkshire Transport in North Yorkshire Nidderdale Rail trails in England Greenways
Nidderdale Greenway
[ "Physics" ]
792
[ "Physical systems", "Transport", "Sustainable transport" ]
54,633,671
https://en.wikipedia.org/wiki/Khimprom%20%28Volgograd%29
Khimprom (formerly known as Plant 91) was a major producer of industrial and consumer chemical products based in Volgograd, Russia. The company used to manufacture organophosphorus nerve agents, and as of 2013 still produced dual-use chemicals. History The plant was established in 1931. The plant began production of sarin in 1959, and soman in 1967; production of both was officially ended before 1987. It was claimed that the plant manufactured 5 to 10 tons of binary nerve agent in 1991 as part of the Foliant research program, which was subsequently field-tested at the Ust'yurt plateau, Uzbekistan. In the post-Soviet era, the plant manufactured phosphorus oxychloride and a range of phosphorus- and fluorine-containing compounds. The company's financial situation grew worse in the late 2000s, and it was officially declared bankrupt in 2012. Production at the plant was fully discontinued in 2014. In January 2015, layoffs began as the enterprise was being liquidated. At the same time, projects were launched to repair the environmental damage caused by the plant during decades of chemical production. As of May 2018, the local government was in talks with the Japan-based Marubeni to build a modern methanol plant on the Khimprom site. Present time The organization was liquidated on 27 December 2019. As of 2024, the bankruptcy trustee is Inna Valeryevna Chertkova. References External links Official website Chemical companies of Russia Companies based in Volgograd Oblast Soviet chemical weapons program Chemical warfare facilities Companies formerly listed on the Moscow Exchange Ministry of the Chemical Industry (Soviet Union) Defunct chemical companies Defunct companies of Russia Chemical companies of the Soviet Union
Khimprom (Volgograd)
[ "Chemistry" ]
349
[ "Chemical warfare facilities" ]
54,633,736
https://en.wikipedia.org/wiki/Football%20Live
Football Live was the name given to the project and computer system created and utilised by PA Sport to collect real-time statistics from major English and Scottish football matches and distribute them to most leading media organisations. At the time of its operation, more than 99% of all football statistics displayed across print, internet, radio and TV media outlets would have been collected via Football Live. Background Prior to the implementation of Football Live, the collection process consisted of a news reporter or press officer at each club telephoning the Press Association, relaying information on teams, goals, half-time and full-time. The basis for Football Live was to have a representative of the Press Association (a Football Analyst, or FBA) at every ground. Throughout the whole match they would stay on an open line on a mobile phone to a Sports Information Processor (SIP), constantly relaying statistical information in real time for every: Shot Foul Free kick Goal Cross Goal kick Offside This information would be entered in real time and passed to media customers. The Football Live project was in use from the 2001/02 season until the service was taken over by Opta in 2013/14. Commercial Customers The most famous use for the Football Live data was for the vidiprinter services on the BBC and Sky Sports, allowing goals to be viewed on TV screens within 20 seconds of the event happening. League competitions From its inception in the 2001/02 season, the following leagues/competitions were fully covered by Football Live: English Premier League Championship League One League Two Conference Scottish Premier League English FA Cup English Football League Cup World Cup European Championships Champions League Europa League Football Analysts (FBAs) During the early development stages, the initial idea was to employ ex-referees to act as Football Analysts, but this was soon dismissed in favour of ex-professional footballers.
The most famous of these were Brendon Ormsby, Mel Sterland, Jimmy Case, Neil Webb, John Sitton, Imre Varadi, Brian Kilcline, Gary Chivers and Micky Gynn. All the FBAs were supplied and managed by the Professional Footballers' Association (PFA), with day-to-day responsibility lying with Paul Allen and Chris "Jozza" Joslin from the PFA. References Computer systems Statistical software
Football Live
[ "Mathematics", "Technology", "Engineering" ]
458
[ "Computer engineering", "Computer systems", "Computer science", "Statistical software", "Computers", "Mathematical software" ]
54,634,552
https://en.wikipedia.org/wiki/HEPBS
HEPBS (N-(2-Hydroxyethyl)piperazine-N'-(4-butanesulfonic acid)) is a zwitterionic organic chemical buffering agent; one of Good's buffers. HEPBS and HEPES have very similar structures and properties; HEPBS also has an acidity constant (pKa) in the physiological range (useful range 7.6–9.0), which makes it suitable for cell culture work. References Zwitterions Piperazines Ethanolamines Sulfonic acids Buffer solutions
HEPBS
[ "Physics", "Chemistry" ]
125
[ "Buffer solutions", "Matter", "Functional groups", "Zwitterions", "Sulfonic acids", "Ions" ]
64,488,973
https://en.wikipedia.org/wiki/Thermal%20pressure
In thermodynamics, thermal pressure (also known as the thermal pressure coefficient) is a measure of the relative pressure change of a fluid or a solid as a response to a temperature change at constant volume. The concept is related to the pressure–temperature law, also known as Amontons's law or Gay-Lussac's law. In general, pressure $P(V, T)$ can be written as the following sum: $$P(V, T) = P(V, T_0) + \Delta P_{th}(V, T).$$ $P(V, T_0)$ is the pressure required to compress the material from its volume $V_0$ to volume $V$ at a constant temperature $T_0$. The second term expresses the change in thermal pressure $\Delta P_{th}$. This is the pressure change at constant volume due to the temperature difference between $T$ and $T_0$. Thus, it is the pressure change along an isochore of the material. The thermal pressure is customarily expressed in its simple form as $$\Delta P_{th} = \int_{T_0}^{T} \left(\frac{\partial P}{\partial T}\right)_V dT = \int_{T_0}^{T} \alpha K_T \, dT.$$ Thermodynamic definition Because of the equivalences between many properties and derivatives within thermodynamics (e.g., see Maxwell relations), there are many formulations of the thermal pressure coefficient, which are equally valid, leading to distinct yet correct interpretations of its meaning. Some formulations for the thermal pressure coefficient include: $$\left(\frac{\partial P}{\partial T}\right)_V = \alpha K_T = \frac{\gamma C_V}{V} = \frac{\alpha}{\beta},$$ where $\alpha$ is the volume thermal expansion, $K_T$ the isothermal bulk modulus, $\gamma$ the Grüneisen parameter, $\beta$ the compressibility and $C_V$ the constant-volume heat capacity. Details of the calculation: by the triple product rule, $\left(\frac{\partial P}{\partial T}\right)_V = -\left(\frac{\partial P}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P = \frac{K_T}{V}\,\alpha V = \alpha K_T.$ The utility of the thermal pressure The thermal pressure coefficient can be considered a fundamental property; it is closely related to various properties such as internal pressure, sonic velocity, the entropy of melting, isothermal compressibility, isobaric expansibility, phase transitions, etc. Thus, the study of the thermal pressure coefficient provides a useful basis for understanding the nature of liquids and solids. Since it is normally difficult to obtain these properties by thermodynamic and statistical mechanics methods due to complex interactions among molecules, experimental methods attract much attention.
The thermal pressure coefficient is used to calculate results that are applied widely in industry, and these results further accelerate the development of thermodynamic theory. Commonly the thermal pressure coefficient is expressed as a function of temperature and volume. There are two main types of calculation of the thermal pressure coefficient: one is based on the virial theorem and its derivatives; the other on the Van der Waals equation and its derivatives. Thermal pressure at high temperature As mentioned above, $\Delta P_{th} = \int_{T_0}^{T} \alpha K_T \, dT$ is one of the most common formulations of the thermal pressure. Both $\alpha$ and $K_T$ are affected by temperature changes, but the values of $\alpha$ and $K_T$ of a solid are much less sensitive to temperature change above its Debye temperature. Thus, the thermal pressure of a solid due to a moderate temperature change above the Debye temperature can be approximated by assuming constant values of $\alpha$ and $K_T$: $$\Delta P_{th} \approx \alpha K_T (T - T_0).$$ On the contrary, in the paper, the authors demonstrated that, at ambient pressure, the pressure of Au and MgO predicted from a constant value of $\alpha K_T$ deviates from the experimental data, and the higher the temperature, the greater the deviation. In addition, the authors suggested a thermal expansion model to replace the thermal pressure model. Thermal pressure in a crystal The thermal pressure of a crystal defines how the unit-cell parameters change as a function of pressure and temperature. Therefore, it also controls how the cell parameters change along an isochore, namely as a function of $\Delta P_{th}$. Usually, Mie–Grüneisen–Debye and other quasi-harmonic approximation (QHA) based state functions are used to estimate volumes and densities of mineral phases in diverse applications such as thermodynamic models, deep-Earth geophysical models and models of other planetary bodies. In the case of isotropic (or approximately isotropic) thermal pressure, the unit-cell parameters remain constant along the isochore and the QHA is valid.
But when the thermal pressure is anisotropic, the unit-cell parameters change, so the frequencies of vibrational modes also change even at constant volume, and the QHA is no longer valid. The combined effect of a change in pressure and temperature is described by the strain tensor $\varepsilon$: $$\varepsilon = \alpha \, \Delta T - \beta \, \Delta P,$$ where $\alpha$ is the volume thermal expansion tensor and $\beta$ is the compressibility tensor. The line in the $P$–$T$ space which indicates that the strain is constant in a particular direction within the crystal is defined by $$\left(\frac{\partial P}{\partial T}\right)_{\varepsilon} = \frac{\alpha}{\beta},$$ which is an equivalent definition of the isotropic degree of thermal pressure. See also Isochoric process Pressure Hydrostatic equilibrium References Thermodynamics Fluid mechanics Pressure
Thermal pressure
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
880
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Pressure", "Civil engineering", "Thermodynamics", "Wikipedia categories named after physical quantities", "Fluid mechanics", "Dynamical systems" ]
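As a numerical illustration of the constant-αK_T approximation ΔP_th ≈ αK_T(T − T₀) from the entry above: the material values below are rough, MgO-like magnitudes chosen for illustration only, not figures from the article.

```python
# Illustrative, MgO-like magnitudes (assumed, not from the article):
alpha = 3.0e-5          # volume thermal expansion, 1/K
K_T = 160.0e9           # isothermal bulk modulus, Pa
T0, T = 300.0, 1300.0   # reference and target temperatures, K

# Constant-alpha*K_T approximation of the thermal pressure along an isochore.
dP_th = alpha * K_T * (T - T0)   # Pa
print(dP_th / 1e9, "GPa")        # ~4.8 GPa
```

Thermal pressures of a few GPa over a ~1000 K interval are typical of stiff oxides, which is why the term matters in deep-Earth equations of state.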
64,491,223
https://en.wikipedia.org/wiki/Hydroxyethylethylenediaminetriacetic%20acid
Hydroxyethylethylenediaminetriacetic acid, also known as HEDTA, is a tricarboxylic acid and an amine. It is a hexadentate ligand. It can chelate or form salts with many metals. References Tricarboxylic acids Chelating agents
Hydroxyethylethylenediaminetriacetic acid
[ "Chemistry" ]
62
[ "Chelating agents", "Process chemicals" ]
64,492,718
https://en.wikipedia.org/wiki/Lochnericine
Lochnericine is a major monoterpene indole alkaloid present in the roots of Catharanthus roseus. It is also present in Tabernaemontana divaricata. Chemistry Synthesis Lochnericine is formed by stereoselective epoxidation of carbons 6 and 7 of tabersonine. See also Pericine Pervine Tabersonine Vincamine References Indole alkaloids Alkaloids found in Apocynaceae
Lochnericine
[ "Chemistry" ]
101
[ "Alkaloids by chemical classification", "Indole alkaloids" ]
64,492,772
https://en.wikipedia.org/wiki/T.%20G.%20Sitharam
T. G. Sitharam is a civil engineer, a professor at IISc Bangalore (on lien) and a former director of IIT Guwahati. He has been serving as Chairman of the All India Council for Technical Education (AICTE) since 1 December 2022. He is known for his work in the fields of rock mechanics, rock engineering and geotechnical earthquake engineering. He is an elected fellow of the Indian Geotechnical Society, the Institution of Engineers (India) and the American Society of Civil Engineers. He is currently serving as the editor-in-chief of Springer Transactions in Civil and Environmental Engineering and several other journals. Awards and honors S.P. Research Award (SAARC) (1998) Sir C.V. Raman State Award for Young Scientists, Government of Karnataka (2002) Prof. Gopal Ranjan Research Award (2014) The Amulya and Vimala Reddy Lecture Award (2014) IGS Kueckelmann Award (2015) Selected patents System and method for fracking of shale rock formation Selected articles Books References Living people Indian Institute of Science alumni University of Waterloo alumni Academic staff of the Indian Institute of Science Indian Institute of Technology directors Indian civil engineers 20th-century Indian engineers American Society of Civil Engineers 1960 births
T. G. Sitharam
[ "Engineering" ]
260
[ "American Society of Civil Engineers", "Civil engineering organizations" ]
64,492,791
https://en.wikipedia.org/wiki/National%20Space%20Science%20Agency
The National Space Science Agency (NSSA) (Arabic: الهيئة الوطنية لعلوم الفضاء) is the Bahraini government entity responsible for the country's space science program. It was established by Royal Decree No. 11 of 2014 and works under the supervision of the Supreme Defense Council. Activities NSSA's current focus is on promoting space science, technology and applications in the Kingdom of Bahrain through many community events; building capacity in the fields of satellite manufacturing, satellite tracking, control and monitoring, and earth observation data and image processing and analysis to fulfil stakeholders' national needs; and creating a new "space" sector in the Kingdom. The NSSA offers three services: providing high-resolution images of the Kingdom of Bahrain in different sizes and formats; processing and analysing satellite imagery and data to generate useful information to fulfil stakeholders' needs; and providing satellite images and data for the Kingdom of Bahrain in different resolutions, bands and selected times. The NSSA has signed many memoranda of understanding with regional and international space agencies, including the Russian state space corporation Roscosmos, the United Arab Emirates Space Agency (UAESA), the Italian Space Agency (ASI), the Indian Space Research Organisation (ISRO), the Mohammed bin Rashid Space Centre (MBRSC) and the UK Space Agency (UKSA). References External links NSSA home page Official NSSA Twitter page Official NSSA Instagram page Official NSSA Youtube page Space agencies Space programs by country Government agencies of Bahrain
National Space Science Agency
[ "Engineering" ]
317
[ "Space programs", "Space programs by country" ]
64,492,877
https://en.wikipedia.org/wiki/19-Epivoacristine
19-Epivoacristine is an indole alkaloid found in different species of Tabernaemontana, such as Tabernaemontana dichotoma, as well as in Peschiera affinis. It is also known as 20-epivoacangarine and 19-epi-voacangarine. Potential pharmacology 19-Epivoacristine may be a selective acetylcholinesterase (AChE) inhibitor in vitro. Chemistry 19-Epivoacristine can be prepared by potassium borohydride reduction of voacryptine. See also Voacristine Voacamine Apparicine Lochnericine References Indole alkaloids Alkaloids found in Apocynaceae Heterocyclic compounds with 5 rings Methyl esters Methoxy compounds
19-Epivoacristine
[ "Chemistry" ]
176
[ "Alkaloids by chemical classification", "Indole alkaloids" ]
64,493,304
https://en.wikipedia.org/wiki/Trumpler%2027-1
Trumpler 27-1 is a red supergiant star that is a member of the massive, possible open cluster Trumpler 27, where a blue giant star, a yellow supergiant star and two Wolf–Rayet stars are also located. Observation history Trumpler 27-1 was discovered and catalogued when the open cluster (not confirmed at the time) was first identified in the late 20th century. It has since remained largely unobserved, being featured in the Gaia Catalogue and other pieces of literature. Physical properties Trumpler 27-1 is among the largest stars known, with a radius of over 1,360 solar radii. It is also 360,000 times more luminous than the Sun. The star's spectral type is M0Ia, meaning it possesses a cool temperature of below 3,800 K. So far, Trumpler 27-1 is the only identified red supergiant in the open cluster Trumpler 27. Location Trumpler 27-1, and the open cluster in which it is located, lies in the constellation of Scorpius. See also Westerlund 1-26 RSGC1 References M-type supergiants Scorpius Suspected variables Durchmusterung objects TIC objects
Trumpler 27-1
[ "Astronomy" ]
254
[ "Scorpius", "Constellations" ]
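The radius and luminosity quoted for Trumpler 27-1 can be checked for rough consistency with its sub-3,800 K temperature via the Stefan–Boltzmann relation L = 4πR²σT⁴, i.e. T = T_⊙ (L/L_⊙)^{1/4} (R/R_⊙)^{-1/2}; the solar effective temperature of 5772 K is an assumed reference value, not from the entry:

```python
T_SUN = 5772.0  # K, nominal solar effective temperature (assumed reference)

def effective_temperature(L_solar: float, R_solar: float) -> float:
    """Effective temperature implied by L = 4*pi*R^2*sigma*T^4, inputs in solar units."""
    return T_SUN * L_solar ** 0.25 / R_solar ** 0.5

# Values quoted in the entry above: L ~ 360,000 L_sun, R ~ 1,360 R_sun.
T = effective_temperature(360_000, 1_360)
print(round(T))  # ~3830 K, roughly matching the quoted "below 3,800 K"
```

The ~1% gap between this estimate and the quoted ceiling is well within the uncertainties of such stellar parameters.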
64,493,307
https://en.wikipedia.org/wiki/Alicante%207
Alicante 7, also known as RSGC5, (Red Supergiant Cluster 5) is an open cluster rich in red supergiants found in the Scutum-Crux Arm of the Milky Way Galaxy, along with RSGC1, Stephenson 2, RSGC3, Alicante 8, and Alicante 10. Alicante 7 contains 7 red supergiants, making it one of the most massive open clusters known. Notes References Scutum (constellation) Open clusters Scutum–Centaurus Arm
Alicante 7
[ "Astronomy" ]
111
[ "Scutum (constellation)", "Constellations" ]
64,494,209
https://en.wikipedia.org/wiki/Henk%20Tennekes%20%28toxicologist%29
Henk Tennekes (21 November 1950 – 7 July 2020) was a Dutch toxicologist. Tennekes worked as a doctor and researcher at the Philipps-Universität Marburg; the German Cancer Research Center (Deutsches Krebsforschungszentrum) in Heidelberg; Sandoz in Muttenz, Switzerland; and the Research and Consulting Company in Itingen. From 1992 he was an independent researcher. Tennekes was born in Zutphen. He studied between 1968 and 1974 and earned his PhD in 1979 at Wageningen University and Research with the thesis "The Relationship between Microsomal Enzyme Induction and Liver Tumour Formation". In 2010 Tennekes wrote the book "A Disaster in the Making" about the dangers of neonicotinoids, a new generation of pesticides, to insects and bees in particular. He discovered that Bayer had researched the effects of neonicotinoids on flies back in 1991, and that the effect was irreversible. Bayer nowadays claims the contrary. Initially, his vision met strong opposition as his findings were criticized, but follow-up research partially confirmed his warning. Since the end of 2018, the use of three neonicotinoids (clothianidin, thiamethoxam and imidacloprid) has been banned in the European Union. Tennekes, who used to work as a freelance researcher for chemical companies, found himself blacklisted and lost all his clients, but that did not deter him, because he considered the work his moral duty. Tennekes suffered from a rare pulmonary disease and opted for euthanasia. He died on 7 July 2020, aged 69. References 1950 births 2020 deaths 2020 suicides Dutch biologists Deaths by euthanasia Toxicologists People from Zutphen Drug-related suicides in the Netherlands
Henk Tennekes (toxicologist)
[ "Environmental_science" ]
368
[ "Toxicologists", "Toxicology" ]
64,495,795
https://en.wikipedia.org/wiki/Efmoroctocog%20alfa
Efmoroctocog alfa, sold under the brand name Elocta among others, is a medication for the treatment and prophylaxis of bleeding in people with hemophilia A. Efmoroctocog alfa is a recombinant human coagulation factor VIII, Fc fusion protein (rFVIIIFc). It is produced by recombinant DNA technology in a human embryonic kidney (HEK) cell line. It was approved for medical use in the United States in June 2014, and for use in the European Union in November 2015. Medical uses In the United States, efmoroctocog alfa (Eloctate) is indicated for adults and children with Hemophilia A for (1) on-demand treatment and control of bleeding episodes, (2) perioperative management, and (3) routine prophylaxis to prevent or reduce the frequency of bleeding episodes. In the European Union, efmoroctocog alfa (Elocta) is indicated for treatment and prophylaxis of bleeding in people with haemophilia A. References Recombinant proteins
Efmoroctocog alfa
[ "Biology" ]
238
[ "Recombinant proteins", "Biotechnology products" ]
64,496,462
https://en.wikipedia.org/wiki/Time%27s%20Arrow%20and%20Archimedes%27%20Point
Time's Arrow and Archimedes' Point: New Directions for the Physics of Time is a 1996 book by Huw Price, on the physics and philosophy of the arrow of time. It explores the problem of the direction of time, looking at issues in thermodynamics, cosmology, electromagnetism, and quantum mechanics. Price argues that it is fruitful to think about time from a hypothetical Archimedean point – a viewpoint outside of time. In later chapters, Price argues that retrocausality can resolve many of the philosophical issues facing quantum mechanics and along these lines proposes an interpretation involving what he calls 'advanced action'. Summary Chapter 1 – The View From Nowhen Price briefly introduces the stock philosophical questions about time, starting with Saint Augustine's observations in Confessions, highlighting the questions 'What is the difference between past and future?', 'Could the future affect the past?' and 'What gives time its direction?'. He then introduces the block universe view where the 'present' is regarded as a subjective notion, which changes from observer to observer, in the same way that the concept of 'here' changes depending on where the observer is. The block universe view rejects the notion that there exists an objective present and grants that the past, present and future are all equally real. He then surveys reasons to favour this view and common objections to it. Price then introduces the idea of viewing the block universe from an Archimedean point from outside of time, which is the view that is taken in the rest of the book. Finally, Price introduces two problems regarding the arrow of time, which he calls the taxonomy problem and the genealogy problem. The taxonomy problem is the problem of characterizing and finding the relationship between different arrows of time (e.g. the thermodynamic and cosmological arrows of time). The genealogy problem is to explain why asymmetries (i.e. 
arrows) exist in time, given that the laws of physics seem to be reversible (i.e. symmetric) in time. Chapter 2 – "More apt to be Lost than Got": The Lessons of the Second Law Covers the thermodynamic arrow of time, arising from the second law of thermodynamics. Discusses Ludwig Boltzmann and his development of the second law as a statistical law. The chapter also discusses Boltzmann's H-theorem and Loschmidt's paradox. Price takes a time-symmetric view and comes to the conclusion that the mystery of the second law is not the question of why entropy increases, but why entropy was low at the beginning of the universe. Taking a time-symmetric view, he then speculates that entropy may decrease again, reaching a minimum at the end of the universe. Chapter 3 – New Light on the Arrow of Radiation This chapter discusses the apparent asymmetry of radiation. Namely, radiation is often observed spreading outwards from a source, but coherent radiation is not observed converging in a sink. Price criticizes explanations of this phenomenon from Karl Popper, Paul Davies, and Dieter Zeh. The Wheeler–Feynman absorber theory is discussed and Price concludes that the arrow of time from radiation is a more general case of the thermodynamic arrow of time. Chapter 4 – Arrows and Errors in Contemporary Cosmology In this chapter Price tackles the problem of why entropy was low at the big bang and whether or not we should expect entropy to be low at the other temporal extreme of the universe. He introduces the Gold Universe model, which suggests that the universe will begin and end in a low entropy state. Explanations from Stephen Hawking and Paul Davies of the low entropy big bang are scrutinized. Price concludes that both Hawking and Davies apply a 'temporal double standard', with different standards being applied towards the past and the future, and that their arguments are therefore flawed. The Gold Universe view is defended and some of its implications are explored. 
Chapter 5 – Innocence and Symmetry in Microphysics Price explores what he calls 'The Principle of Independence of Incoming Influences', which is the idea that systems are uncorrelated before they interact, but become correlated after interaction. He distinguishes two versions of this claim. The first is the macroscopic version which Price claims is associated with the low entropy past. The second is the microscopic version, which Price terms μInnocence. Price argues that, while the low entropy past gives us some reason to accept the macroscopic version, there is less reason to accept μInnocence. It is argued that, while μInnocence is intuitively plausible, it arises from a temporal double standard with respect to causality. Chapter 6 – In Search of the Third Arrow This chapter explores the idea of causation. Price argues that ideas about causation exert greater influence on physicists than is generally acknowledged. He explores the argument that the temporal asymmetry of causation comes from physical asymmetry, but ultimately finds this argument unconvincing, especially on the microscopic level. He concludes the chapter by claiming that the most plausible explanation is that the apparent asymmetry of causation is anthropocentric. That is: causation is not asymmetric in time, but we view it as being so because we (human beings) are ourselves thermodynamically asymmetric in time. Chapter 7 – Convention Objectified and the Past Unlocked The chapter introduces the 'conventionalist view' of causation – that the direction of causation is an anthropocentric convention – and addresses some common criticisms of the view. The 'bilking argument' against retrocausality is introduced, and Michael Dummett's strategy for avoiding paradoxes in a world with retrocausality is examined. Chapter 8 – Einstein's Issue: The Puzzle of Contemporary Quantum Theory This chapter is a self-contained introduction to quantum mechanics. It introduces the EPR paradox and the measurement problem. 
Bell's theorem and the GHZ experiment are then introduced in the context of hidden-variables interpretations of quantum mechanics. De Broglie–Bohm theory, the many-worlds interpretation, the many-minds interpretation and the quantum decoherence approach are all examined, though Price finds them all ultimately unconvincing. He points out that Bell's theorem relies on the assumption that, when a measurement basis is chosen, this choice is independent of the state of the quantum system being measured. This, he notes, would not necessarily be the case in a world with advanced action. Chapter 9 – The Case for Advanced Action Price notes that the independence assumption in Bell's theorem can be relaxed in two ways: the first being that the measurement basis and the state of the quantum system are correlated through a common cause in the past, and the second being what Price calls 'advanced action' – a 'common cause' in the future. He argues against superdeterminism, the idea that a quantum system and measurement apparatus are correlated due to a common cause in the past. In contrast, he suggests that the 'advanced action' interpretation is elegant and appealing and fits in better with his 'Archimedean viewpoint'. He briefly discusses the relationship between advanced action and free will. Release The book was published by Oxford University Press on 9 October 1997. It was initially released in hardback, but is now available in hardback, paperback and ebook formats. Reception Time's Arrow and Archimedes' Point was generally well received. Many reviewers found Price's arguments stimulating and praised his explanations of the issues. However, many took issue with some of his specific arguments. Joel Lebowitz gave the book a mixed review for Physics Today, where he called Price's arguments regarding backward causation "unconvincing", but praised the section on quantum mechanics, writing "his discussion ... 
of the Bohr-Einstein 'debate' about the completeness of the quantum description of reality is better than much of the physics literature". Peter Coveney gave the book a mixed review for the New Scientist, criticizing Price's treatment of non-equilibrium statistical mechanics, but concluding by saying "[a]lthough I didn't find many of the arguments convincing, Price's book is a useful addition to the literature on time, particularly as it reveals the influence of modern science on the way a philosopher thinks. But given its restricted and idiosyncratic character, this book should be read only in conjunction with more broadly based works." John D. Barrow reviewed the book in Nature, strongly criticizing the chapter on the cosmological arrow of time but writing "the author has done physicists a great service in laying out so clearly and critically the nature of the various time-asymmetry problems of physics". Craig Callender gave the book a detailed, positive review for The British Journal for the Philosophy of Science, calling it "exceptionally readable and entertaining" as well as "a highly original and important contribution to the philosophy and physics of time". Gordon Belot reviewed the book for The Philosophical Review, writing "[t]his is a fertile and fascinating area, and Price's book provides an exciting entree, even if it does not provide all the answers". Carlo Rovelli chose the book as one of his favourite books on the subject of time, calling Huw Price "one of the best living philosophers" and saying that it "teaches us an important lesson: we are so used to think time as naturally oriented that we instinctively think that the future is determined by the past even if we try not to" References Philosophy of time Philosophy of physics 1996 non-fiction books
Time's Arrow and Archimedes' Point
[ "Physics" ]
1,986
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Physical quantities", "Time", "Philosophy of time", "Spacetime" ]
64,497,150
https://en.wikipedia.org/wiki/Techman
Techman Robot Inc., formerly a business division of Quanta Storage Inc., is an independent company under Quanta Computer established in 2015. The company is most recognized for its cobot with a built-in vision system: the TM Robot series. This robot series previously won the COMPUTEX D&I Gold Award. Techman Robot Inc. is also among the first companies to receive the TARS certification. Techman Robot Inc. is headquartered in Taiwan and has overseas branches in Shanghai, Shenzhen, Chongqing, Busan, and Alblasserdam. The company also partners with distributors in the United States, Europe, China, Japan, South Korea, and Southeast Asia. Overview According to Nikkei Asia they are "a leader in the field of collaborative robots." Founded as a subsidiary of Quanta in 2016, Techman is based in Taoyuan's Hwa Ya Technology Park. Quanta head Barry Lam has a mobile Techman robot in his office which serves refreshments to guests. History Techman Robot Inc., a former business division of Quanta Storage Inc. (TWSE 6188), was founded as an independent subsidiary in 2015. Mr. Shi-chi Ho, General Manager of Quanta Storage Inc. at the time, established a robotics laboratory as a business division for the company in 2012. The laboratory developed the first collaborative robot (cobot) with a built-in vision system – the TM5 – in 2014. The TM5 was subsequently unveiled at the International Robot Exhibition (iREX) held in Tokyo, Japan, in 2015. The first commercial TM5 was shipped at the end of 2016, and the company became the world's second-largest cobot brand by 2018. Techman Robot signed an MOU with automation giant Omron in Kyoto on 11 May 2018, committing to the promotion of cobots to various industries worldwide. At the end of 2019, the TM AI+ software solution was introduced to expand cobot applications from pick-and-place to product inspection. The TM AI+ has now been widely incorporated in the semiconductor and logistics industries. 
By 2019 it was the world's second largest manufacturer of cobots after Universal Robots. As of 2021 they were still the second largest manufacturer of cobots, with 10% of the market share to Universal Robots' 50%. In 2021 Omron and Techman announced that Omron would be acquiring a 10% stake in Techman. The value of the investment was undisclosed. Leadership Haw Chen is the CEO of Techman. Collaborations Techman is working with Vincennes University and Telamon Robotics to develop a cobot training curriculum. Products TM Robot Series TM Operator Series TM Smart Factory Series Industrial Applications Manufacturing TM Robot is suitable for manufacturing a range of products, including automobiles, electronics, semiconductors, machinery, home appliances, furniture, toys, and apparel. It can also be applied to streamline the production lines of metal and plastic products, parts, and accessories. Catering TM Robots can be applied in restaurants and kitchens to streamline catering processes. Warehousing TM Robots make up for the lack of transporters and movers in the logistics and warehousing industries. They can be equipped on AGVs or AMRs to facilitate logistical operations. Entertainment TM Robots can be used to carry heavy camera equipment and can be manoeuvred through narrow spaces, providing the freedom and flexibility for videographers to capture unique or fast-paced shots. Education Students and researchers can use TM cobots to learn about robotics and programming. TM cobots support manual programming instruction for students to move the arm manually or remotely while recording or saving relevant coordinates, which are used at a later stage to recreate certain motions. Major Milestones 2012~2013 Robot Laboratory is established as a new business unit at Quanta Storage Inc. First SCARA Robot and Dual-Arm SCARA are developed. 2014 Robot Business Division is formally established. 2015 The first TM5 collaborative robot is born. 
TM5 is unveiled at the 2015 iREX exhibition in Tokyo. Techman Electronics Inc. is established. 2016 Techman Electronics is formally renamed Techman Robot Inc. TM5 is officially released to the market. Techman Robot attends the TAIROS exhibition and CIIF in Shanghai. 2017 Techman Robot Inc. attends the Hannover Messe exhibition and expands into the European market. 2018 Techman Robot Inc. becomes the second-largest cobot brand in the world. Techman Robot Inc. attends the IMTS exhibition and expands into the US market. TM5 is awarded the 2018 Red Dot Design Award, iF Design Award, and 2018 Taiwan Excellence Award. TM12, TM14, and TM Manager are officially announced. Techman Robot Inc. signs an MOU with Omron. 2019 TM Palletizing Operator is officially announced. Techman Robot (Shanghai) subsidiary company is established. 2020 TM AI+ is officially announced. Overseas branch offices are established in South Korea and the Netherlands. TM12 is awarded the 2020 Taiwan Excellence Award. Awards International Certification and Patents Techman Robot Inc. has secured patents in Taiwan, the United States, and China. Its cobots have received TARS, ISO10218-1, and ISO/TS15066 certification, and its factories comply with ISO9001 and ISO14001 specifications. References Taiwanese companies established in 2016 Companies based in Taoyuan City Robotics companies Engineering companies of Taiwan Industrial machine manufacturers Industrial robotics companies Taiwanese brands
Techman
[ "Engineering" ]
1,123
[ "Industrial machine manufacturers", "Industrial machinery" ]
64,497,665
https://en.wikipedia.org/wiki/List%20of%20lost%20buildings%20and%20structures%20in%20Hong%20Kong
The following list is of buildings and structures in Hong Kong that have been demolished or destroyed. Buildings are arranged by the historical period in which they were constructed. First British era (1841-1945) Second British era (1945-1997) See also Architecture of Hong Kong Heritage conservation in Hong Kong List of buildings and structures in Hong Kong List of the oldest buildings and structures in Hong Kong References Culture of Hong Kong History of Hong Kong Architecture by city Architecture in Hong Kong
List of lost buildings and structures in Hong Kong
[ "Engineering" ]
94
[ "Architecture by city", "Architecture" ]
64,498,283
https://en.wikipedia.org/wiki/Ethics%20of%20quantification
Ethics of quantification is the study of the ethical issues associated with visible or invisible forms of quantification. These could include algorithms, metrics/indicators, statistical and mathematical modelling, as noted in a review of various aspects of sociology of quantification. According to Espeland and Stevens, an ethics of quantification would naturally descend from a sociology of quantification, especially at an age where democracy, merit, participation, accountability and even 'fairness' are assumed to be best discovered and appreciated via numbers. In his classic work Trust in Numbers, Theodore M. Porter notes how numbers meet a demand for quantified objectivity, and may for this reason be used by bureaucracies or institutions to gain legitimacy and epistemic authority. For Andy Stirling of the STEPS Centre at Sussex University, there is a rhetorical element around concepts such as 'expected utility', 'decision theory', 'life cycle assessment', 'ecosystem services', 'sound scientific decisions' and 'evidence-based policy'. The instrumental application of these techniques and their use of quantification to deliver an impression of accuracy may raise ethical concerns. For Sheila Jasanoff, these technologies of quantification can be labeled as 'technologies of hubris', whose function is to reassure the public while keeping the wheels of science and industry turning. The downside of the technologies of hubris is that they may generate overconfidence thanks to the appearance of exhaustivity; they can preempt a political discussion by transforming a political problem into a technical one; and they remain fundamentally limited in processing what takes place outside their restricted range of assumptions. Jasanoff contrasts technologies of hubris with 'technologies of humility', which admit the existence of ambiguity, indeterminacy and complexity, and strive to bring to the surface the ethical nature of problems. 
Technologies of humility are also sensitive to the need to alleviate known causes of people's vulnerability, to pay attention to the distribution of benefits and risks, and to identify those factors and strategies which may promote or inhibit social learning. For Sally Engle Merry, studying indicators of human rights, gender violence and sex trafficking, quantification is a technology of control, but whether it is reformist or authoritarian depends on who has harnessed its power and for what purpose. She notes that, in order to make indicators less misleading and distorting, some principles should be followed: democratize the production of indicators; develop in parallel qualitative research to verify the validity of assumptions; keep the indicators simple; test or adopt multiple framings; admit the limits of the various measures. The field of algorithms and artificial intelligence is the regime of quantification where the discussion about ethics is most advanced; see e.g. Weapons of Math Destruction by Cathy O'Neil. While objectivity and efficiency are some positive properties associated with the use of algorithms, ethical issues are posed by these tools coming in the form of black boxes. Thus algorithms have the power to act upon data and make decisions, but they are to a large extent beyond query. The existence of a surveillance capitalism is the theme of Shoshana Zuboff's 2019 book. A more militant reading of the dangers posed by artificial intelligence is Resisting AI: An Anti-fascist Approach to Artificial Intelligence by Dan McQuillan. See also Webinar at Centre for Science and Technology Studies (CWTS), Leiden University, February 5, 2021: 'Ethics of quantification' Video. 
Symposium on ethics of quantification, Bergen (NO), December 2019 Research workshop on Ethics of quantification, Bergen (NO), December 2019 Sociology of quantification Society for the Social Studies of Quantification - SSSQ Special issue of Humanities and Social Sciences Communications: Ethics of Quantification: Big Data and Governing through Numbers, July 2020 Ethics in mathematics References Quantification (science) Ethics of science and technology
Ethics of quantification
[ "Mathematics", "Technology" ]
794
[ "Quantity", "Quantification (science)", "Ethics of science and technology" ]
64,498,554
https://en.wikipedia.org/wiki/HAT-P-21
HAT-P-21 is a G-type main-sequence star about 927 light-years away. The star has a metal abundance similar to the Sun's. A survey in 2015 failed to detect any stellar companions. The star is rotating rapidly, having been spun up by the tides of a giant planet on a close orbit. Naming In 2019, the HAT-P-21 star received the proper name Mazalaai, while its planet HAT-P-21b received the name Bambaruush, at an international NameExoWorlds contest. These names refer to the Mongolian name for the endangered Gobi bear subspecies, and the Mongolian term for 'bear cub', respectively. Planetary system In 2010 a transiting hot super-Jovian planet on a moderately eccentric orbit was detected. Its equilibrium temperature is 1283 K. A transit-timing variation survey in 2011 could neither rule out nor confirm the existence of additional planets in the system until the orbital parameters of HAT-P-21b are known with better precision. The planetary orbit is likely aligned with the equatorial plane of the star, with a measured misalignment of 25 degrees. References Ursa Major G-type main-sequence stars Planetary systems with one confirmed planet Planetary transit variables J11250598+4101406
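An equilibrium temperature like the 1283 K quoted above follows from the standard blackbody formula T_eq = T_* · sqrt(R_*/(2a)) · (1 − A)^(1/4). The sketch below implements that formula generically; it is checked against Earth-like numbers rather than HAT-P-21's measured parameters, which are not given in this article.

```python
import math

def equilibrium_temperature(t_star, a_over_rstar, albedo=0.0):
    """Blackbody equilibrium temperature of a planet.

    t_star       -- stellar effective temperature in kelvin
    a_over_rstar -- orbital semi-major axis in units of the stellar radius
    albedo       -- Bond albedo (0 = perfect absorber)
    """
    return t_star * math.sqrt(1.0 / (2.0 * a_over_rstar)) * (1.0 - albedo) ** 0.25

# Sanity check with Sun/Earth numbers (T_sun ≈ 5778 K, a ≈ 215 R_sun):
# gives roughly 279 K, close to Earth's actual equilibrium temperature.
print(round(equilibrium_temperature(5778, 215)))
```

Plugging in the star's published effective temperature and the planet's scaled semi-major axis would let a reader check the quoted figure for HAT-P-21b.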
HAT-P-21
[ "Astronomy" ]
262
[ "Ursa Major", "Constellations" ]
68,839,644
https://en.wikipedia.org/wiki/Jennifer%20Schomaker
Jennifer Schomaker is an American chemist who is a professor at the University of Wisconsin–Madison. Her research considers the total synthesis of natural and unnatural products. She was selected as an American Chemical Society Arthur C. Cope Scholar Awardee in 2021. Early life and education Schomaker grew up in Michigan. She was an undergraduate student in chemistry at Saginaw Valley State University. During her studies she worked at the Dow Chemical Company, where she developed biocatalytic methods. She moved to Central Michigan University, where she completed a master's degree under the supervision of Thomas Delia. Her master's research involved the synthesis of aniline derivatives of trihalopyrimidines. She completed her master's research whilst raising two young daughters. After graduating, Schomaker moved to Michigan State University, where she joined the research group of Babak Borhan. For her doctorate she studied the synthesis of (+)-tanikolide and haterumalide NA. Research and career Schomaker moved to the University of California, Berkeley as a postdoctoral researcher with Robert G. Bergman. Her postdoctoral research identified new modes of reactivity in cobalt dinitrosoalkane complexes. In 2009, Schomaker joined the faculty at the University of Wisconsin–Madison. Her initial research involved densely functionalized, stereochemically complex amine-containing natural products. At UW–Madison, Schomaker focused on the synthesis of natural and unnatural products, as well as the design of catalysts for tunable, chemo-, regio-, and enantioselective C-H amination reactions. 
Awards and honors 2013 Sloan Research Fellow 2013 National Science Foundation CAREER Award 2013 Michigan State University Distinguished Alumni Lecturer 2013 Journal of Physical Organic Chemistry Early Excellence Profile 2014 American Chemical Society Women's Chemists Committee Rising Star Award 2015 Michigan State University Recent Alumni Award 2016 Kavli Fellow 2019 University of California, Berkeley Somojai Miller Visiting Professor 2021 Arthur C. Cope Scholar Award Selected publications References Living people American women academics American women chemists Year of birth missing (living people) Michigan State University alumni Central Michigan University alumni Saginaw Valley State University alumni University of California, Berkeley faculty University of Wisconsin–Madison faculty 21st-century American women American organic chemists Chemists from Michigan
Jennifer Schomaker
[ "Chemistry" ]
469
[ "Organic chemists", "American organic chemists" ]
68,840,281
https://en.wikipedia.org/wiki/Grant%20Robert%20Sutherland
Grant Robert Sutherland (born 2 June 1945) is a retired Australian human geneticist and cytogeneticist. He was the Director, Department of Cytogenetics and Molecular Genetics, Adelaide Women's and Children's Hospital for 27 years (1975-2002), then became the Foundation Research Fellow there until 2007. He is an Emeritus Professor in the Departments of Paediatrics and Genetics at the University of Adelaide. He developed methods to allow the reliable observation of fragile sites on chromosomes. These studies culminated in the recognition of fragile X syndrome as the most common familial form of intellectual impairment, allowing carriers to be identified and improving prenatal diagnosis. Clinically, his book on genetic counselling for chromosome abnormalities has become the standard work in this area. He is a past President of the Human Genetics Society of Australasia and of the Human Genome Organisation. Early life and education Sutherland was born in Bairnsdale, Victoria, on 2 June 1945. His father had served as a soldier in World War II and qualified for the soldier settlement farm scheme; as such, when Grant was 12, the family moved to a dairy farm at Numurkah. As a teenager, he bred budgerigars, which he credits for starting his interest in genetics. After completing his schooling at Numurkah High School, he left home and moved to Melbourne. He studied at the University of Melbourne, graduating in 1967 with a BSc major in genetics and a sub-major in zoology. During vacations, he worked at the CSIRO as a technician, in the team that was developing a vaccine for contagious bovine pleuropneumonia. Still at the University of Melbourne, he went on to graduate with an MSc in 1971. 
He undertook his doctoral studies at the University of Edinburgh, graduating with a PhD in 1974 and a DSc in 1984, presenting the thesis Studies in human genetics and cytogenetics. Career After graduating with his BSc in 1967, Sutherland started work as a cytogeneticist in the Chromosome Laboratory of the Mental Health Authority, Melbourne. In 1971, he became the Cytogeneticist-in-Charge in the Department of Pathology, Royal Hospital for Sick Children, Edinburgh, a role he held until 1974. After graduating with his PhD, in 1975, Sutherland took up the role of Director of the Department of Cytogenetics and Molecular Genetics at the Women's and Children's Hospital (WCH) in Adelaide. In 2002, he moved to the role of Foundation Research Fellow at WCH, a position which he held until 2007. In 1990, he also took on the role of Affiliate Professor in the Departments of Paediatrics and Genetics, University of Adelaide, and became Emeritus Professor in 2017. Research While at WCH, Sutherland's principal focus was on chromosomal fragile sites. Large family studies of genetic diseases revealed unexpected patterns, where some men were "carriers" who did not display the disease themselves but passed it on to their daughters. This was contrary to conventional genetic wisdom: "There was no way a male could pass on an X-linked disease without having it himself, or so we thought," Sutherland said. "We'd go to medical conferences with photos of these men, photos of their businesses and copies of their university degrees to show the sceptics they were normal. They didn't believe that a male could have this genetic mutation and be OK." The explanation was in the DNA, which Sutherland commenced mapping in detail. He found that the fragile X fault behaved differently to most genetic mutations; it builds up as it replicates through generations until it reaches a threshold where the full-blown syndrome is triggered. 
Such a disease mechanism, where genetic abnormalities accumulate until they reach a critical level, had not been observed before. He developed techniques to observe fragile sites, which allowed him to specify critical DNA fragments on the fragile X chromosome and led him to identify fragile X syndrome as the most common cause of hereditary intellectual disability; in Australia it affects about 60 children each year. These findings allowed him to improve diagnostic tools and techniques, making identification of carriers more reliable and ultimately improving prenatal diagnosis. As part of the Human Genome Project, his group mapped much of chromosome 16 and carried out positional cloning of genes on this chromosome. In 1998, Sutherland and Associate Professor Eric Haan discovered Sutherland–Haan Syndrome, which is another genetic disease that causes intellectual and physical problems among males. In 2004, they identified the specific genetic sequences that cause the condition. The discovery means that future generations who are at risk will be able to know if they are carriers and to test in utero for the disease. The proposal of prenatal testing to diagnose genetic diseases has sometimes been controversial for Sutherland, because it raises the question of what to do if problems are detected. Service to professional organisations Sutherland was president of the Human Genome Organization (HUGO) from 1996 to 1997. He was also involved, in 1977, in establishing the professional body which grew into the Human Genetics Society of Australasia, and served as its president from 1989 to 1991. Recognition In the 1998 Australia Day Honours, Sutherland was appointed a Companion of the Order of Australia (AC) for service to science and in 2001, he was awarded a Centenary Medal. 
Other significant awards include: 1983 - Fulbright Senior Scholar 1996 - Julian Wells Lecture and medal, Australian Genome Conference Board 1996 1996 - Orator, Errol Solomon Meyers Memorial Lecture, The University of Queensland Medical Society 1998 - Australia Prize (later renamed the Prime Minister's Prize) for Science (joint winner with three others in the field of genetics) 2001 - Macfarlane Burnet Medal and Lecture, Australian Academy of Science 2001 - Ramaciotti Medal for Excellence in Biomedical Research 2004 - Included in "The Magnificent Seventeen, Giants of Australian research" having the most highly cited papers across all fields 2013 - Honorary Doctor of Medicine awarded by the University of Adelaide 2013 - The Australian National Health and Medical Research Council named him an "all-time high achiever" Since 1994 he has been an Honorary Fellow of the Royal College of Pathologists of Australasia. Professional society fellowships include the Royal Society of London (1996) and the Australian Academy of Science (1997). In 2005, the Human Genetics Society of Australasia introduced the annual "Sutherland Lecture" in his honour, allowing outstanding mid-career researchers to showcase their work. Publications Journal articles Scopus lists 458 documents by Sutherland, and calculates his h-index as 83. Books References Further reading Fragile sites on human chromosomes- a personal odyssey - Grant R Sutherland's narrative on his retirement, containing much more technical information than this present article. 1945 births Living people Alumni of the University of Edinburgh Fellows of the Australian Academy of Science Australian geneticists Human geneticists Human Genome Project scientists Companions of the Order of Australia Australian fellows of the Royal Society University of Melbourne alumni Australian medical researchers
Grant Robert Sutherland
[ "Engineering" ]
1,406
[ "Human Genome Project scientists" ]
68,841,311
https://en.wikipedia.org/wiki/Syntactic%20parsing%20%28computational%20linguistics%29
Syntactic parsing is the automatic analysis of the syntactic structure of natural language, especially syntactic relations (in dependency grammar) and labelling spans of constituents (in constituency grammar). It is motivated by the problem of structural ambiguity in natural language: a sentence can be assigned multiple grammatical parses, so some kind of knowledge beyond computational grammar rules is needed to tell which parse is intended. Syntactic parsing is one of the important tasks in computational linguistics and natural language processing, and has been a subject of research since the mid-20th century with the advent of computers. Different theories of grammar propose different formalisms for describing the syntactic structure of sentences. For computational purposes, these formalisms can be grouped under constituency grammars and dependency grammars. Parsers for either class call for different types of algorithms, and approaches to the two problems have taken different forms. The creation of human-annotated treebanks using various formalisms (e.g. Universal Dependencies) has proceeded alongside the development of new algorithms and methods for parsing. Part-of-speech tagging (which resolves some lexical ambiguity) is a related problem, and often a prerequisite for or a subproblem of syntactic parsing. Syntactic parses can be used for information extraction (e.g. event parsing, semantic role labelling, entity labelling) and may be further used to extract formal semantic representations. Constituency parsing Constituency parsing involves parsing in accordance with constituency grammar formalisms, such as Minimalism or the formalism of the Penn Treebank. This, at the very least, means telling which spans are constituents (e.g. [The man] is here.) and what kind of constituent each one is (e.g. [The man] is a noun phrase) on the basis of a context-free grammar (CFG) which encodes rules for constituent formation and merging. 
Algorithms generally require the CFG to be converted to Chomsky Normal Form (with two children per constituent), which can be done without losing any information about the tree or reducing expressivity, using the algorithm first described by Hopcroft and Ullman in 1979. CKY The most popular algorithm for constituency parsing is the Cocke–Kasami–Younger algorithm (CKY), a dynamic programming algorithm which constructs a parse in O(n^3 · |G|) worst-case time on a sentence of n words, where |G| is the size of a CFG given in Chomsky Normal Form. Given the issue of ambiguity (e.g. preposition-attachment ambiguity in English) leading to multiple acceptable parses, it is necessary to be able to score the probability of parses to pick the most probable one. One way to do this is by using a probabilistic context-free grammar (PCFG), which has a probability for each constituency rule, and modifying CKY to maximise probabilities when parsing bottom-up. A further modification is the lexicalized PCFG, which assigns a head to each constituent and encodes rules for each lexeme in that head slot. Thus, where a PCFG may have a rule "NP → DT NN" (a noun phrase is a determiner and a noun), a lexicalized PCFG will specifically have rules like "NP(dog) → DT NN(dog)" or "NP(person)" etc. In practice this leads to some performance improvements. More recent work does neural scoring of span probabilities (which, unlike (P)CFGs, can take context into account) to feed to CKY, such as by using a recurrent neural network or transformer on top of word embeddings. In 2022, Nikita Kitaev et al. introduced an incremental parser that first learns discrete labels (out of a fixed vocabulary) for each input token given only the left-hand context, which are then the only inputs to a CKY chart parser with probabilities calculated using a learned neural span scorer. This approach is not only linguistically-motivated, but also competitive with previous approaches to constituency parsing. 
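The bottom-up dynamic programme at the heart of these chart parsers can be made concrete with a minimal probabilistic CKY recogniser over a toy grammar. All rules, words, and probabilities below are invented for illustration, and a full parser would also store backpointers to recover the best tree rather than just its score.

```python
import math
from collections import defaultdict

# Toy PCFG in Chomsky Normal Form (illustrative only).
LEXICON = {                      # preterminal -> word probabilities
    ("DT", "the"): 1.0,
    ("NN", "dog"): 0.5, ("NN", "park"): 0.5,
    ("VBD", "saw"): 1.0,
}
BINARY = {                       # A -> B C rule probabilities
    ("NP", ("DT", "NN")): 1.0,
    ("VP", ("VBD", "NP")): 0.5,
    ("S", ("NP", "VP")): 1.0,
}

def cky(words):
    """Return the best log-probability of an S spanning the whole sentence."""
    n = len(words)
    chart = defaultdict(dict)    # (i, j) -> {nonterminal: best log-prob over words[i:j]}
    for i, w in enumerate(words):
        for (a, word), p in LEXICON.items():
            if word == w:
                chart[(i, i + 1)][a] = math.log(p)
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):                    # split point
                for (a, (b, c)), p in BINARY.items():
                    if b in chart[(i, k)] and c in chart[(k, j)]:
                        score = math.log(p) + chart[(i, k)][b] + chart[(k, j)][c]
                        if score > chart[(i, j)].get(a, float("-inf")):
                            chart[(i, j)][a] = score
    return chart[(0, n)].get("S")

print(cky("the dog saw the park".split()))   # ln(0.125) ≈ -2.079 for this toy grammar
```

The three nested span loops and the split-point loop are what give CKY its cubic dependence on sentence length, with the grammar iteration contributing the |G| factor.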
Their work won the best paper award at ACL 2022. Transition-based Following the success of transition-based parsing for dependency grammars, work began on adapting the approach to constituency parsing. The first such work was by Kenji Sagae and Alon Lavie in 2005, which relied on a feature-based classifier to greedily make transition decisions. This was followed by the work of Yue Zhang and Stephen Clark in 2009, which added beam search to the decoder to produce more globally optimal parses. The first parser of this family to outperform a chart-based parser was the one by Muhua Zhu et al. in 2013, which took on the problem of length differences of different transition sequences due to unary constituency rules (a non-existent problem for dependency parsing) by adding a padding operation. Note that transition-based parsing can be purely greedy (i.e. picking the best option at each time-step of building the tree, leading to potentially non-optimal or ill-formed trees) or use beam search to increase performance while not sacrificing efficiency. Sequence-to-sequence A different approach to constituency parsing leveraging neural sequence models was developed by Oriol Vinyals et al. in 2015. In this approach, constituent parsing is modelled like machine translation: the task is sequence-to-sequence conversion from the sentence to a constituency parse, in the original paper using a deep LSTM with an attention mechanism. The gold training trees have to be linearised for this kind of model, but the conversion does not lose any information. This runs in O(n) with a beam search decoder of width 10 (but they found little benefit from greater beam size and even limiting it to greedy decoding performs well), and achieves competitive performance with traditional algorithms for context-free parsing like CKY.
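The CKY dynamic program with PCFG scoring described above can be sketched as follows. The grammar, words, and probabilities here are a toy illustration, not drawn from any treebank:

```python
import math
from collections import defaultdict

# Toy PCFG in Chomsky Normal Form (all rules and probabilities illustrative).
LEXICAL = {  # word -> list of (nonterminal, probability)
    "the":  [("DT", 1.0)],
    "dog":  [("NN", 0.5)],
    "man":  [("NN", 0.5)],
    "sees": [("VB", 1.0)],
}
BINARY = [  # (parent, left child, right child, probability)
    ("NP", "DT", "NN", 1.0),
    ("VP", "VB", "NP", 1.0),
    ("S",  "NP", "VP", 1.0),
]

def cky(words):
    """Fill the chart bottom-up; chart[i, j] maps a label to its best
    (log-probability, backpointer) for the span words[i:j]."""
    n = len(words)
    chart = defaultdict(dict)
    for i, w in enumerate(words):
        for label, p in LEXICAL.get(w, []):
            chart[i, i + 1][label] = (math.log(p), w)
    for length in range(2, n + 1):              # widen spans
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):           # try every split point
                for parent, left, right, p in BINARY:
                    if left in chart[i, k] and right in chart[k, j]:
                        score = (math.log(p) + chart[i, k][left][0]
                                 + chart[k, j][right][0])
                        if parent not in chart[i, j] or score > chart[i, j][parent][0]:
                            chart[i, j][parent] = (score, (k, left, right))
    return chart

chart = cky("the dog sees the man".split())
print("S" in chart[0, 5])  # True: the whole sentence is parsed as S
```

Maximising log-probabilities per cell is what turns plain CKY recognition into Viterbi-style PCFG parsing; a neural span scorer would simply replace math.log(p) with a learned score.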
Dependency parsing Dependency parsing is parsing according to a dependency grammar formalism, such as Universal Dependencies (which is also a project that produces multilingual dependency treebanks). This means assigning a head (or multiple heads in some formalisms like Enhanced Dependencies, e.g. in the case of coordination) to every token and a corresponding dependency relation for each edge, eventually constructing a tree or graph over the whole sentence. There are broadly three modern paradigms for modelling dependency parsing: transition-based, grammar-based, and graph-based. Transition-based Many modern approaches to dependency tree parsing use transition-based parsing (the base form of this is sometimes called arc-standard) as formulated by Joakim Nivre in 2003, which extends shift-reduce parsing by keeping a running stack of tokens and deciding among three operations for the next token encountered: RightArc (current token is a child of the top of the stack, is not added to stack), LeftArc (current token is the parent of the top of the stack, replaces top), and Shift (add current token to the stack). The algorithm can be formulated as comparing the top two tokens of the stack (after adding the next token to the stack) or the top token on the stack and the next token in the sentence. Training data for such an algorithm is created by using an oracle, which constructs a sequence of transitions from gold trees which are then fed to a classifier. The classifier learns which of the three operations is optimal given the current state of the stack, buffer, and current token. Modern methods use a neural classifier which is trained on word embeddings, beginning with work by Danqi Chen and Christopher Manning in 2014. In the past, feature-based classifiers were also common, with features chosen from part-of-speech tags, sentence position, morphological information, etc.
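The stack mechanics of such a parser can be sketched as follows. The classifier is stood in for by a hand-supplied transition sequence, the operation semantics follow one common arc-standard formulation, and the sentence and sequence are purely illustrative:

```python
def parse(words, transitions):
    """Apply SHIFT / LEFTARC / RIGHTARC transitions and return the head
    index of each token (0-based), with -1 marking the root."""
    stack, buffer = [], list(range(len(words)))
    heads = [-1] * len(words)
    for t in transitions:
        if t == "SHIFT":                 # add current token to the stack
            stack.append(buffer.pop(0))
        elif t == "LEFTARC":             # current token is the parent of stack top
            heads[stack.pop()] = buffer[0]
        elif t == "RIGHTARC":            # current token is a child of stack top
            heads[buffer.pop(0)] = stack[-1]
    return heads

# "the dog barks": 'dog' heads 'the', 'barks' heads 'dog' and remains the root.
print(parse(["the", "dog", "barks"], ["SHIFT", "LEFTARC", "SHIFT", "LEFTARC"]))
# [1, 2, -1]
```

A trained classifier would predict each transition from features of the stack and buffer; the oracle mentioned above derives such gold sequences from annotated trees for training.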
This is a greedy algorithm, so it does not guarantee the best possible parse or even a necessarily valid parse, but it is efficient. It is also not necessarily the case that a particular tree will have only one sequence of valid transitions that can reach it, so a dynamic oracle (which may permit multiple choices of operations) will increase performance. A modification to this is arc-eager parsing, which adds another operation: Reduce (remove the top token on the stack). Practically, this results in earlier arc-formation. These all only support projective trees so far, wherein edges do not cross given the token ordering from the sentence. For non-projective trees, Nivre in 2009 modified arc-standard transition-based parsing to add the operation Swap (swap the top two tokens on the stack, assuming the formulation where the next token is always added to the stack first). This increases runtime to O(n²) in the worst case, but is practically still near-linear. Grammar-based A chart-based dynamic programming approach to projective dependency parsing was proposed by Michael Collins in 1996 and further optimised by Jason Eisner in the same year. This is an adaptation of CKY (previously mentioned for constituency parsing) to headed dependencies, a benefit being that the only change from constituency parsing is that every constituent is headed by one of its descendant nodes. Thus, one can simply specify which child provides the head for every constituency rule in the grammar (e.g. an NP is headed by its child N) to go from constituency CKY parsing to dependency CKY parsing. Collins's original adaptation had a runtime of O(n⁵), and Eisner's dynamic programming optimisations reduced the runtime to O(n³). Eisner suggested three different scoring methods for calculating span probabilities in his paper. Graph-based Exhaustive search of the possible edges in the dependency tree, with backtracking in the case an ill-formed tree is created, gives the baseline O(n²) runtime for graph-based dependency parsing.
This approach was first formally described by Michael A. Covington in 2001, but he claimed that it was "an algorithm that has been known, in some form, since the 1960s". The problem of parsing can also be modelled as finding a maximum-probability spanning arborescence over the graph of all possible dependency edges, and then picking dependency labels for the edges in the tree we find. Given this, we can use an extension of the Chu–Liu/Edmonds algorithm with an edge scorer and a label scorer. This algorithm was first described by Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič in 2005. It can handle non-projective trees, unlike the arc-standard transition-based parser and CKY. As before, the scorers can be neural (trained on word embeddings) or feature-based. This runs in O(n²) with Tarjan's extension of the algorithm. Evaluation The performance of syntactic parsers is measured using standard evaluation metrics. Both constituency and dependency parsing approaches can be evaluated for the ratio of exact matches (percentage of sentences that were perfectly parsed), and for precision, recall, and F1-score, calculated by comparing the constituency or dependency assignments in the hypothesis parse against those in the reference parse. The latter are also known as the PARSEVAL metrics. Dependency parsing can also be evaluated using attachment score. Unlabelled attachment score (UAS) is the percentage of tokens with correctly assigned heads, while labelled attachment score (LAS) is the percentage of tokens with correctly assigned heads and dependency relation labels. Conversion between parses Given that much work on English syntactic parsing depended on the Penn Treebank, which used a constituency formalism, many works on dependency parsing developed ways to deterministically convert the Penn formalism to a dependency syntax, in order to use it as training data.
One of the major conversion algorithms was Penn2Malt, which reimplemented previous work on the problem. Work in the dependency-to-constituency conversion direction benefits from the faster runtime of dependency parsing algorithms. One approach is using constrained CKY parsing, ignoring spans which obviously violate the dependency parse's structure and thus reducing the runtime. Another approach is to train a classifier to find an ordering for all the dependents of every token, which results in a structure isomorphic to the constituency parse. References Further reading Dependency parsing Natural language parsing Natural language processing
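The attachment scores used in the Evaluation section above amount to simple token-level comparisons. A minimal sketch, with head indices and relation labels chosen purely for illustration:

```python
def attachment_scores(gold, pred):
    """gold/pred are lists of (head_index, relation) pairs, one per token.
    Returns (UAS, LAS): fraction of heads correct, and of heads plus
    labels correct."""
    assert len(gold) == len(pred)
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    return uas, las

gold = [(2, "det"), (3, "nsubj"), (0, "root")]
pred = [(2, "det"), (3, "obj"),   (0, "root")]
uas, las = attachment_scores(gold, pred)
print(uas, las)   # UAS = 1.0 (all heads right), LAS = 2/3 (one label wrong)
```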
Syntactic parsing (computational linguistics)
[ "Technology" ]
2,554
[ "Natural language processing", "Natural language and computing" ]
68,842,679
https://en.wikipedia.org/wiki/Ro20-8065
Ro20-8065 (8-Chloronorflurazepam) is a benzodiazepine derivative with anticonvulsant and muscle relaxant effects, which has been sold as a designer drug. See also Fludiazepam Norflurazepam Ro07-5220 Ro20-8552 References Designer drugs 2-Fluorophenyl compounds Benzodiazepines Chlorobenzene derivatives
Ro20-8065
[ "Chemistry" ]
95
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,842,860
https://en.wikipedia.org/wiki/Ro07-5220
Ro07-5220 (6'-Chlorodiclazepam) is a benzodiazepine derivative with sedative, anxiolytic, anticonvulsant and muscle relaxant effects, which has been sold as a designer drug. See also Diclazepam Difludiazepam Ro20-8065 References Designer drugs Benzodiazepines Chlorobenzene derivatives
Ro07-5220
[ "Chemistry" ]
91
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,842,959
https://en.wikipedia.org/wiki/List%20of%20IBM%20PS/2%20models
The Personal System/2 or PS/2 was a line of personal computers developed by International Business Machines Corporation (IBM). Released in 1987, the PS/2 represented IBM's second generation of personal computer following the original IBM PC series, which was retired following IBM's announcement of the PS/2 in April 1987. Most PS/2s featured the Micro Channel architecture bus—a closed standard which was IBM's attempt at recapturing control of the PC market. However, some PS/2 models at the low end featured ISA buses, which IBM included with their earlier PCs and which were widely cloned due to being a mostly-open standard. Many models of PS/2 were made, which came in the form of desktops, towers, all-in-ones, portables, laptops and notebooks. Notes Legend Explanatory notes Built-in or optional monitors are CRTs unless mentioned otherwise. The Space Saving Keyboard is an 87-key numpad-less version of the Model M. The 25 Collegiate, intended for college students, had two 720 KB floppy drives, maxed out the RAM to 640 KB, and came packaged with the official PS/2 Mouse, Windows 2.0, and four blank floppy disks. Financial workstations came packaged with a 50-key function keypad and were intended for use in banks. LS models are "LAN Stations": essentially the same as their non-LS counterparts but without floppy drives or hard drives, connecting to networks using Ethernet or Token Ring adapters (in essence, diskless workstations). Ultimedia models shipped with a microphone and included SCSI CD-ROMs, M-Audio sound adapter cards and volume controls and headphone and microphone jacks at the front of the case. Array models are PS/2 Servers with support for RAID. Models Main line PS/2 Server Portables Related systems See also List of third-party Micro Channel computers List of IBM Personal Computer models References IBM lists IBM PS 2 models
List of IBM PS/2 models
[ "Technology" ]
415
[ "Computing-related lists", "IBM lists", "Lists of computer hardware" ]
68,844,892
https://en.wikipedia.org/wiki/Rachel%20Takserman-Krozer
Rachel "Raya" Takserman-Krozer ([]; 31 December 1921 – 28 September 1987) was a theoretical physicist and professor of rheology. Takserman-Krozer worked on diverse aspects of theoretical physics ranging from the theory of relativity to studies of polymers and their flow. Her scientific work includes contributions to the behaviour of polymers and polymer solutions in velocity fields, the theory of spinnability, problems of phenomenological rheology, and the molecular-statistical theory of polymer networks. Takserman-Krozer worked across several countries including Russia, Poland, Israel, and Germany. She received the Polish Chemical Society Scientific Award in 1963, and was listed in Who's Who in Israel 1972 and the International Register of Profiles of the International Biographical Centre. Biography Rachel (Raya) Takserman-Krozer was born on 31 December 1921 in Haisyn, Ukrainian SSR, into a Jewish family. After moving to Odesa, Takserman-Krozer finished her school education and was admitted to the Faculty of Physics at the University of Odessa. During the Second World War she volunteered as a nurse and continued her study of physics at the University of Tashkent, Uzbekistan, where she obtained an MSc degree in Theoretical Physics in 1945. She married Szymon (Simon) Krozer in 1946, who studied experimental physics at the University of Tashkent and later became professor of experimental polymer physics. In 1948 she started her doctoral studies on the theory of relativity with V. A. Fock at the Institute of Theoretical Physics at Leningrad University, now St. Petersburg State University. During this time she lectured at the Petrozavodsk State University.
During this time she gave birth to three sons: Anatol Krozer in 1949, who became a surface science physicist focusing on the development of bio- and chemical sensors at ACREO, Research Institutes of Sweden (RISE), Gothenburg, Sweden; Georgij (Yoram) Krozer in 1953, who studied biology and economics and works as an associate professor on innovations for sustainable development at the University of Twente, Netherlands; and Viktor Krozer in 1958, who studied electrical engineering and is now professor of Terahertz Photonics and Electronics at the faculty of physics at the Goethe University Frankfurt, Germany. In 1959 she resumed her PhD studies at the Laboratory of Polymer Physics of the Industrial Chemistry Institute, headed by Andrzej Ziabicki. She completed her PhD (Warsaw, 1962) and habilitation (Łódź, 1966), both in polymer physics. In 1967 and 1968 she worked in the Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland, on hydrodynamics and rheology of polymer solutions and suspensions, "spinnability of liquids", extensional flows and problems related to the theory of polymer processing. In August 1968 she obtained a position as a professor at the Department of Mechanics at the Israel Institute of Technology (Technion), headed at that time by Marcus Reiner, the founder of rheology, where she worked on the subjects of microrheology and nonlinear rheology. In 1972, she continued her research at the RWTH Aachen University, the University of Karlsruhe and finally the University of Stuttgart, working together with Ekkehart Kröner. She died in Trier, Germany, on 28 September 1987, after a three-year battle with lung cancer. In accordance with her last wish, she was buried in Amsterdam, Netherlands. Scientific work Takserman-Krozer's main contributions are in the fields of polymer physics and rheology.
She has made substantial contributions to the field of rheology together with Marcus Reiner and to the study of the behaviour of polymer solutions and networks. Her research work has been published predominantly in the Journal of Polymer Science, the Bulletin de L’Académie Polonaise des Sciences, Rheologica Acta, and Colloid and Polymer Science. Behaviour of polymer solutions in velocity fields Takserman-Krozer focused on various problems related to polymer processing - structure development, rheology, and the relation between structure and physical properties - contributing to the understanding of elongational (extensional) flow of polymer suspensions and solutions. Her solution of the ellipsoid orientation distribution in the form of a series expansion was later supplemented by an exact solution of a similar problem (orientation in shear flow) by Werner Kuhn. Takserman-Krozer and Andrzej Ziabicki then made a systematic study of the elongational flow behaviour of various molecular models and the rheological behaviour of polymers. Theory of spinnability Spinnability of liquids was another problem related to fibre spinning, studied in parallel with the hydrodynamics and rheology of polymer solutions and suspensions. Takserman-Krozer and Andrzej Ziabicki found the factors limiting spinnability, i.e. the maximum length of liquid threads, to obey two mechanisms: the development of Rayleigh capillary waves on the surface of liquid threads and their break-up into drops, and the brittle fracture of viscoelastic threads when the accumulated elastic energy compensates cohesive forces. Direct observations of capillary break-up and cohesive fracture also spoke in favour of the proposed theory. In spite of later efforts by Takserman-Krozer and coworkers, the theory of spinnability never reached the level of a quantitative theory.
Phenomenological rheology Takserman-Krozer investigated basic problems of phenomenological (classical) rheology and the closure of the gap between the microstructure of materials and their macroscopic mechanical behaviour. In her work with Marcus Reiner, the phenomenological rheology approach of a combination of linear bodies was used for the analysis of the dynamic strength limit of both viscoelastic solids and elastic-viscous liquids. The general formulation of the problem characterizes the rheological stress/strain relationship of a general compressible fluid, the basic hydrodynamic equations of continuity, motion and energy transfer, and the dependence of the rheological and physical parameters of the material on its thermodynamic state. A perturbation solution for pressure- and temperature-dependent viscosity has shown that pressure and heating effects may result in apparently non-Newtonian behaviour, with a non-uniform pressure gradient and variable velocity profile along the pipe. Molecular-statistical theory of polymer networks Takserman-Krozer established the dynamical theory of macromolecular (permanent) networks at the Institute of Theoretical and Applied Physics of the University of Stuttgart with Ekkehart Kröner. The Hartree self-consistent field approach was considered as a method to take into account many-body forces such as Oseen's hydrodynamic interaction. She then, together with Ekkehart Kröner, developed the thermodynamics as well as the molecular-statistical theory of so-called temporary polymer networks, both for equilibrium and nonequilibrium. The theory now stands quantitatively. Properties investigated were in particular the non-Newtonian fluid flow for shear and elongational flow and the stress overshoot maximum after an abrupt start of the flow. Honours and prizes In 1963, Takserman-Krozer received the Polish Chemical Society Scientific Award.
She is listed in Who's Who in Israel 1972 and in the International Register of Profiles of the International Biographical Centre. Notes References 1921 births 1987 deaths 20th-century atheists Russian women physicists Israeli women physicists German women physicists Polish Academy of Sciences Theoretical physicists Ukrainian women physicists People from Vinnytsia Oblast Academic staff of Technion – Israel Institute of Technology
Rachel Takserman-Krozer
[ "Physics" ]
1,578
[ "Theoretical physics", "Theoretical physicists" ]
68,846,045
https://en.wikipedia.org/wiki/Constrained%20equal%20awards
Constrained equal awards (CEA), also called constrained equal gains, is a division rule for solving bankruptcy problems. According to this rule, each claimant should receive an equal amount, except that no claimant should receive more than his/her claim. In the context of taxation, it is known as leveling tax. Formal definition There is a certain amount of money to divide, denoted by E (the estate or endowment). There are n claimants. Each claimant i has a claim denoted by c_i. Usually, c_1 + ... + c_n > E, that is, the estate is insufficient to satisfy all the claims. The CEA rule says that each claimant i should receive min(c_i, r), where r is a constant chosen such that min(c_1, r) + ... + min(c_n, r) = E. The rule can also be described algorithmically as follows: Initially, all agents are active, and all agents get 0. While there are remaining units of the estate: The next estate unit is divided equally among all active agents. Each agent whose total allocation equals its claim becomes inactive. Examples Examples with two claimants: ; here . In general, when all claims are at least E/2, each claimant receives exactly E/2. ; here . Examples with three claimants: ; here . ; here . ; here . ; here . ; here . Usage In Jewish law, if several creditors have claims to the same bankrupt debtor, all of which have the same precedence (e.g. all loans have the same date), then the debtor's assets are divided according to CEA. Characterizations The CEA rule has several characterizations. It is the only rule satisfying the following sets of axioms: Equal treatment of equals, invariance under truncation of claims, and composition up; Conditional full compensation, and composition down; Conditional full compensation, and claims-monotonicity. Dual rule The constrained equal losses (CEL) rule is the dual of the CEA rule, that is: for each problem (c, E), we have CEL(c, E) = c − CEA(c, c_1 + ... + c_n − E). References Bankruptcy theory
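The algorithmic description above (raise a common award level until the estate runs out, capping each claimant at their claim) can be sketched by bisecting on the level r; the claim vector and estate below are illustrative:

```python
def cea(claims, estate):
    """Constrained equal awards: give each claimant min(c_i, r), with r
    chosen by bisection so that the awards sum to the estate."""
    assert 0 <= estate <= sum(claims)
    lo, hi = 0.0, max(claims)
    for _ in range(100):                 # bisection on the award level r
        r = (lo + hi) / 2
        if sum(min(c, r) for c in claims) < estate:
            lo = r                       # level too low: total under-shoots E
        else:
            hi = r
    return [min(c, r) for c in claims]

print(cea([30, 60, 90], 120))   # ≈ [30, 45, 45]: the small claim is fully met
```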
Constrained equal awards
[ "Mathematics" ]
395
[ "Game theory", "Bankruptcy theory" ]
68,846,141
https://en.wikipedia.org/wiki/Constrained%20equal%20losses
Constrained equal losses (CEL) is a division rule for solving bankruptcy problems. According to this rule, each claimant should lose an equal amount from his or her claim, except that no claimant should receive a negative amount. In the context of taxation, it is known as poll tax. Formal definition There is a certain amount of money to divide, denoted by E (the estate or endowment). There are n claimants. Each claimant i has a claim denoted by c_i. Usually, c_1 + ... + c_n > E, that is, the estate is insufficient to satisfy all the claims. The CEL rule says that each claimant i should receive max(c_i − r, 0), where r is a constant chosen such that max(c_1 − r, 0) + ... + max(c_n − r, 0) = E. The rule can also be described algorithmically as follows: Initially, all agents are active, and each agent gets his full claim. While the total allocation is larger than the estate: Remove one unit equally from all active agents. Each agent whose total allocation drops to zero becomes inactive. Examples Examples with two claimants: ; here . ; here too. ; here . Examples with three claimants: ; here . ; here . ; here . Usage In Jewish law, if several bidders participate in an auction and then revoke their bids simultaneously, they have to compensate the seller for the loss. The loss is divided among the bidders according to the CEL rule. Characterizations The CEL rule has several characterizations. It is the only rule satisfying the following sets of axioms: Equal treatment of equals, minimal rights first, and composition down; Conditional null compensation, and composition up; Conditional null compensation, and the dual of claims-monotonicity. Dual rule The constrained equal awards (CEA) rule is the dual of the CEL rule, that is: for each problem (c, E), we have CEA(c, E) = c − CEL(c, c_1 + ... + c_n − E). References Bankruptcy theory
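The loss-levelling description above can be sketched the same way, bisecting on the common loss r (the claims and estate are illustrative):

```python
def cel(claims, estate):
    """Constrained equal losses: give each claimant max(c_i - r, 0), with r
    chosen by bisection so that the awards sum to the estate."""
    assert 0 <= estate <= sum(claims)
    lo, hi = 0.0, max(claims)
    for _ in range(100):                 # bisection on the common loss r
        r = (lo + hi) / 2
        if sum(max(c - r, 0.0) for c in claims) > estate:
            lo = r                       # loss too small: total over-shoots E
        else:
            hi = r
    return [max(c - r, 0.0) for c in claims]

print(cel([30, 60, 90], 120))   # ≈ [10, 40, 70]: every claimant loses 20
```

The duality with CEA can be checked numerically: here the total claim is 180, and subtracting CEA of the same claims at estate 180 − 120 = 60 from the claims gives the same vector.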
Constrained equal losses
[ "Mathematics" ]
363
[ "Game theory", "Bankruptcy theory" ]
68,846,197
https://en.wikipedia.org/wiki/Proportional%20rule%20%28bankruptcy%29
The proportional rule is a division rule for solving bankruptcy problems. According to this rule, each claimant should receive an amount proportional to their claim. In the context of taxation, it corresponds to a proportional tax. Formal definition There is a certain amount of money to divide, denoted by E (the estate or endowment). There are n claimants. Each claimant i has a claim denoted by c_i. Usually, c_1 + ... + c_n > E, that is, the estate is insufficient to satisfy all the claims. The proportional rule says that each claimant i should receive r·c_i, where r is a constant chosen such that r·(c_1 + ... + c_n) = E. In other words, each agent gets c_i·E/(c_1 + ... + c_n). Examples Examples with two claimants: . That is: if the estate is worth 100 and the claims are 60 and 90, then r = 100/150 = 2/3, so the first claimant gets 40 and the second claimant gets 60. , and similarly . Examples with three claimants: . . . Characterizations The proportional rule has several characterizations. It is the only rule satisfying the following sets of axioms: Self-duality and composition-up; Self-duality and composition-down; No advantageous transfer; Resource linearity; No advantageous merging and no advantageous splitting. Truncated-proportional rule There is a variant called the truncated-claims proportional rule, in which each claim larger than E is truncated to E, and then the proportional rule is activated. That is, it equals PROP(c', E), where c'_i = min(c_i, E). The results are the same for the two-claimant problems above, but for the three-claimant problems we get: , since all claims are truncated to 100; , since the claims vector is truncated to (100,200,200). , since here the claims are not truncated. Adjusted-proportional rule The adjusted proportional rule first gives, to each agent i, their minimal right m_i, which is the amount not claimed by the other agents. Formally, m_i = max(0, E − sum of the claims of all agents other than i). Note that c_1 + ... + c_n ≥ E implies m_1 + ... + m_n ≤ E. Then, it revises the claim of agent i to c'_i = c_i − m_i, and the estate to E' = E − (m_1 + ... + m_n). Note that E' ≥ 0. Finally, it activates the truncated-claims proportional rule, that is, it returns m + TPROP(c', E').
With two claimants, the revised claims are always equal, so the remainder is divided equally. Examples: . The minimal rights are . The remaining claims are and the remaining estate is ; it is divided equally among the claimants. . The minimal rights are . The remaining claims are and the remaining estate is . . The minimal rights are . The remaining claims are and the remaining estate is . With three or more claimants, the revised claims may be different. In all the above three-claimant examples, the minimal rights are all zero, and thus the outcome is equal to TPROP, for example, . See also Proportional division Proportional representation References Bankruptcy theory
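The proportional and truncated-claims proportional rules are direct to implement. The first call below reuses the two-claimant example from the text; the truncated case uses illustrative numbers:

```python
def prop(claims, estate):
    """Proportional rule: each claimant gets the share estate / sum(claims)."""
    r = estate / sum(claims)
    return [r * c for c in claims]

def tprop(claims, estate):
    """Truncated-claims proportional rule: cut each claim to the estate,
    then apply the proportional rule."""
    return prop([min(c, estate) for c in claims], estate)

print(prop([60, 90], 100))         # ≈ [40, 60], with r = 100/150 = 2/3
print(tprop([50, 100, 200], 100))  # ≈ [20, 40, 40]: claims cut to (50, 100, 100)
```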
Proportional rule (bankruptcy)
[ "Mathematics" ]
552
[ "Game theory", "Bankruptcy theory" ]
68,846,918
https://en.wikipedia.org/wiki/Terminal%20digit%20preference
Terminal digit preference, terminal digit bias, or end-digit preference is a commonly observed statistical phenomenon whereby humans recording numbers have a bias or preference for a specific final digit in a number. In medical science, this is often seen when recording measurements such as blood pressure by hand, where those taking measurements will round to the nearest 5 or 0. The phenomenon has been blamed for misdiagnoses. Terminal digit bias has been used to identify errors in research, and is one method used in the identification of scientific fraud. Severe terminal digit bias has been found in datasets for scientific papers that were later retracted. See also Benford's law References Medical error Quantitative research
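Checking a column of hand-recorded measurements for terminal digit preference takes only a tally of final digits; the blood-pressure readings below are made up for illustration:

```python
from collections import Counter

# Hypothetical hand-recorded systolic blood pressures (mmHg).
readings = [120, 135, 140, 128, 130, 125, 150, 110, 130, 145]

last_digits = Counter(r % 10 for r in readings)
for digit in range(10):
    print(digit, last_digits.get(digit, 0))
# Values recorded by hand tend to pile up on 0 and 5, as in this sample,
# rather than spreading roughly evenly across all ten digits.
```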
Terminal digit preference
[ "Mathematics" ]
136
[ "Mathematical objects", "Numbers", "Number stubs" ]
68,847,122
https://en.wikipedia.org/wiki/Atogepant
Atogepant, sold under the brand name Qulipta among others, is a medication used to prevent migraines. It is a gepant, an orally active calcitonin gene-related peptide receptor antagonist. The most common side effects include nausea, constipation, tiredness, somnolence (sleepiness), decreased appetite, and decreased weight. Atogepant was approved for medical use in the United States in September 2021, and in the European Union in August 2023. Medical Uses Atogepant is indicated for the preventive treatment of episodic migraine in adults. In the European Union, atogepant (Aquipta) is indicated for prophylaxis (prevention) of migraine in adults who have at least four migraine days per month. History Atogepant was developed by the biopharmaceutical company AbbVie. The benefits and side effects of atogepant were evaluated in two clinical trials of 1,562 participants with a history of migraine headaches occurring on 4 to 14 days per month. The two trials to show the benefits were designed similarly. Trials 1 and 2 assigned participants to one of several doses of atogepant or placebo daily for three months. Neither the participants nor the health care providers knew which treatment was being given until after the trial was completed. The benefit of atogepant was assessed based on the change from baseline in the number of migraine days per month to the last month of the three-month treatment period, comparing participants in the atogepant and placebo groups. The trials were conducted at over 100 sites in the United States. The safety of atogepant was evaluated in 1,958 participants with migraine who received at least one dose of atogepant; therefore, the number of participants representing efficacy findings may differ from the number of participants representing safety findings due to different pools of study participants analyzed for efficacy and safety. 
The UK’s National Institute for Health and Care Excellence has issued draft guidance recommending atogepant for preventing episodic and chronic migraine in NHS patients. It is approved for those experiencing at least 4 migraine days per month after failing 3 prior treatments. Atogepant costs £463 per month, subject to a confidential discount. Research Atogepant demonstrated efficacy in two phase 3 trials (ADVANCE and PROGRESS) by significantly reducing monthly migraine days, acute medication use, and improving quality of life in patients with episodic and chronic migraine over 12 weeks compared to placebo. Common side effects included nausea, constipation, and fatigue/somnolence. A study presented at the 2023 meeting of the American Academy of Neurology also showed that atogepant may help prevent migraines in patients who have had no prior success with other preventative drugs. References Drugs developed by AbbVie Antimigraine drugs Calcitonin gene-related peptide receptor antagonists Carboxamides Organofluorides Piperidines Pyridines Pyrroles Spiro compounds
Atogepant
[ "Chemistry" ]
640
[ "Organic compounds", "Spiro compounds" ]
68,847,626
https://en.wikipedia.org/wiki/Psilocybe%20alutacea
Psilocybe alutacea is a species of agaric fungus in the family Hymenogastraceae. It was described in 2006 and is known from Australia and New Zealand. It is coprophilous, growing on animal dung. The fruitbodies have a small conical or convex cap, subdistant gills with an adnate attachment, a slender brown stipe and a faint blueing reaction to damage. As a blueing member of the genus Psilocybe it contains the psychoactive compounds psilocin and psilocybin. Taxonomy and naming Psilocybe alutacea was described by Y.S. Chang and A.K. Mills in 2006. The holotype was collected by Chang in 1990 in Tasmania and deposited at the herbarium in Hobart, with the accession number HO132672. The species was placed in the Psilocybe section Semilanceatae according to Guzmán due to macroscopic and microscopic similarities with Psilocybe semilanceata; notably a faint blueing reaction to damage, conical cap shape, adnate gill attachment and ellipsoid-oval spores. Etymology The alutacea epithet refers to the colour of tanned leather. It derives from aluta, a soft leather tanned with alum. Description The cap is 10–13 mm in diameter, conical to convex in shape, somewhat sticky or tacky when moist, hygrophanous (abruptly changing colour from wet to dry), smooth, radially striate at the edge, coloured leathery brown to ochraceous brown. The gills are adnate, subdistant and greyish brown with white edges, sometimes unevenly coloured. The stipe is 25–46 mm x 1–2.5 mm, pale brown, cylindrical and stuffed. There is a blueing reaction to damage but it is faint and slow, only showing at the edges of the gills and occasionally on the stipe. The spore print is purple-brown. Microscopic characteristics Spores measure 11.7-15.8 (-16.7) x 7.9-9.2 μm and are ellipsoid with a distinct germ pore. Basidia are 25.8 - 34.2 x 9.2-12.1 μm, 4-spored, transparent, clavate or obovate.
Cheilocystidia measure 22.5-35.9 (-44.2) x 5 - 10 μm, are transparent with long necks of 6.7-15 μm, simple, bifurcate or trifurcate (one, two or three prongs or forks). Pleurocystidia are rare, measure 17.5-30.4 x 4.6 - 10 μm and are lageniform (shaped like a bottle or flask) with long necks. Subhymenium is subcellular. Trama regular, pale brown in 5% KOH, with hyphae measuring 3.3-15 μm. Epicutis is a layer of subgelatinised, encrusted hyphae with brown pigments, 2.5-5 μm broad. Clamp connections are present. Distribution and habitat Present in Australia and New Zealand. In Tasmania collections were made at Snug Falls Track, Mount Field National Park (Pandanus Walk) and Kermandie Falls (Upper Track). Found growing solitary to sub-gregarious on cow dung; also collected on horse and wombat dung. Sometimes in leaf litter or from soil in mossy areas. Similar species Members of the Psilocybe section Semilanceatae, genetically similar species and small brown coprophilous fungi. Psilocybe semilanceata is more umbonate and grows in grasslands and paddocks from decaying grass roots, not on animal dung. Psilocybe fimetaria has a stipe that discolours yellow with handling or age, and is known from the Pacific Northwest region of the United States and Canada, Chile, Great Britain, and Europe. Psilocybe liniformans has a convex to applanate cap, and is known from the Pacific Northwest and Chile. Psilocybe pelliculosa is closely related with a similar appearance. It occurs predominantly in the Pacific Northwest region of the United States and Canada, where it grows in litter in coniferous woods. Psilocybe tasmaniana is similar in its original description, distribution and coprophilous habit but microscopic features differ; pleurocystidia in that species are reportedly abundant, and fusoid-ventricose, with short necks. Deconica coprophila and similar Deconica species are close lookalikes but with subdecurrent gills and no blueing reaction.
Panaeolus species' spores are brown, greyish or black, not purple-brown. Protostropharia semiglobata has a slippery glutinous stipe when wet and no blueing reaction. See also List of psilocybin mushrooms References External links Some new species in the Strophariaceae (Agaricales) in Tasmania PDF of the original description as published in Australian Mycologist, by Chang, Gates and Ratkowsky in 2006. New Zealand records of this species provided by Manaaki Whenua - Landcare Research. Observations on iNaturalist. Observations on Mushroom Observer. alutacea Entheogens Fungi described in 2006 Fungi of Australia Fungi of New Zealand Psychedelic tryptamine carriers Psychoactive fungi Fungus species
Psilocybe alutacea
[ "Biology" ]
1,136
[ "Fungi", "Fungus species" ]
68,849,776
https://en.wikipedia.org/wiki/DL%20Tauri
DL Tauri is a young T Tauri-type pre-main sequence star in the constellation of Taurus about away, belonging to the Taurus Molecular Cloud. It is partially obscured by a foreground gas cloud rich in carbon monoxide, and is still accreting mass, producing 0.14 due to the release of accretion energy. The stellar spectrum shows lines of ionized oxygen, nitrogen, sulfur and iron. Protoplanetary disk The star is surrounded by a massive (0.029 ) protoplanetary disk, which is extensive yet relatively flattened and rich in large grains, indicating a significantly evolved state. With such a large mass, the disk could possibly form a brown dwarf. The region of the disk about 100 AU from the star may be on the verge of gravitational instability. The disk has multiple dust rings with poorly resolved gaps between them. Suspected planetary companion The object 2MASS J04333960+2520420, designated DL Tau/cc1 in 2008, is a suspected superjovian planet with a mass of about 12 in a likely bound orbit around DL Tauri. The object is either a sub-brown dwarf, a low-mass brown dwarf, or even a low-mass ultra-cool red dwarf star if strongly veiled by an accretion disk, which is not unusual for young star systems. References T Tauri stars Circumstellar disks Taurus (constellation) J04333906+2520382 Hypothetical planetary systems Tauri, DL
DL Tauri
[ "Astronomy" ]
306
[ "Taurus (constellation)", "Constellations" ]
68,850,058
https://en.wikipedia.org/wiki/NGC%207492
NGC 7492 is a globular cluster in the constellation Aquarius. It was discovered by the astronomer William Herschel on September 20, 1786. It resides in the outskirts of the Milky Way, about 80,000 light-years away, more than twice the distance between the Sun and the center of the galaxy, and is a benchmark member of the outer galactic halo. The cluster is immersed in, but does not kinematically belong to, the Sagittarius Stream. NGC 7492 possesses a tidal tail 3.5 degrees long, embedded in an over-density of stars which may be the remnant of a disrupted dwarf galaxy. The shape of the cluster is flattened rather than spherical, likely due to dynamical interaction with the Milky Way. References Globular clusters Aquarius (constellation) 7492
NGC 7492
[ "Astronomy" ]
169
[ "Constellations", "Aquarius (constellation)" ]
68,851,513
https://en.wikipedia.org/wiki/Cannabimovone
Cannabimovone (CBM) is a phytocannabinoid first isolated from a non-psychoactive strain of Cannabis sativa in 2010, which is thought to be a rearrangement product of cannabidiol. It lacks affinity for cannabinoid receptors, but acts as an agonist at both TRPV1 and PPARγ. See also Cannabichromene Cannabicitran Cannabicyclol Cannabielsoin Cannabigerol Cannabinodiol Cannabitriol Delta-6-CBD References Cannabinoids Ketones Isopropenyl compounds Cyclopentanols Phenols
Cannabimovone
[ "Chemistry" ]
144
[ "Ketones", "Isopropenyl compounds", "Functional groups" ]
68,852,259
https://en.wikipedia.org/wiki/Wild%20Animal%20Ethics
Wild Animal Ethics: The Moral and Political Problem of Wild Animal Suffering is a 2020 book by the philosopher Kyle Johannsen that examines whether humans, from a deontological perspective, have a duty to reduce wild animal suffering. He concludes that such a duty exists and recommends effective interventions that could potentially be undertaken to help these sentient individuals. Summary Johannsen starts by examining the question of what is good about nature. He puts forward a number of arguments for why wild animals generally do not live good lives, such as the dominance of reproductive strategies in which large numbers of offspring are born, of which the great majority experience suffering and die before reaching adulthood. He also highlights different forms of suffering that these sentient individuals experience, including predation, weather conditions, starvation, stress, injury and parasitism. Johannsen then explores the value of naturalness and the popularity of a positive view of nature. In the following two sections, Johannsen asserts that humans have a collective obligation to intervene in nature to reduce the suffering of wild animals and evaluates the risks associated with intervention. He then explores the concept of editing nature, using technologies such as CRISPR and gene drives. The final section investigates how intervention relates to animal rights advocacy. Reception Symposium A symposium was held on the book in April 2021, hosted by Animals in Philosophy, Politics, Law, and Ethics (APPLE) at Queen's University, featuring commentaries by Nicolas Delon, Bob Fischer, Gary O'Brien, and Clare Palmer; these were later published in the journal Philosophia. Nicolas Delon's commentary argues that the book largely overlooks the issues of agency and freedom. Despite this, he gives Johannsen credit for considering liberty as an issue and for favoring interventions which minimize infringements of liberty. 
Bob Fischer's commentary challenges Johannsen's claims on habitat destruction in two ways. The first questions his calculation of the quantity of animals that experience overall positive lives. The second acknowledges Johannsen's perspective on the balance between lives with positive and negative outcomes, but refutes the notion that this leads to his desired conclusion, on separate grounds. O'Brien's commentary takes exception to Johannsen's assertion that the non-identity problem has no effect on the reasons to intervene in nature. He argues that large-scale interventions in nature will, in turn, change the types of animals that will come into existence and, as a result, enable harms experienced by and inflicted by these individuals. In conclusion, he asserts that "by causing animals to exist, knowing that they will inflict and suffer harms, we become morally responsible for those harms." Palmer's commentary questions Johannsen's claim that naturalness, or wildness, is not intrinsically valuable and the assertion that the majority of wild animals have terrible lives. On the latter, Palmer asserts that more evidence is needed, and on the former she contends that Johannsen mischaracterizes the significance of the value of wildness, which could lead to conflicts with his suggested wide-scale interventions. She concludes that if he wants to gain democratic legitimacy for such interventions, he needs to give more serious attention to such conflicts. Johannsen responds to the commentaries in his paper "Defending Wild Animal Ethics". He defends his arguments regarding intrinsic value and the valuing of harmful natural processes, rejecting the notion of intrinsic valuing. Johannsen evaluates intentional habitat destruction as a response to wild animal suffering, contending that it is unjustified within a moderate deontological framework. 
The article also examines the role of agency in wild animal wellbeing, its connection to the exercise of agency, and its impact on quality of life. Furthermore, Johannsen addresses the concept of identity-affecting actions and the potential generation of secondary duties, extending considerations of rectificatory justice to interventions aimed at mitigating harm to wild animals. Reviews Jeff Sebo describes the book as "an excellent book that makes a powerful case for reducing wild animal suffering". Jeff McMahan asserts that "The suffering of animals in the wild is a serious moral issue, to which this book is a sensible, well-argued, and humane response." Elizabeth Mullineaux is positive about the book in her review, asserting that it presents well-reasoned arguments that are accessible to readers regardless of their background in philosophy, ethics, or animal welfare, and contending that the book offers a blend of agreeable insights and thought-provoking ideas, fostering a deeper understanding of wild animal suffering alleviation strategies and warranting a strong recommendation for readers interested in the subject. References Further reading External links Reducing wild animal suffering with Kyle Johannsen - Knowing Animals podcast Kyle Johannsen, "Wild Animal Ethics: The Moral and Political Problem of Wild Animal Suffering" (Routledge, 2020) - New Books in Philosophy podcast 2020 non-fiction books Animal ethics books Books about wild animal suffering Books in political philosophy English-language non-fiction books Environmental ethics books Routledge books
Wild Animal Ethics
[ "Environmental_science" ]
1,006
[ "Environmental ethics books", "Environmental ethics" ]
62,216,798
https://en.wikipedia.org/wiki/Hiyangthang%20Lairembi%20Temple
The Hiyangthang Lairembi Temple () is an ancient temple of the Goddess Hiyangthang Lairembi (also known as Irai Leima) of the Meitei religion (Sanamahism). The sacred building is situated on the hilltop of Heibok Ching in Hiyangthang, Manipur. Thousands of devotees throng the holy site during the festival of Lai Haraoba of Sanamahism as well as during Durga Puja of Hinduism. History The worship of Goddess Hiyangthang Lairembi (alias Irai Leima) began during the reign of King Senbi Kiyamba (1467-1508 AD) in Manipur. Ever since his era, Sarangthem family members have held grand feasts (Chaklen Katpa) every year in honor of the goddess. In the 18th century AD, during the reign of King Garib Niwaj (alias Pamheiba), Goddess Hiyangthang Lairembi (alias Irai Leima) was identified with the Hindu goddess Kamakhya (a form of Durga). The 3rd day of Durga Puja is celebrated as the "Bor Numit" (literally, Boon Day) in the temple. On 22 March 1979, an association was formed to worship Hiyangthang Lairembi (Ireima), the traditional goddess. Legends Irai Leima (later known as Hiyangthang Lairembi) was an exceptionally beautiful princess of Heibok Ching. Her father, King Heibok Ningthou, was a wizard and black magician. One day, King Kwakpa (Kokpa) of the Khuman dynasty saw Irai Leima fishing in the Liwa river and fell in love with her. He proposed to her; her answer was that her parents' wish would be her wish. So King Kwakpa and his subjects presented Heibok Ningthou with many gifts. Kwakpa intended to marry Irai Leima if her father consented, or to take her by force if he refused. Seeing the immodesty of Khuman Kwakpa, Heibok Ningthou turned all the presents into stone. Kwakpa returned home disappointed. One day, King Kwakpa got drunk on a juice made from the roots of the Tera plant (Bombax malabaricum). He went to meet Irai Leima, riding on a Hiyang boat. Seeing him coming, she fled to Pakhra Ching. Kwakpa followed her. 
Seeing all this, Heibok Ningthou turned the Hiyang boat into stone and the oar into a growing tree. Furious, Kwakpa turned on Heibok Ningthou to kill him. Then Heibok Ningthou turned Khuman Kwakpa into stone as well. Irai Leima saw all this and was frightened. She ran away from her own father. She passed Pakhra Ching, crossed the Liwa river and hid herself inside the grain storehouse of Sarangthem Luwangba. When Luwangba and his wife Thoidingjam Chanu Amurei left the house for the paddy field, Irai Leima came out from her hideout and did all the housework for them. When they returned home, she went back to her hideout. They were amazed at this, but it happened daily. So, one day, the man returned home earlier than usual and found out what was really going on. But when he came near Irai Leima, she vanished beneath the grain storehouse. He saw nothing under the granary and was astonished. So he discussed the matter with his clan members. They searched for her everywhere but could not find her. Irai Leima then appeared in a dream to Sarangthem Luwangba. She told him that she had merged into his clan and become his daughter. The story was told to King Senbi Kiyamba of the Ningthouja dynasty. The King sent maibas and maibis to inspect the case. The maibas and the maibis concluded the strange lady to be a goddess and a deity to be worshipped. King Kiyamba told Luwangba to worship her. From that year onwards, Irai Leima was worshipped as a goddess. The first day on which Luwangba saw Irai Leima was the first Monday of the Meitei lunar month of Lamta (Lamda), and the day on which the maibas and maibis examined the case was the first Tuesday of Lamta (Lamda). Even today, right from the era of King Senbi Kiyamba (1467-1508 AD), the Sarangthem family members organise a grand feast (Chaklen Katpa) in honor of the goddess every year. Later, Irai Leima came to be known as Hiyangthang Lairembi. 
Festival Devotees believe that Goddess Hiyangthang Lairembi (alias Irai Leima) fulfills one's wish if asked for a blessing on the "boon day" at the right time. The boon day (Bor Numit) coincides with the third day of the Hindu festival Durga Puja. References Meitei architecture Temples in Manipur
Hiyangthang Lairembi Temple
[ "Engineering" ]
1,101
[ "Meitei architecture", "Architecture" ]
62,217,327
https://en.wikipedia.org/wiki/Histopathology%20of%20colorectal%20adenocarcinoma
The histopathology of colorectal cancer of the adenocarcinoma type involves analysis of tissue taken from a biopsy or surgery. A pathology report contains a description of the microscopic characteristics of the tumor tissue, including both the tumor cells and how the tumor invades healthy tissues, and finally whether the tumor appears to be completely removed. The most common form of colon cancer is adenocarcinoma, constituting between 95% and 98% of all cases of colorectal cancer. Other, rarer types include lymphoma, adenosquamous and squamous cell carcinoma. Some subtypes have been found to be more aggressive. Macroscopy Cancers on the right side of the large intestine (ascending colon and cecum) tend to be exophytic, that is, the tumor grows outwards from one location in the bowel wall. This very rarely causes obstruction of feces, and presents with symptoms such as anemia. Left-sided tumors tend to be circumferential, and can obstruct the bowel lumen much like a napkin ring, resulting in thinner-caliber stools. Microscopy Adenocarcinoma is a malignant epithelial tumor, originating from superficial glandular epithelial cells lining the colon and rectum. It invades the wall, infiltrating the muscularis mucosae layer, the submucosa, and then the muscularis propria. Tumor cells form irregular tubular structures, showing pluristratification, multiple lumens and reduced stroma ("back to back" aspect). Sometimes, tumor cells are discohesive and secrete mucus, which invades the interstitium, producing large pools of mucus. This occurs in mucinous adenocarcinoma, in which cells are poorly differentiated. If the mucus remains inside the tumor cell, it pushes the nucleus to the periphery; this occurs in signet-ring cell carcinoma. Depending on the glandular architecture, cellular pleomorphism, and mucosecretion of the predominant pattern, adenocarcinoma may present three degrees of differentiation: well, moderately, and poorly differentiated. 
Micrographs (H&E stain) Microscopic criteria A lesion at least "high grade intramucosal neoplasia" (high grade dysplasia) has: Severe cytologic atypia Cribriform architecture, consisting of juxtaposed gland lumens without stroma in between, with loss of cell polarity. Rarely, they have foci of squamous differentiation (morules). This should be distinguished from cases where piles of well-differentiated mucin-producing cells appear cribriform. In such piles, nuclei show regular polarity with apical mucin, and their nuclei are not markedly enlarged. Invasive adenocarcinoma commonly displays: Varying degrees of gland formation with tall columnar cells Frequently desmoplasia Dirty necrosis, consisting of extensive central necrosis with granular eosinophilic karyorrhectic cell detritus. It is located within the glandular lumina, or often with a garland of cribriform glands in their vicinity. Subtyping Determining the specific histopathologic subtype of colorectal adenocarcinoma is not as important as its staging (see Staging below), and about half of cases do not have any specific subtype. Still, it is customary to specify it where applicable. Differential diagnosis Colorectal adenocarcinoma is distinguished from a colorectal adenoma (mainly tubular and/or villous adenomas) mainly by invasion through the muscularis mucosae. In carcinoma in situ (Tis), cancer cells invade into the lamina propria, and may involve but do not penetrate the muscularis mucosae. This can be classified as an adenoma with "high-grade dysplasia", because prognosis and management are essentially the same. Grading Conventional adenocarcinoma is graded by its degree of gland formation, from well to poorly differentiated. Staging Staging is typically made according to the TNM staging system from the WHO, the UICC and the AJCC. The Astler-Coller classification (1954) and the Dukes classification (1932) are now less used. 
T stands for tumor stage and ranges from 0, no evidence of primary tumor, to T4 when the tumor penetrates the surface of the peritoneum or directly invades other organs or structures. The N stage reflects the number of metastatic lymph nodes and ranges from 0 (no lymph node metastasis) to 2 (four or more lymph node metastasis), and the M stage gives information about distant metastasis (M0 stands for no distant metastasis, and M1 for the presence of distant metastasis). A clinical classification (cTNM) is done at diagnosis and is based on MRI and CT, and a pathological TNM (pTNM) classification is performed after surgery. The most common metastasis sites for colorectal cancer are the liver, the lung and the peritoneum. Tumor budding Tumor budding in colorectal cancer is loosely defined by the presence of individual cells and small clusters of tumor cells at the invasive front of carcinomas. It has been postulated to represent an epithelial–mesenchymal transition (EMT). Tumor budding is a well-established independent marker of a potentially poor outcome in colorectal carcinoma that may allow for dividing people into risk categories more meaningful than those defined by TNM staging, and also potentially guide treatment decisions, especially in T1 and T3 N0 (Stage II, Dukes’ B) colorectal carcinoma. Unfortunately, its universal acceptance as a reportable factor has been held back by a lack of definitional uniformity with respect to both qualitative and quantitative aspects of tumor budding. Immunohistochemistry In cases where a metastasis from colorectal cancer is suspected, immunohistochemistry is used to ascertain correct diagnosis. Some proteins are more specifically expressed in colorectal cancer and can be used as diagnostic markers such as CK20 and MUC2. Immunohistochemistry can also be used to screen for Lynch syndrome, a genetic disorder with increased risk of colorectal and other cancers. 
The diagnosis of Lynch syndrome is made by looking for specific genetic mutations in genes MLH1, MSH2, MSH6, and PMS2. Immunohistochemical testing can also be used to guide treatment and assist in determining the prognosis. Certain markers isolated from the tumor can indicate specific cancer types or susceptibility to different treatments. References Colorectal cancer Histopathology
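The TNM stage grouping described in the Staging section can be sketched as a simple lookup. The function below is an illustrative simplification of the AJCC grouping rules for colorectal cancer (the a/b/c sub-stages used clinically are omitted), and the function name is hypothetical:

```python
def tnm_stage(t: int, n: int, m: int) -> str:
    """Simplified AJCC stage grouping for colorectal cancer.

    t: primary tumor stage (0-4), n: nodal stage (0-2),
    m: distant metastasis (0 or 1). Illustrative sketch only;
    clinical grouping also uses a/b/c sub-stages.
    """
    if m == 1:
        return "IV"   # any distant metastasis is stage IV
    if n >= 1:
        return "III"  # nodal involvement without distant metastasis
    if t >= 3:
        return "II"   # node-negative, tumor into/through muscularis propria
    if t >= 1:
        return "I"
    return "0"        # carcinoma in situ (Tis)

print(tnm_stage(3, 0, 0))  # prints II (e.g. T3 N0 M0, Dukes' B)
```

This mirrors the order of precedence in the prose: the M stage dominates, then the N stage, then the T stage.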
Histopathology of colorectal adenocarcinoma
[ "Chemistry" ]
1,414
[ "Histopathology", "Microscopy" ]
62,217,733
https://en.wikipedia.org/wiki/Paul%20J.%20Tesar
Paul J. Tesar is an American developmental biologist. He is the Dr. Donald and Ruth Weber Goodman Professor of Innovative Therapeutics at Case Western Reserve University School of Medicine. His research is focused on regenerative medicine. Early life and education Tesar was born in Cleveland, Ohio. He graduated with a BSc in biology from Case Western Reserve University in 2003. As part of the National Institutes of Health Oxford-Cambridge Scholar Program, he earned a PhD in 2007. Career While a graduate student, Tesar published a paper describing epiblast-derived stem cells, a new type of pluripotent stem cell, research for which he received both the Beddington Medal of the British Society for Developmental Biology and the Harold M. Weintraub Award of the Fred Hutchinson Cancer Research Center. In 2010 he returned to Case Western Reserve University School of Medicine to teach. In 2014 he was appointed to the Dr. Donald and Ruth Weber Goodman chair in innovative therapeutics. Research Tesar developed methods to generate and grow oligodendrocytes and oligodendrocyte progenitor cells (OPCs) from pluripotent stem cells and skin cells. He also made human brain organoids containing human myelin, called oligocortical spheroids. Tesar identified drugs that stimulate myelin regeneration and reverse paralysis in mice with multiple sclerosis. Tesar also identified CRISPR and antisense oligonucleotide therapeutics that restored myelination and extended the lifespan of mice with Pelizaeus–Merzbacher disease. Awards Beddington Medal from the British Society for Developmental Biology Harold M. Weintraub Award Outstanding Young Investigator Award, International Society for Stem Cell Research Senior Member of the National Academy of Inventors References American medical researchers Stem cell researchers Living people Case Western Reserve University faculty 1981 births Case Western Reserve University School of Medicine alumni Alumni of the University of Oxford
Paul J. Tesar
[ "Biology" ]
389
[ "Stem cell researchers", "Stem cell research" ]
62,218,033
https://en.wikipedia.org/wiki/24%20Comae%20Berenices
24 Comae Berenices is a triple star system in the northern constellation of Coma Berenices. It is visible to the naked eye, with the brightest component being an orange-hued star with an apparent visual magnitude of 5.03. The system is located at a distance of approximately 269 light-years from the Sun based on parallax, and is drifting further away with radial velocities of 3–5 km/s. This system can be resolved in a telescope as a pair of stars with an angular separation of along a position angle of 272°, as of 2018. They share a common motion through space and thus appear to be physically associated, with a wide projected separation of or greater. If they are bound in an orbit, the estimated period is approximately 28,000 years. The brighter member of this system is an aging giant or bright giant star with a stellar classification of K0II-III. It has exhausted the supply of hydrogen at its core and expanded to 20 times the girth of the Sun. This is a suspected variable that has been recorded ranging in brightness from magnitude 4.98 down to 5.06. The star is radiating 173 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,688 K. The fainter component at magnitude 6.57 is a double-lined spectroscopic binary with an orbital period of 7.33 days and an eccentricity of 0.26. The primary member of this pair is an A-type main-sequence star with a stellar classification of A9V. It is a metallic-lined Am star with 2.2 times the radius of the Sun. The stars of this pair radiate about 16 and 7 times the Sun's luminosity from their photospheres, at effective temperatures of 7,630 K and 7,180 K, respectively. Both have relatively low projected rotational velocities of around 14 km/s, and it is suspected that the rotations of this binary system may be synchronized. The system is a source of X-ray emission, which is most likely coming from the secondary. 
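The ~28,000-year period estimate for the wide pair can be sanity-checked with Kepler's third law in solar units. The separation and combined mass below are assumed round numbers for illustration only; the article does not give exact values:

```python
# Kepler's third law in solar units: P[yr]**2 = a[AU]**3 / M_total[Msun].
# Both inputs are illustrative assumptions, not values from the article.
a_au = 1500.0    # assumed orbital separation in AU
m_total = 4.0    # assumed combined mass of the system in solar masses

period_yr = (a_au ** 3 / m_total) ** 0.5
print(round(period_yr))  # on the order of the ~28,000 yr quoted
```

With these round inputs the period comes out near 29,000 years, the same order as the quoted estimate; the actual published value depends on the measured separation and masses.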
References K-type bright giants K-type giants Suspected variables A-type main-sequence stars Am stars Spectroscopic binaries Triple stars Coma Berenices Durchmusterung objects Coma Berenices, 24 109510/1 61415/61418 4791/2
24 Comae Berenices
[ "Astronomy" ]
493
[ "Coma Berenices", "Constellations" ]
62,219,051
https://en.wikipedia.org/wiki/Janet%20Barlow%20%28scientist%29
Janet Barlow is a Scottish scientist and professor of environmental physics at the University of Reading. She is an experimental physicist who has made significant contributions to our understanding of urban meteorology, with particular regard to weather forecasting, urban sustainability, indoor and outdoor air quality, building ventilation, and environmental wind engineering. Education and research career Barlow completed a BSc in Applied Physics with German at UMIST in 1994, followed by an MSc in Applied Meteorology and Agriculture at the University of Reading in 1995. In 1999, Barlow completed a PhD on the turbulent transfer of space charge in the atmospheric boundary layer at the University of Reading. After a three-year postdoctoral research associate post, she took up a lectureship at the University of Reading in 2002. From 2011 to 2014, Barlow was director of the Centre for Technologies for Sustainable Built Environments (TSBE) at the University of Reading. Urban meteorology Barlow's work is largely experimental in nature, using both wind-tunnel-based physical modelling and urban observational campaigns. Using a unique observatory at the top of the BT Tower in London, Barlow has researched the effect of weather and climate on urban pollutants and air quality. In addition to urban meteorology, she has studied boundary-layer flow effects around wind farms and the integration of renewable energy into the energy system. Barlow has also researched the effect of urban environments on the generation of wind energy. Recognition and community duties 2010-14 Member of Board of Urban Environment, American Meteorological Society 2003-7 Elected Member of Board of the International Association of Urban Climate. 2017- Member of UK Met Office Scientific Advisory Committee References Living people Environmental scientists Scottish meteorologists Year of birth missing (living people)
Janet Barlow (scientist)
[ "Environmental_science" ]
336
[ "Environmental scientists", "British environmental scientists" ]
62,220,759
https://en.wikipedia.org/wiki/QuTiP
QuTiP, short for the Quantum Toolbox in Python, is an open-source computational physics software library for simulating quantum systems, particularly open quantum systems. QuTiP allows simulation of Hamiltonians with arbitrary time-dependence, covering situations of interest in quantum optics, ion trapping, superconducting circuits and quantum nanomechanical resonators. The library includes extensive facilities for visualizing simulation output. QuTiP's API provides a Python interface and uses Cython to allow run-time compilation and extensions via C and C++. QuTiP is built to work well with the popular Python packages NumPy, SciPy, Matplotlib and IPython. History The idea for the QuTiP project was conceived in 2010 by PhD student Paul Nation, who was using the quantum optics toolbox for MATLAB in his research. According to Nation, he wanted to create a Python package similar to qotoolbox because he "was not a big fan of MATLAB" and then decided to "just write it [him]self". As a postdoctoral fellow at the RIKEN Institute in Japan, he met Robert Johansson and the two worked together on the package. In contrast to its predecessor qotoolbox, which relies on the proprietary MATLAB environment, QuTiP was published in 2012 under an open-source license. The version created by Nation and Johansson already contained the most important features of the package, but QuTiP's scope and features are constantly being extended by a large community of contributors. It has grown in popularity amongst physicists, with over 250,000 downloads in 2021. 
Examples

Creating quantum objects

>>> import qutip
>>> import numpy as np
>>> psi = qutip.Qobj([[0.6], [0.8]])  # create quantum state from a list
>>> psi
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[0.6]
 [0.8]]
>>> phi = qutip.Qobj(np.array([0.8, -0.6]))  # create quantum state from a numpy array
>>> phi
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[ 0.8]
 [-0.6]]
>>> e0 = qutip.basis(2, 0)  # create a basis vector
>>> e0
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[1.]
 [0.]]
>>> A = qutip.Qobj(np.array([[1, 2j], [-2j, 1]]))  # create quantum operator from numpy array
>>> A
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[1.+0.j 0.+2.j]
 [0.-2.j 1.+0.j]]
>>> qutip.sigmay()  # some common quantum objects, like Pauli matrices, are predefined in the qutip package
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[0.+0.j 0.-1.j]
 [0.+1.j 0.+0.j]]

Basic operations

>>> A * qutip.sigmax() + qutip.sigmay()  # we can add and multiply quantum objects of compatible shape and dimension
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = False
Qobj data =
[[0.+2.j 1.-1.j]
 [1.+1.j 0.-2.j]]
>>> psi.dag()  # Hermitian conjugate
Quantum object: dims = [[1], [2]], shape = (1, 2), type = bra
Qobj data =
[[0.6 0.8]]
>>> psi.proj()  # projector onto a quantum state
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[0.36 0.48]
 [0.48 0.64]]
>>> A.tr()  # trace of operator
2.0
>>> A.eigenstates()  # diagonalize an operator
(array([-1.,  3.]), array([Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[-0.70710678+0.j        ]
 [ 0.        -0.70710678j]]
, Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[-0.70710678+0.j        ]
 [ 0.        +0.70710678j]]
], dtype=object))
>>> (1j * A).expm()  # matrix exponential of an operator
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = False
Qobj data =
[[-0.2248451-0.35017549j -0.4912955-0.7651474j ]
 [ 0.4912955+0.7651474j  -0.2248451-0.35017549j]]
>>> qutip.tensor(qutip.sigmaz(), qutip.sigmay())  # tensor product
Quantum object: dims = [[2, 2], [2, 2]], shape = (4, 4), type = oper, isherm = True
Qobj data =
[[0.+0.j 0.-1.j 0.+0.j 0.+0.j]
 [0.+1.j 0.+0.j 0.+0.j 0.+0.j]
 [0.+0.j 0.+0.j 0.+0.j 0.+1.j]
 [0.+0.j 0.+0.j 0.-1.j 0.+0.j]]

Time evolution

>>> import matplotlib.pyplot as plt
>>> Hamiltonian = qutip.sigmay()
>>> times = np.linspace(0, 2, 10)
>>> result = qutip.sesolve(Hamiltonian, psi, times, [psi.proj(), phi.proj()])  # unitary time evolution according to the Schroedinger equation
>>> expectpsi, expectphi = result.expect  # expectation values of projectors onto psi and phi
>>> plt.figure(dpi=200)
>>> plt.plot(times, expectpsi)
>>> plt.plot(times, expectphi)
>>> plt.legend([r"$\psi$", r"$\phi$"])
>>> plt.show()

Simulating a non-unitary time evolution according to the Lindblad master equation is possible with the qutip.mesolve function. References External links Articles with example Python (programming language) code Computational physics Free software programmed in Python Simulation software Software using the BSD license Quantum Monte Carlo
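The qutip.mesolve function can be used, for example, to simulate a damped qubit. The Hamiltonian, decay rate and initial state below are illustrative choices, not an example taken from the QuTiP documentation:

```python
import numpy as np
import qutip

# Damped two-level system: H = sigma_z, with spontaneous decay at rate
# gamma modelled by the collapse operator sqrt(gamma) * sigma_minus.
# (gamma and the initial state are illustrative choices.)
gamma = 0.5
H = qutip.sigmaz()
c_ops = [np.sqrt(gamma) * qutip.sigmam()]

psi0 = qutip.basis(2, 0)             # start in the excited state
times = np.linspace(0.0, 10.0, 100)

# mesolve integrates the Lindblad master equation; e_ops tracks the
# excited-state population, which decays as exp(-gamma * t).
result = qutip.mesolve(H, psi0, times, c_ops=c_ops, e_ops=[psi0.proj()])
population = result.expect[0]
print(population[0], population[-1])  # starts at 1, decays toward 0
```

Because the Hamiltonian here commutes with the tracked projector, the population follows the analytic exponential decay exactly, which makes this a convenient correctness check for solver settings.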
QuTiP
[ "Physics", "Chemistry" ]
1,759
[ "Quantum Monte Carlo", "Quantum chemistry", "Computational physics" ]
62,221,489
https://en.wikipedia.org/wiki/21%20Comae%20Berenices
21 Comae Berenices is a variable star in the northern constellation of Coma Berenices. It has the variable star designation UU Comae Berenices, while 21 Comae Berenices is the Flamsteed designation. About According to R. H. Allen, English orientalist Thomas Hyde attributed the ancient title Kissīn to this star, a name that comes from a climbing plant – either bindweed or dog rose. This star has a white hue and is just visible to the naked eye with an apparent visual magnitude that fluctuates around 5.47. Based upon parallax measurements, it is located at a distance of approximately 270 light-years from the Sun. It is a single star but is a confirmed physical member of the Melotte 111 open cluster. History This object has been studied extensively since 1953, producing some occasionally contradictory results such as hints of pulsational behavior or a binary companion. It is a weakly magnetic chemically peculiar star of type CP2, or Ap star, that is most likely on the main sequence. The stellar classification is A3p SrCr, where the suffix notation indicates abundance anomalies of the iron-peak element chromium, as well as strontium. This is an Alpha2 Canum Venaticorum (ACV) variable, which indicates it varies in luminosity as it rotates due to spots on its surface created by a magnetic field. The range of variation has an amplitude of 0.02 magnitude and a period of just over two days. Samus et al. (2017) have it classified as a low-amplitude Delta Scuti variable, although this is disputed. The age of the Melotte 111 cluster, and therefore this star, lies in the range of 400–800 million years. The star has a projected rotational velocity of 63 km/s, with a polar inclination of 64° or greater, resulting in a rotation period of 2.05 days. Stellar evolutionary models yield a mass of around 2.3 times that of the Sun and 2.6 times the Sun's radius for this object. 
It is radiating 38 times the luminosity of the Sun from its photosphere at an effective temperature of 8,900 K. References A-type main-sequence stars Ap stars Alpha2 Canum Venaticorum variables Delta Scuti variables Coma Berenices 4968 BD+18 2697 Comae Berenices, 42 0501 114378 064241 Comae Berenices, UU
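The rotation figures quoted above (radius ≈ 2.6 solar radii, v sin i = 63 km/s, inclination ≥ 64°, period 2.05 days) can be cross-checked with a short back-of-the-envelope calculation. This is an illustrative sketch only: the inputs are the rounded catalog values from the article, so the result approximates, rather than reproduces, the stated 2.05-day period.

```python
import math

# Rounded values quoted for 21 Comae Berenices
radius_solar = 2.6        # stellar radius, in solar radii
v_sin_i = 63.0            # projected rotational velocity, km/s
inclination_deg = 64.0    # (lower bound on the) polar inclination

R_SUN_KM = 695_700.0      # IAU nominal solar radius, km

# Undo the sin(i) projection, then period P = circumference / equatorial velocity
v_eq = v_sin_i / math.sin(math.radians(inclination_deg))
period_s = 2 * math.pi * radius_solar * R_SUN_KM / v_eq
period_days = period_s / 86_400
print(f"rotation period ~ {period_days:.2f} days")  # ~1.9 days, near the quoted 2.05
```

The small residual difference comes from rounding in the radius, velocity, and inclination values.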
21 Comae Berenices
[ "Astronomy" ]
517
[ "Coma Berenices", "Constellations" ]
62,221,665
https://en.wikipedia.org/wiki/Joanna%20Clark
Joanna Clark (1978 – 4 August 2022) was professor of environmental science at the University of Reading. She worked on aspects of carbon and water cycles in terrestrial and freshwater ecosystems from test-tube to catchment scale. She founded and directed the Loddon Observatory, which aims to bring together academia, charities, the public sector and business to support sustainable societies. Education and research career Clark completed a BSc in geography at the University of Durham in 1999, followed by an MSc in monitoring, modelling and management of environmental change at King's College London in 2000. She completed a PhD in physical geography at the University of Leeds in 2005, before undertaking postdoctoral research associate positions at Leeds, Bangor and Imperial College London. She moved to the University of Reading in 2010. Carbon and water cycle research Clark's research focussed on understanding the interactions between water, carbon and other biogeochemical cycles within terrestrial and freshwater ecosystems. She had specific interests in peatland biogeochemistry. Clark used lab simulation experiments, field monitoring, modelling and remote sensing. Her work on natural flood management used natural land-based measures to reduce the risk of flooding for communities. Clark's work with the water sector addressed issues relating to the continued supply of clean water in the face of a growing population, ageing infrastructure and the impacts of climate change. Clark also promoted the use of agroforestry for removal of greenhouse gases from the atmosphere. Recognition and service Editor for PeerJ Member of the editorial board for Scientific Reports References 1978 births 2022 deaths Environmental scientists Alumni of Durham University Alumni of King's College London Alumni of the University of Leeds Academics of the University of Reading Place of birth missing Place of death missing 21st-century British earth scientists British women earth scientists
Joanna Clark
[ "Environmental_science" ]
357
[ "Environmental scientists", "British environmental scientists" ]
62,222,414
https://en.wikipedia.org/wiki/Enrique%20Lores
Enrique José Lores Obradors (born 1965) is a Spanish business executive, and the CEO of HP Inc. since November 2019. Lores was born in Madrid. He earned a bachelor's degree in electrical engineering from the Polytechnic University of Valencia, and an MBA from ESADE Business School in Barcelona. Lores joined HP as an engineering intern in 1989. In November 2019, Lores succeeded Dion Weisler as CEO of HP Inc, after Weisler stepped down due to "a family health matter". Lores had been president of HP’s imaging, printing and solutions business. References Living people American chief executives of Fortune 500 companies Hewlett-Packard people 1960s births ESADE alumni Technical University of Valencia alumni Businesspeople from Madrid
Enrique Lores
[ "Technology" ]
152
[ "Lists of people in STEM fields", "Proprietary technology salespersons" ]
62,222,719
https://en.wikipedia.org/wiki/Nokia%202720%20Flip
The Nokia 2720 Flip is a Nokia-branded flip phone developed by HMD Global. The 2720 Flip was created as an updated version of the Nokia 2720 Fold, which debuted in 2009. It was unveiled at IFA 2019 together with the Nokia 110 (2019), Nokia 800 Tough, Nokia 6.2, and Nokia 7.2. It runs the KaiOS operating system, a web-based operating system based on B2G OS. Target audience According to a 2019 article from The Verge, "If you’re seriously considering buying a feature phone like the Nokia 2720 Flip in 2019 then I think you’re likely to be one of three kinds of people. Either you live or work in the developing world, where smartphones sometimes aren’t a practical option, you’re the kind of person who thrives on nostalgia for a simpler time, or else you’re trying to do a “digital detox,” and decrease the amount of time you seem to waste staring at screens every day". This fits the general trend of modern flip phones being seen as a less-harmful alternative to smartphones. Software The Nokia 2720 Flip runs on the web-based operating system KaiOS. It supports many modern apps including, but not limited to, WhatsApp, Facebook, Google Assistant and YouTube. Reception In February 2020, the Nokia 2720 Flip received an iF Design Award 2020 from iF International Forum Design. See also Nokia 8110 4G Nokia 800 Tough References External links 2720 Flip KaiOS phones Mobile phones introduced in 2019 Mobile phones with user-replaceable battery
Nokia 2720 Flip
[ "Technology" ]
330
[ "Mobile technology stubs", "Mobile phone stubs" ]
62,223,258
https://en.wikipedia.org/wiki/Fantastic%20Fungi
Fantastic Fungi is a 2019 American documentary film directed by Louie Schwartzberg. The film combines time-lapse cinematography, CGI, and interviews in an overview of the biology, environmental roles, and various uses of fungi. The film features interview segments with Paul Stamets and Michael Pollan, and is narrated by Brie Larson. Reception On review aggregator website Rotten Tomatoes, the film has an approval rating based on critics' reviews, with an average rating of . The site's critical consensus reads, "As visually dazzling as it is thought-provoking, Fantastic Fungi sets out to make audiences see mushrooms differently -- and brilliantly succeeds." On Metacritic, the film has a score of 70 out of 100, based on 8 reviews, indicating "generally favorable reviews". Critics praised Schwartzberg's time-lapse cinematography. Some critics found the narration unnecessary. Josh Kupecki of The Austin Chronicle said "visual affectations aside, Fantastic Fungi is an engaging look at the scope of an organism that is so much more than a pizza topping or an ingredient in beef stroganoff". Andrew Pulver of The Guardian wrote "With its spectacular footage of growth and decay and impassioned speeches about the magic of mushrooms, this documentary is a treat for the eye and ear". Rex Reed of The New York Observer called the documentary "charming", while John DeFore of The Hollywood Reporter called the film an "[e]ye-opening eye candy". See also Edible mushroom Evolution of fungi References External links American documentary films Fungi and humans 2010s English-language films 2010s American films English-language documentary films
Fantastic Fungi
[ "Biology" ]
392
[ "Fungi and humans", "Fungi", "Humans and other species" ]
62,223,295
https://en.wikipedia.org/wiki/Nokia%20800%20Tough
The Nokia 800 Tough is a Nokia-branded mobile phone developed by HMD Global. It was unveiled at IFA 2019 together with the Nokia 110 (2019), Nokia 2720 Flip, Nokia 6.2, and Nokia 7.2. It was preceded by the Nokia 2720 Flip. The device can survive a free fall from up to 1.8 meters (6'), is IP68 rated and conforms to MIL-STD-810G standard. It runs on the KaiOS operating system, and has a non-removable battery with a capacity of 2100 mAh. The device has an MP3/WAV/AAC/MP4/H.264 player, predictive T9 typing, supports SNS (Social Networking Service) apps including WhatsApp, Facebook and Google Assistant. In February 2020, the device received an iF Design Award 2020 by iF International Forum Design. See also Nokia 2720 Flip Nokia 8110 4G References External links 800 Tough KaiOS phones Mobile phones introduced in 2019 Nokia phones by series
Nokia 800 Tough
[ "Technology" ]
216
[ "Mobile technology stubs", "Mobile phone stubs" ]
62,223,323
https://en.wikipedia.org/wiki/Square%20root%20of%206
The square root of 6 is the positive real number that, when multiplied by itself, gives the natural number 6. It is more precisely called the principal square root of 6, to distinguish it from the negative number with the same property. This number appears in numerous geometric and number-theoretic contexts. It can be denoted in surd form as √6 and in exponent form as 6^(1/2). It is an irrational algebraic number. Its decimal expansion begins 2.449489742783178…, which can be rounded up to 2.45 to within about 99.98% accuracy (about 1 part in 4800); that is, it differs from the correct value by about 0.0005. It takes two more digits (2.4495) to reduce the error by about half. The approximation 218/89 (≈ 2.449438...) is nearly ten times better: despite having a denominator of only 89, it differs from the correct value by less than 0.00006, or less than one part in 47,000. Since 6 is the product of 2 and 3, the square root of 6 is the geometric mean of 2 and 3, and is the product of the square root of 2 and the square root of 3, both of which are irrational algebraic numbers. NASA has published more than a million decimal digits of the square root of six. Rational approximations The square root of 6 can be expressed as the simple continued fraction [2; 2, 4, 2, 4, 2, 4, …]. The successive partial evaluations of the continued fraction, which are called its convergents, approach √6. Their numerators are 2, 5, 22, 49, 218, 485, 2158, 4801, 21362, 47525, 211462, …, and their denominators are 1, 2, 9, 20, 89, 198, 881, 1960, 8721, 19402, 86329, …. Each convergent is a best rational approximation of √6; in other words, it is closer to √6 than any rational with a smaller denominator.
Decimal equivalents of the convergents improve linearly, at a rate of nearly one digit per convergent. The convergents, expressed as p/q, alternately satisfy the Pell equations p² − 6q² = −2 and p² − 6q² = 1. When √6 is approximated with the Babylonian method, starting with x₀ = 2 and using x_{n+1} = (x_n + 6/x_n)/2, the nth approximant x_n is equal to the 2ⁿth convergent of the continued fraction. The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial f(x) = x² − 6. The Newton's method update x_{n+1} = x_n − f(x_n)/f′(x_n) is equal to (x_n + 6/x_n)/2 when f(x) = x² − 6. The method therefore converges quadratically. Geometry In plane geometry, the square root of 6 can be constructed via a sequence of dynamic rectangles, as illustrated here. In solid geometry, the square root of 6 appears as the longest distance between corners (vertices) of the double cube, as illustrated above. The square roots of all lower natural numbers appear as the distances between other vertex pairs in the double cube (including the vertices of the included two cubes). The edge length of a cube with a total surface area of 1 is 1/√6, the reciprocal square root of 6. The edge lengths of a regular tetrahedron (a), a regular octahedron (b), and a cube (c) of equal total surface areas satisfy ab = √6·c². The edge length of a regular octahedron is the square root of 6 times the radius of an inscribed sphere (that is, the distance from the center of the solid to the center of each face). The square root of 6 appears in various other geometry contexts, such as the side length for the square enclosing an equilateral triangle of side 2 (see figure). Trigonometry The square root of 6, with the square root of 2 added or subtracted, appears in several exact trigonometric values for angles at multiples of 15 degrees (π/12 radians); for example, sin 15° = cos 75° = (√6 − √2)/4 and sin 75° = cos 15° = (√6 + √2)/4. In culture Villard de Honnecourt's 13th century construction of a Gothic "fifth-point arch" with circular arcs of radius 5 has a height of twice the square root of 6, as illustrated here.
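The convergent, Pell-equation, and Babylonian-method claims in this section can be checked with a short script using exact rational arithmetic (a minimal sketch; `sqrt6_convergents` is an illustrative helper, not a standard library function):

```python
from fractions import Fraction

def sqrt6_convergents(n):
    """First n convergents of sqrt(6) = [2; 2, 4, 2, 4, ...]."""
    coeffs = [2] + [2 if k % 2 else 4 for k in range(1, n)]
    p_prev, q_prev = 1, 0          # conventional seed values p_{-1}, q_{-1}
    p, q = coeffs[0], 1
    convergents = [Fraction(p, q)]
    for a in coeffs[1:]:
        # standard recurrence: p_k = a_k p_{k-1} + p_{k-2}, same for q
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        convergents.append(Fraction(p, q))
    return convergents

convs = sqrt6_convergents(9)
assert [c.numerator for c in convs[:5]] == [2, 5, 22, 49, 218]

# Convergents p/q alternately satisfy the Pell equations p^2 - 6q^2 = -2 and +1
for k, c in enumerate(convs):
    assert c.numerator**2 - 6 * c.denominator**2 == (-2 if k % 2 == 0 else 1)

# Babylonian method: x_0 = 2, x_{n+1} = (x_n + 6/x_n)/2 lands on the 2^n-th convergent
x = Fraction(2)
for n in range(1, 4):
    x = (x + 6 / x) / 2
    assert x == convs[2**n - 1]
```

Using `Fraction` avoids floating-point rounding, so the Pell and convergent identities are verified exactly.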
See also Square root Square root of 2 Square root of 3 Square root of 5 Square root of 7 References Mathematical constants Quadratic irrational numbers
Square root of 6
[ "Mathematics" ]
917
[ "Mathematical constants", "Mathematical objects", "Numbers", "nan" ]
62,223,541
https://en.wikipedia.org/wiki/Sachdev%E2%80%93Ye%E2%80%93Kitaev%20model
In condensed matter physics and black hole physics, the Sachdev–Ye–Kitaev (SYK) model is an exactly solvable model initially proposed by Subir Sachdev and Jinwu Ye, and later modified by Alexei Kitaev to the present commonly used form. The model is believed to bring insights into the understanding of strongly correlated materials, and it also has a close relation with the discrete model of AdS/CFT. Many condensed matter systems, such as a quantum dot coupled to topological superconducting wires, a graphene flake with an irregular boundary, and a kagome optical lattice with impurities, are proposed to be modeled by it. Some variants of the model are amenable to digital quantum simulation, with pioneering experiments implemented in nuclear magnetic resonance. Model Let N be an integer and q an even integer such that 2 ≤ q ≤ N, and consider a set of N Majorana fermions χ₁, …, χ_N, which are fermion operators satisfying the conditions: Hermitian: χᵢ = χᵢ†; Clifford relation: {χᵢ, χⱼ} = 2δᵢⱼ. Let J_{i₁⋯i_q} be random variables whose expectations satisfy: ⟨J_{i₁⋯i_q}⟩ = 0; ⟨J_{i₁⋯i_q}²⟩ = (q − 1)! J²/N^(q−1). Then the SYK model is defined as H = i^(q/2) Σ_{1≤i₁<⋯<i_q≤N} J_{i₁⋯i_q} χ_{i₁}⋯χ_{i_q}. Note that sometimes an extra normalization factor is included. The most famous model is when q = 4: H = (1/4!) Σ_{i,j,k,l} J_{ijkl} χᵢχⱼχₖχₗ, where the factor 1/4! is included to coincide with the most popular form H = Σ_{1≤i<j<k<l≤N} J_{ijkl} χᵢχⱼχₖχₗ. See also Non-Fermi liquid References Lattice models
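For small N, the Hamiltonian described above can be built explicitly by representing the Majorana operators as matrices through a Jordan–Wigner construction. This is a minimal illustrative sketch (the function names are hypothetical), using the q = 4 restricted-sum form with Gaussian couplings of variance 3!·J²/N³:

```python
import itertools
import math
import numpy as np

def majorana_operators(n):
    """n (even) Majorana operators as 2^(n/2)-dimensional matrices via
    Jordan-Wigner: each is Hermitian and {chi_i, chi_j} = 2 delta_ij."""
    assert n % 2 == 0
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I = np.eye(2, dtype=complex)
    chis = []
    for k in range(n // 2):
        for P in (X, Y):  # two Majorana operators per qubit
            factors = [Z] * k + [P] + [I] * (n // 2 - k - 1)
            op = factors[0]
            for f in factors[1:]:
                op = np.kron(op, f)
            chis.append(op)
    return chis

def syk_hamiltonian(n=8, J=1.0, seed=0):
    """Dense q = 4 SYK Hamiltonian H = sum_{i<j<k<l} J_ijkl chi_i chi_j chi_k chi_l,
    with independent Gaussian couplings of variance 3! J^2 / n^3."""
    rng = np.random.default_rng(seed)
    chis = majorana_operators(n)
    dim = chis[0].shape[0]
    sigma = math.sqrt(math.factorial(3) * J**2 / n**3)
    H = np.zeros((dim, dim), dtype=complex)
    for i, j, k, l in itertools.combinations(range(n), 4):
        H += rng.normal(0.0, sigma) * (chis[i] @ chis[j] @ chis[k] @ chis[l])
    return H

H = syk_hamiltonian()
# A product of four distinct Majorana operators is Hermitian, so H is too
assert np.allclose(H, H.conj().T)
```

Diagonalizing such dense Hamiltonians for a single disorder realization is how small-N SYK spectra are typically explored numerically; the construction here only demonstrates the algebra, not any particular published implementation.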
Sachdev–Ye–Kitaev model
[ "Physics", "Materials_science" ]
256
[ "Statistical mechanics", "Condensed matter physics", "Lattice models", "Computational physics" ]
62,223,644
https://en.wikipedia.org/wiki/Climate%20communication
Climate communication or climate change communication is a field of environmental communication and science communication focused on discussing the causes, nature and effects of anthropogenic climate change. Research in the field emerged in the 1990s and has since grown and diversified to include studies concerning the media, conceptual framing, and public engagement and response. Since the late 2000s, a growing number of studies have been conducted in countries in the Global South and have been focused on climate communication with marginalized populations. Most research focuses on raising public knowledge and awareness, understanding underlying cultural values and emotions, and bringing about public engagement and action. Major issues include familiarity with the audience, barriers to public understanding, creating change, audience segmentation, changing rhetoric, public health, storytelling, media coverage, and popular culture. History Scholar Amy E. Chadwick identifies climate change communication as a new field of scholarship that truly emerged in the 1990s. In the late 1980s and early 1990s, research in developed countries (e.g. the United States, New Zealand, and Sweden) was largely concerned with studying the public's perception and comprehension of climate change science, models, and risks and guiding further development of communication strategies. These studies showed that while the public was aware of and beginning to notice climate change effects (increasing temperatures and changing precipitation patterns), the public's understanding of climate change was interlinked with ozone depletion and other environmental risks but not human-produced emissions. This understanding was coupled with varied yet overall increased net concern that continued through the mid-2000s.
In studies from the mid-2000s to the late 2000s, there is evidence of rising global skepticism despite growing consensus and evidence of increasingly polarized views due to climate change's growing use as a political "litmus test." In 2010, researcher Susanne C. Moser viewed both the expansion of climate change communication's focus, which began to include subjects such as materialized evidence of climate change effects in addition to science and policy, as well as more prolific conversation/communication from a variety of voices as increasing climate change's relevance to society. Surveys through the mid-2010s showed mixed concern for climate change depending on global region —notably consistent concern in developed Western countries but a trend towards global unconcern in countries such as China, Mexico, and Kenya. In 2016, Moser noted an increase in the total number of climate communication studies in both Westernized countries and the Global South and an increased focus on climate communication with indigenous peoples and other marginalized communities since 2010. As of 2017, research remained focused on public understanding and had since begun to also analyze the relevance of the media, conceptual framing, public engagement and response, and persuasive strategies. This expansion has legitimated climate change communication as its own academic field and has yielded a group of experts specific to it. Primary goals of climate communication Most climate communication and research within the field is concerned with (1) the mechanisms related to the public's understanding/awareness of and perception of climate change which are intertwined with (2) personal cultural values and emotions related to social norms and (3) how these components can influence the engagement and action that may emerge as a response to communication. Within the academic field, there are debates over which is more important: knowledge-based communication or emotion-driven communication. 
Though both are inherently linked to action, researchers often view increased understanding as leading to increased action. A 2020 study by Kris De Meyer et al. attempts to push back against that notion and argues that action produces belief. Analyzing and increasing public understanding and perception One line of climate communication study is concerned with analyzing public understanding and risk perception. Understanding public perception of risk and its relevant influences, as well as public knowledge, concern, consensus, and imagery is thought to help policymakers better address the concerns of constituents and inform further climate communication. This notion has opened the realm of climate communications to political communications, sociology, and psychology. Achieving increased public understanding is often associated with communicating levels of scientific consensus and other scientific facts or futures in order to spur action and address the "information-deficit" model but can also be related to connecting with values and emotions. Perception is often related to personal recognition to impacted locations, times (the present vs. the future), weather events, or economics, which has placed emphasis on different methods of framing (linking concepts) and rhetoric when communicating. Connection of the self with events, such as those mentioned and often times through perceiving problems as local, increases recognition of the larger problem of climate change. These methods of communication presently include scientific communication, knowledge transfer, social media, news media, and entertainment amongst others, which are also studied individually regarding climate change. Some experts focus on how public perceptions of climate change can be related to public perceptions of smaller parts of the environment. 
Through teaching about the interconnectedness of humans and nature, some environmental writers believe that a fundamental shift in thinking is possible, and that this in turn would lead to greater desire to preserve the natural world. Connecting to values and emotions In addition to studies regarding knowledge, climate communication researchers inspect existing values and emotions related to climate change and how they are impacted by various communication strategies and can influence the effects of communication modes. Understanding and relating to the audiences' moral, cultural, religious, and political values, identities, and emotions (like fear) are viewed as imperative to appropriate and effective communication because climate change can otherwise seem intangible due to uncertainty and distance (physical, social, temporal). Recognizing and understanding these values is key to impacting perception of climate science and mitigative action because values serve as filters through which information is processed. Emotional reactions to climate change and the role emotions can play in decision-making have encouraged researchers to study the emotional side of climate change. Appeals to emotions (such as fear and hope) and to values can also be used in communication strategies. It is unclear whether negative emotions (e.g. concern and fear) or positive emotions (e.g. hope) better promote climate change action. Emotions can also be analyzed by their level of pleasantness and/or to the extent they evoke action, which is often understudied. Producing engagement and action Studying climate communications can also be focused on civic engagement and the production of behavior changes for adapting or increasing resiliency to climate change. 
Engagement and action can occur on multiple geographic scales (local, regional, national, or international), and examples include participation in climate justice movements, support for policies or politics, changes to agricultural practices, and efforts to address vulnerabilities to extreme weather. Behavioral changes can also address more fundamental norms and values that influence lifestyles, life choices, and society as a whole. Engagement can also involve how those who communicate climate change interact with researchers studying the field of communications. Studies have recognized that increased understanding and perception does not automatically produce action and have argued for increased means of enabling action in communication methods. Research into engagement and action often focuses on the perception and understanding of different demographics and geographic locations. Some politicians, such as Arnold Schwarzenegger with his slogan "terminate pollution", say that activists should generate optimism by focusing on the health co-benefits of climate action. Major issues Barriers to understanding Climate communication is heavily focused on methods for inviting larger-scale public action to address climate change. To this end, a lot of research focuses on barriers to public understanding and action on climate change. Scholarly evidence shows that the information deficit model of communication—where climate change communicators assume "if the public only knew more about the evidence they would act"—doesn't work. Instead, argumentation theory indicates that different audiences need different kinds of persuasive argumentation and communication. This is counter to many assumptions made by other fields such as psychology, environmental sociology, and risk communication.
Additionally, climate denialism by organizations, such as The Heartland Institute in the United States, and individuals introduces misinformation into public discourse and understanding. There are several models for explaining why the public doesn't act once more informed. One of the theoretical models for this is the 5 Ds model created by Per Espen Stoknes. Stoknes describes 5 major barriers to creating action from climate communication: Distance – many effects and impacts of climate change feel distant from individual lives Doom – when framed as a disaster, the message backfires, causing eco-anxiety Dissonance – a disconnect between the problems (mainly the fossil fuel economy) and the things that people choose in their lives Denial – psychological self-defense to avoid becoming overwhelmed by fear or guilt iDentity – disconnects created by social identities, such as conservative values, which are threatened by the changes that need to happen because of climate change. In her book Living in Denial: Climate Change, Emotions, and Everyday Life, Kari Norgaard's study of Bygdaby—a fictional name used for a real city in Norway—found that non-response was much more complex than just a lack of information. In fact, too much information can do the exact opposite because people tend to neglect global warming once they realize there is no easy solution. When people understand the complexity of the issue, they can feel overwhelmed and helpless, which can lead to apathy or skepticism. A study published in PLOS Climate studied defensive and secure forms of national identity—respectively called "national narcissism" and "secure national identification"—for their correlation to support for policies to mitigate climate change and to transition to renewable energy.
The researchers concluded that secure national identification tends to support policies promoting renewable energy; however, national narcissism was found to be inversely correlated with support for such policies—except to the extent that such policies, as well as greenwashing, enhance the national image. Right-wing political orientation, which may indicate susceptibility to climate conspiracy beliefs, was also concluded to be negatively correlated with support for genuine climate mitigation policies. A study published in PLOS One in 2024 found that even a single repetition of a claim was sufficient to increase the perceived truth of both climate science-aligned claims and climate change skeptic/denial claims—"highlighting the insidious effect of repetition". This effect was found even among climate science endorsers. Climate literacy Though communicating the science about climate change under the premises of an Information deficit model of communication is not very effective in creating change, comfort with and literacy in the main issues and topics of climate change is important for changing public opinion and action. Several agencies and educational organizations have developed frameworks and tools for developing climate literacy, including the Climate Literacy Lab at Georgia State university, and National Oceanic and Atmospheric Administration. Such resources in English have been collected by the Climate Literacy and Awareness Network. Creating change As of 2008, most of the environmental communications evidence for effecting individual or social change were focused on behavior changes around: household energy consumption, recycling behaviours, changing transportation behavior and buying green products. At that time, there were few examples of multi-level communications strategies for effecting change. Behaviour change Since much of Climate communication is focused on engaging broad public action, much of the studies are focused on effecting behavior change. 
Typically, effective climate communication has three parts: cognitive, affective and place-based appeals. Audience segmentation Different parts of different populations respond differently to climate change communication. Academic research since 2013 has seen an increasing number of audience segmentation studies, to understand different tactics for reaching different parts of populations. Major segmentation studies include: Segmentation of American audiences into 6 groups: Alarmed, Concerned, Cautious, Disengaged, Doubtful and Dismissive. Segmentation of Australians into 4 segments in 2011, and 6 segments analogous to the Six Americas model. Segmentation of German populations into 5 segments Segmentation of Indian populations into 6 segments Segmentation of Singapore audiences into 3 segments Segmentation of French audiences into 6 segments mixing climate attitudes and values. Changing rhetoric A significant part of the research and public advocacy conversations about climate change has focused on the effectiveness of different terms used to describe "global warming". More recently, the focus has shifted to rhetoric describing all aspects and effects of climate change, including human-non-human relationships. History of global warming Advocating change in the way non-humans are referred to In her book Braiding Sweetgrass, author and botanist Robin Wall Kimmerer has suggested that the way in which animals and plants are referred to in language, specifically the English language, impacts how they are perceived and therefore treated by persons who speak that language. Her ideas have gained attention and inspired other considerations of how language involving non-human species/groups affects views of and actions taken that involve them. The ways animals, plants, rivers, mountains, etc.
are expressed in legislation can, in the view of University of Waterloo professor Jennifer Clary-Lemon, be damaging to perceptions, as such wording can carry a persuasive tone that frames these parts of nature as lesser and fails to recognize their importance. Analysis of current conversations on rhetorical changes in climate communication Despite the presumed effectiveness of rhetorical changes, there has not been enough contribution to the field of climate change rhetoric to implement them adequately. Eileen E. Schell, professor of writing and rhetoric at Syracuse University, has described a lack of attention to conversations concerning changing rhetoric used to discuss climate change and other environmental problems. Experts believe research needs to be done in this area; it could then be applied to climate communication and could be effective in creating better messaging that spurs greater engagement and action. Health Climate change exacerbates a number of existing public health issues, such as mosquito-borne disease, and introduces new public health concerns related to a changing climate, such as an increase in health concerns after natural disasters or increases in heat illnesses. Thus the field of health communication has long acknowledged the importance of treating climate change as a public health issue, requiring broad population behavior changes that allow societal climate change adaptation. A December 2008 article in the American Journal of Preventive Medicine recommended using two broad sets of tools to effect this change: communication and social marketing. A 2018 study found that even with moderates and conservatives who were skeptical of the importance of climate change, exposure to information about the health impacts of climate change creates greater concern about the issues. Climate change is also expected to impact mental health significantly.
With the increase in emotional responses to climate change, there is a growing need for greater resilience and tolerance to emotional experiences. Research has indicated that these emotional experiences can be adaptive when they are supported and processed appropriately. This support requires the facilitation of emotional processing and reflective functioning. When this occurs, individuals increase in tolerance to emotion and resilience, and are then able to support others through crisis. Importance of Storytelling Framing climate change information as a story has been shown to be an effective form of communication. In a 2019 study, climate change narratives structured as stories were better at inspiring pro-environmental behavior. The researchers propose that these climate stories spark action by allowing each experimental subject to process the information experientially, increasing their affective engagement and leading to emotional arousal. Stories with negative endings, for example, influenced cardiac activity, increasing inter-beat (RR) intervals. The story signalled the brain to be alert and take action against the threat of climate change. A similar study has shown that sharing personal stories about experiences with climate change can convince climate change deniers. Hearing about how climate change has influenced someone's life elicits emotions like worry and compassion, which can shift beliefs about climate change. Media coverage The effect of mass media and journalism on the public's attitudes towards climate change has been a significant part of communications studies. In particular, scholars have looked at how the media's tendency to cover climate change in different cultural contexts, with different audiences or political positions (for example Fox News's dismissive coverage of climate change news), and the tendency of newsrooms to cover climate change as an issue of uncertainty or debate, in order to give a sense of balance. 
Popular culture Further research has explored how popular media, like the film The Day After Tomorrow, the popular documentary An Inconvenient Truth, and climate fiction change public perceptions of climate change. Effective climate communication Effective climate communications require audience and contextual awareness. Different organizations have published guides and frameworks based on experience in climate communications. This section documents those various guidelines. General guidance A 2009 handbook developed by the Center for Research on Environmental Decisions at the Earth Institute at Columbia University describes eight main principles for communications based on the psychological research about environmental decisions: Know Your Audience Get the Audience's Attention Translate Scientific Data into Concrete Experiences Beware the Overuse of Emotional Appeals Address Scientific and Climate Uncertainties Tap into Social Identities and Affiliations Encourage Group Participation Make Behavior Change Easier A strategy playbook, developed based on lessons learned from COVID pandemic communication, was released by On Road Media in the UK in 2020. The framework is focused on developing positive messages that help people feel optimistic about learning more to address climate change. This framework included six recommendations: Make it do-able and show change is possible Focus on the big things and how we can change them Normalize action and change, not inaction Connect the planet's health with our own health Emphasize our shared responsibility for future generations Keep it down to earth By experts In 2018, the IPCC published a handbook of guidance for IPCC authors about effective climate communication. It is based on extensive social studies research exploring the impact of different tactics for climate communication. 
The guidelines focus on six main principles: Be a confident communicator Talk about the real world, not abstract ideas Connect with what matters to your audience Tell a human story Lead with what you know Use the most effective visual communication Visuals A 2018 study concluded that graphical illustrations such as charts and graphs overcome misperceptions more effectively than the same information presented in text. Separately, Climate Visuals, a nonprofit, published in 2020 a set of evidence-based guidelines for climate communications. They recommend that visual communications include: Show real people Tell new stories Show climate change causes at scale Show emotionally powerful impacts Understand your audience Show local (serious) impacts Be careful with protest imagery. Applying findings from psychology Psychologists have increasingly been assisting the worldwide community in facing the difficult challenge of organizing effective climate change mitigation efforts. Much work has been done on how best to communicate climate-related information so that it has positive psychological impact, leading to people engaging with the problem, rather than evoking psychological defenses like denial, distancing, or a numbing sense of doom. As well as advising on the method of communication, psychologists have investigated the difference it makes when the right sort of person is doing the communication – for example, when addressing American conservatives, climate-related messages have been shown to be received more positively if delivered by former military officers. Various people who are not primarily psychologists have also been advising on psychological matters related to climate change. 
For example, Christiana Figueres and Tom Rivett-Carnac, who led the efforts to organize the unprecedentedly successful 2015 Paris Agreement, have since campaigned to spread the view that a "stubborn optimism" mindset should ideally be part of an individual's psychological response to the climate change challenge. A study from 2020 found that persuasive messaging that explains the mechanisms behind climate change, rather than the risks or consequences of climate change, was more effective in changing beliefs, especially among conservatives. Noting multiple studies showing that people often prefer receiving numerical details over purely verbal communication, a study by science communicators Ellen Peters and David M. Markowitz reported that participants responded more favorably to messages with precise numeric information on climate change consequences, trusting the messages more and thinking the message sender was more likely an expert. However, the researchers stated that people's math anxiety and levels of mathematical ability suggest limiting the quantity of numerical information presented. Sustainable development The impacts of climate change are exacerbated in low- and middle-income countries; higher levels of poverty, less access to technologies, and less education mean that this audience needs different information. The Paris Agreement and IPCC both acknowledge the importance of sustainable development in addressing these differences. In 2019, the nonprofit Climate and Development Knowledge Network published a set of lessons learned and guidelines based on their experience communicating climate change in Latin America, Asia and Africa. 
Organizations Research centers in climate communication include: Yale Program on Climate Change Communication Center for Climate Change Communication at George Mason University Climate Outreach (UK) Climate Commission (Australia) Other bodies that research climate communication International Organizations the Intergovernmental Panel on Climate Change (IPCC) the UN Climate Change Secretariat NGOs Climate and Development Knowledge Network Climate Council New Zero World Re.Climate (Canada) Parlons Climat (France) Act Climate Labs (USA) Notes See also Climate crisis Climate emergency declaration References Works cited Bibliography Further reading Kleemann, Katrin, and Jeroen Oomen, eds. "Communicating the Climate: From Knowing Change to Changing Knowledge," RCC Perspectives: Transformations in Environment and Society 2019, no. 4. doi.org/10.5282/rcc/8822. Environmental communication
Climate communication
[ "Environmental_science" ]
4,293
[ "Environmental communication", "Environmental social science" ]
62,224,619
https://en.wikipedia.org/wiki/Undercut%20%28welding%29
In welding, undercutting is when the weld reduces the cross-sectional thickness of the base metal. This type of defect reduces the strength of the weld and workpieces. One reason for this defect is excessive current, causing the edges of the joint to melt and drain into the weld; this leaves a drain-like impression along the length of the weld. Another reason is if a poor technique is used that does not deposit enough filler metal along the edges of the weld. A third reason is using an incorrect filler metal, because it will create greater temperature gradients between the center of the weld and the edges. Other causes include too small an electrode angle, a damp electrode, excessive arc length, and slow travel speed. References Welding
Undercut (welding)
[ "Engineering" ]
159
[ "Welding", "Mechanical engineering" ]
62,226,149
https://en.wikipedia.org/wiki/Desnitro-imidacloprid
Desnitro-imidacloprid is a metabolite of the insecticide imidacloprid, a very common insecticide and the most important member of the class of insecticides called neonicotinoids, the only significant new class of insecticides to be developed between 1970 and 2000. While imidacloprid has proved highly selective against insects, the desnitro- version is highly toxic to mammals, due to its agonist action at the alpha4beta2 nicotinic acetylcholine receptor (nAChR) in the mammalian brain, at least as demonstrated in experiments involving mice. References Insecticides Imidazolines Chloropyridines
Desnitro-imidacloprid
[ "Chemistry", "Biology" ]
143
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
53,340,420
https://en.wikipedia.org/wiki/Single-shot%20multi-contrast%20X-ray%20imaging
Single-shot multi-contrast x-ray imaging is an efficient and robust x-ray imaging technique used to obtain three complementary types of information (absorption, scattering, and phase contrast) from a single x-ray exposure on a detector, by subsequently applying Fourier analysis to the recorded image. Absorption is mainly due to attenuation and Compton scattering in the object, while phase contrast corresponds to the phase shift of the x-rays. The technique obtains images of both biological and non-biological objects. Research applications include radiography, scattering imaging, differential phase contrast, and diffraction imaging. It is also possible to adjust and modify the experiment based on which type of information is most important. Almost every application that utilizes this technique shares the same approach, mathematics, and science behind it, such as the experimental setup, the complementary information obtained, and the Fourier analysis. Single-shot multi-contrast x-ray imaging has recently gained importance relative to the Talbot–Lau interferometer because it uses fewer optical elements, such as diffraction gratings, and hence obtains all of the information digitally. References Spatial harmonic method Interferometry-based setups Hybrid detectors and coded apertures Fourier analysis Imaging Interferometry X-ray instrumentation
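As a rough illustration of the Fourier analysis step, the following 1-D sketch separates the three contrast channels from a synthetic fringe pattern. The signal model and every parameter here are assumptions made for the illustration (in practice the spatial harmonic method operates on 2-D detector images), not the details of any specific instrument.

```python
import numpy as np

# A single fringe-modulated exposure carries absorption in its mean level,
# scattering in the fringe visibility, and phase in the fringe position.
N = 1024                       # detector pixels (illustrative)
f0 = 64                        # carrier (grating) frequency, cycles per frame
x = np.arange(N) / N

A_true, V_true, phi_true = 0.8, 0.5, 0.3   # the object's three contrast channels
I = A_true * (1.0 + V_true * np.cos(2 * np.pi * f0 * x + phi_true))

F = np.fft.fft(I)
H0 = F[0] / N                  # zeroth harmonic: mean intensity
H1 = F[f0] / N                 # first harmonic at the carrier frequency

absorption = H0.real                        # recovers A_true
visibility = 2 * np.abs(H1) / absorption    # recovers V_true (scattering channel)
phase = np.angle(H1)                        # recovers phi_true
```

Because the three quantities sit at separate spatial frequencies, one FFT of a single exposure is enough to read them all out, which is the sense in which the method is "single-shot."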
Single-shot multi-contrast X-ray imaging
[ "Technology", "Engineering" ]
251
[ "X-ray instrumentation", "Measuring instruments" ]
53,341,083
https://en.wikipedia.org/wiki/Automated%20efficiency%20model
An automated efficiency model (AEM) is a mathematical model that estimates a real estate property’s efficiency (in terms of energy, commuting, etc.) by using details specific to the property which are available publicly and/or housing characteristics which are aggregated over a given area such as a zip code. AEMs have some similarities to an automated valuation model (AVM) in terms of concept, advantages and disadvantages. AEMs calculate specific efficiencies such as location, water, energy or solar efficiency. The Council of Multiple Listing Services defines an AEM as “any algorithm or scoring model that estimates the [efficiency] of a home without an on-site inspection. They are similar to Automated Valuation Models (AVMs), but are more reliant on public data such as square footage...and estimated energy usage.” Most AEMs calculate a property’s selected efficiency by analyzing available public information and may also apply proprietary data or formulas, and allow for a user such as a homeowner to make additional inputs. Housing characteristics such as the age of the home or square footage may be obtained from data providers such as online real estate databases or similar offerings. Estimates of energy usage may be available from published sources such as the Residential Energy Consumption Survey by the Energy Information Administration. Examples of use By design, the AEM score output is provided as a preliminary comparison tool so the score of one property may be compared to other homes, against an average score for the area, etc. Primary users may vary from buyers and sellers to real estate agents and appraisers as they complete relevant comparisons. For example, REColorado, the multiple listing service covering the Denver metro area, presents a UtilityScore widget on homes for sale. Zillow publishes a Sun Number score on the home fact sheet so website visitors can compare the solar energy potential of prospective properties. 
Trulia has published a report using automated estimates from UtilityScore to combine water, natural gas and electric rates into a single price per square foot by zip code. Beyond usage for consumer preliminary comparisons, usage of AEMs varies by industry. AEMs may also be used by solar installers, home improvement contractors, efficiency inspectors, and mortgage lenders. In the photovoltaics industry, installers use Sun Number to reduce the soft costs associated with motivating consumers to invest in solar systems and in recording property specifications to create quotes. The U.S. Department of Energy has found that Sun Number eliminates 7–10 days from the quotation process when solar suitability is determined digitally and eliminates the need for an onsite inspection. AEMs have been used in the mortgage industry to support a niche loan product called a Location Efficient Mortgage (LEM). During underwriting, an AEM such as the H+T Affordability Index is used to calculate the location efficient value. According to National Mortgage Professional Magazine, AEMs may one day be incorporated into loan underwriting as well: “Since utilities are as big or bigger part of home expenses than even real estate taxes, we may see [estimated utility usage] begin to be factored into underwriting.” Methodology AEMs generate a score for a specific property based on both publicly available housing characteristics about the subject property as well as mathematical modeling. AEMs are technology-driven scores without an onsite inspection or human assessment. For more accurate information unique to a specific property, an onsite inspection such as an energy audit is required. Detailed information on the data accessed to calculate an AEM, the modeling formulas, and the algorithms is generally not published. 
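Real AEM formulas are proprietary, but the general approach described above (a score computed from public housing characteristics, with no on-site inspection) can be sketched with a toy model. Every feature, weight, and reference value below is invented for the illustration and does not reflect any actual AEM.

```python
# Toy AEM-style energy score from public data only; all constants are invented.
def energy_efficiency_score(sqft, year_built, est_annual_kwh):
    """Return a 0-100 score; higher means more efficient (hypothetical scale)."""
    usage = est_annual_kwh / max(sqft, 1)            # kWh per square foot
    age_penalty = max(0, 2020 - year_built) / 100.0  # older homes score lower
    size_factor = min(sqft / 2000.0, 2.0)            # vs. a 2,000 sq ft baseline

    # Usage per square foot dominates this hypothetical weighting.
    raw = 100 - 40 * usage / 6.0 - 20 * age_penalty - 10 * (size_factor - 1)
    return max(0.0, min(100.0, raw))

# Comparing two listings, as a consumer-facing widget might:
newer = energy_efficiency_score(1800, 2015, 9000)    # lower usage per sq ft
older = energy_efficiency_score(1800, 1950, 14400)
```

The point of the sketch is the workflow, not the numbers: each property's public characteristics map deterministically to a score, so any two listings in a market can be ranked without visiting either one.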
A summary of general information is listed in the table below: Advantages As shown in the section above, AEMs tend to rely on public information rather than information which is private to the resident such as actual utility bills. Utility bills can vary based on the occupancy and personal property within a structure. The public information used in AEMs is relatively static as it is focused on details of the structure, location and/or mechanical systems and therefore tends to reflect the real property transferred during a real estate transaction. According to the Council of Multiple Listing Services, the advantages are: “AEMs provide consumers with a quick comparison of all properties across a specified market. Since most focus on the attached systems and structure, they are only meant to reflect the efficiency of the real property.” Disadvantages According to the Council of Multiple Listing Services, the disadvantages are: “AEMs are dependent on data used, the assumptions made, and the model methodology. Since models and methodologies differ and no on-site inspections are performed, accuracy may vary among scoring systems.” References Mathematical modeling
Automated efficiency model
[ "Mathematics" ]
948
[ "Applied mathematics", "Mathematical modeling" ]
53,341,538
https://en.wikipedia.org/wiki/Psi%20Crateris
Psi Crateris, Latinized from ψ Crateris, is the Bayer designation for a visual binary star system in the southern constellation of Crater. It is faintly visible to the naked eye with an apparent visual magnitude of 6.13. According to the Bortle scale, it requires dark suburban or rural skies to view. Based upon an annual parallax shift of 6.5 mas, the system is located approximately 500 light years away from the Sun. The components in this star system have an orbital period of about 366 years with an eccentricity of 0.43. The angular size of the orbit's semimajor axis is about half an arc second. The primary member, component A, is an ordinary A-type main sequence star with a visual magnitude of 6.24 and a stellar classification of A0 V. It was a candidate λ Boötis star, but this was later rejected when the spectrum was found to be normal. Any peculiarities may have instead resulted from the overlapping spectra of the two stars. The star is radiating about 75 times the solar luminosity from its outer atmosphere at an effective temperature of 9,199 K. The fainter secondary, component B, has a visual magnitude of 8.34 and a class of A3. References A-type main-sequence stars Spectroscopic binaries Crater (constellation) Crateris, Psi Durchmusterung objects 097411 054742 4347
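The quoted distance follows directly from the parallax via the standard relation d [pc] = 1 / p [arcsec] (one parsec is about 3.2616 light years); a quick check:

```python
# Distance from annual parallax: d [pc] = 1 / p [arcsec].
def parallax_to_lightyears(parallax_mas):
    parsecs = 1000.0 / parallax_mas   # milliarcseconds -> parsecs
    return parsecs * 3.2616           # parsecs -> light years

d = parallax_to_lightyears(6.5)       # ~502 ly, i.e. "approximately 500"
```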
Psi Crateris
[ "Astronomy" ]
294
[ "Crater (constellation)", "Constellations" ]
53,341,840
https://en.wikipedia.org/wiki/Yong%20Tan
Yong Tan is the Neal and Jan Dempsey Professor of Information Systems at the University of Washington Foster School of Business. He is the director of the Center for Data Analytics at the USTC-UW Institute for Global Business and Finance Innovation and proved instrumental in establishing the Institute with the University of Science and Technology of China (USTC), Tan’s alma mater. He was named a Chang Jiang Scholar by the Chinese Ministry of Education and the Hong Kong Li Ka Shing Foundation, serving as a chair visiting professor at the School of Economics and Management at Tsinghua University. In 2016, he won the INFORMS ISS Distinguished Fellow Award. He received the 2017 Best Paper Award in Information Systems from Management Science. Tan received the Best Publication Award from the Association for Information Systems. Yong Tan received a Bachelor of Science in Physics from the University of Science and Technology of China (USTC) in 1987, and was selected as one of the 915 students in the CUSPEA program created by Nobel laureate Tsung-Dao Lee. Yong Tan received his Ph.D. in Physics from the University of Washington in 1993, advised by Nobel laureate David J. Thouless. He joined the Foster School faculty full-time after earning his Ph.D. in Information Systems from the University of Washington in 2000, where Vijay Mookerjee was his doctoral advisor. References Year of birth missing (living people) Living people Information systems researchers
Yong Tan
[ "Technology" ]
283
[ "Information systems", "Information systems researchers" ]
53,341,985
https://en.wikipedia.org/wiki/Bickley%E2%80%93Naylor%20functions
In physics, engineering, and applied mathematics, the Bickley–Naylor functions are a sequence of special functions arising in formulas for thermal radiation intensities in hot enclosures. The solutions are often quite complicated unless the problem is essentially one-dimensional (such as the radiation field in a thin layer of gas between two parallel rectangular plates). These functions have practical applications in several engineering problems related to the transport of thermal or neutron radiation in systems with special symmetries (e.g. spherical or axial symmetry). W. G. Bickley was a British mathematician born in 1893. Definition The nth Bickley–Naylor function is defined by Ki_n(x) = ∫_0^{π/2} exp(−x/cos θ) cos^{n−1}θ dθ, and it is classified as one of the generalized exponential integral functions. All of the functions Ki_n for positive integer n are monotonically decreasing, since the integrand exp(−x/cos θ) cos^{n−1}θ decreases as x increases and is positive on the interval of integration. Properties The integral defining the function generally cannot be evaluated analytically, but can be approximated to a desired accuracy with Riemann sums or other methods, taking the limit as a → 0 in the interval of integration [a, π/2]. An alternative integral form is Ki_n(x) = ∫_x^∞ Ki_{n−1}(t) dt, where Ki_0(x) = K_0(x) is the modified Bessel function of the second kind of order zero. Series expansions The series expansions of the first- and second-order Bickley functions are expressed in terms of the Euler constant γ and the harmonic numbers. Recurrence relation The Bickley functions also satisfy a recurrence relation connecting functions of successive orders. Asymptotic expansions Asymptotic expansions of the Bickley functions are available for large x. Successive differentiation Differentiating with respect to x gives dKi_n(x)/dx = −Ki_{n−1}(x); successive differentiation yields Bickley functions of correspondingly lower order. The values of these functions for different values of the argument x were often listed in tables of special functions in the era when numerical calculation of integrals was slow. 
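A minimal numerical sketch, assuming the standard integral definition Ki_n(x) = ∫_0^{π/2} exp(−x/cos θ) cos^{n−1}θ dθ and a plain midpoint rule (the step count is an arbitrary choice):

```python
import math

# Midpoint-rule evaluation of Ki_n(x) for n >= 1.  The midpoints avoid
# θ = π/2 exactly; near it, exp(-x/cos θ) underflows harmlessly to 0 for x > 0.
def bickley(n, x, steps=20000):
    h = (math.pi / 2) / steps
    total = 0.0
    for k in range(steps):
        theta = (k + 0.5) * h
        c = math.cos(theta)
        total += math.exp(-x / c) * c ** (n - 1)
    return total * h

# At x = 0 the integral reduces to ∫ cos^(n-1) θ dθ over [0, π/2]:
# Ki_1(0) = π/2, Ki_2(0) = 1, Ki_3(0) = π/4.
```

These x = 0 values, together with the monotone decrease in x, give quick sanity checks of the kind the old printed tables served.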
A table that lists some approximate values of the first three functions Ki_n is shown below. Computer code Computer code in Fortran is made available by Amos. See also Exponential integral References Special functions
Bickley–Naylor functions
[ "Mathematics" ]
426
[ "Special functions", "Combinatorics" ]
53,342,056
https://en.wikipedia.org/wiki/Promoting%20Women%20in%20Entrepreneurship%20Act
The Promoting Women in Entrepreneurship Act is a public law amendment to the Science and Engineering Equal Opportunities Act to authorize the National Science Foundation to encourage its entrepreneurial programs to recruit and support women to extend their focus beyond the laboratory and into the commercial world. Background The Promoting Women in Entrepreneurship Act was introduced in the United States House of Representatives on January 4, 2017, by Representative Elizabeth Esty of Connecticut and signed into law by President Donald Trump on February 28, 2017. According to the Bureau of Labor Statistics, women account for 47 percent of the workforce, but make up only 25.6 percent of computer and mathematical occupations. In addition, only 15.4 percent of architecture and engineering jobs are filled by women. Congress also found that only 26 percent of women who earned STEM degrees actually worked in STEM-related jobs. The president stated that the act “enables the National Science Foundation to support women inventors – of which there are many – researchers and scientists in bringing their discoveries to the business world, championing science and entrepreneurship and creating new ways to improve people’s lives.” Trump signed the bill in a room full of women including Representative Barbara Comstock, who introduced the Inspire Women Act, Senator Heidi Heitkamp, and First Lady Melania Trump. The bill was supported by both parties, with 36 Democrats and 8 Republicans signing as co-sponsors. Impact The bill was designed primarily to improve the programs in place at the National Science Foundation in order to encourage more women to enter the STEM fields. The Science and Engineering Equal Opportunities Act allocates funding for educational programs and for research in STEM fields, and this bill adds the ability for the Science Foundation to allocate new funding towards incentivizing women to join its educational and entrepreneurial programs. 
There has been little news recently regarding this act and its effects, and the expected results have yet to come to fruition. However, the act still represents a trend within the Trump administration with regard to technology and women. The president has said that this issue was "going to be addressed by my administration over the years with more and more of these bills coming out and address the barriers faced by female entrepreneurs and by those in STEM fields." Despite this, since the law was signed, the Trump administration has yet to give a statement regarding future legislation that would further help improve the numbers of women in science and technology. See also Timeline of women's legal rights in the United States (other than voting) References Women in science and technology Acts of the 115th United States Congress
Promoting Women in Entrepreneurship Act
[ "Technology" ]
519
[ "Women in science and technology" ]
53,342,399
https://en.wikipedia.org/wiki/Evolutionary%20rescue
Evolutionary rescue is a process by which a population that would have gone extinct in the absence of evolution persists due to natural selection acting on heritable variation. The term was coined by Gomulkiewicz & Holt in 1995. Evolutionary rescue is often confused with two other common forms of rescue found in nature, genetic rescue and demographic rescue, due to overlapping similarities. Figure 1 highlights the different pathways that result in their respective rescue. History The earliest recorded observations of the concept of evolutionary rescue were made by Haldane in 1937 and Simpson, who considered how populations might evolve in response to changes in their environment. In 1995, Gomulkiewicz & Holt observed the population dynamics of two processes: the exponential decline of sensitive types and the exponential increase of resistant types. Orr & Unckless (2014) then furthered Gomulkiewicz & Holt's work by describing these processes together to produce the U-shaped abundance trajectory. In a changing world, evolutionary rescue describes the phenotypes/genotypes of a population adapting to its environment under the threat of extinction by increasing the frequency of adaptive alleles. The U-shaped curve After a sudden change in the environment, evolutionary rescue is predicted to create a U-shaped curve of population dynamics, as the original genotypes, which are unable to replace themselves, are replaced by genotypes that are able to increase in numbers. The left half of the curve represents the declining original genotypes that are unable to replace themselves, and the right half of the curve represents the resistant genotypes that increase the population number. 
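The two processes behind the U-shaped trajectory can be sketched with a minimal model: a sensitive type declining exponentially plus an initially rare resistant type growing exponentially. All rates and initial sizes below are arbitrary illustrative values, not estimates from any study.

```python
import math

# Total abundance = declining sensitive type + growing (initially rare)
# resistant type.  Their sum first falls, passes a minimum, then recovers.
def total_abundance(t, n_sens=1000.0, n_res=1.0, r_sens=-0.5, r_res=0.3):
    return n_sens * math.exp(r_sens * t) + n_res * math.exp(r_res * t)

trajectory = [total_abundance(t) for t in range(31)]
low_point = min(trajectory)   # the bottom of the "U"
```

Whether the population survives the bottom of the curve in a finite, stochastic world is exactly what determines if rescue succeeds; in this deterministic sketch the recovery is guaranteed once the resistant type is present.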
The probability of evolutionary rescue depends on whether the resistant allele originates before or after the environmental change. In a continuously changing environment, evolutionary rescue is predicted to appear as a stable lag of the mean trait value behind a moving environmental optimum, where the rate of evolution and rate of change in the environment are equal. The theory has been reviewed by Alexander et al. in 2014 and continues to grow rapidly, adding both genetic and ecological complexity. Evolutionary rescue is distinct from demographic rescue, where a population is sustained by continuous migration from elsewhere, without the need for evolution. On the other hand, genetic rescue, where a population persists because of migration that reduces inbreeding depression, can be thought of as a special case of evolutionary rescue. Genetic factors For a population to undergo evolutionary rescue, the frequency of resistance alleles present dictates the probability that evolutionary rescue occurs. Natural populations threatened by extinction are under stress by invasive pests or pathogens that have increased resistance to pesticides and antibiotics. These populations may also be constrained by the genetic variation present because of a lack of sufficiently resistant alleles able to propagate. This results in the absence of the third phase of the U-shaped curve, leading to extirpation. Recombination (Epistasis) Recombination either increases or decreases the probability of evolutionary rescue occurring. Epistasis then modifies the recombination by creating linkage disequilibria (LD). Together, the linkage allows the recombination of two beneficial alleles to enhance the fitness of that population, thus giving rise to adaptations that succeed in evolutionary rescue. In evolutionary rescue, sudden environmental changes affect the epistasis of alleles in the population. 
As such, negative epistasis (the removal of a resistant allele via mutation) means LD is negative, lowering the chances of evolutionary rescue occurring. Similarly, if epistasis is positive (the introduction of a resistant allele), LD is also positive, meaning the probability of evolutionary rescue increases. Dispersal The limitation of dispersal occurring in a population is dependent on the compatibility of the habitat being dispersed into, in terms of climate conditions, geographic accessibility, and more. Populations in relocated habitats with abundant genotypes able to adapt to their environment have increased chances of surviving by undergoing evolutionary rescue. As populations disperse, the distribution range of the population's genetic information increases, which allows gene flow of beneficial alleles between the new sub-populations of the species. Within each sub-population, this increases the probability of local adaptation (beneficial alleles appearing within the genotype), and thus gene flow from one sub-population to another increases the chances of that beneficial allele propagating and successfully triggering evolutionary rescue. Dispersal, however, also negatively affects the local adaptation of a population under heterogeneous environmental conditions through maladaptation. Mismatched genotypes increase the migration load of the population, resulting in a much lower overall fitness. Extrinsic factors Human impact Destruction of natural habitats by human influence limits the ability of a population to increase and disperse, thus preventing evolutionary rescue from succeeding. Urbanization, agriculture, and transport roads through habitats increase the risk of extirpation of local populations. As a result, constraints of the environment pressure local species to adapt or die out. 
Empirical evidence Evolutionary rescue has been demonstrated in many different experimental evolution studies, such as yeast evolving to tolerate previously lethal salt concentrations. There are also a large number of examples of evolutionary rescue in the wild, in the forms of drug resistance, herbicide resistance, other types of pesticide resistance, and genetic rescue. References Evolutionary ecology Evolutionary biology
Evolutionary rescue
[ "Biology" ]
1,119
[ "Evolutionary biology" ]
53,343,992
https://en.wikipedia.org/wiki/IPOP
IPOP (IP-Over-P2P) is an open-source user-centric software virtual network allowing end users to define and create their own virtual private networks (VPNs). IPOP virtual networks provide end-to-end tunneling of IP or Ethernet over “TinCan” links set up and managed through a control API to create various software-defined VPN overlays. History IPOP started as a research project at the University of Florida in 2006. In its first-generation design and implementation, IPOP was built atop structured P2P links managed by the C# Brunet library, relying on Brunet’s structured P2P overlay network for peer-to-peer messaging, notifications, NAT traversal, and IP tunneling. The Brunet-based IPOP is still available as open-source code; however, IPOP’s architecture and implementation have evolved. Starting September 2013, the project has been funded by the National Science Foundation under the SI2 (Software Infrastructure for Sustained Innovation) program to enable it as an open-source “scientific software element” for research in cloud computing. The second-generation design of IPOP incorporates standards (XMPP, STUN, TURN) and libraries (libjingle) that have evolved since the project’s beginning to create P2P tunnels, referred to as TinCan links. The current TinCan-based IPOP implementation is based on modules written in C/C++ that leverage libjingle to create TinCan links, exposing a set of APIs to controller modules that manage the setup, creation and management of TinCan links. For enhanced modularity, the controller runs as a separate process from the C/C++ module that implements TinCan links, and the two communicate through a JSON-based RPC system; thus the controller can be written in other languages such as Python. 
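The controller-to-module messaging can be illustrated with a generic JSON-RPC 2.0 style request. The method name and parameters below are hypothetical, invented only to show the shape of a JSON-based RPC exchange; they are not IPOP's actual API.

```python
import json

# Hypothetical request a controller process might send to the tunneling
# module over a local socket; the module would parse it and dispatch on
# the "method" field.
def make_rpc_request(method, params, req_id):
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": req_id})

msg = make_rpc_request("create_link", {"peer_uid": "node-42"}, 1)
decoded = json.loads(msg)
```

Because the wire format is plain JSON, either side of the RPC boundary can be reimplemented in any language with a JSON library, which is the modularity benefit the design aims for.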
See also OpenConnect, implements a TLS and DTLS-based VPN OpenSSH, which also implements a layer-2/3 "tun"-based VPN OpenVPN, SSL/TLS based user-space VPN Point-to-Point Tunneling Protocol (PPTP) Microsoft method for implementing VPN Secure Socket Tunneling Protocol (SSTP) Microsoft method for implementing PPP over SSL VPN Social VPN, an open-source VPN based on relationships SoftEther VPN, an open-source VPN server program which supports OpenVPN protocol stunnel encrypt any TCP connection (single port service) over SSL UDP hole punching, a technique for establishing UDP "connections" between firewalled/NATed network nodes References External links Peer-to-peer-based VPN Alternatives - Linux Magazine Tutorial: Deploying Your Own P2P Overlay for IPOP VPNs - FutureGrid Install package network:vpn:ipop / ipop Google Summer of Code > 2015 > IP-over-P2P Project Free security software Tunneling protocols Unix network-related software Virtual private networks
IPOP
[ "Engineering" ]
646
[ "Computer networks engineering", "Tunneling protocols" ]
53,345,171
https://en.wikipedia.org/wiki/Thomas%20Glanville%20Taylor
Thomas Glanville Taylor (22 November 1804 – 4 May 1848) was an English astronomer who worked extensively at the Madras Observatory and produced the Madras Catalogue of Stars from around 1831 to 1839. Life He was born at Ashburton, Devon, the son of Thomas Taylor, an assistant at the Royal Greenwich Observatory, and his wife Susannah née Glanville. John Pond, the Astronomer Royal, suggested that the young boy choose a career in astronomy, and he joined the observatory in 1820. From August 1822 he was in charge of making transit observations, and his ability was noted by Sir Edward Sabine. Taylor then worked on Stephen Groombridge's star catalogue. Taylor was appointed director of the East India Company's observatory at Madras, arriving there on 15 September 1830. He brought with him new equipment including transit telescopes and a mural circle. He worked with four Indian assistants, who took observations when he went to join the Great Trigonometrical Survey. Taylor collaborated with John Caldecott of the Travancore observatory to make observations on the magnetic field of the earth, especially the magnetic equator, around 1837. A Fellow of the Royal Astronomical Society and the Royal Society (elected 10 February 1842), Taylor helped establish an observatory at Doddabetta in Ootacamund. He was suffering from tuberculosis when he went to visit his ailing daughter in England in 1848. She died in April, and he himself died a month later, in Southampton. He was succeeded at the Madras Observatory by William Stephen Jacob (1813–1862). Works Taylor began the publication of the Madras General Catalogue of Stars, which was praised by Sir George Airy. His catalogues were of importance in navigation and in the Trigonometrical Survey for determining longitude as well as latitude. Family Taylor married Eliza Baratty, daughter of Colonel Eley, on 4 July 1832. They had three sons and a daughter.
References External links Results Of Astronomical Observations Made At The Honorable The East India Company's Observatory At Madras, Vol.1 For The Year 1831, Volume IV A General Catalogue of the Principal Fixed Stars from observations made at the Honorable, The East India Company's Observatory at Madras 1804 births 1848 deaths Astronomers from British India British people in colonial India People from the Madras Presidency 19th-century English astronomers Fellows of the Royal Society People from Ashburton, Devon
Thomas Glanville Taylor
[ "Astronomy" ]
470
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
53,345,922
https://en.wikipedia.org/wiki/DNADynamo
DNADynamo is a commercial DNA sequence analysis software package produced by Blue Tractor Software Ltd that runs on Microsoft Windows, Mac OS X and Linux. It is used by molecular biologists to analyze DNA and protein sequences. A free demo is available from the developer's website. Features DNADynamo is a general-purpose DNA and protein sequence analysis package that can carry out most of the functions required by a standard research molecular biology laboratory: DNA and protein sequence viewing, editing and annotating; contig assembly and chromatogram editing, including comparison to a reference sequence to identify mutations; global sequence alignment with ClustalW and MUSCLE, with select-and-drag alignment editing for hand-made DNA vs. protein alignments; restriction site analysis, for viewing restriction cut sites in tables and on linear and circular maps; a subcloning tool for the assembly of constructs using restriction sites or Gibson assembly; agarose gel simulation; online database searching of public databases at the NCBI, such as GenBank and UniProt; online BLAST searches; protein analysis, including estimation of molecular weight, extinction coefficient and pI; PCR primer design, including an interface to Primer3; and 3D structure viewing via an interface to Jmol. History DNADynamo has been developed since 2004 by Blue Tractor Software Ltd, a software development company based in North Wales, UK References External links DNADynamo homepage Bioinformatics software Computational science
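To illustrate what a restriction-site scan (one of the features listed) involves, here is a minimal Python sketch. The two-enzyme table is a tiny hypothetical subset for demonstration and is unrelated to DNADynamo's own implementation:

```python
# Minimal restriction-site scan: report the 0-based positions where each
# enzyme's recognition sequence occurs in a DNA string.
ENZYMES = {"EcoRI": "GAATTC", "BamHI": "GGATCC"}  # illustrative subset

def find_sites(sequence, enzymes=ENZYMES):
    sequence = sequence.upper()  # sequences are case-insensitive
    hits = {}
    for name, site in enzymes.items():
        positions, start = [], sequence.find(site)
        while start != -1:
            positions.append(start)
            start = sequence.find(site, start + 1)
        hits[name] = positions
    return hits

print(find_sites("ttGAATTCaaGGATCCggGAATTC"))
# {'EcoRI': [2, 18], 'BamHI': [10]}
```

A real tool additionally scans the reverse complement and maps cut offsets within each recognition site; this sketch only shows the core string search.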
DNADynamo
[ "Mathematics", "Biology" ]
290
[ "Computational science", "Applied mathematics", "Bioinformatics", "Bioinformatics software" ]
53,345,935
https://en.wikipedia.org/wiki/Asynchronous%20procedure%20call
An asynchronous procedure call (APC) is a unit of work in a computer. Definition Procedure calls can be synchronous or asynchronous. Synchronous procedure calls are made in series on some thread, with each call waiting for the prior call to complete. APCs, in contrast, are made without waiting for prior calls to complete. This matters when some data are not yet ready (for example, a program is waiting for a user to reply): blocking other activity on the thread is expensive, because the thread has consumed memory and potentially other resources. Structure An APC is typically formed as an object with a small amount of memory, and this object is passed to a service which handles the wait interval, activating it when the appropriate event (e.g., user input) occurs. The life cycle of an APC consists of two stages: the passive stage, when it passively waits for input data, and the active stage, when that data is processed in the same way as in an ordinary procedure call. A reusable asynchronous procedure is termed an actor. In the actor model, two ports are used: one to receive input, and another (hidden) port to handle the input. In dataflow programming, many ports are used, and the work is passed to an execution service when all inputs are present. Implementations In Windows, an APC is a function that executes asynchronously in the context of a specific thread. APCs can be generated by the system (kernel-mode APCs) or by an application (user-mode APCs). See also Signal References Computer programming
Asynchronous procedure call
[ "Technology", "Engineering" ]
334
[ "Software engineering", "Computer programming", "Computers" ]
53,346,445
https://en.wikipedia.org/wiki/NGC%20422
NGC 422 is an open cluster located in the constellation Tucana. It was discovered on September 21, 1835, by John Herschel. It was described by John Louis Emil Dreyer as "very faint (in Nubecula Minor)", with Nubecula Minor being the Small Magellanic Cloud. It was also described by DeLisle Stewart as "only 3 extremely faint stars, close together, not a nebula." References External links 0422 18350921 Tucana Open clusters Small Magellanic Cloud
NGC 422
[ "Astronomy" ]
107
[ "Tucana", "Constellations" ]
53,346,999
https://en.wikipedia.org/wiki/V1309%20Scorpii
V1309 Scorpii (also known as V1309 Sco) is a contact binary that merged into a single star in 2008 in a process known as a luminous red nova. It was the first star to provide conclusive evidence that contact binary systems end their evolution in a stellar merger. Its similarities to V838 Monocerotis and V4332 Sagittarii allowed scientists to identify these stars as merged contact binaries as well. Discovery V1309 Scorpii was discovered independently on 2 September 2008 by three groups: Koichi Nishiyama and Fujio Kabashima, Yukio Sakurai, and Guoyou Sun and Xing Gao. It was originally identified as a transient object located near the galactic bulge, at coordinates measured to within ±0.01s in right ascension and ±0.1″ in declination. The astronomers who found it noted that it had been invisible to their 12 mag limit telescope just a few days prior to its discovery, indicating that it had recently gone nova. Before its eruption, its faintness and close proximity to the USNO-B1.0 star 0592-0608962 (magnitude B = 16.9 and R = 14.8), just 1.14 arcseconds away, made it difficult to detect. When discovered, V1309 Scorpii was believed to be nothing more than a classical nova. Identification as a stellar merger Immediately following its eruption, a group of astrophysicists led by Elena Mason at the European Southern Observatory conducted a study of V1309 Sco's post-outburst spectrum. Originally, the focus of this study was to analyze heavy-metal absorption patterns in a classical nova; the authors did not yet realize that this was not one. In analyzing the spectrum, Mason et al. posited that V1309 Scorpii was surrounded by a slowly expanding gas shell that is denser in the equatorial plane, giving rise to a narrow absorption spectrum from this dense region and a broader emission spectrum surrounding it. The incline of this equatorial plane from the observer's line of sight leaves mostly just the polar cap visible.
This region would then be approaching the observer, as indicated by the overall blueshift of the spectrum. Furthermore, the presence of ejecta from the polar cap at various velocities would account for the observed high-velocity wings in the Balmer series. The behavior of the Hα/Hβ ratio, which decreased for a little over a month before shooting up to saturated levels and remaining high months after, was one of many spectral characteristics, also including distinct forbidden lines, that made V1309 Scorpii distinct from classical novae and more similar to red novae. Following up on the Mason et al. study, Romuald Tylenda and colleagues, who had previously used theoretical models to support the idea that red novae could be the result of stellar mergers, turned to investigate V1309 Scorpii. Due to its proximity to the Galactic Center, V1309 Scorpii was within the field of view of the Optical Gravitational Lensing Experiment (OGLE) telescope, which had been collecting magnitude data on V1309 Scorpii to a precision of 0.01 magnitudes for several years prior to its eruption. The star gradually grew in brightness between 2001 and 2007, before dipping a little just prior to its 2008 eruption. During this eruption, it increased in brightness by 10 mag, or by a factor of about 10,000. The star then rapidly subsided in brightness through the period spectrally observed by Mason et al. Prior to the outburst, the star's light curve had a period of around 1.4 days, which decreased exponentially as the outburst approached. Following the model of a typical contact binary, V1309 Scorpii had two peaks in magnitude per cycle, corresponding to times when the two stars were perpendicular to the observer's line of sight. However, in its case, the second peak in each period began to gradually decrease until its light curve only showed one peak per period. This was because the secondary star began orbiting faster than the envelope of the primary star could keep up with.
Because the stars are in contact, the velocity difference begins to dissipate as energy at their point of contact. Thus, when the secondary star was approaching the line of sight, it appeared brighter, and when it was moving away from the line of sight, it appeared fainter. By 2007, the two stars were so close to merging that the system appeared roughly spherical as seen from Earth, leading to the loss of the second maximum immediately prior to its outburst. This evidence was the first of its kind to conclusively demonstrate that a contact binary star can end its evolution in a stellar merger, and it also gave scientists a framework within which to identify other stars as contact binaries and predict future mergers. Post-identification studies Since the identification of V1309 Scorpii, further studies of the star have focused both on modelling its evolution and on collecting additional spectral data. Further spectral research One of these follow-up studies continued Mason et al.'s 2010 spectroscopic study by analyzing the evolution of a wider spectrum on a longer time scale. In this study, Kaminsky et al. unexpectedly found a strong spectral signature from CrO in the near infrared, which was the first known detection of CrO in a stellar spectrum. Present chemical models do not have an explanation for why red novae are the only stars to display this CrO line. This finding may also give further insight into the unexpectedly high amounts of 54Cr that have been observed in our solar system, which was recently found not to originate solely from supernovae. Theoretical research Understanding that contact binary stars end their lives in mergers has also spawned theoretical research. Notably, a 2015 study investigated contact binaries within globular clusters and determined that the stellar merger hypothesis may be a leading cause in the formation of blue straggler stars in these regions.
Identifying other stellar mergers As more is known about V1309 Scorpii and its progenitor than about other red novae, it has been described as a "Rosetta Stone" in our understanding of stellar mergers that can help to identify other novae as stellar mergers. For example, data on V1309 Scorpii have already been used to try to explain the mysterious outburst of CK Vulpeculae in 1670–1672 that has puzzled scientists for centuries. Past spectroscopic studies of other stars have turned up more red nova candidates, including V1148 Sagittarii, which was studied as early as 1949. These retrospective inferences have also identified potential red novae like M31 RV that are outside the Milky Way, including M31LRN 2015, M85 OT2006, NGC300OT2008, and SN2008S. More recent studies have been more forward-looking, trying to identify stars that match the profile of V1309 Scorpii's progenitor. A search among other contact binaries by OGLE found 14 different contact binary systems with decreasing periods over 0.8 days that are all candidates for upcoming stellar mergers. References Binary stars Scorpii, V1309 Luminous red novae Scorpius
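The roughly 10,000-fold brightening quoted for the 10-magnitude outburst follows directly from the logarithmic magnitude scale. A one-line check in Python, using the standard Pogson relation (not specific to this star):

```python
def flux_ratio(delta_mag):
    # Pogson's relation: a difference of 5 magnitudes is exactly a
    # factor of 100 in flux, so delta_mag gives 10 ** (delta_mag / 2.5)
    return 10 ** (delta_mag / 2.5)

print(flux_ratio(10))  # 10000.0 -- a 10-mag outburst is a ~10,000-fold brightening
```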
V1309 Scorpii
[ "Astronomy" ]
1,488
[ "Scorpius", "Constellations" ]
53,347,752
https://en.wikipedia.org/wiki/Home%20idle%20load
Home idle load is the continuous residential electric energy consumption as measured by smart meters. It differs from standby power (loads) in that it includes energy consumption by devices that cycle on and off within the hourly period of standard smart meters (such as fridges, aquarium heaters, wine coolers, etc.). As such, home idle loads can be measured accurately by smart meters. As of 2014, home idle load constituted an average of 32% of household electricity consumption in the U.S. Type of devices The primary categories of devices that contribute to home idle load include: electronic devices that consume electricity while not being actively used (including televisions, game consoles, digital picture frames, etc.); home infrastructure devices (including analog thermostats, doorbells, telephones, clocks, GFCI outlets, smoke alarms, continuous hot-water recirculation pumps, etc.); and any type of device used to maintain a continuous temperature differential (including freezers, icemakers, refrigerators, wine coolers, terrarium heaters, heated floors, instant hot-water dispensers, etc.). Although such devices may need to stay on continuously, more recent models have proven to be more efficient and can result in considerably lower home idle loads. Reducing home idle load Approaches to reduce home idle loads include: disabling electronic devices with standby power loads, either manually (unplugging) or with managed power strips (including smart power socket types); using a timer switch that stops electric consumption from devices when not in use; using a smart power strip with a master outlet that manages electricity for multiple devices; and replacing older (or malfunctioning) devices with more efficient options. References Electricity Energy conservation Environmental impact of the energy industry Electronics and the environment Electric power
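Given hourly smart-meter data, the idle load can be approximated as the consumption level a household rarely drops below. The low-percentile heuristic and the readings in this Python sketch are illustrative assumptions, not an official methodology:

```python
def estimate_idle_load(hourly_kwh, percentile=0.05):
    """Rough idle-load estimate: a household seldom drops below its
    always-on baseline, so a low percentile of hourly readings
    approximates it (illustrative heuristic only)."""
    readings = sorted(hourly_kwh)
    index = int(len(readings) * percentile)
    return readings[index]

# One day of hypothetical hourly readings (kWh); nights hover near the baseline
day = [0.31, 0.30, 0.30, 0.32, 0.33, 0.45, 0.9, 1.4,
       0.8, 0.6, 0.5, 0.6, 0.7, 0.6, 0.55, 0.6,
       0.9, 1.8, 2.1, 1.6, 1.2, 0.8, 0.5, 0.35]

idle = estimate_idle_load(day)
print(round(idle, 2))                      # 0.3  (kWh/h always-on baseline)
print(round(100 * idle * 24 / sum(day)))   # 39   (idle share of daily use, %)
```

With these made-up readings the idle share comes out near the article's reported national average, but any real estimate depends on the household and the percentile chosen.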
Home idle load
[ "Physics", "Engineering" ]
360
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
53,349,568
https://en.wikipedia.org/wiki/Telotristat%20ethyl
Telotristat ethyl (USAN, brand name Xermelo) is a prodrug of telotristat, which is an inhibitor of tryptophan hydroxylase. It is formulated as telotristat etiprate, a hippurate salt of telotristat ethyl. On February 28, 2017, the U.S. Food and Drug Administration (FDA) approved telotristat ethyl in combination with somatostatin analog (SSA) therapy for the treatment of adults with diarrhea associated with carcinoid syndrome that SSA therapy alone has inadequately controlled. Telotristat ethyl was approved for use in the European Union in September 2017. The FDA considers it to be a first-in-class medication. Pharmacology Telotristat is an inhibitor of tryptophan hydroxylase, which mediates the rate-limiting step in serotonin biosynthesis. Adverse effects Common adverse effects noted in clinical trials include nausea, headache, elevated liver enzymes, depression, accumulation of fluid causing swelling (peripheral edema), flatulence, decreased appetite, and fever. Constipation is also common, and may be serious or life-threatening (especially in overdose). Formulations It is marketed by Lexicon Pharmaceuticals (as telotristat etiprate); 328 mg of telotristat etiprate is equivalent to 250 mg of telotristat ethyl. References Further reading Amines Ethyl esters Guanidines Chloroarenes CYP3A4 inducers Trifluoromethyl compounds Phenethylamines Prodrugs Pyrazoles Aminopyrimidines Tryptophan hydroxylase inhibitors
Telotristat ethyl
[ "Chemistry" ]
372
[ "Guanidines", "Functional groups", "Prodrugs", "Amines", "Chemicals in medicine", "Bases (chemistry)" ]
53,350,799
https://en.wikipedia.org/wiki/Polyquaternium-7
Polyquaternium-7 is an organic compound in the polyquaternium class of chemicals used in the personal care industry. It is the copolymer of acrylamide and the quaternary ammonium salt diallyldimethylammonium chloride. Its molecular formula is (C8H16ClN)n(C3H5NO)m. Functions Polyquaternium-7 is used for its antistatic and film-forming properties. Usage Polyquaternium-7 is the designation established by the original association, the Cosmetic, Toiletry, and Fragrance Association (CTFA), now known as the Personal Care Products Council. There is an abundance of product names containing the same or a similar active ingredient for applications outside the cosmetics and personal care industry. Polyquaternium-7 is applied in waste treatment for laundry, emulsion breaking and sludge dewatering, and as a drainage and retention aid. It is a cationic polyelectrolyte. Polyquaternium-7 is used as a modifier, for example in shampoo, hair conditioner, hair spray, mousse, soap, gel, styling agents, shaving products, deodorants and antiperspirants. The DADMAC monomer is highly hydrophilic. Absorption of moisture from the air lends "conditioning" properties to the products that contain the copolymer, such as shampoos, hair and skin conditioners and other personal care products, including some bar soaps. Safety According to its safety data sheet, it is not persistent, bioaccumulative, or toxic, and it is not a hazardous substance or mixture according to Regulation (EC) No. 1272/2008 or EC directives 67/548/EEC and 1999/45/EC. Scientific studies The influence of the synthetic cationic polymer polyquaternium-7 on the rheology and microstructure of creams has been investigated, as have the efficacy, mechanism and test methods of conditioning polymers in some shampoo formulations. References Cosmetics chemicals Organic polymers Quaternary ammonium compounds
Polyquaternium-7
[ "Chemistry" ]
428
[ "Organic compounds", "Organic polymers" ]
53,351,715
https://en.wikipedia.org/wiki/Mars%20habitability%20analogue%20environments%20on%20Earth
Mars habitability analogue environments on Earth are environments that share potentially relevant astrobiological conditions with Mars. These include sites that are analogues of potential subsurface habitats and deep subsurface habitats. A few places on Earth, such as the hyper-arid core of the high Atacama Desert and the McMurdo Dry Valleys in Antarctica, approach the dryness of current Mars surface conditions. In some parts of Antarctica, the only water available is in films of brine on salt/ice interfaces. There is life there, but it is rare, in low numbers, and often hidden below the surface of rocks (endoliths), making the life hard to detect. Indeed, these sites are used for testing the sensitivity of future life detection instruments for Mars, furthering the study of astrobiology, for instance as a location to test microbes for their ability to survive on Mars, and as a way to study how Earth life copes in conditions that resemble conditions on Mars. Other analogues duplicate some of the conditions that may occur in particular locations on Mars. These include ice caves, the icy fumaroles of Mount Erebus, hot springs, and the sulfur-rich mineral deposits of the Rio Tinto region in Spain. Other analogues include regions of deep permafrost and high alpine regions with plants and microbes adapted to aridity, cold and UV radiation, with similarities to Mars conditions. Precision of analogues Mars surface conditions are not reproduced anywhere on Earth, so Earth surface analogues for Mars are necessarily partial analogues. Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly. There are no full-Mars simulations published yet that include all of the biocidal factors combined. Ionizing radiation. Curiosity rover measured levels on Mars similar to those in the interior of the International Space Station (ISS), which are far higher than Earth surface levels. Atmosphere.
The Martian atmosphere is a near vacuum, while Earth's is not. Through desiccation resistance, some life forms can withstand the vacuum of space in a dormant state. UV levels. UV levels on Mars are much higher than on Earth. Experiments show that a thin layer of dust is enough to protect microorganisms from UV radiation. Oxidizing surface. Mars has a surface layer which is highly oxidizing (toxic) because it contains salts such as perchlorates, chlorates, chlorites, and sulfates pervasive in the soil and dust, and hydrogen peroxide throughout the atmosphere. Earth does have some areas that are highly oxidizing, such as the soda lakes; though not direct analogues, they have conditions that may be duplicated in thin films of brines on Mars. Temperature. Nowhere on Earth reproduces the extreme changes in temperature that happen within a single day on Mars. Dry ice. The Martian surface includes dry ice (CO2 ice) in many areas. Even in equatorial regions, dry ice mixed with water forms frosts for about 100 days of the year. Although temperatures briefly get cold enough for dry ice to form in the Antarctic interior at high altitudes, the partial pressure of carbon dioxide in Earth's atmosphere is too low for dry ice to form: the deposition temperature for dry ice under 1 bar of pressure is −78.5 °C, while the lowest temperature recorded in Antarctica is −93.2 °C, recorded in 2010 by satellite. These partial analogues are useful, for instance, for: testing life detection equipment which may one day be sent to Mars; studying conditions for preservation of past life on Mars (biosignatures); studying adaptations to conditions similar to those that may occur on Mars; and as a source of microbes, lichens etc. that can be studied as they may exhibit resistance to some conditions present on Mars. Atacama Desert The Atacama Desert plateau lies at an altitude of 3,000 meters, between the Pacific and the Andes mountains.
Its Mars-like features include: hyper-arid conditions; cold compared to most arid deserts, because of the altitude; high levels of UV light (because it is relatively cloudless, the higher altitude means there is less air to filter the UV out, and the ozone layer is somewhat thinner above sites in the southern hemisphere than above corresponding sites in the northern hemisphere); and salt basins, which also include perchlorates, making them the closest analogues to Martian salts on Earth. Yungay area The Yungay area at the core of the Atacama Desert was considered the driest area on Earth for more than a decade, until the discovery in 2015 that Maria Elena South is drier. It can go centuries without rainfall, and parts of it have been hyper-arid for 150 million years. The older regions in this area have salts that are amongst the closest analogues of salts on Mars, because these regions have nitrate deposits that contain not only the usual chlorides but also sulfates, chlorates, chromates, iodates, and perchlorates. The infrared spectra are similar to the spectra of bright soil regions of Mars. The Yungay area has been used for testing instruments intended for future life detection missions on Mars, such as the Sample Analysis at Mars instruments for Curiosity, the Mars Organic Analyzer for ExoMars, and Solid3 for Icebreaker Life, which in 2011, in a test of its capabilities, was able to find a new "microbial oasis" for life two meters below the surface of the Atacama Desert. It is the current testing site for the Atacama Rover Astrobiology Drilling Studies (ARADS) project to improve technology and strategies for life detection on Mars. Experiments conducted on Mars have also been successfully repeated in this region. In 2003, a group led by Chris McKay repeated the Viking Lander experiments in this region and got the same results as those of the Viking landers on Mars: decomposition of the organics by non-biological processes.
The samples had trace amounts of organics, no DNA was recovered, and extremely low levels of culturable bacteria were found. This led to increased interest in the site as a Mars analogue. Although hardly any life, including plant or animal life, exists in this area, the Yungay area does have some microbial life, including cyanobacteria in salt pillars, as a green layer below the surface of rocks, and beneath translucent rocks such as quartz. The cyanobacteria in the salt pillars have the ability to take advantage of the moisture in the air at low relative humidities. They begin to photosynthesize when the relative humidity rises above the deliquescence relative humidity of salt, at 75%, presumably making use of deliquescence of the salts. Researchers have also found that cyanobacteria in these salt pillars can photosynthesize when the external relative humidity is well below this level, taking advantage of micropores in the salt pillars which raise the internal relative humidity above the external levels. Maria Elena South This site is even drier than the Yungay area. It was found through a systematic search for regions drier than Yungay in the Atacama Desert, using relative humidity data loggers set up from 2008 to 2012, with the results published in 2015. A 2015 paper reported an average atmospheric relative humidity of 17.3%, and a soil relative humidity of a constant 14% at a depth of 1 meter, which corresponds to the lowest humidity measured by Curiosity rover on Mars. This region's maximum atmospheric relative humidity is 54.7%, compared with 86.8% for the Yungay region. The following living organisms were also found in this region: Actinomycetota: Actinobacterium, Aciditerrimonas, and Geodermatophilus; Pseudomonadota: Caulobacter and Sphingomonas; Bacillota: Clostridiales; Acidobacteriota: Acidobacterium; plus 16 new species of Streptomyces, 5 of Bacillus and 1 of Geodermatophilus.
There was no decrease in the numbers of species as the soil depth increased down to a depth of one meter, although different microbes inhabited different soil depths. There was no colonization of gypsum, showing the extreme dryness of the site. No archaea were detected in this region using the same methods that detected archaea in other regions of the Atacama Desert. The researchers said that if this is confirmed in studies of similarly dry sites, it could mean that "there may be a dry limit for this domain of life on Earth." McMurdo Dry Valleys in Antarctica These valleys lie on the edge of the Antarctic plateau. They are kept clear of ice and snow by fast katabatic winds that blow from the plateau down through the valleys. As a result, they are amongst the coldest and driest areas in the world. The central region of Beacon Valley is considered to be one of the best terrestrial analogues for the current conditions on Mars. There is snowdrift and limited melting around the edges and occasionally in the central region, but for the most part, moisture is only found as thin films of brine around permafrost structures. It has slightly alkaline, salt-rich soil. Don Juan Pond Don Juan Pond is a small pond in Antarctica, 100 meters by 300 meters and 10 cm deep, that is of great interest for studying the limits of habitability in general. Research using a time-lapse camera shows that it is partly fed by deliquescing salts. The salts absorb water by deliquescence only, at times of high humidity, and the resulting salty brines then flow down the slope. These then mix with snow melt, which feeds the lake. The first part of this process may be related to the processes that form the recurring slope lineae (RSLs) on Mars. The pond has an exceptionally low water activity (aw) of 0.3 to 0.6.
Though microbes have been retrieved from it, they have not been shown to be able to reproduce in the salty conditions present in the lake, and it is possible that they only got there by being washed in on the rare occasions when snow melt feeds the lake. Blood Falls This unusual flow of melt water from below the glacier gives scientists access to an environment they could otherwise only explore by drilling (which would also risk contaminating it). The melt water source is a subglacial pool of unknown size which sometimes overflows. Biogeochemical analysis shows that the water is marine in origin. One hypothesis is that the source may be the remains of an ancient fjord that occupied the Taylor Valley in the Tertiary period. The ferrous iron dissolved in the water oxidizes as the water reaches the surface, turning the water red. Its autotrophic bacteria metabolize sulfate and ferric ions. According to geomicrobiologist Jill Mikucki at the University of Tennessee, water samples from Blood Falls contained at least 17 different types of microbes and almost no oxygen. An explanation may be that the microbes use sulfate as a catalyst to respire with ferric ions and metabolize the trace levels of organic matter trapped with them. Such a metabolic process had never before been observed in nature. This process is of astrobiological importance as an analogue for environments below the glaciers on Mars, if there is any liquid water there, for instance through hydrothermal melting (though no such water has yet been discovered). This process is also an analogue for cryovolcanism in icy moons such as Enceladus. Subglacial environments in Antarctica need protection protocols similar to those for interplanetary missions. Blood Falls was used as the target for testing IceMole in November 2014. This is being developed in connection with the Enceladus Explorer (EnEx) project by a team from the FH Aachen in Germany.
The test returned a clean subglacial sample from the outflow channel of Blood Falls. IceMole navigates through the ice by melting it, also using a driving ice screw, and uses differential melting for navigation and hazard avoidance. It is designed for autonomous navigation to avoid obstacles such as cavities and embedded meteorites, so that it can be deployed remotely on Enceladus. It uses no drilling fluids, and can be sterilized to suit the planetary protection requirements as well as the requirements for subglacial exploration. The probe was sterilized to these protocols using hydrogen peroxide and UV sterilization. Also, only the tip of the probe samples the liquid water directly. Qaidam Basin Qaidam Basin is the plateau with the highest average elevation on Earth. The atmospheric pressure is 50–60% of the sea-level pressure, and as a result of the thin atmosphere it has high levels of UV radiation and large temperature swings from day to night. Also, the Himalayas to the south block humid air from India, making it hyper-arid. In the most ancient playas (Da Langtang) in the northwest of the plateau, the evaporated salts are magnesium sulfates (sulfates are common on Mars). This, combined with the cold and dry conditions, makes it an interesting analogue of the Martian salts and salty regolith. An expedition found eight strains of Haloarchaea inhabiting the salts, similar to some species of Virgibacillus, Oceanobacillus, Halobacillus, and Terribacillus. Mojave Desert The Mojave Desert is a desert within the United States that is often used for testing Mars rovers. It also has useful biological analogues for Mars: some of its arid conditions and chemical processes are similar to those on Mars; it has extremophiles within its soils; and its desert varnish is similar to that on Mars.
Carbonate rocks with iron oxide coatings similar to Mars provide a niche for microbes inside and underneath the rocks, protected from the sun by the iron oxide coating; if microbes existed or exist on Mars, they could be protected similarly by the iron oxide coating of rocks there. Other analogue deserts
Namib Desert - oldest desert, life with limited water and high temperatures, large dunes and wind features.
Ibn Battuta Centre Sites, Morocco - several sites in the Sahara desert that are analogues of some of the conditions on present-day Mars, used for testing of ESA rovers and astrobiological studies.
Axel Heiberg Island (Canada) Two sites of special interest: Colour Peak and Gypsum Hill, two sets of cold saline springs on Axel Heiberg Island that flow with almost constant temperature and flow rate throughout the year. The air temperatures are comparable to the McMurdo Dry Valleys, ranging from -15 °C to -20 °C (for the McMurdo Dry Valleys, -15 °C to -40 °C). The island is an area of thick permafrost with low precipitation, leading to desert conditions. The water from the springs has a temperature of between -4 °C and 7 °C. A variety of minerals precipitate out of the springs, including gypsum, and at Colour Peak crystals of the metastable mineral ikaite (CaCO3·6H2O), which decomposes rapidly when removed from freezing water. Some of the extremophiles from these two sites have been cultured in a simulated Martian environment, and it is thought that they may be able to survive in a Martian cold saline spring, if such exist. Colour Lake Fen This is another Mars analogue habitat on Axel Heiberg Island, close to Colour Peak and Gypsum Hill. The frozen soil and permafrost host many microbial communities that are tolerant of anoxic, acid, saline and cold conditions. Most are in survival rather than colony-forming mode.
Colour Lake Fen is a good terrestrial analogue of the saline acidic brines that once existed in the Meridiani Planum region of Mars and may possibly still exist on the Martian surface. Some of the microbes found there are able to survive in Mars-like conditions. Rio Tinto, Spain Rio Tinto is the largest known sulfide deposit in the world, located in the Iberian Pyrite Belt (IPB). Many of the extremophiles that live in these deposits are thought to survive independently of the Sun. This area is rich in iron and sulfur minerals such as hematite (Fe2O3), which is common in the Meridiani Planum area of Mars explored by the Opportunity rover and thought to be a sign of ancient hot springs on Mars, and jarosite (KFe3(SO4)2(OH)6), discovered on Mars by Opportunity, which on Earth forms in acid mine drainage, during oxidation of sulphide minerals, and during alteration of volcanic rocks by acidic, sulphur-rich fluids near volcanic vents. Permafrost soils Much of the water on Mars is permanently frozen, mixed with the rocks, so terrestrial permafrosts are a good analogue. Some of the Carnobacterium species isolated from permafrosts have the ability to survive under the low atmospheric pressures, low temperatures and anoxic, CO2-dominated atmosphere of Mars. Ice caves Ice caves, or ice preserved under the surface in cave systems protected from the surface conditions, may exist on Mars. The ice caves near the summit of Mount Erebus in Antarctica are associated with fumaroles in a polar alpine environment starved of organics, with oxygenated hydrothermal circulation in highly reducing host rock. Cave systems Mines on Earth give access to deep subsurface environments which turn out to be inhabited, and deep caves may possibly exist on Mars, although without the benefits of an atmosphere. Basaltic lava tubes The only caves found so far on Mars are lava tubes.
These are insulated to some extent from surface conditions and may retain ice even when there is none left on the surface, and may have access to chemicals such as hydrogen from serpentinization to fuel chemosynthetic life. Lava tubes on Earth have microbial mats, and mineral deposits inhabited by microbes. These are being studied to help with identification of life on Mars, if any of the lava tubes there are inhabited. Lechuguilla Cave The first of the terrestrial sulfur caves to be investigated as a Mars analogue for sulfur-based ecosystems that could possibly exist underground on Mars. On Earth, these form when hydrogen sulfide from below the cave meets the surface oxygenated zone. As it does so, sulfuric acid forms, and microbes accelerate the process. The high abundance of sulfur on Mars, combined with the presence of ice and trace detection of methane, suggests the possibility of sulfur caves like this below the surface of Mars. Cueva de Villa Luz The snottites in the toxic sulfur cave Cueva de Villa Luz flourish on hydrogen sulfide gas, and though some are aerobes (needing only low levels of oxygen), some of the species there (e.g. Acidianus), like those that live around hydrothermal vents, are able to survive independent of a source of oxygen. So the caves may give insight into subsurface thermal systems on Mars, where caves similar to the Cueva de Villa Luz could occur. Movile Cave Movile Cave is thought to have been isolated from the atmosphere and sunlight for 5.5 million years. Its atmosphere is rich in carbon dioxide, with 1–2% methane (CH4). It does have some oxygen, 7–10% in the cave atmosphere, compared to 21% in air. Microbes rely mainly on sulfide and methane oxidation. It hosts 33 endemic invertebrates and a wide range of indigenous microbes. Magnesium sulfate lakes Opportunity found evidence for magnesium sulfates on Mars (one form of which is epsomite, or "Epsom salts") in 2004. The Curiosity rover has detected calcium sulfates on Mars.
Orbital maps also suggest that hydrated sulfates may be common on Mars. The orbital observations are consistent with iron sulfate or a mixture of calcium and magnesium sulfate. Magnesium sulfate is a likely component of cold brines on Mars, especially with the limited availability of subsurface ice. Terrestrial magnesium sulfate lakes have similar chemical and physical properties. They also have a wide range of halophilic organisms, in all three domains of life (Archaea, Bacteria and Eukaryota), in the surface and near subsurface. With their abundance of algae and bacteria in alkaline hypersaline conditions, they are of astrobiological interest for both past and present life on Mars. These lakes are most common in western Canada and the northern part of Washington state, USA. One example is Basque Lake 2 in western Canada, which is highly concentrated in magnesium sulfate. In summer it deposits epsomite ("Epsom salts", MgSO4·7H2O). In winter, it deposits meridianiite. This mineral is named after Meridiani Planum, where the Opportunity rover found crystal molds in sulfate deposits (vugs) which are thought to be remains of this mineral that have since been dissolved or dehydrated. It is preferentially formed at subzero temperatures, and is only stable below 2 °C, while epsomite is favored at higher temperatures. Another example is Spotted Lake, which shows a wide variety of minerals, most of them sulfates, with sodium, magnesium and calcium as cations. Some of the microbes isolated have been able to survive the high concentrations of magnesium sulfates found in Martian soils, also at the low temperatures that may be found on Mars. Sulfates (for instance of sodium, magnesium and calcium) are also common in other continental evaporites (such as the salars of the Atacama Desert), as distinct from salt beds associated with marine deposits, which tend to consist mainly of halites (chlorides).
Subglacial lakes Subglacial lakes such as Lake Vostok may provide analogues of Mars habitats beneath ice sheets. Subglacial lakes are kept liquid partly by the pressure of the depth of ice, but that contributes only a few degrees of temperature rise. The main effect that keeps them liquid is the insulation of the ice blocking the escape of heat from the interior of the Earth, similar to the insulating effect of deep layers of rock; as with deep rock layers, no extra geothermal heating is required below a certain depth. In the case of Mars, the depth needed for geothermal melting of the basal area of a sheet of ice is 4-6 kilometers, while the ice layers of the north polar cap are probably only 3.4 to 4.2 km in thickness. However, modelling has shown that the situation is different for a lake that is already melted. When researchers applied such a model to Mars, they showed that a liquid layer, once melted (initially open to the surface of the ice), could remain stable at any depth over 600 meters even in the absence of extra geothermal heating. According to the model, if the polar regions have a subsurface lake, perhaps formed originally through friction as a subglacial lake at times of favourable axial tilt and then buried by accumulating layers of snow as the ice sheets thickened, it could still be there. If so, it could be occupied by life forms similar to those that could survive in Lake Vostok. Ground penetrating radar could detect these lakes because of the high radar contrast between water and ice or rock. MARSIS, the ground penetrating radar on ESA's Mars Express, detected a subglacial lake on Mars near the south pole. Subsurface life kilometers below the surface Investigations of life in deep mines, and drilling beneath the ocean depths, may give an insight into possibilities for life in the Mars hydrosphere and other deep subsurface habitats, if they exist.
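The geothermal melting depths quoted above come from heat-conduction modelling. A minimal one-dimensional, steady-state conduction sketch is shown below; every parameter value (surface temperature, geothermal heat flux, ice conductivity, brine melting point) is an illustrative assumption, not a figure from the source.

```python
# One-dimensional steady-state conduction: temperature rises linearly with depth,
#     T(d) = T_surface + (q / k) * d
# so the base of an ice sheet reaches its melting point at
#     d_melt = k * (T_melt - T_surface) / q
# All parameter values below are illustrative assumptions.

def melt_depth_m(t_surface_k, t_melt_k, heat_flux_w_m2, conductivity_w_mk):
    """Depth (m) at which conducted geothermal heat warms ice to its melting point."""
    return conductivity_w_mk * (t_melt_k - t_surface_k) / heat_flux_w_m2

# Assumed Mars polar values: surface ~160 K, geothermal flux ~30 mW/m^2,
# ice conductivity ~3 W/(m K). Salty brines melt well below 273 K.
for t_melt in (203.0, 273.0):  # perchlorate-like brine vs. pure ice
    d = melt_depth_m(160.0, t_melt, 0.030, 3.0)
    print(f"melting point {t_melt} K -> basal melting at ~{d / 1000:.1f} km")
```

With these assumed values, the brine case lands in the few-kilometre range discussed in the text, while pure ice would require a much thicker cap; the real models vary heat flux, dust content and salt composition.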
Mponeng gold mine in South Africa (3 to 4 km depth): bacteria obtain their energy from hydrogen oxidation linked to sulfate reduction, living independent of the surface, and nematodes feed on those bacteria, likewise living independent of the surface. Boulby Mine on the edge of the Yorkshire moors (1.1 km depth): 250-million-year-old halite (chloride) and sulfate salts with high salinity and low water activity host anaerobic microbes that could survive cut off from the atmosphere. Alpine and permafrost lichens In high alpine and polar regions, lichens have to cope with high UV fluxes, low temperatures and arid environments. This is especially so when the two factors, polar regions and high altitudes, are combined. These conditions occur in the high mountains of Antarctica, where lichens grow at altitudes up to 2,000 meters with no liquid water, just snow and ice. Researchers have described this as the most Mars-like environment on Earth. See also References Astrobiology Exploration of Mars Environmental science
Mars habitability analogue environments on Earth
[ "Astronomy", "Biology", "Environmental_science" ]
5,031
[ "Origin of life", "Speculative evolution", "Astrobiology", "nan", "Biological hypotheses", "Astronomical sub-disciplines" ]
53,353,992
https://en.wikipedia.org/wiki/Perturb-seq
Perturb-seq (also known as CRISP-seq and CROP-seq) refers to a high-throughput method of performing single cell RNA sequencing (scRNA-seq) on pooled genetic perturbation screens. Perturb-seq combines multiplexed CRISPR mediated gene inactivations with single cell RNA sequencing to assess comprehensive gene expression phenotypes for each perturbation. Inferring a gene’s function by applying genetic perturbations to knock down or knock out a gene and studying the resulting phenotype is known as reverse genetics. Perturb-seq is a reverse genetics approach that allows for the investigation of phenotypes at the level of the transcriptome, to elucidate gene functions in many cells, in a massively parallel fashion. The Perturb-seq protocol uses CRISPR technology to inactivate specific genes and DNA barcoding of each guide RNA to allow for all perturbations to be pooled together and later deconvoluted, with assignment of each phenotype to a specific guide RNA. Droplet-based microfluidics platforms (or other cell sorting and separating techniques) are used to isolate individual cells, and then scRNA-seq is performed to generate gene expression profiles for each cell. Upon completion of the protocol, bioinformatics analyses are conducted to associate each specific cell and perturbation with a transcriptomic profile that characterizes the consequences of inactivating each gene. History In the December 2016 issue of the Cell journal, two companion papers were published that each introduced and described this technique. A third paper describing a conceptually similar approach (termed CRISP-seq) was also published in the same issue. In October 2016, the CROP-seq method for single-cell CRISPR screening was presented in a preprint on bioRxiv and later published in the Nature Methods journal. 
While each paper shared the core principles of combining CRISPR-mediated perturbation with scRNA-seq, their experimental, technological and analytical approaches differed in several aspects and explored distinct biological questions, demonstrating the broad utility of this methodology. For example, the CRISP-seq paper demonstrated the feasibility of in vivo studies using this technology, and the CROP-seq protocol facilitates large screens by providing a vector that makes the guide RNA itself readable (rather than relying on expressed barcodes), which allows for single-step guide RNA cloning. A June 2022 paper in Cell published results from one of the first genome-scale Perturb-seq screens, which uncovered new perturbations that promote chromosomal instability, as well as variations in the expression of mitochondrially encoded transcripts in response to different forms of mitochondrial stress. Experimental workflow CRISPR single guide RNA library design and selection Pooled CRISPR libraries that enable gene inactivation can come in the form of either knockout or interference. Knockout libraries perturb genes through double-stranded breaks that prompt the error-prone non-homologous end joining repair pathway to introduce disruptive insertions or deletions. CRISPR interference (CRISPRi), on the other hand, utilizes a catalytically inactive nuclease to physically block RNA polymerase, effectively preventing or halting transcription. Perturb-seq has been used with both the knockout and CRISPRi approaches, in the Dixit et al. paper and the Adamson et al. paper, respectively. Pooling all guide RNAs into a single screen relies on DNA barcodes that act as identifiers for each unique guide RNA. There are several commercially available pooled CRISPR libraries, including the guide barcode library used in the study by Adamson et al. CRISPR libraries can also be custom made using tools for sgRNA design, many of which are listed on the CRISPR/Cas9 tools Wikipedia page.
Lentiviral vectors The sgRNA expression vector design will depend largely on the experiment performed but requires the following central components:
Promoter
Restriction sites
Primer binding sites
sgRNA
Guide barcode
Reporter gene:
 Fluorescent gene: vectors are often constructed to include a gene encoding a fluorescent protein, so that successfully transduced cells can be visually and quantitatively assessed by its expression.
 Antibiotic resistance gene: similar to fluorescent markers, antibiotic resistance genes are often incorporated into vectors to allow for selection of successfully transduced cells.
CRISPR-associated endonuclease: Cas9 or another CRISPR-associated endonuclease, such as Cpf1, must be introduced to cells that do not endogenously express it. Due to the large size of these genes, a two-vector system can be used to express the endonuclease separately from the sgRNA expression vector.
Transduction and selection Cells are typically transduced at a multiplicity of infection (MOI) of 0.4 to 0.6 lentiviral particles per cell, to maximize the number of cells that contain a single guide RNA. If the effects of simultaneous perturbations are of interest, a higher MOI may be applied to increase the number of transduced cells with more than one guide RNA. Selection for successfully transduced cells is then performed using a fluorescence assay or an antibiotic assay, depending on the reporter gene used in the expression vector. Single-cell library preparation After successfully transduced cells have been selected, isolation of single cells is needed to conduct scRNA-seq. Perturb-seq and CROP-seq have been performed using droplet-based technology for single-cell isolation, while the closely related CRISP-seq was performed with a microwell-based approach. Once cells have been isolated at the single-cell level, reverse transcription, amplification and sequencing take place to produce gene expression profiles for each cell.
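The low-MOI guidance above follows from Poisson statistics: integrations per cell are approximately Poisson-distributed, so at low MOI most transduced cells carry exactly one guide. A quick back-of-the-envelope sketch (not part of any published protocol) of that trade-off:

```python
import math

def poisson_pmf(k, mu):
    """P(X = k) for a Poisson random variable with mean mu."""
    return mu**k * math.exp(-mu) / math.factorial(k)

def single_integration_fraction(moi):
    """Among transduced cells (>= 1 integration), the fraction with exactly one."""
    p0 = poisson_pmf(0, moi)
    p1 = poisson_pmf(1, moi)
    return p1 / (1 - p0)

# As MOI rises, a growing share of transduced cells carries multiple guides.
for moi in (0.4, 0.5, 0.6, 1.0, 2.0):
    print(f"MOI {moi}: {single_integration_fraction(moi):.1%} of transduced cells have one guide")
```

At MOI 0.4–0.6, roughly three quarters to four fifths of transduced cells carry a single guide under this model, which is why that range is typically chosen; a deliberately higher MOI flips the balance toward multi-guide cells for combinatorial perturbations.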
Many scRNA-seq approaches incorporate unique molecular identifiers (UMIs) and cell barcodes during the reverse transcription step to index individual RNA molecules and cells, respectively. These additional barcodes serve to help quantify RNA transcripts and to associate each of the sequences with their cell of origin. Bioinformatics analysis Read alignment and processing are performed to map quality reads to a reference genome. Deconvolution of cell barcodes, guide barcodes and UMIs enables the association of guide RNAs with the cells that contain them, thus allowing the gene expression profile of each cell to be affiliated with a particular perturbation. Further downstream analyses on the transcriptional profiles will depend entirely on the biological question of interest. T-distributed Stochastic Neighbor Embedding (t-SNE) is a commonly used machine learning algorithm to visualize the high-dimensional data that results from scRNA-seq in a 2-dimensional scatterplot. The authors who first performed Perturb-seq developed an in-house computational framework called MIMOSCA that predicts the effects of each perturbation using a linear model and is available on an open software repository. Advantages and limitations Perturb-seq makes use of current technologies in molecular biology to integrate a multi-step workflow that couples high-throughput screening with complex phenotypic outputs. When compared to alternative methods used for gene knockdowns or knockouts, such as RNAi, zinc finger nucleases or transcription activator-like effector nucleases (TALENs), the application of CRISPR-based perturbations enables more specificity, efficiency and ease of use. Another advantage of this protocol is that while most screening approaches can only assay for simple phenotypes, such as cellular viability, scRNA-seq allows for a much richer phenotypic readout, with quantitative measurements of gene expression in many cells simultaneously. 
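As a toy illustration of the deconvolution step described above — not the MIMOSCA framework itself — the following sketch collapses PCR duplicates by (cell barcode, UMI, feature) and links each cell's UMI-count expression profile to its guide. The barcodes, gene names and the `sg:` prefix are hypothetical, and the sketch assumes one guide per cell.

```python
from collections import defaultdict

# Hypothetical parsed reads: (cell barcode, UMI, feature), where the feature is
# either a guide barcode (marked with an "sg:" prefix) or a transcript identifier.
reads = [
    ("AAAC", "u1", "sg:STAT1"), ("AAAC", "u1", "sg:STAT1"),  # PCR duplicate
    ("AAAC", "u2", "GBP1"), ("AAAC", "u3", "GBP1"), ("AAAC", "u4", "IRF1"),
    ("TTTG", "u1", "sg:NTC"), ("TTTG", "u2", "GBP1"),
]

# Collapse PCR duplicates: one unique (cell, UMI, feature) triple = one molecule.
molecules = set(reads)

guide_of_cell = {}  # cell barcode -> assigned perturbation (assumes one guide/cell)
expression = defaultdict(lambda: defaultdict(int))  # cell -> gene -> UMI count
for cell, umi, feature in molecules:
    if feature.startswith("sg:"):
        guide_of_cell[cell] = feature[3:]
    else:
        expression[cell][feature] += 1

# Each cell's expression profile is now associated with its perturbation.
for cell, guide in sorted(guide_of_cell.items()):
    print(cell, guide, dict(expression[cell]))
```

Real pipelines additionally correct sequencing errors in barcodes and handle cells with zero or multiple detected guides, which this sketch ignores.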
Perturb-seq can therefore combine the high throughput of forward genetics, in terms of the number of genetic perturbations, with the rich phenotype dimension of reverse genetics. However, while a large and comprehensive amount of data can be a benefit, it can also present a major challenge. Single cell RNA expression readouts are known to produce ‘noisy’ data, with a significant number of false positives. Both the large size and noise that is associated with scRNA-seq will likely require new and powerful computational methods and bioinformatics pipelines to better make sense of the resulting data. Another challenge associated with this protocol is the creation of large scale CRISPR libraries. The preparation of these extensive libraries depends upon a comparative increase in the resources required to culture the massive numbers of cells that are needed to achieve a successful screen of many perturbations. In parallel to these single-cell methods, other approaches have been developed to reconstruct genetic pathways using whole-organism RNA-sequencing. These methods use a single aggregate statistic, called the transcriptome-wide epistasis coefficient, to guide pathway reconstruction. In contrast with the statistical framework of the methods described above, this coefficient may be more robust to noise and is intuitively interpretable in terms of Batesonian epistasis. This approach was used to identify a new state in the life cycle of the nematode C. elegans. Applications Perturb-seq or other conceptually similar protocols can be used to address a broad scope of biological questions and the applications of this technology will likely grow over time. Three papers on this topic, published in the December 2016 issue of the Journal Cell, demonstrated the utility of this method by applying it to the investigation of several distinct biological functions. 
In the paper “Perturb-Seq: Dissecting Molecular Circuits with Scalable Single-Cell RNA Profiling of Pooled Genetic Screens”, the authors used Perturb-seq to conduct knockouts of transcription factors related to the immune response in hundreds of thousands of cells, to investigate the cellular consequences of their inactivation. They also explored the effects of transcription factors on cell states in the context of the cell cycle. In the study led by UCSF, “A Multiplexed Single-Cell CRISPR Screening Platform Enables Systematic Dissection of the Unfolded Protein Response”, the researchers suppressed multiple genes in each cell to study the unfolded protein response (UPR) pathway. With a similar methodology, but using the term CRISP-seq instead of Perturb-seq, the paper "Dissecting Immune Circuits by Linking CRISPR-Pooled Screens with Single-Cell RNA-Seq" performed a proof-of-concept experiment by using the technique to probe regulatory pathways related to innate immunity in mice. The lethality of each perturbation, and epistasis in cells with multiple perturbations, were also investigated in these papers. Perturb-seq has so far been used with very few perturbations per experiment, but it can theoretically be scaled up to address the whole genome. Finally, the October 2016 preprint and subsequent paper demonstrate the bioinformatic reconstruction of the T cell receptor signaling pathway in Jurkat cells based on CROP-seq data. Recently, the Perturb-seq (CROP-seq) workflow has been adapted to enable genome-scale CRISPRi (CRISPR interference) screens in Jurkat cells at single-cell resolution. A first-of-its-kind genome-scale CRISPRi screen was conducted to verify factors involved in TCR signaling pathways. In more detail, a guide RNA library targeting 18,595 human genes was utilized for CRISPR-based gene knockdowns in Jurkat cells expressing the dCas9-KRAB fusion protein.
In total, one million Jurkat cells were processed for single-cell RNA sequencing allowing transcriptomic readouts of a final list of 374 marker genes involved in TCR signaling. The bioinformatic analysis confirmed more than 70 known activators and repressors of TCR signaling cascades, hence showcasing the potential of Perturb-seq (CROP-seq) screens to support translational research. While these publications used these protocols for answering complex biological questions, this technology can also be used as a validation assay to ensure the specificity of any CRISPR based knockdown or knockout; the expression levels of the target genes as well as others can be measured with single cell resolution in parallel, to detect whether the perturbation was successful and to assess the experiment for off target effects. Furthermore, these protocols make it possible to perform perturbation screens in heterogeneous tissues, while obtaining cell type specific gene expression responses. References RNA sequencing Genomics Bioinformatics Molecular biology techniques
Perturb-seq
[ "Chemistry", "Engineering", "Biology" ]
2,585
[ "Genetics techniques", "Biological engineering", "Bioinformatics", "RNA sequencing", "Molecular biology techniques", "Molecular biology" ]
53,354,629
https://en.wikipedia.org/wiki/CRISPR-Display
CRISPR-Display (CRISP-Disp) is a modification of the CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats) system for genome editing. The CRISPR/Cas9 system uses a short guide RNA (sgRNA) sequence to direct a Streptococcus pyogenes Cas9 nuclease, acting as a programmable DNA binding protein, to cleave DNA at a site of interest. CRISPR-Display, in contrast, uses a nuclease-deficient Cas9 (dCas9) and an engineered sgRNA with aptameric accessory RNA domains, ranging from 100 bp to 5 kb, outside of the normal complementary targeting sequence. The accessory RNA domains can be functional domains, such as long non-coding RNAs (lncRNAs), protein-binding motifs, or epitope tags for immunochemistry. This allows for investigation of the functionality of certain lncRNAs, and targeting of ribonucleoprotein (RNP) complexes to genomic loci. CRISPR-Display was first published in Nature Methods in July 2015, and was developed by David M. Shechner, Ezgi Hacisuleyman, Scott T. Younger and John Rinn at Harvard University and the Massachusetts Institute of Technology (MIT), USA. Background The CRISPR/Cas9 system is based on an adaptive immune system of prokaryotic organisms, and its use for genome editing was first proposed and developed in a collaboration between Jennifer Doudna (University of California, Berkeley) and Emmanuelle Charpentier (Max Planck Institute for Infection Biology, Germany). The method, and its application in editing human cells, was published in Science on August 17, 2012. In January 2013, the Feng Zhang lab at the Broad Institute at MIT published another method in Science, having further optimized the sgRNA structure and expression for use in mammalian cells. By the beginning of 2014, almost 2,500 studies mentioning CRISPR in their title had been published. Non-coding RNAs (ncRNAs) are RNA transcripts that are not translated into a protein product, but instead exert their function as RNA molecules.
They are involved in a range of processes, like post-transcriptional regulation of gene expression, genomic imprinting, and regulating the chromatin state, and thereby the expression, of a given locus. Many ncRNAs have been discovered, but in many cases their function has yet to be accurately dissected due to technical challenges. ncRNA function is often not affected by introducing point mutations and premature stop codons. ncRNAs are also thought to regulate gene expression, so deletion studies have a hard time distinguishing the effects of ncRNA loss from the effects of gene misregulation due to the deletion. Studies of ncRNAs have also lacked the throughput necessary for discerning RNA-based functionality. To meet these challenges, the Rinn lab developed a synthetic biology approach using the CRISPR/Cas9 system, with Cas9 acting as a conduit, to target ncRNA modules to ectopic genomic locations and investigate the ncRNAs' effects on reporter genes and other genomic features at those sites. Method development CRISP-Disp modifies the CRISPR/Cas9 technology by using a catalytically inactive, i.e. nuclease-deficient, Cas9 mutant (dCas9), and altering the RNA used for targeting Cas9 to a genomic location. sgRNAs are usually expressed by RNA polymerase III, which limits the length of the RNA domain that can be inserted (~80–250 nucleotides); CRISPR-Display incorporates RNA polymerase II to permit expression of longer transcripts, overcoming this limitation. CRISPR-Display can therefore add larger RNA domains, like natural lncRNA domains, without affecting dCas9 localization. The sgRNA is engineered with an aptameric accessory RNA domain in the sequence outside of the targeting sequence. In the development of the technique, five model cofactors with different topology constructs were used: TOP1-4 and INT, with an accessory domain (the P4-P6 domain) at different positions, including the 5' and 3' ends and internally within the sgRNA.
Each domain contained a stem-loop that can be recognized by the PP7 bacteriophage coat protein. The constructs were delivered into mammalian cells (HEK293FT cells) by a lentiviral vector. To ensure that the attached RNA module retains targeting functionality and that the resulting complex can drive transcriptional activation at a specific site of interest, transient reporter gene expression of luciferase and fluorescent protein was measured. Two variations of such a transcription activator assay were performed: directly, with dCas9 fused to a transcriptional activator/repressor (VP64, a factor known to enhance gene expression) (direct activation), or indirectly, where the transcriptional activator is fused to an RNA-binding protein module on the sgRNA (bridged activation). Reporter gene activation through direct activation implies that the sgRNA variant binds and targets dCas9 efficiently. All five topologies showed direct activation except TOP3 and TOP4, which showed reduced activity. Bridged activation indicates that the fused RNA accessory domain is intact in mature dCas9 complexes. Bridged activation was observed with TOP1, TOP3 and INT. The results were recapitulated at endogenous loci by targeting minimal sgRNAs and selected expanded topologies (TOP1 and INT) to the human ASCL1, IL1RN, NTF3 and TTN promoters. Direct and bridged activation were observed by qRT-PCR for each construct, proving that CRISP-Disp allows deployment of large RNA domains to genomic loci. The effect of internal (stem-loop) insertion size on the dCas9 complex was assessed using INT-like constructs with cassettes of PP7 stem-loops ranging in size from 25 nt to 247 nt. Each construct induced significant activation in the reporter assays, signifying that internal insertion size does not influence dCas9 complex function. Similarly, the effect of internal insert sequence was determined through a set of unique sgRNA variants displaying cassettes of 25 random nucleotides.
Reporter assays and RIP-seq confirmed that sequence does not govern complex efficacy. The utility of CRISP-Disp was explored with an array of functional RNA domains, such as natural protein-binding motifs, artificial aptamers, and small molecules, of varying size. While all the complexes were functional and viable, and successfully deployed the RNA domains at endogenous loci, the efficacy changed with length and expression levels. This suggests that optimization of structure and sequence might be required when designing the construct. To determine if artificial lncRNA scaffolds can be used with CRISPR-Display, dCas9 complexes were assembled with artificial RNAs of a size comparable to lncRNAs. The constructs were expanded to ~650 nt with an additional P4-P6 domain carrying hairpin loops that can be recognized by another phage coat protein, MS2. These topology constructs were called double TOP0-2, with the two domains either together at the 5' or 3' end, or separately at each end. Transient reporter assays, followed by confirmation with RNA immunoprecipitation qPCR (RIP-qPCR), showed that all three constructs retained both domains in the complex. This was also tested with natural lncRNA domains by building Pol II-driven TOP1 and INT constructs fused with human lncRNA domains. The lncRNAs used had lengths between ~90 and 4,800 nt, and included the NoRC-binding pRNA, three enhancer-transcribed RNAs (eRNAs: FALEC, TRERNA1 and ncRNA-a3), the Xist A-repeat (RepA), and the 4,799-nt transcriptional activator HOTTIP. While all the constructs showed significant direct activation, it decreased with increasing sgRNA-lncRNA length. These lncRNA domains could regulate the reporters independently of dCas9, with pRNA and RepA repressing GLuc reporter expression (repressors) and TRERNA1, ncRNA-a3 and HOTTIP inducing activation (activators), but they were properly targeted to an ectopic location of interest by using the CRISP-Disp system.
Thus, CRISP-Disp enables control of gene expression through deployment of both artificial scaffolds and natural lncRNA domains. Applications CRISPR-Display allows for previously unavailable studies of lncRNA functionality, artificial ncRNA functionalization, recruitment of endogenous and engineered proteins to genomic loci, and locus affinity tagging for cell imaging. lncRNA domain localization CRISPR-Display allows targeted localization of natural lncRNAs to ectopic sites for investigation of their function. Exposing various ectopic DNA loci to natural lncRNAs can help show the effects of lncRNAs on gene expression and chromatin state, and help dissect the mechanism of such effects. One of the major outstanding questions in the study of lncRNAs is whether effects on chromatin state or gene expression adjacent to a lncRNA locus are due to functional, sequence-specific mechanisms of the lncRNA itself, or simply to the act of transcribing the lncRNA. Localizing lncRNAs to ectopic sites with CRISPR-Display can help separate the function of the RNA itself from the effects of transcribing such RNA species. Before CRISPR-Display, such studies were challenging due to low throughput and the inability to distinguish lncRNA function from other confounding factors like cryptically encoded peptides or functional DNA elements. Artificial ncRNA functionalization CRISPR-Display also allows for targeted use of the wide array of artificial RNAs with specific functionality, such as RNAs for recruitment of endogenous RNA-binding proteins, antibody affinity tagging, and recruitment of tagged fusion proteins. Affinity tagging for live cell imaging One example of artificial ncRNA functionalization is incorporating RNA domains recognized by specific antibodies into the sgRNA.
CRISPR-Display can target the sgRNA with a particular epitope sequence to various loci, and fluorescently tagged antibodies can be used to image the locus, showing its localization in the nucleus and possible interactions with other tagged proteins or genomic loci. Recruitment of endogenous or engineered RNA binding proteins for gene regulation Endogenous proteins known to bind a specific RNA motif can be recruited to ectopic genomic locations by incorporating the RNA motif into the sgRNA. CRISPR-Display can also recruit fusion proteins engineered to bind specific RNA sequences. Recruiting these proteins can allow studies of the effects of specific proteins and protein complexes on gene regulation and chromatin states, as well as specific regulation of certain genes for investigation of gene function. Multiplexed functional studies Due to the modularity of the sgRNA, several different sgRNAs with distinct functional modules can be expressed in each cell at once. The different RNA modules can then work simultaneously and independently, allowing, for example, regulation of one genomic location whilst imaging the effects of that regulation at another location. The possible applications of CRISPR-Display will continue to increase with further development and understanding of ncRNA functionalization. It is not unreasonable to think that CRISPR-Display may one day enable complex synthetic biology systems, with distinct temporal expression of sgRNAs, and networks and circuits of gene regulation through targeting of regulatory proteins. Advantages Can easily accommodate large RNA cargo (up to ~4.8 kb, possibly even larger) within the sgRNA core. Therefore, structured RNA domains, natural lncRNAs, artificial RNA modules and pools of random sequences can be used with dCas9. sgRNA-dCas9 complexation is not limited by the sequence composition of the RNA cargo, but appears independent of the RNA modules used. 
CRISPR-Display is a modular method that allows different functions to be performed simultaneously at diverse loci in the same cell, using a single construct with orthogonal RNA-binding proteins in which each protein is fused to a unique functional domain and recruited by an sgRNA containing its cognate RNA motif (multiplexing). RNA modules can be added at different locations within the sgRNA sequence (internally, or at the 5’ or 3’ ends), so the location of the RNA module can be optimized for best function. Allows construction of Cas9 complexes with protein binding cassettes, artificial aptamers, pools of random sequences as well as natural lncRNAs. Limitations CRISPR-Display is currently limited by the number of available functional RNA motifs and RNA binding protein functions. As more such motifs and proteins are discovered and developed, further applications of CRISPR-Display may become possible. dCas9 complexation decreases with increasing sgRNA-lncRNA length. The quantitative yield of intact lncRNA domains recovered, however, varies relative to the respective sgRNA. Therefore, construct integrity can depend on factors like length and RNA structure. Design of a high-efficiency CRISPR-Display construct may require some structural or sequence optimization, which can lead to variable construct efficacy. References Biotechnology Genetic engineering Genome editing LncRNA Synthetic biology
CRISPR-Display
[ "Chemistry", "Engineering", "Biology" ]
2,762
[ "Synthetic biology", "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Biotechnology", "Bioinformatics", "Molecular genetics", "nan", "Molecular biology" ]
65,996,113
https://en.wikipedia.org/wiki/Radiofrequency%20Echographic%20Multi%20Spectrometry
Radiofrequency Echographic Multi Spectrometry (REMS) is a non-ionizing technology for osteoporosis diagnosis and for fracture risk assessment. REMS processes the raw, unfiltered ultrasound signals acquired during an echographic scan of the axial sites, femur and spine. The analysis is performed in the frequency domain. Bone mineral density (BMD) is estimated by comparing the results against reference models. The accuracy has been tested by comparison against DXA technology. Working principles Traditionally, ultrasound B-Mode imaging has been designed to allow clinicians to visually evaluate human organs and their features; however, this implies that the huge quantity of information carried by ultrasound signals is processed and significantly reduced for visualization purposes. REMS technology instead analyses the raw, unfiltered ultrasound signals by comparing their spectral representation with the spectral models stored in a proprietary database, previously obtained from healthy and osteoporotic patients; these models are specific and vary with sex, age, BMI and skeletal site. The comparison allows the estimation of the patient's BMD as well as a fast and reliable diagnostic classification, compliant with the recommendations and diagnostic criteria defined by the World Health Organization. Spectral and statistical processing of the acquired data REMS scans on femur and spine last 40 and 80 seconds, respectively, allowing the acquisition of several thousand ultrasound signals related to the skeletal site under examination. The patented algorithm automatically processes these signals on the basis of their spectral features; each signal can be classified as reliable and included in the pipeline for the computation of the diagnostic parameters or, alternatively, classified as unreliable and discarded. 
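The reliable/unreliable split can be pictured as a similarity test between each acquired spectrum and the reference models; the following is only an illustrative sketch (the threshold, the correlation measure and the toy data are invented here, since the actual patented classification algorithm and its reference database are proprietary):

```python
import numpy as np

def filter_reliable(spectra, reference_model, threshold=0.8):
    """Keep spectra whose correlation with the reference spectral model
    exceeds a threshold; discard the rest as unreliable (illustrative only)."""
    reliable = []
    for s in spectra:
        r = np.corrcoef(s, reference_model)[0, 1]
        if r >= threshold:
            reliable.append(s)
    return reliable

reference = np.array([1.0, 0.8, 0.5, 0.2, 0.1])   # toy bone spectral model
bone_like = reference + 0.01                       # closely matching spectrum
artifact = np.array([0.1, 0.9, 0.2, 0.8, 0.3])    # e.g. osteophyte-like spectrum
print(len(filter_reliable([bone_like, artifact], reference)))  # 1
```

Only the spectrum resembling the bone model survives the filter; in the real system the retained spectra then feed the diagnostic-parameter computation.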
During the analysis phase, the acquired spectra are compared to the spectral models stored in the database; afterwards, the values obtained from each comparison are averaged, leading to a precise and repeatable estimation of the diagnostic parameters of interest. Automatic artifact removal If substantial differences are detected between one or more acquired signals and the reference spectral models, these samples are identified, classified as unreliable and automatically discarded: for instance, spectra which are not clearly associated with bone portions but with osteophytes or calcifications. Hence, this approach natively identifies and eliminates outliers, bringing significant advantages with respect to the clinical reliability of the obtained results. Performance comparison to the current Gold Standard REMS technology performance has been evaluated through multicentre clinical studies. The work of Di Paola et al. investigated the precision and diagnostic accuracy of REMS in comparison with DXA on a sample of 2,000 patients. A very high correlation was observed between the T-score values obtained by the two technologies (Pearson correlation coefficient > 0.93; Cohen's kappa of 0.82 for lumbar spine and 0.79 for femoral neck), as well as a very low average BMD difference between the two techniques (mean ± 2 standard deviations): −0.004±0.088 g/cm2 for lumbar spine and −0.006±0.076 g/cm2 for femoral neck. Furthermore, the specificity and sensitivity of REMS in the discrimination between osteoporotic and non-osteoporotic patients have been evaluated: sensitivity and specificity exceed 91% for both skeletal sites. Additional outcomes of this study are the precision and repeatability of REMS estimates, assessed using the Root Mean Square Coefficient of Variation (CV-RMS): precision was 0.38% for lumbar spine and 0.32% for femoral neck, whereas the Least Significant Change (LSC) was 1.05% and 0.88%, respectively. 
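CV-RMS and LSC here follow the standard densitometry definitions: the root-mean-square average of each subject's coefficient of variation over repeat scans, with LSC = 2.77 × CV-RMS at 95% confidence. A minimal sketch (function names and the toy data are illustrative):

```python
import math

def cv_rms(repeat_scans):
    """Root-mean-square coefficient of variation (%) across subjects,
    each of whom was scanned several times."""
    cvs = []
    for scans in repeat_scans:
        mean = sum(scans) / len(scans)
        sd = math.sqrt(sum((x - mean) ** 2 for x in scans) / (len(scans) - 1))
        cvs.append(100.0 * sd / mean)
    return math.sqrt(sum(c * c for c in cvs) / len(cvs))

def lsc(cv_rms_percent):
    """Least Significant Change at 95% confidence: 2.77 x precision error."""
    return 2.77 * cv_rms_percent

# The lumbar-spine figures quoted above are consistent with this relation:
print(round(lsc(0.38), 2))   # 1.05
```

A measured BMD change smaller than the LSC cannot be distinguished from measurement noise, which is why the low REMS precision errors matter for short-term follow-up.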
Finally, inter-operator repeatability was calculated, resulting in 0.54% for lumbar spine and 0.48% for femoral neck. These values are significantly lower than those reported for DXA in the scientific literature and offer concrete advantages from the point of view of short-term follow-up of patients undergoing therapeutic treatments. Fracture risk evaluation Observational longitudinal studies have further evaluated REMS T-score performance in the identification of patients at risk of fragility fracture. Specifically, in Adami et al., a group of more than 1,500 patients underwent both DXA and REMS scans. Afterwards, these patients were monitored for a period of up to 5 years in order to estimate the incidence of fragility fractures in relation to the T-score values previously obtained with both technologies. The study demonstrated that the REMS T-score is an effective parameter for the prediction of the occurrence of fragility fractures, leading the authors to positive conclusions about the effectiveness of REMS technology in the identification of patients at risk of osteoporotic fracture. REMS Technology and Fragility Score As widely reported in the scientific literature, bone density is just one of the components of bone strength, and thus it only partially predicts bone fragility. In order to overcome this limitation, a novel parameter, the Fragility Score, has been developed. The Fragility Score evaluates bone microstructural features independently of BMD and is based on the assumption that a fragile bone structure has microstructural features which, in turn, influence the spectral characteristics of the acquired ultrasound signal, making them different from those reflecting a robust bone structure. 
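Conceptually, such a score can be pictured as the relative similarity of a patient's spectra to "fractured" versus "non-fractured" reference models, rescaled to 0-100. The following is purely an illustrative sketch, not the validated algorithm:

```python
import numpy as np

def fragility_score(spectrum, fragile_model, robust_model):
    """Map spectral similarity onto a dimensionless 0-100 scale:
    0 when the spectrum matches the robust model exactly,
    100 when it matches the fragile model exactly (illustrative only)."""
    d_fragile = np.linalg.norm(spectrum - fragile_model)
    d_robust = np.linalg.norm(spectrum - robust_model)
    return 100.0 * d_robust / (d_fragile + d_robust)

fragile_model = np.array([0.2, 0.9, 0.4])   # toy reference spectra
robust_model = np.array([0.9, 0.3, 0.8])
print(fragility_score(fragile_model, fragile_model, robust_model))  # 100.0
```

The key design point the sketch captures is that the score depends only on spectral similarity to the two reference populations, not on the BMD estimate itself.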
The Fragility Score is a dimensionless parameter, ranging from 0 to 100, obtained by comparing the spectra of the acquired ultrasound signals with spectral reference models obtained from patients who did, or did not, develop an osteoporotic fracture. This parameter has been validated through clinical studies and has demonstrated accuracy similar to that of DXA BMD. International recognition and clinical use In a recent publication, REMS technology received the attention of the European Society for Clinical and Economic Aspects of Osteoporosis, Osteoarthritis and Musculoskeletal Diseases (ESCEO). In this work, all the available technologies for bone strength assessment and fracture risk estimation were reviewed and discussed in relation to currently unmet clinical needs. In this context, REMS was considered a valuable approach for osteoporosis diagnosis and for fracture risk assessment, one that overcomes several of the limitations acknowledged for currently available bone health assessment technologies. One example is the work of Degennaro et al., in which, for the first time, a significant BMD reduction was detected in pregnant women compared to non-pregnant women. Several international working groups have used REMS technology for research purposes: Bojincă et al. demonstrated the effectiveness of REMS BMD estimates in patients affected by rheumatoid arthritis; Kirilova et al. assessed lumbar spine and hip REMS-based BMD values in premenopausal and postmenopausal women; and in Khu et al., REMS was used to characterize the relationship between body mass index and bone health. The growing interest in REMS is also demonstrated by the publication of scientific review papers focused on this technology. References Medical diagnosis Spectroscopy
Radiofrequency Echographic Multi Spectrometry
[ "Physics", "Chemistry" ]
1,466
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
65,996,644
https://en.wikipedia.org/wiki/Denis%20Browne%20Gold%20Medal
The Denis Browne Gold Medal is a medal first struck in 1968, one year after the death of the paediatric surgeon Denis Browne. It is an honour bestowed by the British Association of Paediatric Surgeons and is awarded for outstanding contributions to paediatric surgery worldwide. Recipients References Awards established in 1968 British science and technology awards Medicine awards Paediatrics in the United Kingdom
Denis Browne Gold Medal
[ "Technology" ]
80
[ "Science and technology awards", "Medicine awards" ]
65,997,053
https://en.wikipedia.org/wiki/James%20Mannin
James Mannin (died June 1779) was an artist, painter and draughtsman who lived in Ireland. Life There are no known details of James Mannin's early life. Some early sources state that he may have been French, but the surname Mannin is most commonly found in northern Italy. The first records of Mannin in Dublin date from 1753, when he was working as a designer of ornamental patterns. It was this work that brought him to the attention of the Dublin Society, beginning an association which lasted the rest of his career. He supplied the Society with designs for items including carpets and picture frames during the 1750s, and in 1767 he designed the president's chair carved by Richard Cranfield (1731–1809). On 18 October 1769 he married Mary Maguire in St Andrew's Church, Dublin. He lived in Lazer's Hill from 1770 to 1775, before moving to King Street. Career From 1753, Mannin worked as a private drawing teacher. In May 1754, Mannin took on a number of young Irish artists as apprentices with the Society, to teach them ornamental drawing and design. This first group included Hugh Douglas Hamilton. This was the first time that design was formally taught in Ireland, reflecting the Society's mission to promote high-quality design in the country. This was further cemented when Mannin became a salaried employee of the Society in May 1756 as the master of the school of ornament, a post he would hold until just before his death. During his tenure, he taught many Irish artists such as John James Barralet, George Mullins, and Thomas Roberts. There are no surviving drawings attributed to Mannin, but given his influence it is believed he looked to French taste and in particular the Rococo. The Society was interested in the development of art education in France, and purchased prints after works of French artists to be used as teaching aids. The Society had a strong role in shaping Mannin's teaching, to ensure that it was of a standard commensurate with the Society's fees. 
He was instructed in March 1765 to teach his students pattern drawing based on Hamburg damasks, which reflected the promotion of damask weaving in the Irish linen industry. Mannin also taught drawing for engraving. Mannin continued to work as a painter of landscapes, still lifes, and flowers in a private capacity throughout this time. In 1765 and 1766 he exhibited with the Society of Artists in Hawkins Street. The Dublin Society awarded him premiums for landscape three times, in 1763, 1769, and 1770. He also produced his own ornamental designs, including a staircase for the Society of Artists in 1765, and carriage designs for coachbuilders in 1770. He also taught art privately, and even complained in an address in June 1766 that the Dublin Society's teaching demands encroached on his ability to pursue this work. He became ill in early 1779, leading him to suggest that Barralet be appointed master of the school of ornamental drawing in his place. His death was announced by the Dublin Society on 24 June 1779. References 1779 deaths 18th-century textile artists 18th-century Irish painters 18th-century Irish male artists Irish male painters Draughtsmen Artists from Dublin (city)
James Mannin
[ "Engineering" ]
661
[ "Design engineering", "Draughtsmen" ]
65,997,451
https://en.wikipedia.org/wiki/Octochara
Octochara is a genus of fossil charophyte (aquatic green alga) from the Famennian (Late Devonian). It is one of two genera of charophyte described from the Waterloo Farm lagerstätte in southern Africa. It and Hexachara, from the same locality, provide the oldest record of reconstructable charophytes with in situ oogonia. Octochara is derived from a Greek word "octo", meaning eight, a reference to the octoradial symmetry, and "chara", referring to membership of the Charales. In Octochara, a whorl of eight laterals is borne at each node. Each lateral is branched to produce four secondary branches and bears an oogonium. Two species of Octochara have been described, Octochara crassa and Octochara gracilis. The two species differ in the size and shape of their secondary branchlets and oogonia. The specific name of O. crassa is derived from the Latin "crassus", meaning fat or thick, a reference to the branchlets. This species has whorls up to 14 mm in diameter and the branchlets are relatively broad with rounded terminations. In O. crassa, the internode parts of the axis are unknown at this stage but, based on measurements of the central hole through the whorls, they are estimated to be about 0.7 mm in diameter. Oogonia in O. crassa are attached at the junction at which the radial branches divide and are supported within the four secondary branchlets. Each oogonium is almost spherical, about 1.7 mm long and 1.6 mm wide at its widest point but tapering slightly towards the point of attachment. They are helically striated in a sinistral direction, with 3–5 striae visible in plan view. O. crassa is differentiated from O. gracilis by the relatively greater width of its branches and branchlets, which have rounded, lobate terminations, as opposed to pointed ones. The whorls also have a larger diameter. The specific name of O. gracilis is derived from the Latin "gracilis", meaning slender, a reference to its more slender branchlets. O. 
gracilis comprises whorls up to 10 mm in diameter in which the branchlets are narrow with slender tapering terminations. The internode portions of the axis in this species appear uncorticated and vary in diameter up to 0.6 mm. In O. gracilis, chains of whorls show that whorl diameter diminishes distally. Each whorl has eight radial branches that quadrifurcate after about one-third of their length. Branchlets of O. gracilis are slender with sharply pointed terminations. The oogonia are attached at the point of division of the branches and supported within four branchlets. Each oogonium is ellipsoidal, about 1.5 mm long and 0.9 mm wide at its broadest point. Charophyte algae are non-marine and can currently be found in lakes and less saline parts of estuaries. Likewise, the charophytes at Waterloo Farm have been interpreted as being derived from less saline portions of the palaeo-estuarine environment. References Charophyta Fossil algae Charophyta genera
Octochara
[ "Biology" ]
712
[ "Fossil algae", "Algae" ]
65,997,474
https://en.wikipedia.org/wiki/Ashcroft%20and%20Mermin
Solid State Physics, better known by its colloquial name Ashcroft and Mermin, is an introductory condensed matter physics textbook written by Neil Ashcroft and N. David Mermin. Published in 1976 by Saunders College Publishing and designed by Scott Olelius, the book has been translated into over half a dozen languages and it and its competitor, Introduction to Solid State Physics (often shortened to Kittel), are considered the standard introductory textbooks of condensed matter physics. Content The Drude Theory of Metals The Sommerfeld Theory of Metals Failures of the Free Electron Model Crystal Lattices The Reciprocal lattice Determination of Crystal Structures by X-Ray Diffraction Classification of Bravais Lattices and Crystal Structures Electron Levels in a Periodic Potential: General Properties Electrons in a Weak Periodic Potential The Tight-Binding Method Other Methods for Calculating Band Structure The Semiclassical Model of Electron Dynamics The Semiclassical Theory of Conduction in Metals Measuring the Fermi Surface Band Structure of Selected Metals Beyond the Relaxation-Time Approximation Beyond the Independent Electron Approximation Surface Effects Classification of Solids Cohesive Energy Failures of the Static Lattice Model Classical Theory of the Harmonic Crystal Quantum Theory of the Harmonic Crystal Measuring Phonon Dispersion Relations Anharmonic Effects in Crystals Phonons in Metals Dielectric Properties of Insulators Homogeneous Semiconductors Inhomogeneous Semiconductors Defects in Crystals Diamagnetism and Paramagnetism Electron Interactions and Magnetic Structure Magnetic Ordering Superconductivity Reception The book has been reviewed several times and has been recommended in many other works. In a review of another work by the MRS Bulletin in 2011, the book was said to be "the indispensable work on electronic systems for experimental condensed matter physicists", due largely to the book's "lucidity and panache". 
The book is also recommended in other textbooks on condensed matter physics, including The Solid State by Harold Max Rosenberg in 1979, where it is called a "detailed, higher-level, modern treatment." The textbook Solid-State Physics for Electronics by Andre Moliton states in the foreword that the book aims to prepare students to "use by him- or herself the classic works of taught solid state physics, for example, those of Kittel and Ashcroft and Mermin." The textbook Introduction to Solid State Physics and Crystalline Nanostructures by Giuseppe Iadonisi, Giovanni Cantele, and Maria Luisa Chiofalo included the book, along with Kittel, in the "Acknowledgements" section under "special mentions". It is also called one of the standard textbooks of solid state physics in the textbook Polarized Electrons In Surface Physics. In a 2003 article detailing Mermin's contributions to solid state physics, the book was said to be "an extraordinarily readable textbook of the subject, which introduced a whole generation of solid state specialists to a subtle and elegant way of doing theoretical physics." The book, along with Kittel, is also used as a benchmark for other books on solid-state physics; the publisher's description for the book Advanced Solid State Physics by Philip Phillips that was supplied to the Library of Congress for its bibliography entry states: "This is a modern book in solid state physics that should be accessible to anyone who has a working level of solid state physics at the Kittel or Ashcroft/Mermin level." Reviews The book received several reviews, including published articles in Science, Physics Today, and Physics Bulletin in 1977. It was also reviewed in German. 
Impressionism, Realism, and the aging of Ashcroft and Mermin In July 2013, José Menéndez, a physics professor at the Arizona State University Tempe campus, published an article titled "Impressionism, Realism, and the aging of Ashcroft and Mermin" in Physics Today that stated: "It is undoubtedly one of the best physics books ever written, but it is not aging well". Both Ashcroft and Mermin wrote separate responses that were published in the same issue, addressing Menéndez's concerns. In his reply, Ashcroft wrote: "Over the years many readers have remarked that the initial edition of our book should 'not be touched'; it is just right in its treatments of the fundamentals." He then went on to say that writing a sequel "encompassing the many advances in condensed-matter physics that have occurred over the past 38 years" could be an option, but pointed to the fact that the book had been translated into French, German, and Portuguese in the previous ten years as evidence that others agree it should be left as is. Release details References External links 1976 non-fiction books Physics textbooks Condensed matter physics Harcourt (publisher) books Henry Holt and Company books
Ashcroft and Mermin
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
950
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
65,998,440
https://en.wikipedia.org/wiki/Armando%20Bukele%20Katt%C3%A1n
Armando Bukele Kattán (16 December 1944 – 30 November 2015) was a Salvadoran businessman and Muslim religious leader of Palestinian origin, and the father of the current Salvadoran president, Nayib Bukele. Early years Armando Bukele Kattán was born in San Salvador on 16 December 1944, the son of Humberto Bukele Salman and Victoria Kattán de Bukele. His parents were Palestinian Christians from Bethlehem, Mutasarrifate of Jerusalem, Ottoman Empire, who had emigrated to El Salvador at the beginning of the 20th century as part of an emigration wave. He completed his high school studies at the Liceo Salvadoreño. Studies In 1967 he graduated as a doctor in Industrial Chemistry from the University of El Salvador. Entrepreneur He founded companies dedicated to the textile industry, commerce, pharmaceuticals, advertising and the media. He was also involved with the philanthropic efforts of the Kiwanis Club, a community service institution that brings together entrepreneurs and professionals. Religious leader Bukele converted from Christianity to Islam in the 1980s and founded four mosques during his lifetime, including the first mosque in El Salvador in 1992. He served as imam of the Salvadoran Islamic Community and was part of the Islamic Organization for Latin America and the Caribbean. He was a founding member of the Council of Religions for Peace of El Salvador. Published books The ABCs of Islam Clarifying Concepts in Physics In 2017, the posthumous book "The precise relativity of the point" was published, an extensive compilation of Bukele's thoughts gathered from his Twitter account and his program "Clarifying Concepts". References 1944 births 2015 deaths People from San Salvador People from San Salvador Department Salvadoran people of Palestinian descent University of El Salvador alumni Chemical engineers Converts to Islam from Christianity
Armando Bukele Kattán
[ "Chemistry", "Engineering" ]
370
[ "Chemical engineering", "Chemical engineers" ]
65,999,448
https://en.wikipedia.org/wiki/Near-Earth%20Object%20Coordination%20Centre
The Near-Earth Object Coordination Centre (NEOCC) is the main centre of the Planetary Defence Office of the European Space Agency (ESA). The NEOCC, which is based at ESRIN in Frascati, Italy, coordinates observations of small bodies such as asteroids and comets in the Solar System in order to evaluate and monitor the threat posed by potentially hazardous objects. The Coordination Centre also conducts studies with the purpose of improving near-Earth object warning services. These are necessary to give real-time alerts to different organizations, scientific bodies, and decision-makers. References Planetary defense organizations
Near-Earth Object Coordination Centre
[ "Astronomy" ]
120
[ "Planetary defense organizations", "Astronomy organizations" ]
66,000,776
https://en.wikipedia.org/wiki/Content%20house
A content house, also known as a collab house, creator house, content collective or influencer group, is a residential property most commonly occupied by internet celebrities, social media influencers or content creators in order to focus on creating content for social media platforms such as YouTube, TikTok, and Instagram. Content houses are intended to provide fertile ground for influencers to create content for their viewers, and to help grow their profile and brand through collaborations with other members of the house. They are most associated with users of TikTok, a video-sharing social networking service, and have been referred to as "TikTok houses". History An early precursor of the content house was the 1999 reality television show Big Brother, and the franchise that the show inspired. Contestants lived together in a home specifically designed to be isolated from the outside world, and the drama of the series derived from the interactions between its "housemates". The first social media content houses were created in 2012, with one of the earliest formed by YouTuber Connor Franta for the YouTube channel Our Second Life. Notable content houses include the former Team 10 house inhabited by Jake Paul, the FaZe House, the Hype House, the Sway House and The Creature House. The origins of collab houses date back to 2014 when the members of Our Second Life lived and created content in their 02L Mansion. In 2015 popular users of Vine occupied an apartment at 1600 Vine Street in Los Angeles. The proximity of fellow content creators and the availability of emotional support from their peers have contributed to the popularity of collab houses. It is considered essential that a collab house has ample natural light and privacy from fans and neighbors. 
Harper's Magazine described collab houses as "grotesquely lavish abodes where teens and early twentysomethings live and work together, trying to achieve viral fame on a variety of media platforms" and attributed their rise in popularity to the COVID-19 pandemic, when they "began to proliferate in impressive if not mind-boggling numbers, to the point where it became difficult for a casual observer even to keep track of them". The reporter stayed at the Clubhouse For the Boys in Los Angeles and felt that the management of the clubhouse "actually care[d] very little about the long-term fates of these kids. After all, there's a fungible supply of well-complected youngsters constantly streaming into Los Angeles. Only a very small percentage of these kids will actually make it in the industry; the rest of them, Amir [Ben-Yohanan] tells me, will eventually just "cycle through". The Clubhouse For the Boys in Los Angeles was based in a 7,000 sq ft house valued at $8 million. The occupants of the house were expected to post three to five videos a week to social media accounts linked to the Clubhouse in exchange for free room and board. The house was owned by external investors who took up to 20% of the earnings of the occupants. The house had House Rules listed on a whiteboard, which included exhortations to refrain from drinking alcohol between Sunday and Thursday and to "finish brand deliverables before inviting guests". The popularity of collab houses arose at the same time as the burgeoning COVID-19 pandemic in the United States. The reporter felt that several articles in The New York Times about collab houses had characterized their residents as "incorrigible Dionysians" as a result of the disparity between their lifestyle and the demands of the public health emergency. A January 2020 article in The New York Times described Los Angeles as "home to a land rush" of collab houses. 
Hype House, a collective of content creators, was set in a 'Spanish-style mansion perched at the top of a hill on a gated street' with 'a palatial backyard, a pool and enormous kitchen, dining and living quarters', and was home to four members of the group. Hype House was formed in December 2019; by January 2020, TikTok videos tagged #hypehouse had accrued 100 million views. On April 22, 2021, Netflix announced that it was in production of a reality television series entitled The Hype House, set at the content house of the same name. The Hype House is set to star various content creators such as Nikita Dragun, Lil Huddy (also known as Chase Hudson), and Thomas Petrou. Reception to the announcement on social media was mostly negative, with some Netflix subscribers threatening to cancel their subscriptions if the series was aired. Partial list of content houses Byte House Clubhouse BH Clubhouse Beverly Hills Clubhouse FTB Clubhouse For the Boys Drip House Myth Crib FaZe House Fenty Beauty House Girls in the Valley Hype House Not a Content House Sway House The House of Collab 'YouTuber' mansions The Vlog Squad house in Studio City Jake Paul's Team 10 in West Hollywood and Calabasas The Clout House in the Hollywood Hills References Social media TikTok Instagram YouTube Online media collectives Art venues History of the Internet Artist groups and collectives
Content house
[ "Technology" ]
1,058
[ "Computing and society", "Social media" ]
66,000,884
https://en.wikipedia.org/wiki/List%20of%20nominees%20for%20the%20Nobel%20Prize%20in%20Chemistry
The Nobel Prize in Chemistry is awarded annually by the Royal Swedish Academy of Sciences to scientists who have made outstanding contributions in chemistry. It is one of the five Nobel Prizes established by the will of Alfred Nobel in 1895. Every year, the Royal Swedish Academy of Sciences sends out forms, which amount to a personal and exclusive invitation, to about three thousand selected individuals to invite them to submit nominations. The names of the nominees are never publicly announced, and neither are they told that they have been considered for the prize. Nomination records are strictly sealed for fifty years. Currently, the nominations for the years 1901 to 1970 are publicly available. Despite the annual sending of invitations, the prize was not awarded in eight years (1916, 1917, 1919, 1924, 1933, 1940–42) and was delayed for a year nine times (1914, 1918, 1920, 1921, 1925, 1927, 1938, 1943, 1944). From 1901 to 1970, 641 scientists were nominated for the prize, 79 of whom were awarded it either jointly or individually. A further 18 of these nominees were awarded after 1970, and Frederick Sanger was awarded a second time in 1980. Of only 15 women nominees, three were awarded. The first woman to be nominated was Marie Skłodowska Curie. She was nominated in 1911 by the Swedish scientist Svante Arrhenius and the French mathematician Gaston Darboux, and won the prize that same year. She is the only woman to have won the Nobel Prize twice: Physics (1903) and Chemistry (1911). In addition, 27 and 13 of these nominees won the prizes in Physiology or Medicine and in Physics, respectively (including one more woman, and including years after 1970). Only one company has been nominated: Geigy SA, for the year 1947. 
Despite the long list of nominated noteworthy chemists, physicists and engineers, there have still been other scientists who were overlooked for the prize in chemistry, such as Per Teodor Cleve, Jannik Petersen Bjerrum, Ellen Swallow Richards, Alice Ball, Vladimir Palladin, Sergey Reformatsky, Prafulla Chandra Ray, Alexey Favorsky, Rosalind Franklin and Joseph Edward Mayer. In addition, the nominations of a further 21 scientists and four corporations were declared invalid by the Nobel Committee. Nominees by their first nomination 1901–1909 1910–1919 1920–1929 1930–1939 1940–1949 1950–1959 1960–1969 1970–1973 Nominations are made public after 50 years, so the 1973 nominations should be published in 2024. See also List of Nobel laureates in Chemistry List of female nominees for the Nobel Prize List of nominees for the Nobel Prize in Physics List of nominees for the Nobel Prize in Literature Motivations and remarks References Lists of scientists Lists of chemists
List of nominees for the Nobel Prize in Chemistry
[ "Chemistry", "Technology" ]
559
[ "Lists of scientists", "Lists of people in STEM fields", "Lists of chemists" ]
66,001,403
https://en.wikipedia.org/wiki/Wilhelm%20Walter
Wilhelm Walter (16 June 1850, Rüdenhausen - 8 February 1914, Berlin) was a German architect and construction manager who worked with the Reichspost. Life and work Walter was the son of a pastor and attended the in Meiningen, from which he graduated in 1870. Shortly after, he joined the Army and fought with a field artillery regiment in the Franco-Prussian War. After returning from France, he served a brief apprenticeship as a construction worker before enrolling at the Technical University of Hanover, where he studied with Conrad Wilhelm Hase. After graduating, he found work on several church projects under the direction of Gotthilf Ludwig Möckel. His talent and preference for Medieval architecture led him to a large number of commissions in Pomerania and Silesia. Due to these numerous activities, he did not pass the Staatsexamen until he was forty-two. He then received a certification as a Royal Prussian Master Builder. With these credentials, he was able to find employment as a Master Builder with the Reichspost in Berlin which, at that time, was being directed by Heinrich von Stephan and undergoing a major reorganization. Stephan recognized Walter's talent, giving him extra duties, as well as enabling him to make research trips to Italy and England. His first major independent project involved designing, planning and managing construction of the in Karlsruhe (1897-1900). When he returned to Berlin, he was appointed Chief Construction Inspector and, later, Imperial Building Officer. Over the next few years, several post offices were built according to his designs. In 1911, he was named a Privy Councillor for construction-related matters, and became involved in projects throughout Germany. His last major project was a complex that included the parcel post center and the in Berlin. It was incomplete at the time of his death. References "Walter, Wilhelm". 
In: Hans Vollmer (Ed.): Allgemeines Lexikon der Bildenden Künstler von der Antike bis zur Gegenwart, Vol.35: Waage–Wilhelmson. E. A. Seemann, Leipzig 1942, pg.121. External links Data on Wilhelm Walter @ Architekten und Künstler mit direktem Bezug zu Conrad Wilhelm Hase 1850 births 1914 deaths 19th-century German architects Construction management Privy counsellors People from Kitzingen (district)
Wilhelm Walter
[ "Engineering" ]
480
[ "Construction", "Construction management" ]
66,001,552
https://en.wikipedia.org/wiki/Attention%20%28machine%20learning%29
Attention is a machine learning method that determines the importance of each component in a sequence relative to the other components in that sequence. In natural language processing, importance is represented by "soft" weights assigned to each word in a sentence. More generally, attention encodes vectors called token embeddings across a fixed-width sequence that can range from tens to millions of tokens in size. Unlike "hard" weights, which are computed during the backwards training pass, "soft" weights exist only in the forward pass and therefore change with every step of the input. Earlier designs implemented the attention mechanism in a serial recurrent neural network (RNN) language translation system, but a more recent design, namely the transformer, removed the slower sequential RNN and relied more heavily on the faster parallel attention scheme. Inspired by ideas about attention in humans, the attention mechanism was developed to address the weaknesses of leveraging information from the hidden layers of recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to be attenuated. Attention allows a token equal access to any part of a sentence directly, rather than only through the previous state. History Academic reviews of the history of the attention mechanism are provided in Niu et al. and Soydaner. Predecessors Selective attention in humans had been well studied in neuroscience and cognitive psychology. In 1953, Colin Cherry studied selective attention in the context of audition, known as the cocktail party effect. In 1958, Donald Broadbent proposed the filter model of attention. Selective attention of vision was studied in the 1960s by George Sperling's partial report paradigm. It was also noticed that saccade control is modulated by cognitive processes, insofar as the eye moves preferentially towards areas of high salience. 
As the fovea of the eye is small, the eye cannot sharply resolve the entire visual field at once. The use of saccade control allows the eye to quickly scan important features of a scene. These research developments inspired algorithms such as the Neocognitron and its variants. Meanwhile, developments in neural networks had inspired circuit models of biological visual attention. One well-cited network from 1998, for example, was inspired by the low-level primate visual system. It produced saliency maps of images using handcrafted (not learned) features, which were then used to guide a second neural network in processing patches of the image in order of reducing saliency. A key aspect of the attention mechanism can be written (schematically) as $\langle \mathrm{query}, \mathrm{key} \rangle$, where the angled brackets denote dot product. This shows that it involves a multiplicative operation. Multiplicative operations within artificial neural networks had been studied under the names of Group Method of Data Handling (1965) (where Kolmogorov-Gabor polynomials implement multiplicative units or "gates"), higher-order neural networks, multiplication units, sigma-pi units, fast weight controllers, and hyper-networks. In the fast weight controller (Schmidhuber, 1992), one of the two networks has "fast weights" or "dynamic links" (1981). A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer. A follow-up paper developed a similar system with active weight changing. Recurrent attention During the deep learning era, the attention mechanism was developed to solve similar problems in encoding-decoding. In machine translation, the seq2seq model, as it was proposed in 2014, would encode an input text into a fixed-length vector, which would then be decoded into an output text. 
If the input text is long, the fixed-length vector would be unable to carry enough information for accurate decoding. An attention mechanism was proposed to solve this problem. An image captioning model that would encode an input image into a fixed-length vector was proposed in 2015, citing inspiration from the seq2seq model. Xu et al (2015), citing Bahdanau et al (2014), applied the attention mechanism as used in the seq2seq model to image captioning. Transformer One problem with seq2seq models was their use of recurrent neural networks, which are not parallelizable as both the encoder and the decoder must process the sequence token-by-token. Decomposable attention attempted to solve this problem by processing the input sequence in parallel, before computing a "soft alignment matrix" (alignment is the terminology used by Bahdanau et al) in order to allow for parallel processing. The idea of using the attention mechanism for self-attention, instead of in an encoder-decoder (cross-attention), was also proposed during this period, such as in differentiable neural computers and neural Turing machines. It was termed intra-attention where an LSTM is augmented with a memory network as it encodes an input sequence. These strands of development were brought together in 2017 with the Transformer architecture, published in the Attention Is All You Need paper. Overview The attention network was designed to identify high-correlation patterns amongst words in a given sentence, assuming that it has learned word correlation patterns from the training data. This correlation is captured as neuronal weights learned during training with backpropagation. This attention scheme has been compared to the Query-Key analogy of relational databases. That comparison suggests an asymmetric role for the Query and Key vectors, where one item of interest (the Query vector "that") is matched against all possible items (the Key vectors of each word in the sentence). 
However, both self- and cross-attention's parallel calculations match all tokens of the K matrix with all tokens of the Q matrix; therefore the roles of these vectors are symmetric. Possibly because the simplistic database analogy is flawed, much effort has gone into understanding attention further by studying its role in focused settings, such as in-context learning, masked language tasks, stripped down transformers, bigram statistics, N-gram statistics, pairwise convolutions, and arithmetic factoring. Machine translation The seq2seq method developed in the early 2010s uses two neural networks: an encoder network converts an input sentence into numerical vectors, and a decoder network converts those vectors to sentences in the target language. The attention mechanism was grafted onto this structure in 2014, as shown below. Later it was refined into the Transformer design (2017). Interpreting Attention weights In translating between languages, alignment is the process of matching words from the source sentence to words of the translated sentence. In the I love you example above, the second word love is aligned with the third word aime. Stacking soft row vectors together for je, t', and aime yields an alignment matrix: Sometimes, alignment can be multiple-to-multiple. For example, the English phrase look it up corresponds to cherchez-le. Thus, "soft" attention weights work better than "hard" attention weights (setting one attention weight to 1, and the others to 0), as we would like the model to make a context vector consisting of a weighted sum of the hidden vectors, rather than "the best one", as there may not be a best hidden vector. This view of the attention weights addresses part of the neural network explainability problem. Networks that perform verbatim translation without regard to word order would show the highest scores along the (dominant) diagonal of the matrix. The off-diagonal dominance shows that the attention mechanism is more nuanced. 
On the first pass through the decoder, 94% of the attention weight is on the first English word I, so the network offers the word je. On the second pass of the decoder, 88% of the attention weight is on the third English word you, so it offers t. On the last pass, 95% of the attention weight is on the second English word love, so it offers aime. seq2seq Problem statement Consider the seq2seq language English-to-French translation task. To be concrete, let us consider the translation of "the zone of international control <end>", which should translate to "la zone de contrôle international <end>". Here, we use the special <end> token as a control character to delimit the end of input for both the encoder and the decoder. An input sequence of text is processed by a neural network (which can be an LSTM, a Transformer encoder, or some other network) into a sequence of real-valued vectors $h = (h_0, h_1, \dots)$, where $h$ stands for "hidden vector". After the encoder has finished processing, the decoder starts operating over the hidden vectors, to produce an output sequence autoregressively. That is, it always takes as input both the hidden vectors produced by the encoder, and what the decoder itself has produced before, to produce the next output word: ($h$, "<start>") → "la" ($h$, "<start> la") → "la zone" ($h$, "<start> la zone") → "la zone de" ... ($h$, "<start> la zone de contrôle international") → "la zone de contrôle international <end>" Here, we use the special <start> token as a control character to delimit the start of input for the decoder. The decoding terminates as soon as "<end>" appears in the decoder output. Attention weights As hand-crafting weights defeats the purpose of machine learning, the model must compute the attention weights on its own. Taking analogy from the language of database queries, we make the model construct a triple of vectors: key, query, and value. The rough idea is that we have a "database" in the form of a list of key-value pairs. 
The decoder sends in a query, and obtains a reply in the form of a weighted sum of the values, where the weight is proportional to how closely the query resembles each key. The decoder first processes the "<start>" input partially, to obtain an intermediate vector $h_0^d$, the 0th hidden vector of the decoder. Then, the intermediate vector is transformed by a linear map into a query vector $q_0$. Meanwhile, the hidden vectors outputted by the encoder are transformed by another linear map into key vectors $k_0, k_1, \dots$. The linear maps are useful for providing the model with enough freedom to find the best way to represent the data. Now, the query and keys are compared by taking dot products: $q_0 \cdot k_0, q_0 \cdot k_1, \dots$. Ideally, the model should have learned to compute the keys and values, such that $q_0 \cdot k_0$ is large, $q_0 \cdot k_1$ is small, and the rest are very small. This can be interpreted as saying that the attention weight should be mostly applied to the 0th hidden vector of the encoder, a little to the 1st, and essentially none to the rest. In order to make a properly weighted sum, we need to transform this list of dot products into a probability distribution over the positions. This can be accomplished by the softmax function, thus giving us the attention weights: $(w_0, w_1, \dots) = \mathrm{softmax}(q_0 \cdot k_0, q_0 \cdot k_1, \dots)$. This is then used to compute the context vector: $c_0 = \sum_i w_i v_i$, where $v_i$ are the value vectors, linearly transformed by another matrix to provide the model with freedom to find the best way to represent values. Without these matrices, the model would be forced to use the same hidden vector for both key and value, which might not be appropriate, as these two tasks are not the same. This is the dot-attention mechanism. The particular version described in this section is "decoder cross-attention", as the output context vector is used by the decoder, and the input keys and values come from the encoder, but the query comes from the decoder, thus "cross-attention". More succinctly, we can write it as $c_0 = \mathrm{softmax}(q_0 K^{\mathsf T})\,V$, where the matrix $K$ is the matrix whose rows are $k_0, k_1, \dots$. 
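The query/key/value computation just described can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular library's implementation; the dimension `d`, the sequence length, and all variable names are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d = 8                               # hidden size (illustrative)
H = rng.normal(size=(5, d))         # encoder hidden vectors h_0 .. h_4
h_dec = rng.normal(size=d)          # decoder's intermediate hidden vector

# Linear maps giving the model freedom to represent queries, keys, values.
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))

q = h_dec @ W_Q                     # query from the decoder state
K = H @ W_K                         # keys from the encoder states
V = H @ W_V                         # values from the encoder states

w = softmax(q @ K.T)                # attention weights: a distribution over positions
c = w @ V                           # context vector: weighted sum of the values
```

The softmax guarantees that `w` is a probability distribution, so the context vector is a convex combination of the value vectors rather than a hard selection of "the best one".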
Note that the querying vector is not necessarily the same as the key-value vector. In fact, it is theoretically possible for query, key, and value vectors to all be different, though that is rarely done in practice. Variants Many variants of attention implement soft weights, such as fast weight programmers, or fast weight controllers (1992). A "slow" neural network outputs the "fast" weights of another neural network through outer products. The slow network learns by gradient descent. It was later renamed as "linearized self-attention". Other variants include Bahdanau-style attention, also referred to as additive attention; Luong-style attention, which is known as multiplicative attention; highly parallelizable self-attention, introduced in 2016 as decomposable attention and successfully used in transformers a year later; and positional attention and factorized positional attention. For convolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention, channel attention, or combinations. These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula above. Self-attention Self-attention is essentially the same as cross-attention, except that query, key, and value vectors all come from the same model. Both encoder and decoder can use self-attention, but with subtle differences. 
For encoder self-attention, we can start with a simple encoder without self-attention, such as an "embedding layer", which simply converts each input word into a vector by a fixed lookup table. This gives a sequence of hidden vectors $h_0, h_1, \dots$. These can then be applied to a dot-product attention mechanism, to obtain a new sequence of hidden vectors. This can be applied repeatedly, to obtain a multilayered encoder. This is the "encoder self-attention", sometimes called the "all-to-all attention", as the vector at every position can attend to every other. Masking For decoder self-attention, all-to-all attention is inappropriate, because during the autoregressive decoding process, the decoder cannot attend to future outputs that have yet to be decoded. This can be solved by forcing the attention weights $w_{ij} = 0$ for all $j > i$, called "causal masking". This attention mechanism is the "causally masked self-attention". Optimizations Flash attention The size of the attention matrix is proportional to the square of the number of input tokens. Therefore, when the input is long, calculating the attention matrix requires a lot of GPU memory. Flash attention is an implementation that reduces the memory needs and increases efficiency without sacrificing accuracy. It achieves this by partitioning the attention computation into smaller blocks that fit into the GPU's faster on-chip memory, reducing the need to store large intermediate matrices and thus lowering memory usage while increasing computational efficiency. Mathematical representation Standard Scaled Dot-Product Attention For matrices $Q$, $K$, and $V$, the scaled dot-product, or QKV, attention is defined as $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\mathsf T}}{\sqrt{d_k}}\right)V$, where ${}^{\mathsf T}$ denotes transpose and the softmax function is applied independently to every row of its argument. The matrix $Q$ contains queries, while matrices $K$ and $V$ jointly contain an unordered set of key-value pairs. 
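The scaled dot-product attention with causal masking described above can be sketched as follows. This is a NumPy illustration under my own naming assumptions (`masked_self_attention`, the sizes `n` and `d`), not the canonical implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def masked_self_attention(X, W_Q, W_K, W_V):
    """Causally masked self-attention over a sequence X of shape (n, d)."""
    n, d = X.shape
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    scores = Q @ K.T / np.sqrt(d)            # scaled dot products
    mask = np.triu(np.ones((n, n)), k=1)     # strictly upper triangular
    scores = np.where(mask == 1, -np.inf, scores)  # -inf above the diagonal
    A = softmax(scores)                      # weights become lower triangular
    return A @ V, A

rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))
W = [rng.normal(size=(d, d)) for _ in range(3)]
out, A = masked_self_attention(X, *W)
```

Setting the masked scores to negative infinity (rather than zero) is what makes the corresponding softmax weights exactly zero, so row i of the output depends only on positions 0..i.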
Value vectors in matrix $V$ are weighted using the weights resulting from the softmax operation, so that the rows of the output matrix are confined to the convex hull of the points given by the rows of $V$. To understand the permutation invariance and permutation equivariance properties of QKV attention, let $A$ and $B$ be permutation matrices; and $D$ an arbitrary matrix. The softmax function is permutation equivariant in the sense that $\mathrm{softmax}(ADB) = A\,\mathrm{softmax}(D)\,B$. By noting that the transpose of a permutation matrix is also its inverse, it follows that QKV attention is equivariant with respect to re-ordering the queries (rows of $Q$); and invariant to re-ordering of the key-value pairs in $K$ and $V$. These properties are inherited when applying linear transforms to the inputs and outputs of QKV attention blocks. For example, a simple self-attention function defined as $X \mapsto \mathrm{Attention}(X, X, X)$ is permutation equivariant with respect to re-ordering the rows of the input matrix $X$ in a non-trivial way, because every row of the output is a function of all the rows of the input. Similar properties hold for multi-head attention, which is defined below. Masked Attention When QKV attention is used as a building block for an autoregressive decoder, and when at training time all input and output matrices have $n$ rows, a masked attention variant is used: $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\mathsf T}}{\sqrt{d_k}} + M\right)V$ where the mask $M$ is a strictly upper triangular matrix, with zeros on and below the diagonal and $-\infty$ in every element above the diagonal. The softmax output is then lower triangular, with zeros in all elements above the diagonal. The masking ensures that for all $i < j$, row $i$ of the attention output is independent of row $j$ of any of the three input matrices. The permutation invariance and equivariance properties of standard QKV attention do not hold for the masked variant. Multi-Head Attention Multi-head attention is computed as $\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)W^{O}$, where each head is computed with QKV attention as $\mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})$, and $W_i^{Q}$, $W_i^{K}$, $W_i^{V}$, $W^{O}$ are parameter matrices. The permutation properties of (standard, unmasked) QKV attention apply here also. 
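A short NumPy sketch of multi-head self-attention, including a numerical check of the permutation-equivariance property discussed above. All names and sizes here are illustrative assumptions on my part.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Standard scaled dot-product attention.
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

def multi_head(X, heads, W_O):
    # heads: list of (W_Q, W_K, W_V) parameter triples, one per head.
    outs = [attention(X @ W_Q, X @ W_K, X @ W_V) for W_Q, W_K, W_V in heads]
    return np.concatenate(outs, axis=-1) @ W_O  # concatenate heads, then project

rng = np.random.default_rng(0)
n, d, h, d_head = 4, 8, 2, 4
heads = [tuple(rng.normal(size=(d, d_head)) for _ in range(3)) for _ in range(h)]
W_O = rng.normal(size=(h * d_head, d))
X = rng.normal(size=(n, d))
Y = multi_head(X, heads, W_O)

# Permutation equivariance: permuting the rows of X permutes the rows of Y.
P = np.eye(n)[[2, 0, 3, 1]]
assert np.allclose(multi_head(P @ X, heads, W_O), P @ multi_head(X, heads, W_O))
```

The final assertion is a direct numerical check of the claim that (unmasked) multi-head self-attention is equivariant with respect to re-ordering the rows of the input matrix.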
For permutation matrices, the same argument shows that multi-head self-attention is equivariant with respect to re-ordering of the rows of the input matrix $X$. Bahdanau (Additive) Attention scores each key against the query as $s(q, k) = v^{\mathsf T}\tanh(W_1 q + W_2 k)$, where $v$, $W_1$ and $W_2$ are learnable weight matrices. Luong Attention (General) scores as $s(q, k) = q^{\mathsf T} W k$, where $W$ is a learnable weight matrix. See also Recurrent neural network seq2seq Transformer (deep learning architecture) Attention Dynamic neural network References External links Dan Jurafsky and James H. Martin (2022) Speech and Language Processing (3rd ed. draft, January 2022), ch. 10.4 Attention and ch. 9.7 Self-Attention Networks: Transformers Alex Graves (4 May 2020), Attention and Memory in Deep Learning (video lecture), DeepMind / UCL, via YouTube Machine learning
Attention (machine learning)
[ "Engineering" ]
3,861
[ "Artificial intelligence engineering", "Machine learning" ]
66,001,579
https://en.wikipedia.org/wiki/Even%E2%80%93even%20nucleus
In atomic physics, even–even (EE) nuclei are nuclei with an even number of neutrons and an even number of protons. Even-mass-number nuclei, which comprise 151/251 = ~60% of all stable nuclei, are bosons, i.e. they have integer spin. The vast majority of them, 146 out of 151, belong to the EE class; they have spin 0 because of pairing effects. See also Even and odd atomic nuclei Nuclear shell model References Bosons Atomic physics Subatomic particles with spin 0
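The parity classification above can be sketched in a few lines of Python; the function and label names are my own illustrative choices, not standard terminology from the article.

```python
def nuclide_class(Z, A):
    """Classify a nuclide by the parity of its proton number Z and neutron number N."""
    N = A - Z  # neutron number from mass number A
    if Z % 2 == 0 and N % 2 == 0:
        return "even-even"  # integer spin (boson); ground-state spin 0 via pairing
    if Z % 2 == 1 and N % 2 == 1:
        return "odd-odd"
    return "even-odd"

# A few examples (Z = protons, A = mass number):
assert nuclide_class(2, 4) == "even-even"    # helium-4
assert nuclide_class(8, 16) == "even-even"   # oxygen-16
assert nuclide_class(6, 13) == "even-odd"    # carbon-13
assert nuclide_class(7, 14) == "odd-odd"     # nitrogen-14
```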
Even–even nucleus
[ "Physics", "Chemistry" ]
112
[ "Atomic, molecular, and optical physics stubs", "Quantum mechanics", "Bosons", "Subatomic particles", "Atomic physics", "Atomic, molecular, and optical physics", "Physical chemistry stubs", "Matter" ]
66,002,370
https://en.wikipedia.org/wiki/Vaccination%20requirements%20for%20international%20travel
Vaccination requirements for international travel are the aspect of vaccination policy that concerns the movement of people across borders. Countries around the world require travellers departing to other countries, or arriving from other countries, to be vaccinated against certain infectious diseases in order to prevent epidemics. At border checks, these travellers are required to show proof of vaccination against specific diseases; the most widely used vaccination record is the International Certificate of Vaccination or Prophylaxis (ICVP or Carte Jaune/Yellow Card). Some countries require information about a passenger's vaccination status in a passenger locator form. Historic requirements Smallpox (1944–1981) The first International Certificate of Vaccination against Smallpox was developed by the 1944 International Sanitary Convention (itself an amendment of the 1926 International Sanitary Convention on Maritime Navigation and the 1933 International Sanitary Convention for Aerial Navigation). The initial certificate was valid for a maximum of three years. The policy had a few flaws: the smallpox vaccination certificates were not always checked by qualified airport personnel, or when passengers transferred at airports in smallpox-free countries. Travel agencies mistakenly provided certificates to some unvaccinated customers, and there were some instances of falsified documents. Lastly, a small number of passengers carrying valid certificates still contracted smallpox because they were improperly vaccinated. However, all experts agree that the mandatory possession of vaccination certificates significantly increased the number of travellers who were vaccinated, and thus contributed to preventing the spread of smallpox, especially when the rapid expansion of air travel in the 1960s and 1970s reduced the travelling time from endemic countries to all other countries to just a few hours. 
After smallpox was successfully eradicated in 1980, the International Certificate of Vaccination against Smallpox was cancelled in 1981, and the new 1983 form lacked any provision for smallpox vaccination. Current requirements Yellow fever Travellers who wish to enter certain countries or territories must be vaccinated against yellow fever ten days before crossing the border, and be able to present a vaccination record/certificate at the border checks. In most cases, this travel requirement depends on whether the country they are travelling from has been designated by the World Health Organization as being a "country with risk of yellow fever transmission". In a few countries, it does not matter which country the traveller comes from: everyone who wants to enter these countries must be vaccinated against yellow fever. There are exemptions for newborn children; in most cases, any child who is at least nine months or one year old needs to be vaccinated. Polio Travellers who wish to enter or leave certain countries must be vaccinated against polio, usually at most twelve months and at least four weeks before crossing the border, and be able to present a vaccination record/certificate at the border checks. Most requirements apply only to travel to or from so-called polio-endemic, polio-affected, polio-exporting, polio-transmission, or "high-risk" countries. As of August 2020, Afghanistan and Pakistan are the only polio-endemic countries in the world (where wild polio has not yet been eradicated). Several countries have additional precautionary polio vaccination travel requirements, for example to and from "key at-risk countries", which as of December 2020 include China, Indonesia, Mozambique, Myanmar, and Papua New Guinea. 
Meningococcal meningitis Travellers who wish to enter or leave certain countries or territories must be vaccinated against meningococcal meningitis, preferably 10–14 days before crossing the border, and be able to present a vaccination record/certificate at the border checks. Countries with required meningococcal vaccination for travellers include The Gambia, Indonesia, Lebanon, Libya, the Philippines, and most importantly and extensively Saudi Arabia for Muslims visiting or working in Mecca and Medina during the Hajj or Umrah pilgrimages. For some countries in African meningitis belt, vaccinations prior to entry are not required, but highly recommended. COVID-19 During the COVID-19 pandemic, several COVID-19 vaccines were developed, and in December 2020 the first vaccination campaign was planned. Anticipating the vaccine, on 23 November 2020, Qantas announced that the company would ask for proof of COVID-19 vaccination from international travellers. According to Alan Joyce, the firm's CEO, a coronavirus vaccine would become a "necessity" when travelling: "We are looking at changing our terms and conditions to say for international travellers, we will ask people to have a vaccination before they can get on the aircraft." Australian Prime Minister Scott Morrison subsequently announced that all international travellers who fly to Australia without proof of a COVID-19 vaccination will be required to quarantine at their own expense. Victoria Premier Daniel Andrews and the CEOs of Melbourne Airport, Brisbane Airport and Flight Centre all supported the Morrison government's "no jab, no fly" policy, with only Sydney Airport's CEO suggesting advanced testing might also be sufficient to eliminate quarantine in the future. The International Air Transport Association (IATA) announced that it was almost finished with developing a digital health pass which states air passengers' COVID-19 testing and vaccination information to airlines and governments. 
Korean Air and Air New Zealand were seriously considering mandatory vaccination as well, but would negotiate it with their respective governments. KLM CEO Pieter Elbers responded on 24 November that KLM does not yet have any plans for mandatory vaccination on its flights. Brussels Airlines and Lufthansa said they had no plans yet on requiring passengers to present proof of vaccination before boarding, but Brussels Airport CEO Arnaud Feist agreed with Qantas' policy, stating: "Sooner or later, having proof of vaccination or a negative test will become compulsory." Ryanair announced it would not require proof of vaccination for air travel within the EU; EasyJet stated it would not require any proof at all. The Irish Times commented that a vaccination certificate for flying was quite common in countries around the world for other diseases, such as for yellow fever in many African countries. On 25 November, separately from IATA's digital health pass initiative, five major airlines – United Airlines, Lufthansa, Virgin Atlantic, Swiss International Air Lines, and JetBlue – announced the 1 December 2020 introduction of the CommonPass, which shows the results of passengers' COVID-19 tests. It was designed as an international standard by the World Economic Forum and The Commons Project, and set up in such a way that it could also be used to record vaccination results in the future. It standardises test results and aims to prevent forgery of vaccination records, while storing only limited data on a passenger's phone to safeguard their privacy. The CommonPass had already successfully undergone a trial period in October with United Airlines and Cathay Pacific Airways. 
On 26 November, the Danish Ministry of Health confirmed that it was working on a COVID-19 "vaccine passport" or simply Vaccination card which would likely not only work as proof of vaccination for air travel, but also for other activities such as concerts, private parties and access to various businesses, a perspective welcomed by the Confederation of Danish Industry. The Danish College of General Practitioners also welcomed the project, saying that it doesn't force anyone to vaccinate, but encourages them to do so if they want to enjoy certain privileges in society. Irish Foreign Minister Simon Coveney said on 27 November 2020 that, although he "currently has no plans" for a passport vaccination stamp, his government was working on changing the passenger locator form to include proof of PCR negative tests for the coronavirus, and that it was likely to be further adjusted to include vaccination data when a COVID-19 vaccine would become available. Coveney stressed that "We do not want, following enormous efforts and sacrifices from people, to reintroduce the virus again through international travel, which is a danger if it is not managed right." IATA Travel Pass app The IATA Travel Pass application for smartphone has been developed by the International Air Transport Association (IATA) in early 2021. The mobile app standardizes the health verification process confirming whether passengers have been vaccinated against, or tested negative for, COVID-19 prior to travel. Passengers will use the app to create a digital passport linked to their e-passport, receive test results and vaccination details from laboratories, and share that information with airlines and authorities. The application is intended to replace the existing paper-based method of providing proof of vaccination in international travel, colloquially known as the Yellow Card. 
Trials of the application are carried out by a number of airlines including Singapore Airlines, Emirates, Qatar Airways, Etihad and Air New Zealand. It has been opined that many countries will increasingly consider the vaccination status of travellers when deciding whether to allow them entry or to require them to quarantine, since recently published research shows that the Pfizer vaccine's effect lasts for at least six months. Recommendations Various vaccines are not legally required for travellers, but highly recommended by the World Health Organization. For example, for areas with risk of meningococcal meningitis infection in countries in the African meningitis belt, vaccinations prior to entry are not required by these countries, but nevertheless highly recommended by the WHO. As of July 2019, Ebola vaccines and malaria vaccines were still in development and not yet recommended for travellers. Instead, the WHO recommends various other means of prevention, including several forms of chemoprophylaxis, in areas where there is a significant risk of becoming infected with malaria. See also International Certificate of Vaccination or Prophylaxis (ICVP), also known as Carte Jaune or Yellow Card Travel medicine Immunity passport Notes References External links Vaccination requirements and recommendations for international travellers EU & COVID vaccine: Health requirements to travel to Europe Information about Vaccine Passports Immunology International travel documents Medical records Passports International responses to the COVID-19 pandemic Tropical diseases Vaccination law World Health Organization Impact of the COVID-19 pandemic on tourism
Vaccination requirements for international travel
[ "Biology" ]
2,110
[ "Biotechnology law", "Immunology", "Vaccination law", "Vaccination" ]
66,002,558
https://en.wikipedia.org/wiki/Onion%20yellow%20dwarf%20virus
Onion yellow dwarf virus (OYDV) is a plant virus in the genus Potyvirus that has been identified worldwide and mainly infects species of Allium such as onion, garlic, and leek. The virus causes mild to severe leaf malformation, and bulb reduction up to sixty percent has been observed in garlic. Genome The full genome of OYDV is around 10,538 nucleotides long and encodes a polyprotein of 3,403 amino acids. Its P3 gene is longer than those of other known Potyviruses. OYDV is the first potyvirus found which has natural deletion mutants lacking the N-terminal region of helper-component proteinase (HC-Pro). The mutant isolates are common. Garlic plants grown commercially are generally co-infected with both the normal and attenuated isolates. RNA silencing suppressor activities in isolates, which lack the long stretch of the N-terminal amino acids (~ 100 residues) in their HC-Pro gene, are observed to be low. Transmission Isolates with complete HC-Pro sequences were non-persistently transmitted by aphids on their own, while the isolates with short HC-Pros (OYDV-S) were only aphid transmissible when they were co-infected with leek yellow stripe virus (LYSV), another potyvirus that mostly infects Allium spp. LYSV HC-Pro was assumed to interlink both LYSV and OYDV-S with the aphid stylet. OYDV is not transmitted by dodder. References Potyviruses Viral plant pathogens and diseases
Onion yellow dwarf virus
[ "Biology" ]
348
[ "Virus stubs", "Viruses" ]
66,004,557
https://en.wikipedia.org/wiki/Cora%20G.%20Burwell
Cora Gertrude Burwell (June 25, 1883 – June 20, 1982) was an American astronomical researcher who specialized in stellar spectroscopy. She was based at Mount Wilson Observatory from 1907 to 1949. Early life Cora Gertrude Burwell was born in Massachusetts and raised in Stafford Springs, Connecticut. She graduated from Mount Holyoke College in 1906 and was active in Holyoke alumnae activities in the Los Angeles area. Career In July 1907, Burwell was appointed to a "human computer" position at Mount Wilson Observatory. In 1910, she attended the fourth conference of the International Union for Cooperation in Solar Research, when it was held at Mount Wilson. Burwell specialized in stellar spectroscopy. She was solo author on some scientific publications, and co-authored several others (on some of which she was lead author), with notable collaborators including Dorrit Hoffleit, Henrietta Swope, Walter S. Adams, and Paul W. Merrill. With Merrill she compiled several catalogs of Be stars, in 1933, 1943, 1949, and 1950. She also helped to tend the Mount Wilson Observatory Library. She retired from the observatory in 1949, but continued speaking about astronomy to community groups. She also published a book of poetry, Neatly Packed. Personal life Cora Burwell lived in Pasadena, and later in Monrovia with her sister, Priscilla Burwell. She died in 1982, two days before her 99th birthday, in Los Angeles. References 1883 births 1982 deaths 20th-century American women scientists Human computers Mount Holyoke College alumni American women astronomers People from Stafford Springs, Connecticut Scientists from Massachusetts Scientists from Connecticut 20th-century American astronomers Spectroscopists
Cora G. Burwell
[ "Technology" ]
337
[ "Human computers", "History of computing" ]
66,004,740
https://en.wikipedia.org/wiki/Energy%20Regulators%20Association%20of%20East%20Africa
The Energy Regulators Association of East Africa (EREA) is a non-profit organisation mandated to spearhead harmonisation of energy regulatory frameworks, sustainable capacity building and information sharing among the energy regulatory bodies of the East African Community. Its key objective is to promote the independence of national regulators and support the establishment of a robust East African energy union. Foundation and mission On 28 May 2008, four national energy regulatory authorities voluntarily signed a "Memorandum of Understanding" for the establishment of the Energy Regulators Association of East Africa (EREA). Subsequently, it was recognized by the 8th Sectoral Council on Energy of the East African Community (EAC) as a forum of energy regulators in the EAC on 21 June 2013. It was registered by the United Republic of Tanzania on 23 May 2019 as a company limited by guarantee without share capital under the Companies Act, 2002, and the Memorandum of Association. The EREA represents seven members – the national energy regulators from the EAC Member States. The EREA works closely with the EAC, the African Union, the Eastern Africa Power Pool (EAPP)-Independent Regulatory Board (IRB), the National Association of Regulatory Utility Commissioners and the Regional Association of Energy Regulators for Eastern and Southern Africa (RAERESA). EREA's seat is in Arusha, Tanzania. Objectives and functions EREA's strategy comprises nine Key Result Areas, and its objectives are summarised as follows: Facilitating the harmonization of NRI’s policies, tariff structures and legislation in the Member States; Sustainable capacity building through the establishment of the Energy Regulation Centre of Excellence (ERCE) to support regional member institutions and contribute to the advancement of research on regulatory issues; Promoting regional co-operation in the planning and development of an integrated energy market and infrastructure. 
Promoting independent regulation in the East African Community. Among its other objectives, EREA was also established to strengthen economic, commercial, social, cultural, political, technological and other ties for fast, balanced and sustainable development within the East African region. Members and Governance EREA members include the Energy and Water Utilities Regulatory Authority (EWURA) of Tanzania, the Energy and Petroleum Regulatory Authority (EPRA) of Kenya, the Zanzibar Utility Regulatory Authority (ZURA) of Zanzibar, and the Petroleum Authority of Uganda (PAU) of Uganda. Others include the Electricity Regulatory Authority (ERA) of Uganda, the Rwanda Utilities Regulatory Authority (RURA) of Rwanda and the Autorité de Régulation des secteurs de l’Eau potable et de l’Energie (AREEN) of Burundi. EREA is also supporting the Government of the Republic of South Sudan to establish an independent regulatory authority which will eventually be integrated within the regional regulatory association. The Association has four organs and applies the principle of rotating leadership of the organs among its members. These organs are: (a) The General Assembly (G.A.) – the supreme organ, currently chaired by AREEN-Burundi. The GA is the meeting of chairpersons and chief executive officers of the national regulatory authorities in the EAC. (b) The Executive Council – currently chaired by EWURA-Tanzania. This is a meeting of Chief Executive Officers/Directors General of the national regulatory authorities in the EAC. (c) The Secretariat – headed by the Executive Secretary, Dr. Geoffrey Aori Mabea, appointed for a three-year term; the position rotates among the countries of the East African Community. (d) Three Specialized Portfolio Committees for handling economic, legal, and technical matters of the Association. 
EAC Electricity Markets The East African Community's Electricity Regulatory Index (ERI) The African Development Bank carried out a third Electricity Regulatory Index for Africa to assess the three main pillars of regulation: the Regulatory Governance Index (RGI), the Regulatory Substance Index (RSI), and the Regulatory Outcome Index (ROI). In the report, the East African Community member states show a significant improvement in the key regulatory indices. According to the African Development Bank, Uganda has maintained the top position for two consecutive years. End User Electricity Tariff Electricity Statistics for EAC See also Common Market for Eastern and Southern Africa (COMESA) Energy Regulators Regional Association (ERRA) Eastern Africa Power Pool (EAPP-IRB) Energy Regulation Centre of Excellence (ERCE) Regional Association of Energy Regulators for Eastern and Southern Africa RAERESA References External links EREA website EREA Magazine website Energy Regulation Centre of Excellence Organizations established in 2008 Energy markets International energy organizations East African Community Energy regulatory authorities Non-profit organizations based in Africa
Energy Regulators Association of East Africa
[ "Engineering" ]
922
[ "International energy organizations", "Energy organizations" ]
66,005,131
https://en.wikipedia.org/wiki/V841%20Ophiuchi
V841 Ophiuchi (Nova Ophiuchi 1848) was a bright nova discovered by John Russell Hind on 27 April 1848. It was the first object of its type discovered since 1670. At the time of its discovery, it had an apparent magnitude of 5.6, but may have reached magnitude 2 at its peak, making it easily visible to the naked eye. Near peak brightness it was described as "bright red" or "scarlet", probably due to Hα line emission. Its brightness is currently varying slowly around magnitude 13.5. The area of the sky surrounding this nova had been examined frequently by astronomers prior to the nova's discovery, because it was near the reported location of "52 Serpentis", a star John Flamsteed had included in his catalogue with erroneous coordinates. Like all cataclysmic variable (CV) stars, novae are short-period binary stars with a "donor" star transferring material to a white dwarf. In the case of V841 Ophiuchi, the orbital period is 14.43 hours, which is unusually long for a CV; the vast majority of such systems have periods below 10 hours. Peters and Thorstensen derive an orbital inclination of with respect to our line of sight which, combined with the relatively large separation implied by the orbital period, would explain why V841 Ophiuchi is not an eclipsing binary. They also found that the donor star is somewhat cooler than the Sun, likely early K-type or late G-type. Modern observations during a 30-year time interval show that V841 Ophiuchi undergoes regular brightness variations with a period of 3.4 years, and an amplitude of about 0.3 magnitudes, which may be due to oscillations in the donor star similar to our Sun's solar cycle. Non-periodic "flickering" brightness variations have been reported, on timescales as short as 100 seconds. References Novae Hercules (constellation) 1848 in science Ophiuchi, V841
V841 Ophiuchi
[ "Astronomy" ]
411
[ "Novae", "Astronomical events", "Hercules (constellation)", "Constellations" ]
66,005,728
https://en.wikipedia.org/wiki/Safe%20listening
Safe listening is a framework for health promotion actions to ensure that sound-related recreational activities (such as concerts, nightclubs, and listening to music, broadcasts, or podcasts) do not pose a risk to hearing. While research shows that repeated exposures to any loud sounds can cause hearing disorders and other health effects, safe listening applies specifically to voluntary listening through personal listening systems, personal sound amplification products (PSAPs), or at entertainment venues and events. Safe listening promotes strategies to prevent negative effects, including hearing loss, tinnitus, and hyperacusis. While safe listening does not address exposure to unwanted sounds (which are termed noise) – for example, at work or from other noisy hobbies – it is an essential part of a comprehensive approach to total hearing health. The risk of negative health effects from sound exposures (be it noise or music) is primarily determined by the intensity of the sound (loudness), duration of the event, and frequency of that exposure. These three factors characterize the overall sound energy level that reaches a person's ears and can be used to calculate a noise dose. They have been used to determine the limits of noise exposure in the workplace. Both regulatory and recommended limits for noise exposure were developed from hearing and noise data obtained in occupational settings, where exposure to loud sounds is frequent and can last for decades. Although specific regulations vary across the world, most workplace best practices consider 85 decibels (dB A-weighted) averaged over eight hours per day as the highest safe exposure level for a 40-year lifetime. Using an exchange rate, typically 3 dB, allowable listening time is halved as the sound level increases by the selected rate. For example, a sound level as high as 100 dBA can be safely listened to for only 15 minutes each day. 
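The 3 dB exchange-rate arithmetic described above can be sketched in a few lines of Python. This is an illustrative calculation only; the function name and default parameters are assumptions for the sketch, not reference code from any standard.

```python
def allowable_hours(level_dba, criterion_db=85.0, exchange_rate_db=3.0,
                    reference_hours=8.0):
    """Permissible daily listening time at a given A-weighted sound level.

    Each `exchange_rate_db` increase above the criterion level halves the
    allowable duration (85 dBA for 8 hours is the common occupational
    reference cited above).
    """
    return reference_hours / 2 ** ((level_dba - criterion_db) / exchange_rate_db)

print(allowable_hours(85))   # 8.0 hours at the criterion level
print(allowable_hours(100))  # 0.25 hours, i.e. the 15 minutes per day cited above
```

Under these assumptions, a 95 dBA environment would allow roughly one hour per day, matching the halving-per-3-dB rule in the text.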
Because of their availability, occupational data have been adapted to determine damage-risk criteria for sound exposures outside of work. In 1974, the US Environmental Protection Agency recommended a 24-hour exposure limit of 70 dBA, taking into account the lack of a "rest period" for the ears when exposures are averaged over 24 hours and can occur every day of the year (workplace exposure limits assume 16 hours of quiet between shifts and two days a week off). In 1995, the World Health Organization (WHO) similarly concluded that 24-hour average exposures at or below 70 dBA pose a negligible risk for hearing loss over a lifetime. Following reports on hearing disorders from listening to music, additional recommendations and interventions to prevent adverse effects from sound-related recreational activities appear necessary. Public health and community interventions Several organizations have developed initiatives to promote safe listening habits. The U.S. National Institute on Deafness and Other Communication Disorders (NIDCD) has guidelines for safely listening to personal music players geared toward the "tween" population (children aged 9–13 years). The Dangerous Decibels program promotes the use of "Jolene" mannequins to measure output of PLSs as an educational tool to raise awareness of overexposure to sound through personal listening. This type of mannequin is simple and inexpensive to construct and is often an attention-grabber at schools, health fairs, clinic waiting rooms, etc. The National Acoustic Laboratories (NAL), the research division of Hearing Australia, developed the Know Your Noise initiative, funded by the Australian Government Department of Health. The Know Your Noise website has a Noise Risk Calculator that makes it possible and easy for users to identify and understand their levels of noise exposure (at work and play), and possible risks for hearing damage. 
Users can also take an online hearing test to see how well they hear in a noisy background. The WHO launched the Make Listening Safe initiative as part of the celebration of World Hearing Day on 3 March 2015. The initiative's main goal is to ensure that people of all ages can enjoy listening to music and other audio media in a manner that does not create a hearing risk. Noise-induced hearing loss, hyperacusis, and tinnitus have been associated with frequent high-volume use of devices such as headphones, headsets, earpieces, earbuds, and True Wireless Stereo technologies of any type. Make Listening Safe aims to: raise awareness about safe listening practices, especially among the younger population; highlight the benefits of safe listening to policy-makers, health professionals, manufacturers, parents, and others; foster the development and implementation of standards applicable to personal audio devices and recreational venues to cover safe listening features; and become a depository of open-access resources and information on safe listening practices in at least six languages (Arabic, Chinese, English, French, Russian, and Spanish). In 2019, the World Health Organization published a toolkit for safe listening devices and systems that provides the rationale for the proposed strategies, and identifies actions that governments, industry partners and civil society can take. On 1 November 2023 the WHO launched a Make Listening Safe Campaign (MLSC) in the United Kingdom as a pilot of a strategy to encourage the adoption of safe listening practices amongst those between the ages of ten and forty. The MLSC UK will run a sequence of short campaigns focused on different themes, starting with avoidable risks amongst headphone users. It will include an ePetition requesting the government to adopt higher hearing safeguarding standards/regulations in line with the WHO/International Telecommunication Union (ITU) recommendations. 
The plan is to evaluate the effort and later roll it out to the WHO's other 193 member states. It includes an in-person launch event, public-education-focused campaigns, policy advocacy, and collaboration with various stakeholders, including governmental bodies, industry players, and healthcare professionals. Make Listening Safe is promoting the development of features in PLS to raise users' awareness of risky listening practices. In this context, the WHO partnered with the International Telecommunication Union (ITU) to develop suitable exposure limits for inclusion in the voluntary H.870 safety standards on "Guidelines for safe listening devices/systems." Experts in the fields of audiology, otology, public health, epidemiology, acoustics, and sound engineering, as well as professional organizations, standardization organizations, manufacturers, and users are collaborating on this effort. The Make Listening Safe initiative also covers entertainment venues. Average sound pressure levels (SPL) in nightclubs, discotheques, bars, gyms and live sports venues can be as high as 112 dB (A-weighted); sound levels at pop concerts may be even higher. Frequent exposure, or even a short exposure, to very high sound pressure levels such as these can be harmful. WHO reviewed existing noise regulations for various entertainment sites – including clubs, bars, concert venues, and sporting arenas – in countries around the world, and released a global Standard for Safe Listening Venues and Events as part of World Hearing Day 2022. Also released in 2022 were an mSafeListening handbook on how to create an mHealth safe listening program, and a media toolkit for journalists containing key information on how to talk about safe listening. Sound source interventions Personal listening systems (PLS) Personal listening systems are portable devices – usually an electronic player attached to headphones or earphones – which are designed for listening to various media, such as music or gaming. 
The output of such systems varies widely. Maximum output levels vary depending upon the specific devices and regional regulatory requirements. Typically, PLS users can choose to limit the volume between 75 and 105 dB SPL. The ITU and the WHO recommend that PLS be programmed with a monitoring function that sets a weekly sound exposure limit and provides alerts as users reach 100% of their weekly sound allowance. If users acknowledge the alert, they can choose whether or not to reduce the volume. But if the user does not acknowledge the alert, the device will automatically reduce the volume to a predetermined level (based on the mode selected, i.e. 80 or 75 dBA). By conveying exposure information in a way that can be easily understood by end-users, this recommendation aims to make it easier for listeners to manage their exposures and avoid any negative effects. The health app on iPhones, Apple Watches, and iPads incorporated this approach starting in 2019. These feature the opt-in Apple Hearing Study, part of the Research app that is being conducted in collaboration with the University of Michigan School of Public Health. Data is being shared with the WHO's Make Listening Safe initiative. Preliminary results released in March 2021, one year into the study, indicated that 25% of participants experienced ringing in their ears a few times a week or more, 20% of participants have hearing loss, and 10% have characteristics that are typical in cases of noise-induced hearing loss. Nearly 50% of participants reported that they had not had their hearing tested in at least 10 years. In terms of exposure levels, 25% of the participants experienced high environmental sound exposures. The International Electrotechnical Commission (IEC) published the standard IEC 62368-1, covering personal audio systems, in 2010. It defined safe output levels for PLSs as 85 dB or less, while allowing users to increase the volume to a maximum of 100 dBA. 
However, when users raise the volume to the maximum level, the standard specifies that an alert should pop up to warn the listener of the potential for hearing problems. The 2018 ITU and WHO standard H.870 "Guidelines for safe listening devices/systems" focuses on the management of weekly sound-dose exposure. This standard was based on the EN 50332-3 standard "Sound system equipment: headphones and earphones associated with personal music players – maximum sound pressure level measurement methodology – Part 3: measurement method for sound dose management." This standard defines a safe listening limit as a weekly sound dose equivalent to 80 dBA for 40 hours/week. Potential differences in children The frequent use of PLS among children has raised concerns about the potential risks that might be associated with such exposure. A systematic review and meta-analysis published in 2022 recorded an increased prevalence of risk of hearing loss compared to 2015 estimates among young people between 12 and 34 years of age who are exposed to high sound pressure levels (SPL) due to use of headphones and entertainment soundscapes. The authors included articles published between 2000 and 2021 that reported unsafe listening practices. The number of young people who may be at risk of hearing loss worldwide has been estimated from the total global estimates of the population aged 12 to 34 years. Thirty-three studies (corresponding to data from 35 medical records and 19,046 individuals) were included; 17 and 18 records focused on the use of personal listening systems and noisy entertainment venues, respectively. The pooled prevalence estimate of exposure to unsafe listening via personal listening systems was 23.81% (95% CI 18.99% to 29.42%). The model was adjusted according to the intensity and duration of exposure to identify an estimated prevalence of 48.2%. The estimated global number of young people who may be at risk of hearing loss due to exposure to unsafe listening practices ranged from 0.67 to 1.35 billion. 
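The weekly sound-dose limit described above (an allowance equivalent to 80 dBA for 40 hours per week, with a 3 dB exchange rate) lends itself to a simple budget calculation. The sketch below is a hypothetical illustration of that accounting, not the normative algorithm from ITU-T H.870; the function name and signature are assumptions.

```python
def weekly_dose_fraction(exposures, limit_dba=80.0, limit_hours=40.0,
                         exchange_rate_db=3.0):
    """Fraction of the weekly allowance (80 dBA for 40 h) consumed.

    `exposures` is a list of (level_dBA, hours) pairs; every 3 dB above
    the limit doubles how quickly the allowance is used up.
    """
    used = sum(hours * 2 ** ((level - limit_dba) / exchange_rate_db)
               for level, hours in exposures)
    return used / limit_hours

# 10 h at 80 dBA plus 1 h at 95 dBA: (10 + 32) / 40 = 1.05, slightly over budget
print(weekly_dose_fraction([(80, 10), (95, 1)]))
```

A device following the H.870 approach would alert the user once this fraction reaches 1.0, which is how a single loud hour can consume most of a week's allowance.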
The authors concluded that unsafe listening practices are highly prevalent worldwide and may put over 1 billion young people at risk of hearing loss. There is no agreement on the acceptable risk of noise-induced hearing loss in children; and adult damage-risk criteria may not be suitable for establishing safe listening levels for children due to differences in physiology and the more serious developmental impact of hearing loss early in life. One attempt to identify safe levels assumed that the most appropriate exposure limit for recreational noise exposure in children would aim to protect 99% of children from a shift in hearing exceeding 5 dB at 4 kHz after 18 years of noise exposure. Using estimates from the International Organization for Standardization (ISO 1999:2013), the authors calculated that 99% of children who are exposed from birth until the age of 18 years to 8-h average sound levels (LEX) of 82 dBA would have hearing thresholds of about 4.2 dB greater, indicating a shift in hearing ability. By including a 2 dBA margin of safety which reduces the 8-hr exposure allowance to 80 dBA, the study estimated a hearing change of 2.1 dB or less in 99% of children. To preserve the hearing from birth until the age of 18 years, it was recommended that noise exposures be limited to 75 dBA over a 24-hour period. Other researchers recommended that the weekly sound dose be limited to the equivalent of 75 dBA for 40 hours/week for children and users who are sensitive to intense sound stimulation. Personal sound amplification products (PSAPs) Personal sound amplification products are ear-level amplification devices intended for use by persons with normal hearing. The output levels of 27 PSAPs that were commercially available in Europe were analyzed in 2014. All of them had a maximum output level that exceeded 120 dB SPL; 23 (85%) exceeded 125 dB SPL, while 8 (30%) exceeded 130 dB SPL. None of the analyzed products had a level limiting option. 
The report triggered the development of a few standards for these devices. The ANSI/CTA standard 2051 on "Personal Sound Amplification Performance Criteria" followed in 2017. It specified a maximum output sound pressure level of 120 dB SPL. In 2019, the ITU published standard ITU-T H.871 called "Safe listening guidelines for personal sound amplifiers". This standard recommends that PSAPs measure the weekly sound dose and adhere to a weekly maximum of less than 80 dBA for 40 hours. PSAPs that cannot measure weekly sound dose should limit the maximum output of the device to 95 dBA. It also recommends that PSAPs provide clear alerts in their user guides, packaging, and ads mentioning the risks of ear damage that can result from using the device and providing information on how to avoid these risks. A technical paper describing how to test the compliance of various personal audio systems/devices to the essential/mandatory and optional features of Recommendation ITU-T H.870 was published in 2021. Entertainment venues Both those working in the music industry and those enjoying recreational music at venues and events can be at risk of experiencing hearing disorders. In 2019, the WHO published a report summarizing regulations for control of sound exposure in entertainment venues in Belgium, France, and Switzerland. The case studies were published as an initial step towards the development of a WHO regulatory framework for control of sound exposure in entertainment venues. In 2020, a couple of reports described exposure scenarios and procedures in use during entertainment events. These took into account the safety of those attending an event, those exposed occupationally to the high intensity music, as well as those in surrounding neighborhoods. Technical solutions, practices of monitoring and on-stage sound are presented, as well as the problems of enforcing environmental noise regulations in an urban environment, with country specific examples. 
Several different regulatory approaches have been implemented to manage sound levels and minimize the risk of hearing damage for those attending music venues. A report published in 2020 identified 18 regulations regarding sound levels in entertainment venues – 12 from Europe and the remainder from cities or states in North and South America. Legislative approaches include: sound level limitations, real-time sound exposure monitoring, mandatory supply of hearing protection devices, signage and warning requirements, loudspeaker placement restrictions, and ensuring patrons can access quiet zones or rest areas. The effectiveness of these measures in reducing the risk of hearing damage has not been evaluated, but the adaptation of the approaches described above is consistent with the general principles of the hierarchy of controls used to manage exposure to noise in workplaces. Patrons of music venues have indicated their preference for lower sound levels and can be receptive when earplugs are provided or made accessible. This finding may be region or country-specific. In 2018, the U.S. Centers for Disease Control and Prevention published the results of a survey of U.S. adults related to the use of a hearing protection device during exposure to loud sounds at recreational events. Overall, more than four of five reported never or seldom wearing hearing protection devices when attending a loud athletic or entertainment event. Adults aged 35 years and older were significantly more likely to not wear hearing protection than were young adults aged 18–24 years. Among adults who frequently enjoy attending sporting events, women were twice as likely as men to seldom or never wear hearing protection. Adults who were more likely to wear protection had at least some college education or had higher household incomes. Adults with hearing impairment or with a deaf or hard-of-hearing household member were significantly more likely to wear their protective devices. 
The challenges in implementing measures to reduce risks to hearing in a wide range of entertainment venues – whether through mandatory or voluntary guidelines, with or without enforcement – are significant: implementation requires involvement from many different professional groups and buy-in from both venue managers and users. The WHO and ITU Global Standard for Venues and Events, released on World Hearing Day 2022, offers resources to facilitate action. The standard details six features recommended for safe listening venues and events. The standard can be used by governments to implement legislation, by owners and managers of venues and events to protect their clientele, and by audio engineers and other staff. A 2023 survey showed that U.S. adults acknowledge the risks posed by high sound exposures at concerts and other events. Results indicated an interest in protective actions, such as limiting sound levels, posting warning signs, and wearing hearing protection. Fifty-four percent of the study participants agreed that sound levels at concert venues should be limited to reduce the risk of hearing disorders, seventy-five percent agreed that warning signs should be posted when sound levels are likely to exceed safe levels, and 61% of respondents stated that they would wear hearing protection if it was provided when sound levels were likely to exceed safe levels. Personal interventions While establishing effective public and community health interventions, enacting appropriate legislation and regulations, and developing pertinent standards for listening and audio systems are all important in establishing a societal infrastructure for safe listening, individuals can take steps to ensure that their personal listening habits minimize their risk of hearing problems. Personal safe listening strategies include: Listening to PLSs at safe levels, such as 60% of the volume range. 
Noise-cancelling headphones and sound-isolating earphones can help one avoid turning the volume up to overcome loud background noise. Sound measurement apps can help one find out how loud sounds are. If not measuring the sound levels, a good rule of thumb is that sounds are potentially hazardous if it is necessary to speak in a raised voice to be heard by someone an arm's length away. Moving away from the sound or using hearing protection are approaches to reduce exposure levels. Monitoring the amount of time spent in loud activities helps one manage risk. Whenever possible, take a break between exposures so the ears can rest and recover. Watching for warning signs of hearing loss. Tinnitus, difficulty hearing high pitched sounds (such as birds singing or cell phone notifications), and trouble understanding speech in background noise can be indicators of hearing loss. Getting a hearing test regularly. The American Speech Language Hearing Association recommends that school-aged children be screened for hearing loss annually from kindergarten through the third grade, then again in 7th and 11th grade. Adults should have their hearing tested every ten years until they reach age 50, and every three years after that. Hearing should be tested sooner if any warning signs develop. Teaching children and young adults about the hazards of overexposure to loud sounds and how to practice safe listening habits could help protect their hearing. Good role models in their own listening habits could also prompt healthy listening habits. Health care professionals have the opportunity to educate patients about relevant hearing risks and promote safe listening habits. As part of their health promotion activities, hearing professionals can recommend appropriate hearing protection when necessary and provide information, training and fit-testing to ensure individuals are adequately but not overly protected. 
Wearing earplugs at concerts has been shown to be an effective way to reduce post-concert temporary hearing changes. See also Sound Sound power level Noise-induced hearing loss Noise regulation Loud music Global Audiology Health problems of musicians Hearing Electronic Music Foundation Tinnitus Diplacusis Hyperacusis World Hearing Day Safe-in-Sound Award International Society of Audiology Acoustic trauma List of films featuring the deaf and hard of hearing References External links American Academy of Audiology, Audiological Services for Musicians and Music Industry Personnel, 2020. Apple Hearing Study, University of Michigan. Global Audiology, International Society of Audiology World Health Organization (WHO) Childhood hearing loss: act now, here's how infographic. Introduction to the World Health Organization program on hearing and its initiative to Make Listening Safe, Dr. Shelly Chadha, March 2015. World Health Organization (WHO) and International Telecommunication Union (ITU) Consultation on Make Listening Safe initiative, March 2015. World Health Organization (WHO), 2019. Toolkit for safe listening devices and systems. Safe listening devices and systems: a WHO-ITU standard. 2019. World Health Organization, Hearing loss due to recreational exposure to loud sounds: A review. World Health Organization, Regulation for control of sound exposure in entertainment venues. Case studies from Belgium, France and Switzerland. December 2019. World Health Organization, Make Listening Safe, Activities 2019. World Health Organization, Tips for safe listening 2019. Available in several languages. World Health Organization, Consultation on Make Listening Safe Initiative 2020. World Health Organization, World Report on Hearing, 2021. European Association of Hearing Aid Professionals (AEA). Make Listening Safe resources. Standards for Safe Listening – how they align and how some differ, ENT News, May 2020. National Acoustics Laboratories, Know your Noise. 
Information about noise or music exposure and its impact on your hearing health. Hearing Australia, Tips for safe listening using headphones and earbuds. National Center for Environmental Health, Centers for Disease Control and Prevention, Statistics about the Public Health Burden of Noise-Induced Hearing Loss. National Center for Environmental Health, May is Better Hearing and Speech Month (cdc.gov) 2021. National Center for Environmental Health, Centers for Disease Control and Prevention, Loud noise can cause hearing loss.  Resources. Centers for Disease Control and Prevention, Vital Signs: hearing loss. National Institute for Occupational Safety and Health (NIOSH), Centers for Disease Control and Prevention, Noise and hearing loss prevention. National Institute for Occupational Safety and Health (NIOSH), Centers for Disease Control and Prevention, Reducing the Risk of Hearing Disorders among Musicians. National Institute for Occupational Safety and Health, Centers for Disease Control and Prevention, NIOSH Sound Level Meter app. Safe-in-Sound Excellence in Hearing Loss Prevention Award winners. World Health Organization- Short videos on World Hearing Day materials, available in six languages. Listening Acoustics Audiology Audio engineering Consumer electronics Health communication Loudspeakers World Health Organization Health campaigns Hearing
Safe listening
[ "Physics", "Engineering" ]
4,655
[ "Electrical engineering", "Audio engineering", "Classical mechanics", "Acoustics" ]
66,007,428
https://en.wikipedia.org/wiki/PF-04745637
PF-04745637 is a drug which acts as a potent and selective antagonist of the TRPA1 receptor, with an IC50 of 17 nM, versus ~3 μM at the related TRPV1 and TRPM channels. It has anti-inflammatory effects and was developed as a potential treatment for conditions such as atopic dermatitis. References 4-Chlorophenyl compounds
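The IC50 figures quoted above imply the drug's fold-selectivity for TRPA1 over the related channels; a minimal sketch of that arithmetic (the ~3 μM off-target value is converted to nanomolar before dividing):

```python
# Fold-selectivity from the IC50 values stated in the article:
# 17 nM at TRPA1 vs ~3 uM (3000 nM) at TRPV1/TRPM.

def fold_selectivity(ic50_off_target_nm, ic50_target_nm):
    """Ratio of off-target to on-target IC50; higher means more selective."""
    return ic50_off_target_nm / ic50_target_nm

trpa1_ic50_nm = 17.0          # potency at the intended target, TRPA1
off_target_ic50_nm = 3000.0   # ~3 uM at TRPV1/TRPM channels

print(f"~{fold_selectivity(off_target_ic50_nm, trpa1_ic50_nm):.0f}-fold selective")
```

On these numbers the compound is roughly 170–180-fold selective for TRPA1, consistent with the article's description of it as "potent and selective."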
PF-04745637
[ "Chemistry" ]
89
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
66,007,719
https://en.wikipedia.org/wiki/WASP-63
WASP-63 or Kosjenka, also known as CD-38 2551, is a single star with an exoplanetary companion in the southern constellation of Columba. It is too faint to be visible with the naked eye, having an apparent visual magnitude of 11.1. The distance to this system is approximately based on parallax measurements, but it is drifting closer with a radial velocity of −24 km/s. Nomenclature The designation WASP-63 indicates that this was the 63rd star found to have a planet by the Wide Angle Search for Planets. In August 2022, this planetary system was included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Croatia, were announced in June 2023. WASP-63 is named Kosjenka and its planet is named Regoč, after characters from Croatian Tales of Long Ago by Ivana Brlić-Mažuranić. Stellar properties This is a G-type star with a stellar classification of G8; the luminosity class is currently unknown. The star is much older than the Sun at approximately 8.3 billion years. WASP-63 is slightly enriched in heavy elements, having 120% of the solar abundance of iron. The stellar radius is enlarged for a G8 star, and models suggest it has evolved into a subgiant star. It has 1.1 times the mass of the Sun and is spinning with a projected rotational velocity of 3 km/s. Planetary system In 2012, a transiting gas giant planet, WASP-63b, was detected on a tight, circular orbit. Its equilibrium temperature is , and its measured dayside temperature is . The planet is similar to Saturn in mass but is highly inflated due to its proximity to the parent star. The planetary atmosphere contains water and likely has a high cloud deck of indeterminate composition. References G-type subgiants Planetary systems with one confirmed planet Planetary transit variables Columba (constellation) J06172074-3819237 CD-38 02551 063 0483 Kosjenka
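The article notes that the system's distance comes from parallax measurements. A minimal sketch of that conversion (d in parsecs = 1000 / parallax in milliarcseconds); the parallax value below is purely illustrative, not WASP-63's actual measured parallax:

```python
# How an annual parallax measurement yields a distance estimate.
# The example parallax is hypothetical, for illustration only.

LY_PER_PARSEC = 3.26156  # light-years per parsec

def parallax_to_distance_pc(parallax_mas):
    """Distance in parsecs from a parallax angle in milliarcseconds."""
    return 1000.0 / parallax_mas

example_parallax_mas = 3.4  # hypothetical value
d_pc = parallax_to_distance_pc(example_parallax_mas)
print(f"{example_parallax_mas} mas -> {d_pc:.1f} pc "
      f"({d_pc * LY_PER_PARSEC:.0f} light-years)")
```

The inverse relation means small parallaxes (distant stars) carry large fractional uncertainties, which is why space-astrometry missions are needed for faint, distant hosts like this one.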
WASP-63
[ "Astronomy" ]
433
[ "Astronomy organizations", "Columba (constellation)", "Constellations", "Wide Angle Search for Planets" ]